jsheff

Zika, the Pope, and the Non-Identity Problem

I’m in Seattle for the Works-In-Progress in Intellectual Property Conference (WIPIP […WIPIP good!]), where I’ll be presenting a new piece of my long-running book project, Valuing Progress. This presentation deals with issues I take up in a chapter on “Progress for Future Persons.” And almost on cue, we have international news that highlights exactly the same issues.

In light of the potential risk of serious birth defects associated with the current outbreak of the Zika virus in Latin America, Pope Francis has suggested in informal comments that Catholics might be justified in avoiding pregnancy until the danger passes–a position that some are interpreting to be in tension with Church teachings on contraception. The moral issue the Pope is responding to here is actually central to an important debate in moral philosophy over the moral status of future persons, and it is this debate that I’m leveraging in my own work to discuss whether and how we ought to take account of future persons in designing our policies regarding knowledge creation. This debate centers on a puzzle known as the Non-Identity Problem.

First: the problem in a nutshell. Famously formulated by Derek Parfit in his 1984 opus Reasons and Persons, the Non-Identity Problem exposes a contradiction among three moral intuitions many of us share: (1) that an act is only wrong if it wrongs (or perhaps harms) some person; (2) that it is not wrong to bring someone into existence so long as their life is worth living; and (3) that a choice which has the effect of forgoing the creation of one life and inducing the creation of a different, happier life is morally correct. The problem Parfit pointed out is that many real-world cases require us to reject at least one of these three propositions. The Pope’s comments on Zika present exactly this kind of case.

The choice facing potential mothers in Zika-affected regions today is essentially the choice described in Proposition 3. They could delay their pregnancies until after the epidemic passes in the hopes of avoiding the birth defects potentially associated with Zika. Or they could become pregnant and potentially give birth to a child who will suffer from some serious life-long health problems, but still (we might posit) have a life worth living. And if we think–as the reporter who elicited Pope Francis’s news-making comments seemed to think–that delaying pregnancy in this circumstance is “the lesser of two evils,” we must reject either Proposition 1 or Proposition 2. That is, a mother’s choice to give birth to a child who suffers from some birth defect that nevertheless leaves that child’s life worth living cannot be wrong on the grounds that it wrongs that child, because the alternative is for that child not to exist at all. And it is a mistake to equate that child with the different child who might be born later–and healthier–if the mother waits to conceive until after the risk posed by Zika has passed. They are, after all, different (potential future) people.

So what does this have to do with Intellectual Property? Well, quite a bit–or so I will argue. Parfit’s point about future people can be generalized to future states of the world, in at least two ways.

One way has resonances with the incommensurability critique of welfarist approaches to normative evaluation: if our policies lead to the creation of certain innovations and certain creative or cultural works, and to the non-creation of others, we can certainly say that the future state of the world will be different as a result of our policies than it would have been under alternative policies. But it is hard for us to say in the abstract that this difference has a normative valence: that the world will be better or worse for the creation of one quantum of knowledge rather than another. This is particularly true for cultural works.

The second and more troubling way of generalizing the Non-Identity Problem was in fact taken up by Parfit himself (Reasons and Persons at 361):

[Quoted passage from Parfit, Reasons and Persons, p. 361]

What happens if we try to compare these two states of the world–and future populations–created by our present policies? Assuming that we do not reject Proposition 3–that we think the difference in identity between future persons determined by our present choices does not prevent us from imbuing that choice with moral content–we ought to be able to apply the same kind of moral evaluation to entire future populations. All we need is some metric for what makes life worth living, and some way of aggregating that metric across populations. Parfit called this approach to normative evaluation of states of the world the “Impersonal Total Principle,” and he built out of it a deep challenge to consequentialist moral theory at the level of populations, encapsulated in what he called the Repugnant Conclusion (Reasons and Persons, at 388):

[Quoted passage from Parfit, Reasons and Persons, p. 388, stating the Repugnant Conclusion]
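To see the arithmetic that drives this result, here is a minimal sketch of the Impersonal Total Principle at work. The numbers are purely illustrative, and the assumption that well-being can be expressed as a single cardinal quantity is precisely the assumption at issue:

```latex
% Impersonal Total Principle: the value of an outcome is the sum of
% well-being across everyone who ever lives in it.
V = \sum_{i=1}^{N} w_i = N \cdot \bar{w}

% Illustrative comparison (invented numbers):
% Population A: 10 billion lives of very high quality
V(A) = 10^{10} \times 100 = 10^{12}

% Population Z: 10 quadrillion lives barely worth living
V(Z) = 10^{16} \times 1 = 10^{16} > V(A)
```

So long as the lives in Z remain even barely worth living, a sufficiently large population will always yield a greater total than a smaller, far happier one–which is just the conclusion Parfit found repugnant.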

If, like Parfit, we find this conclusion repugnant, it may be that we must reject Proposition 2–as the reporter who elicited the Pope’s news-making comments implicitly did in framing contraception in the age of Zika as the lesser of two evils. This, in turn, requires us to take Propositions 1 and 3–and the Non-Identity Problem in general–more seriously. It may, in fact, require us to find some basis other than aggregate welfare (or some hypothesized “Impersonal Total”) to normatively evaluate future states of the world, and determine moral obligations in choosing among those future states.

The Repugnant Conclusion is especially relevant to policy choices we make around medical innovations. Many of the choices we make when setting policies in this area have determinative effects on which people may come into existence in the future, and what the quality of their lives will be. But we lack any coherent account of how we ought to weigh the interests of these future people, and as Parfit’s work suggests, such a coherent account may not in fact be available. For example, if we have to choose between directing resources toward curing one of two life-threatening diseases, the compounding effects of such a cure over the course of future generations will result in the non-existence of many people who could have been brought into being had we chosen differently (and conversely, the existence of many people who would not have existed but for our policy choice). If we take the Non-Identity Problem seriously, and fear the Repugnant Conclusion, identifying plausible normative criteria for guiding such a policy choice is a pressing concern.

I don’t think the extant alternatives are especially promising. The typical welfarist approach to the problem avoids the Repugnant Conclusion by essentially assuming that future persons don’t matter relative to present persons. The mechanism for this assumption is the discount rate incorporated into most social welfare functions, according to which the weight given to the well-being of future people quickly and asymptotically approaches zero in our calculation of aggregate welfare. Parfit himself noted that such discounting leads to morally implausible results–for example, it would lead us to conclude we should generate a small amount of energy today through a cheap process that generates toxic waste that will kill billions of people hundreds of years from now. (Reasons and Persons, appx. F)
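For a sense of how quickly that discounting does its work, here is the standard exponential discounting formula, with purely illustrative figures (no particular social welfare function is committed to these exact numbers):

```latex
% Weight given today to one unit of well-being accruing t years from now,
% discounted at a constant annual rate r:
w(t) = \frac{1}{(1+r)^{t}}

% Illustrative: at r = 5\% and t = 300 years,
w(300) = \frac{1}{(1.05)^{300}} \approx 4.4 \times 10^{-7}
```

At that weighting, the well-being of people living a few centuries from now is effectively invisible in the aggregate calculation, which is how the cheap-energy-plus-toxic-waste policy can come out looking attractive.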

Another alternative, adopted by many in the environmental policy community (which has been far better at incorporating the insights of the philosophical literature on future persons than the intellectual property community, even though both fields deal with social phenomena that are inherently oriented toward the relatively remote future), is to adopt an independent norm of conservation. This approach is sometimes justified with rights-talk: it posits that whatever future persons come into being, they have a right to a certain basic level of resources, health, or opportunity. In a policy area concerned with the potential depletion of resources to the point where human life becomes literally impossible, such rights-talk may indeed be helpful. But when weighing trade-offs with less-than-apocalyptic effects on future states of the world, such as most of the trade-offs we face in knowledge-creation policy, rights-talk does a lot less work.

The main approach adopted by those who consider medical research policy–quantification of welfare effects according to Quality-Adjusted Life Years (QALYs)–attempts to soften the sharp edge of the Repugnant Conclusion by considering not only the marginal quantity of life that results from a particular policy intervention (as compared with available alternatives), but also the quality of that added life. This is, for example, the approach of Terry Fisher and Talha Syed in their forthcoming work on medical funding for populations in developing countries. But there is reason to believe that such quality-adjustment, while practically necessary, is theoretically suspect. In particular, Parfit’s student Larry Temkin has made powerful arguments that we lack a coherent basis to compare the relative effects on welfare of a mosquito bite and a course of violent torture, to say nothing of the relative effects of two serious medical conditions. If Temkin is right, then what is intended as an effort to account for the quality of future lives in policymaking begins to look more like an exercise in imposing the normative commitments of policymakers on the future state of the world.

I actually embrace this conclusion. My own developing view is that theory runs out very quickly when evaluating present policies based on their effect on future states of the world. If this is right–that a coherent theoretical account of our responsibility to future generations is simply not possible–then whatever normative content informs our consideration of policies with respect to their effects on future states of the world is probably going to be exogenous to normative or moral theory–that is, it will be based on normative or moral preferences (or, to be more charitable, commitments or axioms). This does not strike me as necessarily a bad thing, but it does require us to be particularly attentive to how we resolve disputes among holders of inconsistent preferences. This is especially true because the future has no way to communicate its preferences to us: as I argued in an earlier post, there is no market for human flourishing. It may be that we have to choose among future states of the world according to idiosyncratic and contestable normative commitments; if that’s true then it is especially important that the social choice institutions to which we entrust such choices reflect appropriate allocations of authority. Representing the interests of future persons in those institutions is a particularly difficult problem: it demands that we in the present undertake difficult other-regarding deliberation in formulating and expressing our own normative commitments, and that the institutions themselves facilitate and respond to the results of that deliberation. Suffice it to say, I have serious doubts that intellectual property regimes–which at their best incentivize knowledge-creation in response to the predictable demands of relatively better-resourced members of society over a relatively short time horizon–satisfy these conditions.

Trademarks and Economic Activity

There’s an increasing amount of empirical data available on trademark registration systems. The USPTO released a comprehensive dataset three years ago, and there are less complete and less user-friendly data sources available from other national and regional offices–though some offices make it a bit tricky to get their data, and others restrict access or charge for their data products. As with most trends in legal scholarship, the empirical turn has come late to the study of trademarks. Part of this is because the scholarly community is small, and not as quantitatively-minded as other disciplines. Part of it is because it’s not clear what questions regarding trademarks we might look to empirical evidence to answer. I’ve published a study of the impact of the federal antidilution statute on federal registration (spoiler alert: it adds to the cost of registration but doesn’t seem to affect outcomes), but that’s a pretty narrow issue. What else could we learn from this kind of data?

One possibility is to examine the link between trademarks and economic activity. People who make a living from commerce involving intellectual property like to emphasize how important IP protection is to the economy, though the numbers they throw around are a bit dubious. But if we were serious about it, could we rigorously draw some link between trademarks–which are the most common and ubiquitous form of intellectual property in the economy–and economic performance?

I’ve been thinking about how we might do that, so I brought my modest quantitative analytical skills to bear on the best data currently available: the USPTO’s dataset. I thought I’d just look to see whether there is any relationship between trademark activity (in this case, applications for federal trademark registrations) and economic activity (in this case, real GDP). And it seems that there is one…kind of.

[Chart: quarterly US trademark applications plotted against real US GDP]

The GDP data from the St. Louis Fed is reported quarterly and seasonally adjusted; I compiled the trademark application data on a quarterly basis and calculated a 4-quarter moving average as a seasonal smoothing kludge. We see that trademark application activity is strongly seasonal, and that it tends to roughly track GDP trends–perhaps with a bit of a lag. The lag is interesting if more rigorous analysis bears it out: it seems to suggest that trademarks, rather than driving economic activity, are merely a lagging indicator of that activity.
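For anyone who wants to replicate or improve on this, the computation is roughly the following–a minimal sketch rather than my actual script. The file names and column names are hypothetical, and it assumes CSV exports of the USPTO case-file data and of the St. Louis Fed’s GDPC1 (real GDP) series:

```python
import pandas as pd

# Load the USPTO trademark case-file data (file and column names here are
# hypothetical); each row is one application with its filing date.
apps = pd.read_csv("uspto_case_files.csv",
                   usecols=["filing_date"], parse_dates=["filing_date"])

# Count applications per calendar quarter.
quarterly_apps = (apps.set_index("filing_date")
                      .resample("Q")   # newer pandas versions prefer "QE"
                      .size()
                      .rename("applications"))

# Smooth the strong seasonality with a trailing 4-quarter moving average.
apps_smoothed = quarterly_apps.rolling(window=4).mean()

# Real GDP (quarterly, seasonally adjusted) from the St. Louis Fed; assumes a
# CSV export of the GDPC1 series with DATE and GDPC1 columns.
gdp = pd.read_csv("GDPC1.csv", parse_dates=["DATE"], index_col="DATE")["GDPC1"]

# Align both series on quarter-end dates and plot them on separate axes.
combined = pd.concat([apps_smoothed, gdp.resample("Q").last()], axis=1).dropna()
combined.columns = ["TM applications (4-qtr MA)", "Real GDP"]

ax = combined["TM applications (4-qtr MA)"].plot(legend=True)
combined["Real GDP"].plot(secondary_y=True, ax=ax, legend=True)
ax.get_figure().savefig("tm_apps_vs_gdp.png")
```

A more careful analysis would want to test the apparent lead/lag structure directly (for example, with cross-correlations of the two series at various lags) rather than eyeballing the chart.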

The big exception is the late 1990s to the early 2000s. As Barton Beebe documented in his first look at USPTO data, this spike in trademark activity seems to correspond with the dot-com boom and bust. (Registration rates also dropped during this period–lots of these applications were low-quality or quickly abandoned.) It’s interesting that this huge discontinuity in trademark application activity isn’t matched by anything like as large a movement in the overall economy. We could speculate about why that might be–it probably has something to do with the “gold-rush” scramble to occupy a new, untapped field of commerce, and I suspect it also reflects (poorly) on the value of the early web to the overall economy.

This is an example of the kind of analysis these new data sources might be useful for–and it’s not that tricky to carry out. Building this chart was a couple hours’ work, and I’m no expert. A more rigorous econometric model is beyond my expertise, but I’m sure it could be done (I’m less sure what we could learn from it). What other kinds of questions might we look to trademark data to answer?

LV Loss is About the EU, Not Handbags

Some of my IP friends are posting today about Louis Vuitton’s loss last week of a trademark fight over its checkerboard pattern in the EU General Court. This was news in Europe when it happened (the IPKat, a great resource for EU IP happenings, reported on it at the time), but it was only picked up by the popular US fashion press today (here and here and here, for example).

LV is a very vigorous (some would say bullying) trademark litigant here in the US. And so there may be a tendency to chalk up this opinion to their pattern of overreaching on substantive trademark law. But it’s always a good idea to read the actual decisions (here and here). Because when you do, a somewhat different picture emerges.

To my eye these cases are not so much about trademark law as they are about the legal and economic structure of the EU (in the particular context of community-wide IP rights). The key language (paragraph 84 in both opinions) is:

“It follows from the unitary character of the Community trade mark that, in order to be accepted for registration, a sign must have distinctive character throughout the European Union.”

In other words, to get community-wide protection a mark must serve as a trademark in every member state, not just a few, or even a majority. This creates a higher evidentiary burden for LV, but potentially not an insurmountable one. It also provides an incentive for manufacturers and merchants not to ignore the peripheral EU countries when marketing their products. It is, in other words, less about trademarks than it is about trade. But in any case, it’s a fascinating issue for those who are interested in the increasing internationalization of IP rights and regimes.

Faith-Based vs. Value-Based IP: On the Lemley-Merges Debate

The splash in the IP academy today is Mark Lemley’s posting last night of a somewhat polemical essay he has forthcoming in the UCLA Law Review. In it he criticizes a number of IP scholars–principally his former Berkeley colleague Rob Merges–for turning to moral-rights-based arguments in favor of strong intellectual property protections as mounting empirical evidence fails to present a compelling case for their preferred policies. The “faith-based” epithet is intentionally provocative, but the money graf comes at the end (footnotes omitted):

But if you are one of the faithful, I probably haven’t persuaded you. The psychology literature suggests that while people are willing to be corrected about factual inaccuracies—things they think are true but are not—they are essentially impervious to correction once the thing that turns out to be untrue crosses the line into a belief. And that leads me to the last—and, to me, most worrisome—problem with faith-based IP. If you are a true believer, we have nothing to say to each other. I don’t mean by that that I am giving up on you, deciding that you’re not worth my time to persuade. Rather, I mean that we simply cannot speak the same language. There is no principled way to compare one person’s claim to lost freedom to another’s claim to a right to ownership. Nor is there a way to weigh your claim of moral entitlement against evidence that the exercise of that right actually reduces creativity by others. Faith-based IP is at its base a religion and not a science because it does not admit the prospect of being proven wrong. The inevitable result of a move toward faith-based IP is that we will make policy based on our instincts without being able to engage in a meaningful conversation about the wisdom of that policy.

The accusation Mark is making here is of epistemic closure: that his antagonists are unwilling to entertain the possibility that they are mistaken, or to candidly weigh evidence that would tend to prove such a mistake. For an academic, them’s fightin’ words. And I think they’re unfortunate. I think the problem here is neither epistemic nor methodological; it’s political (in a non-pejorative sense).

I suspect we are dealing with two academic camps that simply value different things in different measure, as humans are wont to do. This disagreement might lead to the conclusion that the two sides “have nothing to say to each other.” But we might also conclude that the apparent absence of a shared language between moral theorists and consequentialists is precisely the type of problem academics in an applied discipline like law are particularly well-suited to solve, by looking beneath the language each camp uses to identify the ideas and disagreements underneath, and frame the issues in a language that both sides can engage on the merits. Indeed, that’s precisely what I’ve been working on lately.

For example: Mark takes a very strong position in his essay against “rights-talk” in IP: the idea that the market interventions IP law makes in favor of authors and inventors (and those in privity with them) are “some kind of prepolitical right to which inventors and creators are entitled” “regardless of the social cost that granting that right imposes” (pp. 10, 15). But I think very few if any IP scholars–even Rob–are willing to take such an extreme theoretical position in favor of strong IP rights. Rob’s foray into rights-based justification for IP rights is hardly the stuff of doctrinaire deontological theory; it is suffused with concerns over consequences that a strict Kantian might shrug off as either the rational implication of self-consistent moral duties or the sphere of practical reason rather than moral theory (take Rob’s entire chapter on what he calls “The Proportionality Principle,” for example).

I think it is clear that Rob thinks very highly of authors and inventors, and is willing to privilege them over users and consumers in many contexts where Mark would prefer to allow competition to do its consumer-friendly work at the expense of the professional creative class. But it isn’t clear that in choosing the language of Kant and Rawls to justify his preference, Rob is shutting himself off to evidence that would persuade him that any of Mark’s particular policy preferences are well-founded, any more than I think Mark would dismiss out of hand the idea that some of the benefits of creative activity might be overlooked in particular forms of cost-benefit analysis. Instead, I think these two scholars are simply disagreeing over the appropriate domain of empirical inquiry–chiefly with respect to the measurement of value.

The line between the empirical and the normative is not so clear here. Take a seemingly simple example: How much is fan fiction worth to society? How should we even go about trying to answer this question? Is revealed preference through market transactions a reliable empirical measure in this context? Is there some way to measure imputed foregone income of fanfic authors, and if so would that be a good measure? Might there be some legitimate value in the freedom of fans to express themselves through transformative works that can’t be measured economically or even empirically? And on the other side of the ledger, how much value is generated by giving the commercial author the right to control the production of such works? Again, are the commercial author’s own preferences revealed through market transactions a useful measure? Is the psychological reward authors feel when exercising control over their own creations something that we can empirically measure? If so, should we count it? And if so, how do we weigh that psychological reward against the psychological reward experienced by fanfic authors? And finally, what value do those of us who neither create commercial works nor use them as a basis for our own expression derive from a world in which fanfiction is liberally allowed? Or strictly controlled? Again, how would we measure that value? How would we compare it to the other forms of value already discussed?

We can ask similar questions about most areas of innovation and creativity–from pharmaceutical patents to federal research grants to streaming audio royalties. Mark would have us obtain the relevant data and follow where it leads, which sounds good in the abstract. But the first step in answering any question in these areas of policy empirically is figuring out whether there is even anything useful to measure, or whether the relevant questions are too deeply enmeshed in questions of subjective value that empirical measurement cannot meaningfully capture. These are, in the language of moral philosophy, the problems of interpersonal comparison and aggregation. And the debate over them is an old one, in law, in philosophy, and even in literature; it goes back at least to Dworkin and Posner; to Parfit and Scanlon (and, yes, Rawls); to Dostoyevsky and to Captain Spock. And ultimately, as I noted above and will be arguing in a book I’m currently working on, these problems are not theoretical or empirical; they are political (in the non-pejorative sense). That is, when two parties fundamentally disagree over questions of value that cannot be resolved empirically, the only tools we can feasibly use to resolve the disagreement are political ones. (Of course, this observation thrusts us into the knot of dilemmas handed down to us from Arrow, Sen, and the rest of the social choice theorists, but that is a separate issue for another day.)

There are good reasons why a thinker in the area of IP might dispute the relevance of empirical evidence on many of the questions we deal with–though it would be the poor scholar indeed who shuts out relevant empirical data entirely on those issues to which it is relevant. I don’t think Rob has done that; I think he and Mark simply disagree as to which of the questions that we must confront in setting innovation policy can be helpfully answered by reference to empirical evidence. We may not ultimately agree on that particular problem–on whether a particular predicate for a policy decision is properly considered empirical or normative. Indeed, I don’t agree entirely with Rob or with Mark on that problem, let alone on the policy positions that they might derive from their particular approaches to the task.  But I am quite certain that this is something we can indeed talk about in a common language if we turn the temperature down a bit.

Institutional Competence: SCOTUS Dings CAFC

Others with more of a dog in the fight over Federal Circuit deference to district courts on matters of patent claim construction will have more (and more interesting) things to say about today’s opinion in Teva v. Sandoz. I’ll only note one particular passage in Justice Breyer’s majority opinion that caught my eye, on pages 7-8 of the slip opinion:

Finally, practical considerations favor clear error review. We have previously pointed out that clear error review is “particularly” important where patent law is at issue because patent law is “a field where so much depends upon familiarity with specific scientific problems and principles not usually contained in the general storehouse of knowledge and experience.” Graver Tank & Mfg. Co. v. Linde Air Products Co., 339 U. S. 605, 610 (1950). A district court judge who has presided over, and listened to, the entirety of a proceeding has a comparatively greater opportunity to gain that familiarity than an appeals court judge who must read a written transcript or perhaps just those portions to which the parties have referred. Cf. Lighting Ballast, 744 F. 3d, at 1311 (O’Malley, J., dissenting) (Federal Circuit judges “lack the tools that district courts have available to resolve factual disputes fairly and accurately,” such as questioning the experts, examining the invention in operation, or appointing a court-appointed expert); Anderson, 470 U. S., at 574 (“The trial judge’s major role is the determination of fact, and with experience in fulfilling that role comes expertise”).

It seems to me that this reasoning is a fairly direct challenge to the raison d’être of the Federal Circuit. Learned Hand himself complained that the technical knowledge and expertise necessary to oversee the operation of the patent laws were beyond the grasp of most generalist Article III judges, and this was among the weightier reasons underlying the creation of our only federal appeals court whose jurisdiction is defined by subject matter. But judging by the Supreme Court docket (and the ruminations of some fairly capable generalist federal appellate judges), the argument for a specialist patent court is increasingly under assault.

Of course, it is trendy to take pot-shots at the Federal Circuit, and at the patent system generally. And the Supreme Court has been admonishing the CAFC–in subtle and not-so-subtle ways–for years; the quoted language from the Teva opinion is just the latest in a long line of examples. But the status quo has its defenders, and it does not seem likely that Congress will be loosening the Federal Circuit’s grip on patent law any time soon. So in the meantime, we’re left in the awkward position of continuing to rely on an institution whose comparative competence is increasingly called into question. Which, regardless of your view of the merits of a specialist court, can begin to wear on that court’s perceived legitimacy.

The Nice Classification and Law’s Expressive Function

Happy New Year! For trademark lawyers, today marks the entry into force of the 2015 Version of the 10th Edition of the Nice Classification. This is the classification system that trademark owners refer to in identifying the types of goods or services with which they claim a right to use their marks. (Trademark law allows for concurrent use by different users in sufficiently distinct product or service categories–think Delta Faucets and Delta Airlines.) Just scanning the USPTO’s helpful list of “noteworthy changes” in the 2015 version, I’m reminded how much trademark law is a window into society, and how it can be an example of what Cass Sunstein called the “expressive function” of the law.

Glancing through the list, we see that e-cigarette fluids are now firmly associated with smoking and tobacco instead of chemistry; that 3D-printers are considered less a scientific curiosity and more a useful tool; that the government is no longer quite so particular about categorizing sex toys according to precisely how they get you off.

Of course, against this apparently progressive list of changes are some more troubling indicia of an increasingly stratified and commodified consumer culture. We must now be careful to distinguish custom tailoring from mere clothing repair. We apparently need separate categories for all the various specialty mitts one might use for different household tasks–whereas once a washcloth could be used in the shower or on your car, now you need two different specially designed gloves to perform those tasks–and be sure you don’t confuse either with the different specialty mitt you use in the kitchen. And because even in our social interactions we’d rather spend money than time and effort, there is now legal recognition for branded gift wrapping services.

So that’s where we’re headed in 2015. Whether law is leading or following, I’ll leave you to decide.

Don’t Hate the Player, Hate the Game

Via engadget, here’s an IP-related story that brings me back to middle school. A redditor who (currently) goes under the handle XsimonbelmontX has clearly spent a lot of time building and testing a board game based on one of my favorite 8-bit side-scrolling platformers from the 1980s, Castlevania:

It’s an impressive bit of work. The game involves co-operative play and, in addition to the progressive board game format, seems to incorporate elements of card-based role-playing games like Magic: The Gathering as well as die-based role-playing mechanics reminiscent of Dungeons & Dragons. Responses on Reddit are quite positive; the general tenor is captured by the current leading comment, from redditor “Canadianized,” which reads: “I would buy the fuck out of this.”

But of course, Canadianized can’t buy XsimonbelmontX’s game, because it’s not available for sale–there’s just the one prototype. And some redditors, predictably, blame IP (see here and here). But the IP story appears to be somewhat complicated.


There Is No Market for Human Flourishing

A blog post by Santa Clara economist and law professor David Friedman came across one of my social media feeds today, and it resonated with a lot of issues I’ve been thinking about lately. Friedman is taking issue with a particular corner of the climate change debate: the uncertainty over the sign and magnitude of future economic effects of climate change. This uncertainty, he argues, counsels waiting: we know it would be very costly to intervene right now, while waiting will allow for more precise assessment of the potential costs of climate change, a longer time frame for spreading those costs over the human population, and–importantly–the potential to distribute those costs in a more appropriate way. Here are the key passages for my purposes:

Diking against a meter of sea level change could be a serious problem for Bangladesh if it happened tomorrow. If Bangladesh follows the pattern of China, where GDP per capita has increased twenty fold since Mao’s death, by the time it happens they can pay the cost out of small change.

William Nordhaus, an economist who has specialized in climate issues … reported his estimate of how much greater the cost of climate change would be if we waited fifty years to deal with it instead of taking the optimal action at once.  The number was $4.1 trillion. He took that as an argument for action, writing that “Wars have been started over smaller sums.”

As I pointed out in a post here responding to Nordhaus, the cost is spread over the entire world and a long period of time. Annualized, it comes to something under .1% of world GNP.

(Emphasis and link to Nordhaus article added–JNS)

I have no specialized insight into the climate change debate, I have no particular beef against Prof. Friedman or his work, and I carry no particular brief for Prof. Nordhaus or his work. But I think this particular post is a very good encapsulation of how profoundly unhelpful the welfare economics approach is in analyzing problems of social cooperation over long time scales and across disparate populations, and I thought I’d take the opportunity to try to explain why.


Bleg: Seeking Research on “Disparaging” Trademarks Under Lanham Act 2(a)

Because I don’t have enough to do, I have taken on a time-sensitive research project for an ABA task force examining the provisions of the Lanham Act barring registrations of “scandalous” and “disparaging” marks (my task focuses on disparagement). I’m sure many of my law professor and lawyer friends have thought and written about these provisions more thoroughly than I have. If you’re one of them, I’d be grateful if you would point me toward the best sources to consult. Shameless self-promotion is heartily encouraged.