New Draft: Jefferson’s Taper

Read Jefferson’s Taper on SSRN

A little less than a year ago, I made a startling discovery about Thomas Jefferson’s famous observation on the nature of ideas, which (he argued) spread like fire from one person to the next without diminishing the possession of anyone who shares them. As I discovered, Jefferson copied this metaphor from a nearly identical passage in Cicero’s De Officiis–a work of philosophy that was once one of the world’s most widely-read books, but which today few people have even heard of.  As I mined out the implications of Jefferson’s unattributed borrowing, I came to conclude that we have been misreading him for almost a hundred years. Rather than making a proto-utilitarian argument in favor of a limited system of patent rights, Jefferson was instead making a natural law argument–exactly the type of argument that his modern-day detractors rely on to support their policy prescriptions regarding the scope of intellectual property rights. And in fact, gaming out the implications of Jefferson’s natural law argument leads to the conclusion that knowledge creators may actually have some obligations to share their knowledge, rooted in a particular pre-Enlightenment conception of natural law and distributive justice.

Doing the work of fleshing out these implications required me to immerse myself in some old and (to me) unfamiliar philosophical sources for much of the past year. The result is the most “scholarly” work of scholarship I think I’ve ever produced: Jefferson’s Taper, now in draft on SSRN, and hopefully coming soon to a law review near you. This was a ton of fun to research and write; I think it is going to surprise a fair number of people. Comments, as always, are most welcome.

New Draft: Law and Philosophy in IP

I’ve just posted a draft of a new paper to SSRN on law and philosophy scholarship in intellectual property. It is my contribution to a forthcoming handbook from Oxford University Press, edited by Irene Calboli and Lillà Montagnani, on methodologies in IP research. Here’s the abstract:

Intellectual property (IP) law and philosophy is an interdisciplinary approach to scholarship that applies insights and methods from philosophy to the legal, normative, theoretical, political, and empirical questions presented by the project of organizing and regulating the creation and dissemination of knowledge, technology, and culture. In this chapter, I outline four types of IP-law-and-philosophy scholarship, focusing specifically on the discipline of analytic philosophy (with appropriate caveats about the coherence of that discipline). These modes of scholarship can be categorized as (1) the jurisprudence of the IP system, (2) philosophical analysis of IP law, (3) applied philosophy in IP, and (4) normative theory of IP. Category (4) is obviously a special case of category (3), focusing specifically on applications of moral philosophy. Within each category, I provide illustrative examples of past scholarship and suggestions for further research.

As always, comments are welcome.

New Draft: Post-Sale Confusion in Comparative Perspective (Cambridge Handbook on Comparative and International Trademark Law)

It’s the summer of short papers, and here’s another one: Post-Sale Confusion in Comparative Perspective, now available on SSRN. This is a chapter for an edited volume with a fantastic international roster of contributors, under the editorial guidance of Jane Ginsburg and Irene Calboli. My contribution is a condensed adaptation of my previous work on the ways trademark law facilitates conspicuous luxury consumption, with a new comparative angle, comparing post-sale-confusion doctrine to the EU’s misappropriation-based theory of trademark liability. Comments, as always, are welcome.

New Draft: Finding Dilution (An Application of Trademark as Promise)

I’ve just posted to SSRN a draft of a book chapter for a forthcoming volume on trademark law theory and reform edited by Graeme Dinwoodie and Mark Janis. My contribution, entitled “Finding Dilution,” reviews the history and theory of dilution, the quixotic liability theory that everybody loves to hate. As Rebecca Tushnet has noted, in a post-Tam world dilution may not have much of a future, and my analysis in this draft may therefore be moot by the time this volume gets published. But if not, the exercise has given me an opportunity to extend the theoretical framework I established and defended in the Stanford Law Review a few years ago: Trademark as Promise.

In Marks, Morals, and Markets, I argued that a contractualist understanding of trademarks as a tool to facilitate the making and keeping of promises from producers to consumers offered a better descriptive–and more attractive normative–account of producer-consumer relations than the two theoretical frameworks most often applied to trademark law (welfarism and Lockean labor-desert theory). But I “intentionally avoided examining contractualist theory’s implications for trademark law’s regulation of producer-producer relationships” (p. 813), mostly for lack of space, though I conjectured that these implications might well differ from those of a Lockean account. In my new draft, I take on this previously avoided topic and argue that my conjecture was correct, and that the contractualist account of Trademark as Promise offers a justification for the seeming collapse of trademark dilution law into trademark infringement law (draft at 18):

This justification, in turn, seems to depend on a particular kind of consumer reliance—reliance not on stable meaning, which nobody in a free society is in a position to provide, but on performance of promises to deliver goods and services. It is interference with that promise—a promise that does not require the promisor to constrain the action of any third party against their will—that trademark law protects from outside interference. A contractualist trademark right, then, would be considerably narrower than even the infringement-based rights of today. To recast dilution law to conform to such a right would be to do away with dilution as a concept. A promise-based theory of dilution would enforce only those promises the promisor could reasonably perform without constraining the freedom of others to act, while constraining that freedom only to the extent necessary to allow individuals—and particularly consumers—to be able to determine whether a promise has in fact been performed.

As they say, read the whole thing. Comments welcome.


New Draft: Brand Renegades Redux

[Image: Charlottesville Riot]

I have posted to SSRN a draft of the essay I contributed to Ann Bartow’s IP Scholarship Redux conference at the University of New Hampshire (slides from my presentation at the conference are available here). These are dark times, and the darkness leaves nothing untouched–certainly not the consumer culture in which we all live our daily lives. As I say in the essay, Nazis buy sneakers too, and often with a purpose. We all–brand owners, consumers, lawyers, and judges–should think about how we can best respond to them.

Trademark Clutter at Northwestern Law REMIP

I’m in Chicago at Northwestern Law today to present an early-stage empirical project at the Roundtable on Empirical Methods in Intellectual Property (#REMIP). My project will use Canada’s pending change to its trademark registration system as a natural experiment to investigate the role national IP offices play in reducing “clutter”–registrations for marks that go unused, raising clearance costs and depriving competitors and the public of potentially valuable source identifiers.

Slides for the presentation are available here.

Thanks to Dave Schwartz of Northwestern, Chris Buccafusco of Cardozo, and Andrew Toole of the US Patent and Trademark Office for organizing this conference.

Mix, Match, and Layer: Hemel and Ouellette on Incentives and Allocation in Innovation Policy

One of the standard tropes of IP scholarship is that when it comes to knowledge goods, there is an inescapable tradeoff between incentives and access. IP gives innovators and creators some assurance that they will be able to recoup their investments, but at the cost of the deadweight losses and restriction of access that result from supracompetitive pricing. Alternative incentive regimes—such as government grants, prizes, and tax incentives—may simply recapitulate this tradeoff in other forms: providing open access to government-funded research, for example, may blunt the incentives that would otherwise spur creation of knowledge goods for which a monopolist would be able to extract significant private value through market transactions.

In “Innovation Policy Pluralism” (forthcoming Yale L. J.), Daniel Hemel and Lisa Larrimore Ouellette challenge this orthodoxy. They argue that the incentive and access effects of particular legal regimes are not necessarily a package deal. And in the process, they open up tremendous new potential for creative thinking about how legal regimes can and should support and disseminate new knowledge.

Building on their prior work on innovation incentives, Hemel and Ouellette note that such incentives may be set ex ante or ex post, by the government or by the market. (Draft at 8) Various governance regimes—IP, prizes, government grants, and tax incentives—offer policymakers “a tunable innovation-incentive component: i.e., each offers potential innovators a payoff structure that determines the extent to which she will bear R&D costs and the rewards she will receive contingent upon different project outcomes.” (Id. at 13-14)

The authors further contend that each of these governance regimes also entails a particular allocation mechanism—“the terms under which consumers and firms can gain access to knowledge goods.” (Id. at 14) The authors’ exploration of allocation mechanisms is not as rich as their earlier exploration of incentive structures—they note that allocation is a “spectrum” with monopoly pricing at one end and open access at the other. But further investigation of the details of allocation mechanisms may well be left to future work; the key point of this paper is that “the choice of innovation incentive and the choice of allocation mechanism are separable.” (Id., emphasis added) While the policy regimes most familiar to us tend to bundle a particular innovation incentive with a particular allocation mechanism, setting up the familiar tradeoff between incentives and access, Hemel and Ouellette argue that “policymakers can and sometimes do decouple these elements from one another.” (Id. at 15) They suggest three possible mechanisms for such de-coupling: matching, mixing, and layering.

By “matching,” the authors are primarily referring to the combination of IP-like innovation incentives with open-access allocation mechanisms, which allows policymakers “to leverage the informational value of monopoly power while achieving the allocative efficiency of open access.” For example, the government could “buy out” a patentee using some measure of the patent’s net present value and then dedicate the patent to the public domain. (Id. at 15-17) Conversely, policymakers could incentivize innovation with non-IP mechanisms while channeling the resulting knowledge goods into a monopoly-seller market allocation mechanism. This, they argue, might be desirable where incentives are needed to commercialize knowledge goods (such as drugs that require lengthy and expensive testing), incentives of the kind the Bayh-Dole Act was supposedly designed to provide. (Id. at 18-23) Intriguingly, they also suggest that such matching might be desirable in service to a “user-pays” distributive principle. (Id. at 18) (More on that in a moment.)

The second de-coupling strategy is “mixing.” Here the focus is not so much on the relationship between incentives and allocation as on the ways various incentive structures, or various allocation mechanisms, can be combined with one another. The incentives portion of this section (id. at 23-32) reads largely as an extension and refinement of Hemel’s and Ouellette’s earlier paper on incentive mechanisms, following the model of Suzanne Scotchmer and covering familiar ground on the information economics of incentive regimes. Their discussion of mixing allocation mechanisms (id. at 32-36)—for example by allowing monopolization but providing consumers with subsidies—is a bit less assured, but far more novel. They note that monopoly pricing seems normatively undesirable due to deadweight loss, but they offer two justifications for it. The first, building on the work of Glen Weyl and Jean Tirole, is a second-order justification that piggybacks on the information economics of the authors’ incentives analysis. To wit: allocating access according to price provides some market test of a knowledge good’s social value, so that an appropriate incentive can be set. (Id. at 33-34) The authors’ second argument, again, is intriguingly distributive: they suggest that for some knowledge goods—for example “a new yachting technology” enjoyed only by the wealthy—restricting access by imposing supracompetitive costs may help enforce a normatively attractive “user-pays” principle. (Id. at 33, 35)

The final de-coupling strategy, “layering,” involves different mechanisms operating at different levels of political organization. For example, while TRIPS imposes an IP regime at the supranational level, individual TRIPS member states may opt for non-IP incentive mechanisms or open-access allocation mechanisms at the domestic level—as many states do with Bayh-Dole regimes and pharmaceutical delivery systems, respectively. (Id. at 36-39) This analysis builds on another of the authors’ previous papers, and again rests on a somewhat underspecified distributive rationale: layering regimes with IP at the supranational level may be desirable, Hemel and Ouellette argue, because it allows “signatory states [to] commit to reaching an arrangement under which knowledge-good consumers share costs with knowledge-good producers” and “establish[es] a link between the benefits to the consumer state and the size of the transfer from the consumer state to the producer state” so that “no state ever needs to pay for knowledge goods it doesn’t use.” (Id. at 38, 39) What the argument does not include is any reason to think these features of the supranational IP regime are in fact normatively desirable.

Hemel’s and Ouellette’s article concludes with some helpful illustrations from the pharmaceutical industry of how matching, mixing, and layering operate in practice. (Id. at 39-45) These examples, and the theoretical framework underlying them, offer fresh ways of looking at our knowledge governance regimes. They demonstrate that incentives and access are not simple tradeoffs baked into those regimes—that they have some independence, and that we can tune them to suit our normative ends. They also offer tantalizing hints that those ends may—perhaps should—include norms regarding distribution.

What this article lacks, but strongly invites the IP academy to begin investigating, is an articulated normative theory of distribution. Distributive norms are an uncomfortable topic for American legal academics—and especially American IP academics—who have almost uniformly been raised in the law-and-economics tradition. That tradition tends to bracket distributive questions and focus on questions of efficiency as to which—it is thought—all reasonable minds should agree. No such agreement exists on distributive questions, and as a result we may simply lack the vocabulary, at present, to thoroughly discuss the implications of Hemel’s and Ouellette’s contributions. Their latest work suggests it may be time for our discipline to broaden its perspective on the social implications of knowledge creation.