Scholarship

Now In Print: Legal Sets

Published Version Available Here

Tenure has its privileges.

Three years ago, I posted on this site that I had spent the year prior working up a lengthy, dense draft of a deeply theoretical piece that had grown out of my noodling over a relatively small doctrinal question in trademark law. This draft was well outside of my usual wheelhouse: technical, philosophical, and abstract. It marked the beginning of what has now become a sharp pivot in my scholarly career, into more self-consciously philosophical investigations of the rules we impose on the creation and dissemination of knowledge.

Initially journals reacted coolly–which I can’t fault them for; the piece is long by law review standards and the framework is more technical and less accessible than standard law review fare. I took some time to get feedback from more accomplished legal theorists than myself, I revised the draft, and ultimately it found a home at the Cardozo Law Review, whose student editors worked hard to improve the piece and have now published Legal Sets in their June 2019 issue (which includes a number of other interesting pieces). All told, that means this project took four years to move from the question that first got me working on it to a final, published article. I am deeply cognizant of the tremendous privilege I enjoy in being able to earn a living by dedicating such a significant chunk of my life to this type of work, and I hope I’m making good on that privilege.

This is the kind of work–and the kind of departure from familiar subjects and methods–that I never would have attempted pre-tenure. The risk of failure was great, the opportunity costs were high, and the need to produce measurable outputs to get me through the next gate on my career path was too pressing. But you know, at the risk of tooting my own horn, I think this article is pretty good, and despite its theoretical cast I think it makes a real contribution to a sounder understanding of how law works in practice. So I come away from the experience of writing this piece with a renewed gratitude for the privilege of academic freedom and job security, and a renewed sense that the general absence of such freedom and security from our economy (outside the shrinking, rarefied precincts of the tenure-track academy) is almost surely holding our society back from its full potential.


New Draft: Jefferson’s Taper

Read Jefferson’s Taper on SSRN

A little less than a year ago, I made a startling discovery about Thomas Jefferson’s famous observation on the nature of ideas, which (he argued) spread like fire from one person to the next without diminishing the possession of anyone who shares them. As I discovered, Jefferson copied this metaphor from a nearly identical passage in Cicero’s De Officiis–a work of philosophy that was once one of the world’s most widely read books, but which today few people have even heard of. As I mined out the implications of Jefferson’s unattributed borrowing, I concluded that we have been misreading him for almost a hundred years. Rather than making a proto-utilitarian argument in favor of a limited system of patent rights, Jefferson was instead making a natural law argument–exactly the type of argument that his modern-day detractors rely on to support their policy prescriptions regarding the scope of intellectual property rights. And in fact, gaming out the implications of Jefferson’s natural law argument leads to the conclusion that knowledge creators may actually have some obligations to share their knowledge, rooted in a particular pre-Enlightenment conception of natural law and distributive justice.

Doing the work of fleshing out these implications required me to immerse myself in some old and (to me) unfamiliar philosophical sources for much of the past year. The result is the most “scholarly” work of scholarship I think I’ve ever produced: Jefferson’s Taper, now in draft on SSRN, and hopefully coming soon to a law review near you. This was a ton of fun to research and write; I think it is going to surprise a fair number of people. Comments, as always, are most welcome.

New Draft: Law and Philosophy in IP

I’ve just posted a draft of a new paper to SSRN on law and philosophy scholarship in intellectual property. It is my contribution to a forthcoming handbook from Oxford University Press, edited by Irene Calboli and Lillà Montagnani, on methodologies in IP research. Here’s the abstract:

Intellectual property (IP) law and philosophy is an interdisciplinary approach to scholarship that applies insights and methods from philosophy to the legal, normative, theoretical, political, and empirical questions presented by the project of organizing and regulating the creation and dissemination of knowledge, technology, and culture. In this chapter, I outline four types of IP-law-and-philosophy scholarship, focusing specifically on the discipline of analytic philosophy (with appropriate caveats about the coherence of that discipline). These modes of scholarship can be categorized as (1) the jurisprudence of the IP system, (2) philosophical analysis of IP law, (3) applied philosophy in IP, and (4) normative theory of IP. Category (4) is obviously a special case of category (3), focusing specifically on applications of moral philosophy. Within each category, I provide illustrative examples of past scholarship and suggestions for further research.

As always, comments are welcome.

Jefferson’s Taper at IPSC 2018 (Berkeley)

In researching my in-progress monograph on value pluralism in knowledge governance, I made a fascinating discovery about the history of ideas of American intellectual property law. That discovery is now the basis of an article-length project, which I am presenting today at the annual Intellectual Property Scholars Conference, hosted this year at UC Berkeley. The long title is “Jefferson’s Taper and Cicero’s Lumen: A Genealogy of Intellectual Property’s Distributive Ethos,” but I’ve taken to referring to it by the shorthand “Jefferson’s Taper.” Here’s the abstract:

This Article reports a new discovery concerning the intellectual genealogy of one of American intellectual property law’s most important texts. The text is Thomas Jefferson’s 1813 letter to Isaac McPherson regarding the absence of a natural right to property in inventions, metaphorically illustrated by a “taper” that spreads light from one person to another without diminishing the light at its source. I demonstrate that Thomas Jefferson directly copied this Parable of the Taper from a nearly identical parable in Cicero’s De Officiis, and I show how this borrowing situates Jefferson’s thoughts on intellectual property firmly within a natural law tradition that others have cited as inconsistent with Jefferson’s views. I further demonstrate how that natural law tradition rests on a classical, pre-Enlightenment notion of distributive justice in which distribution of resources is a matter of private beneficence guided by a principle of proportionality to the merit of the recipient. I then review the ways that notion differs from the modern, post-Enlightenment notion of distributive justice as a collective social obligation that proceeds from an initial assumption of human equality. Jefferson’s lifetime correlates with a historical pivot in the intellectual history of the West from the classical notion to the modern notion, and I argue that his invocation and interpretation of the Parable of the Taper reflect this mixing of traditions. Finally, I discuss the implications of both theories of distributive justice for the law and policy of knowledge governance—including but not limited to intellectual property law—and propose that the debate between classical and modern distributivists is more central to policy design than the familiar debate between utilitarians and Lockeans.

Slides for the presentation are available here.

New Draft: Post-Sale Confusion in Comparative Perspective (Cambridge Handbook on Comparative and International Trademark Law)

It’s the summer of short papers, and here’s another one: Post-Sale Confusion in Comparative Perspective, now available on SSRN. This is a chapter for an edited volume with a fantastic international roster of contributors, under the editorial guidance of Jane Ginsburg and Irene Calboli. My contribution is a condensed adaptation of my previous work on the ways trademark law facilitates conspicuous luxury consumption, with a new comparative angle that sets post-sale-confusion doctrine alongside the EU’s misappropriation-based theory of trademark liability. Comments, as always, are welcome.

New Draft: Finding Dilution (An Application of Trademark as Promise)

I’ve just posted to SSRN a draft of a book chapter for a forthcoming volume on trademark law theory and reform edited by Graeme Dinwoodie and Mark Janis. My contribution, entitled “Finding Dilution,” reviews the history and theory of dilution, the quixotic form of trademark liability that everybody loves to hate. As Rebecca Tushnet has noted, in a post-Tam world dilution may not have much of a future, and my analysis in this draft may therefore be moot by the time this volume gets published. But if not, the exercise has given me an opportunity to extend the theoretical framework I established and defended in the Stanford Law Review a few years ago: Trademark as Promise.

In Marks, Morals, and Markets, I argued that a contractualist understanding of trademarks as a tool to facilitate the making and keeping of promises from producers to consumers offered a better descriptive–and more attractive normative–account of producer-consumer relations than the two theoretical frameworks most often applied to trademark law (welfarism and Lockean labor-desert theory). But I “intentionally avoided examining contractualist theory’s implications for trademark law’s regulation of producer-producer relationships” (p. 813), mostly for lack of space, though I conjectured that these implications might well differ from those of a Lockean account. In my new draft, I take on this previously avoided topic and argue that my conjecture was correct, and that the contractualist account of Trademark as Promise offers a justification for the seeming collapse of trademark dilution law into trademark infringement law (draft at 18):

This justification, in turn, seems to depend on a particular kind of consumer reliance—reliance not on stable meaning, which nobody in a free society is in a position to provide, but on performance of promises to deliver goods and services. It is interference with that promise—a promise that does not require the promisor to constrain the action of any third party against their will—that trademark law protects from outside interference. A contractualist trademark right, then, would be considerably narrower than even the infringement-based rights of today. To recast dilution law to conform to such a right would be to do away with dilution as a concept. A promise-based theory of dilution would enforce only those promises the promisor could reasonably perform without constraining the freedom of others to act, while constraining that freedom only to the extent necessary to allow individuals—and particularly consumers—to be able to determine whether a promise has in fact been performed.

As they say, read the whole thing. Comments welcome.


New Draft: Brand Renegades Redux

Charlottesville Riot

I have posted to SSRN a draft of the essay I contributed to Ann Bartow’s IP Scholarship Redux conference at the University of New Hampshire (slides from my presentation at the conference are available here). These are dark times, and the darkness leaves nothing untouched–certainly not the consumer culture in which we all live our daily lives. As I say in the essay, Nazis buy sneakers too, and often with a purpose. We all–brand owners, consumers, lawyers, and judges–should think about how we can best respond to them.

Trademark Clutter at Northwestern Law REMIP

I’m in Chicago at Northwestern Law today to present an early-stage empirical project at the Roundtable on Empirical Methods in Intellectual Property (#REMIP). My project will use Canada’s pending change to its trademark registration system as a natural experiment to investigate the role national IP offices play in reducing “clutter”–registrations for marks that go unused, raising clearance costs and depriving competitors and the public of potentially valuable source identifiers.
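
By way of illustration only, the analysis might boil down to a difference-in-differences comparison of Canadian registrations before and after the change against registrations in an unaffected jurisdiction. Here is a minimal sketch of what that estimation could look like; the data file, column names, and treatment coding below are hypothetical placeholders of my own, not the project’s actual data or specification.

```python
# Hedged sketch: a difference-in-differences estimate of how a registration-system
# change affects the share of registered marks that go unused ("clutter").
# Everything here (file, columns, coding) is a hypothetical stand-in.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per (jurisdiction, year), with `unused_share`
# = share of live registrations showing no evidence of use.
df = pd.read_csv("registrations_panel.csv")

df["treated"] = (df["jurisdiction"] == "CA").astype(int)  # Canada = treated group
df["post"] = (df["year"] >= 2019).astype(int)             # after the rule change

# The coefficient on treated:post is the difference-in-differences estimate
# of the change in clutter attributable to the registration-system change.
model = smf.ols("unused_share ~ treated + post + treated:post", data=df).fit()
print(model.summary())
```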

Slides for the presentation are available here.

Thanks to Dave Schwartz of Northwestern, Chris Buccafusco of Cardozo, and Andrew Toole of the US Patent and Trademark Office for organizing this conference.

Mix, Match, and Layer: Hemel and Ouellette on Incentives and Allocation in Innovation Policy

One of the standard tropes of IP scholarship is that when it comes to knowledge goods, there is an inescapable tradeoff between incentives and access. IP gives innovators and creators some assurance that they will be able to recoup their investments, but at the cost of the deadweight losses and restriction of access that result from supracompetitive pricing. Alternative incentive regimes—such as government grants, prizes, and tax incentives—may simply recapitulate this tradeoff in other forms: providing open access to government-funded research, for example, may blunt the incentives that would otherwise spur creation of knowledge goods for which a monopolist would be able to extract significant private value through market transactions.
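
To put the familiar tradeoff in stylized terms (a textbook illustration of my own, not anything drawn from the article): suppose demand for a knowledge good is linear, Q = a − bP, and the marginal cost of serving one more user is zero. Then

$$P_m = \frac{a}{2b}, \qquad Q_m = \frac{a}{2}, \qquad \pi_m = \frac{a^2}{4b}, \qquad \mathrm{DWL} = \tfrac{1}{2}\,(Q^{*} - Q_m)\,P_m = \frac{a^2}{8b},$$

where Q* = a is the quantity under open access. The monopoly profit π_m is what funds the incentive; the deadweight-loss triangle is the access cost of delivering it that way. The orthodox view treats the two as a package deal.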

In “Innovation Policy Pluralism” (forthcoming Yale L. J.), Daniel Hemel and Lisa Larrimore Ouellette challenge this orthodoxy. They argue that the incentive and access effects of particular legal regimes are not necessarily a package deal. And in the process, they open up tremendous new potential for creative thinking about how legal regimes can and should support and disseminate new knowledge.

Building on their prior work on innovation incentives, Hemel and Ouellette note that such incentives may be set ex ante or ex post, by the government or by the market. (Draft at 8) Various governance regimes—IP, prizes, government grants, and tax incentives—offer policymakers “a tunable innovation-incentive component: i.e., each offers potential innovators a payoff structure that determines the extent to which she will bear R&D costs and the rewards she will receive contingent upon different project outcomes.” (Id. at 13-14)
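
Put slightly more formally (my own gloss, not the authors’ notation), each regime’s innovation-incentive component can be read as fixing the innovator’s expected payoff

$$\mathbb{E}[U] \;=\; \sum_{s} p(s)\,R(s) \;-\; (1-\sigma)\,C,$$

where C is the R&D cost, σ is the share of that cost covered up front (as with grants or tax credits), R(s) is the reward received if project outcome s obtains (patent rents or prize payments, set ex ante by the government or ex post by the market), and p(s) is the probability of that outcome. Tuning the incentive means tuning σ and R(·).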

The authors further contend that each of these governance regimes also entails a particular allocation mechanism—“the terms under which consumers and firms can gain access to knowledge goods.” (Id. at 14) The authors’ exploration of allocation mechanisms is not as rich as their earlier exploration of incentive structures—they note that allocation is a “spectrum” at one end of which is monopoly pricing and at the other end of which is open access. But further investigation of the details of allocation mechanisms may well be left to future work; the key point of this paper is that “the choice of innovation incentive and the choice of allocation mechanism are separable.” (Id., emphasis added) While the policy regimes most familiar to us tend to bundle a particular innovation incentive with a particular allocation mechanism, setting up the familiar tradeoff between incentives and access, Hemel and Ouellette argue that “policymakers can and sometimes do decouple these elements from one another.” (Id. at 15) They suggest three possible mechanisms for such de-coupling: matching, mixing, and layering.

By “matching,” the authors are primarily referring to the combination of IP-like innovation incentives with open-access allocation mechanisms, which allows policymakers “to leverage the informational value of monopoly power while achieving the allocative efficiency of open access.” For example, the government could “buy out” a patentee using some measure of the patent’s net present value and then dedicate the patent to the public domain. (Id. at 15-17) Conversely, policymakers could incentivize innovation with non-IP mechanisms while channeling the resulting knowledge goods into a monopoly-seller market allocation mechanism. This, they argue, might be desirable where the commercialization of knowledge goods requires incentives of its own (as with drugs that require lengthy and expensive testing), which is what the Bayh-Dole Act was supposedly designed to provide. (Id. at 18-23) Intriguingly, they also suggest that such matching might be desirable in service to a “user-pays” distributive principle. (Id. at 18) (More on that in a moment.)
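
To give a rough sense of what a buyout at “some measure of the patent’s net present value” might involve (a back-of-the-envelope gloss of my own, not the authors’), the purchase price would approximate the discounted stream of profits the patentee could have expected over the remaining term:

$$\text{Buyout} \;\approx\; \sum_{t=1}^{T} \frac{\mathbb{E}[\pi_t]}{(1+r)^{t}},$$

where π_t is expected profit in year t, r is a discount rate, and T is the remaining patent term. The innovator’s expected reward is preserved, while everyone else gets open access going forward.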

The second de-coupling strategy is “mixing.” Here, the focus is not so much on the relationships between incentives and allocation, but on the ways various incentive structures can be combined, or various allocation mechanisms can be combined. The incentives portion of this section (id. at 23-32) reads largely as an extension and refinement of Hemel’s and Ouellette’s earlier paper on incentive mechanisms, following the model of Suzanne Scotchmer and covering familiar ground on the information economics of incentive regimes. Their discussion of mixing allocation mechanisms (id. at 32-36)—for example by allowing monopolization but providing consumers with subsidies—is a bit less assured, but far more novel. They note that monopoly pricing seems normatively undesirable due to deadweight loss, but offer two justifications for it. The first, building on the work of Glen Weyl and Jean Tirole, is a second-order justification that piggybacks on the information economics of the authors’ incentives analysis. To wit: they suggest that allocating access according to price gives some market test of a knowledge good’s social value, so that an appropriate incentive can be provided. (Id. at 33-34) Again, however, the authors’ second argument is intriguingly distributive: they suggest that for some knowledge goods—for example “a new yachting technology” enjoyed only by the wealthy—restricting access by imposing supracompetitive costs may help enforce a normatively attractive “user-pays” principle. (Id. at 33, 35)

The final de-coupling strategy, “layering,” involves different mechanisms operating at different levels of political organization. For example, while TRIPS imposes an IP regime at the supranational level, individual TRIPS member states may opt for non-IP incentive mechanisms or open-access allocation mechanisms at the domestic level—as many states do with Bayh-Dole regimes and pharmaceutical delivery systems, respectively. (Id. at 36-39) This analysis builds on another of the authors’ previous papers, and again rests on a somewhat underspecified distributive rationale: layering regimes with IP at the supranational level may be desirable, Hemel and Ouellette argue, because it allows “signatory states [to] commit to reaching an arrangement under which knowledge-good consumers share costs with knowledge-good producers” and because it “establish[es] a link between the benefits to the consumer state and the size of the transfer from the consumer state to the producer state” so that “no state ever needs to pay for knowledge goods it doesn’t use.” (Id. at 38, 39) What the argument does not include is any reason to think these features of the supranational IP regime are in fact normatively desirable.

Hemel’s and Ouellette’s article concludes with some helpful illustrations from the pharmaceutical industry of how matching, mixing, and layering operate in practice. (Id. at 39-45) These examples, and the theoretical framework underlying them, offer fresh ways of looking at our knowledge governance regimes. They demonstrate that incentives and access are not simple tradeoffs baked into those regimes—that they have some independence, and that we can tune them to suit our normative ends. They also offer tantalizing hints that those ends may—perhaps should—include norms regarding distribution.

What this article lacks, but strongly invites the IP academy to begin investigating, is an articulated normative theory of distribution. Distributive norms are an uncomfortable subject for American legal academics—and especially American IP academics—who have almost uniformly been raised in the law-and-economics tradition. That tradition tends to bracket distributive questions and focus on questions of efficiency as to which—it is thought—all reasonable minds should agree. Such agreement is admittedly absent from distributive questions, and as a result we may simply lack the vocabulary, at present, to thoroughly discuss the implications of Hemel’s and Ouellette’s contributions. Their latest work suggests it may be time for our discipline to broaden its perspective on the social implications of knowledge creation.

Valuing Progress: Forthcoming 2018 from Cambridge University Press

I’m very pleased to announce that the book project I have been plodding away at for over two years is now under contract with Cambridge University Press. Its working title is Valuing Progress: A Pluralist Approach to Knowledge Governance. Keep an eye out for it in late 2018, and tell your librarian to do likewise!

Bits and pieces of Valuing Progress have appeared on this blog and elsewhere as it has developed from a half-baked essay into a monograph-sized project:

  • I offered my first musings about the relationship between normative commitments regarding distribution and the choice of a knowledge-governance regime in the opening plenary presentation at IPSC in Berkeley–these musings will now be more fully developed in Chapter 4 of the book: “Reciprocity.”
  • My exploration of our obligations to future persons, and the implication of those obligations for our present-day knowledge-governance policies, used analogous arguments in environmental policy as an early springboard. Deeper consideration of our obligations to the future led me to Derek Parfit’s Non-Identity Problem, at first through the lens of public health policy. Because knowledge governance–like environmental stewardship and global health policy–is a cooperative social phenomenon spanning timescales greater than any single human lifetime, the problem of future persons is one any theory of knowledge governance must engage. I made my first effort to do so at the 2015 Works-In-Progress in Intellectual Property (WIPIP) Conference at the University of Washington, and presented a more recent take at NYU’s 2017 Tri-State IP Workshop. My fuller treatment of the issue will appear in Chapter 7 of Valuing Progress: “Future Persons.”
  • Finally, the driving theoretical debate in IP in recent years has been the one between Mark Lemley, champion of consequentialism, and Rob Merges, who has lately turned from consequentialism to nonconsequentialist philosophers such as Locke and Rawls for theoretical foundations. My hot take on this debate was generative enough to justify organizing a symposium on the issue at the St. John’s Intellectual Property Law Center, where I serve as founding director. I was gratified that Professors Lemley and Merges presented on a panel together, and that I was able to use the opportunity to introduce my own thoughts on the debate more fully. My introduction to the symposium issue of the St. John’s Law Review forms the kernel of Chapter 2 of Valuing Progress: “From Is to Ought.”

Other chapters will discuss the incommensurability of values at stake in knowledge governance, the relevance of luck and agency to our weighing of those values, the widening of our moral concern regarding the burdens and benefits of knowledge creation to encompass socially remote persons, and the role of value pluralism in shaping political institutions and ethical norms to reconcile these values when they inevitably conflict. The result, I hope, will introduce my colleagues in innovation and creativity law and policy to a wider literature in moral philosophy that bears directly on their work. In doing so, I hope to help frame the distinction between–and the appropriate domains of–empirical and normative argumentation, to point a way out of our increasingly unhelpful arguments about 18th-century philosophy, and to introduce a more nuanced set of normative concerns that engage with the messiness and imperfection of human progress.

I am extremely grateful to everyone who has helped me to bring Valuing Progress to this important stage of development, including Matt Gallaway at CUP, the organizers of conferences at which I’ve had the opportunity to present early pieces of the project (particularly Peter Menell, Pam Samuelson, Molly Shaffer Van Houweling, and Rob Merges at Berkeley; Jennifer Rothman at Loyola of Los Angeles; Jeanne Fromer and Barton Beebe at NYU; Zahr Said at the University of Washington; Irina Manta at Hofstra; and Paul Gugliuzza at Boston University). I am also grateful for the support of St. John’s Law School, my dean Mike Simons, and my colleagues who have served as associate dean for faculty scholarship as this project has been in development: Marc DeGirolami and Anita Krishnakumar. Many more friends and colleagues have offered helpful feedback on early drafts and conversation about points and arguments that will find their way into the manuscript; they can all expect warm thanks in the acknowledgments section of the finished book.

But first, I have to finish writing the thing. So, back to work.