Patents

New Draft: Jefferson’s Taper

Read Jefferson’s Taper on SSRN

A little less than a year ago, I made a startling discovery about Thomas Jefferson’s famous observation on the nature of ideas, which (he argued) spread like fire from one person to the next without diminishing the possession of anyone who shares them. As I discovered, Jefferson copied this metaphor from a nearly identical passage in Cicero’s De Officiis–a work of philosophy that was once one of the world’s most widely read books, but which today few people have even heard of. As I mined out the implications of Jefferson’s unattributed borrowing, I concluded that we have been misreading him for almost a hundred years. Rather than making a proto-utilitarian argument in favor of a limited system of patent rights, Jefferson was instead making a natural law argument–exactly the type of argument that his modern-day detractors rely on to support their policy prescriptions regarding the scope of intellectual property rights. And in fact, gaming out the implications of Jefferson’s natural law argument leads to the conclusion that knowledge creators may actually have some obligations to share their knowledge, rooted in a particular pre-Enlightenment conception of natural law and distributive justice.

Doing the work of fleshing out these implications required me to immerse myself in some old and (to me) unfamiliar philosophical sources for much of the past year. The result is the most “scholarly” work of scholarship I think I’ve ever produced: Jefferson’s Taper, now in draft on SSRN, and hopefully coming soon to a law review near you. This was a ton of fun to research and write; I think it is going to surprise a fair number of people. Comments, as always, are most welcome.

Jefferson’s Taper at IPSC 2018 (Berkeley)

In researching my in-progress monograph on value pluralism in knowledge governance, I made a fascinating discovery about the intellectual history of American intellectual property law. That discovery is now the basis of an article-length project, which I am presenting today at the annual Intellectual Property Scholars Conference, hosted this year at UC Berkeley. The long title is “Jefferson’s Taper and Cicero’s Lumen: A Genealogy of Intellectual Property’s Distributive Ethos,” but I’ve taken to referring to it by the shorthand “Jefferson’s Taper.” Here’s the abstract:

This Article reports a new discovery concerning the intellectual genealogy of one of American intellectual property law’s most important texts. The text is Thomas Jefferson’s 1813 letter to Isaac McPherson regarding the absence of a natural right to property in inventions, metaphorically illustrated by a “taper” that spreads light from one person to another without diminishing the light at its source. I demonstrate that Thomas Jefferson directly copied this Parable of the Taper from a nearly identical parable in Cicero’s De Officiis, and I show how this borrowing situates Jefferson’s thoughts on intellectual property firmly within a natural law tradition that others have cited as inconsistent with Jefferson’s views. I further demonstrate how that natural law tradition rests on a classical, pre-Enlightenment notion of distributive justice in which distribution of resources is a matter of private beneficence guided by a principle of proportionality to the merit of the recipient. I then review the ways that notion differs from the modern, post-Enlightenment notion of distributive justice as a collective social obligation that proceeds from an initial assumption of human equality. Jefferson’s lifetime correlates with a historical pivot in the intellectual history of the West from the classical notion to the modern notion, and I argue that his invocation and interpretation of the Parable of the Taper reflect this mixing of traditions. Finally, I discuss the implications of both theories of distributive justice for the law and policy of knowledge governance—including but not limited to intellectual property law—and propose that the debate between classical and modern distributivists is more central to policy design than the familiar debate between utilitarians and Lockeans.

Slides for the presentation are available here.

Mix, Match, and Layer: Hemel and Ouellette on Incentives and Allocation in Innovation Policy

One of the standard tropes of IP scholarship is that when it comes to knowledge goods, there is an inescapable tradeoff between incentives and access. IP gives innovators and creators some assurance that they will be able to recoup their investments, but at the cost of the deadweight losses and restriction of access that result from supracompetitive pricing. Alternative incentive regimes—such as government grants, prizes, and tax incentives—may simply recapitulate this tradeoff in other forms: providing open access to government-funded research, for example, may blunt the incentives that would otherwise spur creation of knowledge goods for which a monopolist would be able to extract significant private value through market transactions.
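
To make the tradeoff concrete, here is a toy linear-demand example, with invented numbers not drawn from any of the works discussed here, showing how the same monopoly margin that rewards an innovator also destroys surplus that no one captures:

```python
# Toy model: linear inverse demand P = A - B*Q with constant marginal cost c.
# The numbers are invented for illustration only.

def knowledge_good_market(A=100.0, B=1.0, c=20.0):
    # Competitive benchmark: price equals marginal cost, access is maximized.
    q_comp = (A - c) / B
    cs_comp = 0.5 * (A - c) * q_comp            # consumer surplus triangle

    # Monopoly pricing: marginal revenue (A - 2*B*Q) equals marginal cost.
    q_mon = (A - c) / (2 * B)
    p_mon = A - B * q_mon
    cs_mon = 0.5 * (A - p_mon) * q_mon          # shrunken consumer surplus
    profit = (p_mon - c) * q_mon                # the innovation "incentive"
    dwl = 0.5 * (p_mon - c) * (q_comp - q_mon)  # surplus nobody captures

    return cs_comp, cs_mon, profit, dwl

print(knowledge_good_market())
# (3200.0, 800.0, 1600.0, 800.0): consumers lose 2,400 of surplus, the
# innovator gains 1,600, and 800 simply evaporates as deadweight loss.
```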

In “Innovation Policy Pluralism” (forthcoming Yale L. J.), Daniel Hemel and Lisa Larrimore Ouellette challenge this orthodoxy. They argue that the incentive and access effects of particular legal regimes are not necessarily a package deal. And in the process, they open up tremendous new potential for creative thinking about how legal regimes can and should support and disseminate new knowledge.

Building on their prior work on innovation incentives, Hemel and Ouellette note that such incentives may be set ex ante or ex post, by the government or by the market. (Draft at 8) Various governance regimes—IP, prizes, government grants, and tax incentives—offer policymakers “a tunable innovation-incentive component: i.e., each offers potential innovators a payoff structure that determines the extent to which she will bear R&D costs and the rewards she will receive contingent upon different project outcomes.” (Id. at 13-14)

The authors further contend that each of these governance regimes also entails a particular allocation mechanism—“the terms under which consumers and firms can gain access to knowledge goods.” (Id. at 14) The authors’ exploration of allocation mechanisms is not as rich as their earlier exploration of incentive structures—they note that allocation is a “spectrum” at one end of which is monopoly pricing and at the other end of which is open access. But further investigation of the details of allocation mechanisms may well be left to future work; the key point of this paper is that “the choice of innovation incentive and the choice of allocation mechanism are separable.” (Id., emphasis added) While the policy regimes most familiar to us tend to bundle a particular innovation incentive with a particular allocation mechanism, setting up the familiar tradeoff between incentives and access, Hemel and Ouellette argue that “policymakers can and sometimes do decouple these elements from one another.” (Id. at 15) They suggest three possible mechanisms for such de-coupling: mixing, matching, and layering.
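
One way to picture the separability point (my gloss, not the authors’ own framing) is as a grid with incentive mechanisms on one axis and allocation mechanisms on the other; the regimes we usually argue about occupy only a few of its cells:

```python
# Illustrative sketch of the separability claim: incentives and allocation
# treated as independent design choices. The labels are mine, for exposition.
from itertools import product

incentives = ["IP exclusivity", "prizes", "grants", "tax credits"]
allocations = ["monopoly pricing", "subsidized access", "open access"]

# The bundles we habitually treat as package deals:
familiar = {
    ("IP exclusivity", "monopoly pricing"): "classic patent/copyright regime",
    ("grants", "open access"): "publicly funded, openly published research",
}

for incentive, allocation in product(incentives, allocations):
    label = familiar.get((incentive, allocation), "open to mixing and matching")
    print(f"{incentive:>15} + {allocation:<17} -> {label}")
# Twelve combinations in all, of which only a couple correspond to the
# familiar bundled regimes.
```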

By “matching,” the authors are primarily referring to the combination of IP-like innovation incentives with open-access allocation mechanisms, which allows policymakers “to leverage the informational value of monopoly power while achieving the allocative efficiency of open access.” For example, the government could “buy out” a patentee using some measure of the patent’s net present value and then dedicate the patent to the public domain. (Id. at 15-17) Conversely, policymakers could incentivize innovation with non-IP mechanisms and then channel the resulting knowledge goods into a monopoly-seller market allocation mechanism. This, they argue, might be desirable where incentives are required for the commercialization of knowledge goods (such as drugs that require lengthy and expensive testing), as the Bayh-Dole Act was supposedly designed to provide. (Id. at 18-23) Intriguingly, they also suggest that such matching might be desirable in service to a “user-pays” distributive principle. (Id. at 18) (More on that in a moment.)

The second de-coupling strategy is “mixing.” Here, the focus is not so much on the relationship between incentives and allocation as on the ways various incentive structures can be combined, or various allocation mechanisms can be combined. The incentives portion of this section (id. at 23-32) reads largely as an extension and refinement of Hemel’s and Ouellette’s earlier paper on incentive mechanisms, following the model of Suzanne Scotchmer and covering familiar ground on the information economics of incentive regimes. Their discussion of mixing allocation mechanisms (id. at 32-36)—for example by allowing monopolization but providing consumers with subsidies—is a bit less assured, but far more novel. They note that monopoly pricing seems normatively undesirable due to deadweight loss, but offer two justifications for it. The first, building on the work of Glen Weyl and Jean Tirole, is a second-order justification that piggybacks on the information economics of the authors’ incentives analysis. To wit: they suggest that allocating access according to price gives some market test of a knowledge good’s social value, so that an appropriate incentive can be provided. (Id. at 33-34) The authors’ second argument, however, is again intriguingly distributive: they suggest that for some knowledge goods—for example “a new yachting technology” enjoyed only by the wealthy—restricting access by imposing supracompetitive costs may help enforce a normatively attractive “user-pays” principle. (Id. at 33, 35)

The final de-coupling strategy, “layering,” involves different mechanisms operating at different levels of political organization. For example, while TRIPS imposes an IP regime at the supranational level, individual TRIPS member states may opt for non-IP incentive mechanisms or open-access allocation mechanisms at the domestic level—as many states do with Bayh-Dole regimes and pharmaceutical delivery systems, respectively. (Id. at 36-39) This analysis builds on another of the authors’ previous papers, and again rests on a somewhat underspecified distributive rationale: layering regimes with IP at the supranational level may be desirable, Hemel and Ouellette argue, because it allows “signatory states [to] commit to reaching an arrangement under which knowledge-good consumers share costs with knowledge-good producers” and “establish[es] a link between the benefits to the consumer state and the size of the transfer from the consumer state to the producer state” so that “no state ever needs to pay for knowledge goods it doesn’t use.” (Id. at 38, 39) What the argument does not include is any reason to think these features of the supranational IP regime are in fact normatively desirable.

Hemel’s and Ouellette’s article concludes with some helpful illustrations from the pharmaceutical industry of how matching, mixing, and layering operate in practice. (Id. at 39-45) These examples, and the theoretical framework underlying them, offer fresh ways of looking at our knowledge governance regimes. They demonstrate that incentives and access are not simple tradeoffs baked into those regimes—that they have some independence, and that we can tune them to suit our normative ends. They also offer tantalizing hints that those ends may—perhaps should—include norms regarding distribution.

What this article lacks, but strongly invites the IP academy to begin investigating, is an articulated normative theory of distribution. Distributive norms are an uncomfortable topic for American legal academics—and especially American IP academics—who have almost uniformly been raised in the law-and-economics tradition. That tradition tends to bracket distributive questions and focus on questions of efficiency as to which—it is thought—all reasonable minds should agree. No such agreement can be assumed on distributive questions, and as a result we may simply lack the vocabulary, at present, to thoroughly discuss the implications of Hemel’s and Ouellette’s contributions. Their latest work suggests it may be time for our discipline to broaden its perspective on the social implications of knowledge creation.

Progress for Future Persons: WIPIP Slide Deck and Discussion Points

Following up on yesterday’s post, here are the slides from my WIPIP talk on Progress for Future Persons. Another take on the talk is available in Rebecca Tushnet’s summary of my panel’s presentations.

A couple of interesting points emerged from the Q&A:

  • One of the reasons why rights-talk may be more helpful in the environmental context than in the knowledge-creation context is that rights are often framed in terms of setting a floor: whichever people come into existence in the future, we want to ensure that they enjoy certain minimum standards of human dignity and opportunity. This makes sense where the legal regime in question is trying to guard against depletion of resources, as in environmental law. It’s less obviously relevant in the knowledge-creation context, where our choices are largely about increasing (and then distributing) available resources–including cultural resources and the resources and capacities made possible by innovation.
  • One of the problems with valuing future states of the world is uncertainty: we aren’t sure what consequences will flow from our current choices. This is true, but it’s not the theoretical issue I’m concerned with in this chapter. In fact, if we were certain what consequences would flow from our current choices, that would in a sense make the problem of future persons worse, if only by presenting it more squarely. That is, under certainty, the only question to deal with in normatively evaluating future states of the world would be choosing among the identities of future persons and the resources they will enjoy.

Slides: Progress for Future Persons WIPIP 2016

Zika, the Pope, and the Non-Identity Problem

I’m in Seattle for the Works-In-Progress in Intellectual Property Conference (WIPIP […WIPIP good!]), where I’ll be presenting a new piece of my long-running book project, Valuing Progress. This presentation deals with issues I take up in a chapter on “Progress for Future Persons.” And almost on cue, we have international news that highlights exactly the same issues.

In light of the potential risk of serious birth defects associated with the current outbreak of the Zika virus in Latin America, Pope Francis has suggested in informal comments that Catholics might be justified in avoiding pregnancy until the danger passes–a position that some are interpreting to be in tension with Church teachings on contraception. The moral issue the Pope is responding to here is actually central to an important debate in moral philosophy over the moral status of future persons, and it is this debate that I’m leveraging in my own work to discuss whether and how we ought to take account of future persons in designing our policies regarding knowledge creation. This debate centers on a puzzle known as the Non-Identity Problem.

First: the problem in a nutshell. Famously formulated by Derek Parfit in his 1984 opus Reasons and Persons, the Non-Identity Problem exposes a contradiction among three moral intuitions many of us share: (1) that an act is only wrong if it wrongs (or perhaps harms) some person; (2) that it is not wrong to bring someone into existence so long as their life remains worth living; and (3) that a choice which has the effect of forgoing the creation of one life and inducing the creation of a different, happier life is morally correct. The problem Parfit pointed out is that many real-world cases require us to reject one of these three propositions. The Pope’s comments on Zika present exactly this kind of case.

The choice facing potential mothers in Zika-affected regions today is essentially the choice described in Proposition 3. They could delay their pregnancies until after the epidemic passes in the hopes of avoiding the birth defects potentially associated with Zika. Or they could become pregnant and potentially give birth to a child who will suffer from some serious life-long health problems, but still (we might posit) have a life worth living. And if we think–as the reporter who elicited Pope Francis’s news-making comments seemed to think–that delaying pregnancy in this circumstance is “the lesser of two evils,” we must reject either Proposition 1 or Proposition 2. That is, a mother’s choice to give birth to a child who suffers from some birth defect that nevertheless leaves that child’s life worth living cannot be wrong on grounds that it wrongs that child, because the alternative is for that child not to exist at all. And it is a mistake to equate that child with the different child who might be born later–and healthier–if the mother waits to conceive until after the risk posed by Zika has passed. They are, after all, different (potential future) people.

So what does this have to do with Intellectual Property? Well, quite a bit–or so I will argue. Parfit’s point about future people can be generalized to future states of the world, in at least two ways.

One way has resonances with the incommensurability critique of welfarist approaches to normative evaluation: if our policies lead to creation of certain innovations, and certain creative or cultural works, and the non-creation of others, we can certainly say that the future state of the world will be different as a result of our policies than it would have been under alternative policies. But it is hard for us to say in the abstract that this difference has a normative valence: that the world will be better or worse for the creation of one quantum of knowledge rather than another. This is particularly true for cultural works.

The second and more troubling way of generalizing the Non-Identity Problem was in fact taken up by Parfit himself (Reasons and Persons at 361):

[Image: quoted passage from Reasons and Persons, p. 361]

What happens if we try to compare these two states of the world–and future populations–created by our present policies? Assuming that we do not reject Proposition 3–that we think the difference in identity between future persons determined by our present choices does not prevent us from imbuing that choice with moral content–we ought to be able to evaluate entire future populations in the same way. All we need is some metric for what makes life worth living, and some way of aggregating that metric across populations. Parfit called this approach to normative evaluation of states of the world the “Impersonal Total Principle,” and he built out of it a deep challenge to consequentialist moral theory at the level of populations, encapsulated in what he called the Repugnant Conclusion (Reasons and Persons, at 388):

[Image: Parfit’s statement of the Repugnant Conclusion, quoted from Reasons and Persons, p. 388]
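
To see how little machinery the Impersonal Total Principle needs in order to generate that conclusion, here is a minimal numeric sketch (my numbers, not Parfit’s):

```python
# The Impersonal Total Principle, reduced to arithmetic: the value of a
# future state of the world is population times average quality of life.
# The figures below are invented solely to illustrate the structure.

def impersonal_total(population, avg_quality_of_life):
    return population * avg_quality_of_life

world_a = impersonal_total(10_000_000_000, 100)   # flourishing lives
world_z = impersonal_total(2_000_000_000_000, 1)  # lives barely worth living

print(world_a)            # 1,000,000,000,000
print(world_z)            # 2,000,000,000,000
print(world_z > world_a)  # True: the principle prefers the vast, barely-
                          # worth-living population to the flourishing one.
```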

If, like Parfit, we find this conclusion repugnant, it may be that we must reject Proposition 2–the reporter’s embedded assumption about the Pope’s views on contraception in the age of Zika. This, in turn, requires us to take Propositions 1 and 3–and the Non-Identity Problem in general–more seriously. It may, in fact, require us to find some basis other than aggregate welfare (or some hypothesized “Impersonal Total”) to normatively evaluate future states of the world, and determine moral obligations in choosing among those future states.

The Repugnant Conclusion is especially relevant to the policy choices we make regarding medical innovation. Many of the choices we make when setting policies in this area have determinative effects on which people may come into existence in the future, and on what the quality of their lives will be. But we lack any coherent account of how we ought to weigh the interests of these future people, and as Parfit’s work suggests, such a coherent account may not in fact be available. For example, if we have to choose between directing resources toward curing one of two life-threatening diseases, the compounding effects of such a cure over the course of future generations will result in the non-existence of many people who could have been brought into being had we chosen differently (and conversely, the existence of many people who would not have existed but for our policy choice). If we take the Non-Identity Problem seriously, and fear the Repugnant Conclusion, identifying plausible normative criteria for guiding such a policy choice is a pressing concern.

I don’t think the extant alternatives are especially promising. The typical welfarist approach to the problem avoids the Repugnant Conclusion by essentially assuming that future persons don’t matter relative to present persons. The mechanism for this assumption is the discount rate incorporated into most social welfare functions, according to which the weight given to the well-being of future people quickly and asymptotically approaches zero in our calculation of aggregate welfare. Parfit himself noted that such discounting leads to morally implausible results–for example, it would lead us to conclude that we should generate a small amount of energy today through a cheap process that generates toxic waste that will kill billions of people hundreds of years from now. (Reasons and Persons, appx. F)
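
A toy calculation, with invented numbers, shows how quickly standard exponential discounting does this work:

```python
# Exponential discounting in a social welfare function: a welfare effect t
# years out is weighted by 1/(1+r)^t. All figures below are invented.

def present_value(welfare_effect, years_from_now, annual_rate=0.03):
    return welfare_effect / (1 + annual_rate) ** years_from_now

benefit_today = 10_000                 # modest gain from the cheap process
deaths_in_500_years = -5_000_000_000   # billions of deaths, five centuries out

pv_catastrophe = present_value(deaths_in_500_years, years_from_now=500)
print(round(pv_catastrophe))               # about -1,900
print(benefit_today + pv_catastrophe > 0)  # True: the toxic process "passes"
# At a 3% discount rate, a catastrophe 500 years away is weighted at well
# under a millionth of its face value, so even billions of future deaths are
# outweighed by a small present benefit.
```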

Another alternative, adopted by many in the environmental policy community (which has been far better at incorporating the insights of the philosophical literature on future persons than the intellectual property community, even though both fields deal with social phenomena that are inherently oriented toward the relatively remote future), is to adopt an independent norm of conservation. This approach is sometimes justified with rights-talk: it posits that whatever future persons come into being, they have a right to a certain basic level of resources, health, or opportunity. In a policy area concerned with the potential depletion of resources to the point where human life becomes literally impossible, such rights-talk may indeed be helpful. But when weighing trade-offs with less-than-apocalyptic effects on future states of the world, such as most of the trade-offs we face in knowledge-creation policy, rights-talk does a lot less work.

The main approach adopted by those who consider medical research policy–quantification of welfare effects according to quality-adjusted life years (QALYs)–attempts to soften the sharp edge of the Repugnant Conclusion by considering not only the marginal quantity of life that results from a particular policy intervention (as compared with available alternatives), but also the quality of that added life. This is, for example, the approach of Terry Fisher and Talha Syed in their forthcoming work on medical funding for populations in developing countries. But there is reason to believe that such quality-adjustment, while practically necessary, is theoretically suspect. In particular, Parfit’s student Larry Temkin has made powerful arguments that we lack a coherent basis to compare the relative effects on welfare of a mosquito bite and a course of violent torture, to say nothing of the relative effects of two serious medical conditions. If Temkin is right, then what is intended as an effort to account for the quality of future lives in policymaking begins to look more like an exercise in imposing the normative commitments of policymakers on the future state of the world.
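
For concreteness, here is a schematic QALY comparison, with purely illustrative numbers not drawn from Fisher and Syed, showing how much of the work is done by the quality weights themselves, which is exactly where Temkin-style doubts bite:

```python
# Quality-adjusted life years: added years of life scaled by a 0-to-1 quality
# weight. The interventions, patient counts, and weights below are invented.

def qalys(added_years, quality_weight):
    return added_years * quality_weight

# Intervention A: 1,000 patients each gain 2 years at quality weight 0.9.
# Intervention B: 4,000 patients each gain 1 year at quality weight 0.4.
total_a = 1_000 * qalys(2, 0.9)   # 1,800 QALYs
total_b = 4_000 * qalys(1, 0.4)   # 1,600 QALYs

print(total_a, total_b, total_a > total_b)   # 1800.0 1600.0 True
# Nudge B's quality weight from 0.4 to 0.5 and the ranking flips (2,000 vs.
# 1,800): the policy choice hangs entirely on a contestable weight.
```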

I actually embrace this conclusion. My own developing view is that theory runs out very quickly when evaluating present policies based on their effect on future states of the world. If this is right–that a coherent theoretical account of our responsibility to future generations is simply not possible–then whatever normative content informs our consideration of policies with respect to their effects on future states of the world is probably going to be exogenous to normative or moral theory–that is, it will be based on normative or moral preferences (or, to be more charitable, commitments or axioms). This does not strike me as necessarily a bad thing, but it does require us to be particularly attentive to how we resolve disputes among holders of inconsistent preferences. This is especially true because the future has no way to communicate its preferences to us: as I argued in an earlier post, there is no market for human flourishing. It may be that we have to choose among future states of the world according to idiosyncratic and contestable normative commitments; if that’s true then it is especially important that the social choice institutions to which we entrust such choices reflect appropriate allocations of authority. Representing the interests of future persons in those institutions is a particularly difficult problem: it demands that we in the present undertake difficult other-regarding deliberation in formulating and expressing our own normative commitments, and that the institutions themselves facilitate and respond to the results of that deliberation. Suffice it to say, I have serious doubts that intellectual property regimes–which at their best incentivize knowledge-creation in response to the predictable demands of relatively better-resourced members of society over a relatively short time horizon–satisfy these conditions.

Institutional Competence: SCOTUS Dings CAFC

Others with more of a dog in the fight over Federal Circuit deference to district courts on matters of patent claim construction will have more (and more interesting) things to say about today’s opinion in Teva v. Sandoz. I’ll only note one particular passage in Justice Breyer’s majority opinion that caught my eye, on pages 7-8 of the slip opinion:

Finally, practical considerations favor clear error review. We have previously pointed out that clear error review is “particularly” important where patent law is at issue because patent law is “a field where so much depends upon familiarity with specific scientific problems and principles not usually contained in the general storehouse of knowledge and experience.” Graver Tank & Mfg. Co. v. Linde Air Products Co., 339 U. S. 605, 610 (1950). A district court judge who has presided over, and listened to, the entirety of a proceeding has a comparatively greater opportunity to gain that familiarity than an appeals court judge who must read a written transcript or perhaps just those portions to which the parties have referred. Cf. Lighting Ballast, 744 F. 3d, at 1311 (O’Malley, J., dissenting) (Federal Circuit judges “lack the tools that district courts have available to resolve factual disputes fairly and accurately,” such as questioning the experts, examining the invention in operation, or appointing a court-appointed expert); Anderson, 470 U. S., at 574 (“The trial judge’s major role is the determination of fact, and with experience in fulfilling that role comes expertise”).

It seems to me that this reasoning is a fairly direct challenge to the raison d’être of the Federal Circuit. Learned Hand himself complained that the technical knowledge and expertise necessary to oversee the operation of the patent laws were beyond the grasp of most generalist Article III judges, and this was among the weightier reasons underlying the creation of our only federal appeals court whose jurisdiction is defined by subject matter. But judging by the Supreme Court docket (and the ruminations of some fairly capable generalist federal appellate judges), the argument for a specialist patent court is increasingly under assault.

Of course, it is trendy to take pot-shots at the Federal Circuit, and at the patent system generally. And the Supreme Court has been admonishing the CAFC–in subtle and not-so-subtle ways–for years; the quoted language from the Teva opinion is just the latest in a long line of examples. But the status quo has its defenders, and it does not seem likely that Congress will be loosening the Federal Circuit’s grip on patent law any time soon. So in the meantime, we’re left in the awkward position of continuing to rely on an institution whose comparative competence is increasingly called into question. Which, regardless of your view of the merits of a specialist court, can begin to wear on that court’s perceived legitimacy.