New Draft: Finding Dilution (An Application of Trademark as Promise)

I’ve just posted to SSRN a draft of a book chapter for a forthcoming volume on trademark law theory and reform edited by Graeme Dinwoodie and Mark Janis. My contribution, entitled “Finding Dilution,” reviews the history and theory of dilution, the quixotic cause of action that everybody loves to hate. As Rebecca Tushnet has noted, in a post-Tam world dilution may not have much of a future, and my analysis in this draft may therefore be moot by the time this volume gets published. But if not, the exercise has given me an opportunity to extend the theoretical framework I established and defended in the Stanford Law Review a few years ago: Trademark as Promise.

In Marks, Morals, and Markets, I argued that a contractualist understanding of trademarks as a tool to facilitate the making and keeping of promises from producers to consumers offered a better descriptive–and more attractive normative–account of producer-consumer relations than the two theoretical frameworks most often applied to trademark law (welfarism and Lockean labor-desert theory). But I “intentionally avoided examining contractualist theory’s implications for trademark law’s regulation of producer-producer relationships” (p. 813), mostly for lack of space, though I conjectured that these implications might well differ from those of a Lockean account. In my new draft, I take on this previously avoided topic and argue that my conjecture was correct, and that the contractualist account of Trademark as Promise offers a justification for the seeming collapse of trademark dilution law into trademark infringement law (draft at 18):

This justification, in turn, seems to depend on a particular kind of consumer reliance—reliance not on stable meaning, which nobody in a free society is in a position to provide, but on performance of promises to deliver goods and services. It is interference with that promise—a promise that does not require the promisor to constrain the action of any third party against their will—that trademark law protects from outside interference. A contractualist trademark right, then, would be considerably narrower than even the infringement-based rights of today. To recast dilution law to conform to such a right would be to do away with dilution as a concept. A promise-based theory of dilution would enforce only those promises the promisor could reasonably perform without constraining the freedom of others to act, while constraining that freedom only to the extent necessary to allow individuals—and particularly consumers—to be able to determine whether a promise has in fact been performed.

As they say, read the whole thing. Comments welcome.

 

Right and Wrong vs. Right and No-Right

This is a point that is probably too big for a blog post. But as the end of the Supreme Court term rolls around, and we start getting decisions in some of the more divisive cases of our times, something about the political undercurrents of the Court’s annual ritual has me thinking about the way it tends to legalize morality, and how much of our political narrative has to do with our disagreement about the way morals and law interact. I’m not speaking here about the positivist vs. anti-positivist debate in jurisprudence, which I view as being primarily about the ontology of law: what makes law “law” instead of something else. That question holds fairly little interest for me. Instead, I’m interested in the debate underlying that question: about the relationship between law on the one hand and justice or morality on the other. Most of the political energy released over these late decision days is not, I think, about the law, nor even about the morality, of the disputes themselves. Instead, it seems to me to be about the extent to which moral obligations ought or ought not to be legal ones–particularly in a democratic country whose citizens hold to diverse moral systems.

There is a long philosophical tradition that holds there is a difference between the kinds of conduct that can be enforced by legal coercion and the kinds that may attract moral praise or blame but which the state has no role in enforcing. This distinction–between the strict or “perfect” duties of Justice (or Right) and the softer or “imperfect” duties of Virtue (or Ethics)–is most familiar from Kant’s moral philosophy, but it has precursors stretching from Cicero to Grotius. Whether or not the distinction is philosophically sound or useful, I think it is a helpful tool for examining the interaction of morals and laws–but not in the sense in which philosophers have traditionally examined them. For most moral philosophers, justice and virtue are complementary parts of a cohesive whole: a moral system in which some duties are absolute and others contingent; the latter must often be weighed (sometimes against one another) in particular circumstances, but all duties are part of a single overarching normative system. But I think the end of the Supreme Court term generates so much heat precisely because it exposes the friction between two distinct and sometimes incompatible normative systems: the system of legal obligation and the system of moral obligation.

The simplest world would be one in which the law required us to do everything that was morally obligatory on us, forbade us to do everything that was morally wrong, and permitted us to do everything morally neutral–where law and morality perfectly overlap. But that has never been the world we live in–and not only because different people might have different views about morality. Conflicts between law and morality are familiar, and have been identified and examined at length in the scholarly literature, sometimes in exploring the moral legitimacy of legal authority, other times in evaluating the duty (or lack thereof) of obeying (or violating) unjust laws. Such conflicts are at least as old as Socrates’ cup of hemlock; one can trace a line from Finnis back to Aquinas, and from Hart back to Hobbes. Moreover, because in our society moral obligations are often derived from religious convictions, and our Constitution and statutes give religious practices a privileged status under the law, these conflicts are quite familiar to us in the form of claims for religious exemption from generally applicable laws–historically in the context of conscientious objection to military service, and more recently as an expanding web of recurring issues under the Religious Freedom Restoration Act.

But religious convictions are not the only moral convictions that might conflict with legal obligations. And the question whether one ought to obey an unjust law represents only one type of intersection between the two normative systems of law and morals–the most dramatic one, certainly, and the one that has garnered the most attention–but not the only one. Some of those intersections will present a conflict between law and morality, but many will not. Still, I think each such intersection carries a recognizable political valence in American society, precisely because our political allegiances tend to be informed by our moral commitments. I’ve outlined a (very preliminary) attempt to categorize those political valences in the chart below, though your views on the categories may differ (in which case I’d love to hear about it):

Political Valence (rows: legal categories; columns: moral categories):

| | Wrong | Suberogatory | Morally Neutral | Right | Supererogatory |
| --- | --- | --- | --- | --- | --- |
| Forbidden | Law and Order | Nanny State | Victimless Crime | Civil Disobedience | |
| Permitted | Failure of Justice | The Price of Liberty | The Right to be Let Alone | Good Deeds | Saints and Heroes |
| Required | Just Following Orders | | Red Tape | Civic Duty | Overdemanding Laws |
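For readers who prefer the chart in executable form, here is a minimal sketch of the same structure as a simple lookup table (the key names are my own shorthand for the chart’s categories; the two cells left blank above are simply absent from the mapping):

```python
# The chart as a mapping from (legal category, moral category) to the
# political valence at that intersection. Keys are shorthand for the
# chart's categories; blank cells are simply absent.
valence = {
    ("forbidden", "wrong"):          "Law and Order",
    ("forbidden", "suberogatory"):   "Nanny State",
    ("forbidden", "neutral"):        "Victimless Crime",
    ("forbidden", "right"):          "Civil Disobedience",
    ("permitted", "wrong"):          "Failure of Justice",
    ("permitted", "suberogatory"):   "The Price of Liberty",
    ("permitted", "neutral"):        "The Right to be Let Alone",
    ("permitted", "right"):          "Good Deeds",
    ("permitted", "supererogatory"): "Saints and Heroes",
    ("required", "wrong"):           "Just Following Orders",
    ("required", "neutral"):         "Red Tape",
    ("required", "right"):           "Civic Duty",
    ("required", "supererogatory"):  "Overdemanding Laws",
}

# Example: a law forbidding morally neutral conduct reads, on this
# framework, as a "Victimless Crime."
print(valence[("forbidden", "neutral")])
```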

Now of course, people of different political stripes will put different legal and moral situations into different boxes in the above chart, and frame the intersection of law and morals from the points of view of different agents. In our current political debate over enforcement of the immigration laws, for example, an American conservative might frame the issue from the point of view of the immigrant, and put enforcement in the “Law and Order” box; while an American progressive might frame the issue from the point of view of federal agents, and put enforcement in the “Just Following Orders” box. Conversely, to the progressive, the “Law and Order” category might call to mind the current controversy over whether a sitting president can be indicted; to a conservative, the “Just Following Orders” category might call to mind strict environmental regulations. Coordination of health insurance markets through federal law might be seen as an example of the Nanny State (conservative) or of Civic Duty (progressive–though 25 years ago this was a conservative position); permissive firearms laws as either a Failure of Justice (progressive) or as part of the Right to be Let Alone (conservative).

The complexity of these interactions of law and morality strikes me as extremely important to the functioning of a democratic, pluralistic society committed to the rule of law. On the one hand, the coexistence of diverse and mutually incompatible moral systems with a single (federalism aside) legal system means that inevitably some people subject to the legal system will identify some aspects of the law with a negative political valence while other people subject to the same legal system will identify those same aspects of the law with a positive political valence. For a society like ours to function, most of these people must in most circumstances be prepared to translate their moral dispute into a political-legal dispute: to recognize the legitimacy of law’s requirements and channel their moral disagreement into the democratic political process of changing the law (rather than making their moral commitment a law unto itself). The types of political valences I’ve identified may be a vehicle for doing precisely that: they provide a narrative and a framework that focuses moral commitments on political processes with legal outcomes. Our political process thus becomes the intermediating institution of our moral conflicts.

These types of moral disagreements are likely to account for the vast majority of disagreements about which political valence is implicated in a particular legal dispute. But interestingly, I think it is also likely that some disagreements over the political valence at the intersection of a legal and a moral category could arise even where people are in moral agreement. In such cases, the right/wrong axis of moral deliberation is replaced with the right/no-right axis of Hohfeldian legal architecture: we agree on what the parties morally ought to do, and the only question is how the law ought to be structured to reflect that moral agreement. Yesterday’s opinion in the Masterpiece Cakeshop case strikes me as evidence of this possibility, and I’m unsure whether that makes it comforting or concerning.

To be sure, there are deep moral disagreements fueling this litigation. The appellants and their supporters believe as a matter of sincere religious conviction that celebrating any same-sex marriage is morally wrong, and that it is at least supererogatory and perhaps morally required to refrain from contributing their services to the celebration of such a marriage. The respondents and their supporters believe that celebrating loving same-sex marriages is at least morally neutral and more likely morally right, and that refusing to do so is at least suberogatory and more likely morally wrong. (Full disclosure: I’m soundly on the side of the respondents on this moral issue.) But that moral dispute is not addressed in the opinion that issued yesterday. The opinion resolves a legal question: whether Colorado’s anti-discrimination laws had been applied in a way that was inconsistent with the appellants’ First Amendment rights. And the Court’s ruling turned on peculiarities of how the Colorado agency charged with enforcing the state’s antidiscrimination law went about its business–particularly, statements that Justice Kennedy believed evinced “hostility” to religious claims.

It is quite likely that both political progressives and political conservatives would agree that “hostility” to religious beliefs on the part of state law enforcement officials is morally wrong, or at least suberogatory. And if that–rather than the morality of celebrating same-sex marriages–is the real moral issue in the case, the parties’ deep moral disagreement moves to one side, and we must simply ask how the law ought to be fashioned to avoid the moral wrong of anti-religious hostility. Here, interestingly, the typical battle lines between moral progressives and conservatives are either unclear or absent. If you agree with Justice Kennedy’s characterization of the facts (which you might not, and which I do not), and if you believe the respondents lack the power to compel the appellants to provide their services in connection with same-sex wedding celebrations (or that they ought to lack this power), you likely believe that the case so framed is a vindication of Law and Order. But even if you think the respondents do (or should) have the power to compel the appellants to provide their services for same-sex weddings, and you agree with Justice Kennedy’s characterization of the facts, invalidating this otherwise permissible state action on grounds that it was motivated by morally suspect “hostility” might still be acceptable when framed as a vindication of Law and Order, or at worst be seen as an example of the Nanny State. And this reveals an important point that I think likely motivated the Justices in Masterpiece Cakeshop: with this resolution of the case there is a way for moral adversaries to agree on the political valence of the outcome.

There is obviously no guarantee that the litigants or their supporters will come to such agreement. To the contrary, it seems more likely that both parties’ supporters will see the “hostility” reasoning as a distraction from the outcome, which still touches on the deeper moral issues involved: the appellants’ supporters will see the outcome as a vindication of Law and Order and an invitation to Good Deeds, while respondents’ supporters will see it as a Failure of Justice and an instruction to state law enforcement agents to Just Follow Orders. But at the very least, political agreement is possible in a way that it would not be if the Court had aligned the law with one of the conflicting moral frameworks of the litigants and against the other.

It is both the virtue and the weakness of this type of solution that it solves a legal controversy without taking sides in the moral disputes that generated the controversy in the first place. In so doing, it insulates the law from contests of morality and, possibly, of politics–but those contests haven’t gone away. Conversely, such solutions might have the effect of insulating politics from law: if the courts will only decide disputes on grounds orthogonal to the moral commitments underlying political movements, such movements may cease to see the law as a useful instrument, and may start casting about for others. I’m not sure that’s a healthy result for a society trying to hold on to democracy, pluralism, and the rule of law all at the same time. On the other hand, I’m not sure there’s a better option. The belief that we can, and somehow will, forge a common legal and political culture despite our deep moral disagreements is not one I think a republic can safely abandon.

Mix, Match, and Layer: Hemel and Ouellette on Incentives and Allocation in Innovation Policy

One of the standard tropes of IP scholarship is that when it comes to knowledge goods, there is an inescapable tradeoff between incentives and access. IP gives innovators and creators some assurance that they will be able to recoup their investments, but at the cost of the deadweight losses and restriction of access that result from supracompetitive pricing. Alternative incentive regimes—such as government grants, prizes, and tax incentives—may simply recapitulate this tradeoff in other forms: providing open access to government-funded research, for example, may blunt the incentives that would otherwise spur creation of knowledge goods for which a monopolist would be able to extract significant private value through market transactions.
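To make the shape of that tradeoff concrete, here is a minimal sketch, with invented numbers and a textbook linear-demand model (my own illustration, not an example from the paper), of how monopoly pricing simultaneously generates the innovator’s reward and the deadweight loss:

```python
# Toy model of the incentives/access tradeoff: linear demand P = a - b*Q,
# constant marginal cost c. All numbers are hypothetical.
a, b, c = 100.0, 1.0, 20.0

# Competitive (open-access-like) benchmark: price equals marginal cost.
q_comp = (a - c) / b

# Monopoly: choose Q where marginal revenue (a - 2bQ) equals marginal cost.
q_mono = (a - c) / (2 * b)
p_mono = a - b * q_mono

# The monopoly profit is the innovation incentive...
profit = (p_mono - c) * q_mono

# ...and the deadweight loss is the access cost: surplus lost on the
# units priced out of the market (the familiar welfare triangle).
dwl = 0.5 * (p_mono - c) * (q_comp - q_mono)

print(f"monopoly price {p_mono}, incentive (profit) {profit}, access cost (DWL) {dwl}")
```

On these numbers, a profit of 1,600 is purchased at a deadweight loss of 800; the two quantities move together in this standard model, which is what gives the trope its force.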

In “Innovation Policy Pluralism” (forthcoming Yale L. J.), Daniel Hemel and Lisa Larrimore Ouellette challenge this orthodoxy. They argue that the incentive and access effects of particular legal regimes are not necessarily a package deal. And in the process, they open up tremendous new potential for creative thinking about how legal regimes can and should support and disseminate new knowledge.

Building on their prior work on innovation incentives, Hemel and Ouellette note that such incentives may be set ex ante or ex post, by the government or by the market. (Draft at 8) Various governance regimes—IP, prizes, government grants, and tax incentives—offer policymakers “a tunable innovation-incentive component: i.e., each offers potential innovators a payoff structure that determines the extent to which she will bear R&D costs and the rewards she will receive contingent upon different project outcomes.” (Id. at 13-14)

The authors further contend that each of these governance regimes also entails a particular allocation mechanism—“the terms under which consumers and firms can gain access to knowledge goods.” (Id. at 14) The authors’ exploration of allocation mechanisms is not as rich as their earlier exploration of incentive structures—they note that allocation is a “spectrum” at one end of which is monopoly pricing and at the other end of which is open access. But further investigation of the details of allocation mechanisms may well be left to future work; the key point of this paper is that “the choice of innovation incentive and the choice of allocation mechanism are separable.” (Id., emphasis added) While the policy regimes most familiar to us tend to bundle a particular innovation incentive with a particular allocation mechanism, setting up the familiar tradeoff between incentives and access, Hemel and Ouellette argue that “policymakers can and sometimes do decouple these elements from one another.” (Id. at 15) They suggest three possible mechanisms for such de-coupling: matching, mixing, and layering.

By “matching,” the authors primarily mean the combination of IP-like innovation incentives with open-access allocation mechanisms, which allows policymakers “to leverage the informational value of monopoly power while achieving the allocative efficiency of open access.” For example, the government could “buy out” a patentee using some measure of the patent’s net present value and then dedicate the patent to the public domain. (Id. at 15-17) Conversely, policymakers could incentivize innovation with non-IP mechanisms while channeling the resulting knowledge goods into a monopoly-seller market allocation mechanism. This, they argue, might be desirable where incentives are required for the commercialization of knowledge goods (such as drugs that require lengthy and expensive testing), as the Bayh-Dole Act was supposedly designed to provide. (Id. at 18-23) Intriguingly, they also suggest that such matching might be desirable in service to a “user-pays” distributive principle. (Id. at 18) (More on that in a moment.)

The second de-coupling strategy is “mixing.” Here, the focus is not so much on the relationships between incentives and allocation as on the ways various incentive structures, or various allocation mechanisms, can be combined with one another. The incentives portion of this section (id. at 23-32) reads largely as an extension and refinement of Hemel’s and Ouellette’s earlier paper on incentive mechanisms, following the model of Suzanne Scotchmer and covering familiar ground on the information economics of incentive regimes. Their discussion of mixing allocation mechanisms (id. at 32-36)—for example by allowing monopolization but providing consumers with subsidies—is a bit less assured, but far more novel. They note that monopoly pricing seems normatively undesirable due to deadweight loss, but offer two justifications for it. The first, building on the work of Glen Weyl and Jean Tirole, is a second-order justification that piggybacks on the information economics of the authors’ incentives analysis. To wit: they suggest that allocating access according to price gives some market test of a knowledge good’s social value, so an appropriate incentive can be provided. (Id. at 33-34) Again, however, the authors’ second argument is intriguingly distributive: they suggest that for some knowledge goods—for example “a new yachting technology” enjoyed only by the wealthy—restricting access by imposing supracompetitive costs may help enforce a normatively attractive “user-pays” principle. (Id. at 33, 35)
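Continuing the toy linear-demand model sketched above (again my own illustration, not an example from the paper), a per-unit consumer subsidy shows how a monopoly-pricing incentive can be mixed with an open-access-like allocation:

```python
# Mixing allocation mechanisms: monopoly pricing plus a per-unit consumer
# subsidy s. The subsidy shifts buyers' willingness to pay up, so the
# monopolist faces P = (a + s) - b*Q. Hypothetical numbers as before.
a, b, c = 100.0, 1.0, 20.0
s = a - c  # subsidy calibrated to restore the efficient quantity

q_subsidized = (a + s - c) / (2 * b)   # monopolist's output given the subsidy
p_firm = (a + s) - b * q_subsidized    # price the firm collects
p_pocket = p_firm - s                  # out-of-pocket price consumers pay

print(q_subsidized)  # 80.0: the competitive quantity (a - c) / b
print(p_firm)        # 100.0: the monopoly reward (incentive) survives
print(p_pocket)      # 20.0: consumers pay marginal cost, as under open access
```

The deadweight loss in the product market disappears while the price signal to the firm remains, though the subsidy must of course be financed, which is where the distributive questions re-enter.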

The final de-coupling strategy, “layering,” involves different mechanisms operating at different levels of political organization. For example, while TRIPS imposes an IP regime at the supranational level, individual TRIPS member states may opt for non-IP incentive mechanisms or open-access allocation mechanisms at the domestic level—as many states do with Bayh-Dole regimes and pharmaceutical delivery systems, respectively. (Id. at 36-39) This analysis builds on another of the authors’ previous papers, and again rests on a somewhat underspecified distributive rationale: layering regimes with IP at the supranational level may be desirable, Hemel and Ouellette argue, because it allows “signatory states [to] commit to reaching an arrangement under which knowledge-good consumers share costs with knowledge-good producers” and “establish[es] a link between the benefits to the consumer state and the size of the transfer from the consumer state to the producer state” so that “no state ever needs to pay for knowledge goods it doesn’t use.” (Id. at 38, 39) What the argument does not include is any reason to think these features of the supranational IP regime are in fact normatively desirable.

Hemel’s and Ouellette’s article concludes with some helpful illustrations from the pharmaceutical industry of how matching, mixing, and layering operate in practice. (Id. at 39-45) These examples, and the theoretical framework underlying them, offer fresh ways of looking at our knowledge governance regimes. They demonstrate that incentives and access are not simple tradeoffs baked into those regimes—that they have some independence, and that we can tune them to suit our normative ends. They also offer tantalizing hints that those ends may—perhaps should—include norms regarding distribution.

What this article lacks, but strongly invites the IP academy to begin investigating, is an articulated normative theory of distribution. Distributive norms are an uncomfortable subject for American legal academics—and especially American IP academics—who have almost uniformly been raised in the law-and-economics tradition. That tradition tends to bracket distributive questions and focus on questions of efficiency, as to which—it is thought—all reasonable minds should agree. No such agreement is available on distributive questions, and as a result we may simply lack the vocabulary, at present, to thoroughly discuss the implications of Hemel’s and Ouellette’s contributions. Their latest work suggests it may be time for our discipline to broaden its perspective on the social implications of knowledge creation.

Valuing Progress: Forthcoming 2018 from Cambridge University Press

I’m very pleased to announce that the book project I have been plodding away at for over two years is now under contract with Cambridge University Press. Its working title is Valuing Progress: A Pluralist Approach to Knowledge Governance. Keep an eye out for it in late 2018, and tell your librarian to do likewise!

Bits and pieces of Valuing Progress have appeared on this blog and elsewhere as it has developed from a half-baked essay into a monograph-sized project:

  • I presented my first musings about the relationship between normative commitments regarding distribution and the choice of a knowledge-governance regime as the opening plenary presentation at IPSC in Berkeley–these musings will now be more fully developed in Chapter 4 of the book: “Reciprocity.”
  • My exploration of our obligations to future persons, and the implications of those obligations for our present-day knowledge-governance policies, used analogous arguments in environmental policy as an early springboard. Deeper consideration of our obligations to the future led me to Derek Parfit’s Non-Identity Problem, at first through the lens of public health policy. Because knowledge governance–like environmental stewardship and global health policy–is a cooperative social phenomenon spanning timescales greater than any single human lifetime, the problem of future persons is one any theory of knowledge governance must engage. I made my first effort to do so at the 2016 Works-In-Progress in Intellectual Property (WIPIP) Conference at the University of Washington, and presented a more recent take at NYU’s 2017 Tri-State IP Workshop. My fuller treatment of the issue will appear in Chapter 7 of Valuing Progress: “Future Persons.”
  • Finally, the driving theoretical debate in IP lately has been the one between Mark Lemley, champion of consequentialism, and Rob Merges, who has lately turned from consequentialism to nonconsequentialist philosophers such as Locke and Rawls for theoretical foundations. My hot take on this debate was generative enough to justify organizing a symposium on the issue at the St. John’s Intellectual Property Law Center, where I serve as founding director. I was gratified that both Professors Lemley and Merges presented on a panel together, and that I was able to use the opportunity to more fully introduce my own thoughts on this debate. My introduction to the symposium issue of the St. John’s Law Review forms the kernel of Chapter 2 of Valuing Progress: “From Is to Ought.”

Other chapters will discuss the incommensurability of values at stake in knowledge governance, the relevance of luck and agency to our weighing of those values, the widening of our moral concern regarding the burdens and benefits of knowledge creation to encompass socially remote persons, and the role of value pluralism in shaping political institutions and ethical norms to reconcile these values when they inevitably conflict. The result, I hope, will introduce my colleagues in innovation and creativity law and policy to a wider literature in moral philosophy that bears directly on their work. In doing so, I hope to help frame the distinction between–and the appropriate domains of–empirical and normative argumentation, to point a way out of our increasingly unhelpful arguments about 18th-century philosophy, and to introduce a more nuanced set of normative concerns that engage with the messiness and imperfection of human progress.

I am extremely grateful to everyone who has helped me to bring Valuing Progress to this important stage of development, including Matt Gallaway at CUP, the organizers of conferences at which I’ve had the opportunity to present early pieces of the project (particularly Peter Menell, Pam Samuelson, Molly Shaffer Van Houweling, and Rob Merges at Berkeley; Jennifer Rothman at Loyola of Los Angeles; Jeanne Fromer and Barton Beebe at NYU; Zahr Said at the University of Washington; Irina Manta at Hofstra; and Paul Gugliuzza at Boston University). I am also grateful for the support of St. John’s Law School, my dean Mike Simons, and my colleagues who have served as associate dean for faculty scholarship as this project has been in development: Marc DeGirolami and Anita Krishnakumar. Many more friends and colleagues have offered helpful feedback on early drafts and conversation about points and arguments that will find their way into the manuscript; they can all expect warm thanks in the acknowledgments section of the finished book.

But first, I have to finish writing the thing. So, back to work.

Derek Parfit, RIP

Reports are that Oxford philosopher Derek Parfit died last night. Parfit’s philosophy is not well known or appreciated in my field of intellectual property, which is only just starting to absorb the work of John Rawls. This is something I am working to change, as the questions Parfit raised about our obligations to one another as persons–and in particular our obligations to the future–are deeply implicated in the policies intellectual property law is supposed to serve. Indeed, when I learned about Parfit’s death, I was hard at work trying to finish a draft of a book chapter that I will be presenting at NYU in less than two weeks. (The chapter is an extension of a presentation I made at WIPIP this past spring at the University of Washington.)

Parfit’s thoughts on mortality were idiosyncratic, based on his equally idiosyncratic views of the nature and identity of persons over time. I must admit I have never found his account of identity as psychological connectedness to be especially useful, but I have always found his almost Buddhist description of his state of mind upon committing to this view to be very attractive. So rather than mourn Parfit, I prefer to ruminate on his reflections on death, from page 281 of his magnificent book, Reasons and Persons:

[Screenshot of the quoted passage from Reasons and Persons, p. 281]

If Parfit is right, then my own experiences, and those of others who have learned from his work, give us all reason to view the fact of his physical death as less bad than we might otherwise–and to be grateful. I can at least do the latter.

Progress for Future Persons: WIPIP Slide Deck and Discussion Points

Following up on yesterday’s post, here are the slides from my WIPIP talk on Progress for Future Persons. Another take on the talk is available in Rebecca Tushnet’s summary of my panel’s presentations.

A couple of interesting points emerged from the Q&A:

  • One of the reasons why rights-talk may be more helpful in the environmental context than in the knowledge-creation context is that rights are often framed in terms of setting a floor: whatever people may come into existence in the future, we want to ensure that they enjoy certain minimum standards of human dignity and opportunity. This makes sense where the legal regime in question is trying to guard against depletion of resources, as in environmental law. It’s less obviously relevant in the knowledge-creation context, where our choices are largely about increasing (and then distributing) available resources–including cultural resources and the resources and capacities made possible by innovation.
  • One of the problems with valuing future states of the world is uncertainty: we aren’t sure what consequences will flow from our current choices. This is true, but it’s not the theoretical issue I’m concerned with in this chapter. In fact, if we were certain what consequences would flow from our current choices, that would in a sense make the problem of future persons worse, if only by presenting it more squarely. That is, under certainty, the only question to deal with in normatively evaluating future states of the world would be choosing among the identities of future persons and the resources they will enjoy.

Slides: Progress for Future Persons WIPIP 2016

Zika, the Pope, and the Non-Identity Problem

I’m in Seattle for the Works-In-Progress in Intellectual Property Conference (WIPIP […WIPIP good!]), where I’ll be presenting a new piece of my long-running book project, Valuing Progress. This presentation deals with issues I take up in a chapter on “Progress for Future Persons.” And almost on cue, we have international news that highlights exactly the same issues.

In light of the potential risk of serious birth defects associated with the current outbreak of the Zika virus in Latin America, Pope Francis has suggested in informal comments that Catholics might be justified in avoiding pregnancy until the danger passes–a position that some are interpreting to be in tension with Church teachings on contraception. The moral issue the Pope is responding to here is actually central to an important debate in moral philosophy over the moral status of future persons, and it is this debate that I’m leveraging in my own work to discuss whether and how we ought to take account of future persons in designing our policies regarding knowledge creation. This debate centers on a puzzle known as the Non-Identity Problem.

First: the problem in a nutshell. Famously formulated by Derek Parfit in his 1984 opus Reasons and Persons, the Non-Identity Problem exposes a contradiction among three moral intuitions many of us share: (1) that an act is only wrong if it wrongs (or perhaps harms) some person; (2) that it is not wrong to bring someone into existence so long as their life remains worth living; and (3) that a choice which forgoes the creation of one life and induces the creation of a different, happier life is morally correct. The problem Parfit pointed out is that many real-world cases require us to reject one of these three propositions. The Pope’s comments on Zika present exactly this kind of case.
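For those who find a schematic rendering helpful, here is one way to state the three intuitions so the inconsistency is visible (this gloss is mine, not Parfit’s notation):

```latex
% A schematic gloss of the three intuitions (mine, not Parfit's notation).
% Wrong(a): act a is wrong.   Wrongs(a, p): act a wrongs person p.
% create(p): the act of bringing person p into existence.
\begin{align*}
\text{(1)}\quad & \mathrm{Wrong}(a) \;\rightarrow\; \exists p\; \mathrm{Wrongs}(a, p) \\
\text{(2)}\quad & \mathrm{WorthLiving}(p) \;\rightarrow\; \neg\,\mathrm{Wrongs}(\mathrm{create}(p),\, p) \\
\text{(3)}\quad & \mathrm{Happier}(q, p) \;\rightarrow\; \text{choosing } \mathrm{create}(q) \text{ over } \mathrm{create}(p) \text{ is morally correct}
\end{align*}
```

In a non-identity case, (1) and (2) jointly entail that creating the worse-off (but worth-living) person wrongs no one and so is not wrong, while (3) implies that creating that person rather than the happier one is the morally incorrect choice; something has to give.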

The choice facing potential mothers in Zika-affected regions today is essentially the choice described in Proposition 3. They could delay their pregnancies until after the epidemic passes in the hopes of avoiding the birth defects potentially associated with Zika. Or they could become pregnant now and potentially give birth to a child who will suffer from serious life-long health problems, but still (we might posit) have a life worth living. And if we think–as the reporter who elicited Pope Francis’s news-making comments seemed to think–that delaying pregnancy in this circumstance is “the lesser of two evils,” we must reject either Proposition 1 or Proposition 2. That is, a mother’s choice to give birth to a child who suffers from some birth defect that nevertheless leaves that child’s life worth living cannot be wrong on grounds that it wrongs that child, because the alternative is for that child not to exist at all. And it is a mistake to equate that child with the different child who might be born later–and healthier–if the mother waits to conceive until after the risk posed by Zika has passed. They are, after all, different (potential future) people.

So what does this have to do with Intellectual Property? Well, quite a bit–or so I will argue. Parfit’s point about future people can be generalized to future states of the world, in at least two ways.

One way has resonances with the incommensurability critique of welfarist approaches to normative evaluation: if our policies lead to creation of certain innovations, and certain creative or cultural works, and the non-creation of others, we can certainly say that the future state of the world will be different as a result of our policies than it would have been under alternative policies. But it is hard for us to say in the abstract that this difference has a normative valence: that the world will be better or worse for the creation of one quantum of knowledge rather than another. This is particularly true for cultural works.

The second and more troubling way of generalizing the Non-Identity Problem was in fact taken up by Parfit himself (Reasons and Persons at 361):

[Screenshot of the quoted passage from Reasons and Persons, p. 361]

What happens if we try to compare these two states of the world–and future populations–created by our present policies? Assuming that we do not reject Proposition 3–that is, assuming the difference in identity between future persons determined by our present choices does not prevent us from imbuing those choices with moral content–we ought to be able to extend the same normative evaluation to entire future populations. All we need is some metric for what makes life worth living, and some way of aggregating that metric across populations. Parfit called this approach to normative evaluation of states of the world the “Impersonal Total Principle,” and he built out of it a deep challenge to consequentialist moral theory at the level of populations, encapsulated in what he called the Repugnant Conclusion (Reasons and Persons, at 388):

[Screenshot of the quoted passage stating the Repugnant Conclusion, Reasons and Persons, p. 388]
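The arithmetic that drives the Impersonal Total Principle toward this conclusion is brutally simple (the numbers below are mine, chosen only for illustration):

```latex
% Toy totals under the Impersonal Total Principle (illustrative numbers only).
\begin{align*}
\text{Population A:}\quad & 10^{10} \text{ lives} \times 100 \text{ (very high quality)} &&= 10^{12} \\
\text{Population Z:}\quad & 10^{13} \text{ lives} \times 1 \text{ (barely worth living)} &&= 10^{13}
\end{align*}
```

Because 10^13 exceeds 10^12, the principle ranks Z, an enormous population of lives barely worth living, above A; that ranking is the Repugnant Conclusion.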

If, like Parfit, we find this conclusion repugnant, it may be that we must reject Proposition 2–the reporter’s embedded assumption about the Pope’s views on contraception in the age of Zika. This, in turn, requires us to take Propositions 1 and 3–and the Non-Identity Problem in general–more seriously. It may, in fact, require us to find some basis other than aggregate welfare (or some hypothesized “Impersonal Total”) to normatively evaluate future states of the world, and determine moral obligations in choosing among those future states.

The Repugnant Conclusion is especially relevant to policy choices we make around medical innovations. Many of the choices we make when setting policies in this area have determinative effects on which people may come into existence in the future, and on what the quality of their lives will be. But we lack any coherent account of how we ought to weigh the interests of these future people, and as Parfit’s work suggests, such a coherent account may not in fact be available. For example, if we have to choose between directing resources toward curing one of two life-threatening diseases, the compounding effects of such a cure over the course of future generations will result in the non-existence of many people who could have been brought into being had we chosen differently (and conversely, the existence of many people who would not have existed but for our policy choice). If we take the non-identity problem seriously, and fear the repugnant conclusion, identifying plausible normative criteria to guide such a policy choice is a pressing concern.

I don’t think the extant alternatives are especially promising. The typical welfarist approach to the problem avoids the repugnant conclusion by essentially assuming that future persons don’t matter relative to present persons. The mechanism for this assumption is the discount rate incorporated into most social welfare functions, according to which the weight given to the well-being of future people quickly and asymptotically approaches zero in our calculation of aggregate welfare. Parfit himself noted that such discounting leads to morally implausible results–for example, it would lead us to conclude that we should generate a small amount of energy today through a cheap process whose toxic waste will kill billions of people hundreds of years from now. (Reasons and Persons, appx. F)
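To see how quickly discounting does this work, here is a minimal sketch (the rate and the casualty figure are my own hypotheticals, chosen to echo Parfit’s toxic-waste case rather than taken from his text):

```python
# How exponential discounting weighs a harm realized centuries from now.
# Discount rate and casualty figure are hypothetical.
discount_rate = 0.03        # a conventional social discount rate
years = 300                 # harm arrives three centuries out
deaths = 2_000_000_000      # lives lost to the toxic waste

weight = 1 / (1 + discount_rate) ** years
present_equivalent = deaths * weight

print(f"weight per future life: {weight:.2e}")               # ~1.4e-04
print(f"present-value 'deaths': {present_equivalent:,.0f}")  # ~282,000
```

At a conventional 3% rate, two billion deaths three centuries from now weigh less in the social welfare function than three hundred thousand deaths today; that is the implausibility Parfit flagged.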

Another alternative, adopted by many in the environmental policy community (which has been far better than the intellectual property community at incorporating the insights of the philosophical literature on future persons, even though both fields deal with social phenomena that are inherently oriented toward the relatively remote future), is to adopt an independent norm of conservation. This approach is sometimes justified with rights-talk: it posits that whatever future persons come into being, they have a right to a certain basic level of resources, health, or opportunity. When dealing with a policy area concerned with potential depletion of resources to the point where human life becomes literally impossible, such rights-talk may indeed be helpful. But when weighing trade-offs with less-than-apocalyptic effects on future states of the world, such as most of the trade-offs we face in knowledge-creation policy, rights-talk does a lot less work.

The main approach adopted by those who consider medical research policy–quantification of welfare effects according to Quality-Adjusted Life Years (QALYs)–attempts to soften the sharp edge of the repugnant conclusion by considering not only the marginal quantity of life that results from a particular policy intervention (as compared with available alternatives), but also the quality of that added life. This is, for example, the approach of Terry Fisher and Talha Syed in their forthcoming work on medical funding for populations in developing countries. But there is reason to believe that such quality-adjustment, while practically necessary, is theoretically suspect. In particular, Parfit’s student Larry Temkin has made powerful arguments that we lack a coherent basis for comparing the relative welfare effects of a mosquito bite and a course of violent torture, to say nothing of the relative effects of two serious medical conditions. If Temkin is right, then what is intended as an effort to account for the quality of future lives in policymaking begins to look more like an exercise in imposing the normative commitments of policymakers on the future state of the world.
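For readers unfamiliar with the metric, a minimal sketch of the QALY arithmetic follows (the interventions and quality weights are invented for illustration; they are not drawn from Fisher and Syed’s work):

```python
# QALYs: life-years gained, weighted by a quality factor in [0, 1].
# Two invented interventions, for illustration only.

def qalys(years_gained: float, quality_weight: float) -> float:
    """Quality-adjusted life-years for a single beneficiary."""
    return years_gained * quality_weight

# Intervention A: 10,000 patients each gain 20 years at quality 0.5.
total_a = 10_000 * qalys(20, 0.5)   # 100,000 QALYs

# Intervention B: 40,000 patients each gain 5 years at quality 0.6.
total_b = 40_000 * qalys(5, 0.6)    # 120,000 QALYs

# A pure QALY-maximizing rule prefers B, even though each of A's
# beneficiaries gains far more life.
print(total_a, total_b)
```

The force of Temkin’s argument is that the quality weights doing the decisive work in comparisons like this may reflect the evaluator’s normative commitments rather than any coherent fact about future welfare.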

I actually embrace this conclusion. My own developing view is that theory runs out very quickly when evaluating present policies based on their effect on future states of the world. If this is right–that a coherent theoretical account of our responsibility to future generations is simply not possible–then whatever normative content informs our consideration of policies with respect to their effects on future states of the world is probably going to be exogenous to normative or moral theory–that is, it will be based on normative or moral preferences (or, to be more charitable, commitments or axioms). This does not strike me as necessarily a bad thing, but it does require us to be particularly attentive to how we resolve disputes among holders of inconsistent preferences. This is especially true because the future has no way to communicate its preferences to us: as I argued in an earlier post, there is no market for human flourishing. It may be that we have to choose among future states of the world according to idiosyncratic and contestable normative commitments; if that’s true then it is especially important that the social choice institutions to which we entrust such choices reflect appropriate allocations of authority. Representing the interests of future persons in those institutions is a particularly difficult problem: it demands that we in the present undertake difficult other-regarding deliberation in formulating and expressing our own normative commitments, and that the institutions themselves facilitate and respond to the results of that deliberation. Suffice it to say, I have serious doubts that intellectual property regimes–which at their best incentivize knowledge-creation in response to the predictable demands of relatively better-resourced members of society over a relatively short time horizon–satisfy these conditions.