I arrived in Tokyo two days ago, and have already begun work at the Institute of Intellectual Property, digging into the Japan Patent Office’s (JPO’s) trademark registration data. I’ve worked with several countries’ intellectual property data systems by now, and I’m starting to think they may provide a window into the societies that produced them–though I’m still too jet-lagged to thoughtfully analyze the connection. Besides which, any analysis purporting to draw such a connection would inevitably be reductive and probably chauvinistic. So, purely by way of observation:
Going to Tokyo: I’ve Been Appointed an “Invited Researcher” by Japan’s Institute of Intellectual Property
I’m very excited to announce that the Institute of Intellectual Property in Tokyo has invited me to participate in its Invited Overseas Researcher Program this coming summer. Under an agreement with the Japan Patent Office, each year IIP invites a small number of foreign researchers to come to Tokyo to study Japan’s industrial property system. (Past researchers can be found here.) I’ll be spending several weeks in Tokyo this summer doing empirical research into Japan’s trademark registration system (as a foundation for the kind of work discussed in this post). Many thanks to Kevin Collins (who did this program last year) for flagging this opportunity, and to Barton Beebe, Graeme Dinwoodie, and Jay Kesan (also a previous participant in the IIP program) for their support.
Progress for Future Persons: WIPIP Slide Deck and Discussion Points
Following up on yesterday’s post, here are the slides from my WIPIP talk on Progress for Future Persons. Another take on the talk is available in Rebecca Tushnet’s summary of my panel’s presentations.
A couple of interesting points emerged from the Q&A:
- One of the reasons why rights-talk may be more helpful in the environmental context than in the knowledge-creation context is that rights are often framed in terms of setting a floor: whatever people may come into existence in the future, we want to ensure that they enjoy certain minimum standards of human dignity and opportunity. This makes sense where the legal regime in question is trying to guard against depletion of resources, as in environmental law. It’s less obviously relevant in the knowledge-creation context, where our choices are largely about increasing (and then distributing) available resources–including cultural resources and the resources and capacities made possible by innovation.
- One of the problems with valuing future states of the world is uncertainty: we aren’t sure what consequences will flow from our current choices. This is true, but it’s not the theoretical issue I’m concerned with in this chapter. In fact, if we were certain what consequences would flow from our current choices, that would in a sense make the problem of future persons worse, if only by presenting it more squarely. That is, under certainty, the only question to deal with in normatively evaluating future states of the world would be choosing among the identities of future persons and of the resources they will enjoy.
Zika, the Pope, and the Non-Identity Problem
I’m in Seattle for the Works-In-Progress in Intellectual Property Conference (WIPIP […WIPIP good!]), where I’ll be presenting a new piece of my long-running book project, Valuing Progress. This presentation deals with issues I take up in a chapter on “Progress for Future Persons.” And almost on cue, we have international news that highlights exactly the same issues.
In light of the potential risk of serious birth defects associated with the current outbreak of the Zika virus in Latin America, Pope Francis has suggested in informal comments that Catholics might be justified in avoiding pregnancy until the danger passes–a position that some are interpreting to be in tension with Church teachings on contraception. The moral issue the Pope is responding to here is actually central to an important debate in moral philosophy over the moral status of future persons, and it is this debate that I’m leveraging in my own work to discuss whether and how we ought to take account of future persons in designing our policies regarding knowledge creation. This debate centers on a puzzle known as the Non-Identity Problem.
First: the problem in a nutshell. Famously formulated by Derek Parfit in his 1984 opus Reasons and Persons, the Non-Identity Problem presents a contradiction among three moral intuitions many of us share: (1) that an act is only wrong if it wrongs (or perhaps harms) some person; (2) that it is not wrong to bring someone into existence so long as their life remains worth living; and (3) that a choice which has the effect of forgoing the creation of one life and inducing the creation of a different, happier life is morally correct. The problem Parfit pointed out is that many real-world cases require us to reject one of these three propositions. The Pope’s comments on Zika present exactly this kind of case.
The choice facing potential mothers in Zika-affected regions today is essentially the choice described in Proposition 3. They could delay their pregnancies until after the epidemic passes in the hopes of avoiding the birth defects potentially associated with Zika. Or they could become pregnant and potentially give birth to a child who will suffer from some serious life-long health problems, but still (we might posit) have a life worth living. And if we think–as the reporter who elicited Pope Francis’s news-making comments seemed to think–that delaying pregnancy in this circumstance is “the lesser of two evils,” we must reject either Proposition 1 or Proposition 2. That is, a mother’s choice to give birth to a child who suffers from some birth defect that nevertheless leaves that child’s life worth living cannot be wrong on grounds that it wrongs that child, because the alternative is for that child not to exist at all. And it is a mistake to equate that child with the different child who might be born later–and healthier–if the mother waits to conceive until after the risk posed by Zika has passed. They are, after all, different (potential future) people.
So what does this have to do with Intellectual Property? Well, quite a bit–or so I will argue. Parfit’s point about future people can be generalized to future states of the world, in at least two ways.
One way has resonances with the incommensurability critique of welfarist approaches to normative evaluation: if our policies lead to creation of certain innovations, and certain creative or cultural works, and the non-creation of others, we can certainly say that the future state of the world will be different as a result of our policies than it would have been under alternative policies. But it is hard for us to say in the abstract that this difference has a normative valence: that the world will be better or worse for the creation of one quantum of knowledge rather than another. This is particularly true for cultural works.
The second and more troubling way of generalizing the Non-Identity Problem was in fact taken up by Parfit himself (Reasons and Persons at 361).
What happens if we try to compare two such states of the world–and future populations–created by our present policies? Assuming that we do not reject Proposition 3–that we think the difference in identity between future persons determined by our present choices does not prevent us from imbuing that choice with moral content–we ought to be able to do the same for future populations. All we need is some metric for what makes life worth living, and some way of aggregating that metric across populations. Parfit called this approach to normative evaluation of states of the world the “Impersonal Total Principle,” and he built out of it a deep challenge to consequentialist moral theory at the level of populations, encapsulated in what he called the Repugnant Conclusion (Reasons and Persons, at 388).
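The aggregation the Impersonal Total Principle requires can be sketched in a few lines of code. This is a toy illustration only–the welfare units and population figures below are invented, and the surrounding discussion’s whole point is that no real metric of this kind is on offer:

```python
# Toy illustration of the Impersonal Total Principle and the Repugnant
# Conclusion. Welfare units and population sizes are invented; nothing
# here reflects a real measure of what makes a life worth living.

def impersonal_total(population: int, avg_welfare: float) -> float:
    """Aggregate welfare by simple summation across a population."""
    return population * avg_welfare

# World A: 10 billion people with very high quality of life.
world_a = impersonal_total(10_000_000_000, 100.0)

# World Z: a vastly larger population whose lives are barely worth living.
world_z = impersonal_total(2_000_000_000_000, 1.0)

# Summation ranks Z above A -- Parfit's Repugnant Conclusion.
print(world_a)            # 1e12 welfare units
print(world_z)            # 2e12 welfare units
print(world_z > world_a)  # True
```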
If, like Parfit, we find this conclusion repugnant, it may be that we must reject Proposition 2–the reporter’s embedded assumption about the Pope’s views on contraception in the age of Zika. This, in turn, requires us to take Propositions 1 and 3–and the Non-Identity Problem in general–more seriously. It may, in fact, require us to find some basis other than aggregate welfare (or some hypothesized “Impersonal Total”) to normatively evaluate future states of the world, and determine moral obligations in choosing among those future states.
The Repugnant Conclusion is especially relevant to policy choices we make around medical innovations. Many of the choices we make when setting policies in this area have determinative effects on what people may come into existence in the future, and what the quality of their lives will be. But we lack any coherent account of how we ought to weigh the interests of these future people, and as Parfit’s work suggests, such a coherent account may not in fact be available. For example, if we have to choose between directing resources toward curing one of two life-threatening diseases, the compounding effects of such a cure over the course of future generations will result in the non-existence of many people who could have been brought into being had we chosen differently (and conversely, the existence of many people who would not have existed but for our policy choice). If we take the non-identity problem seriously, and fear the repugnant conclusion, identifying plausible normative criteria for guiding such a policy choice is a pressing concern.
I don’t think the extant alternatives are especially promising. The typical welfarist approach to the problem avoids the repugnant conclusion by essentially assuming that future persons don’t matter relative to present persons. The mechanism for this assumption is the discount rate incorporated into most social welfare functions, according to which the well-being of future people quickly and asymptotically approaches zero in our calculation of aggregate welfare. Parfit himself noted that such discounting leads to morally implausible results–for example, it would lead us to conclude we should generate a small amount of energy today through a cheap process that generates toxic waste that will kill billions of people hundreds of years from now. (Reasons and Persons, appx. F)
Another alternative, adopted by many in the environmental policy community (which has been far better at incorporating the insights of the philosophical literature on future persons than the intellectual property community, even though we both deal with social phenomena that are inherently oriented toward the relatively remote future), is that we ought to adopt an independent norm of conservation. This approach is sometimes justified with rights-talk: it posits that whatever future persons come into being, they have a right to a certain basic level of resources, health, or opportunity. When dealing with a policy area that deals with potential depletion of resources to the point where human life becomes literally impossible, such rights-talk may indeed be helpful. But when weighing trade-offs with less-than-apocalyptic effects on future states of the world, such as most of the trade-offs we face in knowledge-creation policy, rights-talk does a lot less work.
The main approach adopted by those who consider medical research policy–quantification of welfare effects according to Quality-Adjusted-Life-Years (QALYs)–attempts to soften the sharp edge of the repugnant conclusion by considering not only the marginal quantity of life that results from a particular policy intervention (as compared with available alternatives), but also the quality of that added life. This is, for example, the approach of Terry Fisher and Talha Syed in their forthcoming work on medical funding for populations in developing countries. But there is reason to believe that such quality-adjustment, while practically necessary, is theoretically suspect. In particular, Parfit’s student Larry Temkin has made powerful arguments that we lack a coherent basis to compare the relative effects on welfare of a mosquito bite and a course of violent torture, to say nothing of the relative effects of two serious medical conditions. If Temkin is right, then what is intended as an effort to account for quality of future lives in policymaking begins to look more like an exercise in imposing the normative commitments of policymakers on the future state of the world.
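The QALY arithmetic itself is simple–which is part of the problem. In this sketch the quality weights are invented, and assigning them at all presupposes exactly the interpersonal comparability that Temkin’s argument calls into doubt:

```python
# Minimal sketch of Quality-Adjusted Life Year (QALY) arithmetic. The
# quality weights below are invented for illustration; choosing them is
# where the policymaker's normative commitments enter.

def qalys(years: float, quality_weight: float) -> float:
    """Life-years scaled by a 0-to-1 quality weight (1.0 = full health)."""
    return years * quality_weight

# Comparing two hypothetical interventions for one patient:
# A extends life 10 years at a quality weight of 0.9;
# B extends life 20 years at a quality weight of 0.4.
print(qalys(10, 0.9))  # 9.0 QALYs
print(qalys(20, 0.4))  # 8.0 QALYs
# Intervention A "wins" -- but only because of the weights we chose.
```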
I actually embrace this conclusion. My own developing view is that theory runs out very quickly when evaluating present policies based on their effect on future states of the world. If this is right–that a coherent theoretical account of our responsibility to future generations is simply not possible–then whatever normative content informs our consideration of policies with respect to their effects on future states of the world is probably going to be exogenous to normative or moral theory–that is, it will be based on normative or moral preferences (or, to be more charitable, commitments or axioms). This does not strike me as necessarily a bad thing, but it does require us to be particularly attentive to how we resolve disputes among holders of inconsistent preferences. This is especially true because the future has no way to communicate its preferences to us: as I argued in an earlier post, there is no market for human flourishing. It may be that we have to choose among future states of the world according to idiosyncratic and contestable normative commitments; if that’s true then it is especially important that the social choice institutions to which we entrust such choices reflect appropriate allocations of authority. Representing the interests of future persons in those institutions is a particularly difficult problem: it demands that we in the present undertake difficult other-regarding deliberation in formulating and expressing our own normative commitments, and that the institutions themselves facilitate and respond to the results of that deliberation. Suffice it to say, I have serious doubts that intellectual property regimes–which at their best incentivize knowledge-creation in response to the predictable demands of relatively better-resourced members of society over a relatively short time horizon–satisfy these conditions.
Trademarks and Economic Activity
There’s an increasing amount of empirical data available on trademark registration systems. The USPTO released a comprehensive dataset three years ago, and there are less complete and less user-friendly data sources available from other national and regional offices–though some offices make it a bit tricky to get their data, and others restrict access or charge for their data products. As with most trends in legal scholarship, the empirical turn has come late to the study of trademarks. Part of this is because the scholarly community is small, and not as quantitatively minded as other disciplines. Part of it is because it’s not clear what questions regarding trademarks we might look to empirical evidence to answer. I’ve published a study of the impact of the federal antidilution statute on federal registration (spoiler alert: it adds to the cost of registration but doesn’t seem to affect outcomes), but that’s a pretty narrow issue. What else could we learn from this kind of data?
One possibility is to examine the link between trademarks and economic activity. People who make a living from commerce involving intellectual property like to emphasize how important IP protection is to the economy, though the numbers they throw around are a bit dubious. But if we were serious about it, could we rigorously draw some link between trademarks–which are the most common and ubiquitous form of intellectual property in the economy–and economic performance?
I’ve been thinking about how we might do so, so I brought my modest quantitative analytical skills to bear on the best data currently available: the USPTO’s dataset. I thought I’d just look to see whether there is any relationship between trademark activity (in this case, applications for federal trademark registrations) and economic activity (in this case, real GDP). And it seems that there is one…kind of.
The GDP data from the St. Louis Fed is reported quarterly and seasonally adjusted; I compiled the trademark application data on a quarterly basis and calculated a 4-quarter moving average as a seasonal smoothing kludge. We see that trademark application activity is strongly seasonal, and that it tends to roughly track GDP trends–perhaps with a bit of a lag. The lag is interesting if more rigorous analysis bears it out: it seems to suggest that trademarks, rather than driving economic activity, are merely a lagging indicator of that activity.
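For those curious, the smoothing kludge is nearly a one-liner in pandas. The quarterly application counts below are invented placeholders; in the real exercise they would come from the USPTO dataset:

```python
# Sketch of the 4-quarter moving-average seasonal smoothing described
# above. The application counts are invented placeholder numbers with a
# mild seasonal pattern, not real USPTO figures.
import pandas as pd

# Quarterly trademark application counts (hypothetical).
apps = pd.Series(
    [90, 110, 95, 105, 100, 120, 105, 115],
    index=pd.period_range("2000Q1", periods=8, freq="Q"),
)

# A 4-quarter trailing moving average smooths out within-year seasonality;
# the first three quarters are NaN because the window is incomplete.
smoothed = apps.rolling(window=4).mean()
print(smoothed)  # e.g., 2000Q4 = (90 + 110 + 95 + 105) / 4 = 100.0
```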
The big exception is the late 1990s to the early 2000s. As Barton Beebe documented in his first look at USPTO data, this spike in trademark activity seems to correspond with the dot-com boom and bust. (Registration rates also dropped during this period–lots of these applications were low-quality or quickly abandoned.) It’s interesting to see that this huge discontinuity in trademark application activity doesn’t correlate with anywhere near as big an impact in the overall economy. We could speculate about why that might be–it probably has something to do with the “gold-rush” scramble to occupy a new, untapped field of commerce, and I suspect it also reflects (poorly) on the value of the early web to the overall economy.
This is an example of the kind of analysis these new data sources might be useful for–and it’s not that tricky to carry out. Building this chart was a couple hours’ work, and I’m no expert. A more rigorous econometric model is beyond my expertise, but I’m sure it could be done (I’m less sure what we could learn from it). What other kinds of questions might we look to trademark data to answer?
Preview of “Legal Sets” at IPSC 2015
I’m in Chicago for the 15th Annual Intellectual Property Scholars Conference, where I’ll be presenting for the first time one of the projects I’ve been working on this summer. The project, whose working title is “Legal Sets,” is a work of legal theory that uses IP doctrine as its primary exemplary material. You can get a sneak peek at my slides for the presentation here.
Faith-Based vs. Value-Based IP: On the Lemley-Merges Debate
The splash in the IP academy today is Mark Lemley’s posting last night of a somewhat polemical essay he has forthcoming in the UCLA Law Review. In it he criticizes a number of IP scholars–principally his former Berkeley colleague Rob Merges–for turning to moral-rights-based arguments in favor of strong intellectual property protections as mounting empirical evidence fails to present a compelling case for their preferred policies. The “faith-based” epithet is intentionally provocative, but the money graf comes at the end (footnotes omitted):
But if you are one of the faithful, I probably haven’t persuaded you. The psychology literature suggests that while people are willing to be corrected about factual inaccuracies—things they think are true but are not—they are essentially impervious to correction once the thing that turns out to be untrue crosses the line into a belief. And that leads me to the last—and, to me, most worrisome—problem with faith-based IP. If you are a true believer, we have nothing to say to each other. I don’t mean by that that I am giving up on you, deciding that you’re not worth my time to persuade. Rather, I mean that we simply cannot speak the same language. There is no principled way to compare one person’s claim to lost freedom to another’s claim to a right to ownership. Nor is there a way to weigh your claim of moral entitlement against evidence that the exercise of that right actually reduces creativity by others. Faith-based IP is at its base a religion and not a science because it does not admit the prospect of being proven wrong. The inevitable result of a move toward faith-based IP is that we will make policy based on our instincts without being able to engage in a meaningful conversation about the wisdom of that policy.
The accusation Mark is making here is of epistemic closure: that his antagonists are unwilling to entertain the possibility that they are mistaken, or to candidly weigh evidence that would tend to prove such a mistake. For an academic, them’s fightin’ words. And I think they’re unfortunate. I think the problem here is neither epistemic nor methodological; it’s political (in a non-pejorative sense).
I suspect we are dealing with two academic camps that simply value different things in different measure, as humans are wont to do. This disagreement might lead to the conclusion that the two sides “have nothing to say to each other.” But we might also conclude that the apparent absence of a shared language between moral theorists and consequentialists is precisely the type of problem academics in an applied discipline like law are particularly well-suited to solve, by looking beneath the language each camp uses to identify the ideas and disagreements underneath, and frame the issues in a language that both sides can engage on the merits. Indeed, that’s precisely what I’ve been working on lately.
For example: Mark takes a very strong position in his essay against “rights-talk” in IP: the idea that the market interventions IP law makes in favor of authors and inventors (and those in privity with them) are “some kind of prepolitical right to which inventors and creators are entitled” “regardless of the social cost that granting that right imposes” (pp. 10, 15). But I think very few if any IP scholars–even Rob–are willing to take such an extreme theoretical position in favor of strong IP rights. Rob’s foray into rights-based justification for IP rights is hardly the stuff of doctrinaire deontological theory; it is suffused with concerns over consequences that a strict Kantian might shrug off as either the rational implication of self-consistent moral duties or the sphere of practical reason rather than moral theory (take Rob’s entire chapter on what he calls “The Proportionality Principle,” for example).
I think it is clear that Rob thinks very highly of authors and inventors, and is willing to privilege them over users and consumers in many contexts where Mark would prefer to allow competition to do its consumer-friendly work at the expense of the professional creative class. But it isn’t clear that in choosing the language of Kant and Rawls to justify his preference, Rob is shutting himself off to evidence that would persuade him that any of Mark’s particular policy preferences are well-founded, any more than I think Mark would dismiss out of hand the idea that some of the benefits of creative activity might be overlooked in particular forms of cost-benefit analysis. Instead, I think these two scholars are simply disagreeing over the appropriate domain of empirical inquiry–chiefly with respect to the measurement of value.
The line between the empirical and the normative is not so clear here. Take a seemingly simple example: How much is fan fiction worth to society? How should we even go about trying to answer this question? Is revealed preference through market transactions a reliable empirical measure in this context? Is there some way to measure imputed foregone income of fanfic authors, and if so would that be a good measure? Might there be some legitimate value in the freedom of fans to express themselves through transformative works that can’t be measured economically or even empirically? And on the other side of the ledger, how much value is generated by giving the commercial author the right to control the production of such works? Again, are the commercial author’s own preferences revealed through market transactions a useful measure? Is the psychological reward authors feel when exercising control over their own creations something that we can empirically measure? If so, should we count it? And if so, how do we weigh that psychological reward against the psychological reward experienced by fanfic authors? And finally, what value do those of us who neither create commercial works nor use them as a basis for our own expression derive from a world in which fanfiction is liberally allowed? Or strictly controlled? Again, how would we measure that value? How would we compare it to the other forms of value already discussed?
We can ask similar questions about most areas of innovation and creativity–from pharmaceutical patents to federal research grants to streaming audio royalties. Mark would have us obtain the relevant data and follow where it leads, which sounds good in the abstract. But the first step in answering any question in these areas of policy empirically is figuring out whether there is even anything useful to measure, or whether the relevant questions are too deeply enmeshed in questions of subjective value that empirical measurement cannot meaningfully capture. These are, in the language of moral philosophy, the problems of interpersonal comparison and aggregation. And the debate over them is an old one, in law, in philosophy, and even in literature; it goes back at least to Dworkin and Posner; to Parfit and Scanlon (and, yes, Rawls); to Dostoyevsky and to Captain Spock. And ultimately, as I noted above and will be arguing in a book I’m currently working on, these problems are not theoretical or empirical; they are political (in the non-pejorative sense). That is, when two parties fundamentally disagree over questions of value that cannot be resolved empirically, the only tools we can feasibly use to resolve the disagreement are political ones. (Of course, this observation thrusts us into the knot of dilemmas handed down to us from Arrow, Sen, and the rest of the social choice theorists, but that is a separate issue for another day.)
There are good reasons why a thinker in the area of IP might dispute the relevance of empirical evidence on many of the questions we deal with–though it would be the poor scholar indeed who shuts out relevant empirical data entirely on those issues to which it is relevant. I don’t think Rob has done that; I think he and Mark simply disagree as to which of the questions that we must confront in setting innovation policy can be helpfully answered by reference to empirical evidence. We may not ultimately agree on that particular problem–on whether a particular predicate for a policy decision is properly considered empirical or normative. Indeed, I don’t agree entirely with Rob or with Mark on that problem, let alone on the policy positions that they might derive from their particular approaches to the task. But I am quite certain that this is something we can indeed talk about in a common language if we turn the temperature down a bit.
Don’t Hate the Player, Hate the Game
Via engadget, here’s an IP-related story that brings me back to middle school. A redditor who (currently) goes under the handle XsimonbelmontX has clearly spent a lot of time building and testing a board game based on one of my favorite 8-bit side-scrolling platformers from the 1980s, Castlevania:
It’s an impressive bit of work. The game involves co-operative play, and in addition to the progressive board game format it seems to incorporate elements of card-based role-playing games like Magic: The Gathering, along with die-based role-playing elements reminiscent of Dungeons & Dragons. Responses on Reddit are quite positive; the general tenor is captured by the current leading comment, from redditor “Canadianized,” which reads: “I would buy the fuck out of this.”
But of course, Canadianized can’t buy XsimonbelmontX’s game, because it’s not available for sale–there’s just the one prototype. And some redditors, predictably, blame IP (see here, and here). But the IP story appears to be somewhat complicated.
There Is No Market for Human Flourishing
A blog post by Santa Clara economist and law professor David Friedman came across one of my social media feeds today, and it resonated with a lot of issues I’ve been thinking about lately. Friedman is taking issue with a particular corner of the climate change debate: the uncertainty over the sign and magnitude of future economic effects of climate change. This uncertainty, he argues, counsels caution: we know it would be very costly to intervene right now; but waiting will allow for more precise assessment of the potential costs of climate change, a longer time frame for spreading those costs over the human population, and–importantly–potential to distribute those costs in a more appropriate way. Here are the key passages for my purposes:
Diking against a meter of sea level change could be a serious problem for Bangladesh if it happened tomorrow. If Bangladesh follows the pattern of China, where GDP per capita has increased twenty fold since Mao’s death, by the time it happens they can pay the cost out of small change.
…
William Nordhaus, an economist who has specialized in climate issues … reported his estimate of how much greater the cost of climate change would be if we waited fifty years to deal with it instead of taking the optimal action at once. The number was $4.1 trillion. He took that as an argument for action, writing that “Wars have been started over smaller sums.” As I pointed out in a post here responding to Nordhaus, the cost is spread over the entire world and a long period of time. Annualized, it comes to something under .1% of world GNP. (Emphasis and link to Nordhaus article added–JNS)
I have no specialized insight into the climate change debate, I have no particular beef against Prof. Friedman or his work, and I carry no particular brief for Prof. Nordhaus or his work. But I think this particular post is a very good encapsulation of how profoundly unhelpful the welfare economics approach is in analyzing problems of social cooperation over long time scales and across disparate populations, and I thought I’d take the opportunity to try to explain why.
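For what it’s worth, the annualization move Friedman describes is easy to reproduce in rough form. The 50-year window and the roughly $80 trillion world GDP figure below are my own assumptions for illustration, not anything from Nordhaus’s model:

```python
# Back-of-the-envelope version of the annualization described above.
# The 50-year spreading window and the ~$80 trillion annual world GDP
# figure are assumptions for illustration; Nordhaus's own model is far
# more involved.
total_cost = 4.1e12   # Nordhaus's $4.1 trillion figure
years = 50            # assumed spreading period
world_gdp = 80e12     # assumed annual world GDP, in dollars

annual_share = total_cost / years / world_gdp
print(f"{annual_share:.4%}")  # about 0.10% of world GDP per year
```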
“Dilution at the PTO”–Slides from Presentation to the ABA-IP Section Spring Meeting
Last week I presented the latest findings from my work-in-progress, “Dilution at the Patent and Trademark Office”, at the inaugural Scholarship Symposium at the ABA-IP Section’s Spring Meeting. This includes the first presentation of my findings on famous marks. Slides from the presentation can be found here.