I’m in Chicago at Northwestern Law today to present an early-stage empirical project at the Roundtable on Empirical Methods in Intellectual Property (#REMIP). My project will use Canada’s pending change to its trademark registration system as a natural experiment to investigate the role national IP offices play in reducing “clutter”–registrations for marks that go unused, raising clearance costs and depriving competitors and the public of potentially valuable source identifiers.
One of the standard tropes of IP scholarship is that when it comes to knowledge goods, there is an inescapable tradeoff between incentives and access. IP gives innovators and creators some assurance that they will be able to recoup their investments, but at the cost of the deadweight losses and restriction of access that result from supracompetitive pricing. Alternative incentive regimes—such as government grants, prizes, and tax incentives—may simply recapitulate this tradeoff in other forms: providing open access to government-funded research, for example, may blunt the incentives that would otherwise spur creation of knowledge goods for which a monopolist would be able to extract significant private value through market transactions.
In “Innovation Policy Pluralism” (forthcoming Yale L. J.), Daniel Hemel and Lisa Larrimore Ouellette challenge this orthodoxy. They argue that the incentive and access effects of particular legal regimes are not necessarily a package deal. And in the process, they open up tremendous new potential for creative thinking about how legal regimes can and should support and disseminate new knowledge.
Building on their prior work on innovation incentives, Hemel and Ouellette note that such incentives may be set ex ante or ex post, by the government or by the market. (Draft at 8) Various governance regimes—IP, prizes, government grants, and tax incentives—offer policymakers “a tunable innovation-incentive component: i.e., each offers potential innovators a payoff structure that determines the extent to which she will bear R&D costs and the rewards she will receive contingent upon different project outcomes.” (Id. at 13-14)
The authors further contend that each of these governance regimes also entails a particular allocation mechanism—“the terms under which consumers and firms can gain access to knowledge goods.” (Id. at 14) The authors’ exploration of allocation mechanisms is not as rich as their earlier exploration of incentive structures—they note that allocation is a “spectrum” at one end of which is monopoly pricing and at the other end of which is open access. But further investigation of the details of allocation mechanisms may well be left to future work; the key point of this paper is that “the choice of innovation incentive and the choice of allocation mechanism are separable.” (Id., emphasis added) While the policy regimes most familiar to us tend to bundle a particular innovation incentive with a particular allocation mechanism, setting up the familiar tradeoff between incentives and access, Hemel and Ouellette argue that “policymakers can and sometimes do decouple these elements from one another.” (Id. at 15) They suggest three possible mechanisms for such de-coupling: matching, mixing, and layering.
By “matching,” the authors are primarily referring to the combination of IP-like innovation incentives with open-access allocation mechanisms, which allows policymakers “to leverage the informational value of monopoly power while achieving the allocative efficiency of open access.” For example, the government could “buy out” a patentee using some measure of the patent’s net present value and then dedicate the patent to the public domain. (Id. at 15-17) Conversely, policymakers could incentivize innovation with non-IP mechanisms while channeling the resulting knowledge goods into a monopoly-seller market allocation mechanism. This, they argue, might be desirable where incentives are required for the commercialization of knowledge goods (such as drugs that require lengthy and expensive testing), as the Bayh-Dole Act was supposedly designed to provide. (Id. at 18-23) Intriguingly, they also suggest that such matching might be desirable in service to a “user-pays” distributive principle. (Id. at 18) (More on that in a moment.)
The second de-coupling strategy is “mixing.” Here, the focus is not so much on the relationships between incentives and allocation, but on the ways various incentive structures can be combined, or various allocation mechanisms can be combined. The incentives portion of this section (id. at 23-32) reads largely as an extension and refinement of Hemel’s and Ouellette’s earlier paper on incentive mechanisms, following the model of Suzanne Scotchmer and covering familiar ground on the information economics of incentive regimes. Their discussion of mixing allocation mechanisms (id. at 32-36)—for example by allowing monopolization but providing consumers with subsidies—is a bit less assured, but far more novel. They note that monopoly pricing seems normatively undesirable due to deadweight loss, but offer two justifications for it. The first, building on the work of Glen Weyl and Jean Tirole, is a second-order justification that piggybacks on the information economics of the authors’ incentives analysis. To wit: they suggest that allocating access according to price gives some market test of a knowledge good’s social value, so an appropriate incentive can be provided. (Id. at 33-34) Again, however, the authors’ second argument is intriguingly distributive: they suggest that for some knowledge goods—for example “a new yachting technology” enjoyed only by the wealthy—restricting access by imposing supracompetitive costs may help enforce a normatively attractive “user-pays” principle. (Id. at 33, 35)
The final de-coupling strategy, “layering,” involves different mechanisms operating at different levels of political organization. For example, while TRIPS imposes an IP regime at the supranational level, individual TRIPS member states may opt for non-IP incentive mechanisms or open access allocation mechanisms at the domestic level—as many states do with Bayh-Dole regimes and pharmaceutical delivery systems, respectively. (Id. at 36-39) This analysis builds on another of the authors’ previous papers, and again rests on a somewhat underspecified distributive rationale: layering regimes with IP at the supranational level may be desirable, Hemel and Ouellette argue, because it allows “signatory states [to] commit to reaching an arrangement under which knowledge-good consumers share costs with knowledge-good producers” and “establish[es] a link between the benefits to the consumer state and the size of the transfer from the consumer state to the producer state” so that “no state ever needs to pay for knowledge goods it doesn’t use.” (Id. at 38, 39) What the argument does not include is any reason to think these features of the supranational IP regime are in fact normatively desirable.
Hemel’s and Ouellette’s article concludes with some helpful illustrations from the pharmaceutical industry of how matching, mixing, and layering operate in practice. (Id. at 39-45) These examples, and the theoretical framework underlying them, offer fresh ways of looking at our knowledge governance regimes. They demonstrate that incentives and access are not simple tradeoffs baked into those regimes—that they have some independence, and that we can tune them to suit our normative ends. They also offer tantalizing hints that those ends may—perhaps should—include norms regarding distribution.
What this article lacks, but strongly invites the IP academy to begin investigating, is an articulated normative theory of distribution. Distributive norms are an uncomfortable topic for American legal academics—and especially American IP academics—who have almost uniformly been raised in the law-and-economics tradition. That tradition tends to bracket distributive questions and focus on questions of efficiency as to which—it is thought—all reasonable minds should agree. Such agreement is admittedly absent from distributive questions, and as a result we may simply lack the vocabulary, at present, to thoroughly discuss the implications of Hemel’s and Ouellette’s contributions. Their latest work suggests it may be time for our discipline to broaden its perspective on the social implications of knowledge creation.
I’m very pleased to announce that the book project I have been plodding away at for over two years is now under contract with Cambridge University Press. Its working title is Valuing Progress: A Pluralist Approach to Knowledge Governance. Keep an eye out for it in late 2018, and tell your librarian to do likewise!
Bits and pieces of Valuing Progress have appeared on this blog and elsewhere as it has developed from a half-baked essay into a monograph-sized project:
- I presented my first musings about the relationship between normative commitments regarding distribution and the choice of a knowledge-governance regime as the opening plenary presentation at IPSC in Berkeley–these musings will now be more fully developed in Chapter 4 of the book: “Reciprocity.”
- My exploration of our obligations to future persons, and the implication of those obligations for our present-day knowledge-governance policies, used analogous arguments in environmental policy as an early springboard. Deeper consideration of our obligations to the future led me to Derek Parfit’s Non-Identity Problem, at first through the lens of public health policy. Because knowledge governance–like environmental stewardship and global health policy–is a cooperative social phenomenon spanning timescales greater than any single human lifetime, the problem of future persons is one any theory of knowledge governance must engage. I made my first effort to do so at the 2015 Works-In-Progress in Intellectual Property (WIPIP) Conference at the University of Washington, and presented a more recent take at NYU’s 2017 Tri-State IP Workshop. My fuller treatment of the issue will appear in Chapter 7 of Valuing Progress: “Future Persons.”
- Finally, the driving theoretical debate in IP lately has been the one between Mark Lemley, champion of consequentialism, and Rob Merges, who has lately turned from consequentialism to nonconsequentialist philosophers such as Locke and Rawls for theoretical foundations. My hot take on this debate was generative enough to justify organizing a symposium on the issue at the St. John’s Intellectual Property Law Center, where I serve as founding director. I was gratified that both Professors Lemley and Merges presented on a panel together, and that I was able to use the opportunity to more fully introduce my own thoughts on this debate. My introduction to the symposium issue of the St. John’s Law Review forms the kernel of Chapter 2 of Valuing Progress: “From Is to Ought.”
Other chapters will discuss the incommensurability of values at stake in knowledge governance, the relevance of luck and agency to our weighing of those values, the widening of our moral concern regarding the burdens and benefits of knowledge creation to encompass socially remote persons, and the role of value pluralism in shaping political institutions and ethical norms to reconcile these values when they inevitably conflict. The result, I hope, will introduce my colleagues in innovation and creativity law and policy to a wider literature in moral philosophy that bears directly on their work. In doing so, I hope to help frame the distinction between–and the appropriate domains of–empirical and normative argumentation, to point a way out of our increasingly unhelpful arguments about 18th-century philosophy, and to introduce a more nuanced set of normative concerns that engage with the messiness and imperfection of human progress.
I am extremely grateful to everyone who has helped me to bring Valuing Progress to this important stage of development, including Matt Gallaway at CUP, the organizers of conferences at which I’ve had the opportunity to present early pieces of the project (particularly Peter Menell, Pam Samuelson, Molly Shaffer Van Houweling, and Rob Merges at Berkeley; Jennifer Rothman at Loyola of Los Angeles; Jeanne Fromer and Barton Beebe at NYU; Zahr Said at the University of Washington; Irina Manta at Hofstra; and Paul Gugliuzza at Boston University). I am also grateful for the support of St. John’s Law School, my dean Mike Simons, and my colleagues who have served as associate dean for faculty scholarship as this project has been in development: Marc DeGirolami and Anita Krishnakumar. Many more friends and colleagues have offered helpful feedback on early drafts and conversation about points and arguments that will find their way into the manuscript; they can all expect warm thanks in the acknowledgments section of the finished book.
But first, I have to finish writing the thing. So, back to work.
The Institute of Intellectual Property has graciously allowed me to share the slide deck from my summer research project on Japan’s trademark registration system. The slide deck includes the text of the presentation in the presenter notes, and you can download it here.
The photo leading this post was taken during my presentation at IIP in Tokyo. It shows me with my favorite visual aid: a bottle of (excellent) mirin bearing one of the contenders for Japan’s oldest registered trademark, Kokonoe Sakura.
A little over a year ago, I was noodling over a persistent doctrinal puzzle in trademark law, and I started trying to formulate a systematic approach to the problem. The system quickly became bigger than the problem it was trying to solve, and because of the luxuries of tenure, I’ve been able to spend much of the past year chasing it down a very deep rabbit hole. Now I’m back, and I’ve brought with me what I hope is a useful way of thinking about law as a general matter. I call it “Legal Sets,” and it’s my first contribution to general legal theory. Here’s the abstract:
In this Article I propose that legal reasoning and analysis are best understood as being primarily concerned, not with rules or propositions, but with sets. The distinction is important to the work of lawyers, judges, and legal scholars, but is not currently well understood. This Article develops a formal model of the role of sets in a common-law system defined by a recursive relationship between cases and rules. In doing so it demonstrates how conceiving of legal doctrines as a universe of discourse comprising (sometimes nested or overlapping) sets of cases can clarify the logical structure of many so-called “hard cases,” and help organize the available options for resolving them according to their form. This set-theoretic model can also help to cut through ambiguities and clarify debates in other areas of legal theory—such as in the distinction between rules and standards, in the study of interpretation, and in the theory of precedent. Finally, it suggests that recurring substantive concerns in legal theory—particularly the problem of discretion—are actually emergent structural properties of a system that is composed of “sets all the way down.”
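The set-theoretic framing in the abstract can be made concrete in a few lines of code. What follows is purely a toy illustration of my own devising, not anything drawn from the Article itself: the case names and doctrines are invented, and the point is only that nesting, overlap, and residual categories fall out naturally from ordinary set operations.

```python
# Toy model: doctrines as (sometimes nested or overlapping) sets of cases
# within a universe of discourse. All names here are hypothetical.
universe = {"case_a", "case_b", "case_c", "case_d", "case_e"}

# Two doctrinal sets that overlap: a case may fall under both rules.
doctrine_x = {"case_a", "case_b", "case_c"}
doctrine_y = {"case_c", "case_d"}

# A nested set: an exception carved out of doctrine_x.
exception_to_x = {"case_b"}
assert exception_to_x <= doctrine_x  # nesting is just the subset relation

# In this toy model, "hard cases" are those in the overlap of two
# doctrines, where either rule could plausibly control.
hard_cases = doctrine_x & doctrine_y
print(hard_cases)  # {'case_c'}

# Cases governed by neither doctrine fall into the residual set.
residual = universe - (doctrine_x | doctrine_y)
print(sorted(residual))  # ['case_e']
```

The draft develops this far more carefully, but even the toy version shows why structural questions (overlap, nesting, residue) can be separated from the substance of any particular rule.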
And the link: http://ssrn.com/abstract=2830918
And a taste of what’s inside:
I’ll be grateful for comments, suggestions, and critiques from anyone with the patience to read the draft.
Today was the deadline for me to submit a draft presentation on the research I’ve been doing in Japan for the past six weeks. The deadline pressure explains why I haven’t posted here in a while. The good news is that I was able to browbeat my new (and still growing) dataset into sufficient shape to generate some interesting insights, which I will share with my generous sponsors here at the Institute for Intellectual Property next week, before heading home to New York.
I am not at liberty to share my slide deck right now, but I can’t help but post on a couple of interesting tidbits from my research. The first is a follow-up on my earlier post about the oldest Japanese trademark. I had been persuaded that the two-character mark 重九 was in fact a form of the three-character mark (大重九), a brand of Chinese cigarette. Turns out I was wrong. It is, in fact, the brand of a centuries-old brewer of mirin–a sweet rice wine used in cooking. (The cigarette brand is also registered in Japan, as of 2007–which says something about the likelihood-of-confusion standard in Japanese trademark law). And as I found out, there’s some question as to whether this mark (which, read right to left, reads “Kokonoe”) really is the oldest Japanese trademark. There’s competition from the hair-products company, Yanagiya, which traces its lineage back 400 years to the court physician of the first Tokugawa Shogun; and also from a sake brewer from Kobe who sells under the “Jukai” label. Which is the oldest depends on how you count: by registration number, by registration date, or by application date. Anyway, all of them would have taken a backseat to that historic American brand, Singer–but the company allowed its oldest Japanese trademark registration to lapse six years ago.
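The “depends on how you count” point is easy to make concrete. The sketch below uses invented registration numbers and dates (placeholders only, not the actual records for these marks) to show how the answer flips with the sort key:

```python
# Hypothetical records for illustration: the marks are real, but these
# registration numbers and dates are invented placeholders.
marks = [
    {"mark": "Kokonoe",  "reg_no": 102, "reg_date": "1884-11-01", "app_date": "1884-10-05"},
    {"mark": "Yanagiya", "reg_no": 98,  "reg_date": "1884-12-10", "app_date": "1884-10-01"},
    {"mark": "Jukai",    "reg_no": 105, "reg_date": "1884-10-20", "app_date": "1884-10-10"},
]

# "Oldest" changes with the criterion you sort on (ISO dates sort as strings):
by_number   = min(marks, key=lambda m: m["reg_no"])["mark"]
by_reg_date = min(marks, key=lambda m: m["reg_date"])["mark"]
by_app_date = min(marks, key=lambda m: m["app_date"])["mark"]
print(by_number, by_reg_date, by_app_date)  # Yanagiya Jukai Yanagiya
```

With these made-up numbers, each counting rule crowns a different (or the same) winner, which is exactly the ambiguity in the real dispute.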
The other tidbit is my first attempt at a map-based data visualization, which I built using Tableau, a surprisingly handy software tool with a free public build. I used it to visualize how trademark owners from outside Japan try to protect their marks in Japan–specifically, whether they seek registrations via Japan’s domestic registration system, or via the international registration system established by the Madrid Protocol. Here’s what I’ve found:
The size of each circle represents an estimate of the number of applications for Japanese trademark registrations from each country between 2001 and 2014. The color represents the proportion of those applications that were filed via the Madrid Protocol (dark blue is all Madrid Protocol; dark red is all domestic applications; paler colors are a mix). The visualization isn’t perfect because not all countries acceded to the Madrid Protocol at the same time–some acceded in the middle of the data collection period, and many have never acceded. (When I have more time maybe I’ll try to figure out how to add a time-lapse animation to bring an extra dimension to the visualization.) Still, it’s a nice, rich, dense presentation of a large and complex body of data.
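For those curious about the mechanics, the aggregation behind the map boils down to counting applications and computing the Madrid-route share per country of origin. Here is a minimal sketch in pandas with invented column names and a handful of toy rows; my actual processing lives in Stata, and the real JPO fields are named differently:

```python
import pandas as pd

# Toy stand-in for the real dataset: one row per application, with the
# applicant's country of origin and the filing route. Column names and
# values are illustrative assumptions, not actual JPO field names.
apps = pd.DataFrame({
    "origin_country": ["US", "US", "US", "DE", "DE", "CN"],
    "route": ["madrid", "domestic", "domestic", "madrid", "madrid", "domestic"],
})

summary = (
    apps.assign(is_madrid=apps["route"].eq("madrid"))
        .groupby("origin_country")
        .agg(applications=("route", "size"),      # drives circle size
             madrid_share=("is_madrid", "mean"))  # drives the color scale
)
print(summary)
```

In the visualization, `applications` maps to circle size and `madrid_share` to color (1.0 = all Madrid Protocol, 0.0 = all domestic, intermediate values = a mix).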
4,518,184 unique applications. From four different data sources. 74.71GB. All in Stata.
Now I just have to figure out what it all means. And I have two weeks to do it.