Scholarship

New Draft: Finding Dilution (An Application of Trademark as Promise)

I’ve just posted to SSRN a draft of a book chapter for a forthcoming volume on trademark law theory and reform edited by Graeme Dinwoodie and Mark Janis. My contribution, entitled “Finding Dilution,” reviews the history and theory of dilution, the quixotic theory of liability that everybody loves to hate. As Rebecca Tushnet has noted, in a post-Tam world dilution may not have much of a future, and my analysis in this draft may therefore be moot by the time this volume gets published. But if not, the exercise has given me an opportunity to extend the theoretical framework I established and defended in the Stanford Law Review a few years ago: Trademark as Promise.

In Marks, Morals, and Markets, I argued that a contractualist understanding of trademarks as a tool to facilitate the making and keeping of promises from producers to consumers offered a better descriptive–and more attractive normative–account of producer-consumer relations than the two theoretical frameworks most often applied to trademark law (welfarism and Lockean labor-desert theory). But I “intentionally avoided examining contractualist theory’s implications for trademark law’s regulation of producer-producer relationships” (p. 813), mostly for lack of space, though I conjectured that these implications might well differ from those of a Lockean account. In my new draft, I take on this previously avoided topic and argue that my conjecture was correct, and that the contractualist account of Trademark as Promise offers a justification for the seeming collapse of trademark dilution law into trademark infringement law (draft at 18):

This justification, in turn, seems to depend on a particular kind of consumer reliance—reliance not on stable meaning, which nobody in a free society is in a position to provide, but on performance of promises to deliver goods and services. It is interference with that promise—a promise that does not require the promisor to constrain the action of any third party against their will—that trademark law protects from outside interference. A contractualist trademark right, then, would be considerably narrower than even the infringement-based rights of today. To recast dilution law to conform to such a right would be to do away with dilution as a concept. A promise-based theory of dilution would enforce only those promises the promisor could reasonably perform without constraining the freedom of others to act, while constraining that freedom only to the extent necessary to allow individuals—and particularly consumers—to be able to determine whether a promise has in fact been performed.

As they say, read the whole thing. Comments welcome.


New Draft: Brand Renegades Redux

[Photo: Charlottesville riot]

I have posted to SSRN a draft of the essay I contributed to Ann Bartow’s IP Scholarship Redux conference at the University of New Hampshire (slides from my presentation at the conference are available here). These are dark times, and the darkness leaves nothing untouched–certainly not the consumer culture in which we all live our daily lives. As I say in the essay, Nazis buy sneakers too, and often with a purpose. We all–brand owners, consumers, lawyers, and judges–should think about how we can best respond to them.

Trademark Clutter at Northwestern Law REMIP

I’m in Chicago at Northwestern Law today to present an early-stage empirical project at the Roundtable on Empirical Methods in Intellectual Property (#REMIP). My project will use Canada’s pending change to its trademark registration system as a natural experiment to investigate the role national IP offices play in reducing “clutter”–registrations for marks that go unused, raising clearance costs and depriving competitors and the public of potentially valuable source identifiers.

Slides for the presentation are available here.

Thanks to Dave Schwartz of Northwestern, Chris Buccafusco of Cardozo, and Andrew Toole of the US Patent and Trademark Office for organizing this conference.

Mix, Match, and Layer: Hemel and Ouellette on Incentives and Allocation in Innovation Policy

One of the standard tropes of IP scholarship is that when it comes to knowledge goods, there is an inescapable tradeoff between incentives and access. IP gives innovators and creators some assurance that they will be able to recoup their investments, but at the cost of the deadweight losses and restriction of access that result from supracompetitive pricing. Alternative incentive regimes—such as government grants, prizes, and tax incentives—may simply recapitulate this tradeoff in other forms: providing open access to government-funded research, for example, may blunt the incentives that would otherwise spur creation of knowledge goods for which a monopolist would be able to extract significant private value through market transactions.

In “Innovation Policy Pluralism” (forthcoming Yale L. J.), Daniel Hemel and Lisa Larrimore Ouellette challenge this orthodoxy. They argue that the incentive and access effects of particular legal regimes are not necessarily a package deal. And in the process, they open up tremendous new potential for creative thinking about how legal regimes can and should support and disseminate new knowledge.

Building on their prior work on innovation incentives, Hemel and Ouellette note that such incentives may be set ex ante or ex post, by the government or by the market. (Draft at 8) Various governance regimes—IP, prizes, government grants, and tax incentives—offer policymakers “a tunable innovation-incentive component: i.e., each offers potential innovators a payoff structure that determines the extent to which she will bear R&D costs and the rewards she will receive contingent upon different project outcomes.” (Id. at 13-14)

The authors further contend that each of these governance regimes also entails a particular allocation mechanism—“the terms under which consumers and firms can gain access to knowledge goods.” (Id. at 14) The authors’ exploration of allocation mechanisms is not as rich as their earlier exploration of incentive structures—they note that allocation is a “spectrum” at one end of which is monopoly pricing and at the other end of which is open access. But further investigation of the details of allocation mechanisms may well be left to future work; the key point of this paper is that “the choice of innovation incentive and the choice of allocation mechanism are separable.” (Id., emphasis added) While the policy regimes most familiar to us tend to bundle a particular innovation incentive with a particular allocation mechanism, setting up the familiar tradeoff between incentives and access, Hemel and Ouellette argue that “policymakers can and sometimes do decouple these elements from one another.” (Id. at 15) They suggest three possible mechanisms for such de-coupling: matching, mixing, and layering.

By “matching,” the authors are primarily referring to the combination of IP-like innovation incentives with open-access allocation mechanisms, which allows policymakers “to leverage the informational value of monopoly power while achieving the allocative efficiency of open access.” For example, the government could “buy out” a patentee using some measure of the patent’s net present value and then dedicate the patent to the public domain. (Id. at 15-17) Conversely, policymakers could incentivize innovation with non-IP mechanisms and then channel the resulting knowledge goods into a monopoly-seller market allocation mechanism. This, they argue, might be desirable where incentives are needed for the commercialization of knowledge goods (such as drugs that require lengthy and expensive testing), incentives of the kind the Bayh-Dole Act was supposedly designed to provide. (Id. at 18-23) Intriguingly, they also suggest that such matching might be desirable in service to a “user-pays” distributive principle. (Id. at 18) (More on that in a moment.)

The second de-coupling strategy is “mixing.” Here, the focus is not so much on the relationships between incentives and allocation, but on the ways various incentive structures can be combined, or various allocation mechanisms can be combined. The incentives portion of this section (id. at 23-32) reads largely as an extension and refinement of Hemel’s and Ouellette’s earlier paper on incentive mechanisms, following the model of Suzanne Scotchmer and covering familiar ground on the information economics of incentive regimes. Their discussion of mixing allocation mechanisms (id. at 32-36)—for example by allowing monopolization but providing consumers with subsidies—is a bit less assured, but far more novel. They note that monopoly pricing seems normatively undesirable due to deadweight loss, but offer two justifications for it. The first, building on the work of Glen Weyl and Jean Tirole, is a second-order justification that piggybacks on the information economics of the authors’ incentives analysis. To wit: they suggest that allocating access according to price gives some market test of a knowledge good’s social value, so an appropriate incentive can be provided. (Id. at 33-34) Again, however, the authors’ second argument is intriguingly distributive: they suggest that for some knowledge goods—for example “a new yachting technology” enjoyed only by the wealthy—restricting access by imposing supracompetitive costs may help enforce a normatively attractive “user-pays” principle. (Id. at 33, 35)

The final de-coupling strategy, “layering,” involves different mechanisms operating at different levels of political organization. For example, while TRIPS imposes an IP regime at the supranational level, individual TRIPS member states may opt for non-IP incentive mechanisms or open access allocation mechanisms at the domestic level—as many states do with Bayh-Dole regimes and pharmaceutical delivery systems, respectively. (Id. at 36-39) This analysis builds on another of the authors’ previous papers, and again rests on a somewhat underspecified distributive rationale: layering regimes with IP at the supranational level may be desirable, Hemel and Ouellette argue, because it allows “signatory states [to] commit to reaching an arrangement under which knowledge-good consumers share costs with knowledge-good producers” and “establish[es] a link between the benefits to the consumer state and the size of the transfer from the consumer state to the producer state” so that “no state ever needs to pay for knowledge goods it doesn’t use.” (Id. at 38, 39) What the argument does not include is any reason to think these features of the supranational IP regime are in fact normatively desirable.

Hemel’s and Ouellette’s article concludes with some helpful illustrations from the pharmaceutical industry of how matching, mixing, and layering operate in practice. (Id. at 39-45) These examples, and the theoretical framework underlying them, offer fresh ways of looking at our knowledge governance regimes. They demonstrate that incentives and access are not simple tradeoffs baked into those regimes—that they have some independence, and that we can tune them to suit our normative ends. They also offer tantalizing hints that those ends may—perhaps should—include norms regarding distribution.

What this article lacks, but strongly invites the IP academy to begin investigating, is an articulated normative theory of distribution. Distributive norms are an uncomfortable topic for American legal academics—and especially American IP academics—who have almost uniformly been raised in the law-and-economics tradition. That tradition tends to bracket distributive questions and focus on questions of efficiency as to which—it is thought—all reasonable minds should agree. Such agreement is admittedly absent from distributive questions, and as a result we may simply lack the vocabulary, at present, to thoroughly discuss the implications of Hemel’s and Ouellette’s contributions. Their latest work suggests it may be time for our discipline to broaden its perspective on the social implications of knowledge creation.

Valuing Progress: Forthcoming 2018 from Cambridge University Press

I’m very pleased to announce that the book project I have been plodding away at for over two years is now under contract with Cambridge University Press. Its working title is Valuing Progress: A Pluralist Approach to Knowledge Governance. Keep an eye out for it in late 2018, and tell your librarian to do likewise!

Bits and pieces of Valuing Progress have appeared on this blog and elsewhere as it has developed from a half-baked essay into a monograph-sized project:

  • I presented my first musings about the relationship between normative commitments regarding distribution and the choice of a knowledge-governance regime as the opening plenary presentation at IPSC in Berkeley–these musings will now be more fully developed in Chapter 4 of the book: “Reciprocity.”
  • My exploration of our obligations to future persons, and the implication of those obligations for our present-day knowledge-governance policies, used analogous arguments in environmental policy as an early springboard. Deeper consideration of our obligations to the future led me to Derek Parfit’s Non-Identity Problem, at first through the lens of public health policy. Because knowledge governance–like environmental stewardship and global health policy–is a cooperative social phenomenon spanning timescales greater than any single human lifetime, the problem of future persons is one any theory of knowledge governance must engage. I made my first effort to do so at the 2015 Works-In-Progress in Intellectual Property (WIPIP) Conference at the University of Washington, and presented a more recent take at NYU’s 2017 Tri-State IP Workshop. My fuller treatment of the issue will appear in Chapter 7 of Valuing Progress: “Future Persons.”
  • Finally, the driving theoretical debate in IP in recent years has been the one between Mark Lemley, champion of consequentialism, and Rob Merges, who has lately turned from consequentialism to nonconsequentialist philosophers such as Locke and Rawls for theoretical foundations. My hot take on this debate was generative enough to justify organizing a symposium on the issue at the St. John’s Intellectual Property Law Center, where I serve as founding director. I was gratified that both Professors Lemley and Merges presented on a panel together, and that I was able to use the opportunity to more fully introduce my own thoughts on this debate. My introduction to the symposium issue of the St. John’s Law Review forms the kernel of Chapter 2 of Valuing Progress: “From Is to Ought.”

Other chapters will discuss the incommensurability of values at stake in knowledge governance, the relevance of luck and agency to our weighing of those values, the widening of our moral concern regarding the burdens and benefits of knowledge creation to encompass socially remote persons, and the role of value pluralism in shaping political institutions and ethical norms to reconcile these values when they inevitably conflict. The result, I hope, will introduce my colleagues in innovation and creativity law and policy to a wider literature in moral philosophy that bears directly on their work. In doing so, I hope to help frame the distinction between–and the appropriate domains of–empirical and normative argumentation, to point a way out of our increasingly unhelpful arguments about 18th-century philosophy, and to introduce a more nuanced set of normative concerns that engage with the messiness and imperfection of human progress.

I am extremely grateful to everyone who has helped me to bring Valuing Progress to this important stage of development, including Matt Gallaway at CUP, the organizers of conferences at which I’ve had the opportunity to present early pieces of the project (particularly Peter Menell, Pam Samuelson, Molly Shaffer Van Houweling, and Rob Merges at Berkeley; Jennifer Rothman at Loyola of Los Angeles; Jeanne Fromer and Barton Beebe at NYU; Zahr Said at the University of Washington; Irina Manta at Hofstra; and Paul Gugliuzza at Boston University). I am also grateful for the support of St. John’s Law School, my dean Mike Simons, and my colleagues who have served as associate dean for faculty scholarship as this project has been in development: Marc DeGirolami and Anita Krishnakumar. Many more friends and colleagues have offered helpful feedback on early drafts and conversation about points and arguments that will find their way into the manuscript; they can all expect warm thanks in the acknowledgments section of the finished book.

But first, I have to finish writing the thing. So, back to work.

The Japan Trademarks Dataset: Presentation Slides

The Institute of Intellectual Property has graciously allowed me to share the slide deck from my summer research project on Japan’s trademark registration system. The slide deck includes the text of the presentation in the presenter notes, and you can download it here.

The photo leading this post was taken during my presentation at IIP in Tokyo. It shows me with my favorite visual aid: a bottle of (excellent) mirin bearing one of the contenders for Japan’s oldest registered trademark, Kokonoe Sakura.

“Legal Sets” Posted to SSRN

A little over a year ago, I was noodling over a persistent doctrinal puzzle in trademark law, and I started trying to formulate a systematic approach to the problem. The system quickly became bigger than the problem it was trying to solve, and because of the luxuries of tenure, I’ve been able to spend much of the past year chasing it down a very deep rabbit hole. Now I’m back, and I’ve brought with me what I hope is a useful way of thinking about law as a general matter. I call it “Legal Sets,” and it’s my first contribution to general legal theory. Here’s the abstract:

In this Article I propose that legal reasoning and analysis are best understood as being primarily concerned, not with rules or propositions, but with sets. The distinction is important to the work of lawyers, judges, and legal scholars, but is not currently well understood. This Article develops a formal model of the role of sets in a common-law system defined by a recursive relationship between cases and rules. In doing so it demonstrates how conceiving of legal doctrines as a universe of discourse comprising (sometimes nested or overlapping) sets of cases can clarify the logical structure of many so-called “hard cases,” and help organize the available options for resolving them according to their form. This set-theoretic model can also help to cut through ambiguities and clarify debates in other areas of legal theory—such as in the distinction between rules and standards, in the study of interpretation, and in the theory of precedent. Finally, it suggests that recurring substantive concerns in legal theory—particularly the problem of discretion—are actually emergent structural properties of a system that is composed of “sets all the way down.”

And the link: http://ssrn.com/abstract=2830918

And a taste of what’s inside:

[Screenshot: an excerpt from the draft]

I’ll be grateful for comments, suggestions, and critiques from anyone with the patience to read the draft.

Home Stretch

Today was the deadline for me to submit a draft presentation on the research I’ve been doing in Japan for the past six weeks. The deadline pressure explains why I haven’t posted here in a while. The good news is that I was able to browbeat my new (and still growing) dataset into sufficient shape to generate some interesting insights, which I will share with my generous sponsors here at the Institute of Intellectual Property next week, before heading home to New York.

I am not at liberty to share my slide deck right now, but I can’t help but post on a couple of interesting tidbits from my research. The first is a follow-up on my earlier post about the oldest Japanese trademark. I had been persuaded that the two-character mark 重九 was in fact a form of the three-character mark (大重九), a brand of Chinese cigarettes. Turns out I was wrong. It is, in fact, the brand of a centuries-old brewer of mirin–a sweet rice wine used in cooking. (The cigarette brand is also registered in Japan, as of 2007–which says something about the likelihood-of-confusion standard in Japanese trademark law.) And as I found out, there’s some question as to whether this mark (which, read right to left, reads “Kokonoe”) really is the oldest Japanese trademark. There’s competition from the hair-products company, Yanagiya, which traces its lineage back 400 years to the court physician of the first Tokugawa Shogun; and also from a sake brewer from Kobe who sells under the “Jukai” label. Which is the oldest depends on how you count: by registration number, by registration date, or by application date. Anyway, all of them would have taken a backseat to that historic American brand, Singer–but the company allowed its oldest Japanese trademark registration to lapse six years ago.

The other tidbit is my first attempt at a map-based data visualization, which I built using Tableau, a surprisingly handy software tool with a free public build. I used it to visualize how trademark owners from outside Japan try to protect their marks in Japan–specifically, whether they seek registrations via Japan’s domestic registration system, or via the international registration system established by the Madrid Protocol. Here’s what I’ve found:

[Map: applications for Japanese trademark registrations by country of origin, 2001-2014, colored by Madrid Protocol share]
The size of each circle represents an estimate of the number of applications for Japanese trademark registrations from each country between 2001 and 2014. The color represents the proportion of those applications that were filed via the Madrid Protocol (dark blue is all Madrid Protocol; dark red is all domestic applications; paler colors are a mix). The visualization isn’t perfect because not all countries acceded to the Madrid Protocol at the same time–some acceded in the middle of the data collection period, and many have never acceded. (When I have more time maybe I’ll try to figure out how to add a time-lapse animation to bring an extra dimension to the visualization.) Still, it’s a nice, rich, dense presentation of a large and complex body of data.
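For anyone curious about the aggregation behind a map like this, here is a minimal sketch of the calculation, written in Python rather than the Stata and Tableau I’m actually using; the file name and the “country” and “route” columns are hypothetical stand-ins for application-level records that carry a country of origin and a filing route.

    # Minimal sketch (not the actual pipeline) of the per-country aggregates behind the map.
    # Assumes a hypothetical CSV of application-level records with "country" and "route"
    # columns, where "route" is either "madrid" (filed via the Madrid Protocol) or "domestic".
    import pandas as pd

    apps = pd.read_csv("jp_applications_2001_2014.csv")  # hypothetical input file

    summary = (
        apps.groupby("country")
            .agg(
                n_applications=("route", "size"),  # drives the size of each circle
                madrid_share=("route", lambda r: (r == "madrid").mean()),  # drives the color
            )
            .reset_index()
            .sort_values("n_applications", ascending=False)
    )

    print(summary.head(10))  # top filing countries and their Madrid Protocol share

Each row of the resulting table corresponds to one circle on the map: the application count sets its size, and the Madrid share sets where it falls on the blue-to-red color scale.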


Turning a Corner

It has been a rough week of coding, processing, and debugging. But at long last, tonight I’m running the final two scripts needed to parse the last of the 330+ gigabytes of data I received three weeks ago, and I’ve already tested them, so I’m pretty confident they’ll work. By tomorrow, if all goes as planned, I’ll have all the data I’m going to be using on this project (a lean 60 GB or so) imported into Stata, where I can slice and dice it however I please. At exactly the halfway point of my residency in Tokyo, this is a major milestone.

The next step is some finer-grained cleaning and de-duplicating of this data, followed by some additional coding to structure it in a useful way (as you can see in the photo, I’ve already started sketching out my file trees). Then I’ll be able to describe and analyze what I’ve built. All of this will take time–the most primitive observation identifier in my data is the individual trademark application number, and it looks like I’ll be dealing with about 4.5 million of them, give or take. And each application number will have multiple records associated with it to capture lots of nitty-gritty trademark-y information like changing ownership and legal representation, renewals, divisional applications, goods and services classifications, foreign and international priority claims, and so on. Processing all that information takes time, and requires a lot of attention to detail. But today I’m feeling good. Today, I feel as confident as I ever have that this project is going to succeed.
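To give a flavor of the structure I have in mind, here is a minimal sketch, again in Python rather than the Stata I’m actually using, and with hypothetical file names, column names, and application-number format: every nitty-gritty detail lives in its own table keyed to the application number, and exact duplicate rows get dropped before anything else happens.

    # Minimal sketch of the record structure described above (the real work is in Stata;
    # file names, column names, and the application-number format are hypothetical).
    # The application number is the primitive identifier; each kind of detail
    # hangs off of it in its own table.
    import pandas as pd

    tables = {
        "ownership": pd.read_csv("ownership_records.csv"),  # changes in ownership and representation
        "renewals":  pd.read_csv("renewal_records.csv"),    # renewal events
        "classes":   pd.read_csv("goods_services.csv"),     # goods and services classifications
        "priority":  pd.read_csv("priority_claims.csv"),    # foreign and international priority claims
    }

    # First-pass de-duplication: drop rows that are exact repeats within each table.
    tables = {name: t.drop_duplicates() for name, t in tables.items()}

    def records_for(app_no):
        """Collect every record associated with a single trademark application number."""
        return {name: t[t["app_no"] == app_no] for name, t in tables.items()}

    # Pull everything tied to one (hypothetical) application number:
    bundle = records_for("000000521")
    for name, records in bundle.items():
        print(name, len(records), "records")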

So here is a first fruit of my research. The data I’m working with only goes back 15 years, but for any trademark registrations that have still been in force during those past 15 years, I have a fair amount of historical data. The earliest application date I’ve found in the data I’ve imported so far is July 31, 1890. That application–which became Japan Trademark Registration Number 521–is for the mark “重九”, which means literally nothing to me. But I asked around the office, and fortunately I have a colleague from Beijing here in Tokyo, who tells me 重九 is actually a Chinese brand–for cigarettes:

http://www.etmoc.com/eWebEditor/2011/2011090515413693.jpg

重九 translates roughly to “double-nine,” and the additional character 大 (which apparently always accompanies the mark in its current use) translates roughly to “big” (i.e., “Big Double-Nine” cigarettes). The mark was last renewed in Japan on March 28, 2015. Given that I’m here to study international aspects of intellectual property as they pertain to Japan, the fact that the earliest mark on record appears to be foreign is an interesting development.