Brand Renegades Redux: University of New Hampshire IP Scholarship Redux Conference

Apparently I’ve been professoring long enough to reflect on my earlier work and see how well it has held up to the test of time. Thanks to Ann Bartow, I have the opportunity to engage in this introspection publicly and collaboratively, among a community of scholars doing likewise. I’m in Concord, New Hampshire, today to talk about my 2011 paper, Brand Renegades. At the time I wrote it, I was responding to economic and legal dynamics between consumers, brand owners, and popular culture. Relatively light fare, but with a legal hook.

Nowadays, these issues carry a bit more weight. As with everything else in these dark times, brands have become battlegrounds for high-stakes political identity clashes. I’ve talked about this trend in the media; today I’ll be discussing what I think it means for law.

Slides of the presentation are here.

Mix, Match, and Layer: Hemel and Ouellette on Incentives and Allocation in Innovation Policy

One of the standard tropes of IP scholarship is that when it comes to knowledge goods, there is an inescapable tradeoff between incentives and access. IP gives innovators and creators some assurance that they will be able to recoup their investments, but at the cost of the deadweight losses and restriction of access that result from supracompetitive pricing. Alternative incentive regimes—such as government grants, prizes, and tax incentives—may simply recapitulate this tradeoff in other forms: providing open access to government-funded research, for example, may blunt the incentives that would otherwise spur creation of knowledge goods for which a monopolist would be able to extract significant private value through market transactions.
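
To fix ideas, here is the textbook version of that tradeoff (my illustration, not the authors’): with linear demand and constant marginal cost, supracompetitive pricing produces a deadweight-loss triangle.

```latex
% Textbook monopoly example (illustration only; not from the paper).
% Linear inverse demand p = a - bq, constant marginal cost c < a.
% The monopolist sets
\[
q^m = \frac{a - c}{2b}, \qquad p^m = \frac{a + c}{2},
\]
% versus the competitive (open-access) quantity q^c = (a - c)/b.
% The deadweight loss from monopoly pricing is the triangle
\[
DWL = \frac{1}{2}\,(p^m - c)\,(q^c - q^m) = \frac{(a - c)^2}{8b}.
\]
```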

In “Innovation Policy Pluralism” (forthcoming Yale L. J.), Daniel Hemel and Lisa Larrimore Ouellette challenge this orthodoxy. They argue that the incentive and access effects of particular legal regimes are not necessarily a package deal. And in the process, they open up tremendous new potential for creative thinking about how legal regimes can and should support and disseminate new knowledge.

Building on their prior work on innovation incentives, Hemel and Ouellette note that such incentives may be set ex ante or ex post, by the government or by the market. (Draft at 8) Various governance regimes—IP, prizes, government grants, and tax incentives—offer policymakers “a tunable innovation-incentive component: i.e., each offers potential innovators a payoff structure that determines the extent to which she will bear R&D costs and the rewards she will receive contingent upon different project outcomes.” (Id. at 13-14)
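
One way to formalize that tunable payoff structure (my gloss; the notation is not the authors’):

```latex
% Hypothetical notation, not drawn from the paper.
\[
\mathbb{E}[\pi] \;=\; \sum_{o \in O} \Pr(o)\,R(o) \;-\; \kappa\,C,
\]
% where O is the set of possible project outcomes, R(o) is the reward the
% regime pays on outcome o, C is the R&D cost, and \kappa \in [0,1] is the
% share of that cost the innovator bears. IP, prizes, grants, and tax
% incentives each amount to a different choice of R(\cdot) and \kappa.
```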

The authors further contend that each of these governance regimes also entails a particular allocation mechanism—“the terms under which consumers and firms can gain access to knowledge goods.” (Id. at 14) The authors’ exploration of allocation mechanisms is not as rich as their earlier exploration of incentive structures—they note that allocation is a “spectrum” at one end of which is monopoly pricing and at the other end of which is open access. But further investigation of the details of allocation mechanisms may well be left to future work; the key point of this paper is that “the choice of innovation incentive and the choice of allocation mechanism are separable.” (Id., emphasis added) While the policy regimes most familiar to us tend to bundle a particular innovation incentive with a particular allocation mechanism, setting up the familiar tradeoff between incentives and access, Hemel and Ouellette argue that “policymakers can and sometimes do decouple these elements from one another.” (Id. at 15) They suggest three possible mechanisms for such de-coupling: mixing, matching, and layering.

By “matching,” the authors are primarily referring to the combination of IP-like innovation incentives with open-access allocation mechanisms, which allows policymakers “to leverage the informational value of monopoly power while achieving the allocative efficiency of open access.” For example, the government could “buy out” a patentee using some measure of the patent’s net present value and then dedicate the patent to the public domain. (Id. at 15-17) Conversely, policymakers could incentivize innovation with non-IP mechanisms while channeling the resulting knowledge goods into a monopoly-seller market allocation mechanism. This, they argue, might be desirable where incentives are needed for the commercialization of knowledge goods (such as drugs that require lengthy and expensive testing), the sort of incentive the Bayh-Dole Act was supposedly designed to provide. (Id. at 18-23) Intriguingly, they also suggest that such matching might be desirable in service to a “user-pays” distributive principle. (Id. at 18) (More on that in a moment.)
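
The buyout idea can be made concrete with the standard net-present-value formula (my sketch, not a figure from the paper):

```latex
% Standard NPV calculation; the variables are illustrative.
\[
B \;=\; \sum_{t=1}^{T} \frac{\mathbb{E}[\pi_t]}{(1 + r)^t},
\]
% where \pi_t is the monopoly profit the patent would earn in year t,
% r is the discount rate, and T is the patent's remaining term. The
% government pays the patentee B, then dedicates the patent to the public.
```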

The second de-coupling strategy is “mixing.” Here, the focus is not so much on the relationships between incentives and allocation, but on the ways various incentive structures can be combined, or various allocation mechanisms can be combined. The incentives portion of this section (id. at 23-32) reads largely as an extension and refinement of Hemel’s and Ouellette’s earlier paper on incentive mechanisms, following the model of Suzanne Scotchmer and covering familiar ground on the information economics of incentive regimes. Their discussion of mixing allocation mechanisms (id. at 32-36)—for example by allowing monopolization but providing consumers with subsidies—is a bit less assured, but far more novel. They note that monopoly pricing seems normatively undesirable due to deadweight loss, but offer two justifications for it. The first, building on the work of Glen Weyl and Jean Tirole, is a second-order justification that piggybacks on the information economics of the authors’ incentives analysis. To wit: they suggest that allocating access according to price gives some market test of a knowledge good’s social value, so an appropriate incentive can be provided. (Id. at 33-34) Again, however, the authors’ second argument is intriguingly distributive: they suggest that for some knowledge goods—for example “a new yachting technology” enjoyed only by the wealthy—restricting access by imposing supracompetitive costs may help enforce a normatively attractive “user-pays” principle. (Id. at 33, 35)

The final de-coupling strategy, “layering,” involves different mechanisms operating at different levels of political organization. For example, while TRIPS imposes an IP regime at the supranational level, individual TRIPS member states may opt for non-IP incentive mechanisms or open access allocation mechanisms at the domestic level—as many states do with Bayh-Dole regimes and pharmaceutical delivery systems, respectively. (Id. at 36-39) This analysis builds on another of the authors’ previous papers, and again rests on a somewhat underspecified distributive rationale: layering regimes with IP at the supranational level may be desirable, Hemel and Ouellette argue, because it allows “signatory states [to] commit to reaching an arrangement under which knowledge-good consumers share costs with knowledge-good producers” and “establish[es] a link between the benefits to the consumer state and the size of the transfer from the consumer state to the producer state” so that “no state ever needs to pay for knowledge goods it doesn’t use.” (Id. at 38, 39) What the argument does not include is any reason to think these features of the supranational IP regime are in fact normatively desirable.

Hemel’s and Ouellette’s article concludes with some helpful illustrations from the pharmaceutical industry of how matching, mixing, and layering operate in practice. (Id. at 39-45) These examples, and the theoretical framework underlying them, offer fresh ways of looking at our knowledge governance regimes. They demonstrate that incentives and access are not simple tradeoffs baked into those regimes—that they have some independence, and that we can tune them to suit our normative ends. They also offer tantalizing hints that those ends may—perhaps should—include norms regarding distribution.

What this article lacks, but strongly invites the IP academy to begin investigating, is an articulated normative theory of distribution. Distributive norms are an uncomfortable topic for American legal academics—and especially American IP academics—who have almost uniformly been raised in the law-and-economics tradition. That tradition tends to bracket distributive questions and focus on questions of efficiency as to which—it is thought—all reasonable minds should agree. Such agreement is admittedly absent from distributive questions, and as a result we may simply lack the vocabulary, at present, to thoroughly discuss the implications of Hemel’s and Ouellette’s contributions. Their latest work suggests it may be time for our discipline to broaden its perspective on the social implications of knowledge creation.

What If All Proof is Social Proof?

In 1931 Kurt Gödel proved that any consistent symbolic language system rich enough to express the principles of arithmetic would include statements that can be neither proven nor disproven within the system. A necessary implication is that in such systems, there are infinitely many true statements that cannot (within that system) be proven to be true, and infinitely many false statements that cannot (within that system) be proven to be false. Gödel’s achievement has sometimes been over-interpreted in the years since–as grounds for radical skepticism about the existence of truth, for example–when really all it established was a set of limitations on the possibility of formally modeling a complete system of logic from which all mathematical truths would deductively flow. Gödel gives us no reason to be skeptical of truth; he gives us reason to be skeptical of the possibility of proof, even in a domain so rigorously logical as arithmetic. In so doing, he teaches us that–in mathematics at least–truth and proof are different things.
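
For readers who want the modern formulation, the first incompleteness theorem is usually stated along these lines (a standard statement, compressing some technical conditions):

```latex
% First incompleteness theorem (standard modern statement, via Rosser's
% strengthening): for any consistent, effectively axiomatizable theory T
% extending elementary arithmetic, there is a sentence G_T such that
\[
T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T .
\]
```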

What is true for mathematics may be true for societies as well. The relationship between truth and proof is increasingly strained online, where we now spend ever-larger portions of our lives. Finding the tools to extract reliable information from the firehose of our social media feeds is proving difficult. The latest concern is “deepfakes”: video content that takes identifiable faces and voices and either puts them on other people’s bodies or digitally renders fabricated behaviors for them. Deepfakes can make it seem as if well-known celebrities or random private individuals are appearing in hard-core pornography, or as if world leaders are saying or doing things they never actually said or did. A while ago, the urgent concern was fake followers: the prevalence of bots and stolen identities being used to artificially inflate follower and like counts on social media platforms like Twitter, Facebook, and Instagram–often for profit. Some worry that these and other features of online social media are symptoms of a post-truth world, where facts or objective reality simply do not matter. But to interpret this situation as one in which truth is meaningless is to make the same error made by those who would read Gödel’s incompleteness theorems as a license to embrace epistemic nihilism. Our problem is not one of truth, but one of proof. And the ultimate question we must grapple with is not whether truth matters, but to whom it matters, and whether those to whom truth matters can form a cohesive and efficacious political community.

The deepfakes problem, for example, does not suggest that truth is in some new form of danger. What it suggests is that one of the proof strategies that we had thought bound our political community together may no longer do so. After all, an uncritical reliance on video recordings as evidence of what has happened in the world is untenable if video of any possible scenario can be undetectably fabricated.

But this is not an entirely new problem. Video may be more reliable than other forms of evidence in some ways. But video has always proven different things to different people. Different observers, with different backgrounds and commitments, can and will justify different–even inconsistent–beliefs using identical evidence. Where one person sees racist cops beating a black man on the sidewalk, another person will see a dangerous criminal refusing to submit to lawful authority. These differences in the evaluation of evidence reveal deep and painful fissures in our political community, but they do not suggest that truth does not matter to that community–if anything, our intense reactions to episodes of disagreement suggest the opposite. As these episodes demonstrate, video was never truth; it has always been evidence, and evidence is, again, a component of proof. We have long understood this distinction, and should recognize its importance to the present perceived crisis.

What, after all, is the purpose of proof? One purpose–the purpose for which we often think of proof as being important–is that proof is how we acquire knowledge. If, as Plato argued, knowledge is justified true belief (or even if this is merely a necessary, albeit insufficient, basis for a claim to knowledge), proof may satisfy some need for justification. But that does not mean that justification can always be derived from truth. One can be a metaphysical realist–that is, believe that some objective reality exists independently of our minds–without holding any particular commitments regarding the nature of justified belief. Justification is, descriptively, whatever a rational agent will accept as a reason for belief. And in this view, proof is simply a tool for persuading rational agents to believe something.

Insofar as proof is thought to be a means to acquiring knowledge, the agent to be persuaded is often oneself. But this obscures the deeply interpersonal–indeed social–nature of proof and justification. When asking whether our beliefs are justified, we are really asking ourselves whether the reasons we can give for our beliefs are such as we would expect any rational agent to accept. We can thus understand the purpose of proof as the persuasion of others that our own beliefs are correct–something Socrates thought the orators and lawyers of Athens were particularly skilled at doing. As Socrates recognized, this understanding of proof clearly has no necessary relation to the concept of truth. It is, instead, consistent with an “argumentative theory” of rationality and justification. To be sure, we may have strong views about what rational agents ought to accept as a reason for belief–and like Socrates, we might wish to identify those normative constraints on justification with some notion of objective truth. But such constraints are contested, and socially contingent.

This may be why the second social media trend noted above–“fake followers”–is so troubling. The most socially contingent strategy we rely on to justify our beliefs is to adopt the observed beliefs of others in our community as our own. We often rely, in short, on social proof. This is something we apparently do from a very early age, and indeed, it would be difficult to obtain the knowledge needed to make our way through the world if we didn’t. When a child wants to know whether something is safe to eat, it is a useful strategy to see whether an adult will eat it. But what if we want to know whether a politician actually said something they were accused of saying on social media–or something a video posted online appears to show them saying? Does the fact that thousands of Facebook accounts have “liked” the video justify any belief in what the politician did in fact say, one way or another?

Social proof has an appealingly democratic character, and it may be practically useful in many circumstances. But we should clearly recognize that the acceptance of a proposition as true by others in our community doesn’t have any necessary relation to actual truth. Your parents were wise to warn you that you shouldn’t jump off a cliff just because all your friends did it. We obviously cannot rely exclusively on social proof as a justification for belief. To this extent, as Ian Bogost put it, “all followers are fake followers.”

Still, the observation that social proof is an imperfect proxy for truth does not change the fact that it is–like the authority of video–something we have a defensible habit of relying on in (at least partially) justifying (at least some of) our beliefs. Moreover, social proof makes a particular kind of sense under a pragmatist approach to knowledge. As the pragmatists argued, the relationship between truth and proof is not a necessary one, because proof is ultimately not about truth; it is about communities. In Rorty’s words:

For the pragmatist … knowledge is, like truth, simply a compliment paid to the beliefs we think so well justified that, for the moment, further justification is not needed. An inquiry into the nature of knowledge can, on his view, only be a socio-historical account of how various people have tried to reach agreement on what to believe.

Whether we frame them in terms of Kuhnian paradigms or cultural cognition, we are all familiar with examples of different communities asserting or disputing truth on the basis of divergent or incompatible criteria of proof. Organs of the Catholic Church once held that the Earth is motionless and the sun moves around it–and banned books that argued the contrary–relying on the authority of Holy Scripture as proof. The contrary position that sparked the Galilean controversy–eppur si muove (“and yet it moves”)–was generated by a community that relied on visual observation of the celestial bodies as proof. Yet another community might hold that the question of which body moves and which stands still depends entirely on the identification of a frame of reference, without which the concept of motion is ill-defined.

For each of these communities, their beliefs were justified in the Jamesian sense that they “worked” to meet the needs of individuals in those communities at those times–at least until they didn’t. As particular forms of justification stop working for a community’s purposes, that community may fracture and reorganize around a new set of justifications and a new body of knowledge–hopefully but not necessarily closer to some objective notion of truth than the body of knowledge it left behind. Even if we think there is a truth to the matter–as one feels there must be in the context of the physical world–there are surely multiple epistemic criteria people might cite as justification for believing such a truth has been sufficiently identified to cease further inquiry, and those criteria might be more or less useful for particular purposes at particular times.

This is why the increasing unreliability of video evidence and social proof is so troubling in our own community, in our own time. These are criteria of justification that have up to now enjoyed (rightly or wrongly) wide acceptance in our political community. But when one form of justification ceases to be reliable, we must either discover new ones or fall back on others–and either way, these alternative proof strategies may not enjoy such wide acceptance in our community. The real danger posed by deepfakes is not that recorded truth will somehow get lost in the fever swamps of the Internet. The real danger posed by fake followers is not that half a million “likes” will turn a lie into the truth. The deep threat of these new phenomena is that they may undermine epistemic criteria that bind members of our community in common practices of justification, leaving only epistemic criteria that we do not all share.

This is particularly worrisome because quite often we think ourselves justified in believing what we wish to be true, to the extent we can persuade ourselves to do so. Confirmation bias and motivated reasoning significantly shape our actual practices of justification. We seek out and credit information that will confirm what we already believe, and avoid or discredit information that will refute our existing beliefs. We shape our beliefs around our visions of ourselves, and our perceived place in the world as we believe it should be. To the extent that members of a community do not all want to believe the same things, and cannot rely on shared modes of justification to constrain their tendency toward motivated reasoning, they may retreat into fractured networks of trust and affiliation that justify beliefs along ideological, religious, ethnic, or partisan lines. In such a world, justification may conceivably come to rest on the argument popularized by Richard Pryor and Groucho Marx: Who are you going to believe, me or your lying eyes?

The danger of our present moment, in short, is that we will be frustrated in our efforts to reach agreement with our fellow citizens on what we ought to believe and why. This is not an epistemic crisis; it is a social one. We should not be misled into believing that the increased difficulty of justifying our beliefs to one another within our community somehow puts truth further out of our grasp. To do so would be to embrace the Möbius strip of epistemology Orwell put in the mouths of his totalitarians:

Anything could be true. The so-called laws of nature were nonsense. The law of gravity was nonsense. “If I wished,” O’Brien had said, “I could float off this floor like a soap bubble.” Winston worked it out. “If he thinks he floats off the floor, and if I simultaneously think I see him do it, then the thing happens.” Suddenly, like a lump of submerged wreckage breaking the surface of water, the thought burst into his mind: “It doesn’t really happen. We imagine it. It is hallucination.” He pushed the thought under instantly. The fallacy was obvious. It presupposed that somewhere or other, outside oneself, there was a “real” world where “real” things happened. But how could there be such a world? What knowledge have we of anything, save through our own minds? All happenings are in the mind. Whatever happens in all minds, truly happens.

This kind of equation of enforced belief with truth can only hold up where–as in the ideal totalitarian state–justification is both socially uncontested and entirely a matter of motivated reasoning. Thankfully, that is not our world–nor do I believe it ever can truly be. To be sure, there are always those who will try to move us toward such a world for their own ends–undermining our ability to forge common grounds for belief by fraudulently muddying the correlations between the voices we trust and the world we observe. But there are also those who work very hard to expose such actions to the light of day, and to reveal the fabrications of evidence and manipulations of social proof that are currently the cause of so much concern. This is good and important work. It is the work of building a community around identifying and defending shared principles of proof. And I continue to believe that such work can be successful, if we each take up the responsibility of supporting and contributing to it. Again, this is not an epistemic issue; it is a social one.

The fact that this kind of work is necessary in our community and our time may be unwelcome, but it is not cause for panic. Our standards of justification–the things we will accept as proof–are within our control, so long as we identify and defend them, together. Those who would undermine these standards can only succeed if we despair of the possibility that we can, across our political community, come to agreement on what justifications are valid, and put beliefs thus justified into practice in the governance of our society. I believe we can do this, because I believe that there are more of us than there are of them–that there are more people of goodwill and reason than there are nihilist opportunists. If I am right, and if enough of us give enough of our efforts to defending the bonds of justification that allow us to agree on sufficient truths to organize ourselves towards a common purpose, we will have turned the totalitarian argument on its head. Orwell’s totalitarians were wrong about truth, but they may have been right about proof.

Letter Opposing New York Right of Publicity Bill

Following the lead of the indefatigable Jennifer Rothman, I’ve posted the following letter to the members of the New York State Assembly and Senate opposing the current draft of the pending bill to replace New York’s venerable privacy tort with a right of publicity. I hope one of them will take me up on my offer to host discussion of the implications of this legislation at the St. John’s Intellectual Property Law Center.

Sheff NY ROP Letter 2017-6-16

Ripped From the Headlines: IP Exams

The longer I’ve been teaching, the harder I’ve found it to come up with novel fact patterns for my exams. There are only so many useful (and fair) ways to ask “Who owns Blackacre?” after all. So I’ve increasingly turned to real-life examples–modified to more squarely present the particular doctrinal issues I want to assess–as a basis for my exams. (I always make clear to my students that when I do use examples from real life in an exam, I will change the facts in potentially significant ways, such that they can do themselves more harm than good by referring to any commentary on the real-world inspirations for the exam.) In my IP classes there are always lots of fun examples to choose from. This spring, a couple of news reports that came out the week I was writing exams provided useful fodder for issue-spotter questions.

The first was a report in the Guardian of the story of Catherine Hettinger, a grandmother from Orlando who claims to have invented the faddish “fidget spinner” toys that have recently been banned from my son’s kindergarten classroom and most other educational spaces. (A similar story appeared in CNN Money the same day.) Ms. Hettinger patented her invention, she explained to the credulous reporters, but allowed the patent to expire for want of the necessary funds to pay the maintenance fee. Thereafter, finger spinners flooded the market, and Ms. Hettinger didn’t see a penny. (If you feel bad for her, you can contribute to her Kickstarter campaign, launched the day after the Guardian article posted.) The story was picked up by multiple other outlets, including the New York Times, US News, the New York Post, and the Jewish Telegraph.

There’s one problem with Ms. Hettinger’s story, which you might guess at by comparing her patent to the finger spinners you’ve seen in the market:

[Image: figure from U.S. Patent No. 5,591,062 (Hettinger)]

[Image: a fidget spinner as sold at retail. Source: Walmart.com]

The problem with Ms. Hettinger’s story is that it isn’t true. She didn’t invent the fidget spinner–her invention is a completely different device. As of this writing, the leading Google search result for her name is an article on Fatherly.com insinuating that Hettinger is committing fraud with her Kickstarter campaign.

Of course, as any good patent lawyer knows, the fact that Hettinger didn’t invent an actual fidget spinner doesn’t mean she couldn’t have asserted her patent against the makers of fidget spinners, if it were still in force. The question whether the fidget spinner would infringe such a patent depends on the validity and interpretation of the patent’s claims. So: a little cutting, pasting, and editing of the Hettinger patent, a couple of prior art references thrown in, and a few dates changed…and voilà! We’ve got an exam question.

The second example arose from reports of a complaint filed in federal court in California against the Canadian owners of a Mexican hotel who have recently begun marketing branded merchandise over the Internet. The defendants’ business is called the Hotel California, and the plaintiffs are yacht-rock megastars The Eagles.

[Image: still from The Big Lebowski, via http://www.brostrick.com/viral/best-quotes-from-the-big-lebowski-gifs/]

A little digging into the facts of this case reveals a host of fascinating trademark law issues, on questions of priority and extraterritorial rights, the Internet as a marketing channel, product proximity and dilution, geographic indications and geographic descriptiveness, and registered versus unregistered rights. All in all, great fodder for an exam question.


Valuing Progress: Forthcoming 2018 from Cambridge University Press

I’m very pleased to announce that the book project I have been plodding away at for over two years is now under contract with Cambridge University Press. Its working title is Valuing Progress: A Pluralist Approach to Knowledge Governance. Keep an eye out for it in late 2018, and tell your librarian to do likewise!

Bits and pieces of Valuing Progress have appeared on this blog and elsewhere as it has developed from a half-baked essay into a monograph-sized project:

  • I presented my first musings about the relationship between normative commitments regarding distribution and the choice of a knowledge-governance regime as the opening plenary presentation at IPSC in Berkeley–these musings will now be more fully developed in Chapter 4 of the book: “Reciprocity.”
  • My exploration of our obligations to future persons, and the implication of those obligations for our present-day knowledge-governance policies, used analogous arguments in environmental policy as an early springboard. Deeper consideration of our obligations to the future led me to Derek Parfit’s Non-Identity Problem, at first through the lens of public health policy. Because knowledge governance–like environmental stewardship and global health policy–is a cooperative social phenomenon spanning timescales greater than any single human lifetime, the problem of future persons is one any theory of knowledge governance must engage. I made my first effort to do so at the 2015 Works-In-Progress in Intellectual Property (WIPIP) Conference at the University of Washington, and presented a more recent take at NYU’s 2017 Tri-State IP Workshop. My fuller treatment of the issue will appear in Chapter 7 of Valuing Progress: “Future Persons.”
  • Finally, the driving theoretical debate in IP lately has been the one between Mark Lemley, champion of consequentialism, and Rob Merges, who has turned from consequentialism to nonconsequentialist philosophers such as Locke and Rawls for theoretical foundations. My hot take on this debate was generative enough to justify organizing a symposium on the issue at the St. John’s Intellectual Property Law Center, where I serve as founding director. I was gratified that Professors Lemley and Merges presented on a panel together, and that I was able to use the opportunity to more fully introduce my own thoughts on this debate. My introduction to the symposium issue of the St. John’s Law Review forms the kernel of Chapter 2 of Valuing Progress: “From Is to Ought.”

Other chapters will discuss the incommensurability of values at stake in knowledge governance, the relevance of luck and agency to our weighing of those values, the widening of our moral concern regarding the burdens and benefits of knowledge creation to encompass socially remote persons, and the role of value pluralism in shaping political institutions and ethical norms to reconcile these values when they inevitably conflict. The result, I hope, will introduce my colleagues in innovation and creativity law and policy to a wider literature in moral philosophy that bears directly on their work. In doing so, I hope to help frame the distinction between–and the appropriate domains of–empirical and normative argumentation, to point a way out of our increasingly unhelpful arguments about 18th-century philosophy, and to introduce a more nuanced set of normative concerns that engage with the messiness and imperfection of human progress.

I am extremely grateful to everyone who has helped me to bring Valuing Progress to this important stage of development, including Matt Gallaway at CUP, the organizers of conferences at which I’ve had the opportunity to present early pieces of the project (particularly Peter Menell, Pam Samuelson, Molly Shaffer Van Houweling, and Rob Merges at Berkeley; Jennifer Rothman at Loyola of Los Angeles; Jeanne Fromer and Barton Beebe at NYU; Zahr Said at the University of Washington; Irina Manta at Hofstra; and Paul Gugliuzza at Boston University). I am also grateful for the support of St. John’s Law School, my dean Mike Simons, and my colleagues who have served as associate dean for faculty scholarship as this project has been in development: Marc DeGirolami and Anita Krishnakumar. Many more friends and colleagues have offered helpful feedback on early drafts and conversation about points and arguments that will find their way into the manuscript; they can all expect warm thanks in the acknowledgments section of the finished book.

But first, I have to finish writing the thing. So, back to work.

Derek Parfit, RIP

Reports are that Oxford philosopher Derek Parfit died last night. Parfit’s philosophy is not well known or appreciated in my field of intellectual property, which is only just starting to absorb the work of John Rawls. This is something I am working to change, as the questions Parfit raised about our obligations to one another as persons–and in particular our obligations to the future–are deeply implicated in the policies intellectual property law is supposed to serve. Indeed, when I learned about Parfit’s death, I was hard at work trying to finish a draft of a book chapter that I will be presenting at NYU in less than two weeks. (The chapter is an extension of a presentation I made at WIPIP this past spring at the University of Washington.)

Parfit’s thoughts on mortality were idiosyncratic, based on his equally idiosyncratic views of the nature and identity of persons over time. I must admit I have never found his account of identity as psychological connectedness to be especially useful, but I have always found his almost Buddhist description of his state of mind upon committing to this view to be very attractive. So rather than mourn Parfit, I prefer to ruminate on his reflections on death, from page 281 of his magnificent book, Reasons and Persons:

[Image: excerpt from Reasons and Persons, p. 281]

If Parfit is right, then my own experiences, and those of others who have learned from his work, give us all reason to view the fact of his physical death as less bad than we might otherwise–and to be grateful. I can at least do the latter.

The Japan Trademarks Dataset: Presentation Slides

The Institute of Intellectual Property has graciously allowed me to share the slide deck from my summer research project on Japan’s trademark registration system. The slide deck includes the text of the presentation in the presenter notes, and you can download it here.

The photo leading this post was taken during my presentation at IIP in Tokyo. It shows me with my favorite visual aid: a bottle of (excellent) mirin bearing one of the contenders for Japan’s oldest registered trademark, Kokonoe Sakura.

“Legal Sets” Posted to SSRN

A little over a year ago, I was noodling over a persistent doctrinal puzzle in trademark law, and I started trying to formulate a systematic approach to the problem. The system quickly became bigger than the problem it was trying to solve, and because of the luxuries of tenure, I’ve been able to spend much of the past year chasing it down a very deep rabbit hole. Now I’m back, and I’ve brought with me what I hope is a useful way of thinking about law as a general matter. I call it “Legal Sets,” and it’s my first contribution to general legal theory. Here’s the abstract:

In this Article I propose that legal reasoning and analysis are best understood as being primarily concerned, not with rules or propositions, but with sets. The distinction is important to the work of lawyers, judges, and legal scholars, but is not currently well understood. This Article develops a formal model of the role of sets in a common-law system defined by a recursive relationship between cases and rules. In doing so it demonstrates how conceiving of legal doctrines as a universe of discourse comprising (sometimes nested or overlapping) sets of cases can clarify the logical structure of many so-called “hard cases,” and help organize the available options for resolving them according to their form. This set-theoretic model can also help to cut through ambiguities and clarify debates in other areas of legal theory—such as in the distinction between rules and standards, in the study of interpretation, and in the theory of precedent. Finally, it suggests that recurring substantive concerns in legal theory—particularly the problem of discretion—are actually emergent structural properties of a system that is composed of “sets all the way down.”

And the link: http://ssrn.com/abstract=2830918

And a taste of what’s inside:

[Image: a figure from the draft]

I’ll be grateful for comments, suggestions, and critiques from anyone with the patience to read the draft.
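
For a concrete (if cartoonish) handle on the framing, here is a toy sketch in Python. The doctrines and case names are hypothetical, and the code is mine rather than anything that appears in the Article:

```python
# Toy sketch of the idea that legal doctrines can be modeled as sets of
# cases within a universe of discourse. All names here are hypothetical.

# The universe of discourse: every decided case in our toy system.
universe = {"Adams", "Baker", "Carter", "Davis", "Evans", "Fox"}

# Doctrines as sets of the cases they govern.
likelihood_of_confusion = {"Adams", "Baker", "Carter", "Davis"}
dilution = {"Carter", "Davis", "Evans"}

# Nested sets: a sub-doctrine whose cases all fall within a parent doctrine.
initial_interest_confusion = {"Baker", "Carter"}
assert initial_interest_confusion <= likelihood_of_confusion  # subset test

# Overlapping sets: the "hard cases" are those claimed by two doctrines
# whose rules may point in different directions.
hard_cases = likelihood_of_confusion & dilution
print(sorted(hard_cases))  # ['Carter', 'Davis']

# Cases governed by neither doctrine fall to a residual category.
unclaimed = universe - (likelihood_of_confusion | dilution)
print(sorted(unclaimed))  # ['Fox']
```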

Home Stretch

Today was the deadline for me to submit a draft presentation on the research I’ve been doing in Japan for the past six weeks. The deadline pressure explains why I haven’t posted here in a while. The good news is that I was able to browbeat my new (and still growing) dataset into sufficient shape to generate some interesting insights, which I will share with my generous sponsors here at the Institute for Intellectual Property next week, before heading home to New York.

I am not at liberty to share my slide deck right now, but I can’t help but post on a couple of interesting tidbits from my research. The first is a follow-up on my earlier post about the oldest Japanese trademark. I had been persuaded that the two-character mark 重九 was in fact a form of the three-character mark (大重九), a brand of Chinese cigarette. Turns out I was wrong. It is, in fact, the brand of a centuries-old brewer of mirin–a sweet rice wine used in cooking. (The cigarette brand is also registered in Japan, as of 2007–which says something about the likelihood-of-confusion standard in Japanese trademark law.) And as I found out, there’s some question as to whether this mark (which, read right to left, reads “Kokonoe”) really is the oldest Japanese trademark. There’s competition from the hair-products company Yanagiya, which traces its lineage back 400 years to the court physician of the first Tokugawa Shogun; and also from a sake brewer from Kobe who sells under the “Jukai” label. Which is the oldest depends on how you count: by registration number, by registration date, or by application date, as the sketch below illustrates. Anyway, all of them would have taken a backseat to that historic American brand, Singer–but the company allowed its oldest Japanese trademark registration to lapse six years ago.
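
To see how the choice of sort key drives the answer, here is a tiny sketch; the registration numbers and dates below are placeholders, not the real records:

```python
# Placeholder data: the numbers and dates are hypothetical, chosen only to
# show that each criterion can crown a different "oldest" registered mark.
marks = [
    {"name": "Kokonoe",  "reg_no": 2, "reg_date": "1884-11-01", "app_date": "1884-09-28"},
    {"name": "Yanagiya", "reg_no": 1, "reg_date": "1884-12-01", "app_date": "1884-10-20"},
    {"name": "Jukai",    "reg_no": 3, "reg_date": "1884-10-15", "app_date": "1884-10-01"},
]

# ISO-format date strings sort chronologically, so min() works for all keys.
for key in ("reg_no", "reg_date", "app_date"):
    oldest = min(marks, key=lambda m: m[key])
    print(f"oldest by {key}: {oldest['name']}")
# -> Yanagiya by registration number, Jukai by registration date,
#    Kokonoe by application date (given these placeholder values).
```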

The other tidbit is my first attempt at a map-based data visualization, which I built using Tableau, a surprisingly handy software tool with a free public version. I used it to visualize how trademark owners from outside Japan try to protect their marks in Japan–specifically, whether they seek registrations via Japan’s domestic registration system or via the international registration system established by the Madrid Protocol. Here’s what I found:

[Image: world map of Japanese trademark applications by country of origin, 2001-2014, colored by Madrid Protocol share]
The size of each circle represents an estimate of the number of applications for Japanese trademark registrations from each country between 2001 and 2014. The color represents the proportion of those applications that were filed via the Madrid Protocol (dark blue is all Madrid Protocol; dark red is all domestic applications; paler colors are a mix). The visualization isn’t perfect because not all countries acceded to the Madrid Protocol at the same time–some acceded in the middle of the data collection period, and many have never acceded. (When I have more time maybe I’ll try to figure out how to add a time-lapse animation to bring an extra dimension to the visualization.) Still, it’s a nice, rich, dense presentation of a large and complex body of data.
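
For anyone who wants to tinker with a similar map outside Tableau, here is a minimal sketch using plotly in Python. The tool is my choice rather than the original toolchain, and the figures below are made-up placeholders, not the actual dataset:

```python
# Minimal sketch of the described encoding: circle size ~ application
# volume, color ~ share of applications filed via the Madrid Protocol.
# The data below are hypothetical placeholders.
import pandas as pd
import plotly.express as px

df = pd.DataFrame({
    "iso3":         ["USA", "DEU", "KOR", "AUS"],  # country of origin
    "applications": [42000, 15000, 21000, 4000],   # JP applications, 2001-2014
    "madrid_share": [0.35, 0.80, 0.10, 0.55],      # fraction filed via Madrid
})

fig = px.scatter_geo(
    df,
    locations="iso3",               # ISO-3 country codes
    size="applications",            # circle size ~ application volume
    color="madrid_share",           # color ~ Madrid Protocol share
    color_continuous_scale="RdBu",  # red = domestic filings, blue = Madrid
    range_color=(0.0, 1.0),
    projection="natural earth",
)
fig.show()
```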