Valuing Progress: Forthcoming 2018 from Cambridge University Press

I’m very pleased to announce that the book project I have been plodding away at for over two years is now under contract with Cambridge University Press. Its working title is Valuing Progress: A Pluralist Approach to Knowledge Governance. Keep an eye out for it in late 2018, and tell your librarian to do likewise!

Bits and pieces of Valuing Progress have appeared on this blog and elsewhere as it has developed from a half-baked essay into a monograph-sized project:

  • I presented my first musings about the relationship between normative commitments regarding distribution and the choice of a knowledge-governance regime as the opening plenary at IPSC in Berkeley; those musings will be more fully developed in Chapter 4 of the book: “Reciprocity.”
  • My exploration of our obligations to future persons, and the implication of those obligations for our present-day knowledge-governance policies, used analogous arguments in environmental policy as an early springboard. Deeper consideration of our obligations to the future led me to Derek Parfit’s Non-Identity Problem, at first through the lens of public health policy. Because knowledge governance–like environmental stewardship and global health policy–is a cooperative social phenomenon spanning timescales greater than any single human lifetime, the problem of future persons is one any theory of knowledge governance must engage. I made my first effort to do so at the 2015 Works-In-Progress in Intellectual Property (WIPIP) Conference at the University of Washington, and presented a more recent take at NYU’s 2017 Tri-State IP Workshop. My fuller treatment of the issue will appear in Chapter 7 of Valuing Progress: “Future Persons.”
  • Finally, the driving theoretical debate in IP lately has been the one between Mark Lemley, champion of consequentialism, and Rob Merges, who has lately turned from consequentialism to nonconsequentialist philosophers such as Locke and Rawls for theoretical foundations. My hot take on this debate was generative enough to justify organizing a symposium on the issue at the St. John’s Intellectual Property Law Center, where I serve as founding director. I was gratified that both Professors Lemley and Merges presented on a panel together, and that I was able to use the opportunity to more fully introduce my own thoughts on this debate. My introduction to the symposium issue of the St. John’s Law Review forms the kernel of Chapter 2 of Valuing Progress: “From Is to Ought.”

Other chapters will discuss the incommensurability of values at stake in knowledge governance, the relevance of luck and agency to our weighing of those values, the widening of our moral concern regarding the burdens and benefits of knowledge creation to encompass socially remote persons, and the role of value pluralism in shaping political institutions and ethical norms to reconcile these values when they inevitably conflict. The result, I hope, will introduce my colleagues in innovation and creativity law and policy to a wider literature in moral philosophy that bears directly on their work. In doing so, I hope to help frame the distinction between–and the appropriate domains of–empirical and normative argumentation, to point a way out of our increasingly unhelpful arguments about 18th-century philosophy, and to introduce a more nuanced set of normative concerns that engage with the messiness and imperfection of human progress.

I am extremely grateful to everyone who has helped me to bring Valuing Progress to this important stage of development, including Matt Gallaway at CUP, the organizers of conferences at which I’ve had the opportunity to present early pieces of the project (particularly Peter Menell, Pam Samuelson, Molly Shaffer Van Houweling, and Rob Merges at Berkeley; Jennifer Rothman at Loyola of Los Angeles; Jeanne Fromer and Barton Beebe at NYU; Zahr Said at the University of Washington; Irina Manta at Hofstra; and Paul Gugliuzza at Boston University). I am also grateful for the support of St. John’s Law School, my dean Mike Simons, and my colleagues who have served as associate dean for faculty scholarship as this project has been in development: Marc DeGirolami and Anita Krishnakumar. Many more friends and colleagues have offered helpful feedback on early drafts and conversation about points and arguments that will find their way into the manuscript; they can all expect warm thanks in the acknowledgments section of the finished book.

But first, I have to finish writing the thing. So, back to work.

Derek Parfit, RIP

Reports are that Oxford philosopher Derek Parfit died last night. Parfit’s philosophy is not well known or appreciated in my field of intellectual property, which is only just starting to absorb the work of John Rawls. This is something I am working to change, as the questions Parfit raised about our obligations to one another as persons–and in particular our obligations to the future–are deeply implicated in the policies intellectual property law is supposed to serve. Indeed, when I learned about Parfit’s death, I was hard at work trying to finish a draft of a book chapter that I will be presenting at NYU in less than two weeks. (The chapter is an extension of a presentation I made at WIPIP this past spring at the University of Washington.)

Parfit’s thoughts on mortality were idiosyncratic, based on his equally idiosyncratic views of the nature and identity of persons over time. I must admit I have never found his account of identity as psychological connectedness to be especially useful, but I have always found his almost Buddhist description of his state of mind upon committing to this view to be very attractive. So rather than mourn Parfit, I prefer to ruminate on his reflections on death, from page 281 of his magnificent book, Reasons and Persons:

[Screenshot: the passage on death from page 281 of Reasons and Persons]

If Parfit is right, then my own experiences, and those of others who have learned from his work, give us all reason to view the fact of his physical death as less bad than we might otherwise–and to be grateful. I can at least do the latter.

The Japan Trademarks Dataset: Presentation Slides

The Institute of Intellectual Property has graciously allowed me to share the slide deck from my summer research project on Japan’s trademark registration system. The slide deck includes the text of the presentation in the presenter notes, and you can download it here.

The photo leading this post was taken during my presentation at IIP in Tokyo. It shows me with my favorite visual aid: a bottle of (excellent) mirin bearing one of the contenders for Japan’s oldest registered trademark, Kokonoe Sakura.

“Legal Sets” Posted to SSRN

A little over a year ago, I was noodling over a persistent doctrinal puzzle in trademark law, and I started trying to formulate a systematic approach to the problem. The system quickly became bigger than the problem it was trying to solve, and because of the luxuries of tenure, I’ve been able to spend much of the past year chasing it down a very deep rabbit hole. Now I’m back, and I’ve brought with me what I hope is a useful way of thinking about law as a general matter. I call it “Legal Sets,” and it’s my first contribution to general legal theory. Here’s the abstract:

In this Article I propose that legal reasoning and analysis are best understood as being primarily concerned, not with rules or propositions, but with sets. The distinction is important to the work of lawyers, judges, and legal scholars, but is not currently well understood. This Article develops a formal model of the role of sets in a common-law system defined by a recursive relationship between cases and rules. In doing so it demonstrates how conceiving of legal doctrines as a universe of discourse comprising (sometimes nested or overlapping) sets of cases can clarify the logical structure of many so-called “hard cases,” and help organize the available options for resolving them according to their form. This set-theoretic model can also help to cut through ambiguities and clarify debates in other areas of legal theory—such as in the distinction between rules and standards, in the study of interpretation, and in the theory of precedent. Finally, it suggests that recurring substantive concerns in legal theory—particularly the problem of discretion—are actually emergent structural properties of a system that is composed of “sets all the way down.”

And the link: http://ssrn.com/abstract=2830918

And a taste of what’s inside:

[Screenshot: an excerpt from the draft]

I’ll be grateful for comments, suggestions, and critiques from anyone with the patience to read the draft.

Home Stretch

Today was the deadline for me to submit a draft presentation on the research I’ve been doing in Japan for the past six weeks. The deadline pressure explains why I haven’t posted here in a while. The good news is that I was able to browbeat my new (and still growing) dataset into sufficient shape to generate some interesting insights, which I will share with my generous sponsors here at the Institute of Intellectual Property next week, before heading home to New York.

I am not at liberty to share my slide deck right now, but I can’t help posting a couple of interesting tidbits from my research. The first is a follow-up on my earlier post about the oldest Japanese trademark. I had been persuaded that the two-character mark 重九 was in fact a form of the three-character mark (大重九), a brand of Chinese cigarettes. Turns out I was wrong. It is, in fact, the brand of a centuries-old brewer of mirin–a sweet rice wine used in cooking. (The cigarette brand is also registered in Japan, as of 2007–which says something about the likelihood-of-confusion standard in Japanese trademark law.) And as I found out, there’s some question as to whether this mark (which, read right to left, reads “Kokonoe”) really is the oldest Japanese trademark. There’s competition from the hair-products company Yanagiya, which traces its lineage back 400 years to the court physician of the first Tokugawa Shogun, and also from a sake brewer from Kobe who sells under the “Jukai” label. Which is the oldest depends on how you count: by registration number, by registration date, or by application date. Anyway, all of them would have taken a backseat to that historic American brand, Singer–but the company allowed its oldest Japanese trademark registration to lapse six years ago.

The other tidbit is my first attempt at a map-based data visualization, which I built using Tableau, a surprisingly handy software tool with a free public build. I used it to visualize how trademark owners from outside Japan try to protect their marks in Japan–specifically, whether they seek registrations via Japan’s domestic registration system, or via the international registration system established by the Madrid Protocol. Here’s what I’ve found:

[Map visualization: Japanese trademark applications by country of origin, 2001–2014, colored by Madrid Protocol share]
The size of each circle represents an estimate of the number of applications for Japanese trademark registrations from each country between 2001 and 2014. The color represents the proportion of those applications that were filed via the Madrid Protocol (dark blue is all Madrid Protocol; dark red is all domestic applications; paler colors are a mix). The visualization isn’t perfect because not all countries acceded to the Madrid Protocol at the same time–some acceded in the middle of the data collection period, and many have never acceded. (When I have more time maybe I’ll try to figure out how to add a time-lapse animation to bring an extra dimension to the visualization.) Still, it’s a nice, rich, dense presentation of a large and complex body of data.
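For anyone curious about the wrangling behind the map: the country-level aggregation is not complicated. Here’s a minimal Stata sketch, assuming a hypothetical application-level file apps.dta with variables country, appyear, and a 0/1 indicator madrid for Madrid Protocol filings (the real dataset is messier):

```stata
* Minimal sketch (hypothetical file and variable names): build the
* country-level summary behind the map -- application counts and the
* share of applications filed via the Madrid Protocol.
use apps.dta, clear
keep if inrange(appyear, 2001, 2014)      // the window shown in the map
gen one = 1
collapse (sum) n_apps = one (mean) madrid_share = madrid, by(country)
gsort -n_apps                             // largest filers first
export delimited using "madrid_map_input.csv", replace   // hand off to Tableau
```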


Turning a Corner

It has been a rough week of coding, processing, and debugging. But at long last, tonight I’m running the last two scripts I need to parse the last of the 330+ gigabytes of data I received three weeks ago, and I’ve already tested them, so I’m pretty confident they’ll work. By tomorrow, if all goes as planned, I’ll have all the data I’m going to be using on this project (a lean 60 GB or so) imported into Stata, where I can slice and dice it however I please. At exactly the halfway point of my residency in Tokyo, this is a major milestone.
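For the curious, the import step itself is the easy part. Here’s a minimal sketch of the kind of loop I mean, assuming (hypothetically) that the parsing scripts leave tab-delimited text files in a parsed/ directory:

```stata
* Sketch (hypothetical paths): read each parsed text file into Stata
* and save it in Stata's native .dta format for later processing.
local files : dir "parsed" files "*.txt"
foreach f of local files {
    import delimited using "parsed/`f'", clear varnames(1)
    local out = subinstr("`f'", ".txt", ".dta", .)
    save "dta/`out'", replace
}
```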

The next step is some finer-grained cleaning and de-duplicating of this data, followed by some additional coding to structure it in a useful way (as you can see in the photo, I’ve already started sketching out my file trees). Then I’ll be able to describe and analyze what I’ve built. All of this will take time–the most primitive observation identifier in my data is the individual trademark application number, and it looks like I’ll be dealing with about 4.5 million of them, give or take. And each application number will have multiple records associated with it to capture lots of nitty-gritty trademark-y information like changing ownership and legal representation, renewals, divisional applications, goods and services classifications, foreign and international priority claims, and so on. Processing all that information takes time, and requires a lot of attention to detail. But today I’m feeling good. Today, I feel as confident as I ever have that this project is going to succeed.
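To give a flavor of the structure I’m aiming for, here’s a hedged sketch with hypothetical file and variable names: one master row per application number, with the long detail files keyed to it:

```stata
* Sketch (hypothetical file and variable names): one master row per
* application number, with long per-topic detail files keyed to it.
use applications.dta, clear
isid app_no                              // confirm one row per application number
merge 1:m app_no using goods_classes.dta, keep(master match) nogenerate
* ownership changes, renewals, priority claims, etc. live in their own
* long files, each keyed by app_no, and are merged in only as needed
save applications_classes.dta, replace
```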

So here is a first fruit of my research. The data I’m working with only goes back 15 years, but for any trademark registration still in force during that period, I have a fair amount of historical data. The earliest application date I’ve found in the data I’ve imported so far is July 31, 1890. That application–which became Japan Trademark Registration Number 521–is for the mark “重九”, which means literally nothing to me. But I asked around the office, and fortunately I have a colleague from Beijing here in Tokyo, who tells me 重九 is actually a Chinese brand–for cigarettes:

http://www.etmoc.com/eWebEditor/2011/2011090515413693.jpg

重九 translates roughly to “double-nine”, and the additional character (which apparently always accompanies the mark in its current use) translates roughly to “big” (i.e., “Big Double-Nine” Cigarettes). The mark was last renewed in Japan on March 28, 2015. Given that I’m here to study international aspects of intellectual property as they pertain to Japan, the fact that the earliest mark on record appears to be foreign is an interesting development.
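The earliest-date check itself is trivial once the records are in Stata; here’s a sketch with hypothetical variable names (app_date as a Stata daily date, app_no as a string):

```stata
* Sketch: surface the oldest application in the imported data.
use applications.dta, clear
format app_date %td                 // display as a human-readable date
sort app_date
list app_no app_date in 1, clean    // the July 31, 1890 application behind Reg. No. 521
```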

Now You’re Just Messing With Me

At some point in 2014, without any warning so far as I can tell, the Japan Patent Office changed the file-naming convention for its digital archives. Archives that used to be stored under filenames such as “T2014-20(01-01)20150114.ISO” are now stored under filenames such as “T2014-21(01_01)20150121.ISO”.

Catch the difference?  Yeah, I didn’t either. Until I let my code–which was based on the old naming convention–run all day. Then I found out the last two years’ data had corrupted all my output files, wiping out 7 GB of data.
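Going forward, the obvious fix is to tolerate both conventions when collecting archive filenames. Here’s a sketch in Stata (hypothetical directory name), keying on what appears to be the actual change–the separator inside the parentheses:

```stata
* Sketch: count how many JPO archives use each naming convention.
* The visible change is the separator inside the parentheses:
* "(01-01)" in the old filenames vs. "(01_01)" in the new ones.
local isos : dir "archives" files "*.ISO"
local old 0
local new 0
foreach f of local isos {
    if strpos("`f'", "(01-01)") local old = `old' + 1
    if strpos("`f'", "(01_01)") local new = `new' + 1
}
display "old-style archives: `old'    new-style archives: `new'"
```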

Think Different: More Translation Hijinx

I’ve been trying for about three days to figure out why one of the scripts I was given to parse all this government data has been failing when I try to run it. Because the researchers who gave me the scripts commissioned them from some outside programmers, they can’t help me debug it. So I’ve been going line-by-line through the code and cross-referencing every command, option, and character with online manuals and forums.

My best guess is that my problem is (probably) once again a failure of translation. The code I’ve been given was written for Linux, a (relatively) open UNIX-like platform. Mac OS X–which I use–is also built on top of a UNIX architecture, which power users can access via the built-in Terminal application. But Apple uses an idiosyncratic and somewhat dated version of the UNIX shell scripting language–this is the computer language you can use to tell the computer to do stuff with the files stored on it. (“Think different” indeed.) There are tons of tiny differences between Apple’s shell language and the open standard implemented in Linux, and any one of them could be responsible for causing my code to fail. I spent the better part of two days tweaking individual characters, options, and commands in this script, to no avail. Then I tried a patch to update Apple’s scripting language to more closely mirror the one used by Linux. Still no luck. And three days of my precious seven-week residency in Tokyo gone.

So I gave up. I’ll write my own code instead.

The script I’ve been trying to debug is one of a series of algorithms used to collate and deduplicate several years’ worth of parsed data. But I can create those kinds of algorithms myself, once I know how the parsed data is structured. The hard part was parsing the data in the first place to extract it from its arcane government archive format–and the scripts that do that worked a treat, once I figured out how they function. Besides which, the deduplication strategy used by the researchers who gave me these troublesome scripts is a bit more heavy-handed than I’d use if I were starting from scratch. Which I just did–in Stata, the statistical software package I’ll use to analyze the data, which uses a native scripting language I’m much more familiar with.
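For what it’s worth, the collate-and-dedup step is short in Stata. Here’s a minimal sketch, with hypothetical paths, that stacks the yearly files and drops only exact duplicates–the lighter touch I have in mind:

```stata
* Sketch (hypothetical paths): stack the yearly .dta files and drop
* only rows that are identical on every variable.
clear
local dtas : dir "dta" files "*.dta"
foreach f of local dtas {
    append using "dta/`f'"
}
duplicates drop
save trademarks_all.dta, replace
```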

[Screenshot: the beginnings of the new Stata script]

This new script seems to be working; now I just need a good solid stretch of time to allow my home-brewed code to process the several gigabytes of data I’m feeding it. Unfortunately, time is in short supply–I’m in week 3 of my 7-week stay, and I’m supposed to present my findings to my hosts during my last week here. So from here on out, days are for coding and nights are for processing.

It’ll get done. Somehow.

Flying Away on a Wing and a Prayer

If you’re roughly my age, you’ll remember this guy:

[Image: Ralph Hinkley, The Greatest American Hero]

If not, meet Ralph Hinkley, The Greatest American Hero. Ralph (played by William Katt) is the protagonist of a schlocky 1980s sitcom with the greatest television theme song ever written. The premise of the show is that Ralph was driving through the desert one evening when some aliens decided to give him a supersuit that gives him superpowers. Unfortunately, Ralph lost the instruction manual for the suit, so he can never get it to work quite right. He nevertheless attempts to use the suit’s powers for good, and hilarity–or what passed for it on early-80s network television–ensues. In one episode, a replacement copy of the suit’s instruction manual is found, but it’s written in an indecipherable alien language. What could have been a tremendous force for good becomes a frustrating reminder of one’s own shortcomings.

As you know if you’ve been following my recent posts, I’m currently working with a treasure trove of Japanese government data. I’ve been given a helpful translation of the introductory chapters of the data specification. I’ve been given an incredibly helpful set of computer scripts to parse the data, and I’ve gotten them to (mostly) work. And now that I’m at the point where I’m about ready to start revising the computer scripts to extract more and different data, I’ve got to start deciphering the various alphanumeric codes that stand in as symbols for more complex data. I’m nearly two weeks into a seven-week research residency, and I feel like I’m finally approaching the point where I can actually start doing something instead of just getting my bearings. It’s exciting. But then, well…

Up to this point, I’ve been working with a map of the data structure that is organized (like the data itself) with English-language SGML tags (if you know anything about XML, this will look familiar):

[Screenshot: the data-structure map, showing SGML tags alongside an “Index” column]

See that column that says “Index”? The five-character sequences in that column map to a list of codes that correspond to definitions for the types of data in this archive. These definitions–set forth in a series of tables–allow the data to be stored using compact sequences that can then be expanded and explained by reference to the code definition tables. When you’re dealing with millions of data records, compactness is pretty important.
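Mechanically, once a given code table has been typed up as its own little dataset, expanding the compact codes is just a merge. Here’s a hedged sketch with hypothetical file and variable names:

```stata
* Sketch (hypothetical names): expand compact codes by merging in a
* hand-built definition table with variables code and description.
use parsed_records.dta, clear
merge m:1 code using code_definitions.dta, keep(master match) nogenerate
label variable description "Human-readable meaning of the compact code"
```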

So, “B0010” tells you that the data inside this tag (the “application-number” tag) is encoded according to definition table B0010. So I’ll just flip through the code list and…

[Screenshot: the code list, entirely in Japanese]

Uh… Hmm.

Well, that’s not so bad; I can just search this document for “B0010” (it would be a sight easier if the codes were in order!) and then copy and paste the corresponding cell from the first column into Google Translate (it’s not a terrifically accurate translator, but it’ll do in a pinch). The description corresponding to B0010 is “出願番号,” which Google translates to “Application Number.” That makes sense; after all, the code is used for data appearing inside the <application-number> SGML tag. So now I just need to look up the code table for 出願番号/B0010 to learn how to decipher the data inside the <application-number> tag, and…

[Screenshot: code table B0010]

Hmm.

This one actually makes some sense to me. It looks like data in code B0010 consists of a 10-character sequence, in which the first four characters correspond to an application year and the last six characters correspond to an application serial number. Simple, really.
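In Stata terms, that’s a two-line split. A sketch, assuming the raw value sits in a string variable app_no in a hypothetical applications.dta:

```stata
* Sketch: split the 10-character B0010 value into its two pieces --
* a 4-character application year and a 6-character serial number.
use applications.dta, clear
gen app_year   = substr(app_no, 1, 4)
gen app_serial = substr(app_no, 5, 6)
destring app_year, replace        // numeric year, for sorting and filtering
```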

Of course, there are dozens of these codes in the data. And not all of them are so obvious. Some of them even map to other codes that are even more obscure. For example, code A0050–which appears all over this data–is described as “中間記録”. Google translates this as “Intermediate Record”. Code table A0050, in turn, maps to three other code tables–C0840, C0850, and C0870. The code table for C0840 is basically eleven pages of this:

[Screenshot: a page of code table C0840]

(Sigh.)

In every episode of The Greatest American Hero, there’s a point where Ralph’s malfunctioning suit starts getting in the way–hurting more than helping. Like when he tries to fly in to rescue someone from evil kidnappers and ends up crash-landing, knocking himself out, and making himself a hostage. Nevertheless, with good intentions, persistence, ingenuity, and the help of his friends, he always manages to dig himself out of whatever mess he’s gotten himself into and save the day.

So… yeah. I’m going to figure this one out.