
Position Paper for the W3C DRM Workshop
Sophia Antipolis, France
22-23 January 2001


Digital Rights (mis)Management

Judie Mulholland (fn 1)
FSU School of Information Studies
<judiemul@kc-inc.net>



Abstract


This paper outlines some of the concerns that will need to be resolved in order to design and develop viable systems for IP protection and enforcement. It then briefly elaborates some of the challenges and obstacles that may be faced during implementation. Finally, this author concludes with a consideration of what kind of role the W3C can play in fostering the emergence of an acceptable DRM regime.


"We shape our tools and they in turn shape us" — Marshall McLuhan.

Introduction

One can't help wondering if current approaches to "intellectual property (IP) protection and enforcement" will strike future generations as archaic and overly reactionary. Fifty years from now, will our successors respond in a slightly amused fashion when they recall "DRM" à la 2001, much as we do when we think of turn-of-the-century flying machines and horseless carriages? Revolutionary concepts in their time, but in hindsight these inventions were mere precursors to the broader notions of airplanes and automobiles, transportation systems, and regulatory agencies. Granted, when technological innovations are first introduced, it is not unusual to discover that they are being used in ways never anticipated by their creators. In the present context, witness the emergence of peer-to-peer file sharing. A year ago, who would have imagined that the recording industry would end up grappling with the impact of napsterization and the threat it poses to its raison d'être?

Before outlining where I stand, let me point out that it is beyond the scope of this short essay to broach the myriad issues involved in developing new approaches for setting up a viable DRM infrastructure. In the opinion of this observer, it is only a matter of time before most of the technical wrinkles are worked out, particularly if the open source community succeeds in setting a "nonproprietary" course of action. Plus, I am confident that other workshop attendees will adequately cover all the pros and cons of what is, or isn't, an appropriate architecture, protocol, mechanism, and the like.

Instead, I plan to take a contrarian view and put forward for discussion and debate the possibility of encountering push-back (fn 2) and missed opportunities should we allow ourselves to become too narrowly focused or locked in to a particular paradigm. Not surprisingly, when it comes to the protection and enforcement of digital objects, we are still "betwixt and between." Act too soon and we'll be saddled with quaint artefacts, e.g., technologies that lose out in a standards war [5: 261-296] or succumb to ruinous externalities [5: 13-17] (fn 3). But act too late and we're apt to spin out of control, ending up with the DRM equivalent of urban sprawl, traffic congestion and road rage. Unfortunately for those of us working in the trenches, the margin for error doesn't permit much latitude. In fact, the naysayers would have us believe it's too late already!

For the remainder of this paper, I will address the following issues: (i) system requirements; (ii) design and development challenges; (iii) obstacles to diffusion and deployment; and finally, (iv) the role of the W3C in spearheading DRM standards and solutions.

Requirements

Regardless of which combination of technologies or approaches emerges as a standard or dominant design, it is my understanding that an "equitable" system for managing digital rights will need to:

Challenges

Based on the current state of the art, I don't think anyone will deny that we, as system designers/developers, are approaching the impossible. Without going into all the minutiae, it would appear that we are trying to solve a problem that spans several orders of magnitude (e.g., time scales that range from milliseconds to 70+ years), not to mention numerous constraints (e.g., the existing legal system, business practices, user norms) and downside risks (e.g., the threat of litigation, piracy, etc.). Due to the complexities involved, there is a high probability of committing a "Type III Error," that is, solving the wrong problem (fn 5), since at this stage of the game we can't be sure we are even asking the "right" question. However, without apologizing, consider what we're up against. As an unfolding phenomenon, DRM is characterized by dynamic change, nonlinearity, tightly coupled interactions, path dependence, and network effects. When designing/developing systems for managing rights (clearances, authorizations, permissions, and so forth), it would behoove us to take into consideration:

Obstacles

The gist of this position paper can be summed up as: there are no free rides. Despite all the hype and hoopla, this wave of innovation is bound to encounter its share of false starts and U-turns. The trick is discovering not only where and when, but how and why. To give some idea of what may be in store, below is a sampling of "speed bumps and potholes" (i.e., counter-intuitive effects) that are likely to be encountered along the way (fn 6).

  1. Today's problems come from yesterday's "solutions." The DRM problem is exacerbated by the fact that neither the Internet nor the WWW was designed to accommodate the full-scale management of digital rights.

  2. The harder you push, the harder the system pushes back. This refers to the idea of compensating feedback, AKA resistance. It turns out that even our best efforts to redesign or improve a process often call forth responses (from the system) that offset the benefits of the intervention. Invariably, it seems that the more time and effort that is spent, the more time and effort that is required. In the case of DRM, it seems that every time we try to clamp down on illegal copying, the user community comes back with a new way of defeating or compromising copy-protection mechanisms and devices.

  3. Behavior grows better before it grows worse. It has long been acknowledged that well-intentioned people seeking to solve a problem often make matters worse. Without realizing it, they may further undermine the very system they are attempting to stabilize, and policies meant to improve a situation may actually aggravate it. According to Senge, low-leverage interventions would be much less alluring if it were not for the fact that, in the short term, many of them actually work. Eventually, however, compensating feedback kicks in, and the ramifications of earlier actions come back to haunt you (a toy simulation of this dynamic is sketched just after this list).
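
To make the dynamic described in items 2 and 3 concrete, here is a minimal stock-and-flow sketch of compensating feedback, written in Python in the general spirit of a system dynamics model (see fn 1). It is purely illustrative: the stocks, parameter values, and functional forms are assumptions made for this example, not results from any existing model of rights management.

    # A purely illustrative model of compensating feedback: enforcement pressure
    # suppresses unauthorized copying at first, but circumvention skill grows in
    # response and erodes the intervention's effect. All numbers are hypothetical.

    def simulate(steps=40, dt=1.0):
        copying = 100.0        # stock: level of unauthorized copying (arbitrary units)
        circumvention = 0.0    # stock: accumulated ability to defeat copy protection
        enforcement = 0.8      # policy lever: constant enforcement pressure
        history = []
        for _ in range(steps):
            # enforcement removes copying, but its effect is blunted by circumvention
            suppression = enforcement * copying * (1.0 - min(circumvention, 0.9))
            # copying regrows toward its unconstrained level of 100
            regrowth = 0.15 * (100.0 - copying)
            copying += dt * (regrowth - suppression)
            # circumvention skill accumulates in proportion to enforcement pressure
            circumvention += dt * 0.05 * enforcement * (1.0 - circumvention)
            history.append(round(copying, 1))
        return history

    if __name__ == "__main__":
        trace = simulate()
        print(trace[:5], "...", trace[-3:])   # sharp initial drop, then a slow rebound

Run as written, the level of copying drops sharply in the first few steps and then creeps back up as the circumvention stock accumulates, reproducing the better-before-worse pattern described above.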

Role of the W3C

First of all, it should be stated up front that, as a governing body, the W3C would be remiss if it didn't assume an active part in helping to establish a credible infrastructure for DRM. Together with other non-governmental stakeholders (e.g., CNI, CDT, CPSR, EFF, EPIC, etc.), the W3C has a crucial role to play in ensuring that technical standards remain fair and open. By the same token, if members of the W3C concentrate only on technical issues and exclude from their purview the likely impact of these tools and techniques on society as a whole (and vice versa), they will be passing up the chance to help shape the contours of a new regime. And while some may question what the long-term impact has to do with setting standards and writing specs, lest we forget, the Y2K debacle came about as a result of routine decisions made to save money and cut costs.

In a similar vein, all too often, decisions that will have a broad and overwhelming impact are made behind closed doors by persons (e.g., IP lawyers, vendors, government officials and other vested interests) who have a direct stake in the outcome. Largely absent from the discussion is the general public, who stand to win or lose depending on which approach is finally selected. As a relatively neutral forum, the W3C is well positioned to ensure that any solution regarding the deployment of DRM works to the benefit of all parties and not just the incumbents.

For all intents and purposes, programmers have become tacit law-makers. They write the rules that the rest of us are bound by. According to Lessig, code writers determine: (i) what the defaults of the Internet will be, (ii) whether privacy will be protected, and (iii) the degree to which anonymity will be allowed. The extent to which code remains "open" will determine how much power governments are able to wield and whether or not rights-holders will be able to successfully constrain behavior [1: 60]. If Lessig's assertions are true, then there is all the more reason for the W3C to accept responsibility and assume leadership of DRM before it is ceded to public authorities or hijacked by private interests.

Finally, it is my sincere hope that the W3C will take the initiative and seriously study both the short- and long-term implications of DRM. According to a report by the National Research Council (NRC), "[c]hoices made now will have long-lasting consequences, and attention must be paid not only to their technological merit, but also to their social and economic impacts" [2: Summary]. In fact, some observers believe (myself included) that we are on the verge of a massive unbundling of rights and, as a result, a large number of products and services will no longer be bought and sold as stand-alone commodities (fn 7). Instead, they will be reconstituted and reused in a kaleidoscopic fashion to take advantage of underlying relationships, ushering in what Rifkin and others have dubbed, The Age of Access [3]. Just what this will mean for a system of DRM derived from IP principles and precedents based on exclusion, nonrivalry and one-size-fits-all remains to be seen. However, unless steps are taken to rectify the imbalance, we had better brace ourselves for another disaster-in-the-making.

On the other hand, because there hasn't been enough investment in research to help understand how recent changes will affect society, the NRC has concluded that "[t]he United States, and indeed the world, are facing critical policy issues — involving intellectual property rights, privacy, free speech, education, and other crucial concerns — armed with very little understanding and analysis of the consequences of possible choices" [2: Chapter 1]. Should we fail to fully examine the implications of choices made on the basis of competitive advantage and short-term gain, we will have demonstrated once again that plus ça change, plus c'est la même chose (fn 8).


Footnotes

(fn 1) The author is a Ph.D. candidate in the School of Information Studies at Florida State University. She is presently developing a simulation model (based on system dynamics) to look at the long-term effects of rights management. Her dissertation is entitled "Safeguarding Innovation and Access: A Dynamic Model of Rights Management." A copy of the research prospectus is located at http://www.kc-inc.net/~judiemul/dissertation/.

(fn 2) Stefik writes that at the "edge" of technologies of connection, there is often a conflict between global and local values. This conflict evokes a form of resistance called "push back," as people seek stability and attempt to preserve the status quo [6: 3].

(fn 3) The unintended "spillover" of any good or service is called an externality. If the spillover is positive (e.g. a research breakthrough, a new innovation, a cure for disease), then it is regarded as a benefit. On the other hand, if the spillover is negative (e.g. pollution, unsolicited e-mail, invasion of privacy), then the externality is considered to be a nuisance or a cost to society.

(fn 4) A "handle" is identifying information that specifies levels of permission.
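
For illustration only, one way such a "handle" might be represented is as a small data structure pairing an object identifier with the permission levels it grants. The field names and permission vocabulary below are invented for this sketch and are not drawn from any particular specification.

    from dataclasses import dataclass, field

    @dataclass
    class Handle:
        object_id: str          # persistent identifier for the digital object
        rights_holder: str      # party that issued the handle
        permissions: dict = field(default_factory=dict)   # e.g., {"read": True, "lend": False}

        def allows(self, action: str) -> bool:
            # an action is permitted only if the handle explicitly grants it
            return bool(self.permissions.get(action, False))

    # usage: a handle that grants reading but not lending or resale
    h = Handle("urn:example:12345", "Example Press", {"read": True, "lend": False})
    print(h.allows("read"), h.allows("resell"))   # prints: True False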

(fn 5) Recall from Statistics 101: a Type I error (alpha) occurs when a hypothesis is rejected when it should have been accepted, and a Type II error (beta) occurs when a hypothesis is accepted when it should have been rejected. In contrast, a Type III error occurs when the wrong problem is solved.

(fn 6) Senge refers to these barriers as "The Laws of the Fifth Discipline." Other impediments to organizational learning (not covered here) include: the easy way out usually leads back in; cause and effect are not closely related in time and space; dividing an elephant in half does not produce two small elephants; and so on [4: 57-67].

(fn 7) When you purchase a book or a CD, embedded in that product is a bundle of ancillary rights that you may or may not make use of (e.g., the right to read, lend, resell and so forth); nevertheless, you are forced to pay for them as part of the package. In a digital environment, the user will pay only for those privileges he or she wishes to take advantage of.
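
As a toy illustration of such unbundling, assume a hypothetical per-privilege price list; the buyer's cost is then simply the sum over the privileges actually selected, rather than a single all-inclusive price.

    # Hypothetical per-privilege prices; in a bundled product the buyer pays for all of them.
    PRIVILEGE_PRICES = {"read": 3.00, "lend": 1.00, "resell": 2.50, "excerpt": 0.75}

    def unbundled_price(selected):
        # sum the prices of just the privileges the user opted into
        return sum(PRIVILEGE_PRICES[p] for p in selected)

    print(unbundled_price({"read"}))            # 3.0  -- read-only access
    print(unbundled_price(PRIVILEGE_PRICES))    # 7.25 -- the full, book/CD-style bundle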

(fn 8) A rough translation of this French proverb: "The more things change, the more they remain the same."


References

[1] Lessig, L. Code and Other Laws of Cyberspace, (NY, NY: Basic Books, 1999, ISBN 0-465-03912-X, 297 pp).

[2] National Research Council (NRC) "Fostering Research on the Economic and Social Impacts of Information Technology," Report of a Workshop, (Wash., DC: National Academy Press, 1998), ISBN 0-309-06032-X, 98-86542. URL: http://www.nap.edu/readingroom/books/esi/index.html.

[3] Rifkin, J. The Age of Access: The New Culture of Hypercapitalism, Where All of Life is a Paid-for Experience, (NY, NY: Jeremy P. Tarcher/Putnam, ISBN 1-58542-018-2, 312 pp).

[4] Senge, P. M. The Fifth Discipline: The Art & Practice of The Learning Organization, (NY, NY: Doubleday, HD58.9.S46 1990, 423 pp).

[5] Shapiro, C. and H. R. Varian Information Rules: A Strategic Guide to the Network Economy, (Cambr., MA: HBS Press, ISBN 0-87584-863-X, HC79 155S53 1998, 352 pp). URL: http://www.inforules.com.

[6] Stefik, M. The Internet Edge: Social, Technical, and Legal Challenges for a Networked World, (Cambr., MA: The MIT Press, 320 pp).

Copyright © 2001