Revised April 1998
Lorrie Faith Cranor, AT&T Labs-Research
Joseph Reagle Jr., Massachusetts Institute of Technology
The Platform for Privacy Preferences Project (P3P), under development by the World Wide Web Consortium (W3C), enables users and Web sites to reach explicit agreements regarding sites' data privacy practices. The options chosen in developing the protocols, grammar, and vocabulary needed for an agreement lead the authors to a number of generalizations regarding the development of technology designed for "social" purposes.
In this paper we will explain the goals of P3P; discuss the importance of simplicity, layering, and defaults in the development of social protocols; and examine the sometimes-difficult relationship between technical and policy decisions in this domain.
Introduction

The relationship between technical choices and the non-technical consequences of those choices is an inherently difficult one. In this paper we discuss several methods for addressing such "policy" decisions in ways that allow engineers and policy makers to apply the methods of their trade to the questions they are best equipped to solve. Our discussion is motivated by our participation in the Platform for Privacy Preferences Project (P3P), a framework for automated decision making about online privacy; however, it is also relevant to a more general set of problems.
When the relationship between technical and policy choices is ignored, the result may be unintended and undesirable consequences, or situations in which technologies are coerced into effecting covert policies. We highlight two examples of such situations: a proposal for protecting children from harmful materials online that could have had far-reaching unintended consequences for the Internet, and an urban design decision that effected covert policies.
The debate surrounding the Communications Decency Act (CDA) presents several examples of technologies (and proposed technologies) that could lead to unintended consequences. For instance, one of the proposals for preventing children from accessing harmful materials online would have required the next version of the Internet Protocol to include support for labeling each piece of Internet data with respect to the age of its sender. A note entitled "Enforcing the CDA Improperly May Pervert Internet Architecture" argued that building such functionality into Internet routers would jeopardize the simplicity, low cost, and radical scalability of the Internet (Reed, 1996):
No matter what you believe about the issues raised by the Communications Decency Act, I expect that you will agree that the mechanism to carry out such a discussion or implement a resolution is in the agreements and protocols between end users of the network, not in the groups that design and deploy the internal routers and protocols that they implement. I hope you will join in and make suggestions as to the appropriate process to use to discourage the use of inappropriate architectural changes to the fundamental routing architecture of the net to achieve political policy goals.
An example of a covert mechanism designed to implement social policy is the decision of mid-twentieth-century New York City planner Robert Moses to design his roads and overpasses so as to exclude the 12-foot-high public transit buses that carried people -- often poor or of color -- to the parks and beaches he also designed (Winner, 1988). Not only did he fail to separate the mechanism of transportation from social policy, he did so in such a way that his own biases were substituted for legitimate policy processes.
Bob Scheifler, a developer of the X Window System, did recognize the importance of separating mechanism and policy and is often quoted for his useful maxim of "mechanism, not policy." He applied this maxim to the rather technical question of how applications should avoid setting X resources directly, instead allowing these "policy" decisions to be controlled by the user (Scheifler, 1987). The result was a mechanism that gave users control over graphical elements and the window system.
In the online realm, protocols have been developed to solve technical problems such as uniquely addressing computers on a network and preventing network bottlenecks. However, new protocols are being developed that are driven by explicit policy requirements. For example, meta-data -- ways to describe or make statements about other things -- and automated negotiation capabilities are being used as the foundation for applications that mimic the social capabilities we have in the real world: capabilities to create rich content, entrust decisions to an agent, make verifiable assertions, create agreements, and develop and manage trust relationships. We characterize this breed of protocols -- including P3P -- as social protocols (Reagle, 1997). In contrast to technical protocols, which typically serve to facilitate machine-to-machine communications, social protocols often mediate interactions between humans.
In this paper we discuss some of the issues we confronted during our work as active members of the World Wide Web Consortium (W3C) Platform for Privacy Preferences Project working groups and the Internet Privacy Working Group (IPWG) vocabulary subcommittee. Many of the lessons learned in the course of P3P can apply to the development of other social protocols such as those designed to facilitate content control, intellectual property rights management, and contract negotiation.
In the following sections we present a brief background of the P3P effort from both policy and technical perspectives. We then examine the issues of simplicity versus sophistication, layering, and defaults to illustrate ways in which non-technical decisions are implicitly incorporated into or promoted by technology; we also present several options and recommendations for designers to consider when attempting to mitigate contentions between technical and policy concerns.
Before proceeding, we wish to quickly explain what we mean by "policy" through a simple definition and example. Mechanism is how to technically achieve something; policy is what one wishes to achieve. For example, P3P is a mechanism for expressing privacy practices; European Union data protection concepts are an example of a policy. In general, the separation of mechanism and policy provides for great flexibility and allows non-technical decisions to be made by those most qualified to make them.
P3P Background

As the use of the World Wide Web for electronic commerce has grown, so too have concerns about online privacy. Individuals who send Web sites personal information for online purchases or customization wonder where their information is going and what will be done with it; some even withhold or falsify information when unsure about a site's information practices (Kehoe & Pitkow, 1997). Parents are particularly concerned about Web sites that collect information from their children (CARU, 1997). While an April 1997 Harris-Westin survey (LH&A & Westin, 1997) found that only 5% of Internet users surveyed said they had experienced what they considered to be an invasion of their privacy on the Internet, 53% said they were concerned that "information about which sites they visit will be linked to their email addresses and disclosed to some other person or organization without their knowledge or consent." These concerns, coupled with confusion about the automatic collection of data and its storage on users' hard drives, have prompted legislators and regulators to take a critical look at online privacy issues and motivated companies and organizations to seek technical solutions to online privacy problems.
In 1996, several organizations launched efforts to develop "user empowerment" approaches to online privacy (Cranor, 1997). The Electronic Frontier Foundation partnered with CommerceNet to create TRUSTe (renamed from eTRUST), a branded system of "trustmarks" designed to inform consumers about the privacy practices of Web sites and to provide assurances that Web sites accurately report these practices (TRUSTe, 1997). At the beginning of 1997 the Internet Privacy Working Group (IPWG) was formed. IPWG is an ad hoc group coordinated by the Center for Democracy and Technology, composed of a broad cross-section of public interest organizations and private industry engaged in commerce and communication on the Internet. IPWG began developing a framework for policies and technical tools that give users the ability to make choices about the flow of personal information online (Berman & Mulligan, 1997). In May 1997, the World Wide Web Consortium (W3C) launched the Platform for Privacy Preferences Project (P3P) to develop recommended specifications for automated privacy discussions (W3C, 1997).
Technical Background

The Platform for Privacy Preferences is intended to allow sites to express their privacy practices and users to exercise preferences over those practices. If a relationship is developed, subsequent interactions and any resulting data activities are governed by an agreement between the site and the user. After configuring privacy preferences, individuals should be able to seamlessly browse the Internet; their browsing software (user agent) negotiates with Web sites and provides access to sites only when a mutually acceptable agreement can be reached. Any request from a service to store data on the user's side must comply with any outstanding P3P agreements. P3P efforts focus on how to exchange privacy statements in a flexible and seamless manner. However, the platform may be used in conjunction with other systems, such as TRUSTe, that provide assurances that privacy statements are accurate.
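As a rough sketch of this agreement flow -- not the actual P3P protocol -- the following Python fragment shows a user agent granting access only when every practice a site proposes is acceptable under the user's preferences. The data structures and the reach_agreement function are hypothetical illustrations.

```python
# A minimal sketch (not the actual P3P protocol) of the agreement flow described
# above: a user agent compares a site's proposed practices against the user's
# preferences and grants access only when every proposed practice is acceptable.
# All names here (site_proposal, user_preferences, reach_agreement) are hypothetical.

user_preferences = {
    "contact information": {"allowed_practices": {"completing the current transaction"}},
    "clickstream data":    {"allowed_practices": {"system administration", "research"}},
}

site_proposal = {
    "clickstream data": {"system administration"},
}

def reach_agreement(proposal, preferences):
    """Return True if every practice the site proposes is allowed by the user's preferences."""
    for data_type, practices in proposal.items():
        allowed = preferences.get(data_type, {}).get("allowed_practices", set())
        if not practices <= allowed:
            return False          # at least one practice is unacceptable; prompt or block instead
    return True

if reach_agreement(site_proposal, user_preferences):
    print("Agreement reached; browsing continues seamlessly.")
else:
    print("No agreement; prompt the user or decline the request.")
```

In a real implementation the failure branch might trigger a prompt or a counter-proposal rather than a flat refusal.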
W3C began P3P by designing the overall architecture for the P3P system and a grammar for expressing privacy practices (W3C, 1997). The P3P grammar specifies the types of clauses that comprise P3P statements. A P3P vocabulary is akin to a Platform for Internet Content Selection (PICS) "rating system" (Resnick & Miller, 1996); the vocabulary specifies the specific terms that fit into the P3P grammar. For example, the P3P grammar specifies that P3P statements must include, among other things, clauses describing any data that is to be collected and the practices that apply to that data; a vocabulary includes a list of specific data practices that are valid in a practice clause. Multiple rating systems or vocabularies can be developed and used independently. IPWG is in the process of designing one such vocabulary.
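The split between grammar and vocabulary can be illustrated with a small sketch; the clause names and terms below are invented for illustration and do not reflect the actual P3P syntax.

```python
# A minimal sketch (hypothetical clause names and terms) of how a P3P-style
# grammar and vocabulary might relate: the grammar fixes which clause types a
# statement must contain; a vocabulary supplies the terms allowed in each clause.

GRAMMAR_CLAUSES = ["data", "practice"]          # clause types required by the grammar

SAMPLE_VOCABULARY = {
    "data": ["name", "email", "clickstream"],   # data elements this vocabulary defines
    "practice": ["system administration", "research", "marketing"],
}

def is_valid_statement(statement, vocabulary):
    """Check that a statement has every required clause and uses only vocabulary terms."""
    for clause in GRAMMAR_CLAUSES:
        terms = statement.get(clause)
        if not terms:
            return False
        if any(term not in vocabulary[clause] for term in terms):
            return False
    return True

statement = {"data": ["email"], "practice": ["system administration"]}
print(is_valid_statement(statement, SAMPLE_VOCABULARY))   # True
```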
P3P and PICS are both applications of meta-data. While PICS provides only for a simple "label" to be used in describing Web content, P3P employs a grammar that allows clauses to be combined to form richer P3P statements.
Design Issues for Social Protocols

While separating technical decisions from policy decisions is laudable, such separation is not always readily achievable when designing social protocols, as technical and policy decisions often become intertwined. The line between mechanism and policy may be a fuzzy one, and some aspect of the design often falls within the gray area. We explore three themes of social protocol design that are important to P3P: simplicity versus sophistication, defaults, and layers. We discuss each with respect to separating mechanism from policy, and when such separation is impossible, we offer potential solutions technologists can use to produce good engineering in the face of contentious policy issues.
We address the themes of simplicity, defaults, and layers in separate sections. However, no understanding of one theme can be applied without an understanding of the others. Decisions about how to set defaults and in what layers to address various concerns can impact the overall simplicity of a software tool; indeed layers and defaults can be created specifically to simplify the user experience while providing sophisticated options for the users who want them.
Simplicity and Sophistication

In early discussions about P3P, its designers considered ideas for elaborate systems that would contain extremely sophisticated and detailed privacy grammars, tools for robust strategic negotiation, automated privacy enforcement (ways to automatically penalize "cheaters"), cryptography, certificate schemes, and more. At first blush, relatively little seems to fall outside the project's scope. However, designers must often simplify their elaborate ideas in favor of system designs that can be readily implemented and used.
With P3P -- as with other projects -- one must strike a balance between sophisticated capabilities and ease of implementation and use. Difficult decisions will always have to be made with respect to defining certain capabilities as out-of-scope, or unattainable. However, a number of techniques also exist whereby one can enable sophisticated capabilities that are realizable, readily comprehensible, and easy to use. Such techniques include breaking a large system into smaller modules (modularity), designing a system in layers that have varying levels of accessibility to the user (layering), and building a basic system that allows new features or even entire modules or layers to be plugged-in later (extensibility). These techniques also prove useful for separating technical and policy decisions.
Descriptive versus Subjective Vocabularies

PICS and P3P have both been designed as basic frameworks to support automated decision-making, but neither specifies the details of a decision-making language. PICS allows multiple third parties to provide these details in the form of rating systems; P3P allows these details to be provided in vocabularies. As a result, third parties can design rating systems and vocabularies that are fairly sophisticated or very simple. A sophisticated rating system might have 20 variables that users must set with respect to the type of content they wish to see, while a simple rating system might have a single "thumbs up" variable. Each system has its benefits. The sophisticated rating system provides more information, but requires greater user involvement in its configuration. The simple rating system is quite easy to use, but conveys less information.
When considering P3P, we are presented with a spectrum of options, ranging from fairly descriptive and sophisticated vocabularies over which users must carefully express their preferences, to simple vocabularies with which users defer the expression of their preferences to others. As shown in Figure 1, two important factors that may contribute to a vocabulary's complexity are the character of the information (subjective versus descriptive) and the number of variables. The degree to which the variables are interrelated is another important factor.
Figure 1. A spectrum of rating systems and vocabularies.
To draw the line between descriptive and subjective, we offer the following understanding. Users express preferences over descriptive information in order to reach subjective "opinions" upon which their agents act. Rating systems include both descriptive information and subjective opinion about the appropriateness of content. However, subjective systems can be problematic because users may not know whether the bias inherent in the system matches their own. (The most common complaint against filtering technologies today is that decisions are opaque; consequently, a user may have deferred to biases that would be offensive to the user if known.) Also, from descriptive information one can always derive a new set of "subjective" opinions. If you are told about a site's content in terms of violence, language, nudity, sex, who paid to produce it, and the intellectual property rights associated with its use, you can make a thumbs-up or thumbs-down decision. But given only someone else's thumbs down, one cannot recapture the descriptive information. Once opinions replace descriptions, information is lost.
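This asymmetry is easy to demonstrate. The toy example below (hypothetical factors, scales, and thresholds) derives two different "thumbs" opinions from the same descriptive rating; nothing in either opinion allows the original description to be reconstructed.

```python
# A toy illustration (hypothetical scales and thresholds) of the point above:
# a subjective "thumbs up/down" can always be derived from descriptive ratings,
# but the descriptive detail cannot be recovered from the thumbs alone.

def thumbs(descriptive_rating, limits):
    """Derive a subjective opinion from descriptive information and one party's limits."""
    return all(descriptive_rating[factor] <= limit for factor, limit in limits.items())

site = {"violence": 1, "nudity": 0, "coarse language": 3}   # descriptive rating (0-4 scales)

parent_a = {"violence": 2, "nudity": 0, "coarse language": 2}
parent_b = {"violence": 4, "nudity": 2, "coarse language": 4}

print(thumbs(site, parent_a))   # False: different limits ...
print(thumbs(site, parent_b))   # True:  ... yield different opinions from the same description

# Going the other way is impossible: knowing only "True" or "False" tells us
# nothing about the site's violence, nudity, or language levels.
```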
Of course, designing a purely descriptive system is not an easy task (Martin & Reagle, 1997), and in many cases may not be practical due to the enormous number of factors that an exhaustive description would entail. Indeed, the choice of which categories to include in a system may be in and of itself a subjective decision (Friedman, 1997). Thus we prefer to think of systems as relatively descriptive or subjective, rather than absolutely so. Rating systems designed to describe adult content have been criticized for not being able to distinguish artistic nudity from sexual nudity, a distinction that is inherently subjective. In this case a system that identifies several factors that describe the type of nudity, even if those factors are somewhat subjective, would likely be overall more descriptive and more transparent than a system that rates nudity based on some notion of age appropriateness.
The loss of descriptive information is a significant issue when creating systems for international use, where laws and culture vary. Descriptive systems fare best, because a cultural group can always operate upon descriptive information, but the biases implicit in a Western "thumbs up" may make such information useless to other cultures. The P3P designers have therefore focused on designing a grammar and vocabulary that allow for the description of how collected information is used rather than subjective statements such as whether information is used in a "responsible" or "appropriate" way.
Recommended Settings

In the privacy realm both simplicity and sophistication are required. Because people have varying sensitivity towards privacy, we cannot afford to reduce the amount of information expressed to all users to the granularity that is desired by the "lowest common denominator" - those who want the least information. Fortunately, a complex, descriptive vocabulary can be easily translated into simpler, subjective statements. Rather than manually configuring a user agent using the complex vocabulary, an unsophisticated user can select a trusted source from which to obtain a recommended setting in the form of a "canned" configuration file. These are the settings the user agent will use when browsing the Web on behalf of its user. (A set of recommended settings may be thought of as a subjective vocabulary and simplified grammar, overlaid on top of a descriptive vocabulary; this is an example of layering, a topic we will address further in the next section.) Recommended settings capture valuable subjective information that can simplify the user experience while retaining descriptive information in the vocabulary.
Figure 2 shows a sample set of recommended settings with a corresponding complex privacy vocabulary. An organization may develop a single recommended setting that reflects its views about privacy, or, as illustrated in the figure, it may develop several settings to provide a simplified menu of options for users. We hope organizations will develop recommended settings that users can download and install in their browsers with the click of a mouse.
Figure 2. A sample set of recommended settings overlaid on a complex privacy vocabulary. The grids to the right of the settings are shaded to indicate the Practice/Category combinations that are allowed under each setting. Users can select the setting that corresponds most closely to their privacy preferences rather than individually configuring 35 options.
This model of referencing others' settings is common in the computer world. For example, it is common for users of the highly customizable Unix operating system to copy the configuration files of more experienced users when first starting out on a new system. In the P3P realm, this model has the added policy benefit of distinguishing between those who provide relatively unbiased descriptive information and those who are willing to provide recommendations. In such a model, sites describe their practices in an informative and globally comprehensible vocabulary, upon which other entities can make recommendations about how users should act. For example, sites may rate with a descriptive vocabulary such as the one illustrated in Figure 2, but the user need only choose among a small number of recommended settings. Information is not lost, but the user experience is simplified.
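The sketch below illustrates, using invented names and a made-up vocabulary loosely modeled on Figure 2, how a recommended setting could be packaged as data: a trusted party enumerates the practice/category combinations it considers acceptable, and the user agent simply checks a site's descriptive statement against whichever setting the user selected.

```python
# A sketch of "canned" recommended settings, assuming a hypothetical descriptive
# vocabulary of practices and data categories (loosely modeled on Figure 2).
# A trusted third party publishes the settings; the user just picks one by name.

RECOMMENDED_SETTINGS = {
    # Each setting lists the (practice, data category) combinations it permits.
    "cautious": {
        ("completing the current transaction", "contact information"),
        ("system administration", "clickstream data"),
    },
    "relaxed": {
        ("completing the current transaction", "contact information"),
        ("system administration", "clickstream data"),
        ("marketing", "demographic and preference data"),
    },
}

def acceptable(site_statement, setting_name):
    """A site statement is acceptable if all its practice/category pairs are permitted."""
    permitted = RECOMMENDED_SETTINGS[setting_name]
    return all(pair in permitted for pair in site_statement)

statement = {("marketing", "demographic and preference data")}
print(acceptable(statement, "cautious"))   # False
print(acceptable(statement, "relaxed"))    # True
```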
Defining a Reasonable Grammar

We have discussed the use of a descriptive vocabulary over which more subjective preferences can be expressed (overlaid). In this section we examine the amount of sophistication appropriate for a grammar and the relationship between the grammar and recommended settings.
At one extreme, we could create a grammar capable of representing any English sentence. However, such a grammar is likely to be difficult to employ in automated discussions, and will often include clauses that prove unnecessary, redundant, or ambiguous in the intended application. It is therefore desirable to restrict the grammar to the number of clauses necessary to make a reasonable set of statements about privacy on the Web. By including extensibility mechanisms, new clauses that are deemed necessary later can be added.
In P3P, the requirements for the grammar are limited to expressing statements about privacy practices. But determining the universe of statements that might be made about privacy practices is not a simple task. For example, while it seems clear that a privacy grammar must be able to express information about the type of data that might be collected and what the data will be used for, it is unclear whether the grammar must be able to express information about how individuals can update their data or how long the data collector intends to retain the data.
Regardless, a useful check on the adequacy of the P3P grammar is to ask the following two questions. Could a Web site use the grammar (and an appropriate vocabulary) to clearly express that its practices meet the legal requirements for data protection in a given country? Does the grammar provide the ability to express enough information such that a third party (such as a consumer advocate) could issue recommended settings that are meaningful to users?
Layering and User Interface

The concept of layering has already been introduced in our discussion of recommended settings, where we used one layer to simplify or abstract upon an underlying layer. This helps us resolve contentions associated with simple versus sophisticated vocabularies. However, not all layers are created equal: each layer may be owned by a different entity in the technology production chain. Obviously, each owner will be accountable to a different constituency and will have differing policy biases as a result.
The P3P architecture can be modeled as a set of five layers, as shown in Figure 3.
P3P User Experience: The user experience is the sum of all experiences that the user encounters when using P3P, including the configuration process and the result of the user's P3P agent exercising the user's P3P preferences. Designers at any level should have in mind the provision of a good user experience.

P3P User Interface: What the user sees on the screen and hears. The user interface can take cues from the underlying semantics of the vocabulary but is implementation dependent. It may also provide ways for users to represent themselves as personae.

P3P Recommended Setting: Preferences provided by a trusted third party to users who do not wish to configure their own preferences.

P3P Vocabulary: The defined set of words or statements that are allowable in a P3P clause. For example, one vocabulary might define a PRACTICE clause to be either {for system administration} or {for research}, or both.

P3P Grammar: The structure for properly ordering P3P clauses to construct a valid P3P statement. The following example structures clauses (in caps) to make a simple privacy practice statement: For (URI) ... (Note: this example is much simpler than the actual P3P grammar.)
Figure 3. A five-layer model of the P3P architecture.
The P3P layered model serves both to establish a framework for abstraction and to distinguish the parts of P3P according to the parties responsible for their development. The grammar is the responsibility of the W3C, vocabularies may be defined by IPWG or other organizations, recommended settings may be developed by consumer advocates and other parties trusted by users, and user interfaces will be developed by software implementers. All of these parties contribute towards the final P3P user experience. The entire platform can itself be thought of as a layer (or set of layers) on top of the underlying Internet protocol layers.
While this model can help separate policy and technical considerations to some extent, it is important to remember that decisions made at one layer may have ramifications elsewhere. For example, an overly restrictive P3P grammar may limit the ability of vocabulary designers to create a vocabulary that they consider adequately expressive, or a simplistic user interface may limit the ability of users to express sophisticated preferences that would otherwise be possible with the grammar and vocabulary. Consequently, it is important to determine the most appropriate layer for each decision to be made and to consider design alternatives that will not overly restrict options at other layers. When considering assigning a policy decision to a given layer, one should consider whether that layer is accessible to the constituency that needs to be represented in making such decisions. The P3P design has been structured so that most of the policy-related decisions can be made in the vocabulary layer -- which is being developed by more policy-oriented people than are the other layers. Some of the policy-related decisions will be left to the user interface layer, where it is hoped that developers will either provide the mechanisms necessary to support a broad range of policies, or produce a variety of specialized interfaces to address particular consumer needs.
Representing a Child

One issue that comes up in the P3P context is the question of where to represent the concept of a "child." We would like to allow children to browse the Web while preventing them from overriding the preferences set by their parents. We would also like to give parents the option of having their own contact information released rather than their child's information (if information is to be released at all). Need these requirements be internalized in the grammar? If not, should they be referenced in a vocabulary? For example, need the vocabulary be able to reference both the child's and the parent's names, or should one create a simple "name" type and associate preferences with it for different personae? (A persona is a virtual entity under which one's characteristics and activities can be logically grouped. Some people will have multiple personae: for example, a work persona and a home persona.) For instance, one vocabulary could enable the statements:
{we collect your child's name} or
{we collect the parent's name}
Another vocabulary could only allow the following:
{we collect your name}
In the latter case, the distinction between a parent and a child would have to be made outside of the vocabulary - perhaps in the user interface of the browser or operating system.
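For illustration, the sketch below places the parent/child distinction in the user agent rather than the vocabulary, using hypothetical persona records; the vocabulary sees only a generic name element.

```python
# A sketch of the second approach described above: the vocabulary knows only a
# generic "name" element, and the parent/child distinction lives in the user
# agent as personae. All identifiers here are hypothetical.

personae = {
    "parent": {"name": "A. Example", "may_release": {"name"}},
    "child":  {"name": "B. Example", "may_release": set()},   # parent forbids release for the child
}

def respond_to_request(requested_element, active_persona):
    """Release a data element only if the active persona's settings allow it."""
    persona = personae[active_persona]
    if requested_element in persona["may_release"]:
        return persona.get(requested_element)
    return None   # could instead substitute the parent's information, per the parent's choice

print(respond_to_request("name", "parent"))   # A. Example
print(respond_to_request("name", "child"))    # None
```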
Automated Discussions about New Data Elements

It is important to provide mechanisms that allow users to manage their preferences over data elements, both common elements (such as name) and elements encountered occasionally (such as nike_shoesize).
User interfaces may permit users to express separate preferences for each of several categories of data elements. The interface might also allow the user to decide which data elements fall into each category. We can imagine an interface that presents the user with a "bucket" for each category and a set of data elements that can be dragged from bucket to bucket as the user sees fit. The user would then be able to set separate preferences for the elements in each bucket, as shown in Figure 4. A user agent with such an interface would be able to engage in automated discussions with Web sites regarding the use of the data elements for which the user has expressed preferences. However, when a site proposes to collect a data element that the user has not expressed a preference about, the user agent will have to prompt the user for input. While such a system gives users great control, it may cause users to be interrupted with prompts quite frequently.
Figure 4. User agents may allow users to set up buckets for data elements about which they hold similar preferences. The user can drag elements from bucket to bucket and set the preferences on each bucket.
One of the goals of the project has been, to the extent possible, to maintain a seamless browsing experience on the World Wide Web. If users are interrupted frequently, the experience will not be seamless and it is likely that some users will "swat away" prompts without reading them or grow frustrated and disable the platform completely.
Figure 5. User agents may allow users to express preferences about data categories.

One way to address this problem is to provide elements in a vocabulary that allow Web sites to convey "hints" to the user interface about the nature of each data element. We call these hints data categories. Data categories are characteristics or qualities of data elements. For example, Figure 5 shows five data categories: contact information, account numbers and unique IDs, information about my computer, navigational and click-stream data, and demographic and preference data. The data elements "phone number" and "home address" are examples of data that would likely fall into the contact information category. Every time a Web site asks to collect a piece of data, it states the category to which the data element belongs. If the user has already expressed a preference about that piece of data or has asked to be prompted about all new data types, the user agent may ignore the data category hint. However, if the user has not expressed a preference about that piece of data, the user agent may use the user's preferences over the data category to proceed in an automated discussion. Consequently, users still retain ultimate control over the data elements and how they are classified, but can also rely on data categories to reduce the number of choices they have to make while browsing.
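The decision logic just described can be summarized in a few lines; the sketch below uses hypothetical element and category names and is not drawn from any P3P specification.

```python
# A sketch of the decision logic described above, with hypothetical names:
# element-level preferences take priority; a category "hint" from the site is
# used only when the user has no preference for the specific element.

element_preferences  = {"home address": "never"}                 # per-element rules
category_preferences = {"contact information": "prompt",         # per-category rules
                        "navigational and click-stream data": "allow"}

def decide(element, category_hint):
    """Return 'allow', 'never', or 'prompt' for a site's request to collect an element."""
    if element in element_preferences:
        return element_preferences[element]          # ignore the hint; the user already decided
    if category_hint in category_preferences:
        return category_preferences[category_hint]   # fall back to the category preference
    return "prompt"                                  # no preference at all: ask the user

print(decide("home address", "contact information"))                   # never
print(decide("nike_shoesize", "contact information"))                  # prompt
print(decide("referring page", "navigational and click-stream data"))  # allow
```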
Defaults

The costs associated with configuring software are high in terms of both knowledge and time. Consequently, the default settings provided "out of the box" are valuable in that they may determine the online behavior of many users.
While specifications may avoid defining default values, eventually somebody must make a decision about how to initially set variables, checkboxes, and sliders. We present several approaches that might be taken in establishing defaults; more than one of these approaches may be taken simultaneously for a given piece of software. Each option is presented below in contrast to the possibility that implementers could simply create unchangeable settings (implementers decide on some set of default settings and do not allow users to change them). The three options we present are:

- leave features that require configuration turned off;
- leave features unconfigured, but prompt users to configure them before using the software; and
- configure features with default settings.

We examine each of these options below and conclude this section with a number of general observations on defaults.
Option: Leave features that require configuration turned off. In this approach, implementers simply deactivate any features that require special configuration. A user who wishes to use these features must explicitly activate and then configure them. This is the approach that was taken by Microsoft for the "Content Advisor" feature in their Internet Explorer 3.0 software. Parents who wished to use this feature to prevent their children from accessing inappropriate content had to explicitly activate it. At that time they could set a password and select a rating system and settings for their child.
Leaving features turned off may be a good alternative when the features are of interest to a small percentage of users or when they may have adverse impacts on users who are unaware of their presence. In the case of P3P, one could argue that setting defaults that would covertly prevent users from accessing sites that do not match the default privacy settings could have adverse impacts on users, and thus P3P should be turned off by default. On the other hand, P3P might be implemented so that its behavior would be obvious to the user, as would be the mechanisms to change the defaults or disable P3P completely.
When features are left turned off, the chances that they will ever be activated are reduced because many users will never take the time to figure out how to activate them (and some will never even realize they exist).
Option: Leave features unconfigured, but prompt users to configure them before using the software. In this approach, implementers set up their software so that it is not usable until a user configures it. There are many pieces of software that take this approach for their most important settings, although they generally have defaults for their less-important settings that can be reconfigured later.
This alternative has the advantage that users become immediately aware of the existence of features. However, when features take a long time or significant thought to configure, users may grow impatient and give up installing the software or select the simplest configuration options rather than taking the time to select the options they most prefer. This alternative might be useful in P3P if users are initially presented with only a small number of recommended settings.
Option: Configure features with default settings. In this approach implementers decide on some set of default settings for the initial configuration of their software. Users may change these settings later.
This alternative has the advantage that users get the benefits of features without having to take the time to configure them. However, the pre-configured default settings may have a covert impact on the user experience.
In the case of P3P, pre-configured defaults are likely to be controversial, especially in software not advertised as having any particular biases with respect to privacy. A default that provides minimal privacy protections is likely to be criticized by privacy advocates. However, a default that provides strong privacy protections but blocks access to many Web sites is likely to be criticized by people who don't understand why their Web surfing is being interrupted, as well as by the owners of sites that get blocked frequently. On the other hand, there may be a market for products pre-configured to meet a specific need, for example a product pre-configured with strong privacy defaults and explicitly marketed as "privacy friendly."
If implementers select alternatives in which P3P is enabled by default, Web sites that are not P3P compliant may not be viewable without explicit overrides from users. This could be frustrating for users, but might give sites an incentive to adopt P3P technology quickly. (Although to ease the transition, implementers might include tools that would allow users to access sites with no privacy statements if those sites do not actually collect information other than click-stream data.)
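For comparison, the following sketch contrasts the three approaches as initialization modes for a hypothetical implementation; the mode names and the FACTORY_SETTING value are invented for illustration.

```python
# A sketch contrasting the three approaches to defaults discussed above.
# The mode names and settings are hypothetical; a real implementation would
# surface them through an installer or preferences dialog.

FACTORY_SETTING = "medium privacy"   # used only by the "preset" approach

def initialize(mode, ask_user):
    """Return the initial privacy setting for a fresh installation."""
    if mode == "off":       # Option 1: feature disabled until the user activates it
        return None
    if mode == "prompt":    # Option 2: unusable until the user picks a setting
        return ask_user("Choose a recommended privacy setting: ")
    if mode == "preset":    # Option 3: ships configured; the user may change it later
        return FACTORY_SETTING
    raise ValueError(mode)

# Example: a "privacy friendly" product might ship with mode="preset", a strong
# FACTORY_SETTING, and explicit marketing of that bias.
print(initialize("preset", ask_user=input))
```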
When considering the best approach for setting defaults in a particular application, the following ideas should be considered:
Conclusions

Technologists are designing Web applications to address social problems. We characterize such applications as "social protocols" and believe that the good engineering principles of modularity, extensibility, and the development of mechanism - rather than "policy" - continue to be important in designing these protocols. A unique characteristic of these protocols is that social concerns may explicitly contribute to the requirements of the design. However, if collecting and comprehending requirements from a client in the commercial or technical world is difficult, understanding the requirements of a society, or of multiple societies, is much harder! Consequently, engineers must be doubly vigilant in the pursuit of mechanism and in the promotion of technologies that support multiple policies. For example, a privacy protocol that "hardwired" the biases of a few American engineers would be of limited utility in a global Web space.
However, as hard as one may try to draw a line between mechanism and policy, that line is a fuzzy one, and some aspect of the design often falls within the gray area. In this paper we have presented three themes (simplicity versus sophistication, layering, and defaults) that are useful for navigating the gray no-man's-land between mechanism and policy. We have presented our current understanding of these themes as applied to the development of P3P and have offered recommendations for applying the relevant design principles in an explicit and reasoned light. Our purpose has been to convey our own experiences, to generalize them so that they may be useful in other applications, and, we hope, to inform policy makers about technology development relevant to their own activities.
Acknowledgements

The authors thank the P3P working group participants, the members of the Internet Privacy Working Group vocabulary subcommittee, and our colleagues at AT&T Labs-Research and the World Wide Web Consortium. Many of the concepts discussed herein were not the result of any single individual's effort, but the combined efforts of a number of people. These ideas were developed through many conference calls, email discussions, and face-to-face meetings.
This paper references work in progress by several organizations. Nothing in this paper should be construed as representing the final status of any effort.
References

Berman, J., & Mulligan, D. (1997, April 15). Comments of Internet Privacy Working Group concerning consumer on-line privacy P954807, submitted to the Federal Trade Commission, Workshop on Consumer Privacy. Available [Online]: http://www.ftc.gov/bcp/privacy2/comments2/ipwg049.htm
Children's Advertising Review Unit of Better Business Bureaus, Inc. (1997, July 14). FTC: Consumer privacy comments concerning the Children's Advertising Review Unit P954807, supplement to comment #008, submitted to the Federal Trade Commission, Workshop on Consumer Privacy. Available [Online]: http://www.ftc.gov/bcp/privacy2/comments3/CARU.1.htm
Cranor, L. (1997, June). The role of technology in self-regulatory privacy regimes. In Privacy and Self Regulation in the Information Age. U.S. Department of Commerce, National Telecommunications and Information Administration. Available [Online]: http://www.ntia.doc.gov/reports/privacy/selfreg5.htm#5B
Friedman, B. (1997). Human Values and the Design of Computer Technology. Cambridge: Cambridge University Press.
Kehoe, C. & Pitkow, J. (1997, June). GVU's Seventh WWW User Survey. Available [Online]: http://www.gvu.gatech.edu/user_surveys/survey-1997-04/
Louis Harris & Associates and Westin, A. F. (1997). Commerce, Communication and Privacy Online: A National Survey of Computer Users. Conducted for Privacy & American Business.
Martin, C.D. & Reagle, J. M. (1997). A technical alternative to government regulation and censorship: content advisory systems for the Internet. Cardozo Arts & Entertainment Law Journal, Benjamin Cardozo Law School, Yeshiva University, New York, NY, 15(2), 409-427.
Reagle, J. (1997, January). Social protocols: enabling sophisticated commerce on the Web. Presentation before the Interactive Multimedia Association, Electronic Commerce For Content II. Available [Online]: http://www.w3.org/Talks/9704-IMA/
Reed, D. (1996, April 28) In Cipher Electronic Issue #14. Available [Online]: http://www.itd.nrl.navy.mil/ITD/5540/ieee/cipher/old-conf-rep/comment-reed-960331.html
Resnick, P. & Miller, J. (1996, October). PICS: Internet access controls without censorship. Communications of the ACM. 39(10), 87-93.
Scheifler, R. (1987, June). RFC 1013: X Window System Protocol, Version 11. Available [Online]: ftp://ds.internic.net/rfc/rfc1013.txt
TRUSTe. (1997, June). The TRUSTe Story. Available [Online]: http://www.truste.org/program/story.html
Winner, L. (1988, January). The Whale and the Reactor: A Search for Limits in an Age of High Technology. Chicago: University of Chicago Press.
World Wide Web Consortium. (1997, December). Platform for Privacy Preferences Project. Web site containing an overview and status of P3P; W3C recommendations with regard to P3P will be posted here when they become publicly available. Available [Online]: http://www.w3.org/P3P