The W3C Workshop on Privacy for Advanced Web APIs brought together about 45 participants from industry (including browser vendors, mobile operators, device manufacturers and service providers), academia, and standards organizations. The workshop was held jointly with the PrimeLife EU project and was hosted by Vodafone at their facilities in London, UK.
The workshop’s main goal was to outline next steps for the W3C concerning the privacy considerations for advanced APIs that make personal information and sensor data available to Web applications, following up on W3C’s previous work on exposing a user’s geographical location through the geolocation API.
More generally, workshop participants reviewed the W3C’s overall direction in the privacy space, discussed approaches to better privacy on the Web, and considered standards bodies’ roles and responsibilities in this area.
A number of members of the W3C Device APIs and Policy WG (DAP) attended this workshop and discussed privacy implications from the workshop during the subsequent DAP face-to-face meeting.
Summary and Next Steps
The two practical proposals that drew the most interest and discussion were the Mozilla privacy icon approach (slides) and CDT’s privacy rule-set idea (slides). Both proposals received a lot of positive feedback, along with questions about their viability. Beyond technical and user interface challenges, participants questioned whether browser vendors and large Web providers have business incentives to adopt such mechanisms, seeing this as one of the main obstacles to moving privacy from research and standardization into deployment. Nevertheless, further investigation and experimentation with both approaches seemed worthwhile and was encouraged.
There was agreement that it is useful to capture best current practices gained during early implementation efforts (such as those presented during the workshop regarding the geolocation API). Furthermore, investigating how to help specification writers and implementers systematically analyze privacy characteristics in W3C specifications was seen as a worthwhile effort. To this end, the W3C staff plans to propose a charter for a Privacy Interest Group that can serve as a forum for this work. Such an Interest Group could also provide a focal point for privacy-related coordination with other interested standards development organizations.
Areas of Discussion
Scene setting and standards bodies’ roles
Workshop participants started out by surveying basic privacy concepts and the overall landscape, including child protection concerns, presented by John Carr. There was broad agreement in the room that child safety is an important issue on the Web. However, participants disagreed sharply about the role child safety should play in the workshop and about the role technology organizations and standards bodies could play in this important area.
In a second session, three presentations helped to frame a discussion around standards development organizations’ role in contributing to privacy online. The session started out with David Singer’s (Apple) presentation (slides) of a set of questions concerning W3C’s role in the privacy space. He noted that privacy is difficult due to accumulation, distribution, and correlation of personal information. As an example, sharing a few photographs might be acceptable, but if many are shared, then relationships and other information can be mined. Pat Walshe (GSMA) introduced the GSMA’s work (slides) on mobile privacy principles to guide its technical work. Hannes Tschofenig (Nokia Siemens Networks and Internet Architecture Board) discussed the privacy philosophy commonly used in the IETF’s standards work (slides), characterized as a hybrid of “privacy by design” and “privacy by policy.” He noted that the two approaches speak to different communities, the former to engineers and the latter to policy makers. He called out education and awareness building, guidelines for privacy-friendly protocol design, specification review, privacy-related coordination among standards bodies, and agreement on terminology as concrete steps that standards organizations could take.
During a subsequent discussion it was noted that surfacing privacy choices to the user, especially through too many dialogs, can “spook the users” and reduce the benefit of new services and technologies, yet real risks remain (such as combining “my location” with known aspects of various locations).
User behavior and privacy economics
Sören Preibusch (University of Cambridge) and Pat Kelley (CMU) presented empirical research on consumers’ privacy decision-making. Preibusch (slides) focused on economic incentives: How do consumers value the privacy of different pieces of personal data? How willing are they to reveal such data? And how do their statements about willingness to reveal data correspond to their actions when faced with a choice between two offers for the same good, where one offers a lower price in return for personal information? Preibusch found that differences in data collection alone did not make test subjects prefer one online vendor over another. Their decisions were dominated by economic factors, even against their stated preferences. Kelley (slides) reported on experiments with a highly customizable social location sharing service. His study found that users’ privacy preferences were complex and did not permit modeling as simple policies: a typical policy would take time, location, recipients, and the user’s overall context into account. Regarding user burden, Kelley noted that pre-configured privacy profiles helped study subjects model their preferences. Given the ability to better audit access to their location information, users tended to relax their privacy preferences over time, eventually leading to additional sharing.
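To make concrete why such preferences resist modeling as simple policies, the sketch below shows a rule of the kind the study describes, conditioning on recipient, location, and time simultaneously. All names and values are invented for illustration; this is not the study’s actual policy language.

```python
# Hypothetical illustration of a location-sharing rule that depends on
# recipient, place, and time at once, as Kelley's study found typical.
# All group names, places, and thresholds are invented.

from datetime import time

def may_share_location(recipient_group, place, clock):
    # Share with close friends anywhere, at any time.
    if recipient_group == "close-friends":
        return True
    # Share with coworkers only while at the office during working hours.
    if recipient_group == "coworkers":
        return place == "office" and time(9, 0) <= clock <= time(17, 0)
    # Default: do not share.
    return False

assert may_share_location("close-friends", "home", time(23, 0))
assert may_share_location("coworkers", "office", time(10, 30))
assert not may_share_location("coworkers", "office", time(20, 0))
```

Even this toy rule already combines three dimensions; the study adds the user’s overall context as a fourth, which is why pre-configured profiles were needed to keep user burden manageable.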
Geolocation: Implementation experience
The first day’s final session focused on experience gathered with implementations of the W3C geolocation API. Jochen Eisinger (Google) spoke about the Google Chrome team’s observations. He noted that the specification left several aspects of the API’s privacy properties, including user interface considerations, unspecified. Chrome’s approach is to grant permissions to the pair of invoking origin and top-most origin: e.g., authorization extended to maps.google.com does not apply when maps.google.com is embedded by evil.example.com, and vice versa. Several workshop participants regarded this approach as an appropriate solution that should become part of future specifications. Eisinger also demonstrated the indicators in use in Google Chrome, and noted that users were barely interacting with them, based on available (and possibly skewed) data; users were also found to rarely, if ever, revoke permissions to access location data once granted.
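The origin-pair scoping described above can be sketched as follows. This is a minimal illustration of the idea, not Chrome’s actual implementation; the class and method names are invented.

```python
# Sketch of geolocation permission scoping keyed on the pair
# (invoking origin, top-most origin), as described for Chrome.
# Names are illustrative, not Chrome's internal API.

class GeolocationPermissions:
    def __init__(self):
        # Grants are stored per (invoking_origin, top_origin) pair,
        # not per invoking origin alone.
        self._granted = set()

    def grant(self, invoking_origin, top_origin):
        self._granted.add((invoking_origin, top_origin))

    def is_allowed(self, invoking_origin, top_origin):
        return (invoking_origin, top_origin) in self._granted

perms = GeolocationPermissions()
# User grants maps.google.com access when visiting it directly:
perms.grant("https://maps.google.com", "https://maps.google.com")

assert perms.is_allowed("https://maps.google.com", "https://maps.google.com")
# The grant does not carry over when the same origin is embedded elsewhere:
assert not perms.is_allowed("https://maps.google.com", "https://evil.example.com")
```

Keying on the pair rather than the single invoking origin is what prevents an embedding page from inheriting, or silently exploiting, a permission the user granted in a different context.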
Marcos Caceres (Opera Software) presented a study of various browser UI implementations (slides), based on three criteria: Can the end-user access options and information pertaining to privacy? Does the system afford control over privacy settings? Does the system afford anonymity or alternative means of protecting the user’s privacy? He studied mobile Safari on iOS 4, Firefox, Opera, and Google Chrome. The study found implementations and their iconography to be largely inconsistent. Further guidance on user interface design may be a useful work item for best practices work. During discussion, participants brought up the value of consistent iconography across different implementations.
Ioannis Krontiris (Goethe Universität Frankfurt) (slides) analyzed the W3C geolocation API and various mechanisms that could be used by attackers to track users’ location. He concluded that incorporating privacy policies into the W3C geolocation API would not by itself be sufficient to protect the privacy of mobile users; that additional privacy properties could be implemented in Web browsers that include the geolocation API; and that privacy controls should be kept as close to the mobile device as possible. He noted that location privacy concerns not only current location but also past location, which heightens the associated privacy concerns.
The second day of the workshop focused on specific technologies to help privacy on the Web. Thomas Dübendorfer (Google) emphasized the need for both transparency and user control and presented a proposal for social network based access control mechanisms in OpenSocial 1.1 (slides).
There was substantive feedback and discussion about the proposal’s specific aspects. There was general agreement that current privacy policies are neither read nor understood by most users and that a new, simpler approach is needed. While the privacy icons approach rests on the assumption that “unusual” departures from “normal” practices are highlighted, workshop participants noted that many of the “privacy-unfriendly” practices identified in the proposal were, in fact, almost universally used by e-commerce sites. Implicitly, summarizing privacy icons would also assume universal privacy preferences: by focusing on privacy practices causing “surprise” amongst users, they rely on common, baseline privacy concerns. It was also noted that the approach presented took up the same basic paradigm as P3P, and would fail in the market for the same reasons. Further, in the current market environment, trust seals are often shown by the sites least likely to respect users’ privacy.
John Morris (CDT) presented the general approach of binding privacy rules to data (slides), creating an environment in which the legal system might enforce the user’s privacy preferences. He drew an analogy to copyright notices, which bind a work to copyright law and rely on non-technical enforcement mechanisms. The key is to express privacy needs so as to legally establish a “reasonable expectation of privacy”.
The approach was further refined in presentations by Alissa Cooper (CDT) (slides) and Robin Berjon (Vodafone) (slides), who reviewed specific proposals under discussion in the W3C Device APIs and Policy Working Group. Cooper outlined an approach based on three main parameters: sharing, secondary use, and retention. Similar to privacy negotiations, users would attach a rule-set to personal information as they disclose it to a Web site. Even the least permissive setting in this approach would still permit the use of personal information for behavioral advertising.
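A rule-set along these lines might look like the sketch below: a small structure carrying the three parameters, attached to the data as it is disclosed. The field names and permitted values are hypothetical; the proposal under discussion in DAP may differ in both vocabulary and encoding.

```python
# Illustrative sketch of a privacy rule-set travelling with disclosed
# data, using the three parameters Cooper outlined: sharing, secondary
# use, and retention. All names and values are invented.

from dataclasses import dataclass, asdict
import json

@dataclass
class RuleSet:
    sharing: str        # e.g. "no-sharing", "affiliates", "anyone"
    secondary_use: str  # e.g. "none", "contextual", "behavioral-ads"
    retention: str      # e.g. "session", "30-days", "indefinite"

@dataclass
class DisclosedLocation:
    latitude: float
    longitude: float
    rules: RuleSet      # the rule-set is bound to the data it governs

payload = DisclosedLocation(51.5074, -0.1278,
                            RuleSet("no-sharing", "none", "session"))
# Serialize for transmission alongside the data itself:
wire_format = json.dumps(asdict(payload))
```

Because the rule-set is attached to the data rather than negotiated separately, a recipient (or a court, in the legal-enforcement framing above) can always point to the preferences that accompanied a given disclosure.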
Reservations raised during discussion of this approach included: the relative value of proactive vs. reactive privacy technologies (data minimization vs. policies enforced by law); the (lack of) incentives for service providers to actually honor privacy preferences; and the (lack of) integration of individual privacy decisions when they occur as part of a complex interaction, e.g., one involving augmented reality.
Participants also noted that standardizing both privacy icons and privacy rule-sets would require several agreements: (a) a definition of nomenclature and background terms, and (b) a specific set of privacy properties that matter to users and are acceptable to service providers. It was pointed out that P3P attempted to cover both of these areas, but suffered from the technical complexity that ensued. Workshop participants pointed out, however, that rule-sets do not need to capture the details of how data is used, so the situation is somewhat different. Participants nevertheless agreed that it makes sense for the Device APIs and Policy WG to consider the rule-set approach further.