Privacy is an essential part of the Web ([ETHICAL-WEB]). This document provides definitions for privacy and related concepts that are applicable worldwide. It also provides a set of privacy principles that should guide the development of the Web as a trustworthy platform. People using the Web would benefit from a stronger relationship between technology and policy, and this document is written to work with both.
This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.
This document is a Draft Finding of the Technical Architecture Group (TAG) which we are releasing as a Draft Note. The intent is for this document to become a W3C Statement. It was prepared by the Web Privacy Principles Task Force, which was convened by the TAG. Publication as a Draft Finding or Draft Note does not imply endorsement by the TAG or by the W3C Membership.
This draft does not yet reflect the consensus of the TAG or the task force and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to cite this document as anything other than a work in progress.
It will continue to evolve and the task force will issue updates as often as needed. At the conclusion of the task force, the TAG intends to adopt this document as a Finding.
This document was published by the Technical Architecture Group as a Group Draft Note using the Note track.
Group Draft Notes are not endorsed by W3C nor its Members.
This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
The W3C Patent Policy does not carry any licensing requirements or commitments on this document.
This document is governed by the 2 November 2021 W3C Process Document.
This document elaborates on the privacy principle in the W3C TAG Ethical Web Principles. It doesn't address how to balance the different principles in that document if they come into conflict.
Privacy is covered by legal frameworks and this document recognises that existing data protection laws take precedence for legal matters. However, because the Web is global, we benefit from having shared concepts to guide its evolution as a system built for the people using it ([RFC8890]). A clear and well-defined view of privacy on the Web, informed by research, can hopefully help all the Web's participants in different legal regimes. Our shared understanding is that the law is a floor, not a ceiling.
The Web is for everyone ([For-Everyone]). It is "a platform that helps people and provides a net positive social benefit" ([ETHICAL-WEB], [design-principles]). One of the ways in which the Web serves people is by protecting them in the face of asymmetries of power, and this includes establishing and enforcing rules to govern the power of data.
The Web is a social and technical system made up of information flows. Because this document is specifically about privacy as it applies to the Web, it focuses on privacy with respect to information flows. Our goal is not to cover all privacy issues, but rather to provide enough background to support the Web community in making informed decisions about privacy and to weave privacy into the architecture of the Web. Few architectural principles are absolute, and privacy is no exception: privacy can come into tension with other desirable properties of an ethical architecture, and when that happens the Web community will have to work together to strike the right balance.
Information is power. It can be used to predict and to influence people, as well as to design online spaces that control people's behaviour. The collection and processing of information in greater volume, with greater precision and reliability, with increasing interoperability across a growing variety of data types, and at intensifying speed is leading to an unprecedented concentration of power that threatens private and public liberties.
What's more, automation and the increasing computerisation of all aspects of our lives both increase the power of information and decrease the cost of a number of intrusive behaviours that would be more easily kept in check if the perpetrator had to be in the same room as the victim.
These asymmetries of information and of automation create commanding asymmetries of power.
Data governance is the system of principles that regulate information flows. When people are involved in information flows, data governance determines how these principles constrain and distribute the power of information between different actors. The principles describe the way in which different actors may, must, or must not produce or process flows of information from, to, or about other actors ([GKC-Privacy], [IAD]).
Typically, actors are people but they can also be collective endeavours such as companies, associations, or political parties (e.g. in the case of espionage), which are known as parties. It is important to keep in mind that not all people are equal in how they can resist the imposition of unfair principles: some people are more vulnerable and therefore in greater need of protection. The focus of this document is on the impact that this power differential can have on people, but it can also affect other actors, such as companies or governments.
Principles vary from context to context ([Understanding-Privacy], [Contextual-Integrity]): people have different expectations of privacy at work, at a café, or at home for instance. Understanding and evaluating a privacy situation is best done by clearly identifying:
It is important to keep in mind that there are always privacy principles and that all of them imply different power dynamics. Some principles may be more permissive, but that does not render them neutral — it merely indicates that they are supportive of the power dynamic that emerges from permissive processing. We must therefore determine which principles best align with ethical Web values in Web contexts ([ETHICAL-WEB], [Why-Privacy]).
Information flows as understood in this document are information exchanged or processed by actors. The information itself need not necessarily be personal data. Disruptive or interruptive information flowing to a person is in scope, as is de-identified data that can be used to manipulate people or that was extracted by observing people's behaviour on someone else's website.
Information flows need to be understood from more than one perspective: there is the flow of information about a person (the subject) being processed or transmitted to any other party, and there is the flow of information towards a person (the recipient). Recipients can have their privacy violated in multiple ways such as unexpected shocking images, loud noises while one intends to sleep, manipulative information, interruptive messages when a person's focus is on something else, or harassment when they seek social interactions.
A person's autonomy is their ability to make decisions of their own volition, without undue influence from other parties. People have limited intellectual resources and time with which to weigh decisions, and by necessity rely on shortcuts when making decisions. This makes their preferences, including privacy preferences, malleable and susceptible to manipulation ([Privacy-Behavior], [Digital-Market-Manipulation]). A person's autonomy is enhanced by a system or device when that system offers a shortcut that aligns more with what that person would have decided given arbitrary amounts of time and relatively unlimited intellectual ability; and autonomy is decreased when a similar shortcut goes against decisions made under such ideal conditions.
Affordances and interactions that decrease autonomy are known as dark patterns. A dark pattern does not have to be intentional ([Dark-Patterns], [Dark-Pattern-Dark]).
Because we are all subject to motivated reasoning, the design of defaults and affordances that may impact autonomy should be the subject of independent scrutiny.
Given the large volume of potential data-related decisions in today's data economy, complete informational self-determination is impossible. This fact, however, should not be confused with the idea that privacy is dead. Studies show that people remain concerned over how their data is processed, feeling powerless and like they have lost agency ([Privacy-Concerned]). Careful design of our technological infrastructure can ensure that people's autonomy with respect to their own data is enhanced through appropriate defaults and choice architectures.
Different procedural mechanisms exist to enable people to control how they interact with systems in the world. Mechanisms that increase the number of purposes for which their data is being processed are referred to as opt-in or consent; mechanisms that decrease this number of purposes are known as opt-out.
When deployed thoughtfully, these mechanisms can enhance people's autonomy. Often, however, they are used as a way to avoid putting in the difficult work of deciding which types of processing are appropriate and which are not, offloading privacy labour to the people using a system.
In specific cases, people should be able to consent to more sensitive purposes, such as having their identity recognised across contexts or their reading history shared with a company. In such cases, the burden of proof for ensuring that informed consent has been obtained needs to be very high. Consent is comparable to the general problem of permissions on the Web platform. In the same way that it should be clear when a given device capability is in use (e.g. you are providing geolocation or camera access), sharing data should be set up in such a way that it requires deliberate, specific action from the person (e.g. triggering a form control that is not on a modal dialog), and if that consent is persistent, there should be a vivid indicator that data is being transmitted shown at all times, in such a way that the person can easily switch it off. In general, providing consent should be rare, difficult, highly intentional, and temporary.
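As an illustration of requiring deliberate, specific action, here is a minimal sketch (in TypeScript, with a hypothetical button id) that only requests geolocation access in response to an explicit press of a control, rather than on page load or from a modal dialog.

```ts
// Minimal sketch: request a sensitive capability only in response to a
// deliberate, specific user action (the button id is hypothetical).
const shareLocationButton = document.querySelector<HTMLButtonElement>("#share-location");

shareLocationButton?.addEventListener("click", () => {
  // The permission prompt appears in the context of the person's own action,
  // not as an unprompted modal on page load.
  navigator.geolocation.getCurrentPosition(
    (position) => {
      console.log("Location shared deliberately:", position.coords.latitude, position.coords.longitude);
    },
    (error) => {
      // Respect refusal: do not nag or automatically re-prompt.
      console.log("Person declined to share their location:", error.message);
    }
  );
});
```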
When an opt-out mechanism exists, it should preferably be complemented by a global opt-out mechanism. The function of a global opt-out mechanism is to rectify the automation asymmetry whereby service providers can automate data processing but people have to take manual action to prevent it. A good example of a global opt-out mechanism is the Global Privacy Control [GPC].
Conceptually, a global opt-out mechanism is an automaton operating as part of the user agent, which is to say that it is equivalent to a robot that would carry out a person's bidding by pressing an opt-out button with every interaction that the person has with a site, or that more generally conveys an expression of the person's rights in a relevant jurisdiction. (For instance, under [GDPR], the person may be conveying objections to processing based on legitimate interest or the withdrawal of consent to specific purposes.) It should be noted that, since a global opt-out signal is reaffirmed automatically with every interaction, it takes precedence in terms of specificity over any general consent obtained by a site, and is only superseded by specific consent obtained through a deliberate action taken by the person with the intent of overriding their global opt-out.
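As a concrete illustration, a site could honour the Global Privacy Control signal both on the client, via navigator.globalPrivacyControl, and on the server, via the Sec-GPC request header. The sketch below assumes a hypothetical recordOptOut function for the site's own opt-out handling.

```ts
// Minimal sketch of honouring a global opt-out signal (Global Privacy Control).
// recordOptOut() is a hypothetical site-specific function.
declare function recordOptOut(reason: string): void;

// Client side: the GPC proposal exposes a boolean on navigator.
const gpc = (navigator as Navigator & { globalPrivacyControl?: boolean }).globalPrivacyControl;
if (gpc === true) {
  // Treat the signal as an objection to processing or withdrawal of consent,
  // taking precedence over any general consent previously obtained.
  recordOptOut("navigator.globalPrivacyControl");
}

// Server side (e.g. in a request handler): the same signal arrives as the
// `Sec-GPC: 1` request header.
function handleRequest(headers: Record<string, string | undefined>): void {
  if (headers["sec-gpc"] === "1") {
    recordOptOut("Sec-GPC header");
  }
}
```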
Privacy labour is the practice of having a person carry out the work of ensuring data processing of which they are the subject or recipient is appropriate, instead of having the parties be responsible for that work. Data systems that are based on asking people for their consent tend to increase privacy labour.
More generally, implementations of privacy are often dominated by self-governing approaches that offload labour to people. This is notably true of the regimes descended from the Fair Information Practices (FIPs), a loose set of principles initially elaborated in the 1970s in support of individual autonomy in the face of growing concerns with databases. The FIPs generally assume that there is sufficiently little data processing taking place that any person will be able to carry out sufficient diligence to enable autonomy in their decision-making. Since they entirely offload the privacy labour to people and assume perfect, unlimited autonomy, the FIPs do not forbid specific types of data processing but only place them under different procedural requirements. Such an approach is appropriate for parties that are processing data in the 1970s.
One notable issue with procedural, self-governing approaches to privacy is that they tend to have the same requirements in situations where people find themselves in a significant asymmetry of power with a party — for instance a person using an essential service provided by a monopolistic platform — and those where people and parties are very much on equal footing, or even where the person may have greater power, as is the case with small businesses operating in a competitive environment. Such approaches further do not consider cases in which one party may coerce other parties into facilitating its inappropriate practices, as is often the case with dominant players in advertising or in content aggregation ([Consent-Lackeys], [CAT]).
References to the FIPs survive to this day. They are often invoked as "transparency and choice", which, in today's digital environment, is often an indication that inappropriate processing is being described.
Privacy principles are socially negotiated and the definition of privacy is essentially contested ([Privacy-Contested]). This makes privacy a problem of collective action ([GKC-Privacy]). Group-level data processing may impact populations or individuals, including in ways that people could not control even under the optimistic assumptions of consent. For example, based on group-level analysis, a company may know that site.example is predominantly visited by people of a given race or gender, and decide not to run its job ads there. Visitors to that page are implicitly having their data processed in inappropriate ways, with no way to discover the discrimination or seek relief ([Relational-Governance]).
What we consider is therefore not just the relation between the people who share data and the parties that invite that disclosure ([Relational-Turn]), but also between the people who may find themselves categorised indirectly as part of a group even without sharing data. One key understanding here is that such relations may persist even when data is de-identified. What's more, such categorisation of people, voluntary or not, changes the way in which the world operates. This can produce self-reinforcing loops that can damage both individuals and groups ([Seeing-Like-A-State]).
In general, collective issues in data require collective solutions. Web standards help with data governance by defining structural controls in user agents and establishing or delegating to institutions that can handle issues of privacy. Governance will often struggle to achieve its goals if it works primarily by increasing individual control instead of acting collectively.
Collecting data at large scales can have significant pro-social outcomes. Problems tend to emerge when actors process data for collective benefit and for self-dealing purposes at the same time. The self-dealing purposes are often justified as bankrolling the pro-social outcomes but this requires collective oversight to be appropriate.
There are different ways for people to become members of a group. Either they can join it deliberately, making it a self-constituted group such as when joining a club, or they can be classified into it by an external party, typically a bureaucracy or its computerised equivalent ([Beyond-Individual]). In the latter case, people may not be aware that they are being grouped together, and the definition of the group may not be intelligible (for instance if it is created from opaque machine learning techniques).
Protecting group privacy can take place at two different levels. The existence of a group, or at least its activities, may need to be protected even in cases in which its members are guaranteed to remain anonymous; we refer to this as "group privacy". Conversely, people may wish to protect knowledge that they are members of a group even though the existence of the group and its actions may be well known (e.g. membership in a dissident movement under authoritarian rule); we call this "membership privacy". An example of a violation of the former is the fitness app Strava, which did not reveal individual behaviour or identity but published heat maps of popular running routes. In doing so, it revealed secret US bases around which military personnel took frequent runs ([Strava-Debacle], [Strava-Reveal-Military]).
When people do not know that they are members of a group, when they cannot easily find other members of the group so as to advocate for their rights together, or when they cannot easily understand why they are being categorised into a given group, their ability to protect themselves through self-governing approaches to privacy is largely eliminated.
One common problem in group privacy is when the actions of one member of a group reveal information that other members would prefer were not shared in this way (or at all). For instance, one person may publish a picture of an event in which they are featured alongside others, while the other people captured in the same picture would prefer their participation not to be disclosed. Another example of such issues are sites that enable people to upload their contacts: the person performing the upload might be more open to disclosing their social networks than the people they are connected to are. Such issues do not necessarily admit simple, straightforward solutions, but they need to be carefully considered by people building websites.
While transparency rarely does enough to inform the individual choices that people make or to increase their autonomy, it plays a critical role in letting researchers and reporters inform our collective decision-making about privacy principles. This consideration extends the TAG's resolution on a Strong and Secure Web Platform to ensure that "broad testing and audit continues to be possible" where information flows and automated decisions are involved.
Such transparency can only function if there are strong rights of access to data (including data derived from one's personal data) as well as mechanisms to explain the outcomes of automated decisions.
The user agent acts as an intermediary between a person (its user) and the web. User agents implement, to the extent possible, the principles that collective governance establishes in favour of individuals. They seek to prevent the creation of asymmetries of information, and serve their user by providing them with automation to rectify automation asymmetries. Where possible, they protect their user from receiving intrusive messages.
The user agent is expected to align fully with the person using it and to operate exclusively in that person's interest. It is not the first party. The user agent serves the person as a trustworthy agent: it always puts that person's interest first. On some occasions, this can mean protecting that person from themselves by preventing them from carrying out a dangerous decision, or by slowing that decision down. For example, the user agent will make it difficult for that person to connect to a site if it can't verify that the site is authentic. It will check that the person really intends to expose a sensitive device to a page. It will prevent that person from consenting to the permanent monitoring of their behaviour. Its user agent duties include ([Taking-Trust-Seriously]):
These duties ensure the user agent will care for its user. In academic research, this relationship with a trustworthy agent is often described as "fiduciary" [Fiduciary-UA]. Some jurisdictions may have a distinct legal meaning for "fiduciary."
Many of the principles described in the rest of this document extend the user agent's duties and make them more precise.
While privacy principles are designed to work together and support each other, occasionally a proposal to improve how a system follows one privacy principle may reduce how well it follows another principle.
Given any initial design that doesn't perfectly satisfy all principles, there are usually some other designs that improve the situation for some principles without sacrificing anything about the other principles. Work to find those designs.
Another way to say this is to look for Pareto improvements before starting to trade off between principles.
Once one is choosing between different designs at the Pareto frontier, the choice of which privacy principles to prefer is complex and depends heavily on the details of each particular situation. Note that people's privacy can also be in tension with non-privacy concerns. As discussed in the W3C TAG Ethical Web Principles, "it is important to consider the context in which a particular technology is being applied, the expected audience(s) for the technology, who the technology benefits and who it may disadvantage, and any power dynamics involved". Despite this complexity, there is a basic ground rule to follow:
This is a special case of the more general principle that data should not be used for more purposes than the data's subjects understood it was being collected for.
A service should explain how it uses people's data to protect them and other people, and how it might additionally use someone's data if it believes that person has broken the rules.
It is attractive to say that if someone violates the norms of a service they're using, then they sacrifice a proportionate amount of their privacy protections, but
The following examples illustrate some of the tensions:
As indicated above, different contexts require different principles. This section describes a set of principles designed to apply to the Web context in general. The Web is a big place, and we fully expect more specific contexts of the Web to add their own principles to further constrain information flows.
To the extent possible, user agents are expected to enforce these principles. However, this is not always possible and additional enforcement mechanisms are needed. One particularly salient issue is that a context is not defined in terms of who owns or controls it (it is not a party). Sharing data between different contexts of a single company is just as much a privacy violation as if the same data were shared between unrelated parties.
A person's identity is the set of characteristics that define them. Their identity in a context is the set of characteristics they present to that context. People frequently present different identities to different contexts, and also frequently share an identity among several contexts.
Cross-context recognition is the act of recognising that an identity in one context is the same person as an identity in another context. Cross-context recognition can at times be appropriate but anyone who does it needs to be careful not to apply the principles of one context in ways that violate the principles around use of information acquired in a different context. (For example, if you meet your therapist at a cocktail party, you expect them to have rather different discussion topics with you than they usually would, and possibly even to pretend they do not know you.) This is particularly true for vulnerable people as recognising them in different contexts may force their vulnerability into the open.
In computer systems and on the Web, an identity seen by a particular website is typically assigned an identifier of some type, which makes it easier for an automated system to store data about that person. Examples of identifiers for a person can be:
Strings derived from identifiers, for instance through hashing, are still identifiers so long as they may identify a person.
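To make this concrete, the following sketch hashes an email address with SHA-256. The resulting string looks opaque, but because the same input always produces the same output, any party holding it can still match records about the same person, so it remains an identifier.

```ts
// Minimal sketch: hashing does not de-identify. The digest below is stable
// for a given email address, so any party holding it can match records about
// the same person, just as they could with the raw address.
async function hashedIdentifier(email: string): Promise<string> {
  const bytes = new TextEncoder().encode(email.trim().toLowerCase());
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Two parties independently computing hashedIdentifier("jo@example.com")
// obtain the same string and can join their datasets on it.
```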
Sometimes this means the UA should ensure that one site can't learn anything about its user's behavior on another site, while at other times the UA should help its user prove to one site that they have a particular identity on another site.
To do this, user agents have to make some assumptions about the borders between contexts. By default, user agents define a machine-enforceable context or partition as:
Even though this is the default, user agents are free to restrict this context as people need. For example, some user agents may help people present different identities to subdivisions of a single site.
User agents should prevent people from being recognized across machine-enforceable contexts unless they intend to be recognized. This is a "should" rather than a "must" because there are many cases where the user agent isn't powerful enough to prevent recognition. For example if two or more services that a person needs to use insist that they share a difficult-to-forge piece of their identity in order to use the services, it's the services behaving inappropriately rather than the user agent.
If a site includes multiple contexts whose principles indicate that it's inappropriate to share data between the contexts, the fact that those distinct contexts fall inside a single machine-enforceable context doesn't make sharing data or recognizing identities any less inappropriate.
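As a rough illustration of how a machine-enforceable partition might be enforced, a user agent can key every piece of site storage by a partition key, so that the same embedded origin sees unrelated storage under different top-level sites or browser profiles. The sketch below assumes a simplified partition key made up of the browser profile and the top-level site; real user agents use more detailed keys.

```ts
// Rough sketch of partitioned storage, assuming a simplified partition key of
// (browser profile, top-level site). Real partition keys are more detailed.
type PartitionKey = { profile: string; topLevelSite: string };

class PartitionedStorage {
  private stores = new Map<string, Map<string, string>>();

  private keyFor(partition: PartitionKey, origin: string): string {
    return `${partition.profile}|${partition.topLevelSite}|${origin}`;
  }

  set(partition: PartitionKey, origin: string, name: string, value: string): void {
    const key = this.keyFor(partition, origin);
    const store = this.stores.get(key) ?? new Map<string, string>();
    store.set(name, value);
    this.stores.set(key, store);
  }

  get(partition: PartitionKey, origin: string, name: string): string | undefined {
    // The same embedded origin under a different top-level site (or a
    // different profile) reads from a different, empty store.
    return this.stores.get(this.keyFor(partition, origin))?.get(name);
  }
}
```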
Contributes to surveillance, correlation, and identification.
As described in 2.1 Identity on the Web, cross-context recognition can sometimes be appropriate, but people need to be able to control, as much as possible, when websites do it.
Partitions are separated in two ways that lead to distinct kinds of user-visible recognition. When their divisions between different sites are violated, that leads to unwanted cross-site recognition. When a violation occurs at their other divisions, for example between different browser profiles or at the point someone clears their cookies and site storage, that leads to unwanted same-site recognition.
The web platform offers many ways for a website to recognize that a person is using the same identity over time, including cookies, CacheStorage, and other forms of storage. This allows sites to save the person's preferences, shopping carts, etc., and people have come to expect this behavior in some contexts.
A privacy harm occurs if a person reasonably expects that they'll be using a different identity on a site, but the site discovers and uses the fact that the two or more visits probably came from the same person anyway.
User agents can't, in general, determine exactly where intra-site context boundaries are, or how a site allows a person to express that they intend to change identities, so they're not responsible for enforcing that sites actually separate identities at those boundaries. The principle here instead requires separation at partition boundaries.
Cross-partition recognition is generally accomplished by either "supercookies" or browser fingerprinting.
Supercookies occur when a browser stores data for a site but makes that data more difficult to clear than other cookies or storage. Fingerprinting Guidance § Clearing all local state discusses how specifications can help browsers avoid this mistake.
Fingerprinting consists of using attributes of the person's browser and platform that are consistent between two or more visits and probably unique to the person.
The attributes can be exposed as information about the person's device that is otherwise benign (as opposed to 2.3 Sensitive Information). For example:
See [fingerprinting-guidance] for how to mitigate this threat.
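To illustrate how individually benign attributes can combine into a fingerprint, the sketch below concatenates a few commonly available browser properties and hashes them; each one reveals little on its own, but together they may be stable across visits and close to unique to a person. A real fingerprinter would use many more signals.

```ts
// Illustrative sketch of browser fingerprinting from individually benign
// attributes. A real fingerprinter would combine many more signals.
async function naiveFingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,
    navigator.language,
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    String(new Date().getTimezoneOffset()),
    String(navigator.hardwareConcurrency),
  ].join("|");

  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(signals));
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```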
A privacy harm occurs if a site determines with high probability and uses the fact that a visit to that site comes from the same person as another visit to a different site, unless the person could reasonably expect the sites to discover this. Traditionally, sites have accomplished this using cross-site cookies, but it can also be done by having someone navigate to a link that has been decorated with an identifier, collecting the same piece of identifying information on both sites, or by correlating the timestamps of an event that occurs nearly-simultaneously on both sites.
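One of the techniques mentioned above, link decoration, can be as simple as appending an identifier to outgoing links so that the destination site can tie the visit back to the identity known to the referring site. The sketch below is illustrative only; the visitor_id parameter name is hypothetical.

```ts
// Illustrative sketch of link decoration: an identifier from site A is
// appended to links pointing at site B, letting B recognise the visitor as
// the same person A knows. The "visitor_id" parameter name is hypothetical.
function decorateLink(href: string, visitorId: string): string {
  const url = new URL(href);
  url.searchParams.set("visitor_id", visitorId);
  return url.toString();
}

// On the destination site, reading the parameter back re-identifies the person:
// new URL(location.href).searchParams.get("visitor_id")
```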
Contributes to correlation, identification, secondary use, and disclosure.
Many pieces of information about someone could cause privacy harms if disclosed. For example:
A particular piece of information may have different sensitivity for different people. Language preferences, for example, might typically seem innocent, but also can be an indicator of belonging to an ethnic minority. Precise location information can be extremely sensitive (because it's identifying, because it allows for in-person intrusions, because it can reveal detailed information about a person's life) but it might also be public and not sensitive at all, or it might be low-enough granularity that it is much less sensitive for many people.
When considering whether a class of information is likely to be sensitive to a person, consider at least these factors:
Issue(16): This description of what makes information sensitive still needs to be refined.
People have certain rights over data that is about themselves, and these rights should be facilitated by their user agent and the parties that are processing their data.
While data rights alone are not sufficient to satisfy all privacy principles for the Web, they do support self-determination and help improve accountability. Such rights include:
This right includes being able to discover data about oneself (implying no databases are kept secret) and to review what information has been collected or inferred.
The right to erase applies whether or not the person is terminating use of a service altogether, though what data can be erased may differ between those two cases. On the Web, people may wish to erase data on their device, on a server, or both, and the distinctions may not always be clear.
Portability is needed to realize the ability for people to make choices about services with different data practices. Standards for interoperability are essential for effective re-use.
The right to be free from automated decision-making based on data about oneself.
For some kinds of decision-making with substantial consequences, there is a privacy interest in being able to exclude oneself from automated profiling. For example, some services may alter the price of products (price discrimination) or offers for credit or insurance based on data collected about a person. Those alterations may be consequential (financially, say) and objectionable to people who believe those decisions based on data about them are inaccurate or unjust. As another example, some services may draw inferences about a user's identity, humanity or presence based on facial recognition algorithms run on camera data. Because facial recognition algorithms and training sets are fallible and may exhibit certain biases, people may not wish to submit to decisions based on that kind of automated recognition.
People may change their decisions about consent or may object to subsequent uses of data about themselves. Retaining rights requires ongoing control, not just at the time of collection.
The OECD Privacy Principles [OECD-Guidelines], [Records-Computers-Rights], and the [GDPR], among other places, include many of the rights people have as data subjects. These participatory rights by people over data about themselves are inherent to autonomy.
Data is de-identified when there exists a high level of confidence that no person described by the data can be identified, directly or indirectly (e.g. via association with an identifier, user agent, or device), by that data alone or in combination with other available information. Note that further considerations relating to groups are covered in the Collective Issues in Privacy section.
We talk of controlled de-identified data when there are strict controls that prevent the re-identification of people described by the data except for a well-defined set of purposes.
Different situations involving controlled de-identified data will require different controls. For instance, if the controlled de-identified data is only being processed by one party, typical controls include making sure that the identifiers used in the data are unique to that dataset, that any person (e.g. an employee of the party) with access to the data is barred (e.g. based on legal terms) from sharing the data further, and that technical measures exist to prevent re-identification or the joining of different data sets involving this data, notably against timing or k-anonymity attacks.
In general, the goal is to ensure that controlled de-identified data is used in a manner that provides a viable degree of oversight and accountability such that technical and procedural means to guarantee the maintenance of pseudonymity are preserved.
This is more difficult when the controlled de-identified data is shared between several parties. In such cases, typical controls that are representative of best practices include making sure that:
when these identifiers are shared with a third party, they are made unique to that third party, such that if the data is shared with more than one third party, those third parties cannot match the identifiers up with one another (one way to implement this is sketched below);
technical measures exist to prevent re-identification or the joining of different data sets involving this data, notably against timing or k-anonymity attacks; and
Note that controlled de-identified data, on its own, is not sufficient to render data processing appropriate.
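One way to implement the control of making shared identifiers unique to each third party is to derive them with a secret key that is specific to each recipient, so that two recipients cannot join their datasets on the identifier. The following is a server-side sketch (Node.js-style TypeScript) under the assumption that the sharing party manages one secret key per recipient; the function and variable names are hypothetical.

```ts
// Sketch: deriving a recipient-specific identifier with HMAC-SHA-256 so that
// two different third parties receive values that cannot be matched up.
// Key management (one secret per recipient) is assumed and not shown.
import { createHmac } from "node:crypto";

function identifierForRecipient(internalId: string, recipientKey: Buffer): string {
  return createHmac("sha256", recipientKey).update(internalId).digest("hex");
}

// identifierForRecipient("person-42", keyForPartnerA) and
// identifierForRecipient("person-42", keyForPartnerB) are unrelated strings,
// so partners A and B cannot join their datasets on them.
```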
Privacy principles are often defined in terms of extending rights to individuals. However, there are cases in which deciding which principles apply is best done collectively, on behalf of a group.
One such case, which has become increasingly common with widespread profiling, is that of information relating to membership of a group or to a group's behaviour, as detailed in 1.2.1 Group Privacy. As Brent Mittelstadt explains, “Algorithmically grouped individuals have a collective interest in the creation of information about the group, and actions taken on its behalf.” ([Individual-Group-Privacy]) This justifies ensuring that grouped people can benefit from both individual and collective means to support their autonomy with respect to data processing. It should be noted that processing can be unjust even if individuals remain anonymous, not from the violation of individual autonomy but because it violates ideals of social equality ([Relational-Governance]).
Another case in which collective decision-making is preferable is processing for which informed individual decision-making is unrealistic (due to the complexity of the processing, the volume or frequency of processing, or both). Expecting laypeople (or even experts) to make informed decisions relating to complex data processing, or to make decisions on a very frequent basis even if the processing is relatively simple, is unrealistic if we also want them to have reasonable levels of autonomy in making those decisions.
The purpose of this principle is to require that data governance provide ways to distinguish appropriate data processing without relying on individual decisions whenever the latter are impossible, which is often ([Relational-Governance], [Relational-Turn]).
Which forms of collective governance are recognised as legitimate will depend on the domain. These may take many forms, such as governmental bodies at various administrative levels, standards organisations, worker bargaining units, or civil society fora.
It must be noted that, even though collective decision-making can be better than offloading privacy labour to individuals, it is not necessarily a panacea. When considering such collective arrangements it is important to keep in mind the principles that are likely to support viable and effective institutions at any level of complexity ([IAD]).
A good example of a failure in collective privacy decisions was the standardisation of the ping attribute. Search engines, social sites, and other algorithmic media in the same vein have an interest in knowing which of the sites that they link to people choose to visit (which in turn could improve the service for everyone). But people may have an interest in keeping that information private from algorithmic media companies (as do the sites being linked to, as that facilitates timing attacks to recognise people there). A person's exit through a specific link can nevertheless be tracked with redirects (bounce tracking), which is slow for people and difficult for user agents to defend against. The value proposition of the ping attribute in this context is therefore straightforward: by providing declarative support for this functionality it can be made fast (the browser sends an asynchronous notification to a ping endpoint after the person exits through a link) and the user agent can provide its user with the option to opt out of such tracking — or disable it by default.

Unfortunately, this arrangement proved to be unworkable on the privacy side (the performance gains, however, are real). What prevents a site from using ping for people who have it activated and bounce tracking for others? What prevents a browser from opting everyone out because it wishes to offer better protection by default? Given the contested nature of the ping attribute and the absence of a forcing function to support collective enforcement, the scheme failed to deliver its intended privacy improvements.
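For reference, the ping attribute itself is declarative and simple to use: it lists one or more URLs that the browser notifies asynchronously when the link is followed. A minimal sketch, with hypothetical URLs:

```ts
// Minimal sketch of the declarative hyperlink auditing discussed above.
// Equivalent HTML: <a href="https://destination.example/article"
//                     ping="https://links.example/ping">Read more</a>
const link = document.createElement("a");
link.href = "https://destination.example/article";
link.ping = "https://links.example/ping"; // the browser notifies this endpoint asynchronously on navigation
link.textContent = "Read more";
document.body.append(link);
```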
Most of our thinking about stakeholders on the web follows the priority of constituencies: we consider users, authors, implementors, and specifiers. Sometimes we also consider society or national governments. In w3ctag/design-reviews#606 and other enterprise-focused designs we need to address device owners or administrators, and I haven't been able to find any W3C discussion of how to consider them.
This is a broader question than just the privacy principles we're working on here, but some of the answer belongs here. We should say something about the power dynamics of employees, children, romantic partners, etc. who have to use devices owned by other people who might not always have their best interests in mind, and how the UA is responsible to protect the user while also respecting whatever rights the device owner ought to have.
Receiving unsolicited information that either may cause distress or waste the recipient's time or resources is a violation of privacy.
Unwanted information covers a broad range of unsolicited communication, from messages that are typically harmless individually but that become a nuisance in aggregate (spam) to the sending of images that will cause shock or disgust due to their graphic, violent, or explicit nature (eg. pictures of one's genitals). While it is impossible, in a communication system involving many people, to offer perfect protection against all kinds of unwanted information, steps can be taken to make the sending of such messages more difficult or more costly, and to render the senders more accountable. Examples of mitigations include:
Attempts to obtain consent to processing that is not in accordance with the person's true preferences result in imposing unwanted privacy labour on the person, and may result in people erroneously giving consent that they regret later.
Examples of alternatives to interrupting users with consent requests include:
Considering the information sharing norms in the site's audience and category, and requesting only consent that is appropriate to the purpose of the site. (For example, a photo sharing site's users might expect to be prompted for consent to share their uploaded work.) Sites should consider conducting user research on people's expectations for how data is processed.
Delaying a prompt for consent until a user does something that puts the request in context, which will also help them give an informed response.
Notifications and other interruptive UI can be a powerful way to capture attention. Depending on the operating system in use, a notification can appear outside of the browser context (for example, in a general notifications tray) or even cause a device to buzz or play an alert tone. Like all powerful features, notifications can be misused and can become an annoyance or even used to manipulate behaviour and thus reduce autonomy.
User agents should provide UI that allows their users to audit which web sites have been granted permission to display alerts and to revoke these permissions. User agents should also apply some quality metric to the initial request for permissions to receive notifications (for example, disallowing sites from requesting permission on first visit).
When requesting permission to send interruptive notifications, web sites should tell their users what specific kind of information they can expect to receive and how notifications can be turned off. Web sites should not request permission to send notifications when the user is unlikely to have sufficient knowledge (e.g. information about what kinds of notifications they are signing up for) to make an informed response. If it is unlikely that such information could have been provided, then the user agent should apply mitigations (for example, warning about potential malicious use of the notifications API). Permissions should be requested in context.
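As an illustration of requesting notification permission in context, the sketch below (with a hypothetical button id) only prompts after the person takes a related action, and never re-prompts someone who has already declined.

```ts
// Sketch: request notification permission only in context, in response to a
// deliberate action by the person. The button id is hypothetical.
const notifyButton = document.querySelector<HTMLButtonElement>("#notify-me");

notifyButton?.addEventListener("click", async () => {
  if (Notification.permission === "denied") {
    return; // Respect an earlier refusal; do not nag.
  }
  if (Notification.permission === "default") {
    // The prompt appears in the context of the person's own action,
    // not on first visit to the page.
    const result = await Notification.requestPermission();
    if (result !== "granted") return;
  }
  new Notification("We will let you know when your order ships.");
});
```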
Whenever people have the ability to cause a party to process less of their data or to stop carrying out some given set of data processing that is not essential to the service, they must be allowed to do so without the party retaliating, for instance by artificially removing an unrelated feature, by decreasing the quality of the service, or by trying to cajole, badger, or trick the person into opting back into the processing.
A person (also user or data subject) is any natural person. Throughout this document, we primarily use person or people to refer to human beings, as a reminder of their humanity. When we use the term user, it is to talk about the specific person who happens to be using a given system at that time.
A vulnerable person is a person who may be unable to exercise sufficient self-determination in a context. Amongst other things, they should be treated with greater default privacy protections and may be considered unable to consent to various interactions with a system. People can be vulnerable for different reasons, for example because they are children, are employees with respect to their employers, are facing a steep asymmetry of power, are people in some situations of intellectual or psychological impairment, are refugees, etc.
A context is a physical or digital environment that a person interacts with for a set of purposes (that they typically share with other people who interact with the same environment).
A party is an entity that a person can reasonably understand as a single "thing" they're interacting with. Uses of this document in a particular domain are expected to describe how the core concepts of that domain combine into a user-comprehensible party, and those refined definitions are likely to differ between domains.
User agents tend to explain to people which origin or site provided the web page they're looking at. The party that controls this origin or site is known as the web page's first party. When a person interacts with a UI element on a web page, the first party of that interaction is usually the web page's first party. However, if a different party controls how data collected with the UI element is used, and a reasonable person with a realistic cognitive budget would realize that this other party has this control, this other party is the first party for the interaction instead.
The first party to an interaction is accountable for the processing of data produced by that interaction, even if another party does the processing.
A third party is any party other than the person visiting the website or the first parties they expect to be interacting with.
The Vegas Rule is a simple implementation of privacy in which "what happens with the first party stays with the first party." Put differently, the Vegas Rule is followed when the first party is the only data controller. While the Vegas Rule is a good guideline, it's neither necessary nor sufficient for appropriate data processing. A first party that maintains exclusive access to a person's data can still process it inappropriately, and there are cases where a third party can learn information about a person but still treat it appropriately.
We define personal data as any information that is directly or indirectly related to an identified or identifiable person, such as by reference to an identifier ([GDPR], [OECD-Guidelines], [Convention-108]).
If a person could reasonably be identified or re-identified when data is combined with other data, then that data is personal data.
Privacy is achieved, in a given context that involves personal data or information being presented to people, when the principles of that context are followed appropriately. When the principles for that context are not followed, there is a privacy violation. Similarly, we say that a particular interaction is appropriate when the principles are adhered to, or inappropriate otherwise.
A party processes data if it carries out operations on personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, sharing, dissemination or otherwise making available, selling, alignment or combination, restriction, erasure or destruction.
A party shares data if it provides it to any other party. Note that, under this definition, a party that provides data to its own service providers is not sharing it.
A party sells data when it shares it in exchange for consideration, monetary or otherwise.
The purpose of a given processing of data is an anticipated, intended, or planned outcome of this processing which is achieved or aimed for within a given context. A purpose, when described, should be specific enough to be actionable by someone familiar with the relevant context (ie. they could independently determine means that reasonably correspond to an implementation of the purpose).
The means are the general method of data processing through which a given purpose is implemented, in a given context, considered at a relatively abstract level and not necessarily all the way down to implementation details. Example: a person will have their preferences restored (purpose) by looking up their identifier in a preferences store (means).
A data controller is a party that determines the means and purposes of data processing. Any party that is not a service provider is a data controller.
A service provider or data processor is considered to be the same party as the actor contracting it to perform the relevant processing if it:
User agents should attempt to defend the people using them from a variety of high-level threats or attacker goals, described in this section.
These threats are an extension of the ones discussed by [RFC6973].
These threats combine into the particular concrete threats we want web specifications to defend against, described in the sections that follow.
Some of the definitions in this document build on top of the work in Tracking Preference Expression (DNT).
The following people, in alphabetical order of their first name, were instrumental in producing this document: Amy Guy, Christine Runnegar, Dan Appelquist, Don Marti, Jonathan Kingston, Nick Doty, Peter Snyder, Sam Weiler, Tess O'Connor, and Wendy Seltzer.