This document lists a set of questions one could ask about the security and privacy impact of a new feature or specification. It is meant as a tool that groups or individuals can use as a guide during a self-review, pointing towards important questions in areas where expertise might be lacking.

It is not meant as a "security checklist", nor does an editor or group’s use of this questionnaire obviate the editor or group’s responsibility to obtain "wide review" of a specification’s security and privacy properties before publication.

Status of This Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This document was published by the Technical Architecture Group as a First Public Working Group Note.
Please raise issues on this document on the Security and Privacy Questionnaire GitHub Issue page.
If you wish to make other comments regarding this document, please send them to www-tag@w3.org (subscribe, archives). All comments are welcome.

Publication as a Working Group Note does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

This document is governed by the 1 September 2015 W3C Process Document.

1. Introduction

Adding features to the web is a tricky thing; on the one hand, we want to provide developers with access to all the things they need in order to build amazing experiences. On the other, we need to ensure that we don’t accidentally hand over too much power to malicious folks who could abuse it, or unintentionally expose people’s private data without adequate controls. Ideally, careful review of every specification we publish will allow us to strike the right balance.

Working groups can (and should) begin this review process early, of course. It’s far easier to mitigate risks to users of the web before a feature is finalized and shipped in user agents; changing APIs or introducing restrictions becomes nigh impossible once the web begins to depend on a particular implementation.

This document encourages early review by posing a number of questions that you, as an individual reader of a specification, can ask—and that working groups and spec editors might consider themselves, before asking for more formal review. The intent is to highlight areas which have historically had interesting implications for a user’s security or privacy, and thereby to focus the attention of editors, working groups, and reviewers on areas that might previously have been overlooked.

Note: Answering these questions obviously doesn’t constitute "wide review" in and of itself, but could provide a helpful basis of understanding upon which future reviewers can build.

2. Threat Models

"Security" and "Privacy" are big concepts. In order to pare them down to something which could feasibly guide working groups' decisions, let’s consider the types of threats to both which the web makes possible:

2.1. Passive Network Attackers

A passive network attacker has read-access to the bits going over the wire between users and the servers they’re communicating with. She can’t modify those bits, but she can collect and analyze them.

Due to the decentralized nature of the internet, and the general level of interest in user activity, it’s reasonable to assume that practically every unencrypted bit that’s bouncing around the network of proxies, routers, and servers you’re using right now is being read by someone. It’s equally likely that some of these attackers are doing their best to understand the encrypted bits as well (though that requires significantly more effort).

2.2. Active Network Attackers

An active network attacker has both read- and write-access to the bits going over the wire between users and the servers they’re communicating with. She can collect and analyze data, but also modify it in-flight, injecting and manipulating JavaScript and HTML at will. This is more common than you might expect, for both benign and malicious purposes:

2.3. Same-Origin Policy Violations

The same-origin policy is the cornerstone of security on the web; one origin should not have direct access to another origin’s data (the policy is more formally defined in Section 3 of [RFC6454]). A corollary to this policy is that an origin should not have direct access to data that isn’t associated with any origin: the contents of a user’s hard drive, for instance. Various kinds of attacks bypass this protection in one way or another. For example:
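The origin comparison at the heart of the policy can be sketched in a few lines of TypeScript. The helper name below is ours, and the logic is a simplification of the comparison defined in [RFC6454], which treats an origin as a scheme/host/port tuple:

```typescript
// Sketch: per RFC 6454, an origin is the (scheme, host, port) tuple, and
// two URLs are same-origin only when all three components match exactly.
function sameOrigin(a: string, b: string): boolean {
  const ua = new URL(a);
  const ub = new URL(b);
  // URL.port is "" for a scheme's default port, so "https://example.com"
  // and "https://example.com:443" compare as equal.
  return (
    ua.protocol === ub.protocol &&
    ua.hostname === ub.hostname &&
    ua.port === ub.port
  );
}
```

Note that paths play no role: `https://example.com/a` and `https://example.com/b` share an origin, while a change of scheme, host, or port yields a distinct origin.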

2.4. Third-Party Tracking

Flesh this out. <https://github.com/w3ctag/security-questionnaire/issues/7>

3. Questions to Consider

3.1. Does this specification deal with personally-identifiable information?

Personally-identifiable information (PII) includes a large swath of data which could be used on its own, or in combination with other information, to identify a single person. The exact definition of what’s considered PII varies from jurisdiction to jurisdiction, but could certainly include things like home addresses, email addresses, birthdates, usernames, fingerprints, etc. Wikipedia has a fairly good description at [PII].

If the specification under consideration exposes PII to the web, it’s important to consider ways to mitigate the obvious impacts. For instance:

3.2. Does this specification deal with high-value data?

Data which isn’t personally-identifiable can still be quite valuable. Sign-in credentials (like username/password pairs, or OAuth refresh tokens) can be extremely powerful in the wrong hands, as can financial instruments like credit card data. Making this data available to JavaScript, for instance, could expose it to XSS attacks and active network attackers who could inject code to read and exfiltrate the data. For instance:
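One common mitigation is to keep such credentials out of script’s reach entirely. A session cookie marked `HttpOnly` is invisible to `document.cookie`, and `Secure` keeps it off unencrypted connections. A minimal sketch (the helper name is illustrative):

```typescript
// Sketch: assemble a Set-Cookie header value for a session credential.
// HttpOnly hides the cookie from document.cookie (mitigating XSS-based
// exfiltration); Secure restricts it to encrypted connections
// (mitigating passive network attackers).
function sessionCookie(name: string, value: string): string {
  return `${name}=${encodeURIComponent(value)}; HttpOnly; Secure; Path=/`;
}
```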

3.3. Does this specification introduce new state for an origin that persists across browsing sessions?

For example:

3.4. Does this specification expose persistent, cross-origin state to the web?

For example:

3.5. Does this specification expose any other data to an origin that it doesn’t currently have access to?

As noted above in §2.3 Same-Origin Policy Violations, the same-origin policy is an important security barrier that new features need to carefully consider. If a specification exposes details about another origin’s state, or allows POST or GET requests to be made to another origin, the consequences can be severe.
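Mechanisms like [CORS] exist precisely to relax this barrier in a controlled way. The sketch below shows the shape of a server-side decision (the allowlist and helper are hypothetical), reflecting a request’s `Origin` header into `Access-Control-Allow-Origin` only for explicitly trusted origins:

```typescript
// Sketch: decide whether a cross-origin request's Origin header may be
// reflected into an Access-Control-Allow-Origin response header. Only an
// explicit allowlist is honored; echoing arbitrary origins (or "*") would
// expose the response body to every site on the web.
const allowedOrigins = new Set(["https://app.example.com"]);

function corsAllowOrigin(requestOrigin: string | null): string | null {
  if (requestOrigin !== null && allowedOrigins.has(requestOrigin)) {
    return requestOrigin; // echo the specific trusted origin, never "*"
  }
  return null; // omit the header; the browser blocks cross-origin reads
}
```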

3.6. Does this specification enable new script execution/loading mechanisms?

3.7. Does this specification allow an origin access to a user’s location?

A user’s location is highly desirable information for a variety of use cases. It is also, understandably, information which many users are reluctant to share, as it can be both highly identifying and potentially creepy. New features which make use of geolocation information, or which expose it to the web in new ways, should carefully consider ways in which the risks of unfettered access to a user’s location could be mitigated. For instance:
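One frequently discussed mitigation is to coarsen the data before exposing it, so that an origin learns a rough area rather than an exact position. A minimal sketch (the rounding threshold below is purely illustrative, not normative):

```typescript
// Sketch: reduce the precision of a coordinate before handing it to an
// origin. Two decimal places of latitude is on the order of a kilometer;
// the default chosen here is illustrative only.
function coarsen(coordinate: number, decimals: number = 2): number {
  const factor = 10 ** decimals;
  return Math.round(coordinate * factor) / factor;
}
```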

3.8. Does this specification allow an origin access to sensors on a user’s device?


3.9. Does this specification allow an origin access to aspects of a user’s local computing environment?

(e.g. screen sizes, installed fonts, installed plugins, bluetooth or network interface identifiers)?


3.10. Does this specification allow an origin access to other devices?

Specifically, it’s interesting whether or not this specification allows access to devices on a user’s local network that would be otherwise inaccessible to a web origin. In particular, connection via Bluetooth and USB should be carefully evaluated to avoid exposing devices to the web that aren’t created with the web in mind; doing so has security implications, as these devices may not be hardened against malicious input as well as they should be.

3.11. Does this specification allow an origin some measure of control over a user agent’s native UI?

(showing, hiding, or modifying certain details, especially if those details are relevant to security)?


3.12. Does this specification expose temporary identifiers to the web?

(e.g. TLS features like Channel ID, session identifiers/tickets, etc)?
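If such identifiers must exist, scoping them to a single session and rotating them limits their value as tracking handles. A minimal sketch, using Node’s crypto module purely for illustration:

```typescript
import { randomUUID } from "node:crypto";

// Sketch: a temporary identifier that is freshly generated per session.
// Because nothing links one session's value to the next, the identifier
// cannot accrete into a long-lived cross-session tracking handle.
function newSessionId(): string {
  return randomUUID();
}
```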


3.13. Does this specification distinguish between behavior in first-party and third-party contexts?
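A rough way to frame the distinction: a context is third-party when the embedded content’s origin differs from the top-level document’s. The sketch below uses a plain origin comparison; real user agents typically compare registrable domains instead, which requires the Public Suffix List:

```typescript
// Sketch: classify a framed resource as first- or third-party by comparing
// its origin with the top-level document's origin. This is a simplification:
// registrable-domain comparison would treat "a.example.com" and
// "b.example.com" as the same party.
function isThirdParty(topLevelUrl: string, frameUrl: string): boolean {
  return new URL(topLevelUrl).origin !== new URL(frameUrl).origin;
}
```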

3.14. How should this specification work in the context of a user agent’s "incognito" mode?

3.15. Does this specification persist data to a user’s local device?

3.16. Does this specification have a "Security Considerations" and "Privacy Considerations" section?

Interesting features added to the web platform generally have security and/or privacy impacts. Documenting the various concerns and potential abuses in "Security Considerations" and "Privacy Considerations" sections of a document is a good way to help implementers and web developers understand the risks that a feature presents, and to ensure that adequate mitigations are in place.

If it seems like a feature does not have security or privacy impacts, then say so inline in the spec section for that feature:

There are no known security or privacy impacts of this feature.

Saying so explicitly in the specification serves several purposes:

  1. Shows that a spec author/editor has explicitly considered security and privacy when designing a feature.
  2. Provides some sense of confidence that there are no such impacts.
  3. Challenges security and privacy minded individuals to think of and find even the potential for such impacts.
  4. Demonstrates the spec author/editor’s receptivity to feedback about such impacts.

3.17. Does this specification allow downgrading default security characteristics?

4. Mitigation Strategies

4.1. Secure Contexts

In the presence of an active network attacker, offering a feature to an insecure origin is the same as offering that feature to every origin (as the attacker can inject frames and code at will). Requiring an encrypted and authenticated connection in order to use a feature can mitigate this kind of risk.
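In code, such gating reduces to a check that the requesting origin is "potentially trustworthy". A simplified sketch of that check (the full algorithm in the Secure Contexts work covers additional cases):

```typescript
// Sketch: a simplified "potentially trustworthy origin" check of the kind
// secure-context gating relies on. Encrypted schemes pass; so does
// localhost, whose traffic never crosses the network. Plain http:// fails,
// since an active network attacker can fully control its content.
function isPotentiallyTrustworthy(urlString: string): boolean {
  const url = new URL(urlString);
  if (url.protocol === "https:" || url.protocol === "wss:") return true;
  return url.hostname === "localhost" || url.hostname === "127.0.0.1";
}
```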

4.2. Explicit user mediation

If a feature has privacy or security impacts that are endemic to the feature itself, then one valid strategy for exposing it to the web is to require user mediation before granting an origin access. For instance, [GEOLOCATION-API] reveals a user’s location, and wouldn’t be particularly useful if it didn’t; user agents generally gate access to the feature on a permission prompt which the user may choose to accept.
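The shape of such a gate can be sketched as follows. The prompt is injected as a function so the flow can be exercised without a real UI; all names here are illustrative:

```typescript
// Sketch: deny-by-default access to a sensitive capability, granted only
// through explicit user mediation. The prompt is passed in as a function
// (in a user agent it would be a permission UI), and the sensitive value
// is only produced after an affirmative answer.
type Prompt = (question: string) => boolean;

function withUserConsent<T>(
  prompt: Prompt,
  question: string,
  produce: () => T
): T | null {
  return prompt(question) ? produce() : null;
}
```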

Designing such prompts is difficult. Choosers are good. Walls of text are bad.

Bring in some of felt@'s ideas here.

4.3. Drop the feature

One way to mitigate the risks that a feature presents is to remove it from a specification.

The easiest way to mitigate the potential negative security or privacy impacts of a feature, and to render discussion of those impacts moot, is to drop the feature.

Every feature in a spec should be considered guilty (of harming security and/or privacy) until proven otherwise. Every specification should seek to be as small as possible, even if only to minimize the security and privacy attack surface.

By doing so we can reduce the overall security (and privacy) attack surface of not only a particular feature, but of a module (related set of features), a specification, and the overall web platform.

Ideally this is one of many motivations to reduce each of those to the minimum viable:

  1. Minimum viable feature: cut/drop values, options, or optional aspects.
  2. Minimum viable web format/protocol/API: cut/drop a module, or even just one feature.
  3. Minimum viable web platform: Cut/drop/obsolete entire specification(s).

Move Tantek’s thoughts somewhere. They don’t really fit well here, though the sentiment of a minimum viable web platform might fit into some other TAG finding.


Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

Note, this is an informative note.




Normative References

Robin Berjon; et al. HTML5. 28 October 2014. REC. URL: http://www.w3.org/TR/html5/
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://tools.ietf.org/html/rfc2119

Informative References

Jeffrey Yasskin; Vincent Scheib. Web Bluetooth. URL: https://webbluetoothcg.github.io/web-bluetooth/
David Kravets. Comcast Wi-Fi serving self-promotional ads via JavaScript injection. URL: http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/
Mike West. Credential Management. URL: https://w3c.github.io/webappsec/specs/credentialmanagement/
Brandon Sterne; Adam Barth. Content Security Policy 1.0. 19 February 2015. NOTE. URL: http://www.w3.org/TR/CSP1/
Rich Tibbett. Network Service Discovery. URL: http://dvcs.w3.org/hg/dap/raw-file/tip/discovery-api/Overview.html
Mike West. 'First-Party-Only' Cookies. URL: https://tools.ietf.org/html/draft-west-first-party-cookies
Alex Russell. Geofencing Explained. URL: https://github.com/slightlyoff/Geofencing/blob/master/explainer.md
Egor Homakov. Using Content-Security-Policy for Evil. URL: http://homakov.blogspot.de/2014/01/using-content-security-policy-for-evil.html
Personally identifiable information. URL: https://en.wikipedia.org/wiki/Personally_identifiable_information
Adam Barth. The Web Origin Concept. URL: https://tools.ietf.org/html/rfc6454
Stephen Farrell; Hannes Tschofenig. Pervasive Monitoring Is an Attack. URL: http://tools.ietf.org/html/rfc7258
Paul Stone. Pixel Perfect Timing Attacks with HTML5. URL: http://www.contextis.com/documents/2/Browser_Timing_Attacks.pdf
Mark Bergen; Alex Kantrowitz. Verizon looks to target its mobile subscribers with ads. URL: http://adage.com/article/digital/verizon-target-mobile-subscribers-ads/293356/
Ilya Grigorik; et al. Beacon. 29 September 2015. WD. URL: http://www.w3.org/TR/beacon/
Anne van Kesteren. Cross-Origin Resource Sharing. 16 January 2014. REC. URL: http://www.w3.org/TR/cors/
Andrei Popescu. Geolocation API Specification. 28 May 2015. PER. URL: http://www.w3.org/TR/geolocation-API/
Dimitri Glazkov; Hajime Morita. HTML Imports. 11 March 2014. WD. URL: http://www.w3.org/TR/html-imports/
Alex Russell; Jungkee Song; Jake Archibald. Service Workers. 25 June 2015. WD. URL: http://www.w3.org/TR/service-workers/
Ian Hickson. HTML5 Web Messaging. 19 May 2015. REC. URL: http://www.w3.org/TR/webmessaging/
