Self-Review Questionnaire: Security and Privacy

W3C Working Group Note, 23 May 2019

This version:
https://www.w3.org/TR/2019/NOTE-security-privacy-questionnaire-20190523/
Latest published version:
https://www.w3.org/TR/security-privacy-questionnaire/
Editor's Draft:
https://w3ctag.github.io/security-questionnaire/
Previous Version:
https://www.w3.org/TR/2015/NOTE-security-privacy-questionnaire-20151210/
Version History:
https://github.com/w3ctag/security-questionnaire/commits/master/index.src.html
Issue Tracking:
GitHub
Editors:
Lukasz Olejnik (Independent researcher)
Jason Novak (Apple Inc.)
Former Editor:
Mike West (Google Inc.)
Bug Reports:
via the w3ctag/security-questionnaire repository on GitHub

Abstract

This document lists a set of questions to help in considering the privacy impact of a new feature or specification, as well as common mitigation strategies for common privacy impacts. The questions are meant to be useful when considering the security and privacy aspects of a new feature or specification, and the mitigation strategies are meant to assist in the design of the feature or specification. Authors of a new proposal or feature should implement the mitigations as appropriate; doing so will assist in addressing the respective points in the questionnaire. Given the variety and nature of specifications, the listed questions will likely not be comprehensive enough to enable reasoning about the full privacy impact, and some mitigations may not be appropriate, or other mitigations may be necessary. The aim is nonetheless to present the questions and mitigations as a starting point, helping to consider security and privacy at the start of work on a new feature and throughout its lifecycle.

It is not meant as a "security checklist", nor does an editor or group’s use of this questionnaire obviate the editor or group’s responsibility to obtain "wide review" of a specification’s security and privacy properties before publication. Furthermore, the completed questionnaire should not be understood as the specification’s security and privacy considerations, although parts of the answers may be relevant in drafting those considerations.

Status of This Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

This document was published by the Technical Architecture Group as a Working Group Note.
Please raise issues on this document on the Security and Privacy Questionnaire GitHub Issue page.
If you wish to make other comments regarding this document, please send them to www-tag@w3.org (subscribe, archives). All comments are welcome.

Publication as a Working Group Note does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the W3C Patent Policy.

This document is governed by the 1 March 2019 W3C Process Document.

1. Introduction

New features make the web a stronger and livelier platform. Throughout the feature development process there are both foreseeable and unexpected security and privacy risks. These risks may arise from the nature of the feature, some of its parts, or unforeseen interactions with other features. Such risks may be mitigated through careful design and the application of security and privacy design patterns.

Standardizing web features presents unique challenges. Descriptions, protocols, and algorithms need careful scrutiny before they are broadly adopted by vendors with large user bases. If features are found to have undesirable privacy properties after they are standardized, browser vendors may break compatibility in their implementations to protect users' privacy, as the user agent is the user’s agent.

This is why each Working Group needs to consider security and privacy by design and by default. This consideration is mandatory. Assessing the impact of a mechanism on privacy should be done from the ground up, during each iteration of the specification. Thinking about security and privacy risks and mitigations early in a project is the best approach: it helps ensure the privacy of your feature at an architectural level, and it ensures that the resulting descriptions, protocols, and algorithms incorporate privacy by default rather than through after-the-fact implementation mitigations.

1.1. How To Use The Questionnaire

This document is meant to help developers and reviewers. It encourages early review by posing a number of questions that you as an individual reader, writer, or contributor can ask, and that working groups and spec editors need to consider, prior to requesting a formal review. The intent includes highlighting areas which historically have had implications for users' security or privacy, thereby focusing the attention of editors, working groups, and reviewers. As such, the questionnaire is evidence- and research-based, and provides examples and past cases.

The audience of this document is general:

• The editors and contributors who are responsible for the development of the feature,
• The W3C TAG, who receive the completed questionnaire along with a review request, in line with the W3C Process,
• An external audience (developers, designers, etc.) wanting to understand the possible security and privacy implications.

2. Questions to Consider

2.1. What information might this feature expose to Web sites or other parties, and for what purposes is that exposure necessary?

Just because information can be exposed to the web doesn’t mean that it should be. How does exposing this information to an origin benefit a user? Do the benefits outweigh the potential risks? If so, how?

In answering this question, it often helps to ensure that the use cases your feature and specification enable are made clear in the specification itself, so that the TAG and PING understand the feature-privacy tradeoffs being made.

2.2. Is this specification exposing the minimum amount of information necessary to power the feature?

Regardless of what data is being exposed, is the specification exposing the bare minimum necessary to achieve the desired use cases? If not, why not, and why is the additional information exposed?

2.3. How does this specification deal with personal information or personally-identifiable information or information derived thereof?

Personal information is data about a user (e.g., a home address) or information that could be used to identify a user (e.g., an alias or email address). This is distinct from personally identifiable information (PII), as the exact definition of what’s considered PII varies from jurisdiction to jurisdiction.

If the specification under consideration exposes to the web personal information, PII, or derivatives thereof that could still identify an individual, it’s important to consider ways to mitigate the obvious impacts. For instance:
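One mitigation is to coarsen or truncate precise values before exposure so that they remain useful without pinpointing an individual. Below is a minimal, non-normative sketch using the Geolocation API [GEOLOCATION-API]; the coarsenCoordinates helper and the chosen granularity are illustrative assumptions, not requirements:

    // Hypothetical helper: round coordinates so they identify a region,
    // not a household. One decimal place is roughly 11 km of latitude.
    function coarsenCoordinates(lat: number, lon: number, decimals = 1) {
      const f = 10 ** decimals;
      return { lat: Math.round(lat * f) / f, lon: Math.round(lon * f) / f };
    }

    navigator.geolocation.getCurrentPosition((pos) => {
      const coarse = coarsenCoordinates(pos.coords.latitude, pos.coords.longitude);
      // Expose or transmit only the coarsened value.
    });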

2.4. How does this specification deal with sensitive information?

Just because data is not personal information or PII, that does not mean that it is not sensitive information; moreover, whether any given information is sensitive may vary from user to user. Data to consider sensitive includes financial data, credentials, health information, and location. When this data is exposed to the web, steps should be taken to mitigate the risk of exposing it; for example:
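One possible mitigation, sketched below with the Permissions API [PERMISSIONS], is to expose such data only behind a permission the user has affirmatively granted; the function name is hypothetical, and geolocation merely stands in for any sensitive data source:

    // Hypothetical gate: consult permission state before exposing data.
    async function readSensitiveDataIfPermitted(): Promise<void> {
      const status = await navigator.permissions.query({ name: "geolocation" });
      if (status.state === "granted") {
        // The user has already consented; proceed with the sensitive read.
      } else {
        // "prompt" or "denied": degrade gracefully instead of exposing data.
      }
    }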

2.5. Does this specification introduce new state for an origin that persists across browsing sessions?

Allowing an origin to persist data on a user’s device across browsing sessions introduces the risk that this state may be used to track a user without their knowledge or control, in either first-party or third-party contexts. New state persistence mechanisms should not be introduced without mitigations that prevent them from being used to track users across domains, and without user control over clearing this state. Are there specific caches that a user agent should give special consideration?

For example:
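To make the risk concrete, the sketch below (names hypothetical) shows how trivially an origin can mint an identifier that survives across browsing sessions using an existing persistence mechanism, localStorage:

    // Illustration of the threat, not a recommendation: a persistent,
    // per-origin identifier that survives until the user clears site data.
    function getPersistentId(): string {
      let id = localStorage.getItem("tracking-id");
      if (!id) {
        id = crypto.randomUUID();
        localStorage.setItem("tracking-id", id);
      }
      return id;
    }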

2.6. What information from the underlying platform, e.g. configuration data, is exposed by this specification to an origin?

If so, is the information exposed from the underlying platform consistent across origins? This includes but is not limited to information relating to the user configuration, system information including sensors, and communication methods.

When a specification exposes specific information about a host to an origin, and that information changes rarely and does not vary across origins, it can be used to uniquely identify a user across two origins, either directly because a given piece of information is unique, or because the combination of disparate pieces of information is unique and can be used to form a fingerprint [DOTY-FINGERPRINTING]. Specifications and user agents should address the risk of fingerprinting by carefully considering the surface of available information and the relative differences between software and hardware stacks. Sometimes reducing fingerprintability may be as simple as ensuring consistency, e.g., ordering a list of fonts, but sometimes it may be more complex.

Such information should not be revealed to an origin without a user’s knowledge and consent, barring mitigations in the specification that prevent the information from being uniquely identifying or from being able to unexpectedly exfiltrate data.

For example:
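To illustrate how individually innocuous values combine, here is a sketch of a naive fingerprint built from attributes that browsers commonly expose today; it is an example of the attack, not of any particular specification:

    // Stable, rarely-changing values hashed into a single identifier.
    async function naiveFingerprint(): Promise<string> {
      const parts = [
        navigator.userAgent,
        navigator.language,
        `${screen.width}x${screen.height}x${screen.colorDepth}`,
        new Date().getTimezoneOffset(),
        navigator.hardwareConcurrency,
      ].join("|");
      const digest = await crypto.subtle.digest(
        "SHA-256",
        new TextEncoder().encode(parts),
      );
      return Array.from(new Uint8Array(digest), (b) =>
        b.toString(16).padStart(2, "0"),
      ).join("");
    }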

2.7. Does this specification allow an origin access to sensors on a user’s device?

If so, what kind of sensors and information derived from those sensors does this standard expose to origins?

Information from sensors may serve as a fingerprinting vector across origins. In addition, a sensor reveals something about the user’s device or environment, and that fact itself might be sensitive. Also, as technology advances, mitigations that were in place at the time a specification was written may have to be reconsidered as the threat landscape changes.

Sensor data might even become a cross-origin identifier when the sensor reading is relatively stable, for example over short time periods (seconds, minutes, even days), and is consistent across origins. In fact, if two user agents expose the same sensor data in the same way, it may become a cross-browser, possibly even a cross-device, identifier.

These are not theoretical attacks. For example, researchers have shown that gyroscope readings can be used to recognize speech [GYROSPEECHRECOGNITION], and that Battery Status API and ambient light sensor readings can be used to track or profile users [OLEJNIK-BATTERY] [OLEJNIK-ALS].
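One class of mitigation is to limit the precision or frequency of readings so that stable, device-specific offsets are less usable as identifiers. A sketch using the deviceorientation event; the rounding policy shown is an assumption for illustration, not a normative requirement:

    // Quantize readings before handing them to application code.
    window.addEventListener("deviceorientation", (event) => {
      const quantize = (v: number | null) =>
        v === null ? null : Math.round(v); // whole degrees only
      const reading = {
        alpha: quantize(event.alpha),
        beta: quantize(event.beta),
        gamma: quantize(event.gamma),
      };
      // Only the quantized reading is used further.
    });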

2.8. What data does this specification expose to an origin? Please also document what data is identical to data exposed by other features, in the same or different contexts.

As noted below in §3.3 Same-Origin Policy Violations, the same-origin policy is an important security barrier that new features need to carefully consider. If a specification exposes details about another origin’s state, or allows POST or GET requests to be made to another origin, the consequences can be severe.

2.9. Does this specification enable new script execution/loading mechanisms?

2.10. Does this specification allow an origin to access other devices?

If so, what devices does this specification allow an origin to access?

Accessing other devices, both via network connections and via direct connection to the user’s machine (e.g., via Bluetooth, NFC, or USB), could expose vulnerabilities: some of these devices were not created with web connectivity in mind and may be inadequately hardened against malicious input, or against use on the web.

Exposing other devices on a user’s local network also has significant privacy risk:

Example mitigations include:
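One concrete pattern, used by WebUSB [WEBUSB] and Web Bluetooth [BLUETOOTH], is a user-mediated chooser: a page cannot silently enumerate devices; it can only ask, and the user picks a specific device or declines. A sketch, with a placeholder vendor ID (WebUSB typings are not in all default TypeScript DOM libraries, hence the cast):

    const button = document.querySelector("button")!;
    button.addEventListener("click", async () => {
      try {
        // The chooser UI lists only devices matching the filters; the
        // page learns nothing unless the user selects one.
        const device = await (navigator as any).usb.requestDevice({
          filters: [{ vendorId: 0x2341 }], // placeholder vendor ID
        });
        await device.open();
      } catch {
        // User dismissed the chooser; no device information was exposed.
      }
    });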

2.11. Does this specification allow an origin some measure of control over a user agent’s native UI?

Features that allow for control over a user agent’s UI (e.g., full screen mode) or changes to the underlying system (e.g., installing an ‘app’ on a smartphone home screen) may surprise users or obscure security / privacy controls. To the extent that your feature does allow for changing a user agent’s UI, can it affect security / privacy controls? What analysis confirmed this conclusion?

2.12. What temporary identifiers might this specification create or expose to the web?

If a standard exposes a temporary identifier to the web, the identifier should be short-lived and should rotate at some regular interval to mitigate the risk of the identifier being used to track a user over time. When a user clears state in their user agent, these temporary identifiers should be cleared to prevent re-correlation of state using a temporary identifier.

If this specification does create or expose a temporary identifier to the web, how is it exposed, when, to what entities, and, how frequently is it rotated?

Example temporary identifiers include TLS Channel ID, Session Tickets, and IPv6 addresses.

Example implementations of privacy-friendly temporary identifiers include:
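A sketch of one such approach follows; the storage key and 24-hour rotation period are assumed policies, not normative values:

    const ROTATION_MS = 24 * 60 * 60 * 1000; // assumed rotation period

    function getTemporaryId(): string {
      const raw = localStorage.getItem("tmp-id");
      const record: { id: string; created: number } | null =
        raw ? JSON.parse(raw) : null;
      if (!record || Date.now() - record.created > ROTATION_MS) {
        const fresh = { id: crypto.randomUUID(), created: Date.now() };
        localStorage.setItem("tmp-id", JSON.stringify(fresh));
        return fresh.id;
      }
      return record.id;
    }
    // Clearing site data removes "tmp-id", so the identifier cannot be
    // used to re-correlate state after the user clears their browser.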

2.13. How does this specification distinguish between behavior in first-party and third-party contexts?

The behavior of a feature should be considered not just in the context of its use by a first-party origin that a user is visiting, but also in terms of the implications of its use by an arbitrary third party that the first party includes. When developing your specification, consider the implications of its use by third-party resources on a page, and consider whether support for use by third-party resources should be optional in order to conform to the specification. If supporting use by third-party resources is mandatory for conformance, please explain why and what privacy mitigations are in place. This is particularly important as user agents may take steps to reduce the availability or functionality of certain features to third parties if the third parties are found to be abusing the functionality.
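For illustration, script can already distinguish an embedded context, and a specification can likewise condition its behavior on the context; a minimal sketch:

    // True when this document is rendered inside a frame, i.e. it may be
    // third-party content relative to the page the user is visiting.
    const isEmbedded = window.self !== window.top;
    if (isEmbedded) {
      // e.g., require an explicit opt-in before enabling the feature here.
    }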

2.14. How does this specification work in the context of a user agent’s Private Browsing or "incognito" mode?

Each major user agent implements a private browsing / incognito mode feature with significant variation across user agents in threat models, functionality, and descriptions to users regarding the protections afforded [WU-PRIVATE-BROWSING].

One typical commonality across user agents' private browsing / incognito modes is that they maintain a set of state separate from that associated with the user agents' ‘normal’ modes.

Does the specification provide information that would allow for the correlation of a single user’s activity across normal and private browsing / incognito modes? Does the specification result in information being written to a user’s host that would persist following a private browsing / incognito mode session ending?

There has been research into both the detectability of private browsing / incognito modes [RIVERA] and users' misconceptions about the protections these modes provide [WU-PRIVATE-BROWSING].

2.15. Does this specification have a "Security Considerations" and "Privacy Considerations" section?

Documenting the various concerns and potential abuses in "Security Considerations" and "Privacy Considerations" sections of a document is a good way to help implementers and web developers understand the risks that a feature presents, and to ensure that adequate mitigations are in place. Simply adding a section to your specification with yes/no responses to the questions in this document is insufficient.

If it seems like a feature does not have security or privacy impacts, then say so inline in the spec section for that feature:

There are no known security or privacy impacts of this feature.

Saying so explicitly in the specification serves several purposes:

  1. Shows that a spec author/editor has explicitly considered security and privacy when designing a feature.
  2. Provides some sense of confidence that there might be no such impacts.
  3. Challenges security and privacy minded individuals to think of and find even the potential for such impacts.
  4. Demonstrates the spec author/editor’s receptivity to feedback about such impacts.
  5. Demonstrates a desire that the specification not introduce security and privacy issues.

[RFC3552] provides general advice on writing Security Considerations sections. Generally, there should be a clear description of the kinds of privacy risks the new specification introduces for users of the web platform. Below is a set of considerations, informed by that RFC, for writing a privacy considerations section.

Authors must describe:

  1. What privacy attacks have been considered?
  2. What privacy attacks have been deemed out of scope (and why)?
  3. What privacy mitigations have been implemented?
  4. What privacy mitigations have been considered and not implemented (and why)?

In addition, attacks considered must include:

  1. Fingerprinting risk;
  2. Unexpected exfiltration of data through abuse of sensors;
  3. Unexpected usage of the specification / feature by third parties;
  4. If the specification includes identifiers, the authors must document what rotation period was selected for the identifiers and why.
  5. If the specification introduces new state to the user agent, the authors must document what guidance regarding clearing said storage was given and why.
  6. There should be a clear description of the residual risk to the user after the privacy mitigations have been implemented.

The crucial aspect is to actually consider security and privacy. All new specifications must have security and privacy considerations sections to be considered for wide review. Interesting features added to the web platform generally have security and/or privacy impacts.

2.16. Does this specification allow downgrading default security characteristics?

Does this feature allow a site to opt out of security settings in order to accomplish some piece of functionality? If so, in what situations does your specification allow such security-setting downgrading, and what mitigations are in place to make sure that optional downgrading doesn’t dramatically increase risk?

2.17. What should this questionnaire have asked?

This questionnaire is not exhaustive. After completing a privacy review, it may be that there are privacy aspects of your specification that a strict reading of, and response to, this questionnaire would not have revealed. If this is the case, please convey those privacy concerns, and indicate whether you can think of improved or new questions that would have covered this aspect.

3. Threat Models

To consider security and privacy, it is convenient to think in terms of threat models, a way to illuminate the possible risks.

There are some concrete privacy concerns that should be considered when developing a feature for the web platform:

In the mitigations section, this document outlines a number of techniques that can be applied to mitigate these risks.

Enumerated below are some broad classes of threats that should be considered when developing a web feature.

3.1. Passive Network Attackers

A passive network attacker has read-access to the bits going over the wire between users and the servers they’re communicating with. She can’t modify the bytes, but she can collect and analyze them.

Due to the decentralized nature of the internet, and the general level of interest in user activity, it’s reasonable to assume that practically every unencrypted bit that’s bouncing around the network of proxies, routers, and servers you’re using right now is being read by someone. It’s equally likely that some of these attackers are doing their best to understand the encrypted bits as well, including storing encrypted communications for later cryptanalysis (though that requires significantly more effort).

3.2. Active Network Attackers

An active network attacker has both read- and write-access to the bits going over the wire between users and the servers they’re communicating with. She can collect and analyze data, but also modify it in-flight, injecting and manipulating JavaScript, HTML, and other content at will. This is more common than you might expect, for both benign and malicious purposes: for example, ISPs have injected advertising JavaScript into pages served over their networks [COMCAST], and mobile carriers have inserted tracking identifiers into subscribers' traffic to enable ad targeting [VERIZON].

3.3. Same-Origin Policy Violations

The same-origin policy is the cornerstone of security on the web; one origin should not have direct access to another origin’s data (the policy is more formally defined in Section 3 of [RFC6454]). A corollary to this policy is that an origin should not have direct access to data that isn’t associated with any origin: the contents of a user’s hard drive, for instance. Various kinds of attacks bypass this protection in one way or another. For example, timing attacks have been used to read pixel data cross-origin [TIMING], and Content Security Policy has been abused to probe cross-origin state [HOMAKOV].
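For orientation, a sketch of the policy as script experiences it (the URL is hypothetical):

    // Reading a cross-origin response succeeds only if the other origin
    // opts in via CORS response headers such as Access-Control-Allow-Origin.
    async function readCrossOrigin(): Promise<unknown | null> {
      try {
        const response = await fetch("https://other.example/private.json");
        return await response.json(); // reached only if CORS allows the read
      } catch {
        return null; // without a CORS opt-in, the read is denied
      }
    }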

3.4. Third-Party Tracking

Part of the power of the web is its ability for a page to pull in content from third parties (from images to JavaScript) to enhance the content and/or a user’s experience of the site. However, when a page pulls in content from third parties, it inherently leaks some information to those parties: referrer information and other data that may be used to track and profile a user. This includes the fact that cookies go back to the domain that initially stored them, allowing for cross-origin tracking. Moreover, third parties can gain execution power through third-party JavaScript being included by a webpage. While pages can take steps to mitigate the risks of third-party content, and browsers may differentiate how they treat first- and third-party content from a given page, the risk of new functionality being executed by third parties rather than by the first-party site should be considered in the feature development process.
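As a sketch of the execution-power point (the URL is hypothetical):

    // Including a third-party script grants that party the same execution
    // power as first-party code: the page's DOM, script-readable cookies,
    // and every web feature available to the page.
    const s = document.createElement("script");
    s.src = "https://third-party.example/analytics.js";
    document.head.append(s);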

3.5. Legitimate Misuse

Even when powerful features are made available to developers, that does not mean that every use is a good idea or justified; in fact, data privacy regulations around the world may even put limits on certain uses of data. In a first-party context, a legitimate website is potentially able to use powerful features to learn about user behavior or habits. For example:

This point is admittedly different from the others: it underlines that even if something is possible, it does not mean it should always be done, and it may call for a privacy impact assessment or even an ethical assessment. When designing a specification with security and privacy in mind, both use and misuse cases should be in scope.

4. Mitigation Strategies

To mitigate the security and privacy risks you’ve identified in your specification as you’ve filled out the questionnaire, you may want to apply one or more of the mitigations described below to your feature.

4.1. Data Minimization

Minimization is a strategy that involves exposing as little information to other communication partners as is required for a given operation to complete. More specifically, it means not providing access to more information than was apparent in the user-mediated access, or allowing the user some control over exactly which information is provided.

For example, if the user has provided access to a given file, the object representing it should not make it possible to obtain information about that file’s parent directory and its contents, as that is clearly not what is expected.
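As this minimization appears to script today, a File object obtained from a user-mediated picker reveals the selection and nothing about its surroundings; a brief sketch:

    const input = document.querySelector<HTMLInputElement>('input[type="file"]')!;
    input.addEventListener("change", () => {
      const file = input.files?.[0];
      if (file) {
        // Name, size, type, and contents of the chosen file are available;
        // its path, parent directory, and sibling files are not.
        console.log(file.name, file.size, file.type);
      }
    });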

In the context of data minimization, it is natural to ask what data is passed around between the different parties, how persistent the data items and identifiers are, and whether there are correlation possibilities between different protocol runs.

For example, the W3C Device APIs Working Group has defined a number of requirements in their Privacy Requirements document. [DEVICE-API-PRIVACY]

Data minimization is applicable to specification authors and implementers, as well as to those deploying the final service.

As an example, consider mouse events. When a page is loaded, the application has no way of knowing whether a mouse is attached, what type of mouse it is (e.g., make and model), what kind of capabilities it exposes, how many are attached, and so on. Only when the user decides to use the mouse — presumably because it is required for interaction — does some of this information become available. And even then, only a minimum of information is exposed: you could not know whether it is a trackpad for instance, and the fact that it may have a right button is only exposed if it is used. For instance, the Gamepad API makes use of this data minimization capability. It is impossible for a Web game to know if the user agent has access to gamepads, how many there are, what their capabilities are, etc. It is simply assumed that if the user wishes to interact with the game through the gamepad then she will know when to action it — and actioning it will provide the application with all the information that it needs to operate (but no more than that).

The way in which this functionality is supported for the mouse is simply by providing information on the mouse’s behaviour only when certain events take place. The approach is therefore to expose event handling (e.g., triggering on click, move, or button press) as the sole interface to the device.
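A brief sketch of this pattern with the Gamepad API [GAMEPAD]; nothing is revealed until the user actions the device:

    // Before any interaction, polling reveals nothing.
    const pads = navigator.getGamepads(); // entries are null initially

    // Only once the user presses a button does information flow.
    window.addEventListener("gamepadconnected", (event) => {
      const pad = (event as GamepadEvent).gamepad;
      console.log(pad.id, pad.buttons.length, pad.axes.length);
    });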

Two features that have minimized the data they make available are:

4.2. Default Privacy Settings

Users often do not change defaults; as a result, it is important that the default mode of a specification minimizes the amount, identifiability, and persistence of the data and identifiers exposed. This is particularly true if a protocol comes with flexible options so that it can be tailored to specific environments.

4.3. Explicit user mediation

If the security or privacy risk of a feature cannot otherwise be mitigated in a specification, optionally allowing an implementer to prompt a user may be the best mitigation possible, with the understanding that it does not entirely remove the privacy risk. If the specification does not allow for the implementer to prompt, it may result in divergent implementations by different user agents, as some user agents choose to implement a more privacy-friendly version.

It is possible that the risk of a feature cannot be mitigated because the risk is endemic to the feature itself. For instance, [GEOLOCATION-API] reveals a user’s location intentionally; user agents generally gate access to the feature on a permission prompt which the user may choose to accept. This risk is also present and should be accounted for in features that expose personal data or identifiers.

Designing such prompts is difficult, as is determining the duration for which the permission should be granted.

Often, the best prompt is one that is clearly tied to a user action, like the file picker: in response to a user action, the picker is shown, and the user grants an individual site access to a specific file.
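A minimal sketch of tying a prompt to a user action, using the Geolocation API [GEOLOCATION-API]; the element ID is hypothetical:

    const locateButton = document.querySelector("#locate")!;
    locateButton.addEventListener("click", () => {
      // The permission prompt appears in direct response to the click,
      // so the user can connect the request to their own action.
      navigator.geolocation.getCurrentPosition(
        (pos) => { /* fulfill the action the user just requested */ },
        (err) => { /* the user declined; degrade gracefully */ },
      );
    });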

Generally speaking, the duration and timing of the prompt should be inversely proportional to the risk posed by the data exposed. In addition, the prompt should consider issues such as:

These prompts should also include considerations for what, if any, control a user has over their data after it has been shared with other parties. For example, are users able to determine what information was shared with other parties?

4.4. Explicitly restrict the feature to first party origins

As described in the "Third-Party Tracking" section, a significant feature of the web is the mixing of first- and third-party content in a single page, but this introduces risk where the third-party content can use the same set of web features as the first-party content.

Authors should explicitly specify the feature’s scope of availability:

Support for third-party access to a feature should be optional for conformance.
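A minimal sketch of how a first party can scope a powerful feature away from embedded third-party content, using the iframe allow attribute; the URL and the choice of geolocation are illustrative:

    const frame = document.createElement("iframe");
    frame.src = "https://third-party.example/widget"; // hypothetical URL
    frame.allow = "geolocation 'none'"; // deny the feature in this frame
    document.body.append(frame);

    // A server can express the same policy for a whole response with a
    // header, e.g.: Permissions-Policy: geolocation=(self)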

4.5. Secure Contexts

If the primary risk that you’ve identified in your specification is the threat posed by an active network attacker, offering a feature to an insecure origin is the same as offering that feature to every origin, because the attacker can inject frames and code at will. Requiring an encrypted and authenticated connection in order to use a feature can mitigate this kind of risk.

Secure contexts also protect against passive network attackers. For example, if a page uses the Geolocation API and sends the sensor-provided latitude and longitude back to the server over an insecure connection, then any passive network attacker can learn the user’s location, without any feasible path to detection by the user or others.

However, requiring a secure context is not sufficient to mitigate many privacy risks, or even security risks from threat actors other than active network attackers.
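For illustration, script (like a specification’s processing model) can condition behavior on the secure-context bit via the standard isSecureContext attribute:

    if (window.isSecureContext) {
      // The document was delivered over an authenticated, encrypted
      // channel; it is safe to enable the gated feature here.
    } else {
      // An active network attacker could have injected this very code;
      // keep the feature disabled.
    }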

4.6. Drop the feature

The simplest way to mitigate potential negative security or privacy impacts of a feature is to drop the feature. Every feature in a spec should be seen as potentially adding security and/or privacy risk until proven otherwise. Discussing dropping the feature as a mitigation for security or privacy impacts is a helpful exercise, as it helps illuminate the tradeoffs between the feature, whether it exposes the minimum amount of data necessary, and other possible mitigations.

Every specification should seek to be as small as possible, even if only for the reasons of reducing and minimizing security/privacy attack surface(s).

By doing so we can reduce the overall security and privacy attack surface of not only a particular feature, but of a module (related set of features), a specification, and the overall web platform.

Examples

4.7. Making a privacy impact assessment

Some features potentially supply very sensitive data, and it is the responsibility of the end developer, system owner, or manager to realize this and act accordingly in the design of their system. Some uses may warrant conducting a privacy impact assessment, especially when data relating to individuals may be processed.

Specifications that expose such sensitive data should include a recommendation that websites and applications adopting the API — but not necessarily the implementing user agent — conduct a privacy impact assessment of the data that they collect.

A feature that recommends such an assessment is:

Documenting these impacts is important for organizations, although it should be noted that there are limitations to putting this onus on organizations. Research has shown that sites often do not comply with security/privacy requirements in specifications. For example, [DOTY-GEOLOCATION] found that none of the studied websites informed users of their privacy practices before the site prompted for location.

Conformance

Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

Note, this is an informative note.

References

Normative References

[HTML]
Anne van Kesteren; et al. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://tools.ietf.org/html/rfc2119

Informative References

[BATTERY-STATUS-API]
Anssi Kostiainen, Mounir Lamouri. Battery Status API W3C Candidate Recommendation, 07 July 2016. URL: https://www.w3.org/TR/2016/CR-battery-status-20160707/
[BEACON]
Ilya Grigorik; et al. Beacon. 13 April 2017. CR. URL: https://www.w3.org/TR/beacon/
[BLUETOOTH]
Jeffrey Yasskin; Vincent Scheib. Web Bluetooth. URL: https://webbluetoothcg.github.io/web-bluetooth/
[COMCAST]
David Kravets. Comcast Wi-Fi serving self-promotional ads via JavaScript injection. URL: http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/
[CORS]
Anne van Kesteren. Cross-Origin Resource Sharing. 16 January 2014. REC. URL: https://www.w3.org/TR/cors/
[CREDENTIAL-MANAGEMENT]
Mike West. Credential Management. URL: https://w3c.github.io/webappsec/specs/credentialmanagement/
[CSP]
Mike West. Content Security Policy Level 3. 15 October 2018. WD. URL: https://www.w3.org/TR/CSP3/
[DEVICE-API-PRIVACY]
Alissa Cooper, Frederick Hirsh, John Morris. Device API Privacy Requirements, 29 June 2010. URL: https://www.w3.org/TR/2010/NOTE-dap-privacy-reqs-20100629/
[DISCOVERY]
Rich Tibbett. Network Service Discovery. URL: http://dvcs.w3.org/hg/dap/raw-file/tip/discovery-api/Overview.html
[DOTY-FINGERPRINTING]
Nick Doty. Fingerprinting Guidance for Web Specification Authors (Draft). URL: https://www.w3.org/TR/fingerprinting-guidance/
[DOTY-GEOLOCATION]
Nick Doty, Deirdre K. Mulligan, Erik Wilde. Privacy Issues of the W3C Geolocation API. URL: https://escholarship.org/uc/item/0rp834wf
[GAMEPAD]
Scott Graham, Ted Mielczarek, Brandon Jones, Steve Agoston. Gamepad W3C Working Draft, 18 October 2018. URL: https://www.w3.org/TR/2018/WD-gamepad-20181018/
[GEOFENCING]
Alex Russell. Geofencing Explained. URL: https://github.com/slightlyoff/Geofencing/blob/master/explainer.md
[GEOLOCATION-API]
Andrei Popescu. Geolocation API Specification 2nd Edition W3C Recommendation, 8 November 2016. URL: https://www.w3.org/TR/2016/REC-geolocation-API-20161108/
[GYROSPEECHRECOGNITION]
Yan Michalevsky; Dan Boneh; Gabi Nakibly. Gyrophone: Recognizing Speech from Gyroscope Signals. URL: https://www.usenix.org/system/files/conference/usenixsecurity14/sec14-paper-michalevsky.pdf
[HOMAKOV]
Egor Homakov. Using Content-Security-Policy for Evil. URL: http://homakov.blogspot.de/2014/01/using-content-security-policy-for-evil.html
[HTML-IMPORTS]
Dimitri Glazkov; Hajime Morita. HTML Imports. 25 February 2016. WD. URL: https://www.w3.org/TR/html-imports/
[OLEJNIK-ALS]
Lukasz Olejnik. Privacy analysis of Ambient Light Sensors. URL: https://blog.lukaszolejnik.com/privacy-of-ambient-light-sensors/
[OLEJNIK-BATTERY]
Lukasz Olejnik; et al. The leaking battery: A privacy analysis of the HTML5 Battery Status API. URL: https://eprint.iacr.org/2015/616
[OLEJNIK-PAYMENTS]
Lukasz Olejnik. Privacy of Web Request API. URL: https://blog.lukaszolejnik.com/privacy-of-web-request-api/
[PAYMENT-REQUEST-API]
Adrian Bateman; et al. Payment Request. URL: https://www.w3.org/TR/payment-request/
[PERMISSIONS]
Mounir Lamouri; Marcos Cáceres; Jeffrey Yasskin. Permissions API. URL: https://www.w3.org/TR/permissions/
[RFC3552]
E. Rescorla; B. Korver. Guidelines for Writing RFC Text on Security Considerations. URL: http://tools.ietf.org/html/rfc3552
[RFC6454]
Adam Barth. The Web Origin Concept. URL: https://tools.ietf.org/html/rfc6454
[RFC7258]
Stephen Farrell; Hannes Tschofenig. Pervasive Monitoring Is an Attack. URL: http://tools.ietf.org/html/rfc7258
[RIVERA]
David Rivera. Detect if a browser is in Private Browsing mode. URL: https://gist.github.com/jherax/a81c8c132d09cc354a0e2cb911841ff1
[SENSORS-API]
Rick Waldron, Mikhail Pozdnyakov, Alexander Shalamov, Tobie Langel. Generic Sensor API W3C Candidate Recommendation, 20 March 2018. URL: https://www.w3.org/TR/2018/CR-generic-sensor-20180320/
[SERVICE-WORKERS]
Alex Russell; et al. Service Workers 1. 2 November 2017. WD. URL: https://www.w3.org/TR/service-workers-1/
[TIMING]
Paul Stone. Pixel Perfect Timing Attacks with HTML5. URL: http://www.contextis.com/documents/2/Browser_Timing_Attacks.pdf
[VERIZON]
Mark Bergen; Alex Kantrowitz. Verizon looks to target its mobile subscribers with ads. URL: http://adage.com/article/digital/verizon-target-mobile-subscribers-ads/293356/
[WEBMESSAGING]
Ian Hickson. HTML5 Web Messaging. 19 May 2015. REC. URL: https://www.w3.org/TR/webmessaging/
[WEBUSB]
Reilly Grant, Ken Rockot, Ovidio Henriquez. WebUSB API Editor's Draft, 21 October 2018. URL: https://wicg.github.io/webusb/#security-and-privacy
[WU-PRIVATE-BROWSING]
Yuxi Wu; et al. Your Secrets Are Safe: How Browsers' Explanations Impact Misconceptions About Private Browsing Mode. URL: https://dl.acm.org/citation.cfm?id=3186088
[YUBIKEY-ATTACK]
Andy Greenberg. Chrome Lets Hackers Phish Even 'Unphishable' YubiKey Users. URL: https://www.wired.com/story/chrome-yubikey-phishing-webusb/