Private Script Context
- 1 Working Draft
- 2.1 Private Context: limiting back channels
- 2.2 Shared Context: limiting access to the UA state
- 2.3 Multiple contexts per page
- 2.4 Other models
- 2.5 Examples
- 3 CSS
- 4 Compatibility
- 5 Content Security Policy
- 6 Web Intents
- 7 Web Crypto API
- 8 Encrypted Media Extensions
The goals of the Private User Agent (PUA) CG are to minimize leakage of User Agent (UA) state which might be considered private by the user. Limiting the leakage of UA state can reduce the fingerprint surface of the UA and thus reduce the ability to covertly track it. The current DOM/script APIs are able to monitor every mouse movement, every key typed, and the rendering effects of browser options and extensions, and are capable of covertly leaking this information to anywhere on the Internet. Some people do not find this acceptable, hence the goals of the PUA CG.
One option is to continue to support JS but to add support for extra restricted JS contexts. Two restricted contexts are considered below:
- Private Contexts, which restrict access to back channels but allow access to the defined private UA state.
- Shared Contexts, which restrict access to the defined private UA state but allow access to back channels.
A private context could support JS-customized presentation based on the private UA state, and support HTML features that require access to interfaces that would be defined as private state. In a private context JS could continue to discriminate between users based on private UA state, such as extensions that affect the presentation; however, the presence and effects of browser extensions, or defenses against such discrimination, become a private matter for the user.
A shared context could have access to XHR and be allowed to load resources so it could be used to support a larger range of AJAX website designs.
These two restricted contexts could be used together to support rich content. A shared context can forward information to a private context, and some intentional user input could be deemed low-risk with respect to leaking private UA state and be made available to the shared context, such as button clicks and form input.
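The one-way flow just described can be sketched in code. This is purely illustrative: the class names, the message channel, and the set of low-risk event types are assumptions, not part of any specification.

```javascript
// Events the UA might deem low-risk intentional input (assumed set).
const LOW_RISK_EVENTS = new Set(["button-click", "form-input"]);

class PrivateContext {
  constructor() { this.inbox = []; }
  // A private context may receive messages from shared contexts...
  receive(message) { this.inbox.push(message); }
  // ...but deliberately has no send() back: that would be a back channel.
}

class SharedContext {
  constructor(privateCtx) { this.privateCtx = privateCtx; }
  // Forwarding into a private context is always allowed.
  forward(message) { this.privateCtx.receive(message); }
  // Intentional user input reaches the shared context only if low-risk.
  receiveUserEvent(event) {
    if (!LOW_RISK_EVENTS.has(event.type)) {
      throw new Error(`event type '${event.type}' withheld from shared context`);
    }
    return event;
  }
}
```

The asymmetry is the whole point: information flows shared-to-private freely, while the reverse path exists only for input the UA classifies as intentional and low-risk.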
These restricted contexts could be used in cooperation with the website author or used by browser extensions to override and limit JS to control privacy. Some existing webpages might already support the private context restrictions or could be adapted by the author or perhaps even by the UA. Making use of the restricted shared context would require a new programming style for content.
Private Context: limiting back channels
One option is to make the private context have a separate origin from other origin types, although it need not be unique and private contexts could share state if allowed by the 'same origin' restrictions. A 'private context enabled' flag could be effectively added to the origin extra data and then the standard infrastructure used to implement the 'same origin' restrictions could be used to keep objects in a private context separate from other contexts.
The private context would have access to the DOM. However outgoing requests triggered by DOM changes would need to be blocked otherwise DOM changes could be used to leak state. Some declarative extensions that allow the resources needed by a page to be declared and preloaded may be necessary to overcome this issue.
JS APIs that can affect outgoing communication, such as XHR, would need to be blocked in a private context. A private context would only be able to post messages to other private contexts.
A web worker created from a private context must inherit the private context restrictions. A private context must not be able to create a web worker without the private context restrictions, otherwise the creation of the shared context could be used to leak state. A private context could potentially create a context with both the private and shared context restrictions if this were supported. Note that the private context can not initiate the loading of a new resource so the scripts needed would need to be declared, perhaps in a new extension as noted above.
An HTML markup extension to allow a document to declare that scripts should run in a restricted private context would be a useful signal to the UA. It would inform the user that the webpage was written with the restrictions in mind. Some users would be expected to enable such restrictions even in the absence of a declaration, in order to preserve their privacy.
HTML elements that cause outgoing communication triggered by the user, and for which the information sent is either explicitly supplied by the user or supplied by the server and not affected by JS, could be secure methods for the user to send requests. However if JS could reliably affect the use of these elements then it could send some information when they are used. HTML elements that might be secure against leaking the UA state are:
- Navigation links and buttons explicitly triggered by the user and with a static URL.
- Forms explicitly submitted by the user where the form content is either supplied by the server or input by the user.
UI redressing vulnerability
If JS were able to change the selection, styling, or placement of HTML elements that affect outgoing requests then it could use this to leak UA state when the elements were used.
For example: JS could read some private UA state and then use the information to change links presented to the user and thus leak some state when the user clicks on a link.
TODO: General UI redressing attacks may share some ground with the redressing of the static navigation and form elements, and exploring issues and solutions to UI redressing may be useful.
One option to mitigate this vulnerability is to allow the UA to present all the links and forms separately from the main page content, without their layout or styling being affected by JS, so that all of them remain reachable. This would make the use of a particular element less of a signal and thus a less reliable way to leak UA state.
One option is to partition the secure HTML elements such as navigation links and buttons and forms into their own area(s) on the page that are not subject to JS modification or framing attacks. The UA would need a good deal of control over such regions to prevent framing attacks. For example, menu bars at the top or sides of a page and footer links could be defined in static content and if it could not be affected by JS then it could not leak UA state.
One option is to have the page top level document be static html, or JS executed in a shared context, and with embedded private contexts as necessary. The page top level document could implement all the navigation links and buttons, and any forms, and deal with the logic of updating content from remote sources. This page top level document could not execute a JS private context because then it may be able to mount a UI redressing attack on the content to leak state.
An important case is an iframe with a single navigation action when clicked anywhere - as might be used for an advertisement. In this case the only UI redressing vulnerability is enabling or disabling the ability to click and perhaps a single declarative navigation link could be added to the iframe to avoid UI redressing attacks in this case.
User confirmation of actions
A user confirmation stage for navigation and form submission actions would remove event timing as a means to leak state. This might be used to enable JS to still trigger a navigation event or form submission while reducing the risk of private UA state being leaked via the timing of events.
User confirmation upon the submission of form values would reduce, but not in general eliminate, the chance of leaking UA state in form content and thus could reduce the risk of allowing JS to affect forms. Note that if JS were given write access to the form content then it may be easy to encode and leak some state in the form content.
If the provenance of form values were noted then this could be valuable information for the user when confirming a form submission. Form values tainted (modified or enabled/disabled) by a JS private context could be flagged for extra scrutiny. Form values untainted by a JS private context, such as values supplied by the server, would pose no risk - for example a session ID. Form values supplied entirely by the user would pose no risk of leaking UA state.
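The provenance rules above can be sketched as a small taint-tracking model. The names (`PROVENANCE`, `FormValue`, the "review"/"ok" flags) are illustrative assumptions about how a UA might record who last wrote each value.

```javascript
// Who last wrote a form value determines whether it is tainted.
const PROVENANCE = { SERVER: "server", USER: "user", PRIVATE_SCRIPT: "private-script" };

class FormValue {
  constructor(initial, provenance) {
    this.value = initial;
    this.provenance = provenance;
  }
  // Any write from a private-context script taints the value.
  writeFromPrivateScript(v) {
    this.value = v;
    this.provenance = PROVENANCE.PRIVATE_SCRIPT;
  }
  get tainted() { return this.provenance === PROVENANCE.PRIVATE_SCRIPT; }
  // What a confirmation dialog might show: flag tainted values for scrutiny.
  confirmationFlag() { return this.tainted ? "review" : "ok"; }
}
```

A server-supplied session ID would stay untainted and pass confirmation silently; the same field rewritten by private-context script would be flagged before submission.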
Add a 'private' attribute to the <iframe> element, similar to the 'sandbox' attribute, that allows the parent to restrict the child. Add a 'private' keyword to the Content Security Policy with similar semantics that allows a document to declare restrictions within which it can operate. The 'private' attribute could have a number of flags to allow finer control over the private context restrictions:
- 'allow-navigation' - allows intentional navigation events such as clicking on links and buttons. This might also allow a click on these elements to post a message to a web worker. The target URL would not be modifiable by script to avoid it being used to leak state.
- 'allow-forms' - allows intentional forms to be submitted. This might also allow a form submission to post a message to a web worker. The form 'method' and 'action' attributes and the form values would not be modifiable by script to avoid these being used to leak state.
- 'allow-webworkers' - allows web workers to be created using declarative markup. Web workers might be used to implement AJAX updates to the document and could receive click and post messages from the private context.
- 'allow-script' - allows scripts in the current document only. The absence of this flag does not necessarily prohibit scripts in children. Disabling script in the parent allows it to host navigation and form elements without the risk of UI redressing attacks from script on these elements leaking state.
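Since the 'private' attribute is modeled on 'sandbox', it would presumably take a space-separated flag list, e.g. `<iframe private="allow-navigation allow-forms" src="ad.html">` (assumed syntax). A sketch of how a UA might parse that list into a restriction set, ignoring unknown tokens as 'sandbox' parsing does:

```javascript
// Flags proposed above; any other token is ignored.
const KNOWN_FLAGS = new Set([
  "allow-navigation", "allow-forms", "allow-webworkers", "allow-script",
]);

function parsePrivateAttribute(value) {
  const flags = new Set();
  for (const token of value.trim().split(/\s+/)) {
    if (token && KNOWN_FLAGS.has(token)) flags.add(token);
  }
  return flags;
}
```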
The private context restricts scripts from affecting the content or timing of outgoing requests, so only statically declared resources can be used. These can be loaded or verified only once, and must be loaded even if scripts remove or disable the elements within which they are declared. The set of statically declared resources, from the initial HTML markup, linked style sheets, and manifest files, forms an effective manifest for the webpage.
The loading of the resources might be ordered by the UA to reduce the ability to fingerprint the UA. This could use a generic sorting algorithm, or perhaps randomize the order to some extent.
The UA may concede some state to CSS media queries in order to limit the number of resource loads. For example, declaring a particular screen size. A UA may be extended to support a variable screen size choice irrespective of the actual size to reduce the meaningfulness of such a leak.
The UA could implement a resource load ordering algorithm that tries to load important resources first, and so long as this depends only on the initial markup then it does not leak state (apart from the algorithm).
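A load-ordering algorithm of the kind described can be a pure function of the declared resource set, so it reveals nothing beyond the algorithm itself. A minimal sketch, assuming resources carry a priority class and ties are broken by URL:

```javascript
// Assumed priority classes: style first, then script, then images.
const PRIORITY = { stylesheet: 0, script: 1, image: 2 };

// Deterministic order derived only from the initial markup, so the
// resulting request sequence cannot encode any private UA state.
function loadOrder(declared) {
  return [...declared].sort((a, b) =>
    (PRIORITY[a.type] - PRIORITY[b.type]) || a.url.localeCompare(b.url));
}
```

Randomizing within a priority class, as suggested above, would be a small variation on the same idea.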
It is common for scripts to inject resources that are essential to the webpage, such as style sheets and images. In some cases this is done to customize the layout based on DOM inspection, such as the screen size or locale. In some cases this is done to delay resource loading until a time has elapsed or an event is triggered. For example, a slide show may not want to initiate the loading of all images at the same time, or may only load the next image in preparation; or the loading of off-screen images may be delayed. These are important use cases, and it may be possible to cater for them in part by supporting a declarative load schedule, which need not leak state so long as the schedule is static. Site-specific script sniffing algorithms might be effective and practical for some popular websites, sniffing the scripts for the resources and creating an effective manifest so that these resources become usable even with the private context restrictions.
Another common use of script injection of resources is to choose between 'http' or 'https' resources. A script sniffing algorithm could detect some of these common scripts and add the appropriate resource to the manifest.
Web browsers appear to be restricting content to only loading secure resources on a secure page, so it may not be unreasonable to have a default manifest algorithm that excludes insecure resources on a secure page.
The UA may support an interface to allow users to examine resources not loaded and to add them to a page manifest. However such storage of resources would create a fingerprinting target - the server could add an ID to resources and note when these are not reloaded in order to track a UA. This might be mitigated by flushing any stored manifest when resources change, but this would also limit its effectiveness.
A UA may support loading of webpage manifests from curated third party lists, but it is not clear if this is of great value.
Many popular websites are designed to work without scripts enabled, or have a specific mobile site that is less demanding, and in some cases a webpage is more usable with scripts disabled than enabled under the private context restrictions. The UA might support the option to remember which sites to view with scripts disabled and which to view under the private context restrictions.
Enhanced manifest support could benefit general page load times, particularly with channels such as SPDY that could use the manifest to order resource transmission and schedule followup resource transmission. The advantages might be particularly noticeable on links with high latency as it would not be necessary to wait for the scripts to run and inject resources before they can be sent. This may help gain wider support for the extension independent of its need for the proposed private context restrictions.
Restricting a JS context from reading defined private UA state reduces the risk of it being leaked. In such a context JS could be given access to back channels, such as XHR, allowing the state in this context to be shared over the web, so let us name such a restricted context a 'Shared Context'. All the JS state in this context would be considered shared state and not private UA state. The shared state in this context could be forwarded to a context with private state, but not vice versa.
One option is to make the shared context a separate script origin from other origin types - it would not need to be a unique origin and could well share information with other shared contexts allowed by the same origin rule. A 'shared context enabled' flag could be effectively added to the origin extra data and then the standard infrastructure used to implement the 'same origin' restrictions could be used to keep objects in a shared context separate from contexts with access to the private UA state.
A shared context could be similar to a web worker, without access to the DOM, and a document could well support many. A shared context could not be created from a private context, otherwise the creation of the shared context could be used as a back channel to leak private state. A declarative extension to the markup that defines the shared contexts to create may be appropriate. Note that a shared context could be allowed to create other shared contexts and private contexts.
A shared context could be allowed to forward information to a private context, such as posting messages. If a document has an associated private JS context then this could receive DOM updates from shared contexts. However DOM updates from a private context are restricted from triggering the loading of new resources, whereas a mechanism allowing the shared context to update the DOM directly need not have this restriction and could cause new resources to load. Further, the webpage may be restricted from running a private JS context as it does have some risks. Thus an extension that allows a shared context to update the DOM is needed, without giving it actual access to the DOM object. Some options are: an API to post an update to a DOM element referenced by its ID; a declarative extension to create DOM element update listeners that could be referenced from a shared context.
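The first of those options can be sketched as a broker that addresses elements by ID, so the shared context never holds a DOM reference. The class and method names here stand in for UA machinery and are assumptions:

```javascript
// Mediates updates from a shared context to declared DOM targets.
class DomUpdateBroker {
  constructor() { this.listeners = new Map(); } // elementId -> apply function
  // The document declares which elements accept shared-context updates.
  declareTarget(elementId, applyUpdate) { this.listeners.set(elementId, applyUpdate); }
  // Called from the shared context; it never sees the element object itself.
  postUpdate(elementId, payload) {
    const apply = this.listeners.get(elementId);
    if (!apply) return false; // undeclared targets are unreachable
    apply(payload);
    return true;
  }
}
```

Only declared targets are reachable, which keeps the shared context's write surface explicit and auditable.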
The shared context might be allowed to read or receive intentional information from the UA, such as button clicks and form submissions; the range of information that could be received is examined above in the discussion of private contexts. The shared context could use this information to trigger updates from servers, such as fetching new email. Form content not tainted by private state, or form content deemed to be intentionally supplied by the user, could be sent to the shared context and used for form validation, auto-completion, or forwarded in an external request, etc. A new mechanism for the UA to pass on such information may be needed, perhaps a UA-initiated message posted to script in the shared context, or perhaps a new form element action. Extensions to note tainting of form values may also be needed so that form values written by a private context script are not leaked.
The UA could not in general expect to be able to customize code or state in a shared context without this activity being leaked to a server.
Given that the shared context has limited access to the private state and could only receive some intentional events, it would not be able to implement a rich UI entirely on its own, but might be able to implement a significant amount of page logic. Some authors of web content may choose to implement as much logic as possible in shared contexts so that they have visibility over UA changes to the logic, and this would limit the scope of private control that a user has over content - but this appears to be an inevitable compromise for the support of rich applications. An extreme example would be a layout engine implemented in this context that forwards only the final image for presentation. This would make it difficult for extensions to modify the content and would be a disaster for accessibility, but it would also limit the content's ability to adapt to the UA, so it would not be expected to be a common issue.
Some example uses of a shared context:
- code to pull in updates and forward them to a UA context with DOM write access but limited access to back channels, for use in updating the DOM etc.
- a codec, possibly implementing decryption (would need a channel to forward the content).
- code to forward intentional form content from the user to a server and receive responses that update the DOM in another UA context with DOM access.
- pulling in revolving advertisement content to forward to a UA context with DOM write access for presentation.
- JS code to load images for slide shows. It could be triggered by intentional start/stop events from the secure UA context or run on a schedule.
Multiple contexts per page
One approach to allowing a page to have access to both the private UA state and to back channels would be to support separate contexts each with different restrictions.
Communication between contexts
A shared context could be allowed to forward information to a private context without risking covert leakage of private UA state. A private context could share only untainted state that was already known to a shared context or deemed to be intentionally supplied by the user.
Infrastructure already exists to post messages between JS contexts, and this could be used to send information from a shared context to a private context.
Another option may be to implement a shared DOM that could be used to share information between private contexts and shared contexts. Each element of the DOM could be flagged as tainted or untainted - tainted elements would only be available to the private context and untainted elements would be available to both contexts. Some elements of the DOM filled from the initial HTML markup might be considered untainted. DOM updates supplied from a shared context could be considered untainted and thus flow from the shared context to the private context. DOM updates from a private context could taint the DOM elements, blocking them from leaking to a shared context. A shared context must always view an untainted value and must not be able to determine the tainted status of DOM elements, so a copy of the last untainted value may need to be kept after a tainted update from a private context. Intentional input from users could be deemed untainted and made available to both contexts via the DOM. Some DOM elements are read-only and derived from both potentially untainted DOM elements and the UA private state, for example the computed style; such elements would not be accessible from the private context, and access from the shared context would need to return a generic value that should be the same across UAs to minimize fingerprinting and should not leak the tainted status of values.
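The per-element taint rules can be sketched as a cell that keeps the last untainted value, so a shared context can neither see a tainted write nor detect that one happened. Names are illustrative; for the conflict case this sketch follows the option in which a shared-context update replaces even tainted values.

```javascript
class SharedDomCell {
  constructor(initial) {
    this.current = initial;        // what a private context sees
    this.lastUntainted = initial;  // what a shared context sees
    this.tainted = false;
  }
  writeFromShared(v) {             // untainted: visible to both contexts
    this.current = v;
    this.lastUntainted = v;
    this.tainted = false;
  }
  writeFromPrivate(v) {            // taints the cell; invisible to shared
    this.current = v;
    this.tainted = true;
  }
  readFromPrivate() { return this.current; }
  readFromShared() { return this.lastUntainted; } // never reveals taint status
}
```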
The view seen from a private context when there have been conflicting updates from both a private context and a shared context needs to be defined. One option would be to deem all tainted elements as private to the private context and updates from the shared context would not override the value seen. One option is to have updates from the shared context replace even tainted values seen by a private context.
Security of inbound private state
A compromised shared context would risk covertly leaking state passed through it from a server to a private context. A mechanism to allow inbound data to be encrypted from the server to a private context could be implemented to mitigate this risk. A safe mechanism for generating and passing out a key would be needed and may be implemented by the UA. The private context would need to pass out a key, but it must not be able to leak information in this key, so the UA may need to handle it.
Other computation models could be explored.
- Tainted code paths, or data. It may be possible to tag code paths and data that accessed private UA state and then restrict the code path from accessing back channels.
- Allowing JS to fork a separate context might help such code path analysis, by allowing code to relinquish global state.
- Proxy objects may be useful for passing objects between contexts. This could allow a remote reference to an object without exposing it to be shared.
If binary flags for both the private and shared context restrictions are added to the origin extra data then a third new option becomes available - a context with both the shared and private context restrictions. It is not yet clear if this would have any utility - it could receive intentional user input and post messages to a private context.
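Folding the restriction flags into the origin, as described above, can be sketched as follows (field names assumed): two contexts may share state only when scheme, host, port, and restriction flags all match, so the standard same-origin machinery keeps the context types apart.

```javascript
// Origin extended with restriction flags, per the 'origin extra data' idea.
function originKey({ scheme, host, port, privateCtx = false, sharedCtx = false }) {
  return [scheme, host, port, privateCtx, sharedCtx].join("|");
}

// Same-origin check: identical tuples including flags, so a private context
// and an unrestricted context on the same site remain separated.
function sameOrigin(a, b) { return originKey(a) === originKey(b); }
```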
This webpage has the Private Context restrictions and DOM updates are supported by a declarative extension for creating Shared Context web workers and for declaring HTML elements that receive updates from these workers via messages. The webpage also has an example button and a form that forward messages to the web workers.
This webpage has the Private Context restrictions and uses iframe elements to receive content from web workers. One option is to synchronously send GET and POST messages to web workers and wait for a response which is used to update a target iframe. One option is a declarative extension to the 'iframe' element that declares an update message listener, and to leave the response targeting to the web worker. The 'iframe' might have default content supplied by the 'src' or 'srcdoc' attributes that could be replaced by an update from a web worker message.
CSS media queries can expose UA information by selectively loading resources. This could be solved by loading all resources before media queries are applied and developing alternatives to media queries.
For example, dependence on a media query for the selection of high contrast or black and white images might be reduced by a CSS extension to declare image color and contrast transforms that would suit such devices.
Securing the UA private state may largely involve restrictions. If this could be done using extensions to existing elements that, if not understood by the UA, would simply result in the restrictions not being applied, then it may be possible to support content that works irrespective of the existence of the restrictive extensions. Such backward compatibility is critical to ease adoption and a good attempt should be made to constrain designs to be backward compatible.
Content Security Policy
The Content Security Policy (CSP) can report violations back to the server and can request monitoring only, not enforcement. This allows the server to probe the UA's actual implementation of CSP. It also allows the report to be used as a back channel to leak UA state - the document JS could trigger violations to send reports. This could be mitigated by making reporting opt-in so that servers cannot depend on such probing, and also to make any reports intentional.
The CSP WG refused to allow a conforming UA to implement reporting as opt-in, and my understanding of the reason is that they consider that CSP-style monitoring and reporting can already be implemented using DOM/script, so CSP reporting does not create new leaks. The PUA proposal closes a lot of the leaks and would make it more difficult to leak UA monitoring reports back to the server without the user explicitly sending them, so this excuse is not generally applicable to a PUA browser.
There is a concern that requiring reporting by a conforming CSP implementation may lead to servers depending on it and thus causing incompatibility with a PUA browser if reporting is opt-in. The CSP WG responded with the position that a server can not depend on receiving a report from a conforming UA but that a conforming UA is required to send a report!
The Chair of the CSP WG holds the view that the majority of CSP deployments will only use monitoring and not enforcement. There is a concern that this will lead to servers using policies that are overly narrow and further filtering reports at the server, and this would lead to the monitoring policy being of little real value to a UA for enforcement.
The server could also trip its own policy and check the UA's response to probe the UA's implementation of some CSP policies. This could be mitigated by reporting violations to the user and by halting the JS content upon a violation, to prevent further probing and make such probing less useful. The CSP WG holds the view that users would not want to be bothered by violation reports. It would seem that a UA could alert the user upon reports without the server knowing, but with most deployments expected to use only monitoring, the policies will likely be narrower than necessary with server-side filtering of reports, so a user could expect a lot of unnecessary reports which would dilute their value.
Authors and content providers could develop for both the CSP standard and the PUA HTML restrictions by not depending on reports being returned and by not depending on being able to trip their own policy.
CSP 1.1 proposes to add a JS API to access the policy. So long as this API does not trigger outgoing requests, it could be used in a PUA private context to customize content, but the back channels available to a private context are restricted so this API may not be usable for some of the use cases such as discovery of the UA CSP implementation by the server. The CSP 1.1 API is accessed via the Document object so would not be accessible within a PUA shared context. From a security point of view, the API allows an attacking script to probe for holes or otherwise lay dormant and this may not be good for security.
Browser extensions are already available that allow users to set up custom CSP policies. The CSP 1.1 API would appear to expose the user's CSP to being leaked to the server.
- Web Intents has the potential to keep shared data at the client; however, when sharing with an open web app the data could be shared outside the client. Web Intents appears to be designed with the assumption that when a user selects a service, the user trusts the selected service to maintain the privacy and security of the data. For a local application the user may well be able to control the privacy and security of the data.
- Explicit Intents are not under the control of the user and could be a channel through which covert state could be shared.
- Web Intents is not designed to give the user any control over the data being shared and does not support the user being able to vet the data being shared. The assumption is that by selecting a service the user is prepared to trust the service without any verification. The shared 'data' has been so broadly defined that the UA has little chance to understand it and present it to the user for validation before sharing. For example, when the data is text, HTML, or an image it should be presentable by the UA. This would still not eliminate the possibility of the client web app encoding covert information in the shared data, but would reduce the chance of completely inappropriate data being sent. Perhaps the range of data types should be restricted to those that a typical UA can present for confirmation. There may be some hope of salvaging some verification when combining Web Intents with specific intent definitions which may define the data being sent and allow the UA to present it for verification.
- The proposal is designed to blur the distinction between local applications, open web apps, and cloud services. For local applications, and perhaps sandboxed web apps, the security and privacy of the shared data may be controlled by the user, which significantly reduces the risks of covert sharing. Web apps with access to back channels are open to covertly sharing the data, and cloud services inherently share the data externally; for these services the user has no control over the privacy or security of the data, which must be entrusted to the service provider, and hence there is a much greater risk of covert sharing. The risk of covert sharing might be reduced if Web Intents allowed the user to vet the shared data. The shared data is prepared by JS code and there is no provision for the provenance to be known, unlike form values.
- Web apps that are downloaded and then run in a sandbox without access to back channels would be the obvious candidate to implement many of these services, such as the example of image editing included in the proposal. This key opportunity to keep data client-side and secure is not developed in the proposal.
- The proposal has no support for checking the returned data type, and any checking is left to the application. This might be a source of vulnerability, and could use more scrutiny.
The Pick Media Intent has a lot of privacy and security notes.
The Pick Contacts Intent has a lot of privacy and security notes.
Web Crypto API
Some notes from Harry:
Need to consider this use case:
Encrypted Media Extensions
Need to ponder this too: