This is an archived snapshot of W3C's public bugzilla bug tracker, decommissioned in April 2019. Please see the home page for more details.

Bug 25721 - extractable keys should be disabled by default
Summary: extractable keys should be disabled by default
Status: RESOLVED WONTFIX
Alias: None
Product: Web Cryptography
Classification: Unclassified
Component: Web Cryptography API Document
Version: unspecified
Hardware: All
OS: All
Importance: P2 normal
Target Milestone: ---
Assignee: Ryan Sleevi
QA Contact:
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2014-05-15 05:39 UTC by elijah
Modified: 2014-10-22 21:16 UTC
CC List: 7 users

See Also:


Attachments

Description elijah 2014-05-15 05:39:12 UTC
Allowing for extractable keys could provide for increased convenience, but at the cost of trusting the origin with your key material. 

Key material, like your location, should be considered sensitive and require a positive confirmation from the user that they want to allow a particular origin the ability to have access to their keys.

It is hard to imagine anything more sensitive than key material. If location is sensitive enough to warrant a confirmation from the user, surely keys are too.
Comment 1 Ryan Sleevi 2014-05-15 05:44:12 UTC
(In reply to elijah from comment #0)
> Allowing for extractable keys could provide for increased convenience, but
> at the cost of trusting the origin with your key material. 

The key material is key material the origin requested to be created. There is no inter-origin key material storage in this specification.

> 
> Key material, like your location, should be considered sensitive and require
> a positive confirmation from the user that they want to allow a particular
> origin the ability to have access to their keys.
> 
> It is hard to imagine anything more sensitive than key material. If location
> is sensitive enough to warrant a confirmation from the user, surely keys are
> too.

I'm afraid you've misunderstood this specification.

Keys created with this API are not like location, are not sensitive, and do not require UI confirmation. That's because the keys exposed by this API (as opposed to, say, Key Discovery, which is not part of this specification) are created at the request of the origin.

Every operation permitted or exposed by this API is an operation that could be implemented within Javascript today - with greater risk (to the site operator, not the user), but possible.
Comment 2 elijah 2014-05-15 06:07:10 UTC
> I'm afraid you've misunderstood this specification. Keys created with this API are not like location, are not sensitive, and do not require UI confirmation. That's because the keys exposed by this API (as opposed to, say, Key Discovery, which is not part of this specification) are created at the request of the origin.

Yes, obviously, I understand perfectly well. Everyone on the internet is concerned with privacy, and this means NOT trusting the origin for everything. The origin should be able to request that keys are created, and use those keys, but should not have the raw key unless the user wants them to.

> Every operation permitted or exposed by this API is an operation that could be implemented within Javascript today - with greater risk (to the site operator, not the user), but possible.

Exactly the point! Allowing key extraction makes the key handling basically no better than what we have today, with the added false sense of security.
Comment 3 Ryan Sleevi 2014-05-15 06:21:26 UTC
(In reply to elijah from comment #2)
> > I'm afraid you've misunderstood this specification. Keys created with this API are not like location, are not sensitive, and do not require UI confirmation. That's because the keys exposed by this API (as opposed to, say, Key Discovery, which is not part of this specification) are created at the request of the origin.
> 
> Yes, obviously, I understand perfectly well. Everyone on the internet is
> concerned with privacy, and this means NOT trusting the origin for
> everything. The origin should be able to request that keys are created, and
> use those keys, but should not have the raw key unless the user wants them
> to.

This is not a valid security model.

> 
> > Every operation permitted or exposed by this API is an operation that could be implemented within Javascript today - with greater risk (to the site operator, not the user), but possible.
> 
> Exactly the point! Allowing key extraction makes the key handling basically
> no better than what we have today, with the added false sense of security.

This is not a valid threat model.

This is explicitly out of scope of the WG, as discussed in the security considerations.

The WebCrypto API *requires* that you trust the source of the code you're running. Period, full-stop, there is no middle ground. This is by design, and is intrinsic to the provisioning of a low-level API.

As noted in the security considerations, ANY code executing in the context of the origin can fully compromise WebCrypto, as they can compromise any other Javascript.

I can appreciate your sensitivity towards privacy, but this is not how Javascript works.

WebCrypto provides a defense against limitations of the language - such as the inability to implement reliable constant-time algorithms and the significant difficulty of implementing these algorithms correctly. It does not and can not defend against threats to privacy.

Consider the Decrypt case - there is zero added privacy by requiring the user to be *prompted* to make the key extractable, as the origin has full access to all plaintext that results after the ciphertext. There are zero assurances provided to the end user about the 'security' of the site, as the origin can always decide to use Javascript implementations - as they inevitably would if this were needlessly gated behind added UI. This is the same way that sites can create phishing pages - it's a BY DESIGN product of how the Web works.

You either trust the source of the code - which is the origin (or, if used in a SysApp/Extension, the SysApp/Extension), or you do not use WebCrypto. The same as it is with every other API.
Comment 4 elijah 2014-05-15 20:36:26 UTC
> As noted in the security considerations, ANY code executing in the context
> of the origin can fully compromise WebCrypto, as they can compromise any
> other Javascript. I can appreciate your sensitivity towards privacy, but
> this is not how Javascript works.

This statement is entirely misleading for two reasons:

(1) There is a HUGE DIFFERENCE between modifying javascript to capture the cleartext result of an encryption process and being able to capture the keys, which gives an attacker access to all prior and future communication that used that key. The attack surface of the former is a narrow window of time, requiring targeted injection of malicious javascript, whereas the attack surface of the latter is as big as a freaking planet.

(2) the "way javascript works" is in flux. There are many interesting projects to separate the javascript app from data storage, using CORS or PostMessage, and also to verify the integrity of the js code (although this is less advanced). Code signing for JS will come eventually, although sadly not with WebCrypto.

> Consider the Decrypt case - there is zero added privacy by requiring the user
> to be *prompted* to make the key extractable, as the origin has full access
> to all plaintext that results after the ciphertext

Not true, see #1.
Comment 5 Ryan Sleevi 2014-05-15 20:44:53 UTC
The API forces the caller to choose extractability. This is a sufficient mitigation for all the threats you're now enumerating.

The user interaction point is entirely without merit, as has already been explained.

(In reply to elijah from comment #4)
> > As noted in the security considerations, ANY code executing in the context
> > of the origin can fully compromise WebCrypto, as they can compromise any
> > other Javascript. I can appreciate your sensitivity towards privacy, but
> > this is not how Javascript works.
> 
> This statement is entirely misleading for two reasons:
> 
> (1) There is HUGE DIFFERENCE between modifying javascript to capture the
> cleartext result of an encryption process and being able to capture the
> keys, which gives an attacker access to all prior and future communication
> that used that key. The attack surface of the former is a narrow window of
> time, requiring targeted injection of malicious javascript, where as the
> attack surface of the latter is as big as a freaking planet.

And the application author - NOT the user - is capable of making this tradeoff. There is zero value in presenting to the user, which is why this is INVALID.

An application author can still leave the attack surface the size of a planet through plenty of other means - eg: delivering over HTTP, storing messages in IndexedDB, etc.

Our charter is quite clear on this, which is why the bug is invalid.

> 
> (2) the "way javascript works" is in flux. There are many interesting
> projects to separate the javascript app from data storage, using CORS or
> PostMessage, and also to verify the integrity of the js code (although this
> is less advanced). Code signing for JS will come eventually, although sadly
> not with WebCrypto.

Codesigning can be implemented (with weak CSP). However, code signing as a first-class feature is explicitly NOT part of our charter. That remains the realm of WebAppSec.


Look, it remains quite simple: You either trust the source of the code you are running, or you do not. If you do not trust it, they can lie to you a million different ways, or access your plaintext a million different ways. There is zero added value from forcing user interaction. If you do trust them, then user interaction is pointless to the point of harmful.

Please read http://tonyarcieri.com/whats-wrong-with-webcrypto to understand why the security model being argued for here is entirely broken.
Comment 6 elijah 2014-05-16 20:58:04 UTC
> And the application author - NOT the user - is capable of making this tradeoff.
> There is zero value in presenting to the user, which is why this is INVALID.

This is the very crux of the matter. Does choosing to run a javascript application mean that the user must accept all the choices made by this application? 

The answer should be a resounding NO. In the real world, we are often not presented with rational choices where we can decide not to use a particular website. Imagine, for example, my bank sends me documents that need to be digitally signed. They use the services of a web service that does secure digital signatures of documents. I don't have a meaningful choice to not use the web service, but I should be given the choice if the web service is allowed access to my private keys generated by that origin.

So long as I don't give up my private keys, I don't even care if someone has hacked the signature service and stolen their database of users. I might be forced to use the service, but that doesn't mean I need to give them the power to sign documents on my behalf.

The very fact that the extractable flag exists at all is evidence that key material is not the same as javascript code. If there really was no difference at all, and to run code you just need to trust it for everything and do whatever it wants, then there would be no purpose whatsoever in having an extractable flag. It would not be coherent, and all keys should be extractable.

Because of CORS and PostMessage, it is entirely probable that in the future javascript apps will request operations on keys created by other origins. One can imagine a million uses for this kind of thing, from signatures to payments to confidential messaging. Because of the likely monopoly power of the services creating these keys (think paypal, amazon, etc), it does not make sense to say to the user "you must submit to whatever the monopoly service decides happens to your private keys for that service."
Comment 7 Ryan Sleevi 2014-05-16 21:04:09 UTC
(In reply to elijah from comment #6)
> > And the application author - NOT the user - is capable of making this tradeoff.
> > There is zero value in presenting to the user, which is why this is INVALID.
> 
> This is the very crux of the matter. Does choosing to run a javascript
> application mean that the user must accept all the choices made by this
> application? 

Yes. This is how the Web works.

> 
> The answer should be a resounding NO. In the real world, we are often not
> presented with rational choices where we can decide not to use a particular
> website. Imagine, for example, my bank sends me documents that need to be
> digitally signed. They use the services of a web service that does secure
> digital signatures of documents. I don't have a meaningful choice to not use
> the web service, but I should be given the choice if the web service is
> allowed access to my private keys generated by that origin.

Again, private keys generated by that origin.

Data generated by that origin - whether it be DOM Nodes, Javascript variables, or keys - ARE automatically trusted for that origin. It is only when something comes from outside that space - such as from the User Agent (eg: getUserMedia, geolocation, File API) - that things like permissions make sense.

I'm sorry, but this is an INVALID bug.

The Web Service should have access to whatever the Web Service created - the same way it has access to Indexed DB, the same way it has access to cookies, the same way it has access to DOM nodes it creates.

This is really quite fundamental to how the web works. Reopening this bug is not going to change this.
Comment 8 elijah 2014-05-16 21:17:41 UTC
> Codesigning can be implemented (with weak CSP). However, code signing as a 
> first class is explicitly NOT part of our charter. That remains the realm of
> WebAppSec.
> ... Look, it remains quite simple: You either trust the source of the code you
>  are running, or you do not.

The point is that entirely trusting the code is not necessarily the only way JS apps will be run in the future, and the WebCrypto API should not undermine this now.

> If you do not trust it, they can lie to you a million different ways, or access
> your plaintext a million different ways.

Again, this argument ignores (a) the importance of past and future data, (b) that ciphertext <> plaintext is only one of many possible cryptographic functions, and (c) that the threats are not just in the browser (cloud services get hacked all the time) and I shouldn't be forced to give the origin my keys.

> Please read http://tonyarcieri.com/whats-wrong-with-webcrypto to understand
> why the security model being argued for here is entirely broken.

Yep, WebCrypto is not going to make web crypto that much better, but always allowing extractable keys only makes it worse.
Comment 9 elijah 2014-05-16 21:24:15 UTC
> The Web Service should have access to whatever the Web Service created - the
> same way it has access to Indexed DB, the same way it has access to cookies,
> the same way it has access to DOM nodes it creates.

The origin web service did not create my private keys, it requested to have them created. This is a very important distinction. Keys are different than cookies or local storage, they are much more like location data. Otherwise, there is no point at all for any of the key stuff in WebCrypto: if keys were like cookies, then the keys should always just get generated on the server and sent to the client.

Why isn't this the model of WebCrypto? Because keys are not like cookies, because it would be ridiculous to write an API that just said "generate keys on the server". Once you make the decision to allow client-side key generation, these keys suddenly become something very different than everything else dictated by the origin, they become something that the client should be able to control.
Comment 10 Ryan Sleevi 2014-05-16 22:25:39 UTC
(In reply to elijah from comment #8)
> > Codesigning can be implemented (with weak CSP). However, code signing as a 
> > first class is explicitly NOT part of our charter. That remains the realm of
> > WebAppSec.
> > ... Look, it remains quite simple: You either trust the source of the code you
> >  are running, or you do not.
> 
> The point is that entirely trusting the code is not necessarily the the only
> way js apps will be run in the future, but WebCrypto API should not
> undermine this now.

Yes, it is. It's part of the way Javascript works.

When you change Javascript, then we can revisit this.

> 
> > If you do not trust it, they can lie to you a million different ways, or access
> > your plaintext a million different ways.
> 
> Again, this argument ignores (a) the important of past and future data, (b)
> that ciphertext <> plaintext is only one of many possible cryptographic
> functions, (c) the threats are not just in the browser (cloud service get
> hacked all the time) and I shouldn't be forced to give origin my keys.
> 
> > Please read http://tonyarcieri.com/whats-wrong-with-webcrypto to understand
> > why the security model being argued for here is entirely broken.
> 
> Yep, WebCrypto is not going to make web crypto that much better, but always
> allowing extractable keys only makes it worse.
Comment 11 Ryan Sleevi 2014-05-16 22:32:27 UTC
(In reply to elijah from comment #9)
> > The Web Service should have access to whatever the Web Service created - the
> > same way it has access to Indexed DB, the same way it has access to cookies,
> > the same way it has access to DOM nodes it creates.
> 
> The origin web service did not create my private keys, it requested to have
> them created. This is a very important distinction. Keys are different than
> cookies or local storage, they are much more like location data. Otherwise,
> there is no point at all for any of the key stuff in WebCrypto: if keys were
> like cookies, then the keys should always just get generated on the server
> and sent to the client.

Keys are not like location data at all. They are a series of random bytes. Servers can totally create series of random bytes. They can put them in Indexed DB. They can put them in cookies. This is no different.

> 
> Why isn't this the model of WebCrypto? Because keys are not like cookies,
> because it would be ridiculous to write an API that just said "generate keys
> on the server". Once you make the decision to allow client-side key
> generation, these keys suddenly become something very different than
> everything else dictated by the origin, they become something that the
> client should be able to control.

Keys being generated on the client is not to protect the user from evil servers. Nor is it to protect servers from evil users. It's to provide plausible deniability for servers (and applications) when they state they don't have access to the key material.

Let me state this as unambiguously as possible: The Web Crypto API does not, and cannot, protect you from malicious servers. You are running code from a third-party who is potentially hostile. This is how the Web works - and WHY it works.

Prompting the user to create an extractable key does nothing for security and is empirically unusable.

I have tried to demonstrate the many technical reasons why the arguments you're raising are flawed. Over the course of the conversation, the exact requirements of what threat you're trying to solve have changed.

However, it's clear that you view there being a need to protect against Hostile Servers. This is explicitly out of scope for the charter (as a change to the web security model beyond the same-origin policy), and those in the W3C Web App Sec WG ( http://www.w3.org/2011/webappsec/ ) can no doubt happily explain the flaws and reasons why.

Extractable serves two purposes:
  - One, it provides servers a means to state they lack interest in a key. This is to reduce liability or risk of disclosure. It does not prevent them from being compelled in the future to request access to new keys as they are generated.
  - Two, it provides a limited means for servers that have script injected (via XSS) to not have messages compromised. However, in practice, this will generally not be the case - you will still need to rekey, and you have no guarantees at the time of XSS that past messages were not replayed, fraudulent messages were generated, etc. This is for the *server*, not the *user* to deal with.
Comment 12 elijah 2014-05-18 22:07:43 UTC
> Let me state this as unambiguously as possible: The Web Crypto API does not,
> and cannot, protect you from malicious servers.

Let me state this as unambiguously as possible: there is a big difference between a malicious server that can compromise your security moving forward and one that can also gain access to key material, allowing it access to all prior communication and/or the ability to sign anything it wants.

> I have tried to demonstrate the many technical reasons why the arguments
> you're raising are flawed. Over the course of the conversation, the exact
> requirements of what threat you're trying to solve have changed.

"It is difficult to get a man to understand something, when his salary depends upon his not understanding it!" -- Upton Sinclair

What I am trying to solve has not "changed", I am just further enumerating the reasons why extractable keys are a horrible idea. 

(1) Extractable keys open up additional attacks, particularly of prior communication.

(2) Allowing the storage of keys on the server increases the ways keys can be compromised.

(3) Although currently the browser must trust the origin's javascript entirely, this is likely to change in a future with code signing.

(4) In the real world, users never have informed consent when their browser runs javascript and users are often required to run particular javascript as part of their daily business. It is not sufficient to say that the origin can be trusted to make the right decision regarding key extraction.
Comment 13 Ryan Sleevi 2014-05-19 00:26:27 UTC
(In reply to elijah from comment #12)
> > Let me state this as unambiguously as possible: The Web Crypto API does not,
> > and cannot, protect you from malicious servers.
> 
> Let me state this as unambiguously as possible: there is a big difference
> between a malicious server that can compromise your security moving forward
> and one that can also gain access to key material, allowing it access to all
> prior communication and/or the ability to sign anything it wants.

No, there is not.

The server is not compromising the user's security. The server is compromised. There is a significant difference here.

However, more important things below than this misconception:

> 
> > I have tried to demonstrate the many technical reasons why the arguments
> > you're raising are flawed. Over the course of the conversation, the exact
> > requirements of what threat you're trying to solve have changed.
> 
> "It is difficult to get a man to understand something, when his salary
> depends upon his not understanding it!" -- Upton Sinclair
> 
> What I am trying to solve has not "changed", I am just further enumerating
> the reasons why extractable keys are a horrible idea. 
> 
> (1) Extractable keys open up additional attacks, particularly of prior
> communication.
> 

No more than existing cross-site scripting. This argument holds just the same - if someone XSS's your email provider, they have access to all prior communication.

This is not a change in the web security model, nor is our group chartered to. In fact, we're explicitly chartered NOT to, because we are NOT the group to do that (for that, go see WebAppSec).

> (2) Allowing the storage of keys on the server increasing the ways keys can
> be compromised.

This is at the server's discretion. It does not change the user experience for security one bit. It is empirically shown that prompting users on this is not going to improve security one lick.

> 
> (3) Although currently the browser must trust the origin's javascript
> entirely, this is likely to change in a future with code signing.

This is
1) Wrong
2) Out of scope.

> 
> (4) In the real world, users never have informed consent when their browser
> runs javascript and users are often required to run particular javascript as
> part of their daily business. It is not sufficient to say that the origin
> can be trusted to make the right decision regarding key extraction.

This statement does not parse.

The server is responsible for determining what policies it associates with the keys. The keys have no special meaning imposed by the UA.

The server is responsible for determining its security policies. It alone is responsible for ensuring things like XSS are not possible.

Your arguments are arguing for a web security model that does not exist (nor can/should it, since it's the very nature of the current model that makes the Web the Web).

Notions such as prompting users to "review the Javascript about to be run" are not realistic, as any sort of obfuscated code competition can prove.

Notions such as keys having any sort of "ramifications" for users - such as legally binding statements or the like - are demonstrably without merit, because such things are already possible today. Further, anyone familiar with that particular application of cryptography would know the complex requirements that underscore such legislative granting - and how frequently it fails as technology.

Again, this API does not change the web security model at all. Everything you claim this API should prevent is something that is already possible on the Web today. None of the threats or concerns you have enumerated are new, and our charter explicitly states that changes to the web security model (beyond the same origin policy) are explicitly out of scope - and for good reason.

I appreciate your enthusiastic concern, but it's unwarranted and, to a degree, either misrepresenting or misunderstanding what's available via Javascript today.

There is zero benefit - to users or site operators - by insisting a user interaction for such operations.
Comment 14 elijah 2014-05-19 23:07:46 UTC
Ryan, I understand that you don't personally like the idea of placing restrictions on extractable keys, but the topic is clearly "within scope". I just found this in the WebCrypto Charter:

> Primary API Features in scope are... the API should be asynchronous and
> must prevent or control access to secret key material and other sensitive
> cryptographic values and settings.

(http://www.w3.org/2011/11/webcryptography-charter.html)

In light of this, I wish to make a formal objection to the inclusion of extractable private keys in the WebCrypto API without user agent requirements to disable this by default or require user consent.
Comment 15 Ryan Sleevi 2014-05-19 23:50:12 UTC
(In reply to elijah from comment #14)
> Ryan, I understand that you don't personally like the idea of placing
> restrictions on extractable keys, but the topic is clearly "within scope". I
> just found this in the WebCrypto Charter:
> 
> > Primary API Features in scope are... the API should be asynchronous and
> > must prevent or control access to secret key material and other sensitive
> > cryptographic values and settings.
> 
> (http://www.w3.org/2011/11/webcryptography-charter.html)
> 
> In light of this, I wish to make a formal objection to the inclusion of
> extractable private keys in the WebCrypto API without user agent
> requirements to disable this by default or require user consent.

From the same document:

"Out of scope ... access-control mechanisms beyond the enforcement of the same-origin policy"

This API sufficiently meets its primary API feature, by allowing application developers and site authors to choose whether or not they wish access to the generated key material. As these site authors are responsible for the code executing and using the Web Cryptography API, and are equally responsible for the security boundary (through the use of HTTPS, CSP, XSS mitigations, and other equivalent restrictions), they are equally capable and cognizant of determining whether or not they require persistent, extractable access to key material.

There are use cases that cannot be met without extractability - such as the safe escrow of keys, or of key wrapping in general between two peers.

I leave it to the chairs to note your formal objection. However, the technical reasons for why your request is unnecessary, unrealistic, and unfortunately based in misunderstanding the web security and privacy model have been explained, and I am confident that the WG will continue in the current path.
Comment 16 Tom Lowenthal 2014-05-21 00:10:06 UTC
These concerns are not based on a misunderstanding. Instead, they are only considered unrealistic because of an overly-constrained formal threat model which is manifoldly incompatible with plausible threats.

I also formally object to the inclusion of extractable keys as a required component of this API.

My objection could be mitigated by normatively recommending that browsers engage in user interaction both when generating an extractable key and when it is requested that such a key be exported. A normative description of additional API parameters 

In addition, a non-normative recommendation should be given that web applications which request the generation of an extractable key check to see whether the key generated in this way has the extractable flag. This would allow for the UA behavior of allowing a generation request for an extractable key to be resolved with a non-extractable key if the user chooses.

I think that browsers' user prompts for location information are a completely appropriate basic model for these types of requests.
Comment 17 virginie.galindo 2014-07-01 21:29:37 UTC
Hi all,

Provided the balanced discussions the Web Crypto WG had with respect to key extractability – how it is needed, how it can be used, what the limitations are;
Provided the fact that implementers, developers and users are made clearly aware of the security expectations for the Web Crypto API [1] and extractable keys in particular (see [extract] below);
Provided that user interaction (suggested in the bug discussion as a possible technical answer to security concerns) is something that is usually out of scope of the W3C domain and does not have a real security value proposition;
I suggest that we close this bug as WONTFIX.

Regards,
Virginie
Chair of the Web Crypto WG
 
[1] https://dvcs.w3.org/hg/webcrypto-api/raw-file/tip/spec/Overview.html#security-developers 
[extract] Applications may share a CryptoKey object across security boundaries, such as origins, through the use of the structured clone algorithm and APIs such as postMessage. While access to the underlying cryptographic key material may be restricted, based upon the extractable attribute, once a key is shared with a destination origin, the source origin can not later restrict or revoke access to the key. As such, authors must be careful to ensure they trust the destination origin to take the same mitigations against hostile script that the source origin employs. Further, in the event of script injection on the source origin, attackers may post the key to an origin under attacker control. Any time that the user agent visits the attacker's origin, the user agent may be directed to perform cryptographic operations using that key, such as the decryption of existing messages or the creation of new, fraudulent messages.
Comment 18 Harry Halpin 2014-07-28 15:55:12 UTC
Quick note Elijah and any others interested in this bug,

Per Virginie's comment, if we formally bring this larger issue with the Web security model up to the WebAppSec (Web Application Security) WG, would that satisfy the reviewer?

   cheers,
      harry

(In reply to virginie.galindo from comment #17)
> Hi all,
> 
> Provided the balanced discussions Web Crypto WG had with respect to the key
> extractability – how it is needed, how it can be used, what are the
> limitations. 
> Provided the fact that implementers, developers and users are made clearly
> aware of the security expectation for the Web Crypto API [1] and extractable
> key in particular (see [extract] below). 
> Provided that user interaction (suggested in the bug discussion as a
> possible technical answer to security concerns) is something that is usually
> out of scope of the W3C domain and does not have a real security value
> proposition. 
> I suggest that we close this bug with WONTFIX. 
> 
> Regards,
> Virginie
> Chair of the Web Crypto WG
>  
> [1]
> https://dvcs.w3.org/hg/webcrypto-api/raw-file/tip/spec/Overview.
> html#security-developers 
> [extract] Applications may share a CryptoKey object across security
> boundaries, such as origins, through the use of the structured clone
> algorithm and APIs such as postMessage. While access to the underlying
> cryptographic key material may be restricted, based upon the extractable
> attribute, once a key is shared with a destination origin, the source origin
> can not later restrict or revoke access to the key. As such, authors must be
> careful to ensure they trust the destination origin to take the same
> mitigations against hostile script that the source origin employs. Further,
> in the event of script injection on the source origin, attackers may post
> the key to an origin under attacker control. Any time that the user agent
> visits the attacker's origin, the user agent may be directed to perform
> cryptographic operations using that key, such as the decryption of existing
> messages or the creation of new, fraudulent messages.
>
Comment 19 Ryan Sleevi 2014-07-28 18:50:46 UTC
(In reply to Harry Halpin from comment #18)
> Quick note Elijah and any others interested in this bug,
> 
> Per Virginie's comment, if we formally bring this larger issue up with the
> Web Security Model up to the WebAppSec (Web Application Security Model) WG,
> would that satisfy the reviewer?
> 

Harry,

For the sake of the members of the WG, I don't see that in Virginie's comment, so could you please provide an example of what issue you believe should be brought to WebAppSec? Virginie's response correctly identified that UI is out of scope, and I'm not sure what you would want from WebAppSec to provide, other than "Yes, this is how the Internet works, ergo this is not a valid threat model".
Comment 20 Tom Lowenthal 2014-07-28 19:21:09 UTC
Virginie's suggestion that UI is out of scope removes one possible mitigation of the issue, not the issue itself.

A review by WebAppSec might well be useful in finding a more agreeable solution.
 
I remain in substantial objection to extractable keys as described.

They seem grossly incompatible with the goal of implementing secure application protocols at the level of web applications, not least *because* of risks such as XSS and the code-delivery problem of which we are all aware.

As it stands, the spec doesn't seem on track to implement a solution which will be actually useful at achieving the first goal specified in the WG's charter. I hope to find a solution which will allow developers to implement trustworthy applications.
Comment 21 Ryan Sleevi 2014-07-28 19:40:58 UTC
(In reply to Tom Lowenthal from comment #20)
> As it stands, the spec doesn't seem on track to implement a solution which
> will be actually useful at achieving the first goal specified in the WG's
> charter. I hope to find a solution which will allow developers to implement
> trustworthy applications.

Tom,

This is a mischaracterization. The API allows you to generate such applications with unextractable keys. An application author is REQUIRED, by contract of the API, to specify whether or not they desire keys to be extractable.

Again, to reiterate, if the API made all keys unextractable, then an application author CAN, just the same, use a purely JS polyfill (as SJCL, Forge, End to End, and countless others are PROOF of this), and have the EXACT SAME API and capabilities as exposed through Web Crypto API. So it does absolutely nothing to improve security to arbitrarily limit the API, since there is no reduction of capabilities in a polyfill, only a real and tangible reduction of security.

Put differently: Your solution will make the web less secure. Provably.
Comment 22 Harry Halpin 2014-07-28 20:01:21 UTC
(In reply to Ryan Sleevi from comment #19)
> (In reply to Harry Halpin from comment #18)
> > Quick note Elijah and any others interested in this bug,
> > 
> > Per Virginie's comment, if we formally bring this larger issue up with the
> > Web Security Model up to the WebAppSec (Web Application Security Model) WG,
> > would that satisfy the reviewer?
> > 
> 
> Harry,
> 
> For the sake of the members of the WG, I don't see that in Virginie's
> comment, so could you please provide an example of what issue you believe
> should be brought to WebAppSec? Virginie's response correctly identified
> that UI is out of scope, and I'm not sure what you would want from WebAppSec
> to provide, other than "Yes, this is how the Internet works, ergo this is
> not a valid threat model".


Virginie noted it was out of scope for WebCrypto, but you noted that the issue is more suitable for discussion in WebAppSec (https://www.w3.org/Bugs/Public/show_bug.cgi?id=25721#c11). Thus, the issue is valid for discussion and possible future work. However, as you already noted, that is not how the Web itself is designed right now, as the current design of the Web Crypto API shows. 


Comment 23 Tom Lowenthal 2014-07-28 21:45:12 UTC
Ryan, I chose my words carefully. I said “trustworthy” not “secure”. I think that the option of extractable keys makes it harder for applications built on this API to be worthy of users' trust.

As you say — if someone wants to make a key which they can extract, they can do that right now. My objection is based on the firm belief that the ability to extract keys is a harmful design pattern. I think that this choice would give developers enough rope to shoot themselves in the foot which would be harmful to web security.
Comment 24 Harry Halpin 2014-08-04 15:33:26 UTC
The existence of extractable keys has some use cases that the Working Group has already gone over in detail. For example, backing up keys. 

Also, Ryan, Tom, and Elijah all agree that, with the current design of Javascript and Web Crypto, private keys can always be extracted by the server, even if the keys are marked unextractable. Does anyone have any text to put under the definition of "extractable" (currently just "Whether or not the raw keying material may be exported by the application") to help Web developers understand that keys marked unextractable may not actually protect private key material from the server? 

I think the question is: what are the use cases for truly non-extractable keys that cannot be accessed by the server, so that the server has no actual way of decrypting the data? The obvious use case is applications with genuine user-to-user (end-to-end) encryption, where the server cannot retrieve the private keys (at least without the user's knowledge). Right now on the Web, the server that serves the code is *always* trusted (Trent), as it can modify the code as it wants, and thus can always modify the code to get the keys. So Ryan is right, this is basically impossible. 

My observation is that while there are valid use-cases for user-to-user (end-to-end) encryption, Ryan is right insofar as it seems impossible to build these types of applications on the Web. However, it would seem possibly desirable in the future for the Web to support such use-cases. Thus, we are hoping that broaching this topic with the wider WebAppSec group at W3C, and perhaps later with the relevant other standards bodies, would be at least a start.

Would this satisfy the reviewers? 

(In reply to Tom Lowenthal from comment #23)
> Ryan, I chose my words carefully. I said “trustworthy” not “secure”. I think
> that the option of extractable keys makes it harder for applications built
> on this API to be worthy of users' trust.
> 
> As you say — if someone wants to make a key which they can extract, they can
> do that right now. My objection is based on the firm belief that the ability
> to extract keys is a harmful design pattern. I think that this choice would
> give developers enough rope to shoot themselves in the foot which would be
> harmful to web security.
Comment 25 Richard Barnes 2014-08-04 22:14:29 UTC
(In reply to Harry Halpin from comment #24)
> The existence of extractable keys has some use cases that the Working Group
> has already gone over in detail. For example, backing up keys. 

As another example, I have spoken with multiple developers who intend to extract keys and wrap them with PBKDF2 for storage.  This is actually safer in the face of an adversary with the ability to read the local disk (but without the ability to hook the browser).  There's a trade-off here between protecting against script injection and protecting against local processes; disabling extractable keys just forces developers to accept the local attacker.

 
> Also, Ryan, Tom, and Elijah all agree that currently with the design of
> Javascript and Web Crypto,  private keys can always be extracted by the
> server, even if the keys are marked unextractable. Does anyone have any text
> to put under the definition of "extractabie" (Currently just "Whether or not
> the raw keying material may be exported by the application") to help Web
> developers understand that keys marked unextractable may not actually give
> protection of private key material from the server? 

Isn't the definition of "extractable" just "extract() and wrap() work"?  

In any case, I'm not clear what you mean by "private keys can always be extracted by the server, even if the keys are marked unextractable".  I'm assuming that by "server" here, you mean "JS".   If the JS calls generateKey() with extractable == false, then it certainly cannot access the private key material.


> I think the question is what are the use cases for truly non-extractable
> keys that can't be accessed by the server, so the server  has no actual way
> of decrypting the data. The obvious use-case is applications where there is
> to be genuine user-to-user (end-to-end encryption) where the server cannot
> retrieve the private keys (at least without the user's knowledge). Right now
> on the Web, the server that serves the code is *always* trusted (Trent) as
> it can modify the code it wants, and thus always modify the code to get the
> keys. So Ryan is right, this is basically impossible. 

I'm not as negative as Ryan on this.  Even if the server can modify the code, if the code that creates a key the first time is good, then at the very least, the server has to re-key the browser in order to have an extractable private key.  And that action is visible to anyone that the endpoint corresponds with, so you can apply techniques analogous to Certificate Transparency.

(And this is not to mention things like sub-resource integrity, which can prevent JS from changing without authorization.)


> My observation is that while there are valid use-cases for user-to-user
> (end-to-end) encryption, Ryan is right insofar as it seems impossible to
> build these types of applications on the Web. However, it would seem
> possibly desirable in the future for the Web to support such use-cases.
> Thus, we are hoping that broaching this topic with the wider WebAppSec group
> at W3C, and perhaps later with the relevant other standards bodies, would be
> at least a start.
> 
> Would this satisfy the reviewers? 
> 
> (In reply to Tom Lowenthal from comment #23)
> > Ryan, I chose my words carefully. I said “trustworthy” not “secure”. I think
> > that the option of extractable keys makes it harder for applications built
> > on this API to be worthy of users' trust.
> > 
> > As you say — if someone wants to make a key which they can extract, they can
> > do that right now. My objection is based on the firm belief that the ability
> > to extract keys is a harmful design pattern. I think that this choice would
> > give developers enough rope to shoot themselves in the foot which would be
> > harmful to web security.
Comment 26 Harry Halpin 2014-08-11 01:11:35 UTC
(In reply to Richard Barnes from comment #25)
> (In reply to Harry Halpin from comment #24)
> > The existence of extractable keys has some use cases that the Working Group
> > has already gone over in detail. For example, backing up keys. 
> 
> For more example, I spoken with multiple developers who intend to extract
> keys and wrap them with PBKDF2 for storage.  This is actually safer in the
> face of an adversary with the ability to read the local disk (but without
> the ability to hook the browser).  There's a trade-off here between
> protecting against script injection and protecting against local processes;
> disabling extractable keys just forces developers to accept the local
> attackers.

All solid points, although I assume again it all depends on your threat model. 

> 
>  
> > Also, Ryan, Tom, and Elijah all agree that currently with the design of
> > Javascript and Web Crypto,  private keys can always be extracted by the
> > server, even if the keys are marked unextractable. Does anyone have any text
> > to put under the definition of "extractabie" (Currently just "Whether or not
> > the raw keying material may be exported by the application") to help Web
> > developers understand that keys marked unextractable may not actually give
> > protection of private key material from the server? 
> 
> Isn't the definition of "extractable" just "extract() and wrap() work"?  
> 
> In any case, I'm not clear what you mean by "private keys can always be
> extracted by the server, even if the keys are marked unextractable".  I'm
> assuming that by "server" here, you mean "JS".   If the JS calls
> generateKey() with extractable == false, then it certainly cannot access the
> private key material.

Sounds like a test case to me. My point is that some text highlighting the problems inherent in trying to do end-to-end, user-centric encryption on the Web (where the server does not have the ability to decrypt the user's data without the user's knowledge) could prevent app designers from being misled about the security properties of their web apps. I'm thinking of you, Protonmail :)  

Again, this seems to boil down to a problem of checking the veracity of JS code. Independent code verification and auditing of JS code would probably help here in cases where the user really wanted to be assured of the code that the server was running. 

> 
> 
> > I think the question is what are the use cases for truly non-extractable
> > keys that can't be accessed by the server, so the server  has no actual way
> > of decrypting the data. The obvious use-case is applications where there is
> > to be genuine user-to-user (end-to-end encryption) where the server cannot
> > retrieve the private keys (at least without the user's knowledge). Right now
> > on the Web, the server that serves the code is *always* trusted (Trent) as
> > it can modify the code it wants, and thus always modify the code to get the
> > keys. So Ryan is right, this is basically impossible. 
> 
> I'm not as negative as Ryan on this.  Even if the server can modify the
> code, if the code that creates a key the first time is good, then at the
> very least, the server has to re-key the browser in order to have an
> extractable private key.  And that action is visible to anyone that the
> endpoint corresponds with, so you can apply techniques analogous to
> Certificate Transparency.

This sounds like an idea for a new standard :) Ben Laurie has a mailing list on this, and I'm sure W3C would be happy to see a draft of something in this space. 

Again, would this response satisfy the reviewers? 

> 
> (And this is not to mention things like sub-resource integrity, which can
> prevent JS from changing without authorization.)
> 
> 
> > My observation is that while there are valid use-cases for user-to-user
> > (end-to-end) encryption, Ryan is right insofar as it seems impossible to
> > build these types of applications on the Web. However, it would seem
> > possibly desirable in the future for the Web to support such use-cases.
> > Thus, we are hoping that broaching this topic with the wider WebAppSec group
> > at W3C, and perhaps later with the relevant other standards bodies, would be
> > at least a start.
> > 
> > Would this satisfy the reviewers? 
> > 
> > (In reply to Tom Lowenthal from comment #23)
> > > Ryan, I chose my words carefully. I said “trustworthy” not “secure”. I think
> > > that the option of extractable keys makes it harder for applications built
> > > on this API to be worthy of users' trust.
> > > 
> > > As you say — if someone wants to make a key which they can extract, they can
> > > do that right now. My objection is based on the firm belief that the ability
> > > to extract keys is a harmful design pattern. I think that this choice would
> > > give developers enough rope to shoot themselves in the foot which would be
> > > harmful to web security.
Comment 27 Mark Watson 2014-09-22 18:08:53 UTC
Following the above discussion, do there remain any objections to closing this with won't fix ?
Comment 28 Richard Barnes 2014-09-22 21:07:20 UTC
*Please* close it wontfix :)
Comment 29 Tom Lowenthal 2014-09-22 22:30:15 UTC
I continue to object to extractable keys. None of the comments here suggest a change to the recommendation which would mitigate my objections.

Conversely, it seems that the arguments *for* extractable keys come from a place of security nihilism. It's true that as long as JavaScript is distributed unsafely, many things are at risk. This seems to make it even more important that keys not be extractable. Indeed, approaches like sub-resource integrity and CT-like work would make things even safer — if users are confident that keys can't be extracted.
Comment 30 Mark Watson 2014-09-23 22:40:44 UTC
Ok, so I re-read the comments.

I am not at all sure that simply removing the possibility of extractable secret/private keys would provide any of the benefits that are sought: If the API did not support extractable secret/private keys it remains straightforward for a web application to do something equivalent by generating the key themselves in Javascript and then importing it. This would look identical to the user, so any perceived security benefit for the user of disallowing extractability would be illusory.

However, I do see in the discussion what appears to me to be a feature request: a mode where the UA attests to the user that some keying material will be kept secret from the site that is using it. Trust is not black-and-white, and I can imagine scenarios where the user might find it valuable to know that a site can perform operations with a given key only when the user is actually visiting the site and only on the user's computer, and not at other times and in other places.

At first glance, though, this feature seems rather hard to implement: just because the UA tells me that some site has generated a non-extractable key does not mean this is the key the site is using for whatever operations it is performing. Even if all keys are non-extractable the site may still be using some other key. Only if I also trust the assertions of the site about the key it uses - or I have some further attestation from the UA about what is being done with the key - do I gain some value from the non-extractability. And if I trust the assertions from the site, I can trust it to set extractable=false anyway and the UA attestation is of no value.

So, can we treat this as a feature request, to be considered for future versions ? Some work is certainly required to describe the desired feature and indeed some liaision with WebAppSec might be necessary.

I would say, though, that I find arguments along the lines of 'This is not how the web works' remarkably uncompelling. The implication is that the commenter just doesn't understand some fundamental, immutable property of the web, and that if only they did they would see how spectacularly wrong they are. But the web is a highly complex and evolving human creation: I doubt that anyone 'understands' it entirely, and I would be skeptical of anyone who claimed to. That the web 'works' in a particular way now doesn't mean it always will. So, it's reasonable to request features which don't fit in a straightforward way into the existing models. The correct response (IMHO) is to explore the underlying motivation for the request and the next steps that would be necessary to further understand it.

There is clearly value in the UA making an attestation to the user about the security properties of a site (cf. the green padlock in the https case), so on the face of it there seems to be nothing strange in considering whether there are useful attestations the UA could make about a site's use of WebCrypto. I don't think there is anything simple we can do right now, and simply removing extractability would break some things for no concrete benefit. So, future work ?
Comment 31 Richard Barnes 2014-09-24 03:07:44 UTC
Thanks for this analysis, Mark.  Treating this as possible future work seems sensible to me.  At least the "non-extractable-only mode" feature is something for which I can understand how it works and roughly what the value proposition is, even if I don't necessarily think it's worth doing.
Comment 32 Harry Halpin 2014-09-24 12:00:23 UTC
(In reply to Richard Barnes from comment #31)
> Thanks for this analysis, Mark.  Treating this as possible future work seems
> sensible to me.  At lest the "non-extractable-only mode" feature is
> something for which I can understand how it works and roughly what the value
> proposition is, even if I don't necessarily think it's worth doing.

Again, I agree with Mark's analysis. The Web does not currently work this way, but that means a whole class of high-value applications with externally verified trust and end-to-end encryption without a totally trusted server are excluded from the Web. 

Yet simply making keys non-extractable all the time does not actually fix the situation. Thus, I will formally raise the point of trusted Javascript, with ensuring that private key material isn't extracted as an example, to the Web Application Security Working Group.

I believe the Web should support such functionality and that this is within the scope of a re-chartered Web Application Security Working Group. I will email Web Application Security describing the problem. 

If we can get the charters to re-align, then it may even be within scope of joint work between the Web Application Security Working Group and a re-chartered Web Cryptography Working Group.

However, right now I don't see how we can address this issue in a way that meaningfully resolves Tom and Elijah's worry, because in effect if one doesn't trust the server 100%, the Web is broken for your application. 

I believe this will address the reviewers' concerns.
Comment 33 Harry Halpin 2014-09-24 12:09:42 UTC
(In reply to Harry Halpin from comment #32)
> (In reply to Richard Barnes from comment #31)
> > Thanks for this analysis, Mark.  Treating this as possible future work seems
> > sensible to me.  At lest the "non-extractable-only mode" feature is
> > something for which I can understand how it works and roughly what the value
> > proposition is, even if I don't necessarily think it's worth doing.
> 
> Again, I agree with Mark's analysis. The Web does not currently work this
> way, but that means a whole class of high-value applications with externally
> verified trust and end-to-end encryption without a totally trusted server
> are excluded from the Web. 
> 
> Yet simply making keys non-extractable all the time does not actually fix
> the situation.  Thus, I will formally raise the point of trusted Javascript
> with ensuring that private key material isn't extracted as a example to the
> Web Application Security Working Group.
> 
> I believe the Web should support such functionality and that this is within
> the scope of a re-chartered Web Application Security Working Group. I will
> email Web Application Security describing the problem. 
> 
> If we can get the charters to re-align, then it may even be within scope of
> joint work between the Web Application Security Working Group and a
> re-chartered Web Cryptography Working Group.
> 
> However, right now I don't see how we can address this issue in a way that
> meaningfully resolves Tom and Elijah's worry, because in effect if one
> doesn't trust the server 100%, the Web is broken for your application. 
> 
> I believe this will address the reviewers concerns.

Formal request sent to the Web Application Security Working Group about how to include attestations for Javascript, and to the Web Cryptography Working Group about secure key storage in their re-chartering, as well as cc'ing the Web Security IG. I believe that resolves the bug and the formal objection, hoping that the work item is taken on in a future re-chartering. 

Thus, we accept your use-case, Tom, but it's going to be a major change to the Web to get it to work - a change that goes outside the scope of this API in its current form, but one that can be tackled by a larger effort around attestations of Javascript and possibly better key storage. If you can provide any pointers to possible solutions on the public Web Security IG, we'd be interested. 

http://lists.w3.org/Archives/Public/public-webappsec/2014Sep/0098.html
Comment 34 Tom Lowenthal 2014-09-25 22:01:20 UTC
To be clear, I don't think that no-extractable-keys solves the JS delivery quandary, or several other web security issues. However, this isn't the WG for solving JS delivery, only crypto primitives. I'm looking forward to lots of exciting pieces combining into one giant secure/trustworthy applications robot — including some other pieces which are much further from being finished.

To Mark's suggestion about this being future work, I remain unsure. I think that the sensible approach is to leave extractable keys as default-disabled until other mitigations can be added to make it safer to enable them.

I appreciate adding this as a use case Harry. I think that the most fruitful approach is to try to completely implement this use case — as far as this WG's work is able — while carefully noting what use case requirements this places on other WGs, and hoping that they solve those problems sensibly.
Comment 35 Mark Watson 2014-09-25 22:53:14 UTC
(In reply to Tom Lowenthal from comment #34)
> To be clear, I don't think that no-extractable-keys solves the JS delivery
> quandry, or several other web security issues. However, this isn't the WG
> for solving JS delivery, only crypto primitives. I'm looking forward to lots
> of exciting pieces combining into one giant secure/trustworth applications
> robot — including some other pieces which are much further from being
> finished.
> 
> To Mark's suggestion about this being future work, I remain unsure. I think
> that the sensible approach is to leave extractable keys as default-disabled
> until other mitigations can be added to make it safer to enable them.

When you say 'default-disabled', what exactly do you mean ? Are you suggesting we change the API ? If so, how ? If not, what would happen if a script tries to generate an extractable key ? 'Default' implies there is a way to trigger alternative behaviour. What would that be ? Just trying to make sure I have a full understanding.

That it might be 'safer' in future to enable them, based on other mitigations, implies there is some risk or attack that arises if they are enabled now. And that that risk or attack would be mitigated in the meantime by disabling them. What is that ?

> 
> I appreciate adding this as a use case Harry. I think that the most fruitful
> approach is to try to completely implement this use case — as far as this
> WG's work is able — while carefully noting what use case requirements this
> places on other WGs, and hoping that they solve those problems sensibly.

When you say 'this use case' what exactly do you mean ? So far, I understand that you see a class of use-cases with the following properties
1) The UA generates a key which a site can use, but it cannot extract
2) The User is aware that the UA will not release the key to the site
3) The User derives some security benefit or privacy assurance from this

Specifically, the user is assured by the UA that the site can only use the key on the user's computer whilst the user is visiting the site, rather than at some other place or time, and this assurance is of value to the user.

Is this right ?

It seems to me the assurance can only be of value to the user if they know what the key is being used for, right ?
Comment 36 virginie.galindo 2014-10-14 09:44:03 UTC
In order to progress towards the exit to Last Call for the Web Crypto API, the chair suggests the following resolution for this bug. 

Resolution: Bug RESOLVED as WONTFIX, based on the fact that the suggested feature/use case requires further analysis/work with respect to implementation and user trust. 

If no one objects before the 20th of Oct @ 20:00 UTC, this resolution will be endorsed.
Comment 37 Mark Watson 2014-10-22 21:16:37 UTC
Closing per Chair's resolution.