This is an archived snapshot of W3C's public bugzilla bug tracker, decommissioned in April 2019. Please see the home page for more details.

Bug 25972 - Please require a secure origin
Summary: Please require a secure origin
Status: RESOLVED MOVED
Alias: None
Product: Web Cryptography
Classification: Unclassified
Component: Web Cryptography API Document
Version: unspecified
Hardware: PC All
Importance: P2 normal
Target Milestone: ---
Assignee: Ryan Sleevi
QA Contact:
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2014-06-04 08:04 UTC by Anne
Modified: 2016-05-23 22:38 UTC
CC: 10 users

See Also:


Attachments

Description Anne 2014-06-04 08:04:16 UTC
As with service workers, implementations want to require a secure origin in order to get access to cryptographic functionality. We should make that a requirement in the specification so that implementations do not have to reverse engineer each other.

In particular, you want to refer to the origin of the entry settings object: http://www.whatwg.org/specs/web-apps/current-work/multipage/webappapis.html#entry-settings-object

Secure origin is defined by https://w3c.github.io/webappsec/specs/mixedcontent/

Ryan probably knows which flavor of secure origin is to be used here.
Comment 1 Boris Zbarsky 2014-06-04 13:01:40 UTC
Some implementations want to do this.  Not all.  

> Secure origin is defined by https://w3c.github.io/webappsec/specs/mixedcontent/

Fwiw, this doesn't match Blink's definition at <http://www.chromium.org/Home/chromium-security/security-faq#TOC-Which-origins-are-secure-> in various ways.  Are they willing to change their behavior?  Should this spec change?
Comment 2 Mike West 2014-06-04 13:11:27 UTC
1. The spec is an editor's draft that I posted just this week. As much as I love everything I wrote, let's not consider it normative just yet. :)

2. The wiki and the spec will be aligned as soon as everyone involved agrees on how that alignment should look. public-webappsec@ is probably the right place for that conversation.
Comment 3 Ryan Sleevi 2014-06-04 18:51:40 UTC
(In reply to Boris Zbarsky from comment #1)
> Some implementations want to do this.  Not all.  
> 

Right. And we want to find a suitable way of expressing that policy.

> > Secure origin is defined by https://w3c.github.io/webappsec/specs/mixedcontent/
> 
> Fwiw, this doesn't match Blink's definition at
> <http://www.chromium.org/Home/chromium-security/security-faq#TOC-Which-
> origins-are-secure-> in various ways.  Are they willing to change their
> behavior?  Should this spec change?

As Mike notes, the spec is super-drafty.

There's also a clear set of implementation-specific secure origin considerations. For example, Netflix described a scenario where resources were loaded over HTTP, except the scripts themselves were baked into the device. Does that represent a secure origin? (Maybe, maybe not - it depends on whether the HTTP origin is allowed to inject non-baked scripts into the execution context)

What about situations where a UA (device) may communicate with a server using an alternative, 'comparably' secure means? For example, consider a device with a host of http://foo.local that communicates with a series of other devices at http://bar.local and http://baz.local. The device-specific implementation establishes a secure tunnel between foo.local and bar/baz using 'out-of-band' means. Should this be considered a secure origin?

My inclination - and my response to Anne during the Blink intent to implement - is that I believe the spec already provides suitable flexibility for this. As noted, no algorithms are normative requirements - only the API surface is. Algorithms which are not implemented by a UA have a defined behaviour (raising "NotImplementedError"). I believe that UAs that wish to restrict the availability of Web Crypto to secure origins - or to restrict the set of operations (for example, perhaps only allowing digest() operations, but no keyed operations) - have a valid path for doing so.

I think the only question is what the spec would need to say regarding this - that the set of algorithms supported by a UA may differ based upon factors such as country of operation, local laws, configuration by administrator, origin of the document, optional components installed, etc.
Comment 4 Boris Zbarsky 2014-06-04 20:54:51 UTC
Ryan, the issue with just telling implementations to do whatever they want is that you don't get to interop that way.  So I think we _do_ want to define exactly where this API is available.

My personal preference is that it's available everywhere; I haven't seen particularly good arguments against doing that, honestly.  But just having the spec explicitly say that the API can either work or not, completely randomly from the point of view of authors, is really not OK.
Comment 5 Ryan Sleevi 2014-06-04 21:00:40 UTC
(In reply to Boris Zbarsky from comment #4)
> Ryan, the issue with just telling implementations to do whatever they want
> is that you don't get to interop that way.  So I think we _do_ want to
> define exactly where this API is available.

The spec already says that.

There are zero mandatory algorithms.

> 
> My personal preference is that it's available everywhere; I haven't seen
> particularly good arguments against doing that, honestly.  But just having
> the spec explicitly say that the API can either work or not, completely
> randomly from the point of view of authors, is really not OK.

There is no way to build a secure system on an insecure transport. Even the 'best case' example, provided by Netflix, requires bootstrapping trust either using a UA-specific means (named key discovery, among others), or first bootstrapping on TLS. Even when bootstrapped over TLS, the resultant API provides no meaningful security guarantees for the user - it's just an authentication mechanism.

Unlike exposing low-level cryptographic primitives, which CAN be combined in ways that grant REAL security (eg: the combination of CBC+HMAC+random IV in EtM, ala the McGrew AEAD), this is something which CAN NOT be used to provide meaningful security in isolation.
Comment 6 Boris Zbarsky 2014-06-04 21:12:22 UTC
> There are zero mandatory algorithms.

I think that's a problem!

> There is no way to build a secure system on an insecure transport.

I've seen you object to people making the same argument about some of the algorithms the spec defines.

I don't see why a webpage that's served over http shouldn't be allowed to verify signatures or compute hashes, frankly, even if we buy the argument that it shouldn't be allowed to do encryption/decryption (which I'm not sure I do).

Fundamentally, it seems like you have some set of use cases (Netflix's?) in mind but don't care about things that aren't in that set.  Or something.  I really can't tell what's behind this drive to disable this API except in some Google-specific set of cases.
Comment 7 Ryan Sleevi 2014-06-04 21:22:19 UTC
(In reply to Boris Zbarsky from comment #6)
> > There are zero mandatory algorithms.
> 
> I think that's a problem!

Then file a bug! And we can discuss (again) all of the technical, political, and legal reasons that make mandatory algorithms challenging. This is the same discussion that has been had repeatedly since the WG started, but if you feel the answers have not been satisfactory, you should file a bug with your concerns.

> 
> > There is no way to build a secure system on an insecure transport.
> 
> I've seen you object to people making the same argument about some of the
> algorithms the spec defines.

In the very post being replied to, I attempted to address this before you even raised it, by showing how the two views - insecure transport vs insecure primitive - are *not* similar, nor the same objection.

> 
> I don't see why a webpage that's served over http shouldn't be allowed to
> verify signatures or compute hashes, frankly, even if we buy the argument
> that it shouldn't be allowed to do encryption/decryption (which I'm not sure
> I do).

Digest is something whose security is contingent upon the use case, but for which low-risk/no-risk uses are possible. Of course, an actual use case for those (eg: in the non-security space) has yet to be established.

Verifying signatures, however, is something I do object to. Having signature verification go over HTTP is no different than file sharing sites listing the MD5/SHA1 right next to the HTTP download link to a file. You are not protected at all from an attacker who can modify the code and insist that any arbitrary file has a "valid" signature, or who replaces the signature in transit. Even if the signature is delivered via secure transport, if the code to verify the signature is not, then you're still in the same place. And I don't mean just sourcing the script over HTTPS - the entire execution environment has to be secure in order to securely validate a signature.

So, while in theory it sounds like "this is just a public key operation, no secrets here", the actual security goals of a "signature verification" system are immediately defeated by the mutability of the environment when it's an insecure environment.

> 
> Fundamentally, it seems like you have some set of use cases (Netflix's?) in
> mind but don't care about things that aren't in that set.  Or something.  I
> really can't tell what's behind this drive to disable this API except in
> some Google-specific set of cases.

Within the Chromium security team, we've discussed this API at length, and have examined its use cases (including those in the document) and their security expectations. In no case do we believe that real-world security expectations can be met, and we have seen this misused time and time again.

As with Service Worker - although for different reasons - we, as User Agent vendors, spec authors, and web developers, should stop the proliferation of insecure-by-default, particularly when the insecurity does nothing to actually permit building secure applications.

Again, this is *different* than the algorithm argument. Several algorithms noted as "insecure by themselves" CAN be combined into "secure" primitives, and are NECESSARY to support use cases - including those that we've documented. The INRIA security analysis notes this - that while many of the ciphers lack CCA security, it IS possible to build CCA security through the use of MACs.
Comment 8 Mark Watson 2014-06-04 22:49:21 UTC
Since Netflix was mentioned above ...

Our site is served over HTTP because this is necessary in order to access content files (which are also served over HTTP from CDNs) without triggering mixed-mode warnings. We use WebCrypto with our control protocol. Our security goals are relatively modest: for example we would like to keep our control protocol data secret from passive monitoring. There may be information of competitive value that could be obtained from widespread monitoring of our control traffic - and there is passive monitoring equipment widely deployed - but that value is much less than the cost of establishing a widespread active man-in-the-middle attack.

So, I agree with Boris that the API should be available everywhere. As repeatedly discussed, the API contains more than enough cryptographic rope for the non-expert to hang themselves with. Restricting to secure origins won't help with that. On the other hand, contrary to Ryan's assertion, there exist some modest security goals which can be achieved using WebCrypto on an insecure origin.
Comment 9 Boris Zbarsky 2014-06-05 00:53:16 UTC
> - are *not* similar, nor the same objection.

I think part of your problem is that you're confusing "secure origin" with "secure transport".

For example, an <iframe srcdoc="whatever" sandbox="allow-scripts"> in an https document will not be a "secure origin" (because it's a nonce origin), but is a "secure transport" (because the data came from the https page).  Why should this API be disabled in that situation?

The point being that all the definitions of "secure origin" I've seen people suggesting so far are so restrictive that they _do_ in fact cut out cases that can be secure.

> Having signature verification go over HTTP

Again, you're confusing origins and transports.  The two are not all that related on the web as it is nowadays, except in the simplest cases.

> the actual security goals of a "signature verification" system
> are immediately defeated by the mutability of the environment when it's an
> insecure environment.

If I have an https page that loads some API in a sandboxed iframe over https (sandboxed because I don't actually trust the API provider all that much) and then I want to prove to the API provider that I am actually me and postMessage stuff to it signed with my key, why should it not be able to perform client-side signature verification?  Which part of what I described is an "insecure environment" exactly?

> we should stop the proliferation of insecure-by-default

The rules you plan to adopt are _way_ more restrictive than that.
Comment 10 Ryan Sleevi 2014-06-05 00:57:17 UTC
(In reply to Boris Zbarsky from comment #9)
> > - are *not* similar, nor the same objection.
> 
> I think part of your problem is that you're confusing "secure origin" with
> "secure transport".
> 
> For example, an <iframe srcdoc="whatever" sandbox="allow-scripts"> in an
> https document will not be a "secure origin" (because it's a nonce origin),
> but is a "secure transport" (because the data came from the https page). 
> Why should this API be disabled in that situation?
> 
> The point being that all the definitions of "secure origin" I've seen people
> suggesting so far are so restrictive that they _do_ in fact cut out cases
> that can be secure.
> 
> > Having signature verification go over HTTP
> 
> Again, you're confusing origins and transports.  The two are not all that
> related on the web as it is nowadays, except in the simplest cases.
> 
> > the actual security goals of a "signature verification" system
> > are immediately defeated by the mutability of the environment when it's an
> > insecure environment.
> 
> If I have an https page that loads some API in a sandboxed iframe over https
> (sandboxed because I don't actually trust the API provider all that much)
> and then I want to prove to the API provider that I am actually me and
> postMessage stuff to it signed with my key, why should it not be able to
> perform client-side signature verification?  Which part of what I described
> is an "insecure environment" exactly?
> 
> > we should stop the proliferation of insecure-by-default
> 
> The rules you plan to adopt are _way_ more restrictive than that.

Boris,

Thanks for the color details. It's not clear from your response, and with respect to Comment 1, whether you agree (in spirit) with "Secure Transport", but disagree with "Secure Origin" (as proposed), or if you disagree with both.

Your objections here focused on the definition of Secure Origin - which is a problem that we can and should engage in - but I got the feeling that on a more basic level, you disagreed with a requirement for a Secure Transport. Is that correct?
Comment 11 Boris Zbarsky 2014-06-05 01:01:47 UTC
1) I think the secure origin definitions we have right now are way too restrictive no matter how you slice it.

2) I strongly suspect, though I have not performed exhaustive analysis to prove this, that there are parts of the SubtleCrypto for which the secure transport requirement is too restrictive.  I further believe that it's very hard to define "secure transport".  Is data: a secure transport?  javascript:?  It sort of depends... just like http:// can be sometimes, depending on various things as you noted.

3) I think having something this basic not interoperable across UAs is a really bad idea, so whatever it is we do here we should aim for agreement across UAs and then actually specify that agreement, not just have them ship incompatible things.
Comment 12 Ryan Sleevi 2014-06-05 01:26:15 UTC
(In reply to Boris Zbarsky from comment #11)
> 1) I think the secure origin definitions we have right now are way too
> restrictive no matter how you slice it.

OK. This is a (presumably) solvable problem, if we wish to engage with it - although I presume we really mean WebAppSec, as this WG is not qualified to produce that definition.

> 
> 2) I strongly suspect, though I have not performed exhaustive analysis to
> prove this, that there are parts of the SubtleCrypto for which the secure
> transport requirement is too restrictive.  I further believe that it's very
> hard to define "secure transport".  Is data: a secure transport? 
> javascript:?  It sort of depends... just like http:// can be sometimes,
> depending on various things as you noted.

So really, this is two things that I think we should treat separately.

1) What is an insecure transport (which is, in many ways, revisiting the first point)

2) Can you achieve meaningful security over an insecure transport?

I suspect that you might disagree with how I've phrased (2). An alternative way would be "Can you achieve something useful over an insecure transport", but I purposely avoided posing it like that, because I think it misses the point - that is, the most useful systems are the least secure, and the most secure systems are often the least useful (eg: being only suitable for a single task, no networks, etc). Thus the question should not be phrased in terms of >0 "utility", but whether or not there is ">0 security", especially given this is a cryptographic API.

> 
> 3) I think having something this basic not interoperable across UAs is a
> really bad idea, so whatever it is we do here we should aim for agreement
> across UAs and then actually specify that agreement, not just have them ship
> incompatible things.

I think I'd disagree with how dire things are, or at the least, with what represents incompatibility/interoperability.

To continue the discussion regarding interop better, I've opened https://www.w3.org/Bugs/Public/show_bug.cgi?id=25985
Comment 13 Mike West 2014-08-22 09:43:21 UTC
Started a thread to define the concept in more detail at http://lists.w3.org/Archives/Public/public-webappsec/2014Aug/0107.html (based on a strawman at https://w3c.github.io/webappsec/specs/mixedcontent/#authenticated-origin)
Comment 14 Henri Sivonen 2014-09-10 09:38:03 UTC
From the Gecko Intent to Ship thread:
https://groups.google.com/d/msg/mozilla.dev.platform/bwC_Srw12CM/ReD8hQ1EsDsJ
Comment 15 Ehsan Akhgari [:ehsan] 2014-10-22 16:50:09 UTC
(In reply to Henri Sivonen from comment #14)
> From the Gecko Intent to Ship thread:
> https://groups.google.com/d/msg/mozilla.dev.platform/bwC_Srw12CM/ReD8hQ1EsDsJ

Ryan, I'm curious to know what your take is on the last paragraph of Henri's post?
Comment 16 Ryan Sleevi 2014-10-22 17:59:06 UTC
(In reply to Ehsan Akhgari [:ehsan] from comment #15)
> (In reply to Henri Sivonen from comment #14)
> > From the Gecko Intent to Ship thread:
> > https://groups.google.com/d/msg/mozilla.dev.platform/bwC_Srw12CM/ReD8hQ1EsDsJ
> 
> Ryan, I'm curious to know what your take is on the last paragraph of Henri's
> post?

You're talking about the post with the paragraph "As for making new features unavailable without TLS in order to promote the use of TLS", right?

I disagree with Henri's explanations for the motivations, or the analysis of risk, with providing the API to secure origins only.

There are two dimensions of discussion captured on this bug:
1) What is a secure origin / can we find a necessary and sufficient definition?
2) Whether there is sufficient risk / sufficient benefit to exposing this API to unauthenticated origins

For 1, I'm not going to spend much time discussing it here. Boris has already captured some of the nuance, and I think the conversation is rightfully continuing on in the context of WebAppSec for discussions of things like Mixed Content (and how things like Blob URLs behave, service workers, sandboxed iframes, etc)

For 2, the issue is not and has never been about "promoting TLS" as an ideological point. It's about a belief, from careful evaluation of the security properties of this API, that there is no possible net-positive impact of this API surface without some form of code authentication. That is, we would have zero interest in implementing this API if it was normatively required to be available to unauthenticated origins, because this API would provide zero benefit to the web platform, while introducing significant complexity for implementors. The only benefits from this API are realized when delivered over authenticated transports to authenticated origins.

Our principle is that we want new APIs to be secure by default. Having the UA ultimately place that decision in the author's hands by saying "caveat author", and providing no guidance, continues the trend of insecure by default, where authors can and will write applications that attempt to use security features insecurely. It's correct that the web platform is vast and, for many reasons, there are still lots of ways to shoot yourself in the foot. It's also true that even with this API, there are ways to shoot yourself in the foot (for example, unauthenticated encryption). However, for a new API with zero deployment, and with very real risks, it seems we have a unique opportunity to pursue "secure by default".

If this analysis is wrong, which we don't believe it is, it's something we can always revisit. We can expand upon the definition of what is an authenticated origin if we find we're too restrictive. We can potentially expose operations that have "no" security surface (as some claim hashing does not, although that's highly debatable). But if we ship it all by default, that's a decision that can never be revisited. Based on the many uninformed discussions that have happened in this WG regarding the security properties of the Web, we believe the risk is real enough, and the value great enough, to promote "secure by default".

Again, this has nothing to do with "requiring TLS because TLS is great". It's about taking a measured analysis of the intended use cases for this API, taking into account public usage of this API, and making a documented part of the web platform work 'securely', to the best of our ability, by default.
Comment 17 Mark Watson 2014-10-22 18:22:17 UTC
(In reply to Ryan Sleevi from comment #16)
> 
> For 2, the issue is not and has never been about "promoting TLS" as an
> ideological point. It's about a belief, from careful evaluation of the
> security properties of this API, that there is no possible net-positive
> impact of this API surface without some form of code authentication. That
> is, we would have zero interest in implementing this API if it was
> normatively required to be available to unauthenticated origins, because
> this API would provide zero benefit to the web platform, while introducing
> significant complexity for implementors. The only benefits from this API are
> realized when delivered over authenticated transports to authenticated
> origins.

The thing is, I have explained many times in this group why the above is not true. For example, the ability to provide confidentiality against passive monitoring is a net-positive, even if confidentiality is not provided against active attackers.

There are further net-positives available on a TOFU (trust on first use) basis even in the face of active attackers, as I have explained.

You might not value these particular security benefits, for they are indeed modest, and that is fine. You might argue that those benefits are so modest that they should not be provided. But it is not true that there is no possible benefit.

Anyway, I think the point you were being asked to address was the fact that an HTTPS restriction for WebCrypto is easy to work around with an iframe - and indeed this is what we are doing in Chrome in the field today at some scale.
Comment 18 Ehsan Akhgari [:ehsan] 2014-10-22 18:25:39 UTC
(In reply to Ryan Sleevi from comment #16)
> (In reply to Ehsan Akhgari [:ehsan] from comment #15)
> > (In reply to Henri Sivonen from comment #14)
> > > From the Gecko Intent to Ship thread:
> > > https://groups.google.com/d/msg/mozilla.dev.platform/bwC_Srw12CM/ReD8hQ1EsDsJ
> > 
> > Ryan, I'm curious to know what your take is on the last paragraph of Henri's
> > post?
> 
> You're talking about the post with the paragraph "As for making new features
> unavailable without TLS in order to promote  the use of TLS", right?

Yes.

> I disagree with Henri's explanations for the motivations, or the analysis of
> risk, with providing the API to secure origins only.
> 
> There are two dimensions of discussion captured on this bug:
> 1) What is a secure origin/Can we find a necessary sufficient definition? 
> 2) Whether there is sufficient risk / sufficient benefit to exposing this
> API to unauthenticated origins
> 
> For 1, I'm not going to spend much time discussing it here. Boris has
> already captured some of the nuance, and I think the conversation is
> rightfully continuing on in the context of WebAppSec for discussions of
> things like Mixed Content (and how things like Blob URLs behave, service
> workers, sandboxed iframes, etc)

Fair enough.

> For 2, the issue is not and has never been about "promoting TLS" as an
> ideological point.

FWIW that is not what I was suggesting at all, and I don't believe you're arguing for that either.

> It's about a belief, from careful evaluation of the
> security properties of this API, that there is no possible net-positive
> impact of this API surface without some form of code authentication. That
> is, we would have zero interest in implementing this API if it was
> normatively required to be available to unauthenticated origins, because
> this API would provide zero benefit to the web platform, while introducing
> significant complexity for implementors. The only benefits from this API are
> realized when delivered over authenticated transports to authenticated
> origins.
> 
> Our principle is that we want new APIs to be secure by default. Having the UA
> ultimately place that decision in the author's hands by saying "caveat
> author", and providing no guidance, continues the trend of insecure by
> default, where authors can and will write applications that attempt to use
> security features insecurely. It's correct that the web platform is vast
> and, for many reasons, there are still lots of ways to shoot yourself in the
> foot. It's also true that even with this API, there are ways to shoot
> yourself in the foot (for example, unauthenticated encryption). However, for
> a new API with zero deployment, and with very real risks, it seems we have a
> unique opportunity to pursue "secure by default".

I agree with all of the above in the abstract, but my specific point was about Henri's *last* paragraph, which I don't think you addressed in comment 16.  Here's the scenario again:

Assuming that we require a secure origin for whether or not an app can access this API, the developer of example.com which is hosted on a non-secure HTTP server can embed an iframe to a page coming from a secure origin, and postMessage() the data that it, for example, wants to encrypt to the iframe and receive the results back.

IOW, it seems to me that restricting the exposure of this API to secure origins doesn't actually accomplish what you're going for here.

> If this analysis is wrong, which we don't believe it is, it's something we
> can always revisit. We can expand upon the definition of what is an
> authenticated origin if we find we're too restrictive. We can potentially
> expose operations that have "no" security surface (as some claim hashing
> does not, although that's highly debatable).

I may have missed that discussion, do you mind pointing me to some explanation on why hashing would be insecure if the document has not been transported securely?

> But if we ship it all by
> default, that's a decision that can never be revisited. Based on the many
> uninformed discussions that have happened in this WG regarding the security
> properties of the Web, we believe the risk is real enough that the value is
> great enough to promote "secure by default".

It's true that shipping something later is easier than unshipping something, but there's also the interoperability concern, which I think is reason enough to try to come to an agreement before shipping incompatible implementations, as Boris already suggested.
Comment 19 Ryan Sleevi 2014-10-22 18:40:51 UTC
(In reply to Ehsan Akhgari [:ehsan] from comment #18)
> > For 2, the issue is not and has never been about "promoting TLS" as an
> > ideological point.
> 
> FWIW that is not what I was suggesting at all, and I don't believe you're
> arguing for that either.

Correct, but it was indeed mentioned in the Firefox post - "As for making new features unavailable without TLS in order to promote the use of TLS,"

> IOW, it seems to me that restricting the exposure of this API to secure
> origins doesn't actually accomplish what you're going for here.

No, I felt that I did address this, but since you missed it, I'll try restating. The goal is to be secure by default, which we believe has >0 value. Mark's analysis is one we fundamentally disagree with, and so I'm not going to spend much time trying to explain why it's a poor security model.

Yes, it's correct that one can do a lot of things to smuggle the information across origins. However, that can equally be said of other web platform features - from geolocation to microphone access. That is, two origins, acting in concert, can compromise or undermine many of the security boundaries that UAs enforce. That doesn't mean there isn't value in recognizing or attempting to make such separations, however - they do provide value.

Consider geolocation, which is granted on a per-origin basis. Nothing prevents there being an evil.com site, which accesses the user's location, and allows any arbitrary origin to iframe it and inquire as to the user's location. The user will never know that anotherevilsite.com or hostile.com also have access to the users location (by way of iframing). Yet we still recognize there being value in per-origin prompts.

> 
> It's true that shipping something later is easier than unshipping something,
> but there's also the interoperability concern, which I think is reason
> enough to try to come to an agreement before shipping incompatible
> implementations, as Boris already suggested.

Agreed. Which is why we're encouraging Firefox to adopt conservatism, so that secure by default can still be attainable.

That said, we believe the security risks are real enough, and examples such as those provided by the WG members are so demonstrably and clearly insecure, that the value of encouraging secure by default outweighs the interoperability concern.
Comment 20 Ehsan Akhgari [:ehsan] 2014-10-22 20:46:29 UTC
(In reply to Ryan Sleevi from comment #19)
> (In reply to Ehsan Akhgari [:ehsan] from comment #18)
> > > For 2, the issue is not and has never been about "promoting TLS" as an
> > > ideological point.
> > 
> > FWIW that is not what I was suggesting at all, and I don't believe you're
> > arguing for that either.
> 
> Correct, but it was indeed mentioned in the Firefox post - "As for making
> new features unavailable without TLS in order to promote the use of TLS,"

If I understand correctly, Henri was just enumerating the possible reasons to tie this API to secure origins.  At any rate, I'm not particularly interested in debating that aspect, especially since you're not advocating it.  :-)

> > IOW, it seems to me that restricting the exposure of this API to secure
> > origins doesn't actually accomplish what you're going for here.
> 
> No, I felt that I did address this, but since you missed it, I'll try
> restating. The goal is to be secure by default, which we believe there to be
> a >0 value. Mark's analysis is one we fundamentally disagree with, and so
> I'm not going to spend much time trying to explain why it's a poor security
> model.
> 
> Yes, it's correct that one can do a lot of things to smuggle information
> across origins. However, the same can equally be said of other web platform
> features - from geolocation to microphone access. That is, two origins,
> acting in concert, can compromise or undermine many of the security
> boundaries that UAs enforce. That doesn't mean there isn't value in
> recognizing or attempting to maintain such separations, however.

I think there is an important difference between microphone access from getUserMedia and WebCrypto, in that the former gives you an object that you cannot postMessage to another frame, whereas WebCrypto just returns raw data that can be transferred through postMessage verbatim.  I think that for geolocation too, merely restricting the API to secure origins won't make a huge difference because of the exact same problem.  But I personally think that the right solution for geolocation is to restrict it to documents loaded from a secure origin in a top-level browsing context, and merely restricting it to secure origin doesn't buy us much.  (But of course, there are backwards compat concerns with geolocation, unfortunately.)

But more to the point, I think we need to also remember that most of the features of WebCrypto (I think perhaps all except for key storage) are implementable in pure JS.  Such JS implementations, if served over non-secure origins, are subject to all of the concerns you raise here, just like WebCrypto.  However, providing WebCrypto to such origins means that the application can at least rely on secure key storage, which is a win over the status quo on the Web.

These two points being considered together is what makes me hesitate to agree that this API should be restricted to secure origins.

If I'm reading your comments correctly, you don't agree that the first point matters because you can already share information across secure and non-secure origins.  But I really don't think the "secure by default" argument applies here, since I think it's too easy here to bypass the security using the iframe technique.

It's not clear to me what your position is on the second issue.

> Consider geolocation, which is granted on a per-origin basis. Nothing
> prevents an evil.com site from accessing the user's location and allowing
> any arbitrary origin to iframe it and query the user's location. The user
> will never know that anotherevilsite.com or hostile.com also have access
> to the user's location (by way of iframing). Yet we still recognize there
> being value in per-origin prompts.

I actually think that's a terrible model to follow here.  That to me is a big mistake that we have made, and I'd rather not replicate it in other Web platform features.  For all intents and purposes, if an iframe inside a page wants to access geolocation, we need to assume the worst and tie whatever prompts or restrictions on access to the API/data to the user-visible page, which is the one loaded in the top-level browsing context.  iframes don't really map to anything meaningful as far as the user is concerned.

> > It's true that shipping something later is easier than unshipping something,
> > but there's also the interoperability concern, which I think is reason
> > enough to try to come to an agreement before shipping incompatible
> > implementations, as Boris already suggested.
> 
> Agreed. Which is why we're encouraging Firefox to adopt conversatism, so
> that secure by default can still be attainable.

I'm all for conservatism, but I'm trying to understand the trade-offs specific to this issue.  To me, restricting WebCrypto:

1) Provides the false sense of security while ignoring the iframe issue, which makes the security benefits negligible.
2) Takes away the opportunity for non-TLS content to have some security provided by the WebCrypto API over their existing solution (which is implementing the crypto in JS on the client.)

I'm actually quite worried about #2 as well.

> That said, we believe the security risks are real enough, and examples such
> as those provided by the WG members are so demonstrably and clearly
> insecure, that the value of encouraging secure by default outweighs the
> interoperability concern.

That's unfortunate to hear.  I do hope that we both are still ready to be convinced otherwise.  I for one am quite open to change my mind here if you help me understand why your proposal provides a better trade-off on the two points above.
Comment 21 Ryan Sleevi 2014-10-22 21:08:27 UTC
(In reply to Ehsan Akhgari [:ehsan] from comment #20)
> 
> I think there is an important difference between microphone access from
> getUserMedia and WebCrypto, in that the former gives you an object that you
> cannot postMessage to another frame, whereas WebCrypto just returns raw data
> that can be transferred through postMessage verbatim.  I think that for
> geolocation too, merely restricting the API to secure origins won't make a
> huge difference because of the exact same problem.  But I personally think
> that the right solution for geolocation is to restrict it to documents
> loaded from a secure origin in a top-level browsing context, and merely
> restricting it to secure origin doesn't buy us much.  (But of course, there
> are backwards compat concerns with geolocation, unfortunately.)

I don't think the microphone/getUserMedia comparison you make is meaningful for a discussion of security, precisely because I can still smuggle via postMessage the ArrayBuffers that result from MSE (or, potentially, the future Streams API). So it has the same effect.

> But more to the point, I think we need to also remember that most of the
> features of WebCrypto (I think perhaps all except for key storage) are
> implementable in pure JS.  Such JS implementations, if served through
> non-secure origins, are prone to all of your concerns here similar to
> WebCrypto.  However, providing WebCrypto for such origins means that the
> application can at least rely on secure key storage, which is a win over the
> status quo on the Web.

We'll have to disagree there. The entire point is that, without an authenticated origin, you *cannot* have secure key storage for any meaningful definition of secure. An attacker in a privileged position can

a) postMessage the Key object (structured clonable) to an origin of their choosing for later use at a time of their choice, able to fully decrypt or forge any messages
b) force the UA into a downlevel form such that it re-generates the key in an insecure way (this is the problem with the "TOFU" model, in that it trivially falls apart)

For an *unauthenticated* origin, everything WebCrypto provides can be met via polyfill, which is what I mentioned in my previous message. It's precisely because of this that it's entirely uninteresting to implement (as an unnecessary surface of the web platform).

Note that I'm also ignoring the other implementation issues that exist in a number of Web Crypto implementations, which grant network-level attackers the ability to influence the cryptographic decisions of secure origins. That's a separate security issue on its own, and one that TLS does not fully mitigate, other than narrowing the threat from _any_ attacker on the wire to an origin the user has explicitly navigated to.

> If I'm reading your comments correctly, you don't agree that the first point
> matters because you can already share information across secure and
> non-secure origins.  But I really don't think the "secure by default"
> argument applies here, since I think it's too easy here to bypass the
> security using the iframe technique.

The goal is not perfect security, and hasn't been. Again, the question isn't "Is it impossible to subvert these goals" (for which an extensible Web Platform will always offer enough rope to subvert any security goals of a UA), but whether it's "secure by default". The fact that you cannot trivially use broken crypto in an unauthenticated origin is unquestionably "more secure" than just allowing it, even if the end result is still achievable through careful (insecure) machinations.

> I'm all for conservatism, but I'm trying to understand the trade-offs
> specific to this issue.  To me, restricting WebCrypto:
> 
> 1) Provides the false sense of security while ignoring the iframe issue,
> which makes the security benefits negligible.

Again, the goal is to provide meaningful guidance, as part of the platform, on what a reasonable security baseline is. Unauthenticated code performing cryptographic operations will always be shown insecure by trivial inspection from any skilled practitioner.

To attempt to phrase it differently, is there any meaningful security that can be had on unauthenticated origins? Despite Mark's claims to the contrary, we feel the answer is resoundingly "No". This alone is the point - regardless of whether it's possible to do something insecure via an authenticated connection, it's impossible to do something secure via an unauthenticated connection.

> 2) Takes away the opportunity for non-TLS content to have some security
> provided by the WebCrypto API over their existing solution (which is
> implementing the crypto in JS on the client.)

Again, the claim is that there is no security value over unauthenticated transports.
Comment 22 Mark Watson 2014-10-22 23:24:26 UTC
(In reply to Ryan Sleevi from comment #21)
> (In reply to Ehsan Akhgari [:ehsan] from comment #20)

> We'll have to disagree there. The entire point is that, without an
> authenticated origin, you *cannot* have secure key storage for any
> meaningful definition of secure. An attacker in a privileged position can
> 
> a) postMessage the Key object (structured clonable) to an origin of their
> choosing for later use at a time of their choice, able to fully decrypt or
> forge any messages
> b) force the UA into a downlevel form such that it re-generates the key in
> an insecure way (this is the problem with the "TOFU" model, in that it
> trivially falls apart)
> 
> For an *unauthenticated* origin, everything WebCrypto provides can be met
> via polyfill, which is what I mentioned in my previous message. It's
> precisely because of this that it's entirely uninteresting to implement (as
> an unnecessary surface of the web platform).
> 

I agree that if the goal is for something to be "secure" in a meaningful and unqualified sense, then an authenticated origin is needed.

But what if your goals are more modest than that? What if confidentiality against passive monitoring is of value to you, even without confidentiality against active attackers?

I get that your opinion is that such a limited form of confidentiality is without value. But that opinion rests on assumptions about the likely attackers and the value of the information being kept confidential. Those assumptions may not hold for all use cases, and you should not impose them on others.
Comment 23 Mark Watson 2016-05-23 22:38:22 UTC
Moved to https://github.com/w3c/webcrypto/issues/28