This is an archived snapshot of W3C's public bugzilla bug tracker, decommissioned in April 2019.

Bug 25985 - WebCrypto should be inter-operable
Summary: WebCrypto should be inter-operable
Status: RESOLVED LATER
Alias: None
Product: Web Cryptography
Classification: Unclassified
Component: Web Cryptography API Document
Version: unspecified
Hardware: All
OS: All
Importance: P2 normal
Target Milestone: ---
Assignee: Ryan Sleevi
Reported: 2014-06-05 01:25 UTC by Ryan Sleevi
Modified: 2014-11-20 00:16 UTC
CC List: 8 users

Description Ryan Sleevi 2014-06-05 01:25:28 UTC
As stated on https://www.w3.org/Bugs/Public/show_bug.cgi?id=25972#c6

"Boris Zbarsky:
> There are zero mandatory algorithms.

I think that's a problem!"

and https://www.w3.org/Bugs/Public/show_bug.cgi?id=25972#c11

"Boris Zbarsky:

3) I think having something this basic not interoperable across UAs is a really bad idea, so whatever it is we do here we should aim for agreement across UAs and then actually specify that agreement, not just have them ship incompatible things."
Comment 1 Ryan Sleevi 2014-06-05 01:40:10 UTC
There are types of interoperability that WebCrypto has tried to capture normatively:

- Defining, explicitly, the structure of data inputs and outputs. This may seem pedantic, but real-world APIs have expected inputs in a variety of forms, causing application developers all sorts of pain. An oft-surprising example is that the inputs to CryptoAPI (Windows' cryptographic services) for signature verification are expected to be byte-reversed from the standard, on-the-wire form (and the form that virtually every other cryptographic API expects).

- Providing consistent naming for each algorithm, as well as defining what is configurable within that algorithm.

- Defining the structure of CryptoKeys and how they're serialized/deserialized into different formats.
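To make these points concrete, a minimal sketch assuming the promise-based form of the API (the algorithm and parameters are purely illustrative):

    crypto.subtle.generateKey(
        { name: "RSASSA-PKCS1-v1_5",
          modulusLength: 2048,
          publicExponent: new Uint8Array([0x01, 0x00, 0x01]),   // 65537
          hash: { name: "SHA-256" } },
        true,                       // extractable
        ["sign", "verify"]
    ).then(function (keyPair) {
        // The spec pins down exactly what "spki" export means: an ArrayBuffer
        // holding a DER-encoded SubjectPublicKeyInfo, byte-for-byte the same
        // across conforming implementations.
        return crypto.subtle.exportKey("spki", keyPair.publicKey);
    }).then(function (spki) {
        return crypto.subtle.importKey(
            "spki", spki,
            { name: "RSASSA-PKCS1-v1_5", hash: { name: "SHA-256" } },
            true, ["verify"]);
    });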


However, there are types of interoperability that are more problematic. Cryptography is an extremely special case, for a variety of reasons. Some of them include:

Regulatory - The export of cryptography requires special licensing/approval from many governments.

Legal - The use of certain algorithms, key sizes, or 'purposes' are restricted, by force of law, in some jurisdictions. Likewise, the use of some algorithms with some purposes may be encumbered within some jurisdictions.

Political - Likewise, there are some algorithms that are mandated on a national level that, within the cryptographic community and within implementations, are not generally present outside that country (eg: SEED, GOST)

Administrative - Most cryptographic libraries allow administrative control over the set of algorithms permitted to be used. This may be required for compliance within a particular sector (eg: FIPS 140-2, PCI DSS) or may be motivated by an organization's security posture.

Practical - Historically speaking, user agents do not ship cryptography themselves. They interface with some cryptographic library - either one provided by the system (eg: IE, Safari on Windows, Safari on OS X) or through a 'third-party' library (eg: Firefox, Chrome, Opera, the GTK WebKit port). If you're surprised to see Firefox on this list - it's because Mozilla distributes a baseline module, but allows distros to remove that module, to point it at a system module, or, in some cases (Firefox in FIPS mode), encourages people to replace that module themselves.

This is also similar to browsers' existing TLS stacks - administrators can (and do) change or disable the ciphersuites used to negotiate with a server. Even the SSL "mandatory to implement" cipher may be disabled.

Thus, when it comes to cryptographic algorithms, there are a variety of reasons why, even if we can all agree on what to call it and how it behaves, there is not an intrinsic guarantee that something will be available.

With a normative, mandatory to implement suite, user agents in these circumstances are faced with a choice - enable all of the feature or none of it. Since they can't enable all of the feature in a variety of circumstances (note that NSS, the basis for Firefox, is still missing support for a number of the algorithms documented in here that are already present on Windows and Safari), the interoperable choice would be enabling none. However, that would be akin to tossing the baby out with the bathwater.

It also presents an unfortunate situation for the world going forward: as has been noted, algorithms only get weaker over time. As more and more users disable weak algorithms, user agents would be forced to remove *all* cryptographic capabilities (or wait for the WG to standardize a new version - which would, in effect, make that algorithm 'optional' by virtue of which "version" of the spec a UA has implemented).

At the end of the day, this does place the burden on the application developer to either design their system to work with a series of algorithms that may offer comparable security strength (in the hope one is supported), or to only support user agents that are configured to support their necessary suite. A given web application will not, generally speaking, require ALL of the algorithms ALL of the time - and may not require some of them EVER.


If we accept this, then the next question is whether the User Agent can allow users to enable/disable algorithms directly (eg: within the UA, without disabling them at the OS or cryptographic library level). If we accept that, then we get to the point where UAs may allow the user to automatically disable certain algorithms under certain conditions - it's simply automating a manual task.

If we can get to that point, then we're at a point where the choice of UA itself can imply the users' choice to automatically disable certain algorithms under certain conditions. For example, the UA may market itself as a "secure" UA, or at least a UA whose focus is on security, and thus the use of algorithms under insecure conditions would be a violation of the UA's focus.
Comment 2 Boris Zbarsky 2014-06-05 01:47:54 UTC
Are we in a position where we can require that at least one of certain subsets of algorithms be supported?  Right now a browser could ship the crypto API, implement no algorithms at all, and claim compliance.  That's clearly not desirable.
Comment 3 Ryan Sleevi 2014-06-05 01:58:50 UTC
(In reply to Boris Zbarsky from comment #2)
> Are we in a position where we can require that at least one of certain
> subsets of algorithms be supported?  Right now a browser could ship the
> crypto API, implement no algorithms at all, and claim compliance.  That's
> clearly not desirable.

I agree, not desirable, but I don't see how, given the situation I described, requiring algorithms can possibly be implemented.

Consider a hypothetical world where we require algorithms X, Y, and Z. Let's say that the user of a particular user agent decides that Z is unacceptably insecure, because they believe a government they do not trust backdoored the design of Z. Thus, they wish to disable Z. Are they prevented now from using X and Y - even when they're perfectly secure?

If that sounds hypothetical, it's not. Z, in this case, is ECC with the NIST curves. So let's say we spec'd "NIST curves and Curve25519 and Brainpool" (Bug 25839). Now users within the US government are prohibited from using Curve25519/Brainpool, so they wish to disable those curves. Are they too now prevented from using WebCrypto?

Let's look at how this applies to other specifications though. Is it against interoperability if a user agent doesn't implement <img> support? Or allows the users to disable images? What about Javascript? Cookies? Location APIs?

Is it against interoperability if a (generic, desktop) UA that supports the location API executes on a device that does not have access to location information? Or that the location information may be mediated by the OS, and does not provide signals as to when it doesn't have a location?

Similarly, if a UA restricts access to audio/video access (via getUserMedia()), is it against interoperability?

The API has been structured to provide normative requirements for the 'shape' of the API (the bindings, inputs, outputs), and normative requirements within an algorithm (to the degree possible), but treats each 'algorithm' as if it were one of these device capabilities - a webcam, a microphone, a location, image support, etc - because within the realm of (political, legal, administrative, regulatory) spheres, that's how they're treated: distinct, independent parts.
Comment 4 Boris Zbarsky 2014-06-05 02:20:55 UTC
> Consider a hypothetical world where we require algorithms X, Y, and Z.

I understand that this world is not reasonable.

Can we have a world where we require at least one of X, Y, and Z?  And if we can have requirements where if you implement less-secure X you also need to implement more-secure Y, that's good too.  I understand how this last may not be possible, though, given the various legal issues.

> Is it against interoperability if a user agent doesn't implement <img> support? 

It depends.  HTML defines several different conformance profiles which have slightly different requirements on UAs, and also has certain behaviors that are in fact optional.  For this particular case, a UA is allowed to not show images, but in that case it's required to do certain other things (e.g. show the alt text).

> What about Javascript? Cookies? Location APIs?

UAs are allowed to let the user do whatever they want, including the user explicitly instructing the UA to violate the spec.  That's what makes UAs _user_ agents.

The default UA configuration, however, for the "web browser" HTML conformance class, is expected to have JavaScript and cookies enabled.  Also location APIs, though whether to expose location information to pages is subject to user control, of course.  On the other hand, the "mail reader" HTML conformance class is not expected to have JavaScript enabled.

But the number of these conformance classes in HTML is finite and small, and the general aim is to minimize the number of conformance classes and possible different behaviors.

> Is it against interoperability if a (generic, desktop) UA that supports the
> location API executes on a device that does not have access to location
> information?

Obviously, yes, since the API won't actually work.  ;)

Whether such interop problems are _avoidable_ is a different question, of course.  Sometimes they're not.

> Similarly, if a UA restricts access to audio/video access (via
> getUserMedia()), is it against interoperability?

The spec for getUserMedia explicitly allows the UA to restrict access based on the user's decision here, for obvious reasons.   Yes, this means a page might not work as the page author intended if the user decides to not let it.

In practice, most modern consumer hardware (e.g. most laptops and pretty much every single phone and tablet) has a webcam, microphone, location support, etc.  And people who try to use such things understand when they might be missing and why.  That's a lot less obvious to me with algorithms.  Are we expecting most UAs to actually ship overlapping algorithm sets, for example, or disjoint ones?
Comment 5 Ryan Sleevi 2014-06-05 03:46:02 UTC
(In reply to Boris Zbarsky from comment #4)
> > Consider a hypothetical world where we require algorithms X, Y, and Z.
> 
> I understand that this world is not reasonable.
> 
> Can we have a world where we require at least one of X, Y, and Z?  And if we
> can have requirements where if you implement less-secure X you also need to
> implement more-secure Y, that's good too.  I understand how this last may
> not be possible, though, given the various legal issues.

Let's use a concrete example here. X is RSASSA-PKCS1-v1_5. It's popular (still used for most X.509 certificates, for example), but as noted by INRIA, lacks a proof, and is a bit long in the tooth. The replacement, RSA-PSS, is recommended for new systems.
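For reference, a hedged sketch of what that choice looks like at the call site (promise-based API; the key parameters and salt length are illustrative) - whether the promise resolves or rejects is precisely the per-algorithm availability question:

    var data = new TextEncoder().encode("message to sign");
    crypto.subtle.generateKey(
        { name: "RSA-PSS", modulusLength: 2048,
          publicExponent: new Uint8Array([0x01, 0x00, 0x01]),
          hash: { name: "SHA-256" } },
        false, ["sign", "verify"]
    ).then(function (keyPair) {
        // RSA-PSS takes a per-signature saltLength; RSASSA-PKCS1-v1_5 takes no
        // extra parameters, eg: crypto.subtle.sign({ name: "RSASSA-PKCS1-v1_5" }, key, data)
        return crypto.subtle.sign({ name: "RSA-PSS", saltLength: 32 },
                                  keyPair.privateKey, data);
    });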

I believe that Microsoft and Apple support RSA-PSS. NSS does and doesn't ("it's complicated"). Fixable, but this will only apply to users running the latest NSS - and distros are notoriously slow to update system NSS (because it's a security-sensitive package)

The same was true for RSA-OAEP support, a secure (barring the implementation bugs that every open source implementation has seemed to have) means of using RSA for encryption - which, again, Apple and Microsoft support. When I added support to NSS, to put it as opaquely as possible, I needed to have a number of non-technical discussions with decidedly non-engineers, even though the implementation was relatively straightforward. I would expect that Mozilla engineers are going to need to have those *exact same* discussions with their versions of the exact same people, prior to integrating support within Firefox. These conversations take time and cost money, of the billable hours sort.

And, yet again, it only applies to users with the latest (as of yet unreleased) version of NSS. This being over a decade since RSAES/RSASSA were "deprecated".

When we started WebCrypto, and began defining algorithms, NSS lacked AES-GCM support, even though Microsoft and Apple had been shipping for some time. It's also something that won't be available on certain Linux distributions for ~2-3 years, even though it will be part of NSS.

> 
> > Is it against interoperability if a user agent doesn't implement <img> support? 
> 
> It depends.  HTML defines several different conformance profiles which have
> slightly different requirements on UAs, and also has certain behaviors that
> are in fact optional.  For this particular case, a UA is allowed to not show
> images, but in that case it's required to do certain other things (e.g. show
> the alt text).
> 
> > What about Javascript? Cookies? Location APIs?
> 
> UAs are allowed to let the user do whatever they want, including the user
> explicitly instructing the UA to violate the spec.  That's what makes UAs
> _user_ agents.
> 
> The default UA configuration, however, for the "web browser" HTML
> conformance class, is expected to have JavaScript and cookies enabled.  Also
> location APIs, though whether to expose location information to pages is
> subject to user control, of course.  On the other hand, the "mail reader"
> HTML conformance class is not expected to have JavaScript enabled.

As it relates to cryptographic subsystems, it's hard for a user agent to know whether an algorithm was disabled by choice or not.

Further, unlike a variety of other things, UAs are generally in a Danger Zone if they ship polyfills of algorithms. The reality is more complex, and again, generally involves talking to people who measure time in billable hours.

> 
> But the number of these conformance classes in HTML is finite and small, and
> the general aim is to minimize the number of conformance classes and
> possible different behaviors.

In this respect, Web Crypto is more in the vein of a Device API (in practice) than a Web API - in that it's very much tied to the system it's running on. You *can* create it all in software, much like you could do WebGL software rendering, but unlike WebGL software rendering, you would have to navigate a vast minefield of related issues that will keep your counsel very happy, generally speaking.

> 
> > Is it against interoperability if a (generic, desktop) UA that supports the
> > location API executes on a device that does not have access to location
> > information?
> 
> Obviously, yes, since the API won't actually work.  ;)
> 
> Whether such interop problems are _avoidable_ is a different question, of
> course.  Sometimes they're not.
> 
> > Similarly, if a UA restricts access to audio/video access (via
> > getUserMedia()), is it against interoperability?
> 
> The spec for getUserMedia explicitly allows the UA to restrict access based
> on the user's decision here, for obvious reasons.   Yes, this means a page
> might not work as the page author intended if the user decides to not let it.
> 
> In practice, most modern consumer hardware (e.g. most laptops and pretty
> much every single phone and tablet) has a webcam, microphone, location
> support, etc.  And people who try to use such things understand when they
> might be missing and why.  That's a lot less obvious to me with algorithms. 

To extend the metaphor, the cryptographic situation - whether by library or OS - is akin to being on a platform where the OS can hide the availability of the microphone/webcam from the UA. The UA has no way of knowing whether it's simply not there, it's disabled, etc.

In theory, we could (try to?) normatively require that support for algorithms X, Y, and Z MUST be understood by the UA, even if the underlying cryptographic library doesn't support them, but that's a bit hard to do.

There's a set of cryptographic libraries that understand algorithm X, but which may have it disabled. But there's also a set of cryptographic libraries that simply do not have a way of communicating X - so the UA's parsing of the algorithm is for naught, because there's no way a user could 'enable' X.

A very real example of this is the missing PKCS#11 definitions (as used by NSS) for things like CONCAT, so there's no way to spec that a UA based on NSS MUST handle it.

In practice, this will apply to all new algorithms.

> Are we expecting most UAs to actually ship overlapping algorithm sets, for
> example, or disjoint ones?

Judging by the draft implementation shipped by Microsoft (the pre-Promises msCrypto) and by Apple (which I'm unclear whether it's prefixed or not, but which looks to predate much of the change from operation params -> generate params, AFAICT), and the version in Chrome available behind a flag, I think we'll see reasonably sizeable overlap.

IE's support - http://msdn.microsoft.com/en-us/library/ie/dn302338(v=vs.85).aspx
Safari's support is roughly at https://bugs.webkit.org/show_bug.cgi?id=122679 , which is a very similar set of algorithms
Chromium's supported algorithms (for M-37, with expansion planned), from the Intent to Ship ( https://groups.google.com/a/chromium.org/d/msg/blink-dev/Tn3pfJZDcGg/nUlvUOFKL_QJ ) is https://docs.google.com/a/chromium.org/document/d/184AgXzLAoUjQjrtNdbimceyXVYzrn3tGpf3xQGCN10g/edit , which is very similar to MSFT.

However, please note the caveats: support is inconsistent between Chromium platforms, due to dependencies on libraries - the version of NSS, or support within OpenSSL (which is missing a number of algorithms, or has them implemented insecurely).

This again goes back to the general statement about being a set of Device APIs (conceptually), even though they're exposed through a common interface.
Comment 6 Boris Zbarsky 2014-06-05 06:02:56 UTC
> Let's use a concrete example here. X is RSASSA-PKCS1-v1_5. It's popular (still
> used for most X.509 certificates, for example), but as noted by INRIA, lacks a
> proof, and is a bit long in the tooth. The replacement, RSA-PSS, is recommended
> for new systems.

OK.  Let's stick with this concrete example.  How feasible is requiring implementations to support at least one of RSASSA-PKCS1-v1_5 and RSA-PSS?
Comment 7 Ryan Sleevi 2014-06-05 06:38:01 UTC
(In reply to Boris Zbarsky from comment #6)
> OK.  Let's stick with this concrete example.  How feasible is requiring
> implementations to support at least one of RSASSA-PKCS1-v1_5 and RSA-PSS?

Variables that have to be considered (a sketch of where each surfaces in a generateKey call follows this list):
- Key sizes:
  - 1024 is generally considered insecure for new usages, but may be secure for short-term keys.
  - Should <1024 be permitted?
  - OS X only supports keys < 8K. Windows/NSS support 16K or more.
  - What increment function should be used? Multiples of 8K?
- Hash algorithms
  - Support for SHA-1?
  - Support for SHA-2?
- Exponents
  - Should only F4 be supported? What about support for F0? Despite the attacks, there are still keys out there with F0.
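For concreteness, a hedged sketch of where each of those knobs appears in a single generateKey call (promise-based API; the values are illustrative, not recommendations):

    crypto.subtle.generateKey(
        { name: "RSASSA-PKCS1-v1_5",
          modulusLength: 2048,                                  // key size: 1024? 8K? 16K?
          publicExponent: new Uint8Array([0x01, 0x00, 0x01]),   // F4; F0 would be [0x03]
          hash: { name: "SHA-256" } },                          // SHA-1 vs. the SHA-2 family
        false, ["sign", "verify"]
    ).then(function (keyPair) { /* this particular combination is supported */ },
           function (err) { /* eg: NotSupportedError, or a rejection of the key size */ });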

Now let's say we solve all of these issues in the context of the three desktop browsers with implementations currently exposed (I don't believe Firefox's has landed yet).

What about for mobile devices? The answers to the above questions are different. Or platforms which implement key storage on a TPM (1.2 / 2.0) - still origin restricted, but more secure? On a device like Chromecast, the restriction might be 2K keys, at F4, with only SHA-1, only SSA. Is that acceptable? Or does all of WebCrypto get disabled then?

And that's not even beginning to touch on the (many) issues from http://www.cryptolaw.org/


I do think we're likely going to end up with algorithm profiles that result from the natural effect of implementations - both those now and those on (other) devices - and I do think we're going to end up with multiple profiles - at least within a realm of (embedded), (mobile), (desktop), if not more - based on the real platform limitations and constraints.
Comment 8 Boris Zbarsky 2014-06-05 07:00:26 UTC
> The answers to the above questions are different.

So let's turn this around.  How does one actually make use of this API in practice?  That is, how does one discover the supported key lengths, hash algorithms, and exponents for RSASSA-PKCS1-v1_5 and RSA-PSS?
Comment 9 Boris Zbarsky 2014-06-05 07:01:53 UTC
Or put another way, one of the goals of web APIs is to make it possible for all UAs to support the API and to minimize the amount of stuff that gets created that works with only one UA.

So far it seems to me like the default webcrypto workflow will be to create things that only work in the UA you tested in...
Comment 10 Ryan Sleevi 2014-06-05 07:22:00 UTC
(In reply to Boris Zbarsky from comment #8)
> > The answers to the above questions are different.
> 
> So let's turn this around.  How does one actually make use of this API in
> practice?  That is, how does one discover the supported key lengths, hash
> algorithms, and exponents for RSASSA-PKCS1-v1_5 and RSA-PSS?

This is the general algorithm discovery problem, which goes back to ISSUE-3 ( http://www.w3.org/2012/webcrypto/track/issues/3 )

Within the WG, the plan has been that during the CFI phase, each algorithm is independently treated as an interoperable component to be tested, with exit criteria for *each* algorithm having defined points of interoperability. This again relates to the Curve25519 discussion.

That said, even UAs may not know what the practical limitations are while implementing! PKCS#11, for example, doesn't have a way to discover the step size for keys, while CNG does ( http://msdn.microsoft.com/en-us/library/windows/desktop/aa375525(v=vs.85).aspx )
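A hedged sketch of the only portable "discovery" mechanism a page (or a UA sitting on PKCS#11) really has - attempt a parameter set and see whether the operation rejects. The probe below is illustrative, not a recommended strategy:

    function probeModulusLength(bits) {
        return crypto.subtle.generateKey(
            { name: "RSASSA-PKCS1-v1_5", modulusLength: bits,
              publicExponent: new Uint8Array([0x01, 0x00, 0x01]),
              hash: { name: "SHA-256" } },
            false, ["sign", "verify"]
        ).then(function () { return true; }, function () { return false; });
    }
    probeModulusLength(3072).then(function (ok) {
        // "false" could mean the algorithm is unsupported, disabled by policy, or
        // merely that this size is unavailable - the API cannot say which.
    });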

If this sounds like a horrible place to be in from a UA's perspective, now you see why this has been such a long effort. This is akin to trying to define a set of 3D APIs, when the underlying implementation may be on OpenGL, DirectX, and Glide, during their heyday. While there is significant conceptual overlap, the translation is... lossy.

The current API is a reflection of attempting to find the lowest common denominator of potential implementability (eg: through cross-referencing PKCS#11, CAPI/CNG, CDSA/SecurityTransforms/CommonCrypto, OpenSSL, GnuTLS), while at the same time recognizing there are practical limitations towards the interop.


Broadly speaking, many of the interop issues come up with the asymmetric operations, due to their greater variability. Many of the symmetric operations have a very limited number of modes and key sizes. However, there's already been concern about whether or not the 192-bit AES key sizes are worthwhile from an implementation standpoint, or if practically speaking, only 128/256-bit are meaningful.
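To illustrate (a hedged sketch; a single parameter is the entire difference):

    crypto.subtle.generateKey({ name: "AES-GCM", length: 192 }, true,
                              ["encrypt", "decrypt"])
        .catch(function (err) {
            // An implementation that only offers 128/256-bit AES would reject here.
        });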
Comment 11 Boris Zbarsky 2014-06-05 13:01:06 UTC
> This is the general algorithm discovery problem

That doesn't answer my question, unfortunately.  How is this API envisioned to be used?

> Within the WG, the plan has been that during the CFI phase

To be honest, I'm more interested in practical in-the-field interop than in the process wrangling here.  The two-implementations interop requirement is meant to ensure in-the-field interop, but that generally requires, imo, that the same two implementations pass all the tests.  Of course it's common for working groups to try to use some weaker criterion there...  In any case, all that is a side-issue; the real issue is what happens out in the wild.

> If this sounds like a horrible place to be in

Yes, it does!

> This is akin to trying to define a set of 3D APIs, when the underlying
> implementation may be on OpenGL, DirectX, and Glide, during their heyday.

We have a set of 3D APIs where the underlying implementation may be OpenGL, DirectX, or software on the web today.  It's called WebGL.  This was done by only defining a lowest-common-denominator sort of API that could be implemented on top of other things.

We also have a set of 2D APIs on the web (canvas) that is implemented on top of all sorts of different graphics libraries.

Your situation is actually quite different, as I understand, because you're saying there _isn't_ actually meaningful lowest-common-denominator overlap of actual end-to-end functionality between crypto libraries.

> However, there's already been concern about whether or not the 192-bit AES
> key sizes are worthwhile from an implementation standpoint, or if practically
> speaking, only 128/256-bit are meaningful.

Well, requiring at least the 128 or 256 size might be at least a start...

It seems completely wrong to me that an implementation can implement absolutely no algorithms and reasonably claim to be implementing this spec.
Comment 12 Mark Watson 2014-06-05 15:12:45 UTC
Would it work for us to declare certain algorithms as "candidates" for becoming required? Then, we will promote them to required only if the desktop browsers (say) are all successful in implementing them. And only the commonly implemented subset of parameters would be required.

We would be justified in placing this requirement on implementors in other environments by the evidence that several independent implementors have successfully navigated the various minefields outlined by Ryan (i.e. 'if they can do it, you can too').

It does seem that the only way to discover what can be commonly implemented across platforms is by empirical experiment, but we could use the results when they are in.
Comment 13 Boris Zbarsky 2014-06-05 15:14:19 UTC
Note that in W3C process terms you could do that by having those normative requirements in the spec, marking them at risk, and then after CR seeing where things stand...
Comment 14 Web Cryptography Working Group 2014-06-05 15:51:45 UTC
Precisely, I think that is what we were discussing doing as well - i.e. change "recommended" to "suggested" and then see how tough interop is during CR. I'm not averse, if we get widespread interop during CR, to adding some algorithms as mandatory to implement.
Comment 15 Ryan Sleevi 2014-06-05 15:58:24 UTC
(In reply to Boris Zbarsky from comment #11)
> 
> That doesn't answer my question, unfortunately.  How is this API envisioned
> to be used?

The same way developers have, for the past two decades, worked on every other platform with a cryptographic API.

You try it. It either works, or it does not. If it does not, you inform your users.

This is the reality of using the OS provided cryptographic APIs. Unless you bake the cryptography in yourself - opening up a whole different can of worms that are non-technical - you have limited guarantees. Even when the OS/library has reference implementations for X, Y, Z - almost invariably, X/Y/Z can be disabled.
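A minimal sketch of that "try it" pattern, assuming the promise-based API (the algorithm choice and the user-facing handling are illustrative):

    var iv = crypto.getRandomValues(new Uint8Array(12));
    crypto.subtle.generateKey({ name: "AES-GCM", length: 256 }, false,
                              ["encrypt", "decrypt"])
        .then(function (key) {
            return crypto.subtle.encrypt({ name: "AES-GCM", iv: iv }, key,
                                         new Uint8Array([1, 2, 3]));
        })
        .catch(function (err) {
            // A NotSupportedError (or a rejection from a disabled algorithm) lands
            // here; the application decides what to tell its users.
            showUnsupportedMessage(err);   // hypothetical application-level function
        });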

> We have a set of 3D APIs where the underlying implementation may be OpenGL,
> DirectX, or software on the web today.  It's called WebGL.  This was done by
> only defining a lowest-common-denominator sort of API that could be
> implemented on top of other things.

Without belaboring the metaphor too much, I was *not* talking about things today. I was describing what it would be like to try to standardize WebGL on the APIs available 15 years ago. It would not have worked - if you're familiar with the limitations of Glide/DirectX 3, the overlap simply was not there.

> 
> We also have a set of 2D APIs on the web (canvas) that is implemented on top
> of all sorts of different graphics libraries.

Because software polyfills are possible. This is not the case with cryptography, generally speaking.

> 
> Your situation is actually quite different, as I understand, because you're
> saying there _isn't_ actually meaningful lowest-common-denominator overlap
> of actual end-to-end functionality between crypto libraries.

There is agreement in what things are called, but yes, there is very little practical agreement in capabilities.

I do *not* see this as a bad thing, because there are a lot of motivations for why the world is this way.
Comment 16 Boris Zbarsky 2014-06-05 16:02:13 UTC
> You try it. It either works, or it does not. If it does not, you inform your
> users.

That's sort of ok for controlled environments where you know where you'll be running (like operating systems), but seems _terrible_ for the web.  What it will mean in practice is sites being created to run in only one browser, and likely only in particular versions of one browser...
Comment 17 Ryan Sleevi 2014-06-05 16:06:50 UTC
(In reply to Web Cryptography Working Group from comment #14)
> Precisely, I think that is what we discussing doing as well - i.e. change
> "recommended" to "suggested" and then see how tough interop is doing CR. I'm
> not adverse to, if we get widespread interop during CR, to adding some
> algorithms as mandatory to be implement.

In case it hasn't been perfectly clear, I am strongly opposed to this and would view it as a blocking issue.

I suspect that the Google team is the most unique, so far, in their WebCrypto implementation. I'm not aware of Microsoft's mobile implementation plans. Apple's plans in theory share the same implementation infrastructure (CommonCrypto), but I do believe that the iOS implementation is slightly different than the OS X (certainly has been the case for every one of their previous APIs).

However, on the Chrome side, I'm daily dealing with a wide variety of hardware and platforms. Our abilities on Windows (where we have chosen to deal with the hassle of shipping crypto ourselves) are different than our abilities on Linux (where we have not) than our abilities on ChromeOS (where there is specialized hardware) than our abilities on Android (where there is variance in capabilities based on the OS level and various hardware capabilities) than our abilities on other platforms that use Chrome/Chromium as the basis for capabilities (such as ChromeOS).

Even if the WG were to say "Tough, that's Google's problem, you can afford more lawyers and engineers", every other UA that would have these different profiles is going to be similarly affected.

This thread has been trying to explain the same issues that have been present for the past two years. While good to be written down, it doesn't change the very unfortunate reality of the world, so it would be truly unfortunate to ignore these issues.
Comment 18 Ryan Sleevi 2014-06-05 16:11:37 UTC
(In reply to Boris Zbarsky from comment #16)
> > You try it. It either works, or it does not. If it does not, you inform your
> > users.
> 
> That's sort of ok for controlled environments where you know where you'll be
> running (like operating systems), but seems _terrible_ for the web.  What it
> will mean in practice is sites being created to run in only one browser, and
> likely only in particular versions of one browser...

Even within the OS space, you find additional variance within the OS versions you wish to support. The lack of AES/ECC on Windows XP, for example. Or of SHA-2. Without trying to go into a tangent on XP's EOL, it's more a remark about how things can vary, even within an API.

How is this any different than a "random chat with someone" site, like ChatRoulette, detecting if you have a camera and microphone? Or an application, like Google Maps, detecting if you have GPS to offer higher-precision information?

Each of these algorithms is, again, conceptually a device API. Application developers will not require the set of (all or none); they will, depending on their application, require certain ones.

Within these APIs, as well, it's not a case of requiring all or no capabilities - they may have different requirements, some that have graceful fallback / opportunity to try other permutations (eg: generating keys), while other use cases may have no fallback method (eg: verifying signatures)
Comment 19 Mark Watson 2014-06-05 16:22:22 UTC
(In reply to Ryan Sleevi from comment #17)
> (In reply to Web Cryptography Working Group from comment #14)
> > Precisely, I think that is what we discussing doing as well - i.e. change
> > "recommended" to "suggested" and then see how tough interop is doing CR. I'm
> > not adverse to, if we get widespread interop during CR, to adding some
> > algorithms as mandatory to be implement.
> 
> In case it hasn't been perfectly clear, I am strongly opposed to this and
> would view it as a blocking issue.
> 
> I suspect that the Google team is the most unique, so far, in their
> WebCrypto implementation. I'm not aware of Microsoft's mobile implementation
> plans. Apple's plans in theory share the same implementation infrastructure
> (CommonCrypto), but I do believe that the iOS implementation is slightly
> different than the OS X (certainly has been the case for every one of their
> previous APIs).
> 
> However, on the Chrome side, I'm daily dealing with a wide variety of
> hardware and platforms. Our abilities on Windows (where we have chosen to
> deal with the hassle of shipping crypto ourselves) are different than our
> abilities on Linux (where we have not) than our abilities on ChromeOS (where
> there is specialized hardware) than our abilities on Android (where there is
> variance in capabilities based on the OS level and various hardware
> capabilities) than our abilities on other platforms that use Chrome/Chromium
> as the basis for capabilities (such as ChromeOS).
> 
> Even if the WG were to say "Tough, that's Google's problem, you can afford
> more lawyers and engineers", every other UA that would have these different
> profiles is going to be similarly affected.
> 
> This thread has been trying to explain the same issues that have been
> present for the past two years. While good to be written down, it doesn't
> change the very unfortunate reality of the world, so it would be truly
> unfortunate to ignore these issues.

It's not clear to me how the above is an argument against what was proposed.

The idea would be that at some future point (say a year from now), we look at what has actually been implemented across multiple platforms. You have the advantage that your platform would certainly be included in the ones we look at. If we find there is a common subset that is widely implemented, we make that subset mandatory for future implementations.

At least from Chrome's point of view, surely this process would be a no-op, since the only things made mandatory would be those you have already implemented?

If we were to follow the above process, there seems also no harm in writing down our 'wish list' for that common subset, making it clear that it is no more than that.
Comment 20 Boris Zbarsky 2014-06-05 16:32:34 UTC
> detecting if you have a camera and microphone?

Because in practice people who want to use such a site always do.  And that's because users know whether they have a camera and microphone and don't have the expectation that they can do video chat without a camera or audio without a microphone.

So place yourself in the user's shoes for a second.  You go to a website to do chat, and it says "Can't do it, no microphone".  You either go "Oh, duh, I forgot I was using my 5-year-old desktop", or you assume that either the site or the browser is broken because the microphone is right there on your device and works fine with all other applications.  Obviously sites and browsers have incentive to not seem broken, so they work on properly detecting microphones (in the case of browsers) and properly detecting microphone APIs (sites).  As a result, the site can be fairly certain that telling the user "There is no microphone" is pretty reasonable, because if there were one the browser would expose it to the site and the site would see it.  Furthermore, it's even sanely actionable: the user can plug in their external microphone, for that tiny fraction of users who have a device without a built-in one, have an external one, and use it, or grab a different device that has a microphone.

Alright, let's try this again with crypto.  A user goes to a site to, hypothetically, see a movie whose watching involves crypto operations in some way (if there are other obvious use cases that I should be considering here, I'd appreciate a pointer; I'm told the movie use case does exist, though).  The site wants to use an algorithm the user's browser doesn't support.  What is the messaging the site shows the user?  It's obviously not going to be "Can't do it, no RSASSA-PKCS1-v1_5 support" because the user's reaction will be to read that out loud trying to make sense of it and then the user's significant other will think the user is choking and try the Heimlich maneuver on them.  More seriously, the site won't say that because from a typical user's point of view that's a meaningless statement.  Even for users who sort of understand what it means, it's really non-actionable.

So the site will instead provide an actionable error message.  Chances are, something like "You must be using Microsoft Internet Explorer 10.0 and Windows 7 to use this site" (with possibly a different browser name/version and different OS name/version, and maybe with a short list instead of a single entry, but this will be the gist).  Unlike a lot of other such statements on the web the statement might even be true, in the sense that Windows Vista and Windows 8 might not have the exact algorithm the site is using, or whatever.

In some abstract sense, there is no difference between these two cases: in one case the user grabs their tablet because it's got a camera and microphone, and in the other case the user grabs their tablet because it's got the blessed operating system and browser version on it.  However in terms of actual perception by actual people I think you'd find they get a lot more pissed off about "I can watch this movie on this website on my iPad but not my Android phone" than they do about not being able to video chat on a phone because it only has a backward-facing camera.  And that's because users know to get a two-camera phone if they want to video chat, and that's part of the obvious things people will tell you about a phone.  Whereas the exact list of crypto algorithms the built-in browser on the phone supports... is not something commonly advertised on the store shelf.
Comment 21 Ryan Sleevi 2014-06-05 16:33:03 UTC
(In reply to Mark Watson from comment #19)
> 
> It's not clear to me how the above is an argument against what was proposed.
> 
> The idea would be that at some future point (say a year from now), we look
> at what has actually been implemented across multiple platforms. You have
> the advantage that your platform would certainly be included in the ones we
> look at. If we find there is a common subset that is widely implemented, we
> make that subset mandatory for future implementations.

This doesn't work in practice for two reasons.

1) For some efforts, the finite resources of implementation have been focused on particular platforms where business requirements, rather than technical, have prioritized the implementation or support of certain algorithms.

2) Not all platform implementations, even within Chrome, are being pursued at the same rate.

I've already explained why, even for a single vendor, there is a vast swath of capabilities. Trying to argue for required algorithms favors those incumbents with implementations already, OR it encourages 'defensive' implementing in which the least possible is implemented within that time frame, to avoid the most normative requirements.

Most important, however, is the simplest and most obvious reason: your guarantees mean nothing. The UA is the User's Agent, and thus will run on platforms where algorithm X is not available or disabled, or key sizes less than Y are disabled. Your precious web application *has* to deal with these issues as a matter of course already.

Saying MUST, in WebCrypto, is really saying http://tools.ietf.org/html/rfc6919#section-1
Comment 22 Ryan Sleevi 2014-06-05 16:44:41 UTC
(In reply to Boris Zbarsky from comment #20)
> Alright, let's try this again with crypto.  A user goes to a site to,
> hypothetically, see a movie whose watching involves crypto operations in
> some way (if there other obvious use cases that I should be considering
> here, I'd appreciate a pointer; I'm told the movie use case does exist,
> though). 

For sake of discussion, it might be better to focus on an authentication service (IdP / RP), whose security goals are a bit less nebulous.

> So the site will instead provide an actionable error message.  Chances are,
> something like "You must be using Microsoft Internet Explorer 10.0 and
> Windows 7" to use this site (with possibly a different browser name/version
> and different OS name/version, and maybe with a short list instead of a
> single entry, but this will be the gist). 

Up until a few weeks ago, this is the same thing that a site that wished to use WebGL on certain major platforms would do, where, despite being "standard", is not implemented by a particular UA. And the user can do nothing but switch UAs.

> the other case the user grabs their tablet because it's
> got the blessed operating system and browser version on it. 

A site that wishes to restrict its users to "the blessed operating system and browser version" has plenty of other ways to do this. If that is your threat, you should be far more worried about http://www.w3.org/TR/2013/WD-webcrypto-key-discovery-20130108/ , which embodies that ability cryptographically.

So let's set aside a moment the 'hostile streaming provider' (which is certainly reasonable, though hopefully rare).

Your more common case is going to be your site operator that is wishing to use WebCrypto to offer some extended functionality. It might be handling OpenID Connect messages (which use JWT) within the UA, using WebCrypto to handle the JOSE interaction, with multiple iframes and postMessage to handle the identity provider/relying party communication.

They might allow a system of "sign in with just your email", but only if they detect the user agent supports their desired cryptographic profile. They might hide it from the user, or they might allow the user to try, run a check for support, and then decide that the feature is not accessible.

They can say *whatever* they want when it's not available, and whether it recommends the user to get a new UA, to file a bug with their existing UA, to write their congresscritter/representative/MP, it's up to the site.

However, from the point of the output of this WG, they know that every UA *COULD* implement a given algorithm, and every UA that *DOES* implement an algorithm will at least implement it to the conforming shape and size. The last thing you want is your signatures to be unreadable in every UA but the one you're using - which is a real risk, judging by historical APIs.
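A hedged sketch of the JOSE-style flow described above - verifying an RS256 (RSASSA-PKCS1-v1_5 with SHA-256) JWS signature using a public key the identity provider publishes as a JWK; all of the names here are illustrative:

    function verifyJws(jwk, signingInput, signatureBytes) {
        return crypto.subtle.importKey(
            "jwk", jwk,
            { name: "RSASSA-PKCS1-v1_5", hash: { name: "SHA-256" } },
            false, ["verify"]
        ).then(function (publicKey) {
            return crypto.subtle.verify(
                { name: "RSASSA-PKCS1-v1_5" }, publicKey,
                signatureBytes,                              // the decoded JWS signature
                new TextEncoder().encode(signingInput));     // the "header.payload" bytes
        });
    }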
Comment 23 Boris Zbarsky 2014-06-05 17:18:37 UTC
> And the user can do nothing but switch UAs.

While true:

1) Everyone agreed this was a terrible state of affairs.
2) There's a difference between "Use literally any other browser you've ever
   heard of" and "use a specific browser".
3) The UA you're presumably thinking of wasn't claiming to support WebGL.
4) It was clear to everyone that this was a temporary situation.

> A site that wishes to restrict it's users to "the blessed operating system and
> browser version" on it has plenty of other ways to do this. 

Sure.  What it looks like to me is that the current setup more or less _forces_ sites to do that.

Which, again, is why I keep asking for an example of how a well-authored site that doesn't want to do this will use this API and how it will know to do that correct thing.  That's where having some idea of what set of things one can expect a conforming implementation to implement (again, not all of them, but at least _some_ of them) seems like it could be useful.

> they know that every UA *COULD* implement a given algorithm

That's not actually obvious from the spec as it currently stands.  Again, right now a UA that doesn't implement any algorithms at all is considered as conformant as any other UA.

We agree that specifying what happens when a UA does implement an algorithm is worthwhile.  That part is done; I'm not worrying about that part.  ;)
Comment 24 Ryan Sleevi 2014-06-05 19:19:08 UTC
(In reply to Boris Zbarsky from comment #23)
> Sure.  What it looks like to me is that the current setup more or less
> _forces_ sites to do that.
> 
> Which, again, is why I keep asking for an example of how a well-authored
> site that doesn't want to do this will use this API and how it will know to
> do that correct thing.  That's where having some idea of what set of things
> one can expect a conforming implementation to implement (again, not all of
> them, but at least _some_ of them) seems like it could be useful.

I feel like I've given examples, but I suspect we're not communicating well on this.

An application that has *specific* cryptographic requirements can attempt the operations, detect a NotSupportedError, and surface to the user that their user agent doesn't support the necessary algorithms.

An application with less-stringent requirements - let's use a concrete example, of say, https://code.google.com/p/end-to-end/ - can use WebCrypto when it's available and supports the desired algorithm (speed! correctness!), and fall back to Javascript polyfills when not. They can make this decision per-application, depending on whether or not they feel the Javascript implementation matches their necessary security requirements.
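A hedged sketch of that detect-and-fall-back pattern; jsSha256 stands in for a hypothetical pure-JS implementation the application would otherwise ship:

    function sha256(bytes) {
        if (window.crypto && crypto.subtle) {
            return crypto.subtle.digest({ name: "SHA-256" }, bytes)
                .catch(function () { return jsSha256(bytes); });   // eg: NotSupportedError
        }
        return Promise.resolve(jsSha256(bytes));
    }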

An application like https://developer.mozilla.org/en-US/Persona can similarly use detection-and-polyfill, IF the security requirements are suitable.

An application such as the 'media streaming service over HTTP that wants to prevent passive, but not active, attackers' can likewise make tradeoffs. Perhaps they prefer WebCrypto for the key-storage and the browser-mediated key unwrap, and thus chose to not make the service available for UAs that don't support their necessary security profile, rather than polyfill. Or they might choose to polyfill via iframe. Or they might be (mistakenly) more concerned about the user of the device not having access to the keys, rather than the observer on the network. Or they might require hardware backed keys, much like they might require specific CDM modules or support for specific plugins, and thus interoperability is already out the window.

> 
> > they know that every UA *COULD* implement a given algorithm
> 
> That's not actually obvious from the spec as it currently stands.  Again,
> right now a UA that doesn't implement any algorithms at all is considered as
> conformant as any other UA.

Yes.

That is because, for the past two years, conformance has been separated into APIs vs algorithms.

I suspect there might also be some confusion about the list of algorithms included in the spec. As has been discussed on past calls, the intent is that *every* algorithm be divided into a separate REC-track document; were it not for the fact that the current spec format (WebIDL.xsl) is unwieldy, this would already have happened.

That would not change the thing you object to, but would make it clearer to readers that this is a "by design" and not "by accident" feature.

In this respect, it is similar (as I understand) to how the canvas element provides a consistent API/tag, but there are different possible contexts ("2d", "webgl"). Supporting the canvas tag does not, AFAICT, require that one support WebGL.

> 
> We agree that specifying what happens when a UA does implement an algorithm
> is worthwhile.  That part is done; I'm not worrying about that part.  ;)
Comment 25 Henri Sivonen 2014-06-10 11:35:59 UTC
(In reply to Ryan Sleevi from comment #1)
> However, there are types of interoperability that are more problematic.
> Cryptography is an extremely special case, for a variety of reasons. Some of
> them include:
...
(In reply to Ryan Sleevi from comment #7)
> And that's not even beginning to touch on the (many) issues from
> http://www.cryptolaw.org/

Frankly, these look like vague "There are concerns." kind of concerns that aren't usefully actionable.

Does any major browser these days ship different versions with different sets of algorithms to different countries? If not, then it seems beside the point to bring these points up. If they do, then it would be useful to be specific about which algorithms get disabled in which cases by which browsers.

(In reply to Ryan Sleevi from comment #1)
> This is also similar to browsers' existing TLS stacks - administrators can
> (and do) change or disable the ciphersuites used to negotiate with a server.
> Even the SSL "mandatory to implement" cipher may be disabled.

I think drawing an analogy between TLS and the Web Crypto API is incorrect, because they provide different interfaces to the application developer.

In the case of TLS, the application developer gets an authenticated encrypted duplex channel. In the Web case, the situation is even more specific and the Web application developer gets authenticated and encrypted HTTP requests and responses. The cipher suite details have no bearing on the interface presented to the Web application developer, so it's OK for the server to negotiate a different cipher suite with different browsers.

With the Web Crypto API, the level of abstraction is different. Chances are that you are implementing a particular application protocol that uses particular ciphers, and if those ciphers are not available, your code breaks or you have to provide the crypto in JS. If you have to provide the crypto in JS, the point of having the browser provide the crypto primitives is completely defeated, both as far as implementation convenience goes and as far as asking the browser to hold the keys even in the face of XSS attacks goes. Then the Web Crypto API is just for acceleration, with constant-timeness treated as an optional characteristic.

> As more and more users
> disable weak algorithms, user agents would be forced to remove *all*
> cryptographic capabilities 

This assumes that it's reasonable for users to be able to disable weak algorithms in a low-level API. As noted above, the alternatives are that the application breaks or that the application falls back on JS-implemented crypto.

If the application just breaks, it's very unlikely that users properly connect the breakage to the configuration changes they have themselves made. Instead, the user will just perceive the mystery breakage and probably blame it on the browser. Therefore, it's probably not a good idea for a browser to make it easy for users to make this kind of configuration changes that are most likely to reflect badly on the user's perception of the browser.

If the application falls back on JS-implemented crypto (i.e. the Web Crypto API was just used for acceleration and optional constant-timeness in the first place), the user doesn't get any security benefit from the fallback, so it doesn't make sense for the user to disable weak algorithms in that sense, either.

For practical purposes, Web applications will use the Web Crypto API behind the scenes and users won't have any meaningful browser configuration recourse to fight against the application developers making bad algorithm choices. The best bet is not to include weak algorithms in the first place, but the nature of Web compatibility will make it hard to get rid of any algorithm deemed weak subsequently.

(In reply to Ryan Sleevi from comment #3)
> If that sounds hypothetical, it's not. Z, in this case, is ECC with the NIST
> curves. So let's say we spec'd "NIST curves and Curve25519 and Brainpool"
> (Bug 25839). Now users within the US government are prohibited from using
> Curve25519/Brainpool, so they wish to disable those curves. Are they too now
> prevented from using WebCrypto?

If complying with FIPS means that random parts of the Web break for users whose browsers have been configured to be FIPS-compliant rather than Web-compatible, in the big picture it doesn't matter that much if the breakage arises from individual algorithms disappearing from the API or the whole API disappearing. Either way, it's going to be inconvenient for users who live under the FIPS bureaucracy, since some sites will just break in unobvious ways. The only way to make this not be the case would be not having non-FIPS crypto available via the API at all, but that would clearly be wrong, since it would be wrong for the whole rest of the world, including non-governmental users in the United States, to be limited in crypto in order to help U.S. government users not be inconvenienced by U.S. government bureaucracy.
Comment 26 Ryan Sleevi 2014-06-10 17:49:31 UTC
(In reply to Henri Sivonen from comment #25)
> Does any major browser these days ship different versions with different
> sets of algorithms to different countries? If not, then it seems beside the
> point to bring these points up. If they do, then it would be useful to be
> specific about which algorithms get disabled in which cases by which
> browsers.

Major browsers do, but in a way that is transparent to the browsers themselves. This is because major browsers are not in the business, generally speaking, of shipping cryptography. They defer that to existing libraries.

There are also exemptions that are applicable to browsers' use of cryptography (that is, as it relates to SSL/TLS) that do not apply to other uses. This includes both patents and legal restrictions (encryption vs authentication/signatures).

This is not the best forum to speculate or offer legal advice. I am merely presenting the facts, as they stand, because that HAS required the implementation efforts work together with counsel. And I doubt we're alone in this.

> 
> (In reply to Ryan Sleevi from comment #1)
> With the Web Crypto API, the level of abstraction is different. Chances are
> that you are implementing a particular application protocol that uses
> particular ciphers and if those ciphers are not available, your code breaks
> or you have to provide the crypto in JS. If you have to provide the crypto
> in JS, the point of having the browser provide the crypto primitives is
> completely defeated as far as implementation convenience goes or as far as
> asking the browser to hold the keys even in the face of XSS attacks goes.
> Then the Web Crypto APIs just for acceleration and constant timeness treated
> as an optional characteristic.

But it doesn't change what I said - administrators can and do regularly disable cipher suites and algorithms. The choice to disable algorithms is transparent to the web author, but not to the server administrator - it directly affects the security profile of servers (eg: crappy suites vulnerable to BEAST or issues like RC4), and failure causes interop problems.

However, if your use case is "implementing a particular application protocol", then either you have negotiation capabilities - in which case, your application can work, sans polyfills - or you don't, in which case, you, the application author, can make a choice about whether polyfills are appropriate. In some cases, they are. In some cases, they are not. A protocol without negotiation is something that is rare in practice for standards bodies to produce, although I certainly admit the possibility of an application-specific protocol.

More importantly, and at the risk of diverging this bug - WebCrypto does not protect you in the face of XSS attack. I cannot stress that enough. If you feel it does, it's best to start a thread on public-webcrypto-comments@ so we can explain how, even with 'extractable', you are not protected from common attacks, both cryptographic and on privacy.



> 
> > As more and more users
> > disable weak algorithms, user agents would be forced to remove *all*
> > cryptographic capabilities 
> 
> This assumes that it's reasonable for users to be able to disable weak
> algorithms in a low-level API. As noted above, the alternatives are that the
> application breaks or that the application falls back on JS-implemented
> crypto.

But the reality is that this happens, even though you seem to believe it doesn't.

Such restrictions are required by local jurisdiction, enterprise policy, industry regulation, export policy, or mere taste.

> Therefore, it's probably not a good idea for a browser to
> make it easy for users to make this kind of configuration changes that are
> most likely to reflect badly on the user's perception of the browser.

In every implementing UA, the ability to make these kinds of configuration changes is not part of the UA itself. It is entirely transparent to the UA, but it happens, which is the point.

> The best bet is not to include weak algorithms in the first place,
> but the nature of Web compatibility will make it hard to get rid of any
> algorithm deemed weak subsequently.

This is irrelevant and not the point I was making. It was about algorithms that show future weaknesses.

> 
> (In reply to Ryan Sleevi from comment #3)
> > If that sounds hypothetical, it's not. Z, in this case, is ECC with the NIST
> > curves. So let's say we spec'd "NIST curves and Curve25519 and Brainpool"
> > (Bug 25839). Now users within the US government are prohibited from using
> > Curve25519/Brainpool, so they wish to disable those curves. Are they too now
> > prevented from using WebCrypto?
> 
> If complying with FIPS means that random parts of the Web break for users
> whose browsers have been configured to be FIPS-compliant rather than
> Web-compatible, in the big picture it doesn't matter that much if the
> breakage arises from individual algorithms disappearing from the API or the
> whole API disappearing. Either way, it's going to be inconvenient for users
> who live under the FIPS bureaucracy, since some sites will just break in
> unobvious ways. The only way to make this not be the case would be not
> having non-FIPS crypto available via the API at all, but that would clearly
> be wrong, since it would be wrong for the whole rest of the world, including
> non-governmental users in the United States, to be limited in crypto in
> order to help U.S. government users not be inconvenienced by U.S. government
> bureaucracy.

I'm sorry, but you've missed the point here as well. This has nothing to do with FIPS compliance. Your point about "ignoring the US government" is the exact opposite of the point.

The NIST curves are standard and, without exception, the most widely adopted curves - in implementations and in deployment. However, people, who would seem to include yourself, may distrust the U.S. Government and thus wish to disable these curves in favour of alternate curves. These may be non-standard (but performant) curves like Curve25519, "standard by a different agency" curves such as Brainpool, or experimental curves (like the MSR curves). But the NIST curves still exist, and substantial use cases depend on supporting them.

Again, this has NOTHING to do with FIPS. This is simply that if WebCrypto requires support for the NIST curves (which nearly everyone *presently* uses), then any disabling of the NIST curves would require disabling the entire API. That is not acceptable. If the NIST curves are NOT required, then we're either recommending people use non-standard curves, or standard curves that have poorer security/performance characteristics. This is also craziness.
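
To illustrate why per-curve disabling need not take the whole API with it, here is a minimal sketch - it assumes (this is not spec text) that a disabled or unsupported curve surfaces as an ordinary rejected promise:

  // Probe whether ECDSA over P-256 is usable; even if it is not, the rest
  // of crypto.subtle (digests, AES, RSA, other curves) remains usable.
  crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false,              // non-extractable
    ["sign", "verify"]
  ).then(function (keyPair) {
    // P-256 is available in this configuration.
  }, function (err) {
    // Disabled or unsupported: choose another curve or algorithm,
    // exactly as if this curve had never been registered.
  });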
Comment 27 Web Cryptography Working Group 2014-06-11 08:33:48 UTC

(In reply to Ryan Sleevi from comment #26)
> (In reply to Henri Sivonen from comment #25)
> > Does any major browser these days ship different versions with different
> > sets of algorithms to different countries? If not, then it seems beside the
> > point to bring these points up. If they do, then it would be useful to be
> > specific about which algorithms get disabled in which cases by which
> > browsers.

The NIST curves are widely deployed. Nonetheless, there is also demand for Curve25519. Ultimately, while browsers often do have local policy they have to deal with, the W3C needs to have a test-suite to prove interoperability over the API. We should test every algorithm that is listed as registered. I see no reason not to register Curve25519, given the demand and given that it can be specified; if browsers don't implement it, that will show up in the CR test-suite report. The CR test-suite report for browsers, which will depend on underlying libraries, will then demonstrate the interoperability that developers can expect and rely on.

As regards Boris's original point about interoperability, I would hope that all the "Suggested" (formerly known as "Recommended for implementation") algorithms are supported cross-browser/platform so developers have a common interoperable baseline, but algorithms like Curve25519 that are registered but may not yet be widely implemented will nonetheless be listed in the CR report. If it ends up that "Suggested for implementation" algorithms are not actually implemented in an interoperable manner, I would hope the W3C may wish to revisit whether or not those algorithms are actually suggested for implementation. The goal of this Working Group is, as in any other Working Group, to produce an API that is interoperable and can be relied on by developers as much as possible, even given the particular nature of cryptography.

Comment 28 Henri Sivonen 2014-06-11 08:51:16 UTC
(In reply to Ryan Sleevi from comment #26)
> (In reply to Henri Sivonen from comment #25)
> > Does any major browser these days ship different versions with different
> > sets of algorithms to different countries? If not, then it seems beside the
> > point to bring these points up. If they do, then it would be useful to be
> > specific about which algorithms get disabled in which cases by which
> > browsers.
> 
> Major browsers do, but in a way that are transparent to major browsers. This
> is because major browsers are not in the business, generally speaking, of
> shipping cryptography. They defer that to existing libraries.

Which major browsers defer to libraries that are not shipped by the same entity that ships the browser?

Is there documentation of the shipping destination-specific differences in algorithm availability in libraries used by major browsers?

> There's also exemptions that are applicable to browsers' use of cryptography
> (that is, as it relates to SSL/TLS) that do not apply to other uses.

OK.

> However, if your use case is "implementing a particular application
> protocol", then either you have negotiation capabilities - in which case,
> your application can work, sans polyfills - or you don't, in which case,
> you, the application author, can make a choice about whether polyfills are
> appropriate. In some cases, they are. In some cases, they are not. A
> protocol without negotiation is something that is rare in practice for
> standards bodies to produce, although I certainly admit the possibility of
> an application-specific protocol.

Suppose the protocol you implement is OpenPGP. You receive a message signed with RSA. If you want to verify the signature, you have to have RSA signature verification--either via Web Crypto or via a polyfill. You don't get to negotiate an algorithm at that point, even though OpenPGP supports multiple signature algorithms.
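
(To make the shape of that choice concrete: a minimal sketch, glossing over OpenPGP's actual key and signature encodings and assuming SHA-256 as the hash, of the WebCrypto half of RSA signature verification. If the import or verify call rejects because RSA with that hash is unavailable, the remaining options are a JS polyfill or failure - there is nothing to negotiate.)

  // rsaSpki: the signer's public key as an SPKI ArrayBuffer
  // signature, signedData: ArrayBuffers extracted from the message
  crypto.subtle.importKey(
    "spki", rsaSpki,
    { name: "RSASSA-PKCS1-v1_5", hash: { name: "SHA-256" } },
    false, ["verify"]
  ).then(function (publicKey) {
    return crypto.subtle.verify(
      { name: "RSASSA-PKCS1-v1_5" }, publicKey, signature, signedData);
  }).then(function (ok) {
    // ok === true means the signature verified
  }, function () {
    // RSA verification unavailable (or bad input): polyfill or give up.
  });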

If you are implementing e.g. the TextSecure protocol (whether that particular protocol makes sense as a part of a Web page isn't relevant to the example), you have to have a specific set of cryptographic primitives. AFAIK, there's no negotiation between functionally similar alternatives.

As for the point about standards bodies, I don't think it's reasonable to expect that even a majority of the application-specific protocols that could potentially use Web Crypto (if Web Crypto ended up being a consistent enough basis to write apps on) would be specified by standards bodies.

> > > As more and more users
> > > disable weak algorithms, user agents would be forced to remove *all*
> > > cryptographic capabilities 
> > 
> > This assumes that it's reasonable for users to be able to disable weak
> > algorithms in a low-level API. As noted above, the alternatives are that the
> > application breaks or that the application falls back on JS-implemented
> > crypto.
> 
> But the reality is that this happens, which you seem to believe doesn't.

I believe it happens. I just think it is sufficiently unreasonable that it's not good for the spec to treat it as a legitimate, conforming thing; from the spec perspective it would make more sense to treat it like a user self-sabotaging other features of their self-compiled copy of an Open Source browser.

> > The best bet is not to include weak algorithms in the first place,
> > but the nature of Web compatibility will make it hard to get rid of any
> > algorithm deemed weak subsequently.
> 
> This is irrelevant and not the point I was making. It was about algorithms
> that show future weaknesses.

Right. My point is that the dynamics of Web compatibility are such that if an algorithm is shown weak in the future, you won't be able to make Web apps secure by withdrawing the algorithm from the API underneath those apps.

Consider a recent example from a non-Web context: LibreSSL developers tried to remove DES (not 3DES but DES) as insecure, but they had to put it back in, because too many things that use OpenSSL as a non-TLS crypto library broke. At least with *BSD systems you have a finite ports collection to test with. With the Web, you don't have a finite collection of Web apps to test, so this pattern can only be worse on the Web.

This is why I think it's unreasonable to design with the assumption that it would be reasonable for browser developers, administrators or users to disable the API availability of algorithms that have been taken into use and found weak subsequently. I believe browser configuration simply isn't going to be a viable recourse for users to forcibly fix Web apps that use weak algorithms via the Web Crypto API.

> The NIST curves are standard and, without exception, the most widely adopted
> curves - in implementations and in deployment. However, people, which would
> seem to include yourself, may have a distrust of the U.S. Government, and
> thus wish to disable these curves, using alternate curves.

I don't appreciate something being inferred about my trust or distrust of the U.S. government from the point I made.

My point is that a policy (maybe the policy is not called FIPS, but at least in Firefox the button that messes with crypto features is called "Enable FIPS") that involves disabling some Web browser features as a matter of local policy is hostile to Web compatibility and, therefore, something that Web standards, which are supposed to enable interop, shouldn't particularly accommodate. If there was only one actor with such a policy (the USG), it would be *possible* to accommodate one such actor by making the browser feature set match the local policy of the single actor. But as soon as there's anyone else who legitimately should have more features, and there are, designing for a single actor (e.g. the USG) is not OK. (This observation isn't a matter of trust in the USG but an observation that you get a requirement conflict as soon as you have more than one whitelist/blacklist of algorithms.)

FWIW, I think it's reasonable for an actor to have a local policy that says that their servers shall only enable certain TLS cipher suites or that Web apps hosted by them shall only use certain algorithms via the Web Crypto API. I think it's unreasonable to subset the browser features so that the compatibility of the browser with sites external to the actor is broken in ways that are non-obvious to users. (The possibility of a site providing a JS polyfill for a crypto algorithm shows how silly a policy that turns off Web Crypto API features would be: unapproved crypto algorithms would still end up being used.)

I realize that there are organizations whose policies are unreasonable per the above paragraph, but, personally, I think the WG shouldn't have to feel an obligation to those organizations to make it easy for them to subset the API or to bless such a subset configuration as conforming, since such subsetting is hostile to interoperability of Web tech and the W3C should be promoting interoperability of Web tech. That is, I think it would be fine to say that such subsetting is non-conforming. Not in the sense of expressing a belief that such a thing would never happen but in the sense that non-conforming configurations should be expected not to work interoperably. In other words, if someone deliberately seeks to make things incompatible, the WG should feel no need to make sure such deliberate incompatibility is accommodated within the definition of conformance.

For the rest of the Web that doesn't seek deliberate incompatibility, there is value in having a clear definition of what the compatible feature set for vanilla browsers is.
Comment 29 Anne 2014-06-11 08:57:41 UTC
(In reply to Henri Sivonen from comment #28)
> For the rest of the Web that doesn't seek deliberate incompatibility, there
> is value in having a clear definition of what the compatible feature set for
> vanilla browsers is.

Agreed. Even if the WG is interested in addressing use cases outside of browsers, there should be a clear conformance level aimed solely at browsers so that they can at least get decent interoperability.
Comment 30 Ryan Sleevi 2014-06-11 09:06:09 UTC
(In reply to Henri Sivonen from comment #28)
> Which major browsers defer to libraries that are not shipped by the same
> entity that ships the browser?

This is not the correct question to ask.

CNG is only updated on major Windows releases. IE is on a separate timeframe.
Common Crypto is only updated on major OS X releases. Safari is on a separate timeframe.
Firefox intentionally supports OS-distributed NSS. In fact, in the most common distributions of Firefox on Linux platforms, Mozilla has no direct control over the version of NSS used.
Similarly, Chrome on Linux makes the same deferral. On other platforms, it uses a mix of cryptographic capabilities provided by the OS or by third-party libraries.


> Suppose the protocol you implement is OpenPGP. You receive a message signed
> with RSA. If you want to verify the signature, you have to have RSA
> signature verification--either via Web Crypto or via a polyfill. You don't
> get to negotiate an algorithm at that point even though OpenPGP support
> multiple signature algorithms.

Correct. And as an author of such an application, you can make a reasonable and informed choice about whether a polyfill implementation is appropriate when receiving messages (for signature verification, it might be; for decryption, it might not), and you can make informed choices when sending messages.

> As for the point about standards bodies, I think it's not reasonable to
> expect that even the majority of application-specific protocols that could
> potentially use Web Crypto, if Web Crypto ended up being consistent enough a
> basis to write apps on, would be standards body-specified.

That is something we have repeatedly, emphatically, and unquestionably discouraged within the security considerations and in the discussions for the past two years.

> I believe it happens. I just think it is sufficiently unreasonable that it's
> not good for the spec to treat it as a legitimate conforming thing and from
> the spec perspective it would make more sense to treat it like a user
> self-sabotaging other features of their self-compiled copy of an Open Source
> browser.

This is an unreasonably hostile stance to take toward users, and it rests on an extreme that is not accurate. Show me an industry regulation that says things like "Thou shalt not access the camera", compared to the many industry and governmental regulations that say "Thou shalt not use SHA-1" (for example).

> Right. My point is that the dynamics of Web compatibility are such that if
> an algorithm is shown weak in the future, you won't be able to make Web apps
> secure by withdrawing the algorithm from the API underneath those apps.

Correct.

And the fact that WebCrypto, in its present form, allows implementations to do this is a FEATURE, not a bug.

> For the rest of the Web that doesn't seek deliberate incompatibility, there
> is value in having a clear definition of what the compatible feature set for
> vanilla browsers is.

Again, you have seemingly missed the point, or at least made conflicting statements.

Your position, simply stated, appears to be that if I want to disable SHA-1, as a user, my only recourse is to disable all of WebCrypto, because it no longer fits the mandatory to implement algorithms.

As an implementor, author, and user, that is unacceptable.

If we allow for SHA-1 to be disabled - by user/policy - while still allowing other algorithms to be enabled (SHA-256, RSA-OAEP with SHA-256, perhaps even HMAC-SHA1 and only disable SHA-1 for digest), then application developers are in the exact same position as they are now.
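
For instance (a minimal sketch of the application's side of that bargain; the assumption that a policy-disabled algorithm rejects the same way an unsupported one does is mine, not spec text):

  // data: some BufferSource the application already has in hand
  crypto.subtle.digest({ name: "SHA-1" }, data).then(function (hash) {
    // SHA-1 digests are enabled here; proceed as today.
  }, function () {
    // SHA-1 digests are disabled or unsupported: fall back to a JS SHA-1
    // implementation if the protocol demands SHA-1, or use SHA-256 if the
    // application controls both ends - the same decision it already has
    // to make under no-MTI.
  });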

The "all or nothing" approach to WebCrypto, as you seemingly are arguing for (in the name of interop), makes WebCrypto practically useless.
Comment 31 Ryan Sleevi 2014-06-11 09:11:22 UTC
(In reply to Web Cryptography Working Group from comment #27)
> The NIST curves are  widely deployed. Nonetheless, there is also demand for
> Curve25519. Ultimately, while browsers often do have local policy they have
> to deal with, the W3C needs to have an test-suite to prove interoperability
> over the API. We should test every algorithm that is listed as registered. 
> I see no reason why not to register Curve25519 given the demand and given
> that it can be specified, and if browsers don't implement it, that will show
> up in the CR test-suite report. The CR test-suite report for browsers, which
> will depend on underlying libraries, will then demonstrate from the
> interopeability that developers can expect and rely on.  

The support for Curve25519 is a separate bug. Let's keep it that way. If you want to talk about lack of interoperability, specification, or maturity, that bug is a great example of exactly why we should not be including it in Web Crypto.
Comment 32 Harry Halpin 2014-06-11 09:39:03 UTC
(In reply to Ryan Sleevi from comment #31)
> (In reply to Web Cryptography Working Group from comment #27)
> > The NIST curves are  widely deployed. Nonetheless, there is also demand for
> > Curve25519. Ultimately, while browsers often do have local policy they have
> > to deal with, the W3C needs to have an test-suite to prove interoperability
> > over the API. We should test every algorithm that is listed as registered. 
> > I see no reason why not to register Curve25519 given the demand and given
> > that it can be specified, and if browsers don't implement it, that will show
> > up in the CR test-suite report. The CR test-suite report for browsers, which
> > will depend on underlying libraries, will then demonstrate from the
> > interopeability that developers can expect and rely on.  
> 
> The support for Curve25519 is a separate bug. Let's keep it that way. If you
> want to talk about lack of interoperability, specification, or maturity,
> that's a great one to focus on exactly why we should not be including it in
> Web Crypto.

The point is that interoperability for WebCrypto will likely be more difficult than for other APIs because of its dependency on the OS level (for the reasons mentioned by Ryan), and so it's hard to specify a priori requirements for "just browsers". Nonetheless, the test-suite *should* test every registered algorithm across browser and OS combinations to determine what the level of *actually implemented* interoperability is. We are hoping the "suggested for implementation" algorithms will be implemented uniformly and so deliver interoperability, but again - we won't know till we get the test-suite.

So I suggest we revisit this bug in detail after we get into CR, but that we add some clarifying text to the next Editor's Draft about the relationship of "suggested for implementation" to interoperability in 18.2 - namely that while there are no strictly required algorithms in this draft, we will report any interoperable algorithms in the test-suite.

Curve25519 was used as an example of an algorithm with real use-cases and actual demand from developers that can be registered (although it does lack a proper IETF RFC etc.) but may well not be implemented by browsers; it should nonetheless likely be registered and tested. That being said, I can't see "it's unlikely to be implemented" as a good argument until we actually register it and then test the interop to prove it isn't implemented. There is a large class of such algorithms.

We're likely going to have to revisit the whole registered/suggested (recommended) distinction after CR - what stays in the spec and test-suite, and what is just listed in a wiki. I'd like concrete suggestions on how that process should work.

For example, options could be:

1) Any algorithm which can be specified should be tested.
2) Any algorithm which has one (or two?) implementations should be in the final test-suite.
3) Only algorithms that work across all major implementation/OS combinations should be "suggested for implementation" in the spec.

There's lots of variance possible here. I personally feel we just need to make progress on the test-suite and then revisit these questions of interoperability, but we do need a good rule on what we are going to test. So far, it seems like we're going to test everything registered by the time we move to CR.



[1] https://dvcs.w3.org/hg/webcrypto-api/raw-file/tip/spec/Overview.html#algorithms
Comment 33 Henri Sivonen 2014-07-07 09:46:10 UTC
(In reply to Ryan Sleevi from comment #30)
> (In reply to Henri Sivonen from comment #28)
> > Which major browsers defer to libraries that are not shipped by the same
> > entity that ships the browser?
> 
> This is not the correct question to ask.
>
> CNG is only updated on major Windows releases. IE is on a separate timeframe.
> Common Crypto is only updated on major OS X releases. Safari is on a
> separate timeframe.
> Firefox intentionally supports OS-distributed NSS. In fact, it is the most
> common distribution of Firefox on Linux platforms that Mozilla has no direct
> control over the version of NSS used.
> Similarly, Chrome on Linux makes the same deferral. On other platforms, it
> uses a mix of cryptographic capabilities provided by the OS or by
> third-party libraries.

So the propagation time of changes to the system libraries is longer. It doesn't follow that Microsoft couldn't make changes to their platform crypto to accommodate IE's Web Crypto impl, Apple couldn't make changes to their platform crypto to accommodate Safari or Red Hat couldn't make changes to NSS to accommodate Firefox--eventually.

My comments are about converging on interop eventually--not about exiting CR next week.

> > I believe it happens. I just think it is sufficiently unreasonable that it's
> > not good for the spec to treat it as a legitimate conforming thing and from
> > the spec perspective it would make more sense to treat it like a user
> > self-sabotaging other features of their self-compiled copy of an Open Source
> > browser.
> 
> This is an unreasonably hostile path to take to users, and on an extreme
> that is not accurate. Show me an industry regulation that says things like
> "Thou shalt not access the camera", compared to the many industry and
> governmental regulations that say "Thou shalt not use SHA-1" (for example).

So if you are banned from using a particular crypto primitive, don't invoke it in your intranet app. It's not reasonable to withdraw it from the browsers you have installed, thereby breaking random external sites.

> > Right. My point is that the dynamics of Web compatibility are such that if
> > an algorithm is shown weak in the future, you won't be able to make Web apps
> > secure by withdrawing the algorithm from the API underneath those apps.
> 
> Correct.
> 
> And that WebCrypto, in present form, allows implementations to do this, is a
> FEATURE, not a bug.

Wait, you think it's a FEATURE to *allow* something that you think is correctly characterized as "won't be able to"???

> > For the rest of the Web that doesn't seek deliberate incompatibility, there
> > is value in having a clear definition of what the compatible feature set for
> > vanilla browsers is.
> 
> Again, you have seemingly missed the point, or at least placed conflicting
> statements.
> 
> Your position, simply stated, appears to be that if I want to disable SHA-1,
> as a user, my only recourse is to disable all of WebCrypto, because it no
> longer fits the mandatory to implement algorithms.

No. What makes you think that a user who wants to remove features from their Web browser--be it crypto primitives or anything else--needs anyone to bless the resulting personal feature set as conforming? We don't need a spec to consider it conforming for a user to delete a block of code from the HTML parser and recompile their personal browser instance.

I don't care about whether users can call their browser compliant if they disable some features either by flipping pref or by changing code and recompiling. I'm interested in documenting an eventually consistent target state across the feature set offered by major browsers in their default configurations. For browsers that suffer some non-agility from deferring to platform libraries, "eventually" may take a bit more time than the WG has planned for their CR schedule.
Comment 34 Ryan Sleevi 2014-07-07 22:03:37 UTC
(In reply to Henri Sivonen from comment #33)
> So the propagation time of changes to the systems libraries is longer. It
> doesn't follow that Microsoft couldn't make changes to their platform crypto
> to accommodate IE's Web Crypto impl, Apple couldn't make changes to their
> platform crypto to accommodate Safari or Red Hat couldn't make changes to
> NSS to accommodate Firefox--eventually.

I think we're talking past each other.

Even if the underlying platform changes, it means that you're now looking at limiting UA support of the feature to only the bleeding edge OS - which we empirically know few users run.

Further, if we go the route of "all or nothing", which I understand you to be suggesting, then it means we would run the risk of cutting off the nose to spite the face on one extreme, or landing with something 'less than desirable' on the other.

> So if you are banned from using a particular crypto primitive, don't invoke
> it in your intranet app. It's not reasonable to withdraw it from the
> browsers you have installed thereby breaking random external sites.

That's not how these things work, unfortunately. Many (most?) of these sorts of requirements explicitly require disabling.

> Wait, you think it's a FEATURE to *allow* something that you think is
> correctly characterized as "won't be able to"???

Yes.

Breaking Web Compat is Bad. It's well-understood you don't remove something without extreme pain and prejudice. I'm aware that this is particularly troublesome for some, especially those whose ears prickle at security APIs, but the reality is, once we ship something to the Web, it's very hard to remove.

This is why the Web has been so successful! You can still go to your old Angelfire or Geocities page, written in the era of HTML 1.0, and it "Just Works". Modulo a blink tag or two, which would still render.

The Web is defined by its backwards compatibility - for better or for worse.

> No. What makes you think that a user who want to remove features from their
> Web browser--be it crypto primitives or anything else--needs anyone to bless
> the resulting personal feature set as conforming? We don't need a spec to
> consider it conforming for a user to delete a block of code from the HTML
> parser and recompile their personal browser instance.
> 
> I don't care about whether users can call their browser compliant if they
> disable some features either by flipping pref or by changing code and
> recompiling. I'm interested in documenting an eventually consistent target
> state across the feature set offered by major browsers in their default
> configurations. For browsers that suffer some non-agility from deferring to
> platform libraries, "eventually" may take a bit more time than the WG has
> planned for their CR schedule.

This doesn't really address at all the point I was making, so I fear we're talking past each other here as well.

What's the expected behaviour, for a UA, if a user chooses to disable an algorithm (by policy)?
Does it
  a) Allow all of WebCrypto, mod the disabled algorithm
  b) Disable all of WebCrypto, because it fails to meet the profile?
  c) Other?

Whatever the choice (a-c), if we go MTI, the spec needs to provide guidance on that.

If you answer (a/c), then further questions emerge, such as:
- What if the UA can't determine whether or not the user intentionally disabled an algorithm (i.e.: it's disabled at an OS layer that the UA is unaware/agnostic towards)
- What should the behaviour of the site operator's script be when it attempts to use SHA-1?
- What should "well-behaved" application developers do to handle this?
  a) Not handle it at all. Watch it burn!
  b) Handle it, with fallback to JS crypto
  c) Other?
- If you answered (b) [which, evidence suggests, is invariably what people will do], how is that different than the present state of things?
Comment 35 Henri Sivonen 2014-07-08 10:04:47 UTC
(In reply to Ryan Sleevi from comment #34)
> Further, if we go the route of "all or nothing", which I understand you to
> be suggesting, than it means we would run the risk of cutting off the nose
> to spite the face on one extreme, or landing with something 'less than
> desirable' on the other.

I'm not advocating for all or nothing. E.g. WHATWG HTML or a sufficiently large CSS spec doesn't get implemented in an all-or-nothing fashion. Still, the specs set the expectation that they'll be eventually fully implemented instead of setting the expectation that implementors are welcome to implement even disjoint subsets and that's fine ("conforming"). 

I expect Web Crypto to be implemented piecewise.

> What's the expected behaviour, for a UA, if a user chooses to disable an
> algorithm (by policy)?
> Does it
>   a) Allow all of WebCrypto, mod the disabled algorithm
>   b) Disable all of WebCrypto, because it fails to meet the profile?
>   c) Other?

Preferably c): making it hard to mess with the feature set, so that only those who *really* have to do so will. But when they do, a).

> Whatever the choice (a-c), if we go MTI, the spec needs to provide guidance
> on that.
> 
> If you answer (a/c), then further questions emerge, such as:
> - What if the UA can't determine whether or not the user intentionally
> disabled an algorithm (i.e.: it's disabled at an OS layer that the UA is
> unaware/agnostic towards)

Why in the case of a) does the UA need to know the intent, if the algorithm has been withheld from it?

> - What should the behaviour of the site operator's script be when it
> attempts to use SHA-1?

The same as requesting an unknown algorithm.
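
(A minimal sketch of what that looks like to the calling script, assuming the UA surfaces both cases as the same kind of rejection, e.g. a NotSupportedError:)

  var data = new Uint8Array([1, 2, 3]);

  // An algorithm the UA has never heard of...
  crypto.subtle.digest({ name: "MADE-UP-HASH" }, data)
    .catch(function (e) { /* rejected */ });

  // ...and a real algorithm disabled by local policy look identical;
  // under that assumption the script cannot tell the two apart.
  crypto.subtle.digest({ name: "SHA-1" }, data)
    .catch(function (e) { /* same rejection */ });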

> - What should "well-behaved" application developers do to handle this?
>   a) Not handle it at all. Watch it burn!
>   b) Handle it, with fallback to JS crypto
>   c) Other?

b) is likely in the short term with a) likely becoming more prevalent over time (based on what we know about how Web developers behave as features become more consistently available across browsers).

I imagine once in a while someone will do c): show a UI message telling the user to go download a browser that supports Web Crypto.

> - If you answered (b) [which, evidence suggests, is invariably what people
> will do], how is that different than the present state of things?

There's a difference between

 1) establishing a feature set that implementors agree is a worthwhile target to converge their default configurations to over time

and

 2) not even setting the expectation for browsers converging on a common interoperable feature set available by default (either resulting in a mess or resulting in expectation setting happening outside the WG/spec).
Comment 36 Ryan Sleevi 2014-07-09 23:44:51 UTC
(In reply to Henri Sivonen from comment #35)
> I expect Web Crypto be implemented piecewise.

Considering CSS doesn't have varying export controls, that's not surprising.

This is very similar to the WebGL EXT_ sets - except that in addition to the IPR concerns, the on-chip concerns, the question of whether an EXT_ can be sufficiently emulated to drop the EXT_, and the OS-level concerns, there are also the export concerns.

Finally, there's clearly a diverse set of people interested in this - from the "Node.JS/Win.JS" folks looking for server-side operation, to the "exposed to websites" group, to the "exposed to sysapps" group, to the "usable in devices that use web technologies" group (e.g. set-top boxes). There's a vast and varying set of requirements here, even beyond those of CSS.

> 
> > What's the expected behaviour, for a UA, if a user chooses to disable an
> > algorithm (by policy)?
> > Does it
> >   a) Allow all of WebCrypto, mod the disabled algorithm
> >   b) Disable all of WebCrypto, because it fails to meet the profile?
> >   c) Other?
> 
> Preferably c): Making it hard to to mess with the feature set so that only
> those who *really* have to do so. But when they do, a).

Specs don't get to decide how hard we make something. Nor should they. But I agree, a) is the only 'sane' approach.

> > - What should the behaviour of the site operator's script be when it
> > attempts to use SHA-1?
> 
> The same as requesting an unknown algorithm.

Agreed.

> 
> > - What should "well-behaved" application developers do to handle this?
> >   a) Not handle it at all. Watch it burn!
> >   b) Handle it, with fallback to JS crypto
> >   c) Other?
> 
> b) is likely in the short term with a) likely becoming more prevalent over
> time (based on what we know about how Web developers behave as features
> become more consistently available across browsers).
> 
> I imagine once in a while someone will do c): Show a UI message to tell the
> user go download a browser that supports Web Crypto.

Yup.

> > - If you answered (b) [which, evidence suggests, is invariably what people
> > will do], how is that different than the present state of things?
> 
> There's a difference between
> 
>  1) establishing a feature set that implementors agree is a worthwhile
> target to converge their default configurations to over time
> 
> and
> 
>  2) not even setting the expectation for browsers converging on a common
> interoperable feature set available by default (either resulting in a mess
> or resulting in expectation setting happening outside the WG/spec).

Considering that the converging set of expectations varies based on device profiles/platforms/use cases (Node vs Set-Top Box vs 'traditional' desktop/mobile UA, which themselves vary between desktop and mobile), I absolutely think it's something that will have different use cases and profiles.

Let me emphatically state that "conformance" lacks a strong definition in the context of cryptographic operations. It is essential to treat them "as if" they were hardware capabilities. These capabilities MAY be emulated in software, but that is dependent on a variety of externalities.

Consider RSA-OAEP as an example. Chromium wants to use the cryptographic capabilities of Linux (the NSS library), except different distros ship different versions of NSS with different capabilities.

What does "conformance" mean, for the Chromium developers?
- Does "conformance" simply meaning making the calls to the underlying library, in the hope that they'll succeed?
- Does "conformance" mean that Chromium should try to implement RSA-OAEP itself, independent of the cryptographic libraries?
  - Is a distinction made between the library being an older version, which may not support RSA-OAEP, and the library being newer, but having RSA-OAEP explicitly disabled by the user (through means outside of Chromium's control?)
- What about for algorithms for which there are no PKCS#11 mechanism points assigned yet, like Curve25519 or the NUMS curves?
  - Does conformance mean implementing in software, ala WebGL-software-rendering?

What does it mean for the web developer who wishes to use Curve25519 or the NUMS curves or RSA-OAEP? What if it's available on every other platform but Chromium for Linux, and there only under certain situations - does this change the answer?

The choice of no-MTI reflects the reality that there are untold numbers of ways and reasons for which a given algorithm can be unavailable. The ideal, aspirational world is one in which every UA is able to implement every algorithm, on every platform, and for every key size, parameter size, etc. But in the real world that will never be possible, and developers have to design their code around that.

Some of this can be handled gracefully by the author - after all, there may be a choice of algorithms that they can use. They can negotiate a-priori on the cryptographic construction (as most protocols do anyways, for cryptographic agility). It does not imply a lack of utility for the API.
Comment 37 virginie.galindo 2014-07-25 21:50:46 UTC
Based on the exchanges related to this bug, one possible way to move forward is to define a browser profile after interoperability testing is conducted with different implementations. This browser profile should be non-normative and should describe the exact behavior of the browser in case some of the algorithms are not available, only partially available, or disabled by the user.
As such, this bug should be treated once implementations have been demonstrated, which means after the call for implementations (see process http://www.w3.org/2005/10/Process-20051014/tr.html#cfi).

That possible option to move forward will be discussed by the WG during the call on the 28th of July [1]. 

Virginie
chair of Web Crypto WG

[1] http://lists.w3.org/Archives/Public/public-webcrypto/2014Jul/0106.html
Comment 38 Anne 2014-07-27 11:13:49 UTC
Please justify why it would be non-normative. It should be normative.
Comment 39 Henri Sivonen 2014-07-30 11:37:20 UTC
(In reply to Ryan Sleevi from comment #36)
> Finally, there's clearly a diverse set of people interested in this - from
> the "Node.JS/Win.JS" folks looking for server-side operation, from the
> "exposed to websites" group, to the "exposed to sysapps", to the "usable in
> devices that use web technologies" (e.g. set top boxes), there's a vast and
> varying set of requirements here, even beyond that of CSS.

I think it's fine for Node.js to implement browser APIs, but I think it's a mistake to let the server side wag the browser-side spec. See: DOM.

I also think it's a mistake to make browser API specs vaguer in order to make devices with low-quality browsers or browser ports comply. See: Various Mobile Profiles from the feature phone era.

> Let me emphatically state that "conformance" lacks a strong definition in
> the context of cryptographic operations. It is essential to treat them "as
> if" they were hardware capabilities.

Do you say this in order to make sure that stuff *can* map to hardware capabilities like AES-NI or do you say this in order to be able to treat crypto differently policy-wise compared to features that browsers are expected to have regardless of the underlying hardware?

> Consider RSA-OAEP as an example. Chromium wants to use the cryptographic
> capabilities of Linux (the NSS library), except different distros ship
> different versions of NSS with different capabilities.

Is this issue going to be moot once Chromium starts using BoringSSL?

> What does "conformance" mean, for the Chromium developers?
> - Does "conformance" simply meaning making the calls to the underlying
> library, in the hope that they'll succeed?
> - Does "conformance" mean that Chromium should try to implement RSA-OAEP
> itself, independent of the cryptographic libraries?

Isn't this what Chromium is on track to doing once it switches to BoringSSL?

>   - Is a distinction made between the library being an older version, which
> may not support RSA-OAEP, and the library being newer, but having RSA-OAEP
> explicitly disabled by the user (through means outside of Chromium's
> control?)

I'd say the question of whether Chromium is conforming in that case wouldn't be of practical importance. From the practical perspective, it matters whether stuff works. If stuff doesn't work, the system+configuration as a whole would be non-conforming if RSA-OAEP was in the set of algorithms that the WG has deemed to be part of the Web Platform.

> - What about for algorithms for which there are no PKCS#11 mechanism points
> assigned yet, like Curve25519 or the NUMS curves?

Seems reasonable to implement Curve25519 without any PKCS#11 layer in between in that case.

>   - Does conformance mean implementing in software, ala
> WebGL-software-rendering?
> 
> What does it mean for the web developer, who wishes to use Curve25519 or the
> NUMS curves or RSA-OAEP?

If the Web developer is implementing something that has NaCl at the other end of the pipe, then if Curve25519 isn't available, AFAICT, the options are polyfilling with JS (including asm.js) with the attendant risk of timing side-channel attacks, telling the user to get another browser, or changing what's at the other end of the pipe (if possible).

> What if it's available on every other platform but
> Chromium for Linux, and only under certain situations, does this change the
> answer?

That would probably make the "telling the user to get another browser" option the most likely one.

> The choice of no-MTI is a choice that reflects the reality that there are
> untold numbers of ways and reasons for which a given algorithm can be
> unavailable. The ideal, aspirational world is that every UA is able to
> implement every algorithm, on every platform, and for every key size,
> parameter size, etc. But the real world is that will never be possible, and
> developers have to design their code around that.

I don't expect a world where every browser implements every algorithm ever thought up. I expect the WG to maintain documentation of which set of algorithms browsers are expected to implement either because the algorithms provide good crypto and should *become* more common or because they are needed for compatibility with very common legacy at the other end of the pipe. That target set might expand over time and expansions of the set might not be implemented immediately, but it's still better than not trying to aim for convergence of the available features across implementations.

In particular, I'd expect the WG to reject from that set algorithms that are neither necessary for compatibility nor provide better (yeah, what's "better" is probably hard to articulate fully) crypto than what's already in the set. ("Our research department came up with these" shouldn't be good enough, in itself, for inclusion.) As an obvious example, just because there exists a definition for a BADA55 curve somewhere doesn't mean that such a curve should be put in the set of algorithms that browsers are supposed to implement. (More controversially: the most common NIST curves probably need to be in the set due to compatibility considerations despite the open questions about their constants, but it doesn't follow that a bunch of other curves whose constants have similar provenance should be added, too. Instead, other curves added to the set should probably have better justifiable design choices.)

In practice, though, I'd expect the set of algorithms to be pretty heavily influenced by what the browsers with the most market share are willing to implement (in the fashion of comment 37) and not just by abstract discussions in the WG.
Comment 40 Ryan Sleevi 2014-07-30 18:33:34 UTC
(In reply to Henri Sivonen from comment #39)
> I think it's fine for Node.js to implement browser APIs, but I think it's a
> mistake to let the server side wag the browser-side spec. See: DOM.

I think it's a pretty gross mischaracterization to suggest the server side is wagging the browser side.

I think it entirely disregards the point that a "web" device ranges from your smartwatch to your laptop, and that there are a variety of concerns and requirements in between.

> 
> I also think it's a mistake to make browser API specs vaguer in order to
> make devices with low-quality browsers or browser ports comply. See: Various
> Mobile Profiles from the feature phone era.

Low-quality is an entirely unnecessary pejorative here, unless you believe that "Web" is meant to encompass only "desktop machines with high-power, general purpose CPUs" - which neither the W3C nor many of the UAs consider to be the end-all, be-all.
 
> > Let me emphatically state that "conformance" lacks a strong definition in
> > the context of cryptographic operations. It is essential to treat them "as
> > if" they were hardware capabilities.
> 
> Do you say this in order to make sure that stuff *can* map to hardware
> capabilities like AES-NI or do you say this in order to be able to treat
> crypto differently policy-wise compared to features that browsers are
> expected to have regardless of the underlying hardware?

The answer is both. On a lower-end device constrained by die size, it's absolutely a matter of hardware. On the higher-end devices, there's still a matter of policy.

In both cases, there are a set of factors and considerations that exist outside the purely abstract notion of "implement the following steps" - concerns such as export controls, concerns such as hardware capabilities, etc.

> Isn't this what Chromium is on track to doing once it switches to BoringSSL?

Both of your responses are entirely irrelevant to the question I was asking, and show a failure to appreciate the concerns. You still have not answered the questions I asked, which apply not just to Chromium, but to all user agents.

Your proposed path is one that makes it impossible for a UA to reasonably be agnostic about crypto - which only serves to solidify the notion of a "few UAs", and that the web technologies are impossible to implement unless you're an entrenched incumbent.

Chromium's ability to launch was predicated on its ability to use the existing platform crypto libraries - including on Windows. Only as it grew, and the "experiment" proved worthwhile, was it even possible to switch to NSS, and only recently has it become "possible" to switch to BoringSSL - and not without significant costs and engineering investment over *years*. That's not something to so glibly dismiss with a one-off, nor does it do anyone in the WG any favours, when the heart of the question remains - which is, "what is conformant?"

> >   - Is a distinction made between the library being an older version, which
> > may not support RSA-OAEP, and the library being newer, but having RSA-OAEP
> > explicitly disabled by the user (through means outside of Chromium's
> > control?)
> 
> I'd say the question of whether Chromium is conforming in that case wouldn't
> be of practical importance. From the practical perspective, it matter if
> stuff works. If stuff doesn't work, the system+configuration as a whole
> would be non-conforming if RSA-OAEP was in the set of algorithms that the WG
> has deemed to be part of the Web Platform.

This is of the utmost practical importance, because one defines the other. You can't just wave this away, as you did earlier. What does conformance mean on such a system? Does the lack of RSA-OAEP mean you lack WebCrypto entirely? What about the fact that such a profile exists - on a vast number of machines, and in ways that non-skilled users (e.g. those who aren't going to recompile a bleeding-edge, ToT version of NSS) can't just solve? And what about other platforms - like Windows or OS X - where users can't even arbitrarily extend the crypto library?

> 
> > - What about for algorithms for which there are no PKCS#11 mechanism points
> > assigned yet, like Curve25519 or the NUMS curves?
> 
> Seems reasonable to implement Curve25519 without any PKCS#11 layer in
> between in that case.

It may seem reasonable to you, but only because you seem to be choosing to ignore the very concerns being presented to you. The use of the PKCS#11 layer is what allows a UA to avoid, entirely, any of the legal or regulatory frameworks surrounding it. Implementing Curve25519, without any PKCS#11 layer, requires significant investment in legal efforts (at a MINIMUM). And if Curve25519 was MTI, what then?

> If the Web developer is implementing something that has NaCl at the other
> end of the pipe, then if Curve25519 isn't available, AFAICT, the options are
> polyfilling with JS (including asm.js) with the risk to timing side channel
> attacks, telling the user to get another browser or changing what's at the
> other end of the pipe (if possible).

Exactly what the spec says today.

> I don't expect a world where every browser implements every algorithm ever
> thought up. I expect the WG to maintain documentation of which set of
> algorithms browsers are expected to implement either because the algorithms
> provide good crypto and should *become* more common or because they are
> needed for compatibility with very common legacy at the other end of the
> pipe. That target set might expand over time and expansions of the set might
> not be implemented immediately, but it's still better than not trying to aim
> for convergence of the available features across implementations.

This is what the WG has always said, from the beginning, when it decided not to do MTI. Nothing has changed here.

 
> In particular, I'd expect the WG to reject from that set algorithms that are
> neither necessary for compatibility nor provide better (yeah, what's
> "better" is probably hard to articulate fully) crypto than what's already in
> the set. ("Our research department came up with these" shouldn't be good
> enough, in itself, for inclusion.) As a obvious example, just because there
> exists a definition for a BADA55 curve somewhere doesn't mean that such a
> curve should be put in the set of algorithms that browsers are supposed to
> implement. 

This is the kind of decision that will completely prevent the WG from progressing - and already has, several times. Are you an American imperialist who refuses to accept things such as SEED or GOST? Are you a European loyalist who loves the Brainpool curves, even though one can argue they too have issues? Even the cryptographic community is divided on what an appropriate set of criteria for curves is. This WG is the LEAST CAPABLE of making such a decision.

So introducing such arbitrary value judgements just serves to drive endless debate in a WG not well-suited for it. Even if those judgements are based on an objective set of criteria (the absolute MINIMUM requirement for MTI), the criteria themselves will be somewhat arbitrary (or "consensus driven", which is indistinguishable from arbitrariness).

> (More controversially: The most common NIST curves probably need
> to be in the set due to compatibility considerations despite the open
> questions about their constants, but it doesn't follow that a bunch of other
> curves whose constants have similar provenance should be added, too.
> Instead, other curves added to the set should probably have better
> justifiable design choices.)

And "better justifiable design choices" is a judgement call that, as the CFRG shows, is one that reasonable, seasoned members of the cryptographic community, well versed in the nuance, STILL disagree on.

> 
> In practice, though, I'd expect the set of algorithms to be pretty heavily
> influenced by what the browsers with the most market share are willing to
> implement (in the fashion of comment 37) and not just by abstract
> discussions in the WG.
Comment 41 Henri Sivonen 2014-07-31 11:35:27 UTC
(In reply to Ryan Sleevi from comment #40)
> I think it entirely disregards the point that a "web" device ranges from
> your smartwatch to your laptop, and that there are a variety of concerns and
> requirements in between.

This is the sort of argument that was used for Mobile Profiles. Mobile Profiles weren't the solution for making the Web work on mobile. Porting full browser engines to mobile devices was.

> > I also think it's a mistake to make browser API specs vaguer in order to
> > make devices with low-quality browsers or browser ports comply. See: Various
> > Mobile Profiles from the feature phone era.
> 
> Low-quality is an entirely unnecessary pejorative here, unless you believe
> that "Web" is meant to encompass "Desktop machines with high-power, general
> purpose CPUs". Which the W3C, nor do many of the UAs, consider to be the
> end-all, be-all.

You can get a full browser engine ported to a set-top box. The hardware required to run a full browser engine fits on an HDMI+USB dongle these days and doesn't even require a *box*. B2G is available for porting. It looks like Google launched an Android variant for this space, presumably capable of running Chromium. There are various consultancies that'll port WebKit to a set-top box. In that case, it's a matter of how well the layer below WebKit is done--it's not a matter of CPU power. (I mean the layer of functionality that B2G and Chromium include but that the cross-platform WebKit does not, and which needs to be supplied on a per-port basis.) If a WebKit port for a TV-oriented device cuts some corners, how should it be described?

> Chromium's ability to launch was predicated on its ability to use the
> existing platform crypto libraries - including on Windows. Only as it grew,
> and the "experiment" proved worthwhile, did it become possible to switch to
> NSS, and only recently has it become "possible" to switch to BoringSSL - and
> not without significant costs and engineering investment over *years*. That's
> not something to so glibly dismiss with a one-off, nor does it do anyone in
> the WG any favours, when the heart of the question remains - which is, "what
> is conformant?"

For launching a new browser, "what's conformant" is less relevant than "what works". However, it's a spec failure if implementing "what's conformant" from the specs doesn't give you "what works".

Being able to launch new browsers by not having to reverse engineer everything that came before is supposed to be facilitated by specs. Suppose Servo later reaches the point where adding Web Crypto becomes relevant, in an effort to make Servo useful in a future where various Web sites/apps use Web Crypto. Servo then needs to implement a set of algorithms that makes Web Crypto-using sites work in Servo. If the spec doesn't say what that set is, the spec is failing to serve a function specs are supposed to serve.

> > >   - Is a distinction made between the library being an older version, which
> > > may not support RSA-OAEP, and the library being newer, but having RSA-OAEP
> > > explicitly disabled by the user (through means outside of Chromium's
> > > control?)
> > 
> > I'd say the question of whether Chromium is conforming in that case wouldn't
> > be of practical importance. From the practical perspective, it matters if
> > stuff works. If stuff doesn't work, the system+configuration as a whole
> > would be non-conforming if RSA-OAEP was in the set of algorithms that the WG
> > has deemed to be part of the Web Platform.
> 
> This is of the utmost practical importance, because one defines the other.
> You can't just wave this away, as you did earlier. What does conformance
> mean on such a system?

It'll mean whether some sites work or don't work.

> Does the lack of RSA-OAEP mean you lack WebCrypto entirely?

Probably not.

> > > - What about for algorithms for which there are no PKCS#11 mechanism points
> > > assigned yet, like Curve25519 or the NUMS curves?
> > 
> > Seems reasonable to implement Curve25519 without any PKCS#11 layer in
> > between in that case.
> 
> It may seem reasonable to you, but only because you seem to be choosing to
> ignore the very concerns being presented to you. The use of the PKCS#11
> layer is what allows a UA to avoid, entirely, any of the legal or regulatory
> frameworks surrounding it.

I understand the merit of separation of concerns in the abstract sense.

I fail to understand how that's relevant in practice when in all the common cases the same entity (Google, Mozilla, Microsoft, Apple, Opera, $LINUX_DISTRO, $EMBEDDED_DEVICE_VENDOR) ships both the browser engine and the crypto library.

If you use Chrome, you get both Blink and NSS from Google, right? If you use pre-built Chromium, you probably get both Chromium and NSS from the same Linux distribution.

> So introducing such arbitrary value judgements just serves to drive endless
> debate in a WG not well-suited for it. Even if those judgements are based on
> an objective set of criteria (the absolute MINIMUM requirement for MTI), the
> criteria themselves will be somewhat arbitrary (or "consensus driven", which
> is indistinguishable from arbitrariness).

I can see that there are political reasons that will make the Web Crypto spec fail to serve a part of the function that Web specs are supposed to serve for the purpose of the Servo example above.

I guess the algorithms being discrete units makes the outcome less bad in practice than a spec stipulating that random parts of the spec are optional. *Someone* needs to decide which algorithms get shipped by the entities that ship both browsers and crypto libs, and the browsers with enough market share that Web authors bother testing with them will determine what set of algorithms is practically needed for Web compat. So to go from research to a product, a project like Servo would, instead of referring to the spec, have to examine the sets of algorithms present by default in popular browsers, take the intersection of those sets as a lower bound and the union as an upper bound of what needs to be implemented, and reconcile the difference between the two by examining how often broken sites are encountered.
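For illustration, a small JavaScript sketch of that lower-bound/upper-bound computation; the browser names and per-browser algorithm lists below are placeholders, not survey data:

// Placeholder data: which algorithms each surveyed browser ships by default.
const shippedByDefault = {
  browserA: ["AES-GCM", "AES-CBC", "HMAC", "SHA-256", "RSA-OAEP", "ECDSA"],
  browserB: ["AES-GCM", "AES-CBC", "HMAC", "SHA-256", "RSASSA-PKCS1-v1_5"],
  browserC: ["AES-GCM", "HMAC", "SHA-256", "RSA-OAEP"],
};

const sets = Object.values(shippedByDefault).map((list) => new Set(list));

// Lower bound: algorithms that every surveyed browser ships by default.
const lowerBound = [...sets[0]].filter((alg) => sets.every((s) => s.has(alg)));

// Upper bound: algorithms shipped by at least one surveyed browser.
const union = new Set();
sets.forEach((s) => s.forEach((alg) => union.add(alg)));
const upperBound = [...union];

console.log({ lowerBound, upperBound });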

I'll concede that the political issues probably make the WG ineffective at overtly stating what will, for practical purposes, end up being determinable (as a lower bound and an upper bound per the previous paragraph) from what the WG participants end up doing, each in their own corner, so I won't debate this further.
Comment 42 Boris Zbarsky 2014-07-31 13:30:19 UTC
> Being able to launch new browsers by not having to reverse engineer everything
> that came before is supposed to be facilitated by specs.

In fact, this is arguably the primary purpose of web specs as being worked on today.  There would not be much of a reason for creating something like the URL spec if it were not for this concern...
Comment 43 Ryan Sleevi 2014-07-31 18:50:51 UTC
(In reply to Henri Sivonen from comment #41)
> If you use Chrome, you get both Blink and NSS from Google, right?

As I have repeatedly said, no.

> If you use
> pre-built Chromium, you probably get both Chromium and NSS from the same
> Linux distribution.

As I have repeatedly said, no.

The same is true for Firefox.
Comment 44 Henri Sivonen 2014-08-01 10:21:17 UTC
(In reply to Ryan Sleevi from comment #43)
> The same is true for Firefox.

I just downloaded a release build from ftp.mozilla.org for Linux and it came with
libnss3.so
libnssckbi.so
libnssdbm3.chk
libnssdbm3.so
libnssutil3.so
libplc4.so
libplds4.so
libsmime3.so
libsoftokn3.chk
libsoftokn3.so

What am I missing?
Comment 45 Harry Halpin 2014-08-04 14:25:27 UTC
(In reply to Henri Sivonen from comment #44)
> (In reply to Ryan Sleevi from comment #43)
> > The same is true for Firefox.
> 
> I just downloaded a release build from ftp.mozilla.org for Linux and it came
> with
> libnss3.so
> libnssckbi.so
> libnssdbm3.chk
> libnssdbm3.so
> libnssutil3.so
> libplc4.so
> libplds4.so
> libsmime3.so
> libsoftokn3.chk
> libsoftokn3.so
> 
> What am I missing?

I believe both Ryan's and Henri's arguments have merit, but I might add that we would also like to leave Last Call and we have another call coming up next week.

Right now, we have zero mandatory algorithms. It is very likely that some specialized implementations of WebCrypto (such as that being proposed by Netflix) will not be "in browsers." That being said, I also recognize that we at W3C owe Web developers some promise of interoperability in browsers for some common functions. On the call, Ryan was OK with some algorithms being "normative for browsers".

Can we revisit Virginie's earlier proposal, but with the following change to normative:

"We will define a browser profile after interoperability testing is conducted with different implementations. This browser profile should be *normative* and should describe the exact behavior of the browser in case part of the algorithms are not available, or partially available, or disabled by the user. 
As such it is required to treat that bug once implementations have been demonstrated, which means after the call for implementation (see process http://www.w3.org/2005/10/Process-20051014/tr.html#cfi)"

Thus, we can add as a "Feature at Risk" going into CR some text that there may be TBD normative algorithms for browser implementation, and then determine those precise algorithms (if any, but I'd be surprised if there weren't some) before exiting CR.

Would that satisfy both the commenters and the editor?
Comment 46 virginie.galindo 2014-08-04 14:50:59 UTC
(In reply to Harry Halpin from comment #45)
> (In reply to Henri Sivonen from comment #44)
> > (In reply to Ryan Sleevi from comment #43)

> 
> I believe both Ryan's and Henri's arguments have merit, but I might add that
> we would also like to leave Last Call and we have another call coming up
> next week. 
> 
> Right now, we have zero mandatory algorithms. It is very likely
> that some specialized implementations of WebCrypto (such as that being
> proposed by Netflix) will not be "in browsers." That being said, I also
> recognize that we at W3C owe Web developers some promise of interoperability
> in browsers for some common functions. On the call, Ryan was OK with some
> algorithms being "normative for browsers". 
> 
> Can we revisit Virginie's earlier proposal, but with the following change to
> normative:
> 
> "We will define a browser profile after interoperability testing is
> conducted with different implementations. This browser profile should be
> *normative* and should describe the exact behavior of the browser in case
> some of the algorithms are not available, only partially available, or
> disabled by the user. 
> As such, this bug must be addressed once implementations have been
> demonstrated, which means after the call for implementation (see process
> http://www.w3.org/2005/10/Process-20051014/tr.html#cfi)"
> 
> Thus, we can add as a "Feature at Risk" going into CR some text that there
> may be TBD normative algorithms for browser implementation, and then
> determine those precise algorithms (if any, but I'd be surprised if there
> weren't some) before exiting CR. 
> 
> Would that satisfy both the commenters and the editor?

As a note, this would reflect ideas exchanged during the last WG call: http://lists.w3.org/Archives/Public/public-webcrypto/2014Jul/0144.html
Comment 47 virginie.galindo 2014-08-13 13:15:25 UTC
During its 28th of July call, the Web Crypto WG agreed to the following resolution:
RESOLUTION: Leave the bug open; expect to resolve it after testing phase by developing one or more profiles. 
See the minutes of the meeting for more detail: http://www.w3.org/2014/07/28-crypto-minutes.html

Virginie
chair of the web crypto wg
Comment 48 Harry Halpin 2014-11-20 00:16:53 UTC
As noted by Virginie and the WG, this problem will be solved during the Candidate Recommendation testing phase. Thus, this is "LATER".