On specifying algorithms

Looking at the feedback received so far, both from members within the
WG and from external reviewers, one of the recurring concerns is the
choice of algorithms currently specified within the document, and
their relative security (or insecurity). Some of these concerns seem
to arise from different interpretations of what this WG's objectives
are, so I wanted to lay out some of the options and ensure that we
have consensus - both within the WG and with potential users of this
API - as to what the goal should be.

Possible interpretations (not necessarily mutually exclusive):
1) We should (only?) specify algorithms that user agents MUST implement
2) We should (only?) specify algorithms that user agents WILL implement
3) We should (only?) specify algorithms that user agents SHOULD implement
4) We should (only?) specify algorithms that developers SHOULD use
5) We should (only?) specify algorithms that developers WILL use
6) We should (only?) specify algorithms that developers MAY use

While interpretation 1 is included for the sake of completeness, the
WG decided against such a scheme as part of ISSUE-1 (
http://www.w3.org/2012/webcrypto/track/issues/1 ). This was revisited
in ISSUE-4 ( http://www.w3.org/2012/webcrypto/track/issues/4 ), which
reiterated that there is no mandatory set of algorithms.

I believe the concerns raised by Zooko about ECB, and by others about
RSAES/RSASSA, reflect an interpretation along the lines of #3 or #4:
that this API should not encourage or enable "bad" crypto (crypto
that may be dangerous or error-prone), and should instead focus only
on "good" crypto (whether established through security proofs or
through the absence of known attacks).

Equally, I understand the concern about cross-browser
interoperability, which typically leads people to interpretation #1
or #2. If a generic web page wishes to make use of cryptographic
services in a browser-agnostic way, then the inclusion of a
prescriptive or mandatory set of algorithms helps ensure that it can
truly be browser-agnostic.
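
To make this concrete, here is a minimal sketch of the feature
detection a page is forced into when no algorithm is mandatory. It
assumes a promise-based crypto.subtle interface, and the algorithm
names are only examples:

    // Probe for AES-GCM; fall back to AES-CBC if this user agent (or
    // its local policy) does not provide it.
    async function pickAesAlgorithm() {
      try {
        await crypto.subtle.generateKey(
            { name: "AES-GCM", length: 128 }, false, ["encrypt"]);
        return "AES-GCM";
      } catch (e) {
        // e.g. NotSupportedError: no AES-GCM here, use the fallback.
        return "AES-CBC";
      }
    }

Every page that wants to be browser-agnostic ends up carrying, and
testing, logic like this for each algorithm it uses.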

However, as demonstrated by many of our use cases, we also have
within scope a desire to 'port' native applications to the web. Such
applications may implement protocols that are a decade or two old
(such as PGP or S/MIME), or protocols that are not defined by the
application vendor (such as the various government-defined signature
schemes and algorithms). If these use cases are to be supported by
the API, then at a minimum it seems one must accept interpretation #5
or #6.


Personally, my take is that the choice of algorithms specified should:
 - Standardize the behaviour of any algorithm that a vendor MAY
implement (effectively, interpretation #6)
   - This is to ensure an interoperable implementation of the
algorithm if one or more vendors *do* implement it
 - Survey potential users to ensure that the specification has
captured the desired algorithms (effectively, interpretation #5)
 - Provide guidance for developers regarding the currently understood
security properties (effectively, interpretation #4)
 - Provide guidance for implementations and end-users regarding the
'recommended' set of algorithms that should work on 'most/all' user
agents (effectively, interpretation #3)

The reason for this is that I think the most important thing this API
provides is the notion of browser-mediated key management (aka a
"secure key store"). In order to effectively use 'opaque keys', it's
necessary to be able to perform operations on and with those keys.
Any operation or algorithm that is not 'natively' supported or
specified will simply force a developer to polyfill it in JavaScript.
Failing to support ECB in the API is not going to mean that people
will not use ECB; it just means that they will implement it in JS,
and either mark the key exportable, or use wrapping (such as RSA) to
'protect' the key temporarily. The only thing that failing to specify
ECB accomplishes is even worse key management, as the sketch below
illustrates.
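
Here is that polyfill path sketched out, again assuming a
promise-based crypto.subtle; jsAesEcbEncrypt() is a stand-in for any
pure-JS AES implementation, not a real library call:

    // With no ECB in the API, the developer marks the key extractable
    // and runs the cipher in script, exposing the raw key material to
    // the page (and to any script injected into it).
    async function ecbEncrypt(key, plaintext) {
      // 'key' must have been generated or imported with extractable =
      // true, which is exactly the key-management downgrade at issue.
      const rawKey = await crypto.subtle.exportKey("raw", key);
      // Hypothetical pure-JS AES-ECB over the now-exposed key bytes.
      return jsAesEcbEncrypt(new Uint8Array(rawKey), plaintext);
    }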

Considering that our current draft is a low-level *API*, not a
*PROTOCOL* (like JOSE, S/MIME, TLS, etc.), I don't believe we should
be making judgments on which algorithms are "good" and which are
"bad". I agree that, for any *new* protocol, there are some
algorithms that should simply be avoided; I think the JOSE WG has
demonstrated the sort of discussions that need to be had. But if this
API is meant to be useful for implementing *any* of the many existing
protocols, which for better or worse have already made and
standardized their security decisions, then I think we'll need to
accept and support the "bad" algorithms as well.

To that end, I propose we:
 - Identify the core algorithms necessary to support the use cases,
and standardize those
 - Update the document to segment "recommended" algorithms from
"specified-but-not-recommended" algorithms
 - Try to define a profile of the total set of algorithms that user
agents can/should implement
  - For example, this *might* be something like Level 1, Level 2, etc.
  - With the caveat that "any of these may be disabled by local
policy", etc. But it does mean that, barring external factors (e.g.
export regulations, local policy), a user agent will have a faithful
and conforming implementation - see the sketch after this list.
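
As a strawman for what checking such a profile could look like at
runtime (the level contents and probe calls below are invented for
illustration, and assume a promise-based crypto.subtle):

    // Probe each algorithm in a hypothetical "Level 1" profile; any
    // of them may be unimplemented, or disabled by local policy.
    const LEVEL_1_PROBES = {
      "SHA-256": () => crypto.subtle.digest("SHA-256", new Uint8Array(0)),
      "AES-CBC": () => crypto.subtle.generateKey(
          { name: "AES-CBC", length: 128 }, false, ["encrypt"]),
    };

    async function supportsLevel1() {
      for (const probe of Object.values(LEVEL_1_PROBES)) {
        try { await probe(); }
        catch (e) { return false; }  // unimplemented or policy-disabled
      }
      return true;
    }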
