Re: Strawman proposal for the low-level API

On Thu, Jun 21, 2012 at 11:22 AM, Mike Jones <Michael.Jones@microsoft.com> wrote:

>  I believe there are advantages beyond what a browser must implement.
> For one, producing the set of defined algorithm/parameter combination
> labels forces the working group to make conscious choices about what
> combinations make sense to promote the usage of, just as JOSE is doing.
> This will likely aid interop.
>

Fair enough. I think we'll probably remain in disagreement on this - I don't
think the WG should be in the position of arbiter of what can or should be
implemented, as I think that significantly limits the utility of the API and
the ability to make useful combinations. If any value decisions need to be
made, I think they make more sense within the context of a high-level API,
which the survey respondents indicated less interest in. The previous
discussions of JWA/JWK/JWS were, as I perceived them, discussions about how
the high-level API should behave.

My view of interop is that the API should behave consistently across all
platforms.
What I understand you to be saying (and I may be mistaken) is that your view
of interop is that the same algorithms should be implemented across all
platforms.

I don't think that view of interop is attainable, given how and where the web
runs, in the same way that the <img> tag, as spec'd, does not require a JPEG
implementation, nor does the <video> tag require H.264 or WebM.


>
> Also, it makes it simpler for code using the API to query whether a
> combination is supported.  A query for “ES256” is simple to formulate (and
> to respond to).  A query for { name: 'ECDSA', params: { hash: { name:
> 'SHA256' } } } – not so much.
>
>
>                                                                 -- Mike
>

Sure. But as I mentioned several times, the shorthand "ES256" is fully
valid within what I was proposing. I agree, shorthand is good! At the same
time, shorthand that sacrifices flexibility/extensibility is, I believe,
short-sighted, which is why the API specifies the long form, into which the
short form fits, rather than describing only a short form, into which a
long form does not fit.
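
For illustration only (the normalization step is hypothetical, not part of
the proposal text), the two spellings would be treated as equivalent:

  // Long form: algorithm name plus explicit parameters.
  var longForm = { name: 'ECDSA', params: { hash: { name: 'SHA256' } } };

  // Short form: the JWA-style identifier used as the algorithm name, with
  // the parameters implied by that name.
  var shortForm = { name: 'ES256' };

  // A hypothetical implementation step could expand the short form into the
  // long form before dispatching, so either spelling is accepted:
  //   normalize(shortForm) -> { name: 'ECDSA', params: { hash: { name: 'SHA256' } } }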


>
> *From:* Ryan Sleevi [mailto:sleevi@google.com]
> *Sent:* Thursday, June 21, 2012 11:13 AM
> *To:* Seetharama Rao Durbha
> *Cc:* Mike Jones; David Dahl; public-webcrypto@w3.org
>
> *Subject:* Re: Strawman proposal for the low-level API****
>
> On Thu, Jun 21, 2012 at 9:27 AM, Seetharama Rao Durbha <
> S.Durbha@cablelabs.com> wrote:****
>
> "HS256" is the same as the hypothetical { name: 'HMAC', params: { hash: {
> name: 'SHA256' } } }****
>
> "ES256" is the same as the hypothetical { name: 'ECDSA', params: { hash: {
> name: SHA256' } } }****
>
> Maybe so. But the advantage/flexibility you are referring to here allows
> a future replacement of SHA256 with SHA1024, let us say. It is not so
> obvious to me how the developer may check the availability of ECDSA with
> SHA1024 in a specific browser. The above hypothetical may throw an
> unsupported algorithm/combination exception, but it's much easier to
> query/check availability with one name.
>
> JCE is another reference here (
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/StandardNames.html#impl).
> The algorithm names in the table are essentially similar in approach to the
> JWA names. It's very clear from the table that if one were to write a Java
> program that is shipped and may run in unknown containers/VMs, it is better
> not to use RSA with SHA512.
>
> I agree that this same information could have been provided by splitting
> the Algorithm column into multiple columns, one for each parameter. It's
> just a little harder to enforce it during development. For example, it's
> very easy to provide constants for supported combinations, and thus enable
> compile-time checking.
>
> Except we're in JavaScript, where the concept of compile-time checking is a
> bit of a non-starter. Providing hard constants (eg: an enum on an
> Interface) means that any implementation that extends this interface would
> be non-compliant, and any adoption of new algorithms MUST be done via
> updating the spec (which then makes legacy implementations non-compliant).
>
> There are effectively two ways to check - using the .supports({ name:
> 'ECDSA', params: { hash: { name: 'SHA1024' } } }) (which returns a bool) or
> by attempting to instantiate it, such as via .sign({ name: 'ECDSA', ...
> })
>
> This is the exact same way an application would check for support using
> strings as identifiers - .supports('ES1024') / .sign('ES1024')
>
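> To sketch that out (purely illustrative - 'SHA1024'/'ES1024' remain
> hypothetical, 'key' is assumed to already exist, and the exception names
> come from the strawman):
>
> // Query form - returns a bool.
> if (window.crypto.supports({ name: 'ECDSA',
>                              params: { hash: { name: 'SHA1024' } } })) {
>   // the combination is available on this user agent
> }
>
> // Or simply attempt the operation and handle the failure.
> try {
>   var stream = window.crypto.sign({ name: 'ECDSA',
>                                     params: { hash: { name: 'SHA1024' } } }, key);
> } catch (e) {
>   // eg: InvalidAlgorithmError / UnsupportedAlgorithmError per the strawman
> }
>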
>
> Again, it sounds like the argument you're making is orthogonal to the
> representation - for example, "It's better to not use RSA with SHA512" is
> only true because it's not explicitly listed within the Implementation
> Requirements. However, as also noted within that spec, third-party
> providers can implement whatever they want.****
>
> The discussion about what U-A's MUST/SHOULD/MAY implement is a useful
> discussion to have, but it seems perhaps separate from the API discussion.
> The API should be flexible enough to support 'any' algorithm - not strictly
> for vendor extensibility, but simply on the basis of providing a well
> defined method for how we, as a WG, extend this spec in the future.****
>
> Is there an advantage to the 'string' representation that is not directly
> tied to what a browser MUST implement? The limitation I described is, I
> believe, a reasonable one - strings cannot provide arbitrary/algorithm-specific
> data, such as the OAEP label.
>
> *From: *Ryan Sleevi <sleevi@google.com>
> *To: *Seetharama Rao Durbha <s.durbha@cablelabs.com>
> *Cc: *Mike Jones <Michael.Jones@microsoft.com>, David Dahl <
> ddahl@mozilla.com>, "public-webcrypto@w3.org" <public-webcrypto@w3.org>
>
>
> *Subject: *Re: Strawman proposal for the low-level API****
>
> On Wed, Jun 20, 2012 at 3:06 PM, Seetharama Rao Durbha <
> S.Durbha@cablelabs.com> wrote:****
>
> I think this is tussle between interoperability and extensibility. For
> interoperability, we certainly need a published set of algorithms and their
> combinations that developers can rely on (previously I expressed that we
> need a mandatory list of such). The question is whether this set gets baked
> into the API itself (through an ENUM for example) or validated through
> interop tests. The former makes it easier for developers to verify that
> their implementations are going to be cross-platform/browser compatible.
> The latter allows developers to take advantage of features when they are
> available in certain platforms/browsers.
>
> I personally believe that having the API limit/specify features is a good
> thing. It cuts down a lot of time (doing research on what browser
> implements what algorithms) as well as second guessing what is a good
> combination.****
>
> On the other hand, concatenated combinations do not limit extensibility.
> If a particular browser provides a specific combination not in the spec, it
> can do so through a custom namespace – something like what people did with
> CSS, through -webkit or -moz prefixes.
>
> --Seetharama****
>
> It's worth noting that the (continued) use of prefixes is an area of great
> stylistic/semantic debate within the other WGs, and my understanding from
> lightly following those discussions is that the prevailing opinion is that
> it is not the Right Way Forward. As we've seen, in practice it simply forces
> vendors to implement each other's prefixes, since early adopters either
> fail to use prefixes in a way that will be forwards compatible or, with
> some of the semantics, simply cannot.
>
> I fundamentally don't see any differences between Mike's JWA reference and
> what I proposed, other than brevity, so I'm trying to understand what
> exactly the argument is. Both you and Mike have made references to
> interoperability, but that seems to me to be wholly orthogonal to the
> representation of algorithms and their parameters.****
>
> Likewise, the argument of "restricting people from getting it wrong" seems
> more an argument in favour of a high-level API, since by their very nature,
> low-level APIs, in order to be useful for situations both described and yet
> to be discovered, must give you enough rope to hang yourself.
>
> If the WG decides that there should be a list of MUST implements, then
> whether it's a set of string identifiers or a set of
> Algorithms+AlgorithmParams, it makes no difference which scheme we choose.
>
> "HS256" is the same as the hypothetical { name: 'HMAC', params: { hash: {
> name: 'SHA256' } } }****
>
> "ES256" is the same as the hypothetical { name: 'ECDSA', params: { hash: {
> name: SHA256' } } }****
>
> Both are fully extensible schemes - that is, new JWA algorithm identifiers
> can be registered via IANA or extended via URI, and new Algorithms can be
> implemented by vendors/implementations and possibly standardized as
> appropriate (eg: W3C WG efforts, the same as already happens for CSS, HTML,
> etc). As I mentioned to David, it's fully possible to express a JWA
> algorithm via short-name, such as { name: 'ES256' }, which serves as
> reasonable 'default' values.****
>
> As far as I can tell, the only actual difference at play is whether we use
> a single string or a dictionary. Under a single string, there is *no* way
> to convey additional, application specific data or optional parameters, as
> far as I can tell. For example, under JWA's single-string naming, with
> RSA-OAEP, there's no way for an application to include the 'label' data. I
> imagine that under the scheme, one would have to construct some canonical
> URI form and then append something like query parameters to specify
> additional data - encoding into an appropriate ASCII representation as
> necessary (base64?)****
>
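> Concretely, the dictionary form carries that extra data naturally
> (illustrative only - the parameter names follow the RsaOaepParams sketch
> from the strawman further down, and 'key' is assumed to already exist):
>
> var alg = {
>   name: 'RSA-OAEP',
>   params: {
>     hash: { name: 'SHA256' },
>     mgf: { name: 'MGF1-SHA256' },
>     // No equivalent slot exists in a bare string identifier.
>     label: 'application-supplied label'
>   }
> };
> var stream = window.crypto.encrypt(alg, key);
>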
>
> Again, I'm not trying to suggest that an implementor of a scheme like HMAC
> be required to support HMAC with every possible hash function real or
> imagined - just that the semantics of the arguments and their handling are
> well defined, both when supported and not. The API specification itself
> should be able to react to any set of identifiers/parameters - and that
> equally applies whether using pure string identifiers ("ES256") or using
> dictionaries.****
>
> ** **
>
> It seems like this is really a discussion of more complexity/more
> flexibility vs less complexity/less flexibility. I'd much rather err on the
> side of more complexity/flexibility, since if there has been anything
> learned from the meteoric growth of the Internet and the richness of the
> Web, it's that there are so many ideas and use cases that we have yet to
> imagine.****
>
> ** **
>
> I think a perfect parallel to the APIs we're discussing here would be
> WebGL (which allows for browser/vendor specific extensions), Web Intents
> (which allow for arbitrary intents to be registered), WebRTC (which does
> not specify the format of the MediaStream), the video tag (which allows
> arbitrary types), or even further back, the catch-all cloaca that is the
> <object> tag. All of these, by virtue of flexibility, have allowed for very
> quick and rapid iteration and adoption of new features/abilities, without
> mandating any specific behaviours beyond the core API.****
>
>  ****
>
> Cheers,****
>
> Ryan****
>
> ** **
>
>  ** **
>
> *From: *Ryan Sleevi <sleevi@google.com>
> *To: *Mike Jones <Michael.Jones@microsoft.com>
> *Cc: *David Dahl <ddahl@mozilla.com>, "public-webcrypto@w3.org" <
> public-webcrypto@w3.org> ****
>
>
> *Subject: *Re: Strawman proposal for the low-level API****
>
> ** **
>
> ** **
>
> On Wed, Jun 20, 2012 at 1:47 PM, Mike Jones <Michael.Jones@microsoft.com>
> wrote:****
>
> As a data point, the IETF JOSE work intentionally has only algorithm
> identifiers and no separate algorithm parameters.  Instead, the particular
> parameter choices are baked into the identifier.****
>
>  ****
>
> So for instance, rather than having a generic RSA signature algorithm and
> hash functions as parameters, there are separate algorithm identifiers such
> as:****
>
>                 RS256 - RSA signature using SHA-256 hash algorithm****
>
>                 RS384 - RSA signature using SHA-384 hash algorithm****
>
>                 RS512 - RSA signature using SHA-512 hash algorithm****
>
>  ****
>
> The designers of the JOSE specs felt that it would be better, both for
> implementers and for interop, to specify a small set of meaningful
> algorithm combinations, than to face the combinatorial explosion that
> independently specifying each parameter causes.****
>
> See http://tools.ietf.org/html/draft-ietf-jose-json-web-algorithms for
> the algorithm combinations specified and the identifiers for them.****
>
>  ****
>
> I'd personally rather see the WebCrypto APIs follow this model, than allow
> all parameter combinations as XML DSIG and XML ENC do.  As a bonus, the
> WebCrypto work could then directly use the algorithm identifiers in the JWA
> spec, rather than inventing new ones.****
>
>  ****
>
>                                                                 Best
> wishes,****
>
>                                                                 -- Mike
>
> ** **
>
> Mike,****
>
> ** **
>
> It's not clear to me how that scheme or the spec actually avoids
> combinatorial growth. You're still individually specifying algorithm
> parameters - you're just doing it symbolically via the algorithm identifier.
>
> ** **
>
> Even under a MUST implement scheme (which I'm still fairly opposed to),
> extensibility is still important. Under JWA, extensibility must be
> accomplished by concatenating symbols together to form an identifier.
> Unrecognized identifiers are rejected.****
>
> ** **
>
> Under the above proposal, extensibility is accomplished by defining
> parameters. Unrecognized parameters or unsupported parameter options are
> rejected.****
>
> ** **
>
> I certainly don't mean to imply that every possible hash algorithm + RSA
> PKCSv1.5 must be supported for an implementation to be compliant. For that
> matter, I don't even think an implementation needs to understand RSA to be
> compliant. However, it MUST have well-defined behaviour when encountering
> unrecognized algorithms/parameters.
>
> ** **
>
> For example, under the current JWA specification (draft-02), there's no
> way to use RSA-PSS. That's specifically called out within Section 3.3. If
> an implementation wanted to support RSA-PSS with SHA-1 or SHA-2 (224/256/512),
> then according to Section 6.2, a new 'alg' value would be registered with
> IANA OR be a URI (as per 3.6).****
>
> ** **
>
> So JWA still has a fully extensible API with full combinatorial growth; it
> just requests that you go through the IANA registry. I don't see how that's
> different from what I was proposing. Is it just a matter of difference in
> who operates the registry? Given that URIs can be used, even the registry
> isn't actually required.
>
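> In other words, the two extension mechanisms end up looking something like
> this (purely illustrative - 'PS256' and the URI are hypothetical identifiers
> here, and 'key' is assumed to already exist):
>
> // JWA-style: a newly registered 'alg' value, or a URI where no registry
> // entry exists.
> window.crypto.sign({ name: 'PS256' }, key);
> window.crypto.sign({ name: 'http://example.com/alg/rsa-pss-sha256' }, key);
>
> // Dictionary-style: the same combination spelled out as parameters.
> window.crypto.sign({ name: 'RSA-PSS',
>                      params: { hash: { name: 'SHA256' },
>                                mgf: { name: 'MGF1-SHA256' },
>                                saltLength: 32 } }, key);
>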
>
> Cheers****
>
>    ****
>
> *From:* Ryan Sleevi [mailto:sleevi@google.com]
> *Sent:* Wednesday, June 20, 2012 1:17 PM
> *To:* David Dahl
> *Cc:* public-webcrypto@w3.org
> *Subject:* Re: Strawman proposal for the low-level API****
>
>  ****
>
>  ****
>
> On Wed, Jun 20, 2012 at 12:42 PM, David Dahl <ddahl@mozilla.com> wrote:
>
> ----- Original Message -----
> > From: "Ryan Sleevi" <sleevi@google.com>
> > To: public-webcrypto@w3.org
> > Sent: Monday, June 18, 2012 12:53:03 PM
> > Subject: Strawman proposal for the low-level API
> >
> > Hi all,
> >
> >    While I'm still in the process of learning WebIDL [1] and the W3C
> >    Manual
> > of Style [2], I wanted to take a quick shot at drafting a strawman
> > low-level API for discussion.****
>
> This is great, thanks for taking the time.****
>
>
> >
> > First, a bit of the IDL definition, to set the stage. This is also
> > using ArrayBuffer from TypedArray [6], which I'm not sure if it's
> > altogether appropriate, but it's been incorporated by reference into
> > FileAPI [7], so it seems alright to use here.
> >****
>
> I think so. ArrayBuffers seem a natural fit for this API.****
>
>
> > [interface]
> > interface CryptoStream : EventTarget {
> >   void processData(ArrayBuffer buffer);
> >   void processData(DOMString data);****
>
> The flexibility of accepting either a string or ArrayBuffer is a good
> idea, with an internal, seamless conversion.****
>
>  ****
>
> I'm not sure whether it should be a literal ArrayBuffer or if it should be
> an ArrayBufferView. In looking at more specs, I suspect the latter is
> actually more correct.****
>
>  ****
>
> Well, no, it's not necessarily a seamless conversion :-) DOMString is
> UTF-16, so the conversion into a byte sequence is problematic if
> underspecified (eg: as I've unfortunately done here)****
>
>  ****
>
> Representation of binary data via DOMString is a known problematic area
> (eg: see WHATWG's work on StringEncoding via the TextEncoder/TextDecoder
> interface).****
>
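> If something like that lands, callers could make the conversion explicit
> rather than relying on an implicit one (a minimal sketch, assuming the
> proposed TextEncoder interface and a 'stream' from one of the calls above):
>
> // Explicitly encode the DOMString as UTF-8 bytes, then hand those to the API.
> var bytes = new TextEncoder('utf-8').encode('message to protect');
> stream.processData(bytes.buffer);  // the underlying ArrayBuffer, per the current IDL
>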
>
> Within the W3C, I understand this is part of ongoing discussions in
> public-webapps.****
>
>  ****
>
>
> >   void complete();
> >
> >   readonly attribute (DOMString or ArrayBuffer)? result;
> >
> >   attribute [TreatNonCallableAsNull] Function? onerror;
> >   attribute [TreatNonCallableAsNull] Function? onprogress;
> >   attribute [TreatNonCallableAsNull] Function? oncomplete;
> > };
> >
> > dictionary AlgorithmParams {
> > };
> >
> > dictionary Algorithm {
> >   DOMString name;
> >   AlgorithmParams? params;
> > };
> >
> > [NoInterfaceObject]
> > interface Crypto {
> >   CryptoStream encrypt(Algorithm algorithm, Key key);
> >   CryptoStream decrypt(Algorithm algorithm, Key key);
> >
> >   // Also handles MACs
> >   CryptoStream sign(Algorithm algorithm, Key key);
> >   CryptoStream verify(Algorithm algorithm, Key key, ArrayBuffer
> >   signature);
> >
> >   CryptoStream digest(Algorithm algorithm);
> >
> >   // This interface TBD. See discussion below.
> >   boolean supports(Algorithm algorithm, optional Key key);
> >
> >   // Interfaces for key derivation/generation TBD.
> > };
> >
> >
> > As you can see, CryptoStream is used for all of the actual crypto
> > operations. That's because, in looking at the operations, I think all
> > of
> > them will work on a series of calls to provide input, and the result
> > of
> > which is either: error, some data output, or operation complete.
> >
> > The real challenge, I think, lies in the AlgorithmParams structure,
> > which
> > is where all of the algorithm-specific magic happens. My belief is
> > that we
> > can/should be able to define this API independent of any specific
> > AlgorithmParams - that is, we can define the generic state machine,
> > error
> > handling, discovery. Then, as a supplemental work (still within the
> > scope
> > of the primary goal), we define and enumerate how exactly specific
> > algorithms are implemented within this state machine.
> >
> > To show how different AlgorithmParams might be implemented, here are
> > some varied definitions:
> >
> > // For the 'RSA-PSS' algorithm.
> > dictionary RsaPssParams : AlgorithmParams {
> >   // The hashing function to apply to the message (eg: SHA1).
> >   AlgorithmParams hash;
> >   // The mask generation function (eg: MGF1-SHA1)
> >    AlgorithmParams mgf;
> >   // The desired length of the random salt.
> >   unsigned long saltLength;
> > };
> >
> > // For the 'RSA-OAEP' algorithm.
> > dictionary RsaOaepParams : AlgorithmParams {
> >   // The hash function to apply to the message (eg: SHA1).
> >    AlgorithmParams hash;
> >   // The mask generation function (eg: MGF1-SHA1).
> >    AlgorithmParams mgf;
> >   // The optional label/application data to associate with the
> >   // encryption.
> >   DOMString? label = null;
> > };
> >
> > // For the 'AES-GCM' algorithm.
> > dictionary AesGcmParams : AlgorithmParams {
> >   ArrayBufferView? iv;
> >   ArrayBufferView? additional;
> >   unsigned long tagLength;
> > };
> >
> > // For the 'AES-CCM' algorithm.
> > dictionary AesCcmParams : AlgorithmParams {
> >   ArrayBufferView? nonce;
> >   ArrayBufferView? additional;
> >   unsigned long macLength;
> > };
> >
> > // For the 'HMAC' algorithm.
> > dictionary HmacParams : AlgorithmParams {
> >   // The hash function to use (eg: SHA1).
> >   AlgorithmParams hash;
> > };
> >
> >
> > The API behaviour is this:
> > - If encrypt/decrypt/sign/verify/digest is called with an unsupported
> > algorithm, throw InvalidAlgorithmError.
> > - If " is called with an invalid key, throw InvalidKeyError.
> > - If " is called with an invalid key/algorithm combination, throw
> > UnsupportedAlgorithmError.
> > - Otherwise, return a CryptoStream.
> >
> > For encrypt/decrypt
> > - The caller calls processData() as data is available.
> > - If the data can be en/decrypted, it will raise an onprogress event
> > (event
> > type TBD).
> >   - If new (plaintext, ciphertext) data is available, .result will be
> > updated. [This is similar to the FileStream API behaviour]****
>
> > - If the data cannot be en/decrypted, raise the onerror with an
> > appropriate error.
> > error
> > - The caller calls .complete() once all data has been processed.
> >   - If the final block validates (eg: no padding errors), call
> >   onprogress
> > then oncomplete.
> >   - If the final block does not validate, call onerror with an
> >   appropriate
> > error.
> >
> > For authenticated encryption modes, for example, the .result may not
> > contain any data until .complete has been called (with the result
> > data).
> >
> > For sign/verify, it behaves similarly.
> > - The caller calls processData() as data is available.
> > - [No onprogress is called/needs to be called?]
> > - The caller calls .complete() once all data has been processed
> > - For sign, once .complete() is called, the signature is generated,
> > and
> > either onprogress+oncomplete or onerror is called. If successful, the
> > resultant signature is in .result.
> > - For verify, once .complete() is called, the signature is compared,
> > and
> > either onprogress+oncomplete or onerror is called. If the signatures
> > successfully matched, .result will contain the input signature (eg:
> > the
> > constant-time comparison happens within the library). If the
> > signatures
> > don't match, .result will be null and the error handler will have
> > been
> > called.
> >
> > Finally, for digesting, it behaves like .sign/.verify in that no data
> > is
> > available until .complete() is called, and once .complete() is called,
> > the
> > resultant digest is in .result.****
>
> The final result of any of these operations would have all result data
> passed into the oncomplete event handler, correct?****
>
>   ****
>
> No. The oncomplete event handler follows the DOMCore event handling
> semantics. Since I didn't define a custom event type (eg: one that would
> carry the result), it would be expected that callers obtain the result via
> evt.target.result. evt.target is bound to an EventTarget, which the
> CryptoStream inherits from, and is naturally the target of the events it
> raises.****
>
>  ****
>
> This is shown in the pseudo-code example of how evt.target.result is read.
> ****
>
>  ****
>
> But yes, for all successful operations (eg: no onerror callback),
> evt.target.result contains the data available. In the case of operations
> which yield "good" or "bad" (eg: MAC & Signature verification), the .result
> contains the verified data.****
>
>  ****
>
> Note that I didn't spec Verify+Recovery, since I'm still mulling that one
> over, but if implemented, I would imagine verifyRecover would presumably
> have the recovered PT (rather than the original signature) in .result.****
>
>  ****
>
>  ****
>
>
> >
> > What I haven't fully worked out is how key derivation/agreement will
> > work -
> > particularly if the result of some result of key agreement results in
> > multiple keys (eg: how SSL/TLS key derivation works in PKCS#11). This
> > is
> > somewhat dependent on how we treat keys.
> >
> > Note that I left the Key type unspecified. It's not clear if this
> > will be
> > something like (Key or DOMString), indicating some either/or of
> > handle /
> > id, if it might be a dictionary type (with different naming
> > specifiers,
> > such as 'id' or 'uuid'), or if it will be a concrete type obtained
> > via some
> > other call (eg: .queryKeys()). I think that will be borne out over
> > the next
> > week or two as we continue to discuss key management/lifecycle.
> >
> > For a pseudo-code example:
> >
> > var stream = window.crypto.sign({ name: 'RSA-PSS', params: { hash: {
> > name: 'sha1' }, mgf: { name: 'mgf-sha1' }, saltLength: 32 }}, key);
> > stream.oncomplete = function(evt) { window.alert('The signature is ' +
> > evt.target.result); };
> > stream.onerror = function(evt) { window.alert('Signing caused an error: ' +
> > evt.error); };
> >
> > var filereader = new FileReader();
> > filereader.onload = function(evt) {
> >   stream.processData(evt.target.result);
> >   stream.complete(); };
> > filereader.readAsArrayBuffer(someFile);
> >
> >
> > The FileAPI is probably not the best example of why the iterative API
> > (.processData() + .complete()) is used, since FileReader has the
> > FileReader.result containing all of the processed data, but it's
> > simpler
> > than demonstrating a streaming operation that may be using WebSockets
> > [8]
> > or PeerConnection [9].
> >
> > Note that I think during the process of algorithm specification, we
> > can
> > probably get away with also defining well-known shorthand. eg:
> > 'RSA-PSS-SHA256' would mean that the hash is SHA-256, the mgf is
> > MGF1-SHA256, and only the saltLength needs to be specified (or should
> > it be
> > implied?)****
>
> Since this is a low-level API, perhaps we imply a sensible default, with
> the ability to override for properties like saltLength?****
>
>   ****
>
> I think "sensible default" is actually quite appropriate for high-level,
> but not for low-level.****
>
>  ****
>
> One of my biggest concerns with "sensible default" is that, once spec'd,
> you cannot ever change the defaults. This creates potential problems when
> we talk about deprecating or removing support for algorithms.****
>
>  ****
>
> This is why I proposed the short-hand notation as an algorithm name,
> rather than as default/optional values on the Dictionary type.
>
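> i.e. something along these lines (illustrative only - 'RSA-PSS-SHA256' is
> the hypothetical shorthand from the strawman, and 'key' is assumed to
> already exist):
>
> // The shorthand name pins down the hash and MGF; only the remaining,
> // deliberately non-defaulted parameter is supplied.
> window.crypto.sign({ name: 'RSA-PSS-SHA256', params: { saltLength: 32 } }, key);
>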
>
> For example, rewind ten years ago (or let's go 20, to be fair), and a
> sensible default for RSA signatures would be RSA-PKCSv1.5 + MD5 for the
> message digest function.****
>
>  ****
>
> So applications get written using window.crypto.sign({'name': 'RSA' },
> RsaKey);****
>
>  ****
>
> Now, as we move forward in time, we discover that MD5 isn't all that
> great, and really people should be using SHA-1. However, we can not change
> the default for { 'name': 'RSA' }, because that would be a semantic break
> for all applications expecting it to mean MD5.****
>
>  ****
>
> Further, as we continue moving forward in time, we discover that PKCSv1.5
> isn't all that great, and PSS is much better. However, again, we cannot
> change the defaults, because it would break existing applications.****
>
>  ****
>
> The result of being unable to change the defaults is that when /new/
> applications are written, because they're not required to specify values
> (eg: there are defaults), they don't. The result is that new applications
> can end up using insecure mechanisms without ever being aware of it. By
> forcing the app developer to consider their parameters, whether explicitly
> via AlgorithmParams or implicitly via the algorithm 'name', it at least
> encourages 'best practice' whenever a new application is being written.
>
>  ****
>
> The argument for default arguments is very compelling - there is no doubt
> about it. The less boilerplate, arguably the better. However, for a
> low-level API, particularly one whose functionality is inherently security
> relevant, defaults tend to end up on the short-end of the security stick
> over time, and that does more harm than good.****
>
>  ****
>
> That's why I provided the 'escape hatch' of defaults by using the
> algorithm name as a short-hand for the more tedious AlgorithmParams portion.
> ****
>
>  ****
>
>
> >
> > Anyways, hopefully this straw-man is able to spark some discussion,
> > and
> > hopefully if it's not fatally flawed, I'll be able to finish adopting
> > it to
> > the W3C template for proper and ongoing discussions.
> >****
>
> I like what you have here. I think this interface is elegant in the
> central concept of the CryptoStream being able to handle any operation
> possible for the algorithm. This interface is simpler to work with than my
> proposal.
>
> Like Wan-Teh said in the meeting this week, we should figure out how key
> generation works, what the structure of the key handle is, and what
> extracted key data properties look like.
>
>   ****
>
>  With the Algorithm and its AlgorithmParams are we headed down the path
> of maintaining a cipher suite for this API?****
>
>   ****
>
> So, as I mentioned on the phone and the preamble, I think as a WG we
> can/should first focus on defining the practical parts of the API - eg:
> without defining any ciphers (MUST or SHOULD) - and what the semantic
> behaviours are for consumers of this API.****
>
>  ****
>
> Following that, I think the WG can supplementally extend the spec to talk
> about different algorithms, modes, etc - eg: AES, RSA, HMAC, etc.****
>
>  ****
>
> I think part of this is pragmatic - while we can talk about all the
> 'popular' suites of today (AES, RSA, SHA-1/2), there's no guarantees
> they'll be secure 'tomorrow' (ECC, SHA-3, SomeNewKDF), and putting those as
> MUST-IMPLEMENT imposes a real security cost going forward. Further, as we
> look across the space of devices that might implement this API - from beefy
> desktops, to resource constrained mobile devices, to game consoles, to who
> knows what - it seems we must also recognize that the ability to reasonably
> support some algorithms is simply not going to exist. Whether that is being
> unable to do AES in ciphertext-stealing mode or DSA/DSA2, or a lack of support
> for ECC or MD5, mandating algorithms won't do much to help adoption of the
> core API, I believe.
>
>  ****
>
> Yes, extensibility carries risks - vendor-specific encryption schemes may
> be added that aren't implemented by other user agents. However, this risk
> exists with just about any generic and usable web API defined by the W3C -
> we've seen it with custom HTML tags, custom CSS prefixes, <video> and
> <audio> algorithm support. I see the Algorithm/AlgorithmParams operating
> within that same space - something that can be (independently) standardized
> without changing this core API.****
>
>  ****
>
> Note: I do think the Core API can define sensible values for the
> algorithms we know/care about, I just don't think it's a function of the
> core API to dictate what must be implemented, just how it will behave if it
> is implemented.****
>
>  ****
>
> Cheers,****
>
> Ryan****
>
>  ****
>
>
> Thanks again for putting this together, I think we should begin nailing
> down the hand wavy 'Keys' for this proposal.
>
>
> Regards,
>
> David****
>
>

Received on Thursday, 21 June 2012 18:34:30 UTC