W3C

Technical Architecture Group Teleconference

02 Apr 2014

See also: Agenda, IRC log

Attendees

Present
Tim Berners-Lee, Daniel Appelquist, Domenic Denicola, David Herman, Yehuda Katz, Sergey Konstantinov, Yves Lafon, Peter Linss, Alex Russell, Jeni Tennison, Anne van Kesteren
Regrets
Chair
Daniel Appelquist & Peter Linss
Scribe
Yehuda Katz, Domenic Denicola

Contents

    Tracking our reviews
    Website
    Package URLs
    Integrity URLs
    Promises Guide
    Promises
    Web Animations
    EME
    Summary of Action Items

<Yves> https://pbs.twimg.com/media/BkI7VdfCAAA8h6i.png

<JeniT> this is running here in SF today: http://www.w3.org/2014/04/annotation/submissions/

<wycats> scribenick: wycats

<dka> Scribe: Yehuda

<dka> For discussion this morning: https://www.w3.org/wiki/TAG/Reviews

Tracking our reviews

<Domenic> https://github.com/w3ctag/spec-reviews/issues

<Domenic> https://github.com/w3ctag/spec-reviews/commits/master

Yves: Most of you may not be aware that we need to report from time to time to the AC and Jeff to show the progress of the TAG
... this morning we discussed documenting that our new work style is actually working and having good effects on specifications and on general API work
... I put together a dashboard to show the work we're doing
... it's not like the previous product page - we don't need to document actions
... just dates and the impact we've had on the specification
... for example, with Push, we can see that we've had effects

wycats: do Jeff and the AC know what we're doing?

dka: yes
... this is about recording it for our own purposes
... and showing the positive impact
... the idea is to do it in a light-touch way so we're not adding a ton of bureaucracy

wycats: I thought the style of work on the quota management API was very good

Domenic: I just added quota management
... and Sergey's work on web animations

Yves: we don't want a heavyweight process here, just a basic dashboard

dka: Github has a lot of stuff in it, and this is a way to track where we've had an impact

Yves: or not

<chuckling>

wycats: I was really happy about the CSS dashboard, and having a way to point people at what we're doing sounds great
... I've had people approach me with thoughts and helping them see what we're doing is awesome

dka: People can add stuff whenever, and this shouldn't add any more work for people who want to do reviews

Website

dka: I ripped off the web apps WG for us

http://www.w3.org/2001/tag/Overview-beta.html

<slightlyoff> scribenick: slightlyoff

dka: how does package url relate to zip urls?

wycats: they're the same

Package URLs

wycats: the reason this is important is that we already have a packaging problem. JavaScript exacerbates it
... having maintainable content requires that you use multiple files
... in JS, e.g., ember apps, people use server-side concatenation
... this can mostly resemble a package of JS in one file. This is important to the way the web works today.
... however, I and Sam and Dave have done a bad thing with ES6 modules
... also the Web Components folks have done the same thing with imports
... we could have added syntax in ES6 modules that let you do "inline modules" for use in concatenation
... but we didn't do that

dherman: (clarification)

(one stream of bytes)
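
[Editor's illustration: a sketch of the rejected direction, in which a build tool could concatenate several modules into one stream of bytes. The inline-module syntax shown approximates early ES6 draft ideas and never shipped.]

    // app.js: one file carrying several modules (hypothetical syntax)
    module "app/model" {
      export var items = [];
    }
    module "app/view" {
      import { items } from "app/model";
    }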

dka: you decided not to do this?

<Yves> like mime multipart?

wycats: yes. Andreas Rossberg of V8 pointed out that we were trying to solve a network problem; meanwhile people on the committee were advocating that people write this way when it wasn't for this use-case
... separately we also noticed that there are other places on the web where they have this problem (base64 encoding images inlining, inlining templates, etc.)

slightlyoff: we also built compilers that inline templates (Dojo & Closure)

wycats: we noticed that trying to solve this in JS was the wrong layer to solve it at

dka: sprites, etc.

wycats: so we punted to the W3C and said "they'll solve it"

timbl: mnot would say that HTTP 2.0 will solve it

wycats: because of the way people are doing bundling today, we're going to remove the current solution (inlining) in the module world
... and we find the "HTTP 2.0 or bust" message to not be compelling
... the way I'm imagining the system needing to work, these systems will start to predict what's needed and try to pre-emptively serve
... and they'll have to get very sophisticated to reason about the set of related resources

dherman: we're going to have to continue to reckon with a world where people can't control the servers they deploy on

timbl: yes. Every time we need to control the server we fail. e.g., real content type
... in practice, we've seen this pushback so much that I'm astonished mnot is making the argument

slightlyoff: FB, twitter, and Google all have lots of control
... the initial HTTP2.0 deployment environment doesn't look like an average webdev's experience

(discussion about SPDY, manifests, prediction, etc.)

(discussion of using the term "JIT")

(discussion of how to learn about related resources, etc.)

dherman: we need an interim solution in a pre-SPDY world

wycats: ember will *need* to be 100 files in a module world and we need a way to address that

timbl: I like the idea that a zipfile is an optimization and you don't refer to it directly

Yves: (question about using packaging)

slightlyoff: (discussion of layered packages)

Yves: was asking about local resources vs. over the network. wycats was targeting the network case

<JeniT> background: https://github.com/w3ctag/packaging-on-the-web

wycats: so there's one constraint -- having read Jeni's document -- discussing relative URLs
... there's the idea that you might have a directory and want to have urls like "../" continue to work
... e.g., I have thinger/index.html and an image at ../images/foo.png

JeniT: the idea was that the package would deal with fully resolved URLs

wycats: I think the relative lookup is the normal way people will want to use the system

JeniT: are you talking about files in the package relating to each other?

wycats: JeniT's proposal requires a Content-Location header for every item in the file

wycats: I'm trying to understand...if you don't specify Content-Location, would it be relative to the root of the package?

JeniT: how could you refer to it?

wycats: relative to the root of the package?

(some confusion)

JeniT: where would you get the filename if you don't do that?

wycats: oooh....I see...that's the name of the file

Yves: if you want to use a plain zip (vs. multi-part MIME), you could have locations relative to the root of the document

wycats: how is this backwards compatible?

JeniT: you can continue to have the files on disk on the system

wycats: so if I have a script tag, and that's the entry point to the package, how do I refer to the files in the package?

JeniT: you point to the file you want to reference from your script element and it's loaded from the package

wycats: I'm confused...did I miss something?

JeniT: I tried to split up "how do we package it" from "how do we refer to it"

slightlyoff: JeniT's proposal is an overlay

<JeniT> https://github.com/w3ctag/packaging-on-the-web

<JeniT> https://github.com/w3ctag/packaging-on-the-web#requesting-a-package

(note that the <link> tag is how the package is ref'd)
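
[Editor's illustration: a minimal sketch of the proposal as discussed; the rel value, media type, and boundary here are placeholders, not settled syntax.]

    <!-- in the referencing page -->
    <link rel="package" href="/app.pack">

    # a possible response for /app.pack: multi-part-style, with the
    # boundary sniffed and a Content-Location header naming each part
    Content-Type: application/package

    --boundary
    Content-Location: /app/index.html
    Content-Type: text/html

    <!doctype html> ...
    --boundary
    Content-Location: /app/images/foo.png
    Content-Type: image/png

    ... binary data ...
    --boundary--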

slightlyoff: this gives rise to my issue; you need a way to get the original HTML from the package

wycats: yeah, if I have the full app in the package, how do I reference index.html out of the package?

JeniT: I was thinking you'd have all the files on your filesystem, and you'd reference other files in packages from whatever your entry point is

slightlyoff: this creates a challenge for signing

JeniT: you can include all the files in the package, no?

slightlyoff: yes, but what do I do for going to app.example.com?

Yves: it's the inverse to 209

JeniT: you could use 209 if it could be generalized

timbl: if you send multi-part mail, there's one part that's the "top-level" message in the package
... there's a clear distinction that the cover note is the thing being sent

JeniT: yeah, it could just be the first one in the package

wycats: I have a question about the <link>
... if I have <a href="...."> in a document...do I know where that goes? do I have to block on the package download?

slightlyoff: this is why I brought up extent

JeniT: I think it has to go on the <link>

wycats: this seems like my custom separator...
... if you say something like...<link rel="pkg" extent="/app/">
... <link rel="pkg" extent="http://example.org/app/">

JeniT: the other thing that was raised on the mailing list was some sort of glob

slightlyoff: I'd like the globbing to match up with whatever service workers do

wycats: this is the moral equivalent of what I'd proposed before
... you (dherman) objected because you wanted to find a separator to avoid the ergonomic cost of having to put the link tag in everywhere

dherman: we want the common programming patterns not to require opt-in
... *every* time you use jquery, you need a link tag?

Yves: this is a pre-HTTP/2.0 solution

wycats: we don't think this is a transient thing...will need it for a long time
... imagine if this configured the browser (ES 6 module) loader
... JeniT's making the point that URLs matter and that if we use this overlay, we don't have to change the URLs

(discussion of spriting, packaging, etc.)

(wycats outlines JeniT's proposal to dherman)

dherman: something about this is making me worried
... this reminds me of Alex Limi's proposal...but that was on a much more fine-grained basis that required configuration

<annevk> http://limi.net/articles/resource-packages

(discussion of CDNs, annoyance of configuration)

annevk: this seems isomorphic to alex limi's proposal

Yves: the server could be smart about sending a header + package

JeniT: I looked at this. It does require a lot of configuration

<JeniT> see https://github.com/w3ctag/packaging-on-the-web#package-requests for that discussion

timbl: we should define a level of conformance about server capabilities

wycats: I think mod_spdy is eventually going to be a good solution for sites with dev-ops teams, etc.
... I hope that GitHub Pages et al. get the benefits

JeniT: annevk, you talked about this being tried (and failing) before
... is the environment different such that this will work better?

dherman: I think I can answer; this is very close to Alex Limi's proposal. There was another proposal (lost to time) that required more configuration. Alex Limi's work stopped at "SPDY will fix it". We're re-opening the discussion

dka: what is the deliverable?

wycats: the open question is Alex's concern about top-level?

dka: back to deliverable. What needs to be done? Who needs to do it?
... where?

wycats: I think the TAG is fine

dka: TAG is fine but there's an IPR issue, perhaps
... I'd like to understand what the shorter-term deliverable from us is

wycats: I missed a bunch of your proposal when I read it...I can perhaps accent the bits that help those like me who might have missed those bits

JeniT: it was written as proposal and not spec
... we can re-frame it as a spec and re-do it

wycats: was there a reason why, in the encapsulation section, you rejected message/http as the boundary/header in multi-part MIME?

JeniT: it was just kludgy
... it's horrible with multi-part because you have to have this parameter on the Content-Type. I thought, why not create a new Content-Type that sniffs the boundary instead. Semantically multi-part, but with the sniffing added.

wycats: works for me
... the concern is server configuration

(discussion of configuration difficulty)

JeniT: 3 scenarios: (missed), spriting, and serving up the whole app from the package
... in that case (the 3rd) you have to serve up the starting page and then send the package

slightlyoff: smart servers should send the package + a 209-like-thing
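
[Editor's illustration: a sketch of the exchange slightlyoff describes, borrowing the draft 209 ("Contents of Related") idea mentioned earlier; the exact status code and headers are not settled.]

    GET /app/index.html HTTP/1.1
    Host: app.example.com

    HTTP/1.1 209 Contents of Related
    Content-Location: /app.pack
    Content-Type: application/package

    --boundary
    Content-Location: /app/index.html
    ...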

(discussion of packaging, noting URL bar identifies a single thing)

annevk: this is using fragments?

group: no

slightlyoff: you can imagine a spec going either way about blocking or not

Yves: if you have that big a package...

slightlyoff: it's not about size. It's about blocking re: preload scanner

timbl: when you say that could be expensive...

slightlyoff: not in CPU, but in opportunity cost for missed download time

(discussion of costs)

(discussion about what to do with a missing extent in the <link rel="pkg">)

(discussion about optional vs. mandatory prefix/extents)

wycats: I think we're converging on the requirement

dherman: we should check with preload scanner implementers

timbl: does this mean that packaging tools on the server need to be smart?

(discussion of headers and in-package metadata about extents)

<wycats> it's nice that you can explain the <link> tag in terms of service worker

timbl: is there a way that the requests could be redirected to packages in a stronger way? redirect to a package if we have an extent registered?

JeniT: in what I'd proposed, there was a Link header for retrieving CSV data so that for things which aren't HTML you can still reference packages

(discussion of header version)

wycats: we should make sure there's a way for the SW to handle this sort of thing as a polyfill

(discussion about the SW install flow)

dherman: so then the SW can handle this without requiring new clients

wycats: it'd be great if you could install some sort of a codec handler
... ...depending on perf
... there's likely going to be some sort of perf tradeoff

(agreement)
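
[Editor's illustration: a minimal service-worker sketch of the polyfill idea just discussed; unpackInto() is a hypothetical helper that parses the package format and stores each part in the cache under its URL.]

    // sw.js
    self.addEventListener('install', function (event) {
      event.waitUntil(
        fetch('/app.pack').then(function (response) {
          return unpackInto(caches, response); // hypothetical unpacking helper
        })
      );
    });

    self.addEventListener('fetch', function (event) {
      // serve anything under the package's extent from the unpacked cache
      if (new URL(event.request.url).pathname.indexOf('/app/') === 0) {
        event.respondWith(
          caches.match(event.request).then(function (hit) {
            return hit || fetch(event.request);
          })
        );
      }
    });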

<scribe> ACTION: JeniT to make a respec document out of the proposal [recorded in http://www.w3.org/2014/04/02-tagmem-minutes.html#action01]

<trackbot> Created ACTION-857 - Make a respec document out of the proposal [on Jeni Tennison - due 2014-04-09].

<dka> ACTION: Jeni to make a new document on package URLs. [recorded in http://www.w3.org/2014/04/02-tagmem-minutes.html#action02]

<trackbot> Created ACTION-858 - Make a new document on package urls. [on Jeni Tennison - due 2014-04-09].

plinss: a procedural question. I've heard conflicting things about whether or not the TAG can produce specs

dka: as everyone knows, the W3C operates under an RF (royalty-free) patent policy
... everyone in the TAG is here as an Invited Expert

(per the charter)

dka: from an IPR perspective, if the TAG creates a spec, we're not automatically licensed under the W3C patent policy

wycats: Domenic, e.g., isn't an employee of a member company

dka: we can produce REC track docs, but we generally move things to WGs for expertise reasons
... e.g., this is why SW needs to be produced at WebApps

(everyone agrees this would suck)

(background around why TAG doesn't have patent policy)

<dka> See: http://www.w3.org/2003/12/22-pp-faq#taglic

wycats: we've put in the work. Would be good to get to the end from here without restarting the entire debate

slightlyoff: perhaps sysapps would take the work?

[break]

<timbl> dka, http://plane.ardupilot.com/

<timbl> http://diydrones.com/ etc

<dka> Scribenick: Domenic

Integrity URLs

wycats: a good use case for integrity URLs is caching

JeniT: there are lots of caveats

<dka> ACTION: dan to talk to PSIG on whether or not it’s possible to amend the TAG charter to allow for rec track docs to run under the standard W3C IPR rules. [recorded in http://www.w3.org/2014/04/02-tagmem-minutes.html#action03]

<trackbot> Created ACTION-859 - Talk to psig on whether or not it’s possible to amend the tag charter to allow for rec track docs to run under the standard w3c ipr rules. [on Daniel Appelquist - due 2014-04-09].

<JeniT> http://w3c.github.io/webappsec/specs/subresourceintegrity/

wycats: the spec right now does say that this is a use case

JeniT/wycats: the caveats are about origins and security

<JeniT> http://w3c.github.io/webappsec/specs/subresourceintegrity/#caching-optional-1

wycats: the spec currently finds places URLs are used and adds spots where you can add an integrity="" attribute
... but finding every place URLs are used and adding attributes is a maintenance nightmare, sometimes impossible, lots of work, bad bad bad
... I was thinking this is similar to packaging URLs and we might want to solve it the same way
... maybe a Link tag?
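
[Editor's illustration: the per-element shape under discussion; the digest syntax shown follows the attribute form the spec converged on later, and the hash value is a placeholder.]

    <script src="https://cdn.example.com/jquery.min.js"
            integrity="sha256-PLACEHOLDERBASE64DIGEST="
            crossorigin="anonymous"></script>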

JeniT: maybe site-wide would be better, e.g. a manifesty thing in a .well-known location

annevk: that seems problematic
... you need this for CORS too
... there are not that many things

wycats/dherman: there are many things

annevk: APIs should take a dictionary
... the alternative is a generalized URL format that contains this metadata

wycats: we tried that with the package thing...

dherman: I'm not sure what I think ... it would be interesting to enumerate them

<JeniT> maybe use RDFa or microdata (?!?)

<JeniT> but that doesn't help in CSS

annevk: we've done the enumeration for many things, e.g. CSP or service worker

(discussion of `new Image` example)

annevk: CORS requires explicit metadata at fetch time

wycats: there are many different metadata things related to each fetch and they all use different syntax and it's hard

annevk: I agree

(meta violent agreement)

wycats: your objection to a Link tag or manifest solution is that it's too far away from the actual resource
... I'm not sure that applies to integrity

annevk: (counterexample of an image in a templating system)

wycats: fair, but the crossorigin solution (extra attributes) extended to everything seems bad

annevk: what about a fetch attribute that takes a microsyntax expressing all these things
... works in CSS and JS too
... backward compatibility in CSS works fine using fallback properties
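
[Editor's illustration: a purely hypothetical rendering of annevk's idea, invented here to make the shape concrete; no such attribute or CSS function is specified.]

    <!-- HTML: one attribute carrying all fetch-time metadata -->
    <img src="hero.png" fetch="crossorigin=anonymous; integrity=sha256-PLACEHOLDER">

    /* CSS: an older UA drops the unrecognized declaration, keeping the fallback */
    .hero {
      background-image: url(hero.png);
      background-image: fetch(url(hero.png), "integrity=sha256-PLACEHOLDER");
    }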

Domenic: I'm worried about JS APIs mostly... how do we retrofit

(discussion of potential solutions ... usually just add a new parameter)

annevk: srcset is a big problem
... even if crossorigin makes sense to apply to all the URLs in a srcset, integrity does not
... integrityset!?
... fetchset!?!?!?!

<JeniT> the existing design for providing metadata for URLs is RDF

wycats: I generally dislike microsyntax ... grumble grumble ...
... I think the action item is to find all the extension points and consider how to invent a microsyntax, so that integrity could be hooking into an existing thing instead of having to do this work all over again and independently

slightlyoff: how are they explaining how verification is done?

wycats: it's very short

<slightlyoff> http://w3c.github.io/webappsec/specs/subresourceintegrity/#validation-3

wycats: I feel like this should be using webcrypto's algorithms?

slightlyoff: at a minimum if a user agent provides integrity they should provide web crypto's SHA 256/512

(discussion of how webcrypto doesn't mandate things)

wycats: this whole integrity thing should be polyfillable with service worker + web crypto

(discussion of how to tie in to WebCrypto)
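
[Editor's illustration: a sketch of the polyfill direction using the real crypto.subtle.digest() API inside a service worker; integrityFor() and bufferEquals() are hypothetical helpers.]

    self.addEventListener('fetch', function (event) {
      event.respondWith(
        fetch(event.request).then(function (response) {
          return response.clone().arrayBuffer().then(function (body) {
            return crypto.subtle.digest('SHA-256', body).then(function (digest) {
              // integrityFor() is hypothetical: the expected digest for this URL
              if (!bufferEquals(digest, integrityFor(event.request.url))) {
                throw new Error('integrity check failed');
              }
              return response;
            });
          });
        })
      );
    });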

wycats: I think I am proposing a spec review of the integrity spec
... I can try to do this but need help from annevk to understand all the constraints
... there are two strategies
... we need to create a new metadata mechanism on top of URLs regardless
... we need to figure out how to express this everywhere
... our options are either "close" or "far"

annevk: I have a pretty good handle on where URLs exist
... srcset is the most problematic

wycats: we can just say it's legacy...

Domenic: already!?!

(some general incredulity)

JeniT: it is worth looking at RDF and RDFa for metadata about documents
... it doesn't help with CSS or JS but it does help with HTML

annevk: also it has many levels of indirection that will be problematic for authors

JeniT: I don't think so... you can put things inline with RDFa...
... I'd be happy to do a sketch of how it would look in RDFa

<JeniT> http://www.w3.org/TR/rdfa-lite/

wycats: I agree it would be a shame if we come up with something extremely similar but not identical

<scribe> ACTION: wycats and others to do an integrity URL spec review [recorded in http://www.w3.org/2014/04/02-tagmem-minutes.html#action04]

<trackbot> Created ACTION-860 - And others to do an integrity url spec review [on Yehuda Katz - due 2014-04-09].

<JeniT> timbl, you were talking about http://en.wikipedia.org/wiki/Extensible_Metadata_Platform

<scribe> ACTION: wycats to collaborate with annevk on a plan for general metadata on URLs [recorded in http://www.w3.org/2014/04/02-tagmem-minutes.html#action05]

<trackbot> Created ACTION-861 - Collaborate with annevk on a plan for general metadata on urls [on Yehuda Katz - due 2014-04-09].

plinss: integrity URLs are heading in the direction of (but not quite getting to) cryptographic signatures, instead of just hashes

wycats: I think slightlyoff wants signatures also

slightlyoff: I sense that there is a need to be able to verify the contents of bundles etc.
... I think a packaging system is the first step

Domenic: can someone clarify how this is different from HTTPS?

slightlyoff: HTTPS is transport-level

plinss: this would allow me to, once I get the resource, send it to you along with its signature so that you know it came from the original source
... instead of from me
... this could enable valuable other things in the future, e.g. a peer-to-peer mode

wycats: what's nice about this is that it solves the "omg too much javascript" problem, at least insofar as there are popular frameworks. E.g. jQuery gets downloaded very few times.
... Google Ajax CDN is not perfect

Domenic: is there a potential for malicious collisions?

(you'd have to find a collision, which is not at all easy)

timbl/wycats: (discussion of the incentive structure this creates for library consumers)

wycats: rapid library releases actually encourage adoption of the latest version
... I want this

https://github.com/w3ctag/promises-guide

+ outstanding https://github.com/w3ctag/promises-guide/pull/21

<dka> Scribenick: wycats

Promises Guide

Domenic: (goes through the sections of the Promises Guide)
... suggests that people say "Promise objects are defined in [ECMAScript]"

annevk: points out that people are using WebIDL anyway, which references ECMAScript already

wycats: TC39 has been defining event loop concurrency and the browser has to hook into it

(discussion about "continue running the following steps async"

(discussion about tasks, microtasks, and "queue a task")

(discussion about "run the following steps asynchronous", spawning a thread, await in ES7)

dherman: we're talking about the meta-language vs. object-language

Domenic: (explains what his spec is doing)

annevk: in a normal situation, the spec would say "enqueue a task"

Domenic: what language do we need to offer people for talking about running steps asynchronously

annevk: (talks about ways to explain how async works using the platform)

dherman: this is where the extensible web goes off the rails

dka: who is this document for?
... we need to tell people what TO do if we are telling them what NOT to do
... we should tell them what TO do first
... then tell them about antipatterns afterward

dherman: about "run these steps asynchronously"... if there's a network fetch, shouldn't you say "when such and such happens, run these steps"

annevk: (shows the XHR spec)

dherman: I'm not sure how to read this

(discussion about threads vs. running things async on the main thread)

(discussion about how algorithms can keep running after returning)

annevk: it seems like we're just discussing how to word things

wycats: dherman, what are your thoughts?

dherman: I've been doing a lot of thinking on the run-to-completion programming model
... it's fiendishly difficult to nail it down
... trying to get my head around what invariants we're trying to enforce
... this came to a head when I started fighting back against people who want to add data races into JS
... I really want to articulate why it's so critical
... I have been fighting the Balrog for years - YOU SHALL NOT PASS
... Yehuda is worried about avoiding a run-to-completion violation

annevk: I'm not trying to change anything about the run to completion model

dherman: the only way to know that is to read the spec carefully

annevk: but how can you know?

dherman: this is a constant vigilance we need to be careful about

wycats: I think we need to beef up the meta-language so it can more easily protect against invariants

dherman: yes

annevk: I would love to migrate to something better

wycats: this may be an extensible web hazard - we may have unobservable data races that we expose by accident

annevk: I would love for TC39 to take over the role of WebIDL

Domenic: WebIDL has a lot more than just some types

annevk: WebIDL provides a central place for the platform to address general meta-language questions

dherman: this is sometimes called "elaborative semantics"

(discussion about ownership of the meta-language)

dka: flagging that we have gone far beyond promises
... should we be involving more WebIDL people?

wycats + annevk: Cameron would love to have someone do this

slightlyoff: we can use this to improve the idioms

JSIDL and WebIDL…

(discussing TC39 ownership)

dherman: Allen has been doing heroic work refactoring the ES spec incrementally dragging it into the 21st century

(discussion about run to completion and garbage collection)

(discussion about GC and WeakRefs)

(discussion about why exposing GC is bad)

dherman: GC produces non-determinism
... non-determinism produces interop hazards
... injecting randomness can help, but who knows

dka: we need to have a meta-discussion around our design principles

Promises

Domenic: we need to formalize "run asynchronously"

(back to spec review)

annevk: shorthand phrases should go in WebIDL

<general agreement>

Domenic: now that I finished promises I can finish up this guide
... comments are welcome

<dka> ACTION: Domenic to incorporate comments on guide and put it into respec… [recorded in http://www.w3.org/2014/04/02-tagmem-minutes.html#action06]

<trackbot> Created ACTION-862 - Incorporate comments on guide and put it into respec… [on Domenic Denicola - due 2014-04-09].

Web Animations

<dka> scribenick: Domenic

(twirl presents his slides)

twirl: seconds instead of milliseconds; we should probably recommend milliseconds

(discussion of origin of web animations. Is it flash inspired?)

slightlyoff: Google started on this; it is nice to unify the timeline models between SVG animations and CSS animations
... the answer is, this did not come from flash

<dka> Domenic: The specs don’t reflect reality to some extent.

wycats: if people claim to layer, we should teach them how to prove it

slightlyoff: not clear why twirl recommends building rAF on top of web animations

(discussion of whether it's possible for this spec to cause data races)

twirl: I am 95% sure there are no observable races

slightlyoff: (explains the idea of the coherent freely-manipulatable model vs. the synchronization with the rendering)
... our platform still has sync APIs that cause flushing between the model and the rendering

twirl: the spec should define what happens when I do getComputedStyle etc.

slightlyoff: it does, section 5.23

<slightlyoff> http://dev.w3.org/fxtf/web-animations/#script-execution-and-live-updates-to-the-model

wycats: specifically does getComputedStyle force a flush of the model into the view

slightlyoff: it does
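
[Editor's illustration: a sketch of the flush being described, assuming the element.animate() entry point from the draft; reading a computed style samples the animation model before returning a value.]

    var player = element.animate(
      [{ opacity: 0 }, { opacity: 1 }],
      { duration: 1000 }
    );
    // forces a style flush: the model is sampled so the read is up to date
    var current = getComputedStyle(element).opacity;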

(further discussion of interaction between layout forcing and animation)

twirl: my concerns are that this is not defined in the spec, and that these two statements are confusing

slightlyoff: (explains the model and the layout/paint cycle and how changing properties of the model does not interact with layout/paint)
... style resolution can happen independently of anything being laid out or painted

annevk: I am concerned about the use of terms that don't mean anything like "execution block"

<annevk> In particular they need to define their terms and work on integrating with concepts defined outside their specification, such as tasks et al

https://github.com/twirl/spec-reviews/blob/master/2013/10/Web%20Animations.md

twirl: what next? More reviewers...

slightlyoff: the API feels odd in a few places...
... I can try to dump some of my thoughts into the review document
... I think you're right that there's confusion about how the models are meant to interact...
... the spec doesn't enunciate how layout relates to resolving styles and which of those two properties this operates on

Domenic: I can help nitpick APIs...

twirl: the spec is good, but very complex...
... grouping in particular

wycats: how coupled is the grouping stuff to the simpler stuff

slightlyoff: I think it's not that coupled and it scales and composes pretty well
... the clone thing is very odd

wycats: there are callbacks, which maybe should be .... something different.
... e.g. "finish" event?

dka/slightlyoff: we should have a call with an editor

<scribe> ACTION: slightlyoff to do a brain dump on his thoughts [recorded in http://www.w3.org/2014/04/02-tagmem-minutes.html#action07]

<trackbot> Created ACTION-863 - Do a brain dump on his thoughts [on Alex Russell - due 2014-04-09].

<scribe> ACTION: plinss to set up a call with web animations spec editor(s) [recorded in http://www.w3.org/2014/04/02-tagmem-minutes.html#action08]

<trackbot> Created ACTION-864 - Set up a call with web animations spec editor(s) [on Peter Linss - due 2014-04-09].

<twirl> http://dev.w3.org/fxtf/web-animations/#widl-Timing-easing

wycats: can easing be a general JS function instead of a string? E.g. like jQuery

Domenic: isn't it because then we'd have to have a story for executing arbitrary JS functions on another thread?
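
[Editor's illustration: easing in the draft is a string-valued timing parameter, which is what lets implementations run animations off the main thread; a jQuery-style easing function would require running author JS every frame.]

    element.animate(
      [{ transform: 'translateX(0)' }, { transform: 'translateX(100px)' }],
      { duration: 300, easing: 'ease-in-out' } // a string, not a JS function
    );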

wycats: we should ask the jQuery people who work on animations whether they could replace jQuery's animations system on top of this spec. If not we've probably done the wrong thing.

<scribe> ACTION: wycats to find jQuery animations people and ask if they could build on top of web animations [recorded in http://www.w3.org/2014/04/02-tagmem-minutes.html#action09]

<trackbot> Created ACTION-865 - Find jquery animations people and ask if they could build on top of web animations [on Yehuda Katz - due 2014-04-09].

plinss: to be clear that's not the goal of the web animations spec. It is geared toward explaining W3C specs
... we *could* augment the existing spec with more things if necessary

wycats: it is probably possible given that the existing declarative CSS animations are very close to being possible

EME

wycats: I have heard a few times from timbl that we could do this as a plugin
... my understanding: the problem with the current EME strategy is that it forces new browsers to gain keys
... i.e. it relies on side-deals between browser vendors and content providers
... and it may be difficult for new browsers or browsers with little clout to negotiate those deals
... I have heard someone advocate having someone like Google take up that burden by creating a plugin for other browsers to use

slightlyoff: we're moving to a plugin-free universe...

wycats: what we're talking about here is a plugin, undeniably. EME is a plugin.
... what if we made it an actual plugin?
... this is not the status quo because existing platforms want to get rid of flash

twirl: these "deals" you are discussing are a legal concept...

plinss: I don't think that's what Mozilla's afraid of; they want to support an open platform, and EME is not one of those things

wycats: (clarifying) if the web browser does not have the certificate to view the content, then the user cannot view the content
... but that doesn't help Firefox ship on Linux
... Firefox on Linux is not a niche case. E.g. Firefox might have to make a deal with Microsoft to have Windows expose the certificates to them

timbl: (explains tech details of how new browsers could access existing EME blobs on a machine)

<twirl> I'd propose everyone to read this document: https://www.eff.org/wp/digital-rights-management-failure-developed-world-danger-developing-world

dherman: I don't understand how this works with the threat model. The content owners are protecting themselves against users, right?

timbl: that's not it; they assume it's crackable but they want to make it harder for the regular user

<slightlyoff> scribenick: Domenic

timbl: the whole thing is a combination of software and law

dherman: but if they're moving toward these trusted execution environment models with secure bits where they control the computer...
... I don't understand how that model is compatible with a browser that the user installs on their system

timbl: there are extreme positions

dherman: isn't this direction more than is necessary to protect their content?

(discussion of measures that still leave you in control of your machine but also help make content creation economically viable)

timbl: content providers will let different quality versions of content be protected by different levels of DRM strength

wycats: a solution that stops browser UI from allowing easy downloading would get us pretty far...

<twirl> One more suggested reading: http://en.wikipedia.org/wiki/Hdcp

dherman: a good analogy is the H.264 situation with Mozilla and Cisco: an open source plugin

twirl/wycats: there's still the possibility of "Exclusive to IE" content

twirl: there's nothing we can do about that

wycats: that doesn't seem to be the goal of e.g. Netflix

timbl: you could imagine Sony films making things available only on Sony computers and TVs...

wycats: that is interesting but not necessarily what we're most worried about...

twirl: I think the content owners will not stop until they have even more control...

(discussion of how bad it can get; Sony rootkit etc.)

<dka> CF: http://www.copyrightreform.eu

<annevk> dka: https://wiki.mozilla.org/EU_Copyright_Consultation

<JeniT> this is what's happening in the UK: http://blogs.ch.cam.ac.uk/pmr/2014/03/30/uk-copyright-reforms-set-to-become-law/

(various idle dreams of a better world...)

<twirl> In my view we should strive for changing the basics of copyright law, i.e. Article 2.6 of the Berne Convention. It now states that "This protection shall operate for the benefit of the author and his successors in title." [1] A proper formula should state that: (a) intellectual property right is legal, not natural[2]; (b) copyright protection shall operate for the benefit of both authors and society; (c) the digital millennium has made traditional forms of per-copy payment obsolete. So new law should: (a) find a balance between authors' and society's rights; (b) strongly support new forms of payment (subscription services, crowdfunding, etc); (c) protect public interests, such as filling up the public domain, ensuring availability of content for educational purposes, simplifying content publishing and making it easier to publish content abroad -- and protecting the right of linking, of course.

dka: how would we design a system from scratch to give these guarantees?

dherman: what matters is the actual state of who's pushing what, and right now the question is about Netflix and Google

dka: is the current design of EME feasible for smaller content providers to use, or would the cost be prohibitive?

plinss: the cost would not be prohibitive but it would be high
... one alternative is something good enough for most people via crypto + streams + a "secure worker"

wycats: on the list Henri said this would not go far enough

plinss: I agree it wouldn't. But I. don't. Care.
... So let Google build their own nonstandard DRM system that goes farther
... we'll build our own standard DRM system that runs on top of the web platform and see which one wins on the web.

dherman: wouldn't Google easily win?

annevk: remember it's Google + Microsoft + Netflix. It seems like a nonstarter to compete with that.

plinss: I think providers will recognize they're leaving money on the table by restricting themselves to only those.

wycats/Domenic: Netflix will just tell you to use a browser that has the nonstandard thing. Whether it be Silverlight or EME.

dka: the missing part of plinss's plan is some high value content that would drive adoption of such a standard DRM system

https://plus.google.com/+IanHickson/posts/iPmatxBYuj2

plinss: we shouldn't keep EME in the W3C

wycats: is it a good world if EME gets standardized elsewhere and everyone else ships it anyway?

plinss: I think it would be good if we gave an alternative within the W3C instead
... SecureWorker is in a stronger sandbox than most but gives the promise that the browser will not be able to look into or debug the code
... so e.g. it can go fetch the keys ... or something ... and its code doesn't even need to be obfuscated

dherman: Henri was trying to explain to me why this was very bad...

wycats: because it doesn't help against the threat model problem?

dherman: the general idea of a worker that is isolated to be able to touch data that we don't want to leak to another part of the platform is possibly impossible to make work from a security perspective on the network layer. I'm not sure but a coworker told me.

twirl: there is a devil hiding in EME. It's this license request thing.

wycats: so they're mandating a rootkit on your computer?

timbl: could we have the browser do that license request instead of the root-access secret thing?

twirl: yes but Netflix would not allow this

Summary of Action Items

[NEW] ACTION: dan to talk to PSIG on whether or not it’s possible to amend the TAG charter to allow for rec track docs to run under the standard W3C IPR rules. [recorded in http://www.w3.org/2014/04/02-tagmem-minutes.html#action03]
[NEW] ACTION: Domenic to incorporate comments on guide and put it into respec… [recorded in http://www.w3.org/2014/04/02-tagmem-minutes.html#action06]
[NEW] ACTION: Jeni to make a new document on package URLs. [recorded in http://www.w3.org/2014/04/02-tagmem-minutes.html#action02]
[NEW] ACTION: JeniT to make a respec document out of the proposal [recorded in http://www.w3.org/2014/04/02-tagmem-minutes.html#action01]
[NEW] ACTION: plinss to set up a call with web animations spec editor(s) [recorded in http://www.w3.org/2014/04/02-tagmem-minutes.html#action08]
[NEW] ACTION: slightlyoff to do a brain dump on his thoughts [recorded in http://www.w3.org/2014/04/02-tagmem-minutes.html#action07]
[NEW] ACTION: wycats and others to do an integrity URL spec review [recorded in http://www.w3.org/2014/04/02-tagmem-minutes.html#action04]
[NEW] ACTION: wycats to collaborate with annevk on a plan for general metadata on URLs [recorded in http://www.w3.org/2014/04/02-tagmem-minutes.html#action05]
[NEW] ACTION: wycats to find jQuery animations people and ask if they could build on top of web animations [recorded in http://www.w3.org/2014/04/02-tagmem-minutes.html#action09]
 
[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.138 (CVS log)
$Date: 2014/05/13 08:46:21 $