See also: IRC log
<wycats> Will he be calling in?
<wycats> Domenic and I can probably handle a TC39 update
<wycats> Good to have Dave on the phone
<wycats> ETA: 17m
<wycats> Is there any ibuprofen or naproxen at the Telefonica HQ?
<wycats> Sweet
<wycats> I will need some :p
<dka> we are on skype now
<scribe> Scribe: JeniT
<dka> please call me at “dappelquist” to join
<scribe> ScribeNick: JeniT
mnot: WG last call ended a few weeks ago
… there are a few small-medium issues remaining
… should be closed in week or so
… IETF last call will take a couple of weeks
… RFC in Jan/Feb if all goes well
… large number of implementations (about 20)
… IE & IIS, Firefox very eager, Chrome is shipping, Akamai, nginx will implement, Varnish are rewriting their core
wycats: isn’t Varnish not compatible with caching?
mnot: it’s an application-specific cache, but http/2 doesn’t say anything about caching really
... Apple, we don’t know, but probably they’ll implement
… the big question is Apache; Google has given them the modspdy code base
dka: at the NY meeting, the sense of urgency was about SPDY having such momentum
… and the fact that Apple was supporting SPDY
… leading to the question of whether there would be enough momentum to get off SPDY to http/2
mnot: we were a bit confused by the Apple announcement
wycats: they’re doing it because there’s lots of SPDY that exists
mnot: Twitter have been very active, they’ve shipped very rapidly
… both Google & Microsoft have said that they’ll turn off SPDY support when http/2 comes through
… SPDY is really hacky, so they do want to get away from it
<dka> Related URL: http://http2.github.io
wycats: it would only affect super advanced sites in practice anyway
mnot: they’ve been versioning SPDY very rapidly
<wycats> proof: DEPRECATION WORKS ON THE WEB!
… after RFC there are a few things more eg opportunistic encryption
… there are a group of mobile operator vendors talking about proxies
… lots of privacy & security concerns around that
wycats: http/2 doesn’t mandate SSL right?
mnot: it doesn’t, we decided to move on and not require it
… but the browsers are saying they’ll only support http/2 for SSL
wycats: we’ve talked about making it easier to get certs
mnot: there’s an active discussion in the community about that
timbl: what kinds of plans are there?
Domenic: Cloudflare published a blog post yesterday spelling out that they’re giving free SSL/TLS
wycats: that’s great for Cloudflare customers
[off the record discussion]
timbl: I want to enable certificates signed by my family, people I trust
wycats: that restricts what content you can see, because other sites aren’t signed by those certificates
timbl: creating socially-signed certificates is only as complicated as social network sites
… the existing tools are bad, but they could be redesigned
wycats: I don’t think this is a standards issue: I think the standards are there
… I think write it and see
timbl: I think it requires some redesign, some P2P-oriented protocols
wycats: I don’t think they should be designed up front
timbl: it would be nice for the TAG to be somewhere where we can imagine a different world
<wycats> "role of the TAG" is a good discussion to have every time we have a new member
[pop]
dka: is there anything that the TAG can do to help http/2?
mnot: in the long run, the issue of the role of the network in communication is interesting
… comms between client/server is a 2 party problem not a 3 party problem
… we’ve pushed back around breaking encryption etc
dka: what timbl was saying about Verizon putting ads into pages sounds horrifying
mnot: there’s variability in cluefulness in mobile operators
… is it appropriate for the TAG to publish an opinion that says “X on the Internet sucks”
dka: it can be useful to publish a blog post or an official Finding
… we can more actively intervene to get people talking
wycats: in this case what they’re doing is against existing standards
dka: we can also use the relationship between GSMA and W3C
… when the issue is about getting information out there we can use that
mnot: in the IETF we have a position that writing words down has very little effect & regulation has little effect
… we’re moving towards enforcing what we believe technically
… eg if you think that connections should be end-to-end then use encryption & make it hard to break
… enforcement through technical means
… “standards & protocols & code is making law”
timbl: I think that’s a dangerous possible route
<dka> CF: https://github.com/w3ctag/secure-the-web
… it’s going towards an all-out battle of the robots, on who can out-design the other one
wycats: we can say what we want in the code protocols, but we have to persuade people too
mnot: it’s not that we’re making law by making code, but the way we write the protocols shapes the world
timbl: I think you need to start off with the principle/rule “your stuff shall not get interfered with”
… and if people break that “you’re banned”
… when you put in your fortress, when they find a way to break it, you need to have a paper trail
… you need to have a principle to point to
wycats: I think you need human beings talking to other human beings
… you have to get the people in the room
… most people aren’t acting maliciously, they’re just confused/misinformed
mnot: I don’t know, I think it’s a business decision, not confusion
timbl: it’s a huge income stream, it’s deliberate
<wycats> wycats: opt-outs don't affect the income stream
<wycats> wycats: this is why ads are ok with DNT opt-outs
dka: we talked about writing “Secure the Web”
… about crypto everywhere etc
… it’s out of date now, but we could update it
mnot: there are lots of people writing, we could throw our weight behind them
Domenic: there are things we can say around API design
wycats: we can talk about the browser as the platform, and security in the browser
<dka> e.g. PROPOSED RESOLUTION: privacy-centric capabilities in the browser should be exposed over SSL…
Domenic: our recommendation should say “browsers should tell the user when NX records are compromised”
wycats: there’s a way that user agents could surface the sniffing/spying more effectively
mnot: there are ways to tell the user when the CA isn’t one built in to the OS
… you get a different icon in the location bar if the CA isn’t one built in to the OS
… that’s a step forward
… the pushback is that users ignore that UX
… but there are users for whom it is important
wycats: Chrome makes it hard to click through now
… NXDOMAIN hijacking should be pretty easy to detect
mnot: easier than captive portal detection
<wycats> we could standardize a domain like: iamhijacked.com :P
dka: is there an action we could take as the TAG?
… should we be moving towards minimisation, only allowing privacy-infringing information passed through secure channels
mnot: we should definitely talk about privacy-sensitive features as the TAG
wycats: Alex has the idea that we could do more with hosted apps if we let them opt in to stricter security
dka: the balanced approach: to have access to the camera, I’ll give up something
wycats: eg not running third party scripts
dka: this runs into permissions, which we’ll discuss Tue afternoon
wycats: there’s a lot that’s already in the platform, there’s a social problem: we need to let people extend the platform
mnot: “prefer secure origins” is just about preferring HTTPS
dka: the thing about third-party ads is important
… people don’t want to go to HTTPS because it prevents them displaying the ads & limits their revenue
wycats: because if you opt into HTTPS you opt out of mixed content
dka: moving to HTTPS has major implications
Domenic: our position should be “privacy sensitive features”: these are things that have the ability to, if they’re man-in-the-middled, compromise people’s privacy
timbl: I worry that browser vendors are slapping on security constraints
<wycats> sigh
… the mixed content thing that forces abandoning HTTPS or abandoning HTTP
… is there a model of the world in which we could access HTTP things from HTTPS pages, then fine
<Yves> talking about UI and mixed-content, https://bugzilla.mozilla.org/show_bug.cgi?id=838395 is not really helping
… but the way Chrome “refused” messages, I get the impression people are slapping constraints on developers
wycats: there are features on the web that expose private information, and web browsers would like to avoid leaking that
… one large mechanism for leaks is plain text HTTP
… if you just serve the HTML page over HTTPS, there are many other things (like scripts) that can access that information
… they want to offer some assurance that the private information is private
timbl: as a result, they turned off the ability to read completely public data over HTTP
… an intermediate library can’t access it either
Domenic: it’s better not to give false positives on privacy
wycats: another common way of leakage is third party scripts
… the solution the Chrome guys used for mixed content is the same as for connecting to CNN, the ‘ad view’ tag
… a way of putting in content in a completely sandboxed way
timbl: that’s fine in HTML, but when I’m writing a web app, how can I tell Chrome that I want to access stuff that’s completely public
… it’s got a CORS-Origin: * because it’s accessible by everyone
wycats: the analogous thing for ‘ad tag’ is to have a sandboxed area that enables your script to access the data
timbl: no, we have to move towards either writing my code inside browser extensions, or have a way of users saying that they trust scripts from a particular location
<wycats> this is not the correct group to have this discussion
<wycats> unfortunately
wycats: the problem is not that we don’t trust authors, it’s that the authors don’t know what they’re putting in their pages
… you have to limit what the third-party script can do
… eg prevent a third-party script from getting your geolocation
dka: you can still fingerprint or do other things in that kind of environment
wycats: the script would have access to nothing except post message
… the environment that contains the script could still pass that information
… but the author that’s installing the script will have to be explicit about what it’s giving the script
dka: what’s the status of this work
wycats: iframe sandbox and CSP give you reasonably close to lock down, there’s also realms in Javascript, which could give you a just-Javascript context
… but timbl’s original point is correct: there isn’t a model for how you avoid leakage
<wycats> it isn't coherent
<wycats> people are just stabbing around trying to lock things down
<wycats> but it's kind of random
<wycats> and driven in large part by what you can get away with
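[Ed.: a rough sketch of the containment model described above: the third-party script sees nothing but postMessage, and the embedding author is explicit about which data crosses the boundary. The allow-list shape is invented for illustration.]

```javascript
// The host page keeps an explicit allow-list per embedded script and
// forwards only those fields; the sandboxed script never touches
// powerful APIs like geolocation directly.
function forwardToSandbox(allowList, hostData) {
  const payload = {};
  for (const key of allowList) {
    if (key in hostData) payload[key] = hostData[key];
  }
  return payload; // what would be sent via iframe.contentWindow.postMessage
}
```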
[discussion that browser extensions, plugins, wifi etc insert scripts into pages the author never knew about]
Domenic: CSP is a pretty big hammer; it would be great if you could allow geolocation but not let it be passed to the third-party scripts
wycats: the ad tag which the Chrome appstore does is the correct approach
Domenic: secure origins is a clear feature which we can recommend
<Yves> tagging data in the engine... but it would require a huge change
mnot: the ‘prefer secure origins’ approach has consensus
dka: we have to address the way that impacts people who have ads in pages
<wycats> <adview> would allow mixed content that didn't trigger a mixed content warning
dka: what’s the action?
mnot: we should ratify ‘prefer secure origins’
wycats: a large number of geolocation-using sites would break with this policy
mnot: this is for new features, right?
Domenic: no
timbl: the resolution is that you should only be able to call features which are sensitive from HTTPS?
dka: yes
timbl: my point is if a developer wants to access something which is private, and therefore needs to use a secure page, it means that he can’t access public data
mnot: until it goes HTTPS
… if I’m relying on government data, I’d hope it was from the government
JeniT: GOV.UK is https, but that’s not the case for smaller organisations
dka: if I take the user’s location in script & make a request to https://gov.uk with my location in the query string, I can do that, right?
… there are still leakage possibilities
Domenic: in theory the browser could surface all the places that have access to your location, but if you allow the page to access http sites then it could be everyone
[NSA knows everything already]
dka: we could draft a statement that says “for new privacy-sensitive APIs, we want these to require secure origin”
wycats: “we support efforts by browser vendors…”
<Domenic> Draft recommendation here http://oksoclap.com/p/kCUkARlDtn
<wycats> 👍
<wycats> https://hackpad.com/Untitled-RQucKs6BLTT
[drafting in oksoclap not hackpad]
[drafting in hackpad not oksoclap]
[drafting in oksoclap not hackpad]
[hackpad requires access to Google contacts, oksoclap isn’t https]
<dka> https://etherpad.mozilla.org/qPTpp2UKfa
[drafting in etherpad]
<wycats> https://developer.chrome.com/apps/app_runtime
proposed resolution: https://etherpad.mozilla.org/qPTpp2UKfa
mnot: how should the TAG communicate?
Domenic: on EME they would have preferred us to file a bug rather than issuing an edict
wycats: when it’s more architectural, and there isn’t a spec, then a post etc would be more appropriate
mnot: just having it in the minutes doesn’t work
dka: we could put it on the blog, or on our GitHub-hosted home page
… like the http group did
mnot: have one place
… why not move the findings to github, and CNAME tag.w3.org to github
Domenic: we have lots of different kinds of things we publish: Findings, guides, short statements
mnot: TAG Findings are architectural: this is a short one but fits into that scope
<wycats> see https://github.com/tc39/ecma262
[discussion on publication process]
Domenic & mnot propose to create a home page for the TAG onto a GitHub Pages page
dka: we could use w3ctag.org
… the W3C page would be static, only change when the membership of the TAG changes
timbl: w3.org should have a record of everything that we do
… we need to make sure we have the archive
mnot: for IETF we save all the state from GitHub, using API access to the issues as JSON
… we check the JSON into the repo it’s associated with
… if GitHub were to go down, we’d need to do some work to reconstruct it
dka: keeping the history, getting it archived on w3.org, is really important
… w3.org has the longer lifetime
timbl: I will talk to the sysadmins about the TAG having the subdomain tag.w3.org
<dka> PROPOSED RESOLUTION: We support efforts by browser vendors to restrict privacy-sensitive features to secure origins. This includes ones that have not historically been restricted as such, like geolocation or webcam access.
<dka> We also support investigation into ways of preventing these features from leaking to third-party scripts within a webpage (although the exact technology to do so is unclear as yet, probably involving some combination of CSP and/or something like <iframe sandbox>).
<dka> +1
<Domenic> +1
<wycats> 👍
<plinss> +1
<twirl> +1
timbl: I’m concerned this will break code
mnot: I think a little context about medium term pain would be good
Domenic: this is an aspirational resolution
<dka> Agreed.
<wycats> architecture architecture architecture
<Domenic> Adding: "We appreciate this could cause some short and medium-term pain (breaking some existing content), and so this needs to be done with care, but it is a worthy goal to aspire to."
<dka> RESOLUTION: We support efforts by browser vendors to restrict privacy-sensitive features to secure origins. This includes ones that have not historically been restricted as such, like geolocation or webcam access.
<dka> We also support investigation into ways of preventing these features from leaking to third-party scripts within a webpage (although the exact technology to do so is unclear as yet, probably involving some combination of CSP and/or something like <iframe sandbox>).
<dka> We appreciate this could cause some short and medium-term pain (breaking some existing content), and so this needs to be done with care, but it is a worthy goal to aspire to.
<dka> now lunch
LUNCH
<dka> we will be back at 13:00 UK - for web of things discussion
<timbl> mnot, code for extracting discussion state and issues into gitable files?
<dka_air> https://call.mozilla.com/#call/awwfKaBNSJk
<dka_air> https://opentokrtc.com/w3ctag
<dka> ok that looks stable - we’re going to try to use opentokrtc.com
<dka> Seems to work best with either Firefox Aurora or Chrome Beta/Canary.
<scribe> ScribeNick: Domenic
<JeniT> ScribeNick: JeniT
<Domenic> https://w3ctag.github.io/promises-guide/
Domenic: the promises guide has been converted to look fancy
… also updated the readme, and contacted PLH to get the w3.org URL proxy to here
… at which point I can update the links
… there are a few issues but none are particularly big
wycats: is this where you’re adding the ECMAScript++?
Domenic: no
dka: there’s nothing here that states the status of this document
… and talks about the stability of the document
… and the fact that it’s going to be updated etc
… also you have it as public domain
… I’m happy with that, don’t care, PLH might say it should have a W3C something or other on it
Domenic: I wanted to use the W3CTAG logo
… this is using Tab Atkins bikeshed tool
… people like it and have been using it
wycats: I think we should get streams done, because people will be creating promises that should be streams
<Yves> +1 to streams
<Domenic> http://w3c.github.io/mediacapture-main/getusermedia.html#dom-mediadevices-getusermedia
… the new Ember is based on streaming stuff
Domenic: ^this^ is now returning a promise
<dka> PROPOSED RESOLUTION: pending a couple of document boilerplate changes (document status, logo, etc…) we agree that the Promises Guide is a published finding of the TAG.
Yves: I saw the issue about using ‘parallel’ rather than ‘async’, and I didn’t understand why
Domenic: there was a lot of confusion; when we looked at how it was actually being used, it’s actually about using a different thread
… we actually mean ‘don’t block’
wycats: maybe you should say ‘concurrently’
Domenic: all of the uses of ‘in parallel’ are actually in separate threads
wycats: it’s all giving instructions to things that actually have threads
Domenic: right, this is writing specs for browser implementers
Yves: I think that’s misleading if you’re writing a guide that’s directed at JS programmers
Domenic: that’s a good point; in the document there are places ‘this is more applicable to spec writers’, it would be worth calling that out
… and say ‘in JS you would use a WebWorker’
Yves: for me, it feels that there are synchronisation things that need to happen
Domenic: currently when we say ‘in parallel’ we actually do need to double check that
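[Ed.: a minimal sketch of the distinction being drawn here: spec-level “in parallel” means the caller is never blocked, with the result delivered through a promise. Illustrative only; a real browser implementation uses a separate thread, and in JS you would use a WebWorker.]

```javascript
// Run `work` without blocking the caller; settle a promise with the
// result. Deferring off the current tick stands in for "in parallel".
function inParallel(work) {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      try {
        resolve(work());
      } catch (err) {
        reject(err);
      }
    }, 0);
  });
}
```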
dka: we should publish this as a Finding
… the boilerplate changes don’t change that
<dka> RESOLUTION: pending a couple of document boilerplate changes (document status, logo, etc…) we agree that the Promises Guide is a published finding of the TAG.
Domenic: it went pretty well; we got the word out to a lot of developers
… discussions that stood out:
… editing (guy from MS was there)
… contenteditable, intention events like ‘cut’ instead of Ctrl-X
wycats: is that because the platforms have different key bindings?
Domenic: yes
… the rest of the things are eg cursor support
wycats: instead of trying to fix contenteditable, we should figure out what we need
Domenic: that’s what they’re doing
dka: we got some good input from developers who need this
Domenic: I noticed that it’s easy for someone to take over the discussion
… in the future we should be clearer about the messaging that this is not for you to present
… this happened in a couple of sessions
dka: it’s a fine line: you have to enable people to talk about their project, but you can’t let it overload the session
Domenic: I hosted a future of JS session… hosts should be a moderator, not a leader
dka: being a moderator is the key thing: you have to be in charge of the conversation, and making sure that everyone has input
Domenic: even if the messaging is ‘we don’t want one person to take over the session’
<dka> yes we do
<dka> hold on
dka: I think we did a better job of talking about moderators at this session
<dka> https://opentokrtc.com/w3ctag
JeniT: people need to be happy to leave if they are bored of one person hogging the discussion
wycats: sometimes the people who are talking need to have that discussion
<dka> best viewed on Firefox Aurora™
<dka> (or chrome)
dka: the only other thing on EWS is that you’re talking to other people who are taking that name and running something in Oakland
… but we should write a one pager on what EWS is
… for other people who want to run them
wycats: in particular having implementers & practitioners in the same room
dka: I’ll start that document
<dka> :)
<wycats> The place where practitioners and implementors meet is exactly "The Extensible Web"
<wycats> For the record, I think Hangouts works grea
<wycats> great*
<wycats> all hail NaCL
wycats: we’re still in the process of finishing ES6
… not a huge number of changes in ES6
… change of process for ES7 is going well
… there’s a website that lists the 17 features proposed for ES7
… we’re using github
… github has list of proposals, which you can keep an eye on
<wycats> https://github.com/tc39/ecma262
wycats: we’re moving to a process where when it’s implemented we stamp it ‘standard’
… rather than creating a ‘standard’ which we then expect to be implemented
dka: this is also about the pattern of enabling practitioners to extend the web, and how this impacts standards
… this is related to what we have about the future of the future of standards, on our agenda on Wed
wycats: also in TC39, we moved the loader into a separate document
… we realised it’s intertangled with the browser loader
… need to let it evolve with the web and with node
Domenic: we also had some discussions about how to make DOM objects subclassable
… and arrived at a solution which people have varying opinions on
… but that can ship in ES6 without us having to do lots of work
… the new process is going well
wycats: almost everything that’s happening now is about locking down ES6 and moving to the new process
JeniT: what counts as implemented?
wycats: when web content relies on it, which could mean one browser
<wycats> depends on reality
dherman: it’s been the understanding of TC39 for a while that you need two implementations to advance to one of the last stages
wycats: yes, that’s in practice what’s going to happen
<wycats> my thinking is that the closer things match realpolitik the more useful they are
<wycats> so why does 2 impls matter? because it means people rely on it
<wycats> it says 2 impls
<Domenic> i agree that the new process document says 2. i find wycats interpretation is good too, possibly better.
<wycats> the more we can base in on reality, the less standards lawyering can get in the way imo
/me votes for using technology that works rather than technology you wish would work
<wycats> dherman: c
<wycats> my interpretation is derived from Brendan's general analysis
<wycats> dherman: totally agree
<wycats> dherman: I understood your point 5m ago :P
<wycats> dherman: c
<scribe> ScribeNick: Domenic
JeniT: we discussed the general issue of how to send packages of files over the web in an efficient, easy-to-deploy way
... one of the main drivers of this was JS modules
... (but not the only driver)
... the initial discussion focused on using special URL syntax
<JeniT> URL syntax: package/!/filepath
<wycats> package/!//filepath
wycats: the point is there is some separator syntax that doesn't conflict with existing web content
<Yves> what would // uris mean then? (protocol-relative URLs)
<JeniT> http://example.com/path/to/package.pack/!//home.html#section1
mnot: so the package is downloaded in toto and deconstructed on the client side?
JeniT: that is absolutely correct
<JeniT> http://example.com/path/to/package.pack
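[Ed.: the special-URL-syntax option amounts to splitting at the separator. A sketch, using the `/!/` separator from the examples above; the function name and null-return convention are invented for illustration.]

```javascript
// Split a packaged URL such as
//   http://example.com/path/to/package.pack/!//home.html#section1
// into the package resource and the path of the member inside it.
function splitPackageUrl(url, separator = '/!/') {
  const i = url.indexOf(separator);
  if (i === -1) return null; // not a packaged URL
  return {
    package: url.slice(0, i),
    member: url.slice(i + separator.length),
  };
}
```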
JeniT: so we looked at that, and wrote a readme which was
<JeniT> https://github.com/w3ctag/packaging-on-the-web
JeniT: it looked at the various options and decided this wasn't quite the right plan
... we decided on a different way of doing it which was using link relations
... there would then be a separate syntax within the HTML page which said what the scope of the package was
timbl: the scope being everything up to the /!//, in the previous proposal
<JeniT> http://w3ctag.github.io/packaging-on-the-web/
JeniT: that was written up at
^
... we then had, via dherman, some pushback (which was from Jonas I think?) but which hasn't yet surfaced on the www-tag list or elsewhere from what I can tell
wycats: explaining the critique: the proposal you suggested requires that someone navigate to or embed the URL from a document which has the context
... however you could not give people a link to the document
... for example you could not give people a link to slide 15 since we no longer have URLs for each part of the package
dherman: my interpretation of the objection is pretty close but...
... it is: the only way to understand the contents of a package is by consulting the webpage context that established the <link rel="package">; i.e. it's contextual
<JeniT> see http://w3ctag.github.io/packaging-on-the-web/#fragment-identifiers
dherman: you can't hand out the link because to get into the package they have to start from the document which references it via the link tag
<JeniT> or http://example.org/downloads/editor.pack#url=/root.html;fragment=colophon
dherman: this is a concern for the far-off future
... any interim solution will be contextual
timbl: did we discuss redirections?
wycats: we discussed lots of things that require a server but we decided it was our constraint to not require a server
timbl: you could use a 209/303 something
... the pushback for that from a lot of people was that we want the packaging system to work on gh-pages without GitHub changing their code
mnot: I have a much bigger problem with this spec, which is that we're spending a lot of effort in HTTP2 to make the web more fine-grained, because it's better for performance
wycats: we absolutely discussed at great length the objection you are about to raise
mnot: (continuing) web performance has been moving toward higher granularity because for example when you concatenate libraries it causes large downloading and parsing times
... finer-grained also gives you more efficient caching
... it's a little disheartening to see a trend in the other direction
+1
wycats: the goal of this spec is absolutely not to replace http2 or to suggest that http2 is not the future
mnot: but i was reading the goals and they're about e.g. populating caches...
wycats: there are two primary goals not handled by http2. 1) a transitional technology while people have not deployed http2
mnot: we'll see....
<Yves> archive has the advantage that you can send a coherent set of resources, like js files in a library
wycats: this packaging tech can be polyfilled with service worker, and in general browsers can be deployed to consumers faster than servers
... there are servers that will not be upgraded but people have access to what files they store there
... basically for people who have access to "FTP servers" somewhere they can dump files
... use case 2) I am ember or jquery and I want an archive I can dump somewhere that includes all the HTTP metadata and semantics
... in my view the ideal world is that I give you Ember via a package and you serve it via SPDY. But I don't have to configure it on the server, the configuration is in the package.
mnot: 1) seems iffy, but 2) could be really exciting.
<dherman> fuuuu
mnot: other question---why multipart mime instead of .zip?
wycats/JeniT: streamability
<dherman> headers!
<JeniT> http://w3ctag.github.io/packaging-on-the-web/#streamable-package-format is the definition of the format
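[Ed.: the linked draft defines the actual grammar; as a rough illustration of why a multipart-style format streams well, here is a toy splitter. The boundary handling is deliberately simplified and is not the spec's syntax.]

```javascript
// Toy multipart splitter: parts are delimited by "--" + boundary, and the
// body ends at "--" + boundary + "--". A streaming parser can emit each
// part as soon as its closing delimiter arrives, unlike zip, whose
// central directory sits at the end of the file.
function splitParts(body, boundary) {
  const delim = `--${boundary}`;
  return body
    .split(delim)
    .slice(1) // drop the preamble before the first delimiter
    .filter(p => p.trim() !== '--' && p.trim() !== '')
    .map(p => p.replace(/^\r?\n/, '').replace(/\r?\n$/, ''));
}
```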
mnot: from a performance standpoint, packages are a big footgun. But there are some possibilities too.
timbl: would you imagine caches understanding this?
mnot: similar to the service worker case; this is a security problem for people using the same use case
... note that alice's plain HTML page being attacked by bob's JS page on the same origin is a *new* security problem
JeniT: so should we go back to the URL-pointer issue dherman/wycats raised?
<JeniT> http://w3ctag.github.io/packaging-on-the-web/#fragment-identifiers
JeniT: I get what you are saying; the way it is described currently in the spec is that you would use a particular fragment identifier for the files within the package that would be interpreted as reaching into the package
... see link
<JeniT> http://example.org/downloads/editor.pack#url=/root.html;fragment=colophon
JeniT: it's ugly, but it works, addressing your issue
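[Ed.: the fragment-identifier form could be parsed roughly like this. The key/value grammar is inferred from the example above; the draft spec is authoritative, and the function name is invented.]

```javascript
// Parse a package fragment like "#url=/root.html;fragment=colophon"
// into its key/value pairs, so a single URL can reach a member of the
// package and a fragment within that member.
function parsePackageFragment(hash) {
  const params = {};
  for (const pair of hash.replace(/^#/, '').split(';')) {
    const eq = pair.indexOf('=');
    if (eq > 0) params[pair.slice(0, eq)] = pair.slice(eq + 1);
  }
  return params;
}
```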
wycats: you could also do this with client-side hacks
dherman: the reason why any of those solutions are scary is that they all rely on everybody doing extra work when they have a package to make the links continue to work
... the browser has UI e.g. right-clicking on an image and getting a URL
why don't the URLs in JeniT's proposal work for that purpose? Right-click, get fragment-identifier URL
<JeniT> or just the relative link, I don’t get it
dherman: if you put some effort into figuring out how to mitigate this cost, i would feel better. e.g. wycats's suggestion of a JS library
... my other reason is that we said in the last meeting that there isn't any reason why we couldn't eventually have both solutions
<wycats> dherman is raising issues related to "open image in new tab" kinds of UIs
<wycats> the fact that all of these assets are not universally addressable
dherman: i am not blocking this work i just have this URL issue
wycats: i just had a spidey-sense realization. people complained that they could not adopt webp because even though it worked on the web people couldn't right-click and save it and view it on their computer
Domenic: dherman I don't understand; JeniT's proposal has URLs you can get to
wycats: JeniT's proposal would allow to only serve the package if you knew you were dealing with only a new browser?
JeniT: yes. But the URL would work in any case.
wycats: but that won't work with relative URLs
JeniT: that's right
dherman: but that's OK! it creates a canonical absolute URL. Oh wait but relative addressing with a HTML file will not work...
JeniT: right, that won't work. So in the general case you create un-packaged stuff and serve that too.
dka: what is the status of the actual packages document?
wycats: maybe the right thing to do is to say the document is done ("a draft") and ask for polyfills on top of service worker
dka: right but ipr issues
... can we outline the issues and where we agree/disagree?
JeniT: we are generally happy about the streamable package format and its rough structure, with the proviso that IETF guys may have issues
mnot: yeah, multipart media types have a long history and set of corner cases
JeniT: then we have the two alternatives of package link relation or URL separator
dherman: we could go in this direction, that doesn't preclude us investigating the other direction
... "also" not "instead of"
... i also have another set of extensions that i'd like to explore when we have a chance, but that's an orthogonal point
JeniT: so the question is should we also in parallel with getting package link stuff implemented try to get the URL stuff implemented
<JeniT> https://github.com/w3ctag/packaging-on-the-web#specialising-urls
JeniT: my original feeling was that trying to specialize URLs is a bad idea and we shouldn't go there
wycats: one area of exploration we didn't do because we didn't have dherman's constraint before was separating getting a resource from inside a page that knows about packages, from having a link to a resource that is inside a package. maybe by separating them we can get new design space.
... e.g. schemes work fine, jar does it, but there were other issues
... the other issues being about embedding not relative addressing
dherman: one of the issues is http vs. https, so you need to compose
... the question is, will I ever shut up about the URL approach? I think I don't need to shut up about it for the <link> approach to proceed.
... I think it's important for the <link> approach to also solve the linking problem
... the more we can address the linking constraint the better
... I will be less worried about the need for an additional URL approach if we can be sure the <link> approach addresses that
<JeniT> http://w3ctag.github.io/packaging-on-the-web/#installing-web-applications
(discussion of how to extend the format)
wycats: what we probably want is a way to register with the service worker a decoder for certain package MIME types
... i.e. with a polyfill you could do this all you want, but if it gets implemented in browsers you'd have to reimplement the entire polyfill to get this extensibility back
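The decoder-registration idea could be sketched as a plain parsing step that a service-worker polyfill might run over a fetched package body. The boundary-delimited format and the `splitPackage` helper below are hypothetical, purely for illustration of the shape of such a decoder:

```typescript
// Hypothetical package format: parts separated by "--BOUNDARY" lines,
// each carrying a Content-Location header naming the resource inside.
// A service worker fetch handler could run this over a downloaded
// package and answer requests for the contained resources.
function splitPackage(body: string, boundary: string): Map<string, string> {
  const parts = new Map<string, string>();
  for (const chunk of body.split(`--${boundary}`)) {
    const trimmed = chunk.trim();
    if (trimmed === "" || trimmed === "--") continue; // preamble / terminator
    const [headers, ...content] = trimmed.split("\n\n");
    const loc = /Content-Location:\s*(\S+)/i.exec(headers);
    if (loc) parts.set(loc[1], content.join("\n\n"));
  }
  return parts;
}

const pkg =
  "--PKG\nContent-Location: styles.css\n\nbody { color: red }\n" +
  "--PKG\nContent-Location: app.js\n\nconsole.log('hi');\n--PKG--";
const parts = splitPackage(pkg, "PKG");
```

A real decoder would work on streams rather than a whole string, which is where the streamability requirement above bites.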
dka: next steps?
JeniT: who should I contact within webapps to push this forward?
<Yves> art & chaals, start a CfC to publish a first draft
dka: send a message to art, cc me, let's get things started
... you may wish to join webapps first
Yves: note that this work is in the webapps charter that was approved so it won't be a surprise to anyone
dka: and Yves is the TAG contact for webapps so he should be able to help
Yves: you should ask Art to start a CfC if you want to move faster
JeniT: I think it's good enough for a first working draft
dka: let's do this!
wycats: we should find someone good to do the polyfill
... like maybe Guy Bedford who has done great module polyfills
dka: can you contact him?
wycats: I will ask him to come to tomorrow's developer meetup
... this is basically http2 in the browser, in the same way service worker is a server in the browser
mnot: but it's http2 without any of the benefits
wycats: umm ... no?
mnot: i want to see some numbers
plinss: we should use the new pub process
<noah2> OK, guessed that would likely happen. No prob. at all. Best guess on end of break?
JeniT: the current draft came about because of some discussions about a year ago where we thought it'd be a good idea to put down some good practices around capability URLs
... recognizing that people are using them; recognizing that there are issues with using them; and trying to get a balanced view between people who think they're bad and those who think they're useful
<JeniT> http://w3ctag.github.io/capability-urls/
JeniT: we have a draft ^
<JeniT> http://www.eecs.tufts.edu/~noah/w3c/capability-urls/2014-09-27-Noah.html
JeniT: Noah has some comments; he has created an amended version at ^
noah2: (recaps comments from email)
... could be useful to give more "be careful" advice
<noah2> http://lists.w3.org/Archives/Public/www-tag/2014Sep/0045.html
noah2: substantive changes in section 4.1.2
... (recaps examples)
<noah2> Punchline from example section: It is essential when deploying Capability URLs to analyze risks such as these and to ensure that countermeasures are appropriate to the requirements of the application. For some applications, Capability URLs will not provide sufficient security.
<noah2> Deleted: If you have decided to use capability URLs, depending on the level of risk associated with the discovery of a capability URL, you should employ as many of the following security measures as possible:
<noah2> Replaced with: The sections above on Risks of Exposure highlight the challenges of protecting Capability URLs from unintended discovery. When considering use of Capability URLs it is essential to ensure that such risks can be sufficiently mitigated to provide the security required for each particular application. The following techniques are recommended and will in many cases provide adequate security:
dka: these changes look really good to me. i am happy with the balance
JeniT: I'm happy
timbl: did we discuss browsers indicating capability URLs?
JeniT: we did, in A "Future Work"
<JeniT> http://w3ctag.github.io/capability-urls/#future-work
<noah2> I did not intentionally renumber an appendix to be a section.
<noah2> oops
<noah2> Never used respec before
JeniT: outside the scope of this document, but in the scope of possible future work
... mnot you said you had some comments?
mnot: nah, i reviewed this a while back and was reasonably happy
JeniT: we have got this out as a FPWD (the previous version)
... there are a few bits I need help with; search for "Issue 1" and "Issue 2"
<JeniT> http://googleonlinesecurity.blogspot.co.uk/2012/08/content-hosting-for-modern-web.html
<JeniT> http://lists.w3.org/Archives/Public/www-tag/2014Jun/0003.html
<noah2> One other thing that we don't need to discuss explicitly: I listed myself as editor for the revised draft but that is not intended as a proposal that I be listed as editor of the final document (unless Jeni wants help with that)
mnot: i would shy away from specific security recommendations
Domenic: "consult your nearest security professional?"
mnot/dka: yes
plinss: but will people do that?
dka: (this is regarding issue 2 btw)
... this relates to a specific piece of mailing list feedback. did it provide anything more helpful, e.g. suggested text?
JeniT: yes, it included some stuff I incorporated, but not the hard part
... can we find someone else who's written a fantastic guide on creating such URLs? Or should we?
<mnot> http://tools.ietf.org/html/bcp106
dka: I think it's acceptable to say that it's not in the scope of this document
timbl/Domenic: there are good xkcds on this subject ;)
<noah2> You should get permission to include those images in the finding.
<JeniT> noah2: I should get permission to include screenshots?
<noah2> Said mostly in jest, but yes.
<noah2> I meant of the xkcds
<JeniT> ah!
Domenic: web crypto!
wycats: uuid.js!
JeniT: or some equivalent rubygem
... basically saying "use something that exists, don't invent it yourself"
dka/wycats: can we say "use something cryptographically secure"
<slightlyoff> Very sorry for being late. OMW
<dka> Suggested text to address issue 2: 2nd para of 5.2 - cut out second sentence and instead just say "you should use the cryptographically secure mechanisms …"
<JeniT> “you should use the cryptographically secure mechanisms available within your web application framework to create these secure random numbers, rather than trying to invent your own approach”
<noah2> Is that actually in all cases good advice?
<slightlyoff> E.g. webcrypto?
<slightlyoff> Yes.
+1
<timbl> +1
<plinss> +1
<noah2> Are there not cases where a sophisticated corporation, e.g. would know to provide something better than the app framework?
<noah2> I think it would be better to say "you should typically use"
<JeniT> ok
<slightlyoff> Thanks
<Yves> +1
<twirl_> +1
<dka> +1
<slightlyoff> It's *always* good advice.
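The agreed advice ("use the cryptographically secure mechanisms available within your web application framework") can be sketched minimally as follows. This assumes a Node-style runtime; the `capabilityToken` helper name and example.com URL are illustrative only:

```typescript
import { randomBytes } from "node:crypto";

// Draw 128 bits from the platform CSPRNG rather than inventing your
// own randomness; hex-encode for use as a URL path segment.
function capabilityToken(): string {
  return randomBytes(16).toString("hex");
}

// e.g. a password-reset capability URL
const resetUrl = `https://example.com/reset/${capabilityToken()}`;
```

In the browser, `crypto.getRandomValues` plays the same role; the point is to never substitute `Math.random` or a home-grown scheme.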
<JeniT> “issue 1: http://w3ctag.github.io/capability-urls/#managing-access-on-sandboxed-domains”
JeniT: OK what about issue 1
<noah2> I suspect the NSA might disagree when they're using it for their own work, but you could be right.
<slightlyoff> NSA has enough math majors on staff that they don't need our advice
<slightlyoff> But if they *are* looking for a piece of my mind....
<noah2> That doesn't make the "always" claim correct does it?
<slightlyoff> Indistinguishable from "always" is "always"
JeniT: I didn't understand this feedback enough to really incorporate it
Domenic: maybe get in touch with the guy who submitted that feedback?
JeniT: OK, I'll try that
... once those are done I'd say it's ready for a new public working draft
dka: shall we just say it's an agreed finding that we may amend as new information comes in?
<slightlyoff> Gimmie 3 minutes
JeniT: sure
<noah2> Wanted to say that my additions were written in some haste. I would welcome tweaks Jeni might make to either the scenarios or the wording.
JeniT: it's in /TR/ space currently
<JeniT> noah2, no problem, I’ll fix up while incorporating
dka: have findings previously been in TR space?
noah2: not sure but I believe that the ones that become RECs are
dka: so if it's REC track ...
noah2: at times we made findings notes
<noah2> There's been some history of the community not noticing Findings. Recommendations do get more visibility IMO and have more force, but you sometimes have to work through more process stuff (good and bad) to get there.
dka: probably best to publish it *somewhere*, TR space may not be the best space
... (recapping) TODO:
<noah2> BTW: another usage scenario is: user gets Capability URL, user accesses Capability URL, the corporation for which he works logs all URI requests, sysadmin finds the URL in the log.
dka: 2 changes from Noah's draft into updated draft
... change status from "draft" to "TAG finding"
<noah2> FWIW: the status of document in Jeni's draft says "This is intended to become a TAG finding"
<noah2> IMPORTANT: TAG Findings have historically been linked from: http://www.w3.org/2001/tag/findings
Domenic: I think the strategy we like is gh-pages + proxy on w3.org
Domenic/dka: we can also reduce boilerplate in document heading if it's not REC track, and we can CC0 it
<slightlyoff> trying to dial in. DKA, can you add me on skype? I'm "alex.russell"
slightlyoff: the EdgeConf panel on different image formats was relevant, about how it's not possible to polyfill new image formats or decompression algorithms on the client side in an extensible way
<slightlyoff> what is an image that doesn't fit in memory?
Domenic: with service worker you can do this for images that fit in memory. With streams, you can do it for images that don't fit in memory
<slightlyoff> does any browser currently handle images like that?
a video :). Or a 5 MB animated gif on a mobile phone
<slightlyoff> yes
<slightlyoff> navigator.connect()
<slightlyoff> ah, yes, video
(discussion turns to exposing hardware capabilities via service workers to allow hardware vendors and not browser vendors to add capabilities)
<slightlyoff> Domenic: my example was something like new sensors mapped, e.g., to http://device.example.com/apis/v1/thinger/
wycats: generally we haven't been spending much time talking about ways to add new device capabilities into the platform
+1
<slightlyoff> so you'd be able to do navigator.connect("https://device.example.com/apis/v1/thinger")
<slightlyoff> .then(...)
dka: let's pop back up from the specifics for one second...
... we want to get the word out to web developers about what we concretely mean with these extensible web design principles
<slightlyoff> speaking of scrolling....
dka: divide things into "Good" vs. "Needs improvement"
wycats: anything to do with scrolling is "needs improvement"
<slightlyoff> Apple has done as good thing in iOS 8 with scrolling
<slightlyoff> they've started to dispatch events such that you can override scrolling more easily
<slightlyoff> and create your own behavior
dka: caniextend.it
<slightlyoff> and that's a great thing for extensibility
<slightlyoff> I think it's a good idea
+1
dka: suggest Wednesday afternoon we collaboratively edit the document
<slightlyoff> +1
<slightlyoff> does IE optimize for ASM?
slightlyoff: not yet but under development
wycats: asm.js is good
Domenic: CSS is needs improvement but there are a few steps in the right direction
wycats: web components are not widely implemented yet
<slightlyoff> oh, wow, it looks like IE 11 is doing a solid job on ASM
dka: I think web components would be in the positive side
<slightlyoff> custom elements are also the easiest thing to polyfill
<slightlyoff> don't forget O.o and Mutation Observers
<slightlyoff> we spent years on those = )
<slightlyoff> and they talk about how other systems' semantics work
<wycats> O.o has miles to go before we sleep
<wycats> mutation observers are amazing
slightlyoff: O.o and Mutation Observers help you understand how you would implement, from the browser's perspective, the dirty checking etc. that you do within a browser. They help you get to grips with how the internal systems watch what would otherwise be closed behaviors
... so e.g. attribute and property values changing the behavior of a system
<slightlyoff> CSP is interesting
<slightlyoff> we don't have an API for it yet
<slightlyoff> but it's good that it controls parts of the system that aren't otherwise controllable
<slightlyoff> well, the CSP WG was responsive
<slightlyoff> I wrote
<slightlyoff> https://infrequently.org/2013/05/use-case-zero/
slightlyoff: getting CSP to have more API is still in progress and is going well
<slightlyoff> well, I wouldn't say "going well", I'd say "the window is open for us to collaborate with them again soon"
<slightlyoff> ARIA is sad-making = (
<slightlyoff> we don't have a lot of data
<slightlyoff> e.g., how do I know that I'm in high-contrast mode?
<slightlyoff> how do we know the zoom level?
<slightlyoff> also, we should be calling out screen reader vendors for not doing this sort of thing correctly. They get a free pass too frequently
Domenic: accessibility is not very extensible
<slightlyoff> it's not shark-infested. It's opinion-infested. Data is immune to opinion
dka: internationalization?
wycats: the i18n ecmascript api seems OK? I am not sure.
... high-level APIs like the compass are underpinned by low-level sensors like a magnetometer. it would be ideal to expose the lower-level sensor
<slightlyoff> whoa: http://blog.cloudflare.com/introducing-universal-ssl/
dka: URL parsing?
Domenic: yes. It previously was locked up in <a> and <base>. It is now exposed.
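A quick illustration of the now-exposed parser (the URLs are illustrative; the `URL` constructor behaves the same in browsers and Node):

```typescript
// The parser once locked up inside <a> and <base> is now a first-class
// API: relative references resolve against an explicit base URL.
const resolved = new URL(
  "../styles/site.css",
  "https://example.com/docs/page.html"
);
```

`resolved.href` and `resolved.pathname` then give the absolute form and the resolved path without any DOM involvement.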
wycats: archaeology -> extension
dka: we should document that topic on this site
<slightlyoff> document.all's falsyness is just a bug
<slightlyoff> Brendan was too clever by half
Domenic: the DOM. Being fixed from two sides: making DOM APIs more JavaScript-ey, and giving JS more capabilities to do powerful things like the DOM does
<slightlyoff> we need to turn it off and call it a mistake
wycats: yes, there are no longer host objects; there are exotic objects and users can create exotic objects
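The "exotic objects" point refers to mechanisms like ES6 Proxy, which let user code script behaviors that were once reserved for host objects. A minimal sketch (the defaulting behavior here is just an example):

```typescript
// A Proxy whose "get" trap supplies a default for missing properties --
// user-defined behavior that previously only host objects could exhibit.
const counts: any = new Proxy({}, {
  get(target: any, prop) {
    // Undefined properties read as 0 instead of undefined.
    return prop in target ? target[prop] : 0;
  },
});

counts.a = 1; // a plain set, stored on the underlying target
```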
<slightlyoff> there *is* a complexity cost to the system for all of these corner cases
<slightlyoff> and something like the document.all hack needs to die in the fire of history
<slightlyoff> very sorry about that = |
<Yves> +1
<slightlyoff> will be there
wycats: will caches be implemented in Canary before TPAC?
slightlyoff: yes
... (behind a flag)
... we also have a polyfill
<slightlyoff> also, Jake's polyfill lives on: https://github.com/coonsta/cache-polyfill
dka: how to schedule this?
mnot: 15 minutes intro, 1-1.5 hour session?
dka: longer!
wycats: yes, many hours
mnot: as long as there is a definite start time
dka: all afternoon after lunch? (monday)
dka: would it be bad to overlap with webapps?
Yves: probably yes
mnot: schedule for TPAC is 11am-3pm ad-hoc meetings
plinss: a lot of WGs are scheduling joint meetings during that time
<slightlyoff> QOTD: "TPAC agenda has no concept of lunch"
dka: starting at 1, going until end of day
mnot: some people might have to leave at 3
(general agreement)
slightlyoff: will we allow people to drop in who aren't officially attending TPAC?
dka: i will ask, unsure whether it will work
<slightlyoff> hah. Good point.
dka: there are concerns about there being enough muffins
slightlyoff: we should think harder about how to make it not an issue
<slightlyoff> that said, I think it'd still be valuable
<slightlyoff> getting browser vendors nearer to actual deployed tech is good
<slightlyoff> sorry
<slightlyoff> mute
<slightlyoff> muted
dka: current plan of record is april in san francisco
... however, suboptimal for mark and me
... mnot's proposal was june in melbourne
mnot: except it's the middle of the winter
Yves: there was a proposal for Paris as well
<darobin> I need a heads-up some time in advance to make sure it works out, but I should be able to host a meeting at Le Tank (a nice co-working space)
mnot: do we want to host another EWS event in April? The last one felt a little unfocused.
dka: that's kind of the idea, as opposed to a normal conference...
Domenic: I think the strongest sessions are the ones where implementers are involved
mnot/wycats: agree
dka: open to changing the format, making it more relevant, ...
Domenic: maybe make use of the EdgeConf software? It was very useful
(discussion of details of colocating with Fluent)
<dka> adjourned
Present: Dan, Domenic, Tim, Peter, Sergey, Mark, Yehuda, Yves (remote)
Regrets: Alex, Dave
Agenda: https://www.w3.org/wiki/TAG/Planning/2014-09-F2F
Date: 29 Sep 2014
Minutes: http://www.w3.org/2014/09/29-tagmem-minutes.html