Technical Architecture Group F2F

30 Sep 2013


See also: IRC log


Present: Dan Appelquist, Tim Berners-Lee, Yehuda Katz, Anne van Kesteren, Sergei Konstantinov, Yves Lafon, Peter Linss, Noah Mendelsohn, Alex Russell, Henry S. Thompson
Regrets: Jeni Tennison
Chairs: Dan Appelquist, Peter Linss
Scribes: Henry S. Thompson, Alex Russell, Yves Lafon


Agenda planning

YK: Zip URL discussion exciting because it involves fragid and semantics and I was hoping JT would contribute

DKA: Let's go through the agenda topics and see which ones we would like JT involved in

YK: RDF and XPointer experience relevant. . .

[Minutes suspended while chairs manage agenda planning]

HTTP updates

DKA: Multipath TCP?

YL: HTTP1.1: new versions published recently, in the run-up to Last Call

<Yves> https://tools.ietf.org/wg/httpbis/

YL: HTTP2 also has a new release, with stuff on compression and upgrade path (from 1.1 to 2)
... Some requests to put some text in p1 of HTTP1.1 about security and interception

<noah> Curious: my impression was that HTTP 1.1 is mainly clarification, with no (other) functional changes. Is my recollection correct?

HST: Noah, yes, that's right

YL: Server push is one of the new things in HTTP2
... It's a way to give to the client proactively things that might be useful

NM: Not true push, because the connection has to be there first, initiated by the client

AR: Still not what was requested

NM: Right

YL: It's new wrt the HTTP1.1 module

HST: What would it mean for a client to say "I support this"?

YL: I think this is the most important thing for the TAG to consider in HTTP2
... Push is a bit similar to ZIP archives, in that you may get more than asked for in either case

NM: If this changes things' names, then that's an architectural issue

DKA: Hold that thought for the Zip Archive discussion

YL: There are some experimental implementations out there . . .

AR: What are the things you think are important for us to look at in push?

HST: Remember pre-fetching indiscriminately -- bad thing -- that kind of vulnerability shouldn't be opened up again, for example

YK: What about script concatenation as a parallel?
... So isn't this just moving that into HTTP instead of a hack

NM: Well, at least the server doesn't have to lie any more

YK: Yes, it sucks for caching if you lie
... So that opens the question of how this interacts with caching

AR: So how do you send cache controls back to the server early in a connection, when you don't know what might be pushed at you?

DKA: Is there a worked example comparison, for example between script concat and push?

AR: This is a job for Bloom filters, perhaps

YK: So, broader question is what additional client->server communications are allowed/required to manage push

AR, YK: [start to do design]

HST: Pop this until we can have someone from HTTPWG to help

YL: I'll see who I can find

PL: I'm also worried that it breaks conneg

AvK: Server first sends a push promise, to which the client can say 'no'

TBL: NNTP, anyone?

<Yves> https://tools.ietf.org/html/draft-ietf-httpbis-http2-06#section-6.6

<annevk> http://www.mnot.net/talks/http2-challenges/spec.html#PUSH_PROMISE

HST: Clear this needs our input

DKA: We'll try to do this later in the week if we can find someone from the HTTP WG who can join us

<annevk> wycats_, The header block describes the resource

<wycats_> annevk, The issue is that you need a bunch of handshaking now. The server can't push until it gets an answer back from the client, which can be slow/flaky/etc

<annevk> wycats_, It can push actually, there's no requirement for a handshake

<wycats_> annevk, it has to send the PUSH_PROMISE first

<annevk> wycats_, it just allows for cancellation, potentially late

<wycats_> annevk, yes, how do you not see how that is problematic?
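
The race being debated can be sketched in a few lines. The frame names (PUSH_PROMISE, DATA, RST_STREAM) are from the HTTP2 draft, but the objects below are illustrative stand-ins, not an HTTP2 implementation:

```javascript
// Server-push sketch: the server announces a pushed resource with
// PUSH_PROMISE and may start sending DATA without waiting for the
// client, so a client cancellation (RST_STREAM) can arrive too late
// to save the pushed bytes.
function pushResource(path, body, clientCancelsLate) {
  const framesSent = [{ type: 'PUSH_PROMISE', path }];
  // No handshake is required before pushing the body...
  framesSent.push({ type: 'DATA', path, body });
  // ...so a cancellation that arrives late wastes what was already sent.
  const wastedBytes = clientCancelsLate ? body.length : 0;
  return { framesSent, wastedBytes };
}
```

Here pushResource('/app.css', css, true) still sends both frames: the cancellation only prevents further use, which is the slow/flaky-round-trip concern in the exchange above.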

HST: on con-neg - I did some data gathering.

HST: I looked at two weeks of traffic through our proxy -- the University (of Edinburgh) default setting is that student traffic goes through the proxy -- and found very little 'reactive' conneg.

YK: I know that anyone using jQuery with Rails is using [accept header] conneg.

HST: yes, I can't detect that, that's 'proactive' conneg.

YL: are you able to see if there was a vary header?

HST: no. Not logging headers - just status codes.

YL: you have no way to know for sure if conneg occurred in the server.

YK: i know that conneg happens with rails apps, but I don't know how often it happens.

HST: I intend to push back to the HTTP working group -- re: 'reactive' conneg -- to say "this never happens." I don't have any evidence that people use it. We don't talk about it in webarch. It ought to be fixed between HTTP 1.0 and 1.1.

YL: Not sure you have a good way to figure it out.

HST: Yes you do -- if you get a 300 response. I have about 150 of those out of 75M http: GET requests.

<annevk> http://tools.ietf.org/html/draft-ietf-httpbis-p1-messaging-24#section-2

<Yves> ht, see https://tools.ietf.org/html/draft-ietf-httpbis-p1-messaging-24

<HST> http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-22#section-2

HST: There is also the question of the philosophical introduction at the beginning of p2
... I would prefer to see it pulled or fixed, but fixed is much harder

YK: If we can't agree on what to put in, we should take it out

NM: But if you leave it out, you open the door to conflicting interpretations

HST: I haven't yet been willing to submit an issue on this topic

TC39 report

YK: We achieved consensus on a new scheduling process by which TC39 works
... It will look more like the W3C process
... The primary goal is to allow features to progress independently if they don't have complex dependencies

NM: Is this a move away from big blocky releases?

YK: No, not like that -- not lots of little specs
... Library-like modules, yes, maybe

<annevk> "We have CSS as example of doing it wrong." hear hear

AR: But the integration step is crucial still

AR: We're also trying to stop re-opening topics if possible

YK: So we work on pieces individually, through the process to 'recommendation'
... Implementers can implement anything that gets to 'rec'
... You can't get to 'rec' with a dependency on something that isn't a 'rec'
... Then once a year, when ECMA meets, we'll take all the 'rec's, put them into a release, and send them to ECMA
... some questions remain about tooling, which would make the 'editor' more like an editor and less like an author
... Also, putting much stronger requirement that actual spec. language is produced much earlier
... This will, I hope, get some features that have been languishing out the door
... Some technical stuff:

1) Creation of JS object

YK: This currently requires some C++ code somewhere, which causes a problem with creating subclasses
... So there is now a hook at pre-creation time, which allows an override
... More flexibility, but default behaviour is unchanged
... Now possible to subclass DOM objects

AR: @@create

<annevk> (@@create is a temporary name in the specification, it'll prolly end up being an identifier you get through Symbol.create)

AR: Gives a hook for putting privileged/exotic behaviours in a box
... Resolves a long-standing fight over who owns the constructor body
... because there's no concept of a partially-initialized object, not easy to resolve
... but now we can
... Now puts pressure on DOM spec authors to specify special-requirement constructors via @@create
... rather than insisting on their own (C++) constructor

<slightlyoff> ...and not providing them to JS

<slightlyoff> And this is good!

<slightlyoff> "new" should be universally useful across the DOM

AvK: [scribe missed]

<wycats_> TL;DR We need mixins!

<timbl> (I am sad to see a language using @@ which I have used for many years in all programming as something which will not compile to leave marker where stuff is unfinished but there we go, I guess I'll just have to use three :-)

<annevk> timbl, Allen is using @@ for exactly that purpose ;-)

<annevk> timbl, it's not part of ECMAScript

AR: DOM has exposed classes, which are not concrete objects
... This doesn't fit at all with the JS worldview
... The good news is that we have solutions for the concrete classes
... We still don't have a solution to the abstract class problem

<slightlyoff> ...but "new" should still work with "nonsensical" things

YK: The Event class is an example we can't manage yet

DKA: Exactly where is all this process stuff?

YK: Consensus, but not fully specced in prose yet -- aiming for the end of the year

DKA: What's usable now/soon on the technical side?

AR: spread/rest operators, iteration (for...of), Maps/Sets

YK: Hard to predict when things will come, because different browsers roll stuff out at different rates

AR: Implementation timing and progress is particularly complicated wrt WebKit and JS

YK: Note that @@create is a name in the spec, which will need a concrete manifestation in code, so we will have an indirection via Symbol
... which hasn't quite settled down yet
... The goal of Symbol is to expose properties guaranteed not to collide with user-land string-named properties
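
A minimal sketch of that guarantee, using the Symbol API as it later settled (the class and property names here are illustrative):

```javascript
// A symbol-keyed property cannot collide with any string-named
// property from user code, and it stays out of ordinary
// string-keyed enumeration.
const size = Symbol('size');

class Collection {
  constructor() {
    this[size] = 0;     // symbol-keyed: invisible to Object.keys
    this.items = [];
  }
  add(item) {
    this.items.push(item);
    this[size] += 1;
  }
  get count() { return this[size]; }
}
```

Even an instance property literally named "size" would not clash with the symbol key; Object.keys on an instance reports only 'items'.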

<wycats_> http://wiki.ecmascript.org/doku.php?id=strawman:unique_string_values

<annevk> twirl: http://people.mozilla.org/~jorendorff/es6-draft.html#sec-

YK: What's controversial is sharing symbols across 'realms' -- e.g. multiple iframes, which share a heap

YK: This is potentially a TAG-relevant issue -- architectural, in other words

<wycats_> import { create } from "std:object"
constructor[create] = Node[create]

AR: Same-origin iframes is really the only place this happens

YK: I argued it can't occur across workers
... all there is there is a serialization problem

<slightlyoff> (and I concur)

2) Modules

YK: We've punted the question of bundling to the network layer

YK: We were trying to find syntactic mechanisms to assist concatenation including modules
... But that just wasn't winning, so we pushed that over to the network layer
... You do eventually get a URL when you import a module via shortname

AvK: Normalize/Resolve/Fetch/Translate/Link

<annevk> https://github.com/jorendorff/js-loaders is the GitHub around ES module loaders

TBL: Normalize?

YK: Map shortname to another, e.g. ../foo

<annevk> https://people.mozilla.org/~jorendorff/js-loaders/loaders.html#section-21 has a description of the loader hooks
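
A hypothetical sketch of what a Normalize hook might do -- the function name and the registry are illustrative, not the loader API:

```javascript
// Normalize step sketch: relative references resolve against the
// importing module's URL, while bare short names go through a
// registry mapping, so code can depend on "jquery" without
// committing to a particular location.
const registry = new Map([
  ['jquery', 'https://code.jquery.com/jquery.js'],
]);

function normalize(name, parentURL) {
  if (name.startsWith('./') || name.startsWith('../')) {
    return new URL(name, parentURL).href;   // relative to the importer
  }
  return registry.get(name) || name;        // registry-mapped short name
}
```

For example, normalize('./util.js', 'https://example.com/app/main.js') yields 'https://example.com/app/util.js', while 'jquery' resolves through the registry.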

TBL: Base URIs?

AR: No such thing

TBL: Why not?

AR: Don't want to constrain runtime to have that notion, e.g. when running not on the Web

YK: [you can choose to ...]
... Suppose you want to depend on jQuery -- w/o committing to a particular location

TBL: Checksum?

YK: Implement that using the Fetch hook

TBL: W/o that, how to avoid being hacked?

YK: Not the business of the language

AR: JS lives at an uneasy location in the pipeline
... everything that isn't 'magic'
... You could do almost everything above the bits off the network in JS
... but it is just a language
... with provision for hooks which appeal to the gods

TBL: Do the gods always have to be invariant?

AR: No
... The language itself has no networking, it's up to the environment

TBL: I'm worried about that

YK: We do think the environment should provide a loader wrt certain invariants
... In particular, wrt the browser environment

TBL: I prefer to ask for modules with http: URIs

AR: You can do so

NM: What about Atom -- they wanted the simplicity of shortnames, with the explicitness of URIs
... so you get a registry that does the mapping

YK: That's what we're supporting, via conversion hook

NM: But I want a spec level hook

AR: Anything in the spec that uses a shortname will have its expansion given in the spec

YK: Any requirement on the browser is outside the language, but could for example require all input to Normalization be URLs

TBL: Current advice is that inline (script/CSS/image/...) are bad, better to have URLs for them
... So I would like to use directory structure to organise this
... So if I have a script directory, I want relative references in something fetched from there to be relative to that directory, just as it already works from the CSS subdir

YK: I agree that ./ and ../ should work that way in the browser, and we think we have a way to do that

TBL: And get consistency wrt running on the Web, or with Node.js

YK: Yes

HST: I hear violent agreement -- there's a place to do the mapping for module names, just as it happens for CSS, except that it works for "./foo", not "foo"

TBL: I'm willing to use "./foo" to distinguish what I want from 'magic' module names

YK: The default behaviour will look relative to the embedding HTML

NM: The user doesn't know about binding jQuery to something

YK: We anticipate that e.g. jQuery.com will give you HTML to paste in to register the mapping
... we may need an HTML5 element for that

NM: I was looking forward to a situation when some shortnames get so well-known that it doesn't need that -- then the Atom experience gets relevant

YK: We haven't supported that yet

TBL: So a server can map 'jQuery' to their own, hacked, implementation

AR: Only in pages from that server, yes

NM: So more like a namespace prefix than a namespace URI

AR: Everything happens fresh with each page context

SK: Can a script redefine the binding?

YK: Yes, if you don't want 3rd-party scripts to do that, you should use CSP [Content Security Policy: https://dvcs.w3.org/hg/content-security-policy/raw-file/tip/csp-specification.dev.html]

<slightlyoff> See: http://www.html5rocks.com/en/tutorials/security/content-security-policy/

3) Service-worker

YK: May override network layer
... Service workers provide for asynchronously intercepting http requests, e.g. for local caches
... But we don't want caching behaviour implemented in the module loader
... so we don't make that easy -- you should use the browser (service worker) for that
... Fetch hook talks to the service worker, which gets a response object which is opaque

AvK: You don't need service worker

AR: True, all you need is the cross-origin-response object, however that's provided

YK: The c-o-r-o then goes through the pipe from Fetch to Translate at which point the browser does the unwrapping

[scribe gives up]

4) Promises

YK: Since the module loader wants promises, people really pushed it forward, and so it looks likely that promises will be in ES6 and used by the module loader

AR: Chrome is shipping support now

YK: And given @@create, promises are subclassable
... Having a primitive for async data is good, but that actually makes getting streams even more important, so that we don't end up using promises as a hacked workaround
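
With creation hooked at the language level, a Promise subclass works with the built-in combinators. A sketch (modern engines ship this via Symbol.species rather than a literal @@create property):

```javascript
// Subclassing the built-in Promise: combinators like .then()
// return instances of the subclass, not plain Promises.
class TrackedPromise extends Promise {}

const p = TrackedPromise.resolve(41).then(x => x + 1);
// p is an instance of TrackedPromise (and, by inheritance, of Promise)
```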

AR: For example web crypto needs streams for decent performance, as well as a promise for total success/failure

YK: No streams in ES6, but we hope to get it out independently via the new process

DKA: What's the alternative right now?

AR: Lots of ad-hoc, mutually incompatible solutions

DKA: So there will be a lag between promises shipping, and streams shipping
... and we need some prose saying "Don't use promises when what you are really doing is streams"
... So we'll come back to this when we talk about the API Guide

AR: I'm hoping to get started on streams soon

DKA: You described TC39 as an oversight group, with Champion Groups -- is that how this works?

YK: Yes, so Modules came from a Champion Group, and got consensus from the TC


TBL: Promises help, but don't get me everything I need
... Is there more coming, e.g. catching an error

AR: Yes, if you also make it asynchronous
... [gives detail]
... Async is poisonous

TBL: That is, it spreads?

AR: Yes

YK: Example on board:
... 'spawn' primitive, which returns a promise

function() {
  return spawn(function*() {
    try {
      var post = yield $.getJSON('/posts/1');
      var comments = yield $.getJSON(post.comments_url);
      return comments;
    } catch (e) {
      // handle failure of either request
    }
  });
}

AR: This gets us out of the 'triangle of doom'

Adjourned until 1300

Potential future issues

[Jeff Jaffe joins the meeting for this topic]

<dka> Topics for today: http://www.w3.org/wiki/TAG/Planning/Sept-2013-F2F#Monday

1) Do Not Track

JJ: chairs are planning to work issue-by-issue

YK: why do we think this won't get gummed up at the end instead of the beginning?

JJ: we don't know. I look at it this way: the web community as a whole needs a DNT standard. Everyone seems to agree with that statement.
... having no DNT standard seems untenable
... my hope is that if we go through issue-by-issue and work diligently to get consensus, stakeholders will acknowledge that no standard is worse
... there's a lot of skepticism
... we've documented the skepticism using a poll in the WG. It asks: should we unlink TPE/Compliance, only TPE, or give up?

YK: a broad question: my uneducated reading of the tea-leaves is that advertisers will accept something nobody will use, while privacy advocates want something most people would use
... that's the core disagreement

JJ: I think it's true

YK: "advertisers would be willing to go along with anything as long as nobody uses it"

JJ: there's tremendous value to me that I get tracked (as a consumer)
... a good DNT standard puts the onus on advertisers to help build understanding
... also, what do we allow?
... (use, data handling, etc.)

JJ: there are a lot of unknowns here

YK: are advertisers willing to expose the mechanisms that they use to track?

JJ: not really

YK: so the discussion is foreclosed?

JJ: a challenge
... all our standards are voluntary, and advertisers may choose to ignore them. If that happens, you move to the next level: public outcry, legislation, etc.

DKA: in the context of the CA legislation and EU privacy law, there are more controls -- signposting, cookie warnings, etc.
... controversy results

YK: isn't it self evident that login cookies != tracking cookies

TBL: to people making privacy legislation? NO.
... the difference...the extent that FB tracks with the "like" button when you don't even use it....most don't realize

HT: historically, there are very different assumptions on different sides of the Atlantic. The US side assumes "opt out", while the EU default is "opt in"
... the expectation has been that there would be different defaults depending on geography. Has that been resolved?

JJ: the consensus of the WG is that the default should be neither opt-in nor opt-out and that there has to be definitive user action about "opt in" vs. "opt out".
... more similar to the EU mode.
... the requirement that there be a null default is being ignored by IE, which sets it to "DNT" by default
... the advertisers are saying "since there was no user action, we have license to ignore user signals since they're not reliable"
... the default question is bizarre because when I happen to buy a PC that happens to have a browser in it, and it's being set up by the Geek Squad from Best Buy, or the IT team sets it up...who the user is is difficult to define

YK: IIRC, the IE startup screen DOES show a choice and you have to click "next".
... so WRT the letter of the law, IE does require user action
... it's the canonical opt-in/out issue

JJ: I agree. But from the advertiser's perspective, they don't care...they think it doesn't

DKA: is there something the TAG can do here?

JJ: glad you asked....
... I don't think there's a lot the TAG can do collectively
... many of the views are far apart. The WG needs more people who can think deeply about it and contribute technical ideas and text to help bridge them

YK: when you say that everyone needs a DNT standard, can you say what you think everyone would agree to?

JJ: that users should be able to specify "DNT", that there should be only one signal, and that if I say "I don't want to be tracked" we should be able to attach meaning to that signal
... the advertisers agree that there should be a compliance part, but want to define it
... you can google the DAA guidelines that they've published, and presumably they'd agree to those
... some people think legislation is the right answer
... many vendors are fearful that they'll have to implement 30 versions if we don't get something unified

YK: seems plausible that legislation will create scattered standards

DKA: the oven broke in our house, so I've been searching for ovens...now everywhere I go on the web, ovens are following me, and I have DNT turned on

YK: why do you think this is bad?

DKA: I'm less likely to buy an oven that's stalking me

JJ: the real problem is they're not tracking you enough! If they tracked more, they'd know you're the sort of person who would be turned off by that

NM: I don't think you can generalize
... I don't like the fact that I can't find things without parties I don't know tracking me
... if I were looking for information about bars and not ovens, there are things like disease treatments, etc. that are much more private

YK: there are folks with ad platforms who are experimenting with remarketing and have found it effective

[TAG applies Brandeis court rules]

JJ: there are attacks against privacy and security

DKA: wendy will be joining us tomorrow
... DNT won't protect anyone against bad govt actors, and that was before we knew how bad the govts were

2) Relations with TC39

DKA: we just finished with TC39 topics...would be good to talk about working together

YK: we brought up the prospect of TPAC, but the time and travel made it a non-starter
... last time it sounds like TC39 was given a room and that there wasn't much engagement
... that said, Norbert Lindenberg was much more positive
... they felt they weren't participating
... we represented that wasn't the spirit of this invitation

JJ: last time was before my time, but when we invited TC39 to Shenzhen, our intent was that they could be observers and interact however was most effective

[ agreement that we need to try again for next year ]

JJ: the existing bodies that we've got do a pretty good job -- better than, say, the UN would...but we have an obligation to reach out, to be effective in making the case that we're effective stewards. Having TPAC in China is good for that
... next year is likely to be Santa Clara.
... that's the 20th W3C anniversary too.


JJ: as we're setting up the agenda for the Plenary, it'll be a little different, and it'll be 60/70% unconference and a bit more plenary time
... we thought it would be beneficial for the community to get an update from the TAG
... there was a lot of feedback from the AC that they hadn't heard from the TAG

NM: what have we been doing? We needed some space...what's happening now?

JJ: there was some euphoria thanks to contested TAG and AB elections...now the community wants to hear what's new
... there's a pent-up interest in understanding what the results have been...you can do that however you like, but there's a sense from the PC that we'd like to hear from the TAG

YK: we've done fewer things as concrete deliverables, but are taking on a stronger coordination role
... Brian Kardell's post regarding twirl's election outlined many of those activities, and work like Promises should be viewed as TAG deliverables

DKA: having a lot of talking heads on stage might be less useful than something like an interactive Q&A

NM: what if the chairs presented what they think the priorities are + some Q&A
... the communities you're working with know you're there, but the rest of the world isn't hearing enough

JJ: the survey said they don't like talking heads but they enjoy the Q&A

AR: could do a google moderator + some live questions

4) AB work on chapter 7 process revisions

JJ: for those who don't know, Chapt. 7 walks you through the publication process: FPWD, WD, LC, CR, TR
... some people like it, some people think it's overstated and hurts agility
... the AB has spent a lot of time over the past couple of years trying to figure out how to make it more agile...fewer process steps

<jeff> https://dvcs.w3.org/hg/AB/raw-file/default/tr.html

JJ: combines LC + CR...don't see why they need to be different
... also removed the Proposed Recommendation step
... also we've tried to remove the semi-prescriptive steps and move to something like a guidebook

YK: does the new step say when to start implementing?

JJ: no, not any more

YK: the blink process doc cites the W3C process document

AvK: you might be conflating shipping with implementing?

JJ: we wanted it to be more agile and to align with the documents that relate to it (e.g. the Patent Policy)
... there might be things we're not aware of, but the document is at the level of maturity where we should be getting more eyes on it

<dka> http://www.w3.org/community/w3process/

JJ: Chaals has been a great editor, issues are being worked through, and at TPAC we're likely going to be at LC-alike status
... if the TAG could look it over, the feedback would be appreciated

DKA: how should we provide that feedback?

JJ: the AB feels responsible, wants to solicit feedback, but doesn't want to feel beholden to everyone...is voting on issues

DKA: joint call? what's the right way?

JJ: might be useful if a review turns up issues

DKA: i think we should take a look at this

NM: one of the things I used to observe is that there are official uses for the process steps, but that communities use the process in different ways, flagging things perhaps later than others

JJ: some of the text under "last call" signals that the WG thinks it's done and is looking for feedback

AvK: well, should be reaching out too...many WGs ignore that...

<annevk> http://www.w3.org/2005/10/Process-20051014/tr.html#last-call

<annevk> "the Working Group believes that it has satisfied significant dependencies with other groups;"

JJ: it's much less explicit. There's now a characteristic of "wide review". You need to document and demonstrate wide review

<annevk> and in particular

<annevk> "A Working Group SHOULD work with other groups prior to a Last Call announcement to reduce the risk of surprise at Last Call."

<annevk> never happens

AR: we're trying to ensure that we have a crack at specs as they're going through the pipeline and identify architectural issues during that process - should the document say that the TAG should be one of the groups that groups should be seeking review from?

NM: might be good to be able to demur

AR: true

YK: in my community, having a checkpoint is good because many specs start and don't produce an end product...today I wouldn't get involved before LC because it seems like a reasonable place to get started

NM: there's a tough balance. W3C has been a bit waterfall, top-down-ish, and people have pushed back...lots of bottom-up now. Now early implementations have a high barrier to change.
... (e.g. Geolocation which was reviewed at the earliest point)

NM: people will be reluctant to change if there are many users

AVK: browsers may change their process. Things behind flags are likely to be iterable.

YK: we're far too nice to web developers about truly experimental features. My expectation is that it'll break if it's new. We think we're awesome at it, but in practice things change and break all the time. It's crazy to care about it.

AvK: that's not true

YK: it is!
... people love to whine about it, but as a practical matter, if you use something new it's still likely to change

[ disagreements ]

NM: it has been historically problematic that early drafts are "frozen" in impls

JJ: some people review early, some review late...

HT: can we move to a new topic?

DKA: we're likely done
... we should review and feed back

AR: hopeful note: we CAN change things, in the way that we have for WA

JJ: there's a weekly call in the CG

YK: some of us ran on a platform of extensibility
... it's about providing low-level hooks without waiting lots of time to define high-level features
... SW is one area that shows this area

JJ: if there's some fundamentally different way that the TAG wants people to think about the platform, perhaps you should be taking 15 min of your hour to present this
... after reading the [Extensible Web] manifesto, went to Mozilla, and asked "what should I do differently?", and they said "nothing"

YK: it's a reflection of how people working in standards are now thinking, and a thing to link to and point at

NM: one of the roles of the TAG is to document Web Architecture
... you're describing this as an emergent OS, and there's a point of view that says you might want to outline organizing principles

AR: I'm hoping that's what the TAG can do

NM: I'm hearing bottom-up things...if there are things you want to say at a higher level, it might be worth articulating them

JJ: what's the message to chairs? should they be inviting people with different skills to their WGs?


YK: Alex's perspective is that web developers tend to have lots of JavaScript experience and that's valuable

AR: do we have a plan?

JJ: was hoping part of the Chair's plenary schedule would include something about this and describe what it means and how to incorporate it into the process

YK/AvK: reasonable

DKA: should be lined up with whatever the Extensible Web CG will be presenting

<scribe> ACTION: wycats : work to align with the Extensible Web CG] [recorded in http://www.w3.org/2001/tag/2013/09/30-minutes.html#action02]

<trackbot> Created ACTION-830 - : work to align with the extensible web cg] [on Yehuda Katz - due 2013-10-07].

[ discussion about timing ]

<wycats_> http://lists.w3.org/Archives/Public/public-nextweb/2013Sep/0005.html

<wycats_> that's the public-nextweb post

Miscellaneous/Web of things

[ discussion of offline, how it relates to extensibility, etc. ]

JJ: nothing else we really need to talk about today that I need to be here for...director's decision about HTML extension licensing doesn't require me...
... web of things: tons of articles about "internet of things"...it's heating up and we're likely to run a workshop on this next spring. Different points of view about what they are -- some think they're sensors that emit data -- some think they're new devices that will have UIs.

DKA: no common consensus about "internet of things", let alone "web of things"

JJ: if anything called "internet of things" exists 5 years from now that doesn't work with the web, we should be ahead of it...

[ break, return to Service Workers in 15 min ]

Service Workers

AR: it's impractical to take complete content offline; you might want to take only the important emails, the latest blog entries (etc.) with you
... content is the thing you synchronize; offline is bootstrap + sync
... AppCache is manifest-driven, not scriptable
... and it included caching issues
... Service Worker acts as an in-browser proxy-cache
... programmatic control over requests/responses/caches
... fetch is URL-based, not limited to http: URLs (e.g. data:)
... managing upgrade -> upgrading the application logic (then associated resources)
... event-based, you can define your own listeners
... you can also intercept a request and generate a response in script
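
The interception pattern can be sketched outside a browser; the cache and network below are plain stand-ins, since in a real Service Worker this logic would live in a 'fetch' event handler:

```javascript
// Cache-first sketch of Service Worker fetch interception: answer
// from a local cache when possible, otherwise fall through to the
// network and populate the cache for next time.
async function respond(url, cache, fetchFromNetwork) {
  const hit = cache.get(url);
  if (hit !== undefined) return hit;   // serve from the local cache
  const response = await fetchFromNetwork(url);
  cache.set(url, response);            // populate for next time
  return response;
}
```

A second request for the same URL never touches the network, which is the "in-browser proxy-cache" behaviour described above.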

<annevk> Fetch: http://fetch.spec.whatwg.org/

YL: does the cache act as an HTTP cache? (i.e. do you have expires, for example?)

AR: no, you can build one on top of that, it's just primitives
... there is also space management; beforeEvicted and afterEvicted would give information

NM: most applications are doing a "clear offline db" rather than purging, or asking what to purge

AR: this is synchronization, not an on-disk offline cache
... Service Worker is a tool to create such an offline cache
... the controller can support alarm events
... as it's not an HTTP cache, it can cache things beyond normal expiry

AR: there are two phases of upgrade
... 1/ install
... 2/ activate, post-installation, used to do cleanup/data upgrade/etc.

DKA: 'install' may be confused with installing an app, like on your desktop

AR: some people want to have the service worker present at first run; that has issues, as it's a piece that doesn't have any UI
... caches might be available to pages outside of SW

AR: there is no fallback mechanism for fetch in workers
... there are big questions
... sync technologies
... like shared data models

<annevk> WIP: https://github.com/slightlyoff/ServiceWorker/

DKA: is there buy-in?

<wycats_> https://github.com/slightlyoff/ServiceWorker/blob/master/explainer.md

Anne: There are Mozilla people looking into this.
...we might look at push before fetch

DKA: how about packaged apps? Will Service Worker obsolete the current model?

<dka> Noting - on "installed" web applications - it may be important to be able to expose a URL but some developers really need to be able to take over the whole screen without a URL bar - e.g. game developers.

<wycats_> I really wish ChromeOS had a URL button next to the maximize button I could use to get URLs in max mode

RFC 3023bis

<annevk> http://tools.ietf.org/html/draft-ietf-appsawg-xml-mediatypes is the draft

HT: one of the major reasons for RFC3023bis was that, wrt text/xml, a higher-priority RFC said that all text/* media types default to ISO-8859-1 rather than the UTF-8 that XML wanted

HT: that got fixed after 3023 was published, and that was one of the main drivers for publishing an update
... there was also nothing about fragment identifiers

YK: what's the best place to learn about fragids + XML?

HT: there is no official story!
... actual practice for barenames is written down in the draft...it's very simple. It has JeniT's blessing and includes what the IETF accepts regarding suffix registrations (the "+xml" registrations)

TBL: there's a disjoint set of people writing fragment identifier specs vs. the specs that use them. It seems cleaner to have one reference the other.

HT: that's more relevant for RDF than for XML

AVK: but HTML does define it

NM: there's years of TAG discussion on this

<annevk> HTML even defines its MIME type

<annevk> so brave

HT: that was all background. What I need your help with is character encoding: one of the most complicated topics on the web
... when it breaks, users get really upset.
... getting it right matters
... in the case of XML, there are 3 sources of information about the encoding of an XML mime entity:
... charset http parameter
... the XML encoding declaration (usually in the first line)
... the byte order mark

HT: it takes 2 bytes (UTF-16) or 3 bytes (UTF-8)
... the XML spec goes into considerable detail about what to do in the absence of external information (just encoding decl or BOM) but almost nothing about what to do when there is external information
... there is further irritation because it goes into the most detail regarding how to figure it out in a non-normative appendix.

AVK: it also doesn't constrain the number of bytes to be sniffed...you could pack the front with whitespace before the XML declaration

HT: late in the day, after we went to Working Group Last Call, this got called to my attention: if there is external information, how do you determine the correct encoding?

HT: there are 2 lines in the non-normative appendix that say the external info should say what is authoritative
... the prose I inherited for 3023bis doesn't answer this problem
... I looked at what the HTML 5 spec says and did some experiments, and everyone is agreed that the BOM is authoritative
... if the BOM and the declaration conflict, it's an XML error
... if there's a BOM, it tells you what the encoding is

AVK: first you determine encoding looking at HTTP
... then you send to decoding library
... and it tells you what it got from the BOM
... and if it gives you one, you use it

HT: any transcoding process that doesn't transcode the BOM is horribly broken and can't be recovered
... so the BOM is higher precedence than anything else

AVK: if there's a mismatch you get an error

AR: what sort of error?

AVK: parse error

HT: the remaining question is: if the charset says one thing and there is no BOM and the document says another thing, what do you go with?
... there are cases where no error will arise if you decode it using either

AR: this seems like the sort of case where you should have a defined order

HT: the 2 use-cases that are often referred to are: XML-unaware transcoders
...(where the charset is correct and the string in the doc is incorrect)

HT: the other is a common server configuration that defaults the charset to something which is often wrong and which the user can't fix
... this broken charset pushes us in the direction of trusting the XML declaration

PL: how common is it that there's a transcoder in the way?

HT: that pushes us away from the implementation consensus
... most things seem to go with the charset param

AVK: for HTML/CSS, HTTP takes precedence
... I think we should do that here as it's consistent
... there's not that much benefit to specifying charset at the HTTP level
... i think it should be BOM, HTTP, then XML decl

HT: I'm happy with that too
... the reason I'm trying to gather support, is that it's not as simple as in-band vs. out-of-band

AVK: the way to think about it is that the in-band thing only looks at the bytes, not the BOM
... this is the same set of steps we're using for HTML, CSS, WebVTT, etc.
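The precedence AVK proposes (BOM, then HTTP charset, then the in-band XML encoding declaration) can be sketched as follows. This is a minimal sketch, assuming the caller has already parsed out the HTTP `charset` parameter and any XML declaration value; BOM detection here only covers UTF-8 and UTF-16, which is what the discussion concerns.

```javascript
// Sketch of the proposed precedence: BOM > HTTP charset > XML decl.
// bytes is an array of the entity's leading bytes.
function chooseEncoding(bytes, httpCharset, xmlDeclEncoding) {
  // A byte order mark wins over everything else.
  if (bytes[0] === 0xEF && bytes[1] === 0xBB && bytes[2] === 0xBF) return 'utf-8';
  if (bytes[0] === 0xFE && bytes[1] === 0xFF) return 'utf-16be';
  if (bytes[0] === 0xFF && bytes[1] === 0xFE) return 'utf-16le';
  // No BOM: the HTTP charset parameter takes precedence...
  if (httpCharset) return httpCharset.toLowerCase();
  // ...and the XML encoding declaration is the last resort.
  if (xmlDeclEncoding) return xmlDeclEncoding.toLowerCase();
  return 'utf-8'; // XML's default when nothing else is stated
}
```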

HT: the thing I'd like to know is whether there's any precedent for taking the BOM as authoritative, but I haven't found one
... very few specs mention the BOM

AVK: all the web platform specs use it
... there's no IETF precedent

HT: thanks. Glad to have this in the minutes...will ask for an endorsement in due course
... Silence is not consent at the IETF
... there's someone who's responsible for documents at the IETF
... and they won't like me for this
... thanks

AVK: if you have an invalid byte sequence, it will bubble up to an XML parse error, but what is an invalid byte sequence isn't defined except for a number of encodings

HT: the answer was that it was up to the character encoding spec to define what an error was. If they failed to do that, it wasn't XML's problem.

AVK: so you embrace the interop issues?

HT: yeah
... ISO 2022 has a strong idea of a valid escape sequence
... but I don't know about shift-jis
...not sure if it defines error conditions

HT: I don't know if anyone looked at that

AVK: with most encodings, you'll get a superset
... you'll have a 94x94 table, with some cells not filled in

YK: a few round-tripping issues

AVK: yeah
... other than that, XML is pretty tied down about what's a well-formed doc

<dka> Adjourned

Summary of Action Items

[NEW] ACTION: wycats: work to align with the Extensible Web CG [recorded in http://www.w3.org/2001/tag/2013/09/30-minutes.html#action02]
[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.135 (CVS log)
$Date: 2013-10-10 14:47:28 $