See also: IRC log
(introductions)
mnot: chairing HTTPbis, liaison
from IETF to W3C
... "Web Linking" spec is an attempt to respecify Link:
header
... lots of requirements for link-based protocols
... and typed links
... for example in HTML rel="stylesheet"
... Atom used link relations as well
... and there's a registry and an XML syntax for links and link
relations
... eg copyright statements, next/previous links
... wanted to revive Link: HTTP header
... to convey links in headers rather than body of the
message
... HTML had linking, Atom had linking, and they weren't
matched
... RFC provides a model that you can serialise in various
ways
... needs a context, a type, and a target
... (which you could map to RDF if you wanted to)
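[Editor's note: a minimal sketch of the link model described above, where every link has a context, a relation type, and a target. The header value, the URIs, and the parsing code are illustrative assumptions, not taken from the Web Linking spec.]

    # Illustrative only: turn a Link: header into (context, relation, target)
    # triples, the abstract model described above. A real parser must handle
    # quoting and extra parameters more carefully; relative targets are left
    # unresolved here.
    def parse_link_header(context_uri, header_value):
        links = []
        for part in header_value.split(","):
            pieces = part.strip().split(";")
            target = pieces[0].strip().lstrip("<").rstrip(">")
            rel = None
            for param in pieces[1:]:
                name, _, value = param.strip().partition("=")
                if name.lower() == "rel":
                    rel = value.strip('"')
            links.append((context_uri, rel, target))
        return links

    # Example: a response for http://example.org/chapter2 carrying two typed links.
    header = '</chapter1>; rel="previous", </style.css>; rel="stylesheet"'
    for context, rel, target in parse_link_header("http://example.org/chapter2", header):
        print(context, rel, target)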
jar: is that called out anywhere?
mnot: not particularly
... and so to registration
... link types being a particular example
... we felt there should be one registry
... the HTML groups wanted to use a wiki
... we felt that was too freeform, and noisy
... and could lead to changing semantics in a
backwards-incompatible way
... so we tried to address their concerns in the registry
... but they (the HTML group) didn't feel it was an appropriate
thing to use
... time passes
... the link relation registry is a lot of work to
maintain
... every interaction requires discussion with the author
noah: the overhead is high but the rate isn't high
<timbl> http://www.iana.org/assignments/link-relations/link-relations.xml
mnot: we would like to see a higher rate, but can't support it at that overhead
jar: this is the same issue in journal publication
mnot: the question is whether
value is being added
... we have a common system of expert review in IETF
... the underlying question is what is registry for?
... some people see it as a gating function: if it's not
registered, then it won't be used
... to prevent stuff that is bad from being used
... but the gating function makes people less likely to use the
registry
noah: so people go ahead and do it anyway, without using the registry
mnot: we discovered this was
common to registries for media types, link relations, HTTP
headers and URI schemes
... talked to Ned Freed
... who runs the media type registry
... saw that people were misinterpreting the function of
registries
... actually the registry should reflect what is in use
noah: is there a middle ground?
mnot: for a lot of people,
registry is just a barrier to get past
... especially because the work is all up-front
noah: no one says they would deploy more quickly if it was registered
mnot: there's no benefit in the registration
masinter: it just exposes you to criticism
timbl: text/n3 is an example for
me
... the best practice was to use Turtle, but people would use
application/rdf+xml because it was registered
mnot: people who pay attention to standards might care
timbl: getting media types into Apache does have a big effect
mnot: Apache put lots of stuff in, it's driven by the market
timbl: So Apache's mime.types works and IANA doesn't -- why doesn't IANA use the same process as Apache?
mnot: we want to create a
virtuous cycle, for example with machine-readable registry
data
... for example for link relations that lets you add attributes
to registry entries
... so that a browser can see whether it's a link that could be
followed
... or archived and so on
... so if I come up with a new link relation, and it's treated
in a generic way
... the browsers can have a more automated process for adding
semantics
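[Editor's note: a sketch of the "virtuous cycle" idea, where build or QA tooling pulls the machine-readable registry and turns it into a lookup table. The element and namespace names below are assumptions about the IANA XML layout, not verified against it.]

    import urllib.request
    import xml.etree.ElementTree as ET

    REGISTRY = "http://www.iana.org/assignments/link-relations/link-relations.xml"
    IANA_NS = "{http://www.iana.org/assignments}"   # assumed namespace

    def load_link_relations(url=REGISTRY):
        # Fetch the registry XML and map relation name -> description.
        with urllib.request.urlopen(url) as response:
            tree = ET.parse(response)
        relations = {}
        for record in tree.iter(IANA_NS + "record"):
            value = record.findtext(IANA_NS + "value")
            description = record.findtext(IANA_NS + "description") or ""
            if value:
                relations[value] = description
        return relations

    if __name__ == "__main__":
        rels = load_link_relations()
        print(len(rels), "registered link relations")
        print("stylesheet ->", rels.get("stylesheet"))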
noah: we have to give Jeff Jaffe
early warning about potential larger issues
... it seems that there's a bigger story here about market
forces driving standards
mnot: what we want to do is make
IANA a suitable place for these registries
... which is complex, but worth trying
<Zakim> JeniT, you wanted to ask whether browsers will really pick up on this metadata automatically
<JT:> You spoke of browsers automatically going to registries and doing something useful with what they find. Do you have actual experience with people being willing to do that? I haven't seen it in my work on RDF.
<mnot:> My understanding is that Ian Hickson and Anne van Kesteren felt this would be very helpful in the registries
<mnot:> I think one aspect of the value is during the browser development and QA process, where those building a browser can pull from the central registry, do some work to integrate with their browser or tests, and then deploy.
<larry:> It's almost as if you want to have so much in the IANA registry that you would never want to use it real time.
<mnot:> Hmm. Probably if you do things real time there will be attack vectors.
<JT:> And the process for staying in sync is to do a pull every time they release the browser.
<mnot:> Yes, but they are on very quick update cycles now
<mnot:> Seems unlikely we'll see people automatically supporting new media type handlers.
<Larry:> I think I've seen things where apps on phones can register for URI prefixes
noah: has anyone looked at associating a JavaScript handler with e.g. a link relation
... which the browser could then use to handle links of those types
mnot: that's a bit
speculative
... we want to focus on a virtuous cycle where code can use the
information in the registry
timbl: ontologies are like this
mnot: the registry needs to
reflect what's in use, not how things should be
... you need to make the barrier to entry low
... and iteration rapid
... to incrementally improve the entry
noah: it sounds like you're going
very far over to there being no barriers
... towards something completely open like a wiki
... I think it's more a shift towards that rather than going
completely to the extreme
mnot: the problem is that expert reviewers have a tendency to try to maintain quality and prevent new entries going in
masinter: registries have rows and columns
... there's a column with 'review status' which people making
the entry can't change
... which can be 'unreviewed' or 'unacceptable'
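[Editor's note: a tiny sketch of the "rows and columns" point, where a registry entry carries a review-status column that only a moderator, not the registrant, can change. Field names and statuses are illustrative, not from any actual registry schema.]

    from dataclasses import dataclass

    REVIEW_STATUSES = {"unreviewed", "reviewed", "unacceptable"}   # assumed set

    @dataclass
    class RegistryEntry:
        name: str                 # e.g. a link relation or URI scheme
        contact: str              # who registered it
        specification: str = ""   # pointer to documentation, if any
        review_status: str = "unreviewed"   # changed by the moderator, not the registrant

    entry = RegistryEntry(name="example-rel", contact="someone@example.org")
    print(entry.review_status)    # stays "unreviewed" until a community moderator reviews it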
<noah> Philippe le Hegaret joins us in the meeting room
mnot: yes, we've talked about
having a range of statuses
... it comes down to having a process to manage the
registry
... if you have a wiki, that's going to happen because there
are going to be conflicts within the community
... and there will be cases where you can't tell what to do,
where there are two implementations using the same name with
different semantics
<mnot> http://www.w3.org/wiki/FriendlyRegistries
mnot: so we started having
meetings within IETF, and have set up a mailing list
... FriendlyRegistryProcess
... for example, turning the expert reviewer into more of a
community moderator than a gatekeeper
... for example, Apple is using a bunch of URI schemes which
aren't currently in the registry
noah: should one size fit
all?
... it might be different for URI schemes than for link
relations
... it might be that it has different technical consequences to
introduce new URI schemes
timbl: there should be very few
URI schemes added, but a larger number of media types
... because that's how the system is designed
... about switching to a model where you register what
exists
... that does avoid conflict
... I would support a very open bug tracking system on the
registry
... suppose someone registers something, and that automatically
opens up a bug tracker for them, so people could make comments
on it
mnot: yes, we talked about this
timbl: not for the process of the registration, but to register technical issues
mnot: we talked about having a
wiki page for each entry
... there's a hidden bug tracker for IANA
timbl: tracker for issues about the entry
mnot: we do want to support third
parties to register protocol elements
... so if a company hasn't registered something, others can do
it instead
timbl: ideally you want that to go through very fast, so that there can be feedback on the entry
mnot: and that comes back to the
different statuses on the entries
... to label that something has technical or legal issues
... we want to order the entries to make the good ones more
prominent
(moving on to next steps)
mnot: we've had some discussions
on this within IETF for the last year or so
... Ned is doing a revision of [some RFC]
... and revisions of link relations RFCs
... and the message header registries RFCs
... and another document to distil this discussion into "how to
set up a registry"
masinter: I have done some work on that
mnot: there are things that IANA can do without updating RFCs
jar: are they cooperative?
<masinter> in http://www.w3.org/2001/tag/2011/12/evolution/Registries.html
mnot: yes, they need the IETF to
take the initiative and they are resource constrained
... and we've talked about having Wiki pages for each
entry
... this is a long term project
... the mailing list is the main contact place
... I am the main contact
masinter: a little W3C resource would make things go a lot faster
<Yves> https://www.ietf.org/mail-archive/web/happiana/current/maillist.html
<noah> https://www.ietf.org/mailman/listinfo/happiana
timbl: is anyone within the TAG following it?
yves: I am on the mailing list
masinter: I am as well
noah: I'm assuming Larry is our point person on this
masinter: I need help
... I haven't been able to write this up in a way that was
understood
... we need some resources to make some things happen
... and I don't know how to actually make this happen
noah: what aspects of this should
be done within the TAG?
... perhaps we could just free up some of Larry's time to work
on it?
mnot: we would appreciate your review of what we have written up on the wiki
noah: so we should have one or two TAG members review it and frame a way forward
timbl: we can register our enthusiasm and encouragement
<masinter> the main problem i see is that current and previous TAG findings might be in conflict with the new directions being pursued, and that the TAG is more of a bottleneck than a group that can help.
timbl: on TAG ground, each
registry is a piece of the architecture of the web
... the TAG could dive into how much damage there is when
someone makes a new URI scheme
<masinter> i prepared some material on this subject in the slides i put together for this meeting and was unable to get agenda time to present it
noah: arguably it's not a registry discussion
timbl: for a given registry, the TAG might want to point out the damage done by a badly designed scheme
mnot: so long as that doesn't
prevent entries going into the registry
... even if the bad ones are highlighted with blinking 'Evil'
icons
noah: should we actually do this
(about URI schemes)
... is there any new work that we should kick off now?
... also, does it help to have a formal TAG resolution to
support this work?
mnot: probably not now, I just wanted to socialise this with you
noah: we're very interested in this, and we have been looking at this in an ongoing way, and we will keep on doing so
noah: a few months ago, it
started to look as though SPDY was expanding beyond
Google
... we had a technical discussion at TPAC 2011
... it looks like there could be major changes to the web due
to innovations such as SPDY
... doing everything through SSL
mnot: the SPDY guys have strong dislike for transparent proxies.
noah: and there's a privacy angle
here
... we haven't for a while looked at this level of web
architecture
... we want to decide if this is an area where the TAG needs to
do serious work, what the goals are, who is going to do
it
... and top priority is to have discussion with Mark
... we could look at Yves email
http://lists.w3.org/Archives/Public/www-tag/2011Dec/0034.html
... to which there were some responses
mnot: I would like to talk for 10 minutes and then have discussion
mnot: we started HTTPbis about 4
years ago
... the idea was to revise HTTP because we now had 10 years of
implementation experience of RFC2616
... there was a lot of knowledge locked up in people's heads
that we wanted to get written down
... with quite a tight charter
... not a new version of HTTP, no extensions, just fixing the
specs
... we're almost done
... we have about 11 design tickets open, many of those will be
closed really soon
... the editors are Yves, Roy Fielding and Julian Reschke
... we wanted to make something solid for 10-20 years
... meanwhile implementers have come up with SPDY, mostly for
performance
... addressing a number of issues with HTTP and its
serialisation over TCP (?)
... the Google guys have an implementation, both server and
client
... Patrick McManus has done implementation in Firefox, also
HTTP pipelining
... he's found it easier to do SPDY than HTTP pipelining
... it should be in Firefox 11
timbl: so it hasn't actually gone to market
yves: Opera also provided feedback on pipelining
mnot: the main problem with
pipelining is that you have to use a lot of heuristics about
when you can use pipelining
... within pipelining, requests can block
noah: but in SPDY, you can interleave?
mnot: yes
... Mike probably also covered Jim Gettys' concerns about buffer bloat
... I did an implementation of HTTP pipelining and SPDY in
Python, and SPDY was much simpler
... Amazon is using SPDY in the Kindle now, or are in process
of doing so
... Daniel Stenberg, behind curl, is implementing a C library for SPDY
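[Editor's note: a toy illustration of the blocking point above. With pipelining, responses come back in request order, so one slow response delays everything behind it; with SPDY-style multiplexing, frames from different streams interleave. The sizes and link rate are invented for the example.]

    sizes = {"big.html": 100, "tiny.css": 2, "tiny.js": 2}   # arbitrary units
    order = ["big.html", "tiny.css", "tiny.js"]

    # HTTP pipelining: responses are delivered strictly in request order.
    finish, clock = {}, 0
    for name in order:
        clock += sizes[name]
        finish[name] = clock
    print("pipelined:", finish)      # tiny.css is not complete until tick 102

    # SPDY-style multiplexing: send one unit from each open stream per pass.
    remaining, finish, clock = dict(sizes), {}, 0
    while remaining:
        for name in list(remaining):
            clock += 1
            remaining[name] -= 1
            if remaining[name] == 0:
                finish[name] = clock
                del remaining[name]
    print("multiplexed:", finish)    # the small responses finish within a few ticks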
noah: in Amazon Fire, there's the split browser, do they also use SPDY in requests out to the wider web?
mnot: I'm not sure
... Google is practically the only server implementation
... GenX has just announced implementation too
... I noticed this momentum a couple of months ago
... it's not just Google any more
... I've had a lot of private discussions with various people,
and everyone is very very interested in tracking/implementing
this stuff
... the market is choosing with its feet
... the question is whether this gets done within a standards
organisation, with interactions with other implementers than
Google
... it seems like it's necessary to take this work on
noah: we asked Mike how he felt about that, and he seemed to be keen on standardisation
mnot: we've had an ongoing
discussion about standardisation
... the team understands what it will involve
... there's a tension between getting to market and having everyone else's concerns met
... I've been talking to people about this and putting together
a proposed charter for this work
... HTTPbis is just finishing up, and I don't want to distract
from that
... but time is important for the SPDY guys too
... I've been talking about rechartering the HTTPbis group to
work on HTTP evolution
... perhaps not saying that we should start from SPDY
... Roy has been working on WAKA (?)
... but it's not been made public
... looking at that and SPDY, I think they are conceptually
very close
... continuing the HTTP 1.1 revision
... we have split up HTTPbis into components
... SPDY only requires changes in one of those components
... Part 1 of HTTPbis
timbl: what's SPDY's relationship to HTTP headers?
mnot: it compresses them, but it
uses the HTTP headers
... there are some headers that aren't needed in SPDY
... I used the same API as for HTTP when I did SPDY
implementation
... there might be other tweaks, but SPDY would be a
superset
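[Editor's note: a sketch of the header-compression point. The header block SPDY carries is the familiar set of HTTP headers run through zlib, with one compression context per connection so later requests compress very well. SPDY's actual framing and shared dictionary are not reproduced; the header values are invented.]

    import zlib

    def headers_for(path):
        return (
            "method: GET\r\n"
            f"path: {path}\r\n"
            "host: www.example.org\r\n"
            "user-agent: ExampleBrowser/1.0\r\n"
            "accept: text/html,application/xhtml+xml\r\n"
        ).encode("ascii")

    compressor = zlib.compressobj()   # one compression context for the connection
    for path in ("/", "/style.css", "/app.js"):
        block = headers_for(path)
        wire = compressor.compress(block) + compressor.flush(zlib.Z_SYNC_FLUSH)
        print(path, len(block), "bytes of headers ->", len(wire), "bytes on the wire")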
noah: SPDY would multiplex on one connection rather than having multiple connections
timbl: people have assumed HTTP
would be replaced in the future
... and therefore HTTP URIs would be replaced by other
things
... but the HTTP namespace can be persistent even if the protocol changes
... calling it HTTP 2 might be useful to avoid that
confusion
mnot: questions about spdy: URIs have always been resisted
noah: there's an interaction with
HTTPS and TLS and certificates
... there are differences between http: and https: URIs, https: uses certificates and http: doesn't
mnot: there's a set of issues around TLS, about whether the CAs are a good source of truth
noah: how far has the discussion gone?
mnot: core people in SPDY feel
that using TLS by default would improve the web
... other people don't agree
noah: there are a bunch of issues
with TLS, one of which is to do with name resolution
... it means I have to get a certificate for my server
mnot: right now SPDY says you will deploy over TLS
timbl: what about certificates from DNS Sec?
mnot: there's a bunch of work on
that, yes
... which sometimes gets governments involved
... the question is whether we can leverage it in time for
SPDY
... in the IETF people are uncomfortable with the 'S' in
'HTTPS'
... that there shouldn't be a flag in the URI that indicates
security
... but the browser people like it
... the concept of the origin server means having 'S' is really
useful
timbl: but you may want to add more constraints, not just the 'S' bit
mnot: the question is about whether you should have it in the identifier
timbl: in RDF it's a real
pain
... moving to HTTPS wreaks havoc with links
... I've wondered about using POWDER to put a label on the home
page
... to say that anything that starts 'https' should have the
same identity as if you had 'http'
... like a canonical link
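[Editor's note: a toy rendering of the idea just described. If a site declares that its https: URIs identify the same resources as the corresponding http: URIs, RDF tooling could normalise before comparing identifiers. How the declaration is made (POWDER, a canonical link, or the format mnot mentions next) is left abstract; the URIs are invented.]

    from urllib.parse import urlsplit, urlunsplit

    def normalise(uri, https_equals_http=True):
        # Collapse https: onto http: only where the site has said they co-refer.
        parts = urlsplit(uri)
        if https_equals_http and parts.scheme == "https":
            parts = parts._replace(scheme="http")
        return urlunsplit(parts)

    print(normalise("https://example.org/data#me") == normalise("http://example.org/data#me"))  # True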
mnot: I have a format for
describing canonical URIs for a domain
... but this is a real tangent
plinss: my understanding is that TLS and SPDY are orthogonal
mnot: the way it's currently
defined, TLS and SPDY are bundled
... I think there are cases where you don't want to use it
masinter: if you start with an HTTP URL, does it use SPDY?
mnot: it will upgrade the
connection
... for an HTTPS URI, there's another negotiation
noah: Google is using SPDY by default for HTTPS URIs
mnot: using TLS NPN is an
uncontroversial use
... OpenSSL isn't going to support it until the next
version
noah: how does this affect CDNs such as Akamai?
mnot: they will need to support
it
... it used to be hard because you need an IP per
certificate
... now there's TLS SNI extension (?) but it's not perfectly
deployed
... which is the Host header for TLS
... so that you can have 100 hostnames on one IP address
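[Editor's note: a minimal client-side sketch of SNI, the "Host header for TLS" mentioned above. The client names the host it wants inside the handshake, so many hostnames can share one IP address and still get the right certificate. Standard-library Python; the hostname is just an example.]

    import socket
    import ssl

    def peer_cert_subject(hostname, port=443):
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port)) as sock:
            # server_hostname is what puts the SNI extension on the wire
            # (and is also what the certificate is checked against).
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                return tls.getpeercert().get("subject")

    if __name__ == "__main__":
        print(peer_cert_subject("www.iana.org"))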
noah: how does the HTTPS work through something like Akamai?
mnot: they will need to know your
private key or generate one for you
... one of the metrics of a tracker is rapid changing of
certificates
... back to the charter
... the new bit is working on HTTP 2.0 with the goal of
improving performance
... more efficient use of network resources
... deployment on today's internet, using IPv4 and IPv6
... maintaining ease of deployment
... and balancing that with reflecting modern security
requirements
... there's a new requirement in IETF, any protocol has to have
"mandatory to implement security"
... it has to have an adequate security mechanism that
implementations must support
... the idea is to recharter the working group
... starting work around end of March
... which means HTTPbis needs to have been substantially done
by then
... I'm hoping the HTTPbis review will be fairly
straightforward
... because it's already gone through so much review
... particularly because we're not introducing new things
... we would put out a call for proposals for a starting point,
one of which will be SPDY
... there's a lot of running code out there for SPDY
... the obvious question is why recharter the HTTP WG to do
this, rather than creating a new one?
... I think it's worthwhile because we have a good working
pattern and an established community
... we've talked about Mike Belshe and Julian Reschke being
editors on the spec
timbl: the WG might have to be warned about being open to new people
mnot: we have almost complete
coverage of HTTP implementers
... this needs to be a worthy replacement for HTTP 1.1
... we've got firewall, client, library, intermediary, embedded
guys
... if I can't get it through, we'll charter a separate
WG
... this is obviously of interest and importance to the TAG
DKA: in naming it HTTP 2.0, isn't there a danger that the scope gets expanded?
mnot: yes, we've dealt with that
in HTTPbis
... and the charter is written in a focused way
... to prevent that
<masinter> http://masinter.blogspot.com/2011/12/http-status-cat-418-i-teapot.html
timbl: looks good....
… I think it's good to bring it out under an HTTP 2.0 banner -
<Zakim> timbl, you wanted to ask about the extensibility point of HTTP headers in SPDY
… I think the fine line between directly taking on board existing work and allowing people to make arbitrary changes to existing work that runs is one I understand...
jar: Jim Gettys gave us a
presentation on buffer bloat, and I wondered how this
related
... is this radical enough?
... he was talking about self-authenticating content and
things
mnot: this enables a solution to
buffer bloat
... it will use TCP better
... particularly when people pull content back onto single
domains rather than sharding
... right now the interest is in maintaining the client/server
model
noah: I think Jim spoke sympathetically about SPDY
timbl: back to the HTTP level, I've been trying to push the idea that, rather than a content-addressable system, you might be able to go back to the referrer of a link to get information
<masinter> there's a possibility that the two go together: SPDY for interactive, dynamic, personal, private traffic, and content-addressable networking for public, cacheable, distributed content.
timbl: for example, to bootstrap into a peer-to-peer system
mnot: have you seen
Metalink?
... a new link relation called 'duplicate'
mnot: an exact byte-for-byte duplicate for a given representation
<masinter> http://tools.ietf.org/html/rfc5854
timbl: the idea is to cache everything you link to, to two levels
mnot: there's a lot of
interesting things to do in caching
... the quality of cache implementations is something that
bothers me
<masinter> pursuing both simultaneously would mean you wouldn't have to rely on caching
mnot: I'm concerned around the parallel tracks of caching, for example with AppCache
timbl: caches tend to be temporary; this idea is a mutual-aid system
<masinter> wonder if some of the weaker parts of HTTP could be left behind
timbl: you'd build it into Apache and the client, and you'd be able to get data from parts of the network that were cut off
masinter: there's lots of HTTP
that isn't very good, that you could leave behind if you're not
encapsulating all of HTTP
... and others that you could promote
... for example caching based on time stamps vs on ETags
... also HTTP uses the same transport for dynamic, private,
interactive content as for large, public, static content
... right now we use Vary headers to distinguish between
them
... maybe there's some other way that would be more
reliable
mnot: I'm nervous about that, because how far do you go? we don't want two separate protocols really
masinter: you only split things off when they really don't fit
mnot: I'm not convinced they don't fit
noah: you can use a new URI scheme, that requires an early commitment
DKA: do you know of any mobile implementations of SPDY?
mnot: I don't know of any, but it
looks like a tempting target for mobile, because the connection
is used more efficiently
... it looks like a real win, and you can use SPDY in the
proxy
noah: what about battery drain on doing encryption?
<masinter> look at SPDY android
mnot: people claim TLS is not
that hard; it depends on cipher strength
... I consider TLS and wire protocols to be separate
DKA: the major battery drain aside from the display is usually the radio
noah: I propose that this is put
on the alert list for Jeff
... it sounds as if the right people are working on this in the
IETF, but I can't see that we need to parachute in
... I think we should have a contact point in the TAG, and
monitor progress
mnot: I would add that this is likely to be discussed at Paris in late March
noah: Yves, you've traditionally had actions on this
yves: I will follow this for W3C anyway
<noah> close ACTION-640?
<noah> close ACTION-640
<trackbot> ACTION-640 Frame F2F discussion of SPDY/HTTP futures closed
yves: my email also touched on
WebSocket
... most of the communication on that won't use
URIs
<noah> ACTION: Yves to prepare telcon discussion of protocol-related issues, e.g. Websockets/hybi (but not SPDY)Due: 2012-02-21 [recorded in http://www.w3.org/2012/01/06-tagmem-minutes.html#action01]
<trackbot> Created ACTION-658 - Prepare telcon discussion of protocol-related issues, e.g. Websockets/hybi (but not SPDY)Due: 2012-02-21 [on Yves Lafon - due 2012-01-13].
<noah> ACTION: Yves to track IETF efforts on HTTP 2.0 & SPDY Due: 2012-03-20 [recorded in http://www.w3.org/2012/01/06-tagmem-minutes.html#action02]
<trackbot> Created ACTION-659 - Track IETF efforts on HTTP 2.0 & SPDY Due: 2012-03-20 [on Yves Lafon - due 2012-01-13].
noah: we need to talk about things for Jeff
<mnot> https://datatracker.ietf.org/wg/dane/charter/
noah: CA system
... perhaps other security aspects
... perhaps how to deal with the TAG issues
... action item review
<masinter> you might want to also look at the long list of dead TAG findings
<masinter> http://www.w3.org/2001/tag/findings
mnot: issue with fragment identifiers and redirections
<masinter> and also "Approved findings" we no longer believe in
mnot: HTTP doesn't say which one
gets precedence
... we talked with you and at the time we said, 'there's not
good interop here'
... so didn't say what to do
... we didn't cover when the request has a fragid and the
redirect location doesn't
... since then, we've tested implementations
... and there is good interop
... from a webarch standpoint would the TAG be concerned if the
combination of fragid and redirect were determined by HTTP
rather than media type dependent?
noah: ht might have input on this
mnot: we need an answer soon
because we want it in HTTPbis
... my opinion is that from an implementation standpoint it is
bad to make it media type dependent
<noah> noah: would it be convenient for you to send an e-mail asking the TAG to consider this question? If so, i'll use that to trigger telcon discussion.
<noah> mnot: Fine, no problem, I'll send the note.
mnot: making it the same for everything is significantly less complex
<noah> Recent TAG finding on fragment identifiers in Web Applications http://www.w3.org/2001/tag/doc/IdentifyingApplicationState
timbl: this is deeply connected
with how the Semantic Web / Linked Data worked
... to me it was a shock that you could redirect to something
with a fragid in it
... how common is it to have that kind of redirection?
jar: Dublin Core
<mnot> HTTPbis bug: http://trac.tools.ietf.org/wg/httpbis/trac/ticket/295
plinss: I think this is going to become more common as you have fragments on video/audio
jar: URL shorteners
timbl: do you have RDF test cases?
mnot: no
... there's strong interop amongst the implementations we've
checked
timbl: adding the fragid to the redirected URI isn't a problem
jar: +1
... I think it's implied by RFC3986
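[Editor's note: a sketch of the fragment/redirect combination under discussion. The precedence rule encoded here, where a fragment on the Location URI wins and otherwise the request's fragment is carried over, is one reading of the interop result mnot describes, not a quotation from HTTPbis.]

    from urllib.parse import urldefrag

    def redirect_target(original_uri, location_uri):
        _, original_frag = urldefrag(original_uri)
        location_base, location_frag = urldefrag(location_uri)
        frag = location_frag or original_frag
        return location_base + ("#" + frag if frag else "")

    print(redirect_target("http://example.org/old#intro", "http://example.org/new"))
    # -> http://example.org/new#intro  (original fragment carried over)
    print(redirect_target("http://example.org/old#intro", "http://example.org/new#s2"))
    # -> http://example.org/new#s2  (Location's fragment takes precedence)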
masinter: one way or the other it has to be made explicit
mnot: I'll send noah an email and we'll take it forward
jar: I weighed in on this before and didn't get any reaction
<noah> http://www.w3.org/2001/tag/doc/IdentifyingApplicationState
noah: the TAG did quite a bit of
work in the last year on web application state and fragid
semantics
... that might be of interest
<mnot> http://trac.tools.ietf.org/wg/httpbis/trac/ticket/238
mnot: most of the browsers
redirect automatically
... because the users don't know whether it's safe to redirect
across domains
... so perhaps we should remove that requirement from
HTTP
... but it's a fairly big change
... but it doesn't reflect reality
timbl: has anyone suggested any
improvement?
... currently this is a fairly huge hole
mnot: there are so many ways to
generate requests to multiple hosts in browsers
... they are moving away from making security visible
... because it doesn't meaningfully improve security
timbl: so it's a cross-domain issue?
mnot: we could phrase the requirement to be about cross-domain redirects
<jar> n.b. the discussion is of 301 redirects specifically (with unsafe methods such as POST)
mnot: we can't make incompatible
changes in HTTPbis unless it's a serious security issue
... and this isn't serious
... right now this applies same-domain
timbl: relaxing it for same-origin makes sense
mnot: one browser prompts in a
couple of specific situations
... but most already ignore the requirement
mnot: Sec- prefix on HTTP
headers
... adding semantics to an identifier brings problems
... how do you add more prefixes?
... X- for experimental, then it gets adopted
... (that is close to deprecated)
noah: how you support
decentralised extensibility + smooth evolution from
experimental to common is something the TAG could look at
... I'm not sure we can do that well in the TAG
mnot: we're covering it a bit in happiana
jar: registries, decentralised extensibility and persistent naming are all closely related
noah: it's more about whether the community can see progress
(wrapup)
Minutes integration: Wednesday : Yves ; Thursday: JAR ; Friday : Dan
<JeniT> "we need to talk about things for Jeff
<JeniT> CA system
<JeniT> perhaps other security aspects
<JeniT> perhaps how to deal with the TAG issues
Noah: Jeff has asked the TAG to alert him to big controversies and threats to the Web that he might not know about.
… I have ACTION-568.
<noah> ACTION-568?
<trackbot> ACTION-568 -- Noah Mendelsohn to draft note for Jeff Jaffe listing 5 top TAG priorities as trackable items. -- due 2012-01-03 -- OPEN
<trackbot> http://www.w3.org/2001/tag/group/track/actions/568
… We are overdue on this action.
… We need a plan that will close in a few days for an initial note to Jeff.
… This says "5" but I don't think 5 is a magic number.
Yves: 20 items is too many.
Noah: Let's see what we have.
<Yves> https://lists.w3.org/Archives/Member/tag/2012Jan/0001.html
Noah: [outlines above list]
[discussion on death of protocols]
Yves: this is the list discussed during f2f in Edinburgh.
Noah: two more - one is a thing Dan asked for: should app cache vs app packaging be on the heads-up list for Jeff?
… Noah: I think the only reason to highlight this is if it's not adequately highlighted in the workshop report.
Dan: risk is more generally apps vs web.
JeniT: which we already have.
Ashok: [asks for clarification]
Noah: We need descriptions of threats or potential threats to send to Jeff - between a paragraph and a short page.
... if people like the list, let's look at each one.
JeniT: should the registries and IANA stuff get moved into the bigger section?
… especially rdfa vs html link relations.
Yves: this is not new.
… if there will be more issues based on this - e.g. registries being misused - then yes.
Noah: If we think e.g. Happiana will rise into a key issue then we might raise it to Jeff.
… we could make it an addendum to the main list.
Yves: "there have been issues - there will probably be more issues - there is this work happening (IANA)"
JAR: … and a small effort by w3c staff could help.
Noah: Jeff asked me for [major issues] that might hit him.
JeniT: Along those lines, the section on SPDY and http - this feels less like something that we need to be mega-concerned about.
Noah: I think http is a major
part of the Web - we should [outline the key topics] and then
say "we have been working with e.g. mnot about it and this is
what's happening..."
... let's go through the things Yves has drafted on each of
these.
... first - "Specifications with the Same Scope…"
... Question I would ask - you talk about RDFa
Yves: With the evolution of different stacks, they step on other technology stacks' feet. It's difficult to predict that.
Noah: If we know of anything similar to microdata/rdfa then we should alert Jeff.
Yves: another one might be xpath and css selectors.
Noah: is that resolving?
Yves: I think it's more or less under control.
JeniT: We can say - following discussion with plh over html.next there seem to be areas e.g. speech but our advice from him was that this wasn't going to cause problems.
<noah> I think we need to tell Jeff where the ones we know about stand, and where there are others that are likely to be problems. I hear Yves saying: microdata/RDFa and CSS/XPath are probably headed for resolution, but these things will keep happening.
… the two things that could be hot topics look like they are being handled in the right way.
Noah: I think this should come out under my signature on behalf of the TAG.
… I feel sufficiently informed on this one to take a cut.
<noah> Current draft: The TAG, as part of its review activity will continue to monitor such issues.
<noah> Suggested: The TAG, as part of its review activity will continue to monitor such issues and we will alert you to any that we think are of particular concern.
Yves: last para where I said TAG is reviewing what is going on - even if we don't know an issue that will happen in the next 6 months, in 2 months we might discover an issue.
Noah: Next one - "phone apps vs web apps"
yves: there are a range of issues here, and I'm not sure what the crux is
<noah> YL: On the mobile, I'm not sure where the issues are.
DKA: there's things like URI schemes as well
<noah> DKA: It touches on things like vendor-specific URIs
DKA: and the "death of the web"
noah: can you send me an email?
<noah> ACTION: Dan to put together a bulleted list of items to go into this category [recorded in http://www.w3.org/2012/01/06-tagmem-minutes.html#action03]
<trackbot> Sorry, couldn't find user - Dan
DKA: give me an action to include APIs, packaging, offline use, tools, monetisation
noah: if you could just draft a section?
DKA: ok
JeniT: do Facebook apps have similar characteristics?
DKA: the risk on mobile is apps
running outside the browser, that could be done in the
browser
... due to artificial constraints
noah: widgets could run outside the browser, are they bad too?
DKA: this is where it's a grey
area, because some people don't think Widgets are the Web
... because they don't have addressability, for example
noah: outside the browser, forward/back navigation doesn't work
Ashok: are you thinking of Apps like iPad Apps?
DKA: it's definitely not just on
the mobile phone, but this whole class of device
... which uses the AppStore model
... which diminishes the importance of the web
... there are plenty of Apps that use web technologies
... but you can't use the web to download them, rate them, talk
about them etc
noah: I don't care about how I got the App but how you navigate out of the App
yves: so what about the Chrome
Apps?
... they are using web technologies
noah: if they're not linking to things on the web, then that's not so good
yves: the threat is the creation of a walled garden
Ashok: +1
DKA: so Chrome apps run in the browser, but you can only download them and use them in Chrome
noah: should we start each section with 'Threat:'
DKA: I think that's good: the threat is the browser is no longer the way that people find and download information
noah: I want to focus on the risk/threat for Jeff
DKA: the death of the browser as the mechanism for accessing information is the threat here
Ashok: in the browser, you can go to a different web page, and from an App you can't
noah: many Apps do it, but they break
Ashok: this is a way to try to
earn money
... to package something that you can then charge for
yves: not only for paying, but for editorial control: you can censor things
DKA: why should you care? because
you won't be able to see information that isn't approved
... on the web I can find other points of view
noah: most of this is stuff Jeff
will be aware of
... perhaps we want to say that he should be more worried about
losing this war
DKA: there are other things about
debunking claims such as not being able to charge for
things
... or accessing location
... or accessing the camera
noah: in my experience geolocation doesn't work as well in the browser as in an App
DKA: the macro-issue is the other functions that the web can't do that Apps can
noah: vendors that support Apps
may limit the ability of the browsers to perform as well as the
Apps do
... Apps have more complete access to the platform
... they lose flexible linking to other web pages
... the threat is that this remains attractive: the web hasn't
blown these things away
Ashok: if APIs on the phone are really that much better than the APIs from the browser, that's a cause for concern
DKA: this is a complex area;
highlighting some stuff on the technical level would be a good
idea
... let me draft something for Jeff
... to give him some ammunition
Noah: CAs
Yves: we should note that there is work going on in IETF and other places to help...
JAR: Jeff ought to be mobilising w3c to work on this issue. This is really important.
Tim: do you mean the first response or designing a better system?
JAR: I mean that it's not obvious
where to go. We have some ideas...
... Issue is the trust structure...
Larry: In some case, if we don't figure it out then things won't advance. But in this case if we don't figure it out then bad things will happen.
Noah: I think we all agree.
... [wordsmithing the description]
… I'd like to see a paragraph "the practical effect of this is that right now in certain countries users are being redirected to fraudulent or improper copies of web sites - and that there is no way to fix this in the immediate future."
Yves: not only redirecting - but people having the feeling that they have a secure channel - and are not being spied on.
Noah: We should start with some brief war stories - Another example: we have seen man-in-the-middle attacks to spy on politically sensitive traffic...
Yves: as in Tunisia.
Noah: I think it's important to say: right now it's not obvious how the technology will be deployed to stop this from happening.
Larry: There's a concern that this is an architectural flaw rather than a set of isolated events. I share this concern.
Yves: I can redraft this.
... we might expand on what we mean by trust issues...
Noah: As long as the key points are up front.
JeniT: can we move that up to the top of the list?
Yves: I agree.
… add "red flag here."
Noah: Now - "SPDY and HTTP"
... The highlight here is - this is not a threat, this is
something you should be aware of …
Yves: there should be info at the bottom about the new efforts in this space - the IETF httpbis rechartering.
Noah: I can redraft this based on
what we heard from Mark Nottingham.
... now "Death of Protocols"
... Can someone offer an example of this?
Yves: not many things are using web sockets in a way you could call a "protocol." You just wait for data to come in without having the framing of http...
… I'm not aware of any widely deployed app using web sockets though. So it's a potential threat.
JAR: What's written here sounds right.
Noah: Can you give me an example?
Yves: e.g. the way communication was done in the past before TCP…
Noah: Why is that death of protocols rather than "death of standardised interoperable protocols."
JAR: It's not the death of existing protocol. It's the death of the process by which people publish their protocols.
Tim: It could be the birth of many protocols… Some of these may get standardised, some won't...
JAR: The outcomes could be good. There were objections from within IETF - over where the locus of documentation is. The traditional IETF way of doing things is to write an RFC, associate that with a port, etc… The locus of communication is in the IETF and with the community around that (e.g. those building firewalls, routers, etc…).
… what people are bothered by is - even though innovation is supported - that it's disruptive.
Tim: if you run over TCP then you can talk end-to-end without talking to people...
Noah: We were in a world where code was hard to deploy - now when you go to a site and you want the weather, there's a chance the site will send you the javascript code at that moment to speak that protocol in order to get the weather. The code is mobile and hence the value of standard protocols is greatly diminished.
Tim: issue Jonathan was raising - you used to take new protocols to the IETF. But as long as you use an existing underlying protocol then maybe you're safe now [for using these new protocols].
JAR: I think this is complicated.
… having to do with what layers in the stack have information about the traffic. It used to be that routers were pretty simple. Modern routers at wire speed are looking at things like the URI of an http request and making decisions as the packet goes by...
… so it's a question of the locus of intelligence.
Larry: 20 years ago some guy at CERN wrote some code and I downloaded it. The Web was deployed before there were any standards.
Noah: But tim didn't download a new copy of the browser every time [he visited a web page.]
Larry: IETF and w3c have policy initiatives around security, privacy, monitoring, ports, firewalls, etc… if there's never any need to do protocol review or standardisation, how do we retain those "community goods."
<noah> Possible message to Jeff: "As dynamically downloaded JavaScript libraries are increasingly used to implement ad-hoc, problem-specific protocols to access data, the usage and value of Web technologies such as URIs and HTTP may be reduced."
Tim: e.g. quality review.
Yves: there's also the possible reuse of that protocol.
<jar> goes to the network as a commons
… if you're building something that gets widely used, and you're using a protocol that's not published then people will have a hard time reusing, etc...
Tim: I think it's better to come to Jeff with possible failure scenarios.
… one failure mode : when you buy some home automation hardware, it comes with its own Web server and runs its own protocol between the client and the server and as a result you have vendor lock-in.
<noah> Possible message to Jeff: "As dynamically downloaded JavaScript libraries are increasingly used to implement ad-hoc, problem-specific protocols to access data, the usage and value of Web technologies such as URIs and HTTP may be reduced, and we run the risks that result from proprietary protocols. For example {insert Tim's example of home toaster controller here}."
<JeniT> DKA: why is that any different now from in the past?
<JeniT> Yves: the difference is it's done in the browser
Tim: 2nd scenario : the toaster protocol might run on UDP so it brings my home network down…
<Yves> it's not vendor lock-in, it's a difficult upgrade path, no review of what can go wrong (security etc...)
Tim: these are hidden
protocols.
... What's breaking is the ability to construct things in a
modular way.
Noah: No this might be well structured but it's all very immediate.
Tim: What am I missing?
Noah: I'm saying the right model is - I'd like to use this on things that don't support javascript, I'd like to be able to implement it in multiple environments, etc...
Tim: What you're trying to do is to combine multiple components...
Noah: damage would be no freedom of choice in toasters.
<noah> TBL: Issues include vendor lockin, badly designed (no IETF review)
Peter: This happens now...
JAR: difference is it goes through firewalls...
Peter: I think the main difference is that it's going to happen within the web browser [with Websockets].
Tim: example of using libraries - standardisation will happen between people making the libraries.
Peter: my fear is that people will use proprietary protocols so that makes it more difficult for others to re-use data across the Web.
Larry: people have already been layering proprietary protocols over HTTP for decades. Maybe this is actually not a problem.
Noah: I'd like to take a look at where we stand. We're mostly there except on this one.
…worth another 10-15 minutes?
JAR: What larry said.
<Yves> +1
JeniT: Yes I agree we shouldn't raise it to Jeff if we can't articulate a problem.
+1
Noah: anyone who could offer
something to discuss on a telecon.
... Ok - for the moment this will not be included in the note
to jeff unless someone comes forward with a proposal on the
text.
... ok - Yves to draft something on "overlapping specs"; Dan to
do apps and web apps; Noah to do Certs (to be moved to top with
red flag); Yves to do SPDY; Protocols is out unless someone
comes out with text to discuss; all - please send me paragraphs
and I will integrate them.
<scribe> ACTION: Dan to draft text on apps and webapps [recorded in http://www.w3.org/2012/01/06-tagmem-minutes.html#action04]
<trackbot> Sorry, couldn't find user - Dan
<noah> ACTION: Noah to integrate input from DKA and Yves for note to Jeff, and draft section on CA Due: 2012-10-17 [recorded in http://www.w3.org/2012/01/06-tagmem-minutes.html#action05]
<trackbot> Created ACTION-660 - Integrate input from DKA and Yves for note to Jeff, and draft section on CA Due: 2012-10-17 [on Noah Mendelsohn - due 2012-01-13].
<noah> ACTION-660 Due 2012-01-17
<trackbot> ACTION-660 Integrate input from DKA and Yves for note to Jeff, and draft section on CA Due: 2012-10-17 due date now 2012-01-17
[end of discussion on Jeff note]
<noah> http://www.w3.org/2001/tag/group/track/actions/overdue
ACTION-344?
<trackbot> ACTION-344 -- Jonathan Rees to alert TAG chair when CORS and/or UMP goes to LC to trigger security review -- due 2012-01-01 -- OPEN
<trackbot> http://www.w3.org/2001/tag/group/track/actions/344
JAR: Answer has been "Ok - explain to users of the spec how to be careful."
<noah> ACTION-480?
<trackbot> ACTION-480 -- Daniel Appelquist to draft overview document framing Web applications as opposed to traditional Web of documents -- due 2011-07-05 -- OPEN
<trackbot> http://www.w3.org/2001/tag/group/track/actions/480
<noah> close ACTION-480
<trackbot> ACTION-480 Draft overview document framing Web applications as opposed to traditional Web of documents closed
ACTION-601?
<trackbot> ACTION-601 -- Noah Mendelsohn to document in product pages wrapup of HTML5 last call work, leading to HTML next review -- due 2011-12-27 -- OPEN
<trackbot> http://www.w3.org/2001/tag/group/track/actions/601
<noah> action-601?
<trackbot> ACTION-601 -- Noah Mendelsohn to document in product pages wrapup of HTML5 last call work, leading to HTML next review -- due 2011-12-27 -- OPEN
<trackbot> http://www.w3.org/2001/tag/group/track/actions/601
Noah: I believe I did this - can I close this action?
<noah> close ACTION-601
<trackbot> ACTION-601 Document in product pages wrapup of HTML5 last call work, leading to HTML next review closed
<noah> ACTION-645?
<trackbot> ACTION-645 -- Noah Mendelsohn to take off draft indication and put dates on URI Definition and Discovery Product page -- due 2011-12-29 -- OPEN
<trackbot> http://www.w3.org/2001/tag/group/track/actions/645
<noah> close ACTION-645
<trackbot> ACTION-645 Take off draft indication and put dates on URI Definition and Discovery Product page closed
Noah: You could make a case this is a security problem that the TAG should be involved in in an ongoing way. Seems like a really important development to me. We should be involved.
Ashok: It doesn't look like ours to tackle.
JAR: Everyone should have awareness of this, especially us.
Noah: If we thought the Web was going to crumble then we should go to w3c membership and say that.
JAR: at least 3 different solutions have been put forward and it's not clear [which is best].
<jar> https://www.youtube.com/watch?v=Z7Wl2FW2TcA I think
JAR: e.g. "SSL and the future of
authenticity"
... …and a third one.
JeniT: is there anything useful we can say to people developing web applications with SSL about what they should or should not be doing...
<jar> https://www.eff.org/deeplinks/2011/08/iranian-man-middle-attack-against-google
Yves: I think it's more for users - that they should rely on more than just the SSL padlock...
Noah: How urgent is this?
Yves: If it's easy enough to divert DNS and create fake certificates...
JAR: whole part of the video [above] is that it's very easy to do this.
Yves: The first issue is that users should know that even if they think it's safe it might not be safe… E.g. in some countries even if they are using SSL then it might not be safe. In that case, they should be using secure VPNs…
Dan: Isn't someone like EFF already providing that advice to end users?
Yves: yes - EFF worked on HTTPS Everywhere - working against Firesheep - but this can give a false sense of security. You need a bit of judgement.
… do you want to access sensitive data on a network with https if you think you might be snooped on.
JAR: The point of the 3 proposed improvements is that things don't have to be as bad as they are.
… question of how do you bootstrap trust...
… my intuition is that it's important. Can we weigh in, review the solutions, etc…
JeniT: Larry suggests we get someone to brief us on these solutions.
JAR: What can w3c do?
<masinter> Mark also pointed at https://datatracker.ietf.org/wg/dane/charter/ effort in this area
Tim: One intriguing thing - they use the same keys for WebID, SSL and SSH. If you use a key from one world in another world then you won't have to have one trust system for each domain (web sites, email, etc…).
… with some interoperability you could use a mixture of different sources of trust.
… if you join a group (e.g. an employer), you can get some certificates and a level of trust within that group.
… e.g. CSAIL - they sign their own certs, and you have to make browser exceptions in order to use their [secure web sites].
<jar> timbl: trust interoperability
… there should be a better way to do that - when you join a group you get trust associated with that group, including email certs, web browsing certs, etc...
Dan: I think getting experts in would be good.
JAR: I think Harry Halpin could do that.
Ashok: I'll ask Harry to join us on a telcon and talk to us.
[Next step: Ashok to ask Harry to join us and brief us on the security proposals...]
JAR: I recommend watching the video.
<jar> Looking at Harry's email http://lists.w3.org/Archives/Public/www-tag/2011Dec/0083.html
Noah: One thing I'd note is that web app sec has a narrower charter than web security ...
Yves: yes - mostly to work on CORS.
Noah: Security domain lead - should we talk to Thomas and ask him "read Harry's note - tell me what you're already doing about this?"
Dan: could we get Harry and Thomas on the next call to discuss?
<scribe> ACTION: Ashok to ask harry and thomas to join us on a future TAG call. [recorded in http://www.w3.org/2012/01/06-tagmem-minutes.html#action06]
<trackbot> Created ACTION-661 - Ask harry and thomas to join us on a future TAG call. [on Ashok Malhotra - due 2012-01-13].