Minutes from TAG TPAC sessions now ready for review

The minutes from our TAG TPAC sessions are now linked from the agenda at 
[1], and are appended in text-only form below. Specifically, the minutes 
from 31 October 2011 are at [2], and those from 4 November are at [3].

I expect we'll vote to approve these on the 17th.

Thank you very much

Noah

[1] http://www.w3.org/2001/tag/2011/10/31-agenda
[2] http://www.w3.org/2001/tag/2011/10/31-minutes
[3] http://www.w3.org/2001/tag/2011/11/04-minutes

===========================================================
    [1]W3C

       [1] http://www.w3.org/

                                - DRAFT -

               Technical Architecture Group Teleconference

31 Oct 2011

    See also: [2]IRC log

       [2] http://www.w3.org/2011/10/31-tagmem-irc

Attendees

    Present
           Henry Thompson, Tim Berners-Lee, Noah Mendelsohn, Dan
           Appelquist, Ashok Malhotra, Larry Masinter, Peter Linss,
           Qiuling Pan (observer), Alexandre Bertails, Jeff Jaffe, Peter
           Saint-Andre, Thomas Roessler, Philippe Le Hegaret

    Regrets

    Chair
           Noah

    Scribe
           Dan

Contents

      * [3]Topics
          1. [4]Discussion on AppCache...
          2. [5]URIs
          3. [6]Links refer, URIs don't
          4. [7]IETF cross-topics
      * [8]Summary of Action Items
      _________________________________________________________

    <jar> anybody home?

    <noah> We don't have a quorum right now, but are starting to chat
    informally about client-side state.

    <noah> I'll take some notes. I don't >think< it's worth doing Skype,
    but I can try if you like.

    <jar> I'll just hang out for a while... will be here intermittently

    <noah> We're discussing comments on
    [9]http://www.w3.org/2001/tag/doc/IdentifyingApplicationState-201109
    30.html

       [9] 
http://www.w3.org/2001/tag/doc/IdentifyingApplicationState-20110930.html

    <noah> Email from Ashok:
    [10]http://lists.w3.org/Archives/Public/www-tag/2011Oct/0105.html

      [10] http://lists.w3.org/Archives/Public/www-tag/2011Oct/0105.html

    <noah> First comment email:
    [11]http://lists.w3.org/Archives/Public/www-tag/2011Sep/0076.html

      [11] http://lists.w3.org/Archives/Public/www-tag/2011Sep/0076.html

    <noah> Björn Höhrmann writes in the email:

    <noah> "I saw "All media type specifications and registrations,
    especially for

    <noah> new types, must specify fragment identifier semantics" while
    scrolling

    <noah> through it. This kind of wording is misleading considering
    that the TAG

    <noah> has no authority over the requirements for media type
    registrations."

    <jar> +1

    <noah> We're working on a rewording...

    <Ashok> How about if we add the words "This document recommends ..."
    at the start of the sentence?

    <noah> hmmm....

    <noah> Suggest a new section 6.0. Something like:

    <noah> "In order to ground the semantics of fragment identifiers in
    applicable specifications, it is recommended that IETF media-type
    registration procedures be updated as suggested below."

    <noah> Going on to next comment...from Jonathan...

    <noah>
    [12]http://lists.w3.org/Archives/Public/www-tag/2011Sep/0089.html

      [12] http://lists.w3.org/Archives/Public/www-tag/2011Sep/0089.html

    <Larry> IRI could reference both 3987 and 3987bis

    <noah> Ashok: Agree on documenting abbreviations

    <noah> Ashok: Agree that it's the HTML media type registration, not
    the HTML Recommendation

    <noah> NM: Right, but the media type registration may delegate to
    the spec (channeling Tim, I think)

    <jar> the problem with delegating # semantics to a format spec is
    that it doesn't have jurisdiction

    <Larry> the HTML media type registration is in the HTML spec, as per
    W3C policy

    <noah> NM: I assume the "editorial nit" is an editorial nit?

    <jar> it has juris. if it says it does.

    <jar> problem only comes if it defines fragid semantics but does not
    call itself the media type reg

    <noah> Right.

    <jar> i.e. the part of the format spec that says "a conforming
    document is ..." says nothing about conformance of documents of
    other media types that have fragids pointing in

    <jar> pedantry.

    <noah> NM: So, Ashok has gotten some informal advice (no quorum
    here). We'll expect a new draft, and will go for formal approval
    after nov. 15 (per Edinburgh resolution) unless we get more comments
    at TPAC

    <noah> moving on to next topic.

    <Larry> fragids and polyglot, and compound document formats

    <scribe> ScribeNick: DKA

    <scribe> Scribe: Dan

    trackbot, start meeting

    <trackbot> Date: 31 October 2011

    Note: Qiuling is observer from Huawei.

    <timbl> logger, pointer?

Discussion on AppCache...

    [discussion on whether or not app cache manifest is actually
    comprised of URIs (it is)]

URIs

Links refer, URIs don't

    Henry: My argument is that part of the problem we are struggling
    with is: we believe that if something is a URI then we know
    everything we need to know about what it means. I think that's
    wrong.
    ... the ideology of early web specs came from e.g. Roy Fielding
    treating time-varying resources as first-class citizens. Resources
    vary over time. [NYTimes homepage changes all the time but is still
    the NYTimes homepage.]
    ... if I put up 5 pages of the form weather/city then it's
    reasonable to try weather/some-other-city and expect to get the
    weather for that city.
    ... under web architecture you have no grounds for complaint, but
    web site maintainer has violated a social contract.
    ... "indexical" is a word like "you" or "now" that can have multiple
    interpretations (but one meaning).

    <Larry> quote from post: Look, this is not a scientific process of
    discovery -- "what is the meaning of a URI? Let's look at some use
    cases and discover..". This is a process of design; we're designing
    the protocol element URI (and IRI) and designing representation
    systems that carry meaning by using URIs. I'm saying that designing
    a system where meaning depends on external events is bad design. At
    best, Frege and Joyce might provide some

    <Larry> additional frameworks for discussing the design choices, but
    I don't think they contribute even that

    Henry: [presenting slides - which will be made available]
    ... [discussion on how use of names or identifiers depends on
    context]
    ... difference between referential and actionable context.

    <Larry> "what this name refers to" => "what the utterers of the name
    intend for interpreters of the name to understand"

    Henry: one reason to try to do this: there is a generalisation that
    covers a lot of use. By looking at it this way it becomes
    consistent. For example, contrast two local contexts, one using URI
    in an HREF (actionable), one using a URI in a RDF statement
    (referential)...
    ... for referential use, the fact that I could retrieve the document
    at the end of the URI is incidental to its use. For actionable, it's
    central to its use.

    Hence, the 200 OK [response] should be more flexible.

    Henry: So - 200 OK would no longer mean "here is a representation" -
    it would mean "here is a representation for the context"

    <Larry> in particular, i want to argue against doing what Henry is
    proposing but to focus on design of RDF and related followon
    semantic web language contexts

    Henry: What a URI in the first instance tells you is not what its
    referent is but how to get to its referent. What you get back may
    vary according the context in which you found the URI.

    Tim: That would require magic.

    Henry: One way to instantiate this would be to use mget (get
    metadata). [does not exist]

    Larry: I don't like the direction you're going on how to fix this.
    ... we're engaged in design, not in discovery.

    <Zakim> Larry, you wanted to agree with analysis but not the
    direction going

    Henry: The way I would hope this to be taken: leading towards a
    question: suppose we want to use the web as it stands. I want to
    give advice that will work [within the constraints of current web
    technology]. We need a meta accept header. Accept-reflect, or
    accept-see-other...

    Larry: I don't think that's the right advice.

    Henry: I'm trying to explore a class of solutions like that...

    <Larry> i'd rather see advice to create a new version of RDF which
    resolves the use/mention ambiguity explicitly

    Henry: I don't think RDF is compositional.

    <Zakim> noah, you wanted to say I don't like where this is headed
    for other reasons than Larry's

    Noah: I expect that getting a representation of something is
    different from getting metadata about it...

    <jar_> RDF is compositional and referentially transparent

    Henry: Every URI has one meaning, but that doesn't tell you what a
    get should do.

    <jar_> RDF takes URI meaning as an input

    Tim: The architecture does tell you what a get should do.

    <jar_> it's linked data practice that has to determine the
    interpretation

    <jar_> and the problem is only that linked data practice is
    inconsistent

    <Larry> i don't think the meaning of an assertion in RDF should
    depend on what HTTP servers return, I think that would be bad design
    of RDF

    <jar_> and that's because there is no validation

    <jar_> RDF does not work that way.

    <jar_> RDF meaning depends on how its users decide to interpret URIs
    referentially

    Henry: the problem is: 200 OK means "your requested use can be
    satisfied." It doesn't mean "your URI means something different than
    it did yesterday."

    Tim: I feel a tremendous weight of previous discussions [on this
    topic] ... I've also got alarm bells ringing about a two-level
    solution.
    ... [parallel to DTD language to specify SGML doctypes
    ... ]
    ... Always prefer a 1-level design. Clean, consistent and complete.
    ... [speaking against using arguments from natural language]

    <noah> nope, will try

    Tim: [saying that objectors seem to be against use of # because it
    is 'messy']

    Henry: The time-varying resource thing: believing that we know what
    we mean when we say [13]http://guardian.co.uk - we are just fooling
    ourselves. You can't tell me anything at all about what that
    referent has to be for that to be true. We don't have any real
    understanding of what coherence condition there has to be on those
    conditions...
    ... we haven't drilled on that.

      [13] http://guardian.co.uk/

    <jar_> I think there are two distinct attacks on RDF going on here.
    One based on actionability (HT), and one based on time (Larry)

    Henry: I feel it is not unreasonable to say "I have these URIs, I
    want to issue gets, I know you are going to issue a 303 and I'm
    going to have to issue another GET. Can't we get that out of the
    way?" One approach might be to use accept-see-other: in your gets.
    Then from the server you can serve something different rather than a
    303.
    ... what it means is: short-circuit the 303 redirect.
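    [A minimal sketch of Henry's accept-see-other idea. The header name
    and the server behaviour below are hypothetical - nothing like this
    was standardized - and the stub only models the choice between
    answering with a 303 redirect and short-circuiting it:]

```python
# Hypothetical sketch only: "Accept-See-Other" is Henry's proposed header,
# not a registered HTTP header, and this stub stands in for a real server.

def respond(request_headers, subject_uri, description_doc):
    """Simulate a linked-data server handling GET on a non-document URI.

    Classic httpRange-14 behaviour: answer 303 See Other, forcing the
    client to issue a second GET for the describing document. If the
    client sent the (hypothetical) Accept-See-Other header, short-circuit:
    answer 200 with the description directly, using Content-Location to
    say which document is actually being returned.
    """
    if "Accept-See-Other" in request_headers:
        return {
            "status": 200,
            "headers": {"Content-Location": description_doc},
            "body": f"<description of {subject_uri}>",
        }
    # Without the shortcut the client must follow the redirect itself,
    # costing an extra round trip.
    return {"status": 303, "headers": {"Location": description_doc}, "body": ""}


one_trip = respond({"Accept-See-Other": "1"},
                   "http://example.org/id/alice",
                   "http://example.org/doc/alice")
two_trips = respond({},
                    "http://example.org/id/alice",
                    "http://example.org/doc/alice")
```

    [The URIs above are illustrative; the point is only that the header
    saves one GET per dereference.]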

    <jar_> or should I say the use of http: URIs in RDF

    Tim: Yes. That's a piece of engineering we ought to do before the
    end of the afternoon.

    <Larry> i would rather see use/mention/time disambiguation (like
    tdb) added to RDF

    Henry: doing it with an accept header [is my proposal]

    <Larry> i would not recommend people deploy differences in the
    deployed HTTP infrastructure; in particular, I dislike tying the fix
    for this to the HTTP protocol and its error codes

    <jar_> did anyone read my web metadata memo? it was required reading
    for two f2fs. it addresses this in part

    Tim: If you send the accept header then the server can say 200 but
    send back something else...

    <Zakim> noah, you wanted to talk about HTTP

    <Larry> read != absorb

    [agreement to cut the chat]

    <timbl> [the cat, not the discussion]

    <Larry> i am opposed to tying semantic web meaning to the HTTP
    protocol at all

    <jar_> ht, let's talk 3 relationships. 1. is representation of (what
    you GET), 2. is version of ( an info resource), 3. describes

    Noah: the first thing is - can we agree that this not about what
    URIs identify or what their "definition" is... that can save us
    thrashing. The discussion is about http and how you can use it.

    <jar_> ... you are connegging between 2 and 3

    <jar_> yes, "identify" needs to be banned

    <Zakim> Larry, you wanted to assert that the time-invarying meaning
    of [14]http://guardian.co.uk is "attempt to use HTTP protocol to
    talk to host guardian.co.uk with path ""

      [14] http://guardian.co.uk/

    Larry: I am strongly opposed to tying the semantics of RDF to http.

    <jar_> (to whomever might be reading the irc log, I am not in the
    room, and nobody seems to be reading my IRC entries. they are for
    your amusement)

    [discussion on the use-mention problem - and the fact that this
    discussion is not about it]

    JAR: The key is figuring out how to state the problem. What's worked
    best for me is to avoid the loaded words. E.g. identify, resource,
    representation.
    ... I've been doing work about how the linked data community might
    use http URIs. This reorients the whole discussion.

    <jar_> ht: not possible to dispense with URI identifying just one
    thing. Tim and Noah want to be able to say that

    Tim: I am happy to switch to "referent" or any other word [if it
    helps].

    Noah: Some of these words are used in the normative specifications.

    <jar_> I have 3 docs in draft (in esw wiki)

    <timbl> Tim: There is an antipattern in design of making a 1-level
    consistent system which almost works, and then, to fix a remaining
    issue, adding a second level, a meta level of some sort. This can
    get into trouble, sometimes by making, in the end, meta-meta levels
    etc. with no final solution, sometimes because there is actually no
    oil-and-water boundary between the two levels, sometimes because the
    second level just has its own new problems. The good pattern is: one
    level, consistent and complete.

    <jar_> 1. Requirements for change proposals. and a call

    <timbl> HT: Tim and I agree that we need to find a way to fix the
    303 problem

    Henry: We discussed: Exploring something like shortcut-303 as an
    accept header is worth doing...

    <jar_> [15]http://www.w3.org/wiki/HttpRange14Options

      [15] http://www.w3.org/wiki/HttpRange14Options

    <ht>
    [16]http://lists.w3.org/Archives/Public/www-archive/2011Oct/0034.htm
    l

      [16] http://lists.w3.org/Archives/Public/www-archive/2011Oct/0034.html

    JAR: I've tried to lay out what a change proposal could look like...
    ... in the document linked above.
    ... next steps are documented there.

    <noah> ACTION: Noah to schedule followup discussion of
    [17]http://www.w3.org/wiki/HttpRange14Options (per agreement in
    Santa Clara) [recorded in
    [18]http://www.w3.org/2011/10/31-tagmem-irc]

      [17] http://www.w3.org/wiki/HttpRange14Options
      [18] http://www.w3.org/2011/10/31-tagmem-irc

    <trackbot> Created ACTION-625 - Schedule followup discussion of
    [19]http://www.w3.org/wiki/HttpRange14Options (per agreement in
    Santa Clara) [on Noah Mendelsohn - due 2011-11-07].

      [19] http://www.w3.org/wiki/HttpRange14Options

    <noah> HT: We've been talking about section 5.3?

    JAR: What you should look at is the wiki page. 5.3.

    [break]

    <ht> [20]http://www.ltg.ed.ac.uk/~ht/wantOther.html

      [20] http://www.ltg.ed.ac.uk/~ht/wantOther.html

    <jar_> larry, note that I put tdb: in the mix. attempting to keep
    all views represented - useful point of comparison

    <Larry> I'd like some time to discuss the current state of work in
    IETF to get TAG input, participation, as it relates to TAG topics:

    <Larry> - W3C and Registries (Happiana)

    <Larry> - MIME and Web (Happiana, mime-sniff in websec)

    <Larry> - Extensibility (IAB document)

    <Larry> - IRI everywhere (IRI working group specs)

    <Larry> (IETF groups are happiana, appsarea, websec, iri).

    <Larry> Getting W3C/IETF liaison(s) to join us for some part of
    these would be great, as well as W3C staff involved in MIME
    registrations.

IETF cross-topics

    Larry: I made a list of things going on in the IETF relevant to
    things the TAG has been talking about.

    <noah> Larry's list is in this e-mail:
    [21]http://lists.w3.org/Archives/Public/www-tag/2011Oct/0104.html

      [21] http://lists.w3.org/Archives/Public/www-tag/2011Oct/0104.html

    Larry: there are probably more... Partly this was to talk about how
    the TAG could be effective in influencing IETF documents...
    ... we've had a lot of discussion about registries and their use.
    There was an ad-hoc task force working on making people happier with
    using IANA as a registry. Some of the proposals have to do with more
    transparency. It has been observed that there has been a lot of
    misunderstanding of how the process works.
    ... in the TAG I was working on a draft on MIME and the Web, which
    this group has reviewed. I got feedback from IETF participants that
    it wasn't helpful.
    ... some of the comments in that document were about MIME "sniffing"
    - so I've been working in that [ietf] group on the mime sniffing
    document.
    ... we've talked a lot about languages and versioning in the past.
    This has been a topic that has absorbed the tag. I see there is an
    IAB document about extensibility. Can you talk about that?
    ... and then - the TAG has had an issue open for a long time about
    IRIs - IRI everywhere. Making sure that W3C documents reference
    IRIs, etc...
    ... I would also add the URN topic, because there is a URN working
    group [in the IETF].

    [discussion on use of URNs]

    Noah: We are having some activity on persistent URIs. This could be
    related to the same space as URNs.

    Henry: We are going ahead with a workshop on the 8th of December in
    Bristol on persistent domain names - exploring the space of possible
    solutions ... addressing the issues entities such as national
    libraries have raised about using URIs as references.

    <stpeter> happiana

    <Larry> happiana

    <Larry> a) Reviewing and updating in general the processes around
    getting items into IANA registries, including media types and URI
    schemes. This work is ongoing
    [22]https://www.ietf.org/mailman/listinfo/happiana; See
    [23]http://www.w3.org/wiki/FriendlyRegistryProcess for one summary
    of issues, status, and possible resolutions.

      [22] https://www.ietf.org/mailman/listinfo/happiana
      [23] http://www.w3.org/wiki/FriendlyRegistryProcess

    stpeter: There's a bad UI to IANA... People outside the IETF
    community don't know how to register things with IANA, etc... lots
    of things people have found confusing. Lots of people have said:
    let's just use a wiki. As a result, we've had discussions with IANA
    about how to improve the interface - e.g. better tracking tools
    [transparency of process].
    ... up to and including more policy-related issues: what does
    registration mean? how much approval is necessary?
    ... I tend towards the first-come-first-served side of things.
    ... I've been working on an Internet-Draft encouraging people not to
    use X- things.
    ... working on this with Mark Nottingham and Dave Crocker.

    Tim: I support this.

    Noah: I like distributed extensibility - use namespaces - but some
    of that feels like X-.
    ... I'm not sure there are easy answers of how to get lots of people
    innovating.

    Tim: in CSS, you've got -opera-roundedcorners-...
    -ie-roundedcorners-...

    Larry: there is an activity in the IETF around improving
    accessibility and use of IANA as a registrar.
    ... we have a w3c process around using IANA as a registry...

    plh: we do have a process for this...

    Larry: I attempted on the happiana mailing list to raise the concern
    that it is not sufficient to make registration easier, but that it
    is important to get the community to register things.

    stpeter: Those two are probably connected.

    Larry: We need to bring additional resources to get more things to
    be registered.

    stpeter: I think there are 2 things - how can w3c work with iana,
    and also how does joe developer interact with iana?
    ... joe developer doesn't know how to get his URI scheme registered
    somewhere...
    ... the guys who made up Skype: are doing it on the fly.

    tim: Apple are doing this.

    stpeter: it should not be so difficult to get this done (from a
    w3c-iana liaison perspective). How we reach the people out on the
    margin I'm not so sure.

    Larry: there is a suggestion to open the process to 3rd-party
    registrations (so people could register things that they see).

    stpeter: we could have a crack team
    ... We've been talking to IANA about making it easier...

    <Larry> [24]http://www.w3.org/wiki/FriendlyRegistryProcess

      [24] http://www.w3.org/wiki/FriendlyRegistryProcess

    Tim: When it comes to mime types, is there a ticketing system?

    stpeter: There is a ticketing system, it's just not visible
    (exposed).
    ... that's what we're talking about.
    ... IANA keeps track of things but no external visibility.

    <stpeter> [25]http://www.w3.org/wiki/FriendlyRegistries

      [25] http://www.w3.org/wiki/FriendlyRegistries

    Larry: Mark Nottingham put together a page about a friendly
    registration process.
    ... what could the TAG do that would help?

    stpeter: Off the top of my head, I think it would be good to have a
    longer-term discussion. There are more policy issues here that it
    would be useful to chat through. The IESG members are sort of "in
    the way" here.
    ... it's slower than it has to be.

    plh: sometimes I have groups who invent a new format and do not
    register the media type for the new format...

    Tim: A rule in w3c is that all the info for the media type
    registration should be in the spec itself.

    <Larry> [26]http://tools.ietf.org/html/draft-freed-media-type-regs

      [26] http://tools.ietf.org/html/draft-freed-media-type-regs

    stpeter: we need to deliver something that is more useful. Opening
    up the tracking system is the best first thing we can do. What
    happens after that, we can continue to explore.

    Larry: the next topic which is related is mime sniffing.

    stpeter: Mime sniffing has a long history. The widget spec
    references it. There has been some push back from IETFers.

    <Larry> [27]http://tools.ietf.org/html/draft-ietf-websec-mime-sniff

      [27] http://tools.ietf.org/html/draft-ietf-websec-mime-sniff

    stpeter: it's more of an informational specification...
    ... should it just be limited to http? Should we talk more generally
    about mime sniffing in http and file: ? ...

    Larry: a thumb-drive of files should be the Web as well...

    Noah: When I deliver something through HTTP - say, something served
    as text/plain that looks like XML with angle brackets but is not
    well formed - user agents that are correct display it as text. User
    agents that try to interpret it as XML will not work...

    <Larry> thumb drive using file extensions? file: ? packaging?
    manifest? auxiliary content-type annotations?

    Larry: another topic - google seems to be deploying an update to
    http - spdy. Is there an internet draft?

    stpeter: no.

    [some discussion on spdy]

    <Larry> W3C had HTTP-NG work

    [discussion on how much spdy is being used...]

    Noah: it's all over SSL so that raises caching issues...

    [discussion on IRIs]

    Tim: should the TAG tell everyone writing a spec to use IRIs instead
    of URIs?

    Larry: Maybe we should close this issue?

    stpeter: we have the same thing in ietf with ipv6. It forces people
    to think about it.

Summary of Action Items

    [NEW] ACTION: Noah to schedule followup discussion of
    [28]http://www.w3.org/wiki/HttpRange14Options (per agreement in
    Santa Clara) [recorded in
    [29]http://www.w3.org/2011/10/31-tagmem-irc]

      [28] http://www.w3.org/wiki/HttpRange14Options
      [29] http://www.w3.org/2011/10/31-tagmem-irc

    [End of minutes]
      _________________________________________________________


     Minutes formatted by David Booth's [30]scribe.perl version 1.135
     ([31]CVS log)
     $Date: 2011/11/10 19:48:57 $

      [30] http://dev.w3.org/cvsweb/~checkout~/2002/scribe/scribedoc.htm
      [31] http://dev.w3.org/cvsweb/2002/scribe/


===========================================================
    [1]W3C

       [1] http://www.w3.org/

                                - DRAFT -

                                 TAG f2f

04 Nov 2011

    [2]Agenda

       [2] http://lists.w3.org/Archives/Public/www-tag/2011Oct/0119.html

    See also: [3]IRC log

       [3] http://www.w3.org/2011/11/04-tagmem-irc

Attendees

    Present
           TAG members: Dan Appelquist, Tim Berners-Lee, Yves Lafon (for
           item 1), Peter Linss, Ashok Malhotra, Larry Masinter, Noah
           Mendelsohn, Jonathan Rees (for items 2 & 3), Henry S.
           Thompson; Guests: Mike Belshe (for item 1), Alexandre
           Bertails (for items 1 & 3), Chris Lilley (for item 3), Peter
           Saint-Andre (for item 1), Rigo Wenning (for item 2)

    Regrets
           Jeni Tennison

    Chair
           Noah Mendelsohn

    Scribe
           Henry S. Thompson

Contents

      * [4]Topics
          1. [5]SPDY
          2. [6]Publishing and Linking on the Web
          3. [7]3023bis -- Media type registration for the XML family
          4. [8]Web Storage
          5. [9]Admin
      * [10]Summary of Action Items
      _________________________________________________________

SPDY

    <stpeter_>
    [11]http://www.ietf.org/proceedings/80/slides/httpbis-7.pdf (SPDY
    slides from IETF 80)

      [11] http://www.ietf.org/proceedings/80/slides/httpbis-7.pdf

    NM: [introduces the TAG to MB]

    MB: I was at Google until 2 months ago, worked on the Chrome team,
    we started SPDY around 2008
    ... Google focus on performance, so interested in protocol speedup
    ... Using the existing mechanisms in HTTP was just gnarly
    ... So we started experimenting in the lab with doing something of
    our own
    ... but based on a lot of prior work in a lot of areas
    ... SPDY is beginning to spread -- Firefox have started some work
    ... to date we've owned it, published an informal spec., some unit
    tests
    ... Firefox adoption possibility has pushed us towards
    standardization
    ... Interop guarantee is necessary before we can move forward

    MB: Roberto Peon is the person at Google who is on point for SPDY
    now
    ... We looked at taking this to the IETF, and that took me to PStA

    NM: We got in to this also in part because of our contact with Jim
    Gettys over the buffer bloat issue

    MB: State of play -- Google Chrome now using SPDY for all SSL
    traffic, see [12]about:net-internals
    ... Firefox is implementing it
    ... Amazon Kindle Fire recently announced that they will be using
    SPDY
    ... A number of other less big names involved, implementations in
    Python and Ruby, etc.
    ... Main parts of SPDY:
     1. Multiplexing;
     2. Compression;
     3. Prioritization;
     4. [Maybe] Server push

      [12] about:net-internals

    MB: Primary focus is on improving page-load latency

    MB: Browser use of HTTP and TCP involves various attempts to game
    the TCP expected behaviour, particularly in the area of multiple
    connections
    ... SPDY tries to avoid this by addressing multiplexing, with
    prioritization, directly

    <Yves> problem in doing multiplexing at the spdy level (or httpmux
    earlier) is the bad interaction that might happen between the tcp
    window size and the chunk size at l7.

    MB: Google research found that when there are packet losses, having
    two connections is a real win
    ... Multiplexing uses fewer connections which overall simplifies
    things
    ... Performance == minimal latency

    NM: Trying to get this via multiple connections did seem to make
    things worse

    MB: Wins with NAT as well
    ... Two connections is already recommended in HTTP1.1

    <Yves> note that the number of // connections in http has been
    removed in httpbis (as it was not reflecting the reality of things)

    MB: When that got multiplied by separate hosts for e.g. js and jpg,
    and then 2 per went to 6 per, suddenly we were up to 12 to 18
    connections for a single page

    TBL: TCP will back off and reroute if things get stuck

    MB: We took a serious approach to this
    ... The average number of hosts hit per page is 8, rising to 9
    ... and the size of things retrieved has grown too
    ... Browsers are trading off resource fetching against page load
    performance
    ... SPDY was trying to take that optimization off of browsers' backs

    <Yves> having mux helps against multiple tcp conn and badly
    implemented http pipelining.

    MB: Important because the difficulty of modelling web pages has
    grown enormously
    ... JS, CSS have execution times, so building benchmarks for web
    page latency is very tricky
    ... Didn't want to just use Chrome, because it already has a set of
    decisions built in
    ... But building a platform from scratch for benchmarking was too
    big a job
    ... So we did in the end build plugins for Chrome to benchmark SPDY
    and HTTP side-by-side
    ... Very glad that we have Firefox now doing their own similar work
    ... First three goals above are parallel to HTTP -- GET etc.
    ... Server push is different/new, which requires client- and
    server-level rework
    ... We tried some experimental services using push
    ... The cache is an issue
    ... You have to understand the service/application detail, and you
    only get rid of one round-trip
    ... Doesn't look like there's enough energy

    HT: There are apps that make sense to build if you don't have to
    busy wait.

    MB: which is kind of what we have today

    HT: In HTTP 1.1 you can hang on a get.

    NM: Yes, comet.

    HT: But there were too many glitches.

    MB: Yes -- GoogleDocs does mutual refresh by hanging gets
    ... So if you have 30 GoogleDocs open, there's a problem with the
    limit of 6 connections per host
    ... So GoogleDocs fakes it with multiple hosts :-(
    ... SPDY multiplexing is enough to fix this
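    [A back-of-envelope model of the constraint MB describes. The
    6-connections-per-host cap is the browser behaviour mentioned above;
    the function names and the "30 open docs" figure come from this
    discussion, everything else is illustrative:]

```python
# Toy arithmetic: with plain HTTP each hanging GET occupies a whole
# connection, and browsers cap connections at ~6 per host, so an app
# with many hanging GETs must fan out across (real or invented) hosts.
# Over SPDY every hanging GET is just a cheap stream on one connection.

HTTP_CONNS_PER_HOST = 6

def hosts_needed_http(n_hanging_gets):
    """How many distinct hosts a page must use to keep this many
    hanging GETs open under the per-host connection cap (ceil division)."""
    return -(-n_hanging_gets // HTTP_CONNS_PER_HOST)

def connections_needed_spdy(n_hanging_gets):
    """Over SPDY, all hanging GETs multiplex onto a single connection."""
    return 1  # n_hanging_gets streams, one TCP connection
```

    [So 30 open GoogleDocs would need 5 fake hosts under plain HTTP, but
    a single multiplexed connection under SPDY.]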

    TBL: Shared editing experience everywhere would be really good

    MB: Hanging get over SPDY is cheap, and does the job, so Server Push
    doesn't seem so urgent
    ... Server push is server-initiated

    MB: Why SSL? Well, SPDY doesn't require SSL
    ... First choice we had to make was TCP or UDP -- TCP, to save
    hassle
    ... So, what port? 80 or 443 -- pipelining not really there for 80,
    problems with proxies, slowly getting sorted out

    NM: Problem would be that new stuff would confuse proxies if it went
    through port 80, right?

    MB: Right
    ... So we went to 443
    ... And in any case, we came to feel that securing the Web was
    independently a good thing

    [Various]: Certificates are broken, how to secure the web is a huge
    issue, maybe not in scope today

    LM: What about IPv6?

    TBL: For stuff that doesn't need to be secured, it's very tempting
    to get to a P2P solution.
    ... But some things, say NYT front page or a TED video, which are
    truly public
    ... must that be destined for P2P, or some other architecture?

    MB: You're right, we recognize that not everything should go this
    way
    ... But operators are not very good at recognizing the difference
    ... Even the front-page of the NYT is not a simple case, if it's
    personalized to you on the basis of personal (private) info, then
    that is important to keep secure

    <Yves> encrypting everything is a double-edged sword...

    MB: The Google China experience made us all very sensitive to the
    fact that personal data can in fact be a matter of life and death,
    and you never know where it's going to turn up

    <timbl> (ages ago .. TBL: Peter, you said when 1 TCP connection has
    losses and slows down it is faster to add a second connection. Of
    course, when there is congestion adding more connections adds to the
    congestion, which on a large scale, when everyone does it at once,
    could have an overall very detrimental effect on latency for everyone else.
    So one should simulate or measure the effect of doing this to
    everyone on the net )

    MB: The whole security layer needs improvement
    ... both in terms of security as such, and in terms of speed
    ... What about proxies?

    <noah> I think this will drive ubiquitous inclusion of SSL
    acceleration in hardware, something I've thought for a long time
    would be a good enabling step

    MB: On the back end, inside firewall, use SPDY w/o SSL

    MB: Proxies are a good thing, and SPDY doesn't have a story about
    how to play nice with proxies
    ... SPDY does not address cacheable secured content
    ... But everyone is using Content Distribution Networks [CDNs],
    which have largely overtaken proxies for many large operators
    ... But this lack is a weakness for SPDY

    <noah> I think it's really large organizations that are deploying
    CDNs...you prejudice against the long tail when you assume that
    everything accessed from distant locations is sourced by a large
    organization like CNN or Google

    LM: We need an analysis of what the impact of not being able to
    cache actually is

    MB: Right -- what's the impact in aggregate -- even though there are
    clear cases where it loses on an individual basis

    NM: For the original pre-CDN world, your ISP got you pages that
    started a long way away quickly
    ... That won't work with SPDY

    LM: Right, so that's why some more global measurement and analysis
    is required

    MB: And of course any SSL use today is already not proxied
    ... With SNI, you can see the hostname, but not in vanilla SSL,
    which makes virtual hosting difficult
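
The SNI point can be illustrated with Python's ssl module, where server_hostname supplies the name sent in the TLS ClientHello. This is a generic sketch, not anything SPDY-specific; connect_with_sni is a hypothetical helper.

```python
import socket
import ssl

def sni_capable():
    """True if the local TLS stack can send the SNI extension."""
    return ssl.HAS_SNI

def connect_with_sni(host, port=443, timeout=5):
    """Open a TLS connection, sending `host` in the ClientHello's SNI
    extension so a server hosting many sites on one IP can pick the
    right certificate. With 'vanilla SSL' (no SNI) the server must
    choose a certificate before knowing which site is wanted."""
    ctx = ssl.create_default_context()
    sock = socket.create_connection((host, port), timeout=timeout)
    # server_hostname both populates SNI and is checked against the cert
    return ctx.wrap_socket(sock, server_hostname=host)

print(sni_capable())            # -> True on any modern Python/OpenSSL
```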

    <Yves> java doesn't have SNI as well :(

    LM: Corporations may not be happy with the loss of filtering
    capability that follows from ubiquitous SSL usage

    PL: Moving to signed content is another important avenue to look at,
    necessary for peer to peer failover for http

    MB: There are a lot of horror stories out there from big sites about
    proxy badnesses
    ... SSL removes that vulnerability
    ... The mobile operators have this lose-lose tradeoff between
    idiosyncratic compression (fast but potentially bad on the device)
    vs. not (slow but reliable on the device)
    ... Patrick McManus of Firefox has looked at some numbers
    ... 83 connections for the NYT home page puts really bad pressure on
    NAT
    ... But the NAT thing cuts both ways -- dependence on a single
    channel makes NAT dropout more noticeable/serious

    MB: Speculation about Kindle Fire -- you could push the multiplexing
    out to the Amazon connection point at EC2
    ... So that all traffic goes via a single connection (over 443) from
    the Kindle customer
    ... This appears to contradict the end-to-end story SSL demands
    ... Requires the notion of trusted proxy -- SSL man-in-the-middle
    ... So, an explicit proxy: Kindle to EC2, or anyone to their
    corporate firewall
    ... Yes, there is a potential for head-of-line blocking, which can
    amplify in certain cases
    ... But overall we are still winning
    ... It's difficult to model this, you have to collect empirical data
    ... No doubt that with multiple streams, you are more vulnerable
    ... I think there are some TCP tweaks that can help, we're working
    on it
    ... [Stuff about 'slow startup' which scribe didn't get]

    NM: Adding another stream to SPDY doesn't allow cross-stream fixup,
    right?

    MB: Yes
    ... SPDY does in general fix the head-of-line blocking problem
    ... Firefox guys have been trying to get pipelining working better,
    but it's really hard
    ... They presented at IETF last year [ref?]
    ... We were pushed to start all the way down at the packet protocol
    level, but resisted
    ... We think there's a lot of room to optimize on top of TCP, and
    we'll only look downwards after that's worked through

    DKA: So standardization -- What does "take this to IETF in 2012"
    mean in detail? Should the TAG stay involved, and if so why -- in
    what way does it impact on Web Arch?

    NM: Yes, I think TAG should stay involved, but we should discuss
    this on a telcon

    <noah> ACTION: Noah to schedule discussion of how, if at all, TAG
    should continue to be involved with SPDY [recorded in
    [13]http://www.w3.org/2001/tag/2011/11/04-minutes.html#action01]

      [13] http://www.w3.org/2001/tag/2011/11/04-minutes.html#action01

    <trackbot> Created ACTION-626 - Schedule discussion of how, if at
    all, TAG should continue to be involved with SPDY [on Noah
    Mendelsohn - due 2011-11-11].

    TBL: At the Web level, things such as Content type, the Link header,
    things like that
    ... Does SPDY change that? Where are HTTP headers?

    MB: Almost entirely untouched
    ... So the framing layer is pretty much all that changes
    ... We looked at 'improving' some aspects of HTTP -- absolutizing
    all URIs
    ... but there are some servers which don't support it

    TBL: Absolute URIs can indeed be problematic

    LM: Does HTTP 1.1 really require support?

    [Various]: Yes

    MB: Net-net on that -- we backed off doing anything like that
    ... Not yet sure how we go to IETF, exploring that with PStA

    LM: SPDY sounds extremely immature to me -- the impact of this on a
    wide scale, outside Chrome<->Google servers, is just unknown
    ... W3C used to have resources in this area, it would be good to
    have W3C involved in taking something like this forward
    ... Without any guarantee that the outcome will be much like SPDY
    ... But I think a prerequisite for standardization is more
    exploration of the requirements and consequences

    NM: W3C should do this? My sense is that we've been happy with IETF
    taking the lead

    LM: Well, comparing two protocols wrt 'page load latency' is a W3C
    area
    ... 'Doing this' will involve a lot of different tasks - "What does
    the Web require in terms of optimization" is a W3C issue, almost by
    definition

    TBL: Yes

    NM: So, push back on taking it to IETF?

    TBL: No, makes sense to do the protocol there, as LM said
    ... Low-level question -- Can the client change its mind about
    priority?

    MB: Change-priority is not supported, but tab-change in the browser
    might provoke us to rethink that
    ... Note that the priorities are advisory, the server can do
    whatever it likes

    <timbl> chrome://net-internals/#spdy

    MB: Wrt standardization and changes -- putting it out there means
    Google understands that other perspectives are now needed, and will
    lead to changes

    NM: Need to stop, so thanks to Mike
    ... the TAG will come back to this -- who do we feed back to?

    <noah> [14]http://groups.google.com/group/spdy-dev

      [14] http://groups.google.com/group/spdy-dev

    <noah> Send email to this group: spdy-dev 'at' googlegroups.com

    HST: I think the W3C IETF liaison needs to be aware of this, and
    help us decide where the W3C needs to be involved

    NM: Suspended until 1050

Publishing and Linking on the Web

    NM: Resuming

    <noah>
    [15]http://www.w3.org/2001/tag/doc/publishingAndLinkingOnTheWeb-2011
    -10-27.html

      [15] 
http://www.w3.org/2001/tag/doc/publishingAndLinkingOnTheWeb-2011-10-27.html

    DKA: [introduces the above doc]
    ... Questions for Rigo -- Is the overall goal sensible/useful; Are
    there other terms we should add, e.g. 'performance'; Are there other
    regulatory/policy issues we should add to the framing?

    LM: Great start, ready to figure out what the next steps are
    ... To be useful, this has to be published in a way that gets
    community consensus
    ... How do we get there

    <masinter> the TAG could publish it as a NOTE and start a community
    group?

    <jar> but we wanted this to go rec track.

    <jar> Thinh [Nguyen] says effectiveness much enhanced by rec status

    NM: Maybe we should schedule detailed review of this at some length
    at the January F2F

    <noah> ACTION: Noah to schedule very detailed line-by-line review of
    Pub&Linking draft at January F2F [recorded in
    [16]http://www.w3.org/2001/tag/2011/11/04-minutes.html#action02]

      [16] http://www.w3.org/2001/tag/2011/11/04-minutes.html#action02

    <trackbot> Created ACTION-627 - Schedule very detailed line-by-line
    review of Pub&Linking draft at January F2F [on Noah Mendelsohn - due
    2011-11-11].

    LM: Can we get the TAG out of the critical path?

    LM: Community group, which might have some lawyers in it

    RW: The scope of this document is too large

    <noah> I would prefer to focus on linking

    RW: Publishing is a nightmare, and linking even more so
    ... Legal tactic is to partition as much as possible
    ... Reduce this, make two documents

    <jar> jar to rw: this is not a legal document

    DKA: Start with linking?

    RW: Yes

    RW: Look at www.linksandlaw.com

    <jar> linksandlaw.com - I've given out the pointer a few times

    <noah> [17]http://www.linksandlaw.com/linkingcases.htm

      [17] http://www.linksandlaw.com/linkingcases.htm

    RW: there are a very large number of relevant cases

    LM: Would you come to the TAG for help if you were tasked to do so?

    RW: Your first document, to get the story clear, is not for the
    world

    TBL: Why not for the world?

    RW: If you talk too much, your central message goes away [tip of the
    hat to TLR]

    NM: Advice could focus on "if ... then ..." heads-up kinds of
    observations

    RW: There is a long-standing conflict between the technical and
    legal communities

    RW: In particular a conflict over ownership of terms
    ... There are already laws, with terminology definitions

    <masinter> I think we should take what we have and see how we can
    make it useful....

    LM: I just want to make this useful enough to publish
    ... Suppose this is just an outline of what the TAG understands in
    this space
    ... And accepting that we can't resolve the conflicts
    ... Could we point out where the conflicts are?

    RW: If you asked me, I would try to come up with a concise statement
    of the fact that publication implies the possibility of linking to
    ... See e.g. the KPMG example from linksandlaw:
    [18]http://www.linksandlaw.com/linkingcases-linkingpolicies.htm#KPMG

      [18] http://www.linksandlaw.com/linkingcases-linkingpolicies.htm#KPMG

    <jar> oops, I had promised to write something about "don't link"
    terms of use

    <jar> look at the american airlines site (whose URL I can't give
    you)

    RW: Don't stop this document -- use it to get our own understanding clear
    ... but also publish simple short observations that are at the core

    DKA: But that's a legal statement

    NM: We set out to avoid making policy statements
    ... We can't state that publishing gives a right to others to link

    RW: Not a right, but an expectation

    TBL: You're preaching to the choir, but how do we say this?

    RW: We can distinguish between linking itself and the existence of
    access control

    NM: [Hypothetical KPMG example]
    ... How do we make it a technical observation, not a policy one

    RW: The fact that you include a pointer to something on the Web in
    your document has no meaning for the content over there, and is
    completely unrelated to the thing identified

    TBL: If you can from the KPMG home page browse to another page, I
    should be able to pass that link to someone else
    ... The UK position appears to be that publishing a link collection
    to pirate music sites is to be an accessory to copyright theft
    ... We could try reciprocal banning, by notifying KPMG that they are
    not allowed to read the W3C site :-)

    JAR: Wrt what RW and LM said, the original idea, from Thinh Nguyen,
    was that it would be useful, to forestall bad decisions, if there
    was a document that simply stated what the technical community
    thinks these terms mean.
    ... Thinh went on to say that to get the necessary impact, it needs
    to go out as a REC
    ... So it doesn't try to argue with the law, it just says what the
    technical understanding of these terms is

    <jar> session with Thinh:
    [19]http://www.w3.org/2001/tag/2010/12/02-minutes.html#item01 ...

      [19] http://www.w3.org/2001/tag/2010/12/02-minutes.html#item01

    DKA: So we started from there, and the fact that a URI is public
    identifier, to get to the parallel between speech acts and URI use,
    and that brings in the 'right to link', parallel to free speech
    ... from which we got drawn in to the distinction between linking
    and embedding
    ... And getting embedding clear requires us to get the
    publishing/hosting distinction clear
    ... Because it isn't clear that a page with text and video involves
    multiple sources

    RW: The legal side knows about this

    DKA: So, maybe we don't need all of that?

    NM: We're getting different advice from different legal sources. . .

    AM: Alternatively, maybe we should not publish a TAG document, but
    something in the popular space, e.g. the NYT over Tim's byline

    <masinter> so we should make a short statement around which we can
    get consensus, focused on one small issue around linking and
    copyright

    <masinter> at least that's what Rigo is advocating

    RW: The TAG should endorse a short statement about the passive
    linking case: URIs are constitutive of the Web, and publishing a
    page with a URI creates a legitimate expectation of linkability

    AM: If we did a short statement, how do we get it out? To make an
    impact?

    <masinter> the current document is useful for us, but way too broad.

    RW: The publication channel of the TAG itself should be ordinary,
    which has our opinion

    LM: It could be a finding

    RW: And then you go to the NYT and say "The TAG has said. . ."

    NM: If the W3C/the membership/TBL want to say [a quasi-hortatory
    claim about the right to link], that's fine, but it's not for the
    TAG to do so
    ... The TAG does architecture

    TBL: But the architecture has a fundamental social component

    <masinter> i think this can be a finding, making a statement about
    the architectural assumption of the web and underlying many of the
    TAG's other activities is that linking is fundamental, that
    providing URIs for content is recommended, exactly to promote
    linking

    TBL: Spam is a social violation of the architecture
    ... So I think the TAG can speak on this subject in social terms

    NM: Yes, I'm comfortable with talking about social value, and the
    impact of policy on value

    TBL: Forbidding incoming links breaks the [social] system

    NM: But there are clearly (jurisdictional bounded) cases where
    linking to unacceptable material is itself unacceptable

    LM: We haven't said anything about laws

    TBL: But we are close to that in saying KPMG are doing something
    wrong

    RW: Wrt DKA's point, using the free speech analogy is going further
    than I would

    <masinter> we have designed the system such that linkability is a
    benefit. Attempts to restrict linkability are counter to the
    effective use of the system as designed.

    RW: I would focus on the passive case: it's wrong for sites to
    publish rules which forbid linking to them

    <masinter> we don't have to say "the law is a bad idea", we have to
    point out the negative consequences of such laws

    <masinter> i would still like to see Jeni/Dan's document published
    as a note after review, even if we also have a finding about common
    use of linking and the TAG's position on it

    <jar> Thinh Nguyen: default rules are important in court... may be
    supplied by technical standards (that was Thinh last December)

    LM: Trying to avoid value judgements, but just say what the
    consequences of doing so would be

    <jar> The consequences speak for themselves. Generally speaking.

    NM: I want to focus on the consequences of laws which enable
    behaviours

    HST: That's much narrower than I hear from JAR and LM, who are happy
    to discuss consequences of actions, not just laws (or not laws at
    all)

    NM: Look at the KPMG case -- it's going to get legal very quickly
    ... Immediately the question will arise as to whether their
    statement is enforceable by law

    RW: As Thinh Nguyen said, the legal interpretation will be based on
    usage, on community habits

    RW: Is the expectation in practice that certain things hold?
    ... So for example the attempt to smuggle in obligations via
    shrink-wrap was surprising, contrary to expectation, and so not
    supported by the courts

    RW: Similarly the expectation is that publishing something on the
    Web gives the possibility of linking

    <masinter> it gives the possibility of linking, and we've encouraged
    that

    JAR: I actually do think it makes sense for a technical person to
    say that "operating under such-and-such a restriction would have the
    following [bad] consequences", so I'm not ruling out laws

    <noah> +1 to what Jonathan just said

    <masinter> i'm trying to distinguish between "such restrictions are
    bad" and "negative consequences of such restrictions are X, Y, Z;
    such restrictions are bad if they do not have clear redeeming value"

    <masinter> the web was designed for X, it was built with this
    assumption

    HST: OK, I see a continuum; it was a mistake to try to push for two
    distinct positions

    DKA: Saying "restricting linking will have a detrimental effect on
    the Web" is weak from a legal perspective -- The lawyer will say
    "Not my problem"

    <noah> What I want to rule out, mostly, is: "law XXX should not be
    passed". I would rather say: "if you pass law XXX, you should
    understand that the consequences to the operation of the system, and
    to its positive social value will be YYY"

    <masinter> restrictions on linking are impossible to accomplish
    because of web architecture? search, robots.txt, harvesting, ...?

    <masinter> we should be clearer about who the primary audience is
    for this finding

    <masinter> maybe a blog post?

    DKA: We're not asking for unrestricted freedom to link, but no less
    freedom than the freedom of speech

    NM: The speech parallel is weak, because URIs don't point to
    consistent things necessarily
    ... I prefer the address analogy

    NM: Consider the address of [a prohibited organization] in a public
    works document about street repairs versus in a list of recommended
    destinations

    NM: So consider a log of URIs versus a Web page which references one

    LM: Summarizing RW -- the scope of this document is too broad, you
    should find a few one-page extracts

    DKA: I'll take this back to Jeni and consider all of the input we've
    gotten
    ... And decide whether to take this forward broadly but internally
    ... or whether we can pare it down effectively

    NM: There are clearly different opinions about:
    ... 1) What the goal of the document is;
    ... 2) What its scope is.
    ... We should acknowledge the lack of consensus, and maybe the
    divergence of advice we're getting

    RW: Pursuing (w/o publishing) this document will improve the value
    of TAG utterances in the future
    ... What you should say is that the passive case -- a site
    forbidding linking to itself -- is wrong
    ... not the active "I have a right to link"
    ... The latter gets quickly extremely messy

    HST: I'm not sure sending the editors off to work harder when the
    TAG hasn't agreed on scope is an invitation to waste time

    DKA: I wasn't going to dive right in to cut the document down -- I
    want to work with all the feedback we've gotten this week,
    particularly on Wednesday, and that's where I want to focus

    LM: So a new draft is worth working on if it yields something we can
    publish

    NM: Remember Goals and Success Criteria -- we should keep these
    ([20]http://www.w3.org/2001/tag/products/PublishingLinking.html) in
    mind
    ... And consider revising them too

      [20] http://www.w3.org/2001/tag/products/PublishingLinking.html

    <noah> ACTION: Appelquist with help from Jeni to propose changes to
    goals, success criteria etc. for publishing/linking product page
    [recorded in
    [21]http://www.w3.org/2001/tag/2011/11/04-minutes.html#action03]

      [21] http://www.w3.org/2001/tag/2011/11/04-minutes.html#action03

    <trackbot> Created ACTION-629 - With help from Jeni to propose
    changes to goals, success criteria etc. for publishing/linking
    product page [on Daniel Appelquist - due 2011-11-11].

    LM: Right -- for example split the work between a small 'official'
    publication and a larger background unofficial 'white paper'

    NM: Thank you Rigo

3023bis -- Media type registration for the XML family

    CL: I've mailed a summary of recent progress on 3023bis to www-tag
    ... Previous draft deprecated text/xml
    ... Implementors pushed back, as we have to support it even if new
    authors don't use it
    ... I took this to IETF80, got lots of interaction
    ... HTTPbis then removed the default charset handling rules, which
    is consistent with applications today
    ... But that left email out of sync with HTTP
    ... But it now appears there is willingness to fix the text/...
    story to allow individual text/ media types to declare their own
    charset rules
    ... So text/xml can say there is no default charset and the XML spec
    rules determine [which will apply to email as well as HTTP]
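
Practically, the change CL describes means a consumer can no longer assume a fixed default charset for text/xml: it looks for an explicit parameter, and otherwise defers to the XML spec's own encoding declaration and BOM rules. A minimal sketch of the parameter-extraction step (declared_charset is a hypothetical helper; the XML-level fallback itself is out of scope here):

```python
from email.message import Message

def declared_charset(content_type):
    """Return the charset parameter from a Content-Type value, or None.
    Under the HTTPbis change described above, absence of the parameter
    no longer implies a fixed default: for text/xml the XML spec's own
    encoding declaration and BOM rules take over instead."""
    msg = Message()
    msg["Content-Type"] = content_type
    return msg.get_param("charset")

print(declared_charset("text/xml; charset=utf-8"))  # -> utf-8
print(declared_charset("text/xml"))                 # -> None
```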

    LM: So, what is in the way of publication?

    CL: On this front, nothing, but the fragid situation is still
    pending

    JAR: xhtml+xml can have RDFa in it

    HST: What's the problem with that?

    JAR: The case is this -- you have a URI with a fragid, and you want
    to follow your nose
    ... If you look at the 3023bis registration, it says XPointer tells
    you the semantics
    ... so you look for an ID, and don't find one
    ... So there's an error (per XPointer)
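
JAR's follow-your-nose scenario can be sketched as a bare-name XPointer lookup: given a fragid, search the document for an element carrying that ID. This is an illustrative simplification (resolve_barename_fragid is hypothetical, and true ID-ness depends on DTD/schema declarations, which is part of the problem being discussed):

```python
import xml.etree.ElementTree as ET

XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

def resolve_barename_fragid(xml_text, fragid):
    """Bare-name XPointer step: find the element whose ID matches the
    fragment identifier. We accept xml:id, plus a plain 'id' attribute
    as a stand-in for DTD/schema-declared IDs (real ID-ness depends on
    the schema, which is exactly the hard part)."""
    for elem in ET.fromstring(xml_text).iter():
        if elem.get(XML_ID) == fragid or elem.get("id") == fragid:
            return elem
    return None  # per XPointer, an unmatched bare name is an error

doc = '<doc><sec xml:id="intro">hi</sec><p id="p1">there</p></doc>'
print(resolve_barename_fragid(doc, "intro").text)   # -> hi
print(resolve_barename_fragid(doc, "dc.title"))     # -> None (error case)
```

An RDFa term used as a fragid hits the None branch: no element has that ID, so generic 3023bis processing reports an error even though the RDFa layer gives the fragid a meaning.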

    HST: The no new syntax move, for SVG:
    ... if syntax is not a valid xpointer, then defer to the specific
    media type reg
    ... the xhtml media type has to say something about RDFa
    ... and 3023 needs an override policy

    jar: the new piece for me was realizing that we need to also talk
    about application/xhtml+xml

    HST: 3023bis says it is the definitive spec for fragids
    ... what it needs to do is to say under what circumstances it defers
    to the subsidiary media type reg

    <masinter> why is this on critical path for 3023bis ?

    ACTION Henry to work with Chris Lilley to bring forward prose for
    3023bis wrt generic processing of fragment identifiers which
    addresses the rdf+xml and xhtml+xml issues

    <trackbot> Created ACTION-628 - Work with Chris Lilley to bring
    forward prose for 3023bis wrt generic processing of fragment
    identifiers which addresses the rdf+xml and xhtml+xml issues [on
    Henry Thompson - due 2011-11-11].

    ACTION-628 due 2011-12-31

    <trackbot> ACTION-628 Work with Chris Lilley to bring forward prose
    for 3023bis wrt generic processing of fragment identifiers which
    addresses the rdf+xml and xhtml+xml issues due date now 2011-12-31

    <noah> That's fine, Henry, assuming you'll also put in the cover
    email a bit of framing to remind people of the history and context
    for whatever is proposed

    LM: Isn't this a general point about mixins

    <masinter> use of RDFa as a mixin applies to more things, like JSON

    LM: Can't we get some cleaner layering?

    NM: In the interest of getting 3023bis out, can't we do this locally
    first, in 3023bis, on the assumption that it will be consistent with
    any cleaner general solution?

    LM: Getting mixins right is likely to be a huge problem, you're
    going to get stuck in a tarpit

    CL: You may be right, we'll see

    LM: The way to get follow-your-nose for mixins requires you to
    modify the host-language media type definitions systematically

    HST: In my recent email I distinguish between "witting" and
    "unwitting" host languages
    ... The unwitting case is when an XML language (e.g. XML Schema,
    XSLT, XML Signatures) allows any attribute on any element, as long
    as it's in a namespace that's not theirs

    HST: The witting cases work easily, but the unwitting are important.
    They allow mixins without explicitly naming them in their specs, so
    they can't be expected to change the media type registration
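
HST's "unwitting" case -- a host language whose schema admits arbitrary attributes from foreign namespaces -- can be illustrated by scanning a document for such attributes. The example namespace URI and sample markup below are made up for illustration:

```python
import xml.etree.ElementTree as ET

XML_NS = "http://www.w3.org/XML/1998/namespace"

def foreign_attributes(xml_text):
    """Return (element tag, attribute name) pairs for attributes in a
    namespace other than the built-in xml: one. Unprefixed attributes
    are in no namespace, so anything with a braced-URI name is a
    candidate mixin -- the hook an 'unwitting' host language leaves
    open without ever naming RDFa (or anything else) in its spec."""
    found = []
    for elem in ET.fromstring(xml_text).iter():
        for name in elem.keys():
            if name.startswith("{") and not name.startswith("{%s}" % XML_NS):
                found.append((elem.tag, name))
    return found

# Hypothetical host document with an RDFa-style attribute mixed in
doc = ('<stylesheet xmlns:ex="http://example.org/rdfa">'
       '<template ex:property="dc:title" id="t1"/></stylesheet>')
print(foreign_attributes(doc))
# -> [('template', '{http://example.org/rdfa}property')]
```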

    <masinter> are there other mixins other than RDFa that add fragment
    identifier possibilities?

    <masinter> can i mix-in SVG into something and use SVG visible
    objects as fragment targets?

    HST: 'Unwitting' embedding of RDFa can't trigger a change to a media
    type registration

    LM: Would unwitting embedding of e.g. SVG introduce the possibility
    of using SVG-style fragids which only work in the SVG sub-part?

    NM: There is at least some discussion of that case in the
    Self-describing Web finding

    TBL: This comes under the XML functions story as well

    LM: So, go ahead, but be aware that there be dragons

Web Storage

    AM: There is a Web Storage Last Call WD just out
    ... We need to decide whether we want to comment

Admin

    NM: RDFa vs. Microdata will require our attention wrt HTML WG
    process by mid-January, we will return to this

Summary of Action Items

    [NEW] ACTION: Appelquist with help from Jeni to propose changes to
    goals, success criteria etc. for publishing/linking product page
    [recorded in
    [22]http://www.w3.org/2001/tag/2011/11/04-minutes.html#action03]
    [NEW] ACTION: Noah to schedule discussion of how, if at all, TAG
    should continue to be involved with SPDY [recorded in
    [23]http://www.w3.org/2001/tag/2011/11/04-minutes.html#action01]
    [NEW] ACTION: Noah to schedule very detailed line-by-line review of
    Pub&Linking draft at January F2F [recorded in
    [24]http://www.w3.org/2001/tag/2011/11/04-minutes.html#action02]

      [22] http://www.w3.org/2001/tag/2011/11/04-minutes.html#action03
      [23] http://www.w3.org/2001/tag/2011/11/04-minutes.html#action01
      [24] http://www.w3.org/2001/tag/2011/11/04-minutes.html#action02

    [End of minutes]
      _________________________________________________________


     Minutes formatted by David Booth's [25]scribe.perl version 1.135
     ([26]CVS log)
     $Date: 2011/11/06 17:48:54 $

      [25] http://dev.w3.org/cvsweb/~checkout~/2002/scribe/scribedoc.htm
      [26] http://dev.w3.org/cvsweb/2002/scribe/


===========================================================

Received on Thursday, 10 November 2011 20:02:04 UTC