W3C

- DRAFT -

TAG F2F

07 Jan 2014

Agenda

See also: IRC log

Attendees

Present
Tim Berners-Lee, Daniel Appelquist, Peter Linss, Henry Thompson, Yehuda Katz, Anne van Kesteren, Jeni Tennison, Alex Russell, Yves Lafon, Sergey Konstantinov
Chairs
Daniel Appelquist, Peter Linss
Special Guests
Dominique Hazaël-Massieux, Virginie Galindo
Scribe
Henry S. Thompson, Jeni Tennison

Contents


Agenda planning

[Per Wiki: http://www.w3.org/wiki/TAG/Planning/2014-01-F2F]

YK: DRM? Draw a line and say "not our problem"?
... Too much politics for us?

TBL: I had hoped for some technical clarification

YK: AB more than us?

TBL: Interop is our business

YK: I think the consensus (minus TBL) was that the proposed technology would harm interop

TBL: Users of [Netflix] think it's useful -- they are worried about an open platform

YK: Tech. focus needed if we put this on the agenda

DKA: Yes, we need to go back to the architecture of components and interfaces

HST: I'll do a 5-minute intro based on the thread from October (http://lists.w3.org/Archives/Public/www-tag/2013Oct/0050.html)

TBL: Focus on architecture

[some discussion about W3C work on [XML?] logfile format, scribe missed]

<Yves> last work on logfile was... in 96

Capability URIs

JT: I've produced a document

<JeniT> http://w3ctag.github.io/capability-urls/2014-01-03.html

JT: Document discusses why, whether, and how to use Capability URLs

YK: APIs might need some thought

JT: Haven't got to recommendations on what standardization is needed to help take this forward
... My main question is whether we should encourage this

YK: Moot -- they're already in widespread use

JT: So, OK, is the work needed to improve things/standardize/etc. worth the potential improvement?

YK: Well, e.g., Github users risk getting sniffed, overlooked, etc.

JT: There are lots of ways in which URLs get leaked, not just over-the-shoulder
... e.g. Referrer header

YK: Can Referrer header contain a cross-domain URI when scheme is https?

JT: I think so

AvK: I think not
... Some amount of Referrer control under development, opt-in

<annevk> Referer policy browsers are converging towards (I think only Chrome has this at the moment): http://wiki.whatwg.org/wiki/Meta_referrer
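[A minimal sketch of the opt-in referrer control AvK mentions, following the wiki draft above; the "never" keyword is an assumption, since the draft vocabulary is still in flux:]

    // Normally declared in markup as <meta name="referrer" content="never">;
    // the equivalent can be injected from script before any outgoing requests are made.
    const meta = document.createElement('meta');
    meta.name = 'referrer';
    meta.content = 'never';   // suppress the Referer header for this page's requests
    document.head.appendChild(meta);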

YK: Good to have a list of exposure points

JT: I have some of these in the document -- plain http is always a bad idea, 3rd-party scripts, ...

YK: Best practice: Don't Leak This

DKA: TAG recommendation along these lines?

YK: Header which says "this is secret"

AvK: CSP directive ?

DKA: Risk/exposure is in scope for the document

TBL: The simple observation is valuable: putting security in URLs, when URLs are in wide use, is intrinsically risky

YK: Too late to say "don't do this"

JT: Not saying that, saying: here are the risks, consider them before going ahead

YK: Yes, but also, look for mitigation strategies

DKA: Suggest a WG do this?
... Focus on this document -- what more needs to be done before publishing it

YK: Looks good to me -- but check the Referrer facts

JT: Wrt risks, mitigations are listed: always use https (several levels); capability URLs should expire; no links to 3rd-party scripts

YK: Github ones can't be expired

AvK: better "no links to untrusted 3rd-party scripts" ...

SK: No use of capability URLs by robots
... Google Analytics, MS Mail, etc.
... because then open to e.g. searching

JT: Yes, search engines find URLs wherever they can

SK: Yes, once you find one, you can find lots via wildcarding

JT: robots.txt is only as good as crawlers let it be

TBL: Signalling that a capability URL is an important secret is a bit counter-productive -- it just tells bad guys where to focus their efforts. . .

DKA: Well, how can we avoid capability URLs being crawled if robots.txt isn't the right way

PL: I put poison links at the front and back of every page which I protect with robots.txt
... Anyone who follows them twice gets firewalled

TBL: Higher-level thought: the problem with using URLs for important stuff is that if one does leak, then I have no way of knowing who's accessing my data

HST: But you have server logs

TBL: You have IP addresses, but not identity
... So in the document, between sections 4.1 and 4.2, need something about recognizing compromises
... That is, how can I tell that it's been compromised
... JAR would be arguing for capabilities

AvK: Capabilities are great, but we're talking about using URLs for capabilities

YK: But URLs are the basic currency of the Web, it's natural to want to use them
... Trying for the perfect capability system would be too complicated

JT: What about capabilities via email -- any recommendations?

AvK: At least make it expire quickly

YK: Suggestion wrt the shoulder-surfing section: it should move higher up

JT: Using replace-state means you can't bookmark
... So the back button won't work
... the swap mechanism fixes that
... but doesn't fix the bookmarking problem
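[A minimal sketch of the swap JT describes, assuming the capability token lives in a hypothetical "key" query parameter: after load, history.replaceState rewrites the visible URL so the secret is no longer on display, at the cost of bookmarking:]

    const pageUrl = new URL(window.location.href);
    const token = pageUrl.searchParams.get('key');   // hypothetical parameter name
    if (token) {
      pageUrl.searchParams.delete('key');
      history.replaceState(null, document.title, pageUrl.toString());
      // keep `token` in memory for later requests; the address bar and any bookmark
      // now hold only the plain, non-capability URL
    }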

AvK: And that's important
... Note that gist.github would then be completely useless

JT: OK, so I'll take it out

HST: No, just explain what it doesn't work for/breaks

TBL: Suppose all capability URLs were recognizable by browsers, then would it be obvious how to modify browsers to do the right thing?

AvK: Treat it like a password -- blur, etc.

YK: Yes, doesn't go into history

TBL: But then you can't cut and paste it

YK: I said before, this is a big open-ended topic, suitable for a new (or existing) WG, not us

JT: Another issue wrt moving forward
... when you have a resource (a doc, e.g.), and there are capability URLs to enable others to edit
... How do you indicate they are all for the same resource?
... rel=canonical isn't really right

YK: Seems like a lot of the semantics of rel=canonical are correct

TBL: If I give out two capability URLs for a calendar, neither of them is canonical

YK: But one could be

TBL: Giving bad guys too much information?

JT: Not all would be listed, all would point only to the core one
... And it could be the one with access control

AvK: [Flickr example -- scribe didn't catch]

YK: Making the canonical copy access-controlled is the right move

AvK: But we don't want them indexed. . .

YK: Similar to cache -- you want to cache the canonical one

AvK: Hunh?

JT: At least you have some ability to do comparisons across users

HST: So, something about this does belong in the document
... How it corresponds, to some extent, to the core use of canonical
... And what it does and doesn't give you

AvK: OK, but not v. important
... The redirecting thing is more important

JT: 301 Moved Permanently?

AvK: Yes
... Say anything about what happens if you try to use a capability URL which has expired?

JT: Not sure what the right response is

AvK: 404?

HST: Too weak -- wait, I see, maybe that's right, it doesn't give anything away

JT: 410 Gone might be more appropriate

AvK: Possible, but not required

YK: But 404 is retryable, while 410 is not

TBL: Does anyone ever distinguish between 4xx codes?

YK: Yes, I did

<Yves> I don't know any implementation caring about the real meaning of 301 or 410

<Yves> is there any browser modifying/deleting bookmarks based on such response?

AvK: I tried using it, people kept re-fetching. . .

DKA: So what does the doc. say?

HST: See YK's meta-point -- this is part of further work

DKA: So add something saying the 410 is right in principle, but may not be well-supported
... In practice, if you try for an expired capability URL, do you get a 200 or a 404?

YK: Tried an example with gist.github, it gives a 404

AvK: To give a 410, you would have to have a history of your issued capabilities
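[A sketch of the 404/410 distinction above, assuming Express and a hypothetical registry of issued tokens; as AvK notes, answering 410 requires keeping that history, and as noted in the IRC comments below, a 410 also leaks that a capability existed:]

    import express from 'express';

    const app = express();
    const issued = new Map<string, { expires: number }>();   // hypothetical token registry

    app.get('/edit/:token', (req, res) => {
      const record = issued.get(req.params.token);
      if (!record) return res.sendStatus(404);                     // never issued: reveal nothing
      if (record.expires < Date.now()) return res.sendStatus(410); // issued but now expired
      res.send('editable resource');                               // valid capability
    });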

<ht> Actually, keeping a history is probably a good idea anyway, IMO

[Meta discussion about publishing]

HST: +1 to Finding

<wycats_> I vote :shipit:

<Yves> note that giving a 404 hides that there was a capability

<Yves> 410 leaks that there was one

<ht> Yves, yes,

<Yves> like github giving out 404 for hidden/protected projects instead of 403

HST: Rec Track requires an AC review, I don't think we need to go there

<dka> Suggest process going forward: going to working draft, seeking some public feedback, and then publishing as a "finding."

JT: Use 2119 words officially?

AvK: In accordance with them, but not 'officially' . . .

TBL: Referencing the 2119 RFC isn't necessary
... w/o a Conformance section it doesn't make sense

JT: Best Practice boxes. . .

DKA: So, yes, publish a (F)PWD, seek feedback, we address, then publish a finding

JT: Not including standardisation

DKA: Agreed, but identifying gaps/further work is good
... w/o discussing solutions

<dka> RESOLUTION - we move ahead with the publication of Capability URLs towards a TAG finding.

Closing issues

<dka> Open issues: http://www.w3.org/2001/tag/group/track/issues/open

<dka> TAG products: http://www.w3.org/2001/tag/products/

<dka> Github repo: https://github.com/w3ctag

<dka> Spec review list (github issue tracker): https://github.com/w3ctag/spec-reviews/issues

DKA: Should we clarify where we're actually working
... I've edited the home page to suggest the way we're moving to Github
... Do we want to keep any of these issues?
... Are there things we should bring forward? Or that we have abandoned or that have been overtaken?

YK: URIs for packages?

DKA: Charter says Issues are what we're working on
... And we have two places where we are recording them

YK: Propose that onus is on individual to move issue from old list to Github

DKA: Proposed resolution: Github issues list shows what we are committed to work on

JT: Archived?

PL: working on that, but not in place yet

<dka> This is our github issue tracker: https://github.com/organizations/w3ctag/dashboard/issues

YK: Can be exported at any time

DKA: So if we move one, we would need to point back to the old Issue
... Happy not to go through the old list

DKA: We should close some of these issues

<dka> Re issue-60, I propose that we record this as closed since the TAG has published work on this topic.

<dka> issue-67: html and xml - we had a task force, we've done everything we intend to do here.

<trackbot> Notes added to issue-67 HTML and XML Divergence.

HST: issue-64 and issue-65 can be closed

<dka> PROPOSED RESOLUTION: Summarily close all other issues except issue-57 unless TAG members wish to reopen them in github.

JT: Close 25, Deep Linking?

<dka> issue-25 can be closed - we have published work on this.

DKA: Yes

<dka> Issue-40 can be closed as we have completed work in this space - it can be re-opened if there are key URL/URI topics we need to work on.

<ht> Closing 40 should mention both capability URL and FragId drafts

PL: Use Postponed to indicate we 'closed' w/o review?

DKA: Fooling ourselves?

HST: The substance will persist on the Web regardless of what we call it

<dka> PROPOSED RESOLUTION: Summarily mark as "postponed" all other issues (not explicitly noted above) except issue-57 unless TAG members wish to reopen them in github.

PL: Thought it was worth it

TBL: Better to make the distinction

HST: Right, OK, because 'Closed' means we actually did something

RESOLUTION: Summarily mark as "postponed" all other issues (not explicitly noted above) except issue-57 unless TAG members wish to reopen them in github.

DKA: Moving on to Products

<dka> Products: http://www.w3.org/2001/tag/products/

DKA: Obsolete this page, and ref. Github?
... Not updated for some time. . .

YK: +1

<ht> (Note also http://www.w3.org/2001/tag/group/track/products)

<dka> PROPOSED RESOLUTION: we obsolete the tag products page, explicitly state on our home page that the current tag work is in github and info can be found in the github readme files associated with each product.

DKA: We can move some things to Completed, and then note that no further changes will be made

RESOLUTION: we obsolete the tag products page, explicitly state on our home page that the current tag work is in github and info can be found in the github readme files associated with each product.

<dka> ACTION: dan to make edits to the tag home page and product page accordingly. [recorded in http://www.w3.org/2001/tag/2014/01/07-morning-minutes.html#action01]

<trackbot> Created ACTION-846 - Make edits to the tag home page and product page accordingly. [on Daniel Appelquist - due 2014-01-14].

DKA: Github next, but we need AR for that

IETF London Action Plan

<dka> http://www.ietf.org/meeting/89/index.html

<dka> https://www.w3.org/2014/strint/Overview.html

Security workshop (aka STRINT) is 28/2--1/3, Friday and Saturday

DKA: Will be in London
... I'll be there
... Emphasis is, I believe, on technical issues

IETF is at Hilton Metropole 3-7/3

DKA: I'll attend the HTTP part of that, at least

DKA: What other APPSDIR stuff should we be looking at?

HST: Get Your Truck off my Lawn? We can ask MN tomorrow if he wants any help
... JSON?

DKA: We'll come back to that

<ht> Yves, are you going?

<Yves> dka, I don't know yet if I'll be there (london ietf) or not

Pause for lunch

<scribe> ScribeNick: annevk

<ht> Scribe: Anne van Kesteren

Data Activity

<PhilA> http://www.w3.org/2014/Talks/0701_phila_tag/ -> PhilA slides

[Recap: capability URLs will become a TAG finding. If you have feedback, slightlyoff, please pass it on.]

[We did not close ISSUE-57. If you care about an issue you need to open it in GitHub. We did not talk about HTTP2.]

<timbl> http://www.opc.ncep.noaa.gov/UA/USA.gif

PA: A decade ago I ended up sniffing around this W3C organization. I ended up in one of Dan's groups.

DA: I take no blame!

PA: If it's data and not something else, say HTML or XML, it's part of the data activity.

[Goes through aforementioned slides.]

PA: Interested in government data (e.g. mapping criminal activity), but also scientific data, such as free access to papers
... in the scientific world there's a question of how you can have open access while still having peer review

[Slide projects web applications on one side and spreadsheets/data/etc. on the other side.]

PA: we'd like to bridge the gap between the data and the application, make it easier
... if you want to do more involved things with data you end up with semantic web technology
... a lot of the RDF stuff is done
... if you export to CSV (or tab-separated, etc.) you lose a lot of data
... we want to be able to describe the metadata separately
... so they can be independent actors
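[A purely illustrative sketch of what "describing the metadata separately" could look like; the property names are invented here, since the group's metadata format is still to be defined:]

    // data.csv stays a plain table; a separate description records what its columns mean.
    interface ColumnDescription {
      name: string;       // column header as it appears in the CSV
      datatype: string;   // e.g. "integer", "date"
      unit?: string;      // e.g. "people", "GBP"
    }

    const populationColumns: ColumnDescription[] = [
      { name: 'area', datatype: 'string' },
      { name: 'population', datatype: 'integer', unit: 'people' },
      { name: 'measured', datatype: 'date' },
    ];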

<slightlyoff> annevk: this is the open-vs-closed dataset issue I keep bringing up

PA: we want to find the middle ground between RDF and CSV

<slightlyoff> annevk: in a world where people have incentives to lie, and data isn't pre-groomed, schemas are suggestions

<slightlyoff> and say very little about quality

HT: I have a PhD student who works on extracting scientific data out of HTML tables and RDFa
... you partition datasets around dimensions such as geography and/or time
... then we map these partitions on URLs
... so they make sense within the context of the web architecture
... and then you end up with HTML tables with RDFa annotation (through a small vocabulary) so the data can be extracted

<ht> and generic visualisation and processing tools (e.g. Map-Reduce) can be deployed w/o prior knowledge of the particular format

PA: You can see this on e.g. Google when you query "Population of London"

<ht> I'll send a link to our XML Prague paper in a week or two

PA: The CSV group is trying a more practical approach
... taking existing data and annotating that in a structured way
... That's not the only thing, there's another WG on best practices
... we need to know how often datasets are updated
... whether they are kept alive
... how you cite datasets
... We have some workshops coming up [see slides]

Data Activity and the TAG

PA: we might need some work around packaging of data
... work around access control and payments
... having rendering of maps in the browser directly

AR: What does that even mean?

PA: Have more control over maps in the browser

TBL: when Google Maps started it was precomputed images
... now more of it is vectors

AR: we switched rendering technology, yes
... I still don't understand what is being meant here

TBL: Google Maps allows overlays, PA is proposing to have this in 3D

PA: the idea would be to have maps in the browsers, just as you have videos in the browser

[Discussion of how this would be enormously complex went unminuted.]

[Question about URL fragments.]

JT: There's an RFC for addressing individual cells in a CSV file
... There's also addressing data within a packaged file, which is part of a larger issue we discussed before

TBL: Linking into text/plain would be good

JT: There's an RFC [draft?] for that too

[Scribe notes that because browsers render text/plain using text/html you might get some problems there.]
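[Illustrative examples of the CSV fragment identifiers JT refers to above; the RFC defines row, column and cell selectors, and the values below are made up:]

    const examples = [
      'http://example.com/data.csv#row=5',      // a single row
      'http://example.com/data.csv#col=2',      // a single column
      'http://example.com/data.csv#cell=5,2',   // one cell: row 5, column 2
    ];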

PA: this was about all I had

DA: In the TAG we've been talking about how to add features to the platform. "Extensible Web". I wonder how this relates and what it means to web developers.
... There's a huge ecosystem of startups and developers that want to make use of this data and the reality is that people always have to scrape data in the most heinous way possible.

PA: Part of that is political. When the Boris Bikes app went live it used a scraper because the API was not provided.
... The scraping went wrong sometimes because of staleness in one of the three mirrors.
... After the mayor enforced an open API and the website started using that, everything worked a little better, to the tune of one million pounds

[JT mentions it's not really open]

<dka> trackbot, this is tag

<trackbot> Sorry, dka, I don't understand 'trackbot, this is tag'. Please refer to <http://www.w3.org/2005/06/tracker/irc> for help.

PA: We want to make more data available, with more metadata

<dka> trackbot, start meeting

<trackbot> Meeting: Technical Architecture Group Teleconference

<trackbot> Date: 07 January 2014

PA: Someone said RDF is not web native
... this is not true

TBL: The person that said that is a great hacker. He's trying to make sure the W3C doesn't force RDF on people
... He's a good guy

JT: There was the question of fragment identifiers into packaging
... We also think we should package CSV and the metadata
... And we need something for that and probably coordinate that

AR: For a web developer a CSV file is not useful; I need to bring my own toolchain

YK: JSON is useful because it uses the data structures you are already familiar with
... This is also the problem with XML

<darobin> [and the endemic issue with RDF]

AR: There are two small primitives we added, mutation observers and Object.observe()
... They allow for data binding in your application with relatively high fidelity
... I can't just have some source of data and render it
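[A minimal sketch of the first of the two primitives AR mentions: MutationObserver reports DOM changes asynchronously; Object.observe() (then an ECMAScript proposal, since withdrawn) played the same role for plain objects. A data-binding layer would sit on top of hooks like these; the element selector is hypothetical:]

    const target = document.querySelector('#price');   // hypothetical bound element
    if (target) {
      const observer = new MutationObserver((mutations) => {
        for (const m of mutations) {
          console.log('DOM changed:', m.type);          // a binding library would re-render here
        }
      });
      observer.observe(target, { childList: true, characterData: true, subtree: true });
    }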

TBL: What is missing? The databinding?

AR: Just feeding UI elements with data really
... There's no mechanism for that

PA: Here is an example of CSV rendered in the browser

AVK: The point AR made is that you need to write that whole application

TBL: A lot of things that come out of SPARQL are tabular
... What would be nice is to be able to feed a table in your DOM with data

AR: As a web developer my task is to make a table useful

JT: Currently passing around tabular data is extremely poor

AR: What is my API story around some CSV

<dka> JT: The way it would fit into the extensible web story - eventually we would like to have a native platform API for accessing csv or tabular data files... that's way off...

AR: It seems to me that even if you describe CSV, there's still no advantage to the web developer
... they still need to scrape and do things

[Argument about whether this should be part of the W3C]

PA: I had this same argument with JJ
... The XML community has this argument too

TBL: PHP is part of the ecosystem, should we do that?

JT: The group just started out scoping out use cases and requirements
... As part of that wondering how it relates to the browser is something we could do

<dka> DA: We should be focusing on the interfaces between layers, including the client (browser)...

AR: As a first pass I wonder whether it makes sense to parse that

PA: it would be great to have something like Microsoft Excel in the browser

HT: This is about lowering the bar for publishers
... Let me make it clear, this is about lowering the bar for producers, in particular

AR: I was curious to know to what extent the question around data life cycle is on the table

PA: We don't have that as a specific thing

<dka> AR: Synchronization - if I'm watching a time series i.e. stocks, how do I keep the data [on the client] in sync?

TBL: In an ideal collaborative environment changes happen on all clients simultaneously

AR: I was thinking about best practices
... Doing it cleanly across multiple clients is very hard

JT: Not currently in scope

<JeniT> worth pointing to Max Ogden's work on dat in this context too

<JeniT> https://github.com/maxogden/dat

DA: I think it's important to focus on the client
... How does this improve the life of a web developer
... Hopefully you can help us out here JT

JT: My concern with the CSV group is that it's mostly a SemWeb crowd

DA: My concern with that SemWeb crowd is that they don't get web developers

JT: It might be that they're interested in other things than web developers

DA: Then it might not be relevant to the web platform stack

Web Crypto

<wseltzer> [wseltzer joins via webrtc]

VG: We are about to go to Last Call with Web Crypto and would like a review from the TAG
... DK asked me to present where we are

<PhilA> http://www.w3.org/2014/01/W3C_Web_Crypto_API_status_january_2014.pdf -> Virginie's slides

[VG goes through slides]

VG: Not much feedback on the specification
... The specification is low-level and requires crypto knowledge to do the right thing
... That was the sole feedback we got, from Dan Boneh
... We were not comfortable defining a high-level API as we were not sure what is correct

<slightlyoff> API examples here: https://dvcs.w3.org/hg/webcrypto-api/raw-file/tip/spec/Overview.html#examples-section

<dka_> Chair: Peter

TBL: I like that this is at the binary level

VG: We decided to not touch on SSL
... We think SSL should be a separate stack
... from Web Crypto
... The API is shaped around creating a key and then allowing it to be used by the app
... Where the key is stored and such is not up to the implementation
... We are not here to provide high security

YK: Can you enumerate what is supported?

VG: No

AR: So as a developer you cannot tell which browsers support what. You don't know what is supported.

AVK: Is there a baseline?

AR: There's no baseline because you can remove it

AVK: It seems if there's no baseline developers will end up relying on what's supported across UAs
... What's the idea around interoperability? Is there a race to the bottom?
... Do you have algo Y in browser A and Z in B?

YK: Yes

AVK: That seems bad

TBL: What is this implemented in?

AR: C++

YK: Probably JavaScript too, through Emscripten
... The process of determining whether an algorithm is present or not is slow and currently synchronous
... which seems bad

<slightlyoff> (we should note that we want something like subtleCrypto.availableAlgorithms().then(function(list){ ... });

<slightlyoff> )

<slightlyoff> also, the specific issue that YK flagged is that Section 18 should ensure that the errors are sent to the Promise error handler instead of throwing errors
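[A sketch of the shape being suggested; availableAlgorithms() is not in the Web Crypto draft, it is the hypothetical addition under discussion. The point is that algorithm discovery should be asynchronous and promise-based, with failures rejecting the promise rather than throwing:]

    // Hypothetical extension to SubtleCrypto, as proposed in the IRC comments above.
    declare const subtleCrypto: {
      availableAlgorithms(): Promise<string[]>;
    };

    subtleCrypto.availableAlgorithms().then((list) => {
      if (!list.includes('AES-GCM')) {
        // fall back to a JS implementation, or disable the feature
      }
    });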

<dka> AR: Can I implement an algorithm in JavaScript?

<timbl> http://code.google.com/p/crypto-js/

<timbl> http://bitwiseshiftleft.github.io/sjcl/

AR: Netflix might want to ensure the algorithm they want really is the algorithm they want.
... If they can supply the algorithm that would be possible
... This would be a secure worker of sorts
... That has access to the secure memory the same way these built-in algorithms do

<dka> VG: [idea of a trusted worker which would have higher security requirements]

VG: Web Crypto API Key Discovery is an idea from Netflix about reusing a key
... Microsoft has implemented an older version of the draft
... What are the TAG next steps?

AR: let's say two weeks

<dka> ACTION: alex to draft a review of web crypto api [recorded in http://www.w3.org/2014/01/07-tagmem-minutes.html#action02]

<trackbot> Created ACTION-847 - Draft a review of web crypto api [on Alex Russell - due 2014-01-14].

VG: thank you very much

Security in general

[Small introduction about the various security issues the TAG is looking into, but has no work item on, around TLS, HTTP, etc.]

VG: In W3C there are a couple of groups that discuss security
... WebAppSec WG, Web Crypto WG, SysApp WG, Web Security IG (trying to revive this one), W3C AC
... I would say the real group working on it is the WebAppSec WG

<scribe> [Continues going through slides...]

<wseltzer> [I'd say we have a strong security community when asked specific questions; I'd ask the TAG's help in engaging the Web community in thinking about security at an architectural level.]

VG: We should have systematic security review

AVK: I'm confident that features implemented in browsers will have to pass security review
... And therefore specifications will have had security review
... We could have more, of course

<timbl> http://www.fidoalliance.org

<dka> https://www.w3.org/2014/strint/program.html

<wseltzer> https://www.w3.org/2014/strint/

STRINT Workshop (A W3C/IAB workshop on Strengthening the Internet Against Pervasive Monitoring)

[Censor. Monitor that.]

DA: Should the TAG have an opinion?

AR: Is there anything we can do here?

[TAG is unsure whether they can do anything meaningful in life.]

RESOLUTION: TAG is in favor of world peace

DA: We should raise security on the existing web

<wseltzer> A question for TAG: are there Web analogs to the protocol-level principles IETF has adopted (i.e., encrypt all transports)?

HT: It's not clear it would be good for the W3C and IAB to discuss things at that level

<slightlyoff> I'd like to note that the questions are very technical: https://www.w3.org/2014/strint/Overview.html

DH: The TAG could provide structural advice as to how the W3C should handle security matters

<Zakim> timbl, you wanted to ask about common UI for key discovery for security

<slightlyoff> ht: I agree with that...what high-level things do you think they can accomplish?

<wseltzer> A question for TAG: are there Web analogs to the protocol-level principles IETF has adopted (i.e., encrypt all transports)?

[Remote participants are being censored... Unclear whether it's on purpose.]

<timbl> s/I could tag you as "offensive user" in the webrtc interface…/I think you are all so charming/

<timbl> censorship of the log

<wseltzer> High-level, What can be W3C's unique contribution to enhanced security against pervasive monitoring?

<JeniT> wseltzer: what do you think?

<timbl> seltzer, what do you think?

[Censor has been removed. Most seem happy.]

WS: Are there places for web security with non-vague architectural principles, like recommending TLS at the protocol level?

YK: CSP is one thing we should totally recommend

AR: CSP is defense in depth, helps against certain kinds of attacks
... Where are we going with this?

DA: Putting more meat around best practices [link?] we put out before
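[An illustrative Content-Security-Policy of the kind YK recommends above, set here from an assumed Express server; the directives restrict where scripts may load from, which blunts a class of injection attacks:]

    import express from 'express';

    const app = express();
    app.use((req, res, next) => {
      res.setHeader('Content-Security-Policy', "default-src 'self'; script-src 'self'");
      next();
    });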

WS: I plan to attend and propose some inputs
... In the IETF there's a question across WGs about what pervasive monitoring means to their protocol
... What's the analogous question for the W3C?

<slightlyoff> dom: there's an uncomfortable question about what performance we'd be willing to give up to get resilience

<slightlyoff> dom: I honestly think fingerprinting is a lost cause

there's no resilience against fingerprinting

right

<slightlyoff> dom: but it'd be good if users could turn up a slider and become more resilient to metadata snooping

<dom> "

<dom> The overall goal of the workshop is to steer IETF and W3C work so as to be able to improve or “strengthen” the Internet in the face of pervasive monitoring. A workshop report in the form of an IAB RFC will be produced after the event. "

DH: Is there anything beyond the transport layer that needs tackling?

<JeniT> previous minutes: http://www.w3.org/2001/tag/2013/10/01-minutes.html#item04

DH: Possible answers: 1) No; 2) Yes, X, Y, Z; 3) Yes, but not sure

YK: We know about XSS, CSRF, etc.

DH: This is about pervasive monitoring, not security in general

YK: If I were the NSA, I would [government]-in-the-middle all the time

<wseltzer> JeniT, in response to your earlier question, possibilities might be in UI, better identity management/authentication, stronger browser bindings for transport security and DNSSEC (DANE)

YK: It's pretty clear that the exploits that exist will be used for monitoring
... App has ads, ads can execute code, boom

<dka> AR: making pervasive monitoring visible, tamper-evident... [in general more difficult] should be a goal...

<wseltzer> +1 on making monitoring tamper-evident

TBL: Should we encourage encryption on the client so the server does not know the data?

AR: With Web Crypto we could enable that
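[A minimal sketch of what TBL asks about, assuming the (then-draft) Web Crypto API in roughly its eventual form: the data is encrypted in the client before upload, so the server only ever sees ciphertext. Key management, the hard part as noted just below, is not shown:]

    async function encryptForUpload(plaintext: string) {
      const key = await crypto.subtle.generateKey(
        { name: 'AES-GCM', length: 256 }, true, ['encrypt', 'decrypt']);
      const iv = crypto.getRandomValues(new Uint8Array(12));
      const ciphertext = await crypto.subtle.encrypt(
        { name: 'AES-GCM', iv }, key, new TextEncoder().encode(plaintext));
      return { key, iv, ciphertext };   // only iv and ciphertext would go to the server
    }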

[Scribe missed a bunch :/]

AR: Key management is the thing that kills crypto systems

HT: Problem for XML digital signatures too

<timbl> pong

<annevk_> TBL: I use PGP, it integrates nicely with the mac client, but the key management is appalling

<dka_> ACTION: alex to prepare an input into the STRINT workshop from the TAG [recorded in http://www.w3.org/2014/01/07-tagmem-minutes.html#action03]

<trackbot> Created ACTION-848 - Prepare an input into the strint workshop from the tag [on Alex Russell - due 2014-01-14].

<JeniT> ScribeNick: JeniT

<dka_> VG: One of the STRINT topics is webrtc...

<scribe> ScribeNick: dka_

VG: In WebRTC there are some security/identity requirements (from IETF) which need to be addressed...

DH: My understanding is that this will depend on the browser - not come from the W3C. We provide the hooks on which the browsers can provide the interfaces...

<dka> scribenick: dka

DH: It's purely in the domain of the browser.
... If you are communicating p2p then you make pervasive monitoring more difficult...

<wseltzer> [thanks for the discussion. I'll look forward to following up with you.]

<annevk_> https://www.google.com/moderator/#15/e=20fe09&t=20fe09.40

<slightlyoff> Was there a plan about food?

<JeniT> there was the possibility of ordering pizza from Campus I think

<slightlyoff> Ahhhhh...ok

Summary of Action Items

[NEW] ACTION: alex to draft a review of web crypto api [recorded in http://www.w3.org/2014/01/07-tagmem-minutes.html#action02]
[NEW] ACTION: alex to prepare an input into the STRINT workshop from the TAG [recorded in http://www.w3.org/2014/01/07-tagmem-minutes.html#action03]
[NEW] ACTION: dan to make edits to the tag home page and product page accordingly. [recorded in http://www.w3.org/2014/01/07-tagmem-minutes.html#action01]
 
[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.138 (CVS log)
$Date: 2014/01/15 15:27:06 $
