See also: IRC log
<wseltzer> Yesterday's whiteboard
<tanvi> who's in for dinner? maybe we should make reservations earlier today
<freddyb> +1
<deian> +1
<estark> +1
<bhill2> +1
<jww> I'm in for dinner, do I need to write anything else?
<jww> is there a Zakim command for dinner?
<dveditz> jww: sorry, doesn't count without a "+1"
<tanvi> dinner: francois, freddyb, deian, mkwst, estark, bhill2, tanvi, jww for dinner = 8 so far. looking on open table
<tanvi> I can get a reservation at La Fontaine in Mountain View at 6:30 pm
<tanvi> http://www.yelp.com/biz/la-fontaine-restaurant-mountain-view
<tanvi> sound good?
<jww> tanvi: works for me.
<tanvi> okay done
<tanvi> parking may be tough
<tanvi> but 630 is fairly early so hopefully we will beat the crowds
<estark> thanks tanvi!
<devd> Zakim: I am here!
<inserted> scribenick: bhill2
estark: a policy you can set on a
page or for an element to relax the restriction on referrer
being withheld e.g. on https->http navigations
... or tighten policy to say never send a referrer
... spec is ready to be wrapped up, don't feel there is any
major work to be done
... Domenic Denicola is working on the PRs to HTML to integrate
it there and replace hand-wavey bits about that
... that is the major outstanding work on the spec
... in chrome there is a fair bit of implementation work to do
to catch up with where the spec is
... biggest work is implementing the header
... moved out of CSP to new header
... also added a referrer policy attribute to the link
element
... link tag (was already on anchor element)
... this has been updated a bit to integrate better with fetch
and service workers
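A minimal sketch of the two delivery mechanisms estark describes, the standalone header (moved out of CSP) and the per-element attribute; header name and values follow the editor's draft of the time, and the URLs are illustrative:

```html
<!-- as a response header: Referrer-Policy: origin-when-cross-origin -->
<a href="https://example.com/" referrerpolicy="no-referrer">example link</a>
<!-- the attribute was newly added to link elements as well as anchors -->
<link rel="stylesheet" href="https://cdn.example.com/s.css" referrerpolicy="origin">
```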
dveditz: we proposed adding a couple of new states / policies
estark: hasn't been changed in
the spec yet, don't know how strongly you feel about adding
those
... happy to add them to the spec
... in terms of chrome implementation I can't say when we would
implement them because catching up on the header is a bit of a
project
... that is higher priority
<wseltzer> https://w3c.github.io/webappsec-referrer-policy/
francois: pretty strongly. spec
meets all of our needs except the three new proposed policy
states
... spec covers everything else, seems unlikely there will be a
V2 soon
... don't want to leave these behind
dveditz: we've had internal settings to do this in Firefox for some time, and every user of that has wanted to restrict not expand
estark: but that's a different audience, users not resource authors
dveditz: but those same people have their own sites; interesting to them even if not big commercial sites
francois: recently we were talking to women's shelters; they can't do a lot because they're not super technical, but they were leaking referrers
<JeffH> what is francois' affiliation?
francois: would like to be able to protect visitors to their sites, they use analytics internally and don't want to lose that with no referrer
(jeffh: Francois is with Mozilla)
estark: sounds like we should
probably add it, I personally won't have time to implement it
for a while, maybe we can find someone else to implement
it
... maybe that's OK
dev: do we have data on how many have adopted it
mkwst: a lot... 7.3% of page views
francois: that's skewed by prominent sites
hillbrad: facebook uses
origin-when-cross-origin
... to protect privacy and avoid redirects
... and we also use rel="noopener"
dveditz: is noopener being specced anywhere?
mkwst: yes, in the WHATWG HTML spec
<mkwst> https://html.spec.whatwg.org/#link-type-noopener
dveditz: would be nice if we could change so opener didn't give a ref unless explicitly wanted
<JeffH> https://html.spec.whatwg.org/#link-type-noreferrer
mkwst: probably very common with OAuth popups, return url navigates using that reference
<mkwst> https://github.com/w3c/webappsec-referrer-policy/pull/19
hillbrad: https://github.com/w3c/web-platform-tests/pull/2851
these are using the old CSP header, needs someone to adopt them and fix them up
(testsuite by @kristijanburnik)
mkwst: should we (is it enough) file bugs against the W3C HTML for these integration points that Dominic is doing?
wseltzer: W3C normative reference
policy is that they must be to things of equivalent stability
and openness
... goal is to have references in RECs be to W3C RECs or
similar in other groups
mkwst: defines some hooks in to
the navigation algorithm, maybe all in Fetch. there is some
integration into HTML to read the attributes as input to
fetch
... fetch calls back into referrer policy
... would be good if wseltzer can look at that and help us
figure that out
wseltzer: this is pretty well
entangled, more so than SRI was
... gold standard would be to get this in a W3C doc
mkwst: let's add this to our agenda
<JeffH> https://fetch.spec.whatwg.org/#referrer-policies
dev: do other browsers have a plan to implement the header version?
francois: we have someone working on the spec right now
mkwst: don't think FF has the CSP header
tanvi: we may already have it
mkwst: are we going to publish a new WD?
estark: yes, on my todo
list
... may publish a WD from before the HTML integration
started
... as it may make more sense in that state
wseltzer: are you set up with automatic publication?
estark: mike sent me instructions so I should be
mkwst: CSP3 doc at the moment is relatively stable, takes care of majority of things in CSP2 as well as a couple of new things
<JeffH> [ wrt referrer policy and HTML spec: https://html.spec.whatwg.org/#referrer-policy-attributes ]
<wseltzer> https://w3c.github.io/webappsec-csp/
mkwst: couple of small changes
I'd like to make, basically I split CSP into a couple documents
and want to pull some things back from the document
features
... subdocument back into the main document, sandbox attribute,
base uri, things already in CSP2
... not enough to justify a separate document without making it
more confusing to read
... at the top of CSP there is a list of changes
scribe: frame-src and
worker-src
... in CSP2 we have child-src
... in CSP3 we've split that out to frame-src and worker-src,
which defers back to child-src
... we need more granularity, the threat models may be similar,
but people simply want to control them differently
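The split mkwst describes can be sketched as a policy (hostnames are illustrative); per the discussion, a policy that sets only child-src keeps working because the new directives defer back to it:

```http
Content-Security-Policy: frame-src https://frames.example.com; worker-src https://workers.example.com; child-src https://example.com
```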
artur: why are they similar?
mkwst: they have ability to execute code on an origin
artur: to me from a security
perspective I would not normally need to restrict frame
src
... security effect from including an untrusted frame or
untrusted worker
mkwst: I don't entirely disagree
with you
... I think they end up with very similar capabilities
<JeffH> where in https://w3c.github.io/webappsec-csp/ is the CSP3 spec ?
hillbrad: does child-src cover pop-ups and auxiliary browsing contexts?
mkwst: not yet, my impression is there isn't strong support but people are interested
dveditz: there are lots of people who think they want to block navigation to stop exfiltration
hillbrad: would be nice to restrict to where e.g. ads can navigate to for e.g. "cloaking" scenarios to only accept known-good locations
dveditz: we would probably need UI to work on this
mkwst: not opposed, we just haven't done the work
dev: things like prerender, prefetch are also relevant
<tanvi> we haven't implemented referrer policy header yet but it is assigned and slated to happen soon
dev: don't know if a declarative
mechanism is the way to go long-term
... maybe just service worker
mkwst: malvertising is concerned
about navigations by an embedded context
... want to know the landing page for an ad is the one they
assessed
... initial plan there was embedded enforcement
hillbrad: let's talk about this in the afternoon when Deian is back, this is needed for COWL, and we've talked about doing it only for iframes which covers his case and ads without so many of the UI issues associated with top level browsing contexts
mkwst: there were some changes to
reporting that we talked briefly about
... we are starting to use the new reporting API, idea is we're
creating a single backend API that things like CSP can talk
to
... so CSP doesn't have to define this, can just refer to
it
... that spec is a bit in flux, issues like retries, retries
across network state changes
... good to have a central place to work out those issues
... reporting also changed a bit such that we now should be
able to give full url of page we are on as well of initial
resource request (before redirects)
... tried to load example.com?redirto=foobar
dveditz: do we tell them that it was a redirect?
mkwst: no, but you will know that because it is in your whitelist
<JeffH> mkwst has been relating items listed here: https://www.w3.org/TR/CSP3/#changes-from-level-2
estark: does this solve Zach's issue?
mkwst: no but I think what we
will give will solve his issue, allowing tracking down the
original link
... no need to strip information that is available in JS
already
dev: in an ideal world would be nice to know the origin of redirects at least
mkwst: I think we can have that
conversation if you file a bug we can see what is
possible
... should be able to give more useful reports
<wseltzer> bhill2: regarding reporting, is it worth looking at imperative API to tune client-side?
<wseltzer> ... reporting is really useful, key in CSP adoption, and also really painful
<wseltzer> ... tuning reports; eliminating extensions, spurious reports
<wseltzer> ... imperative API to say "don't send reports on extensions"? or continue server-side?
<wseltzer> mkwst: if we could do a better job detecting extensions, you wouldn't need to suppress
<wseltzer> ... but it becomes complicated, e.g. event handlers callbacks
<wseltzer> ... we do have a handler that lets you send your own report
<wseltzer> ... let's figure out how to make the current stuff do what you want
<wseltzer> bhill2: can we reduce the pain and suffering about implementing CSP, so everyone doesn't have to write a blog post
artur: unless you know the
problem we are trying to solve, unsafe-dynamic may seem
crazy
... premise is you should trust the domains from which you load
scripts
... when CSP started it was maybe easier to meet that
criteria
... but it breaks down whenever there are several kinds of
endpoints at origins you trust
... two classes which are well known in pentester
community
... JSONP (json objects wrapped in callback functions that are
attacker specified)
... www.google.com/jsapi?callback=foo
... second thing is "javascript gadgets" which take data from
an unsafe location and uses that to look up a function of the
window object and execute it
... can also bypass CSP, Angular JS is one of those
gadgets
... if any of your script-src hosts have angular, you can use
that to bypass CSP
... there are a few other weird things people don't think about
that let an attacker include as a script from an otherwise safe
domain that lead to arbitrary script execution
... very likely that if you include one of the most common
script sources they have one of these bypasses available
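A sketch of the JSONP bypass artur describes, assuming the page's policy whitelists the host (e.g. `script-src www.google.com`); the endpoint shape is taken from the example above, the injected payload is illustrative:

```html
<!-- the whitelisted host echoes the attacker-chosen callback name as
     executable script, so the whitelist is satisfied but the attacker
     runs their own code -->
<script src="https://www.google.com/jsapi?callback=alert(document.domain)"></script>
```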
dev: don't paths fix this?
artur: first of all you can
bypass path restrictions with an open redirect (Homakov's
bug)
... we actually adopted a policy like this but it is very
difficult to maintain in practice, especially for third party
sources
... few API owners will guarantee that paths don't change
... e.g. the recaptcha API may load subresources that change
locations
... now loads from www.google.com/recaptcha2/... and your
policy breaks
mkwst: will be a bigger problem once script modules are a thing
artur: in practice CSP probably
has a whitelist that lets XSS still happen
... but CSP already has an alternative - nonces and
hashes
... that may allow us to be much more granular about what is
allowed to load
... you can easily change templates to add this to script tags,
one thing left is dynamic element creation in JS
... unsafe-dynamic directive attempts to fix both of
these
... first it drops the whitelist, and then it adds trust
dynamically to scripts loaded by a trusted script
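A sketch of the unsafe-dynamic mechanism as just described, with an illustrative nonce value (the directive name and syntax are as proposed at this meeting and subject to change):

```html
<!-- header: Content-Security-Policy: script-src 'nonce-abc123' 'unsafe-dynamic' -->
<script nonce="abc123" src="/loader.js"></script>
<!-- scripts that loader.js adds via createElement('script') are trusted
     transitively; parser-inserted ones (document.write, innerHTML) are not,
     and the host whitelist is dropped -->
```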
tanvi: before a dynamically loaded script would have to be on the whitelist?
artur: yes, or it would need to explicitly get a nonce, but that requires deep changes to every JS API on the internet or some polyfill
tanvi: with unsafe-dynamic you
ignore the whitelist
... what about createElement
artur: this is allowed to happen; attacker could not do this because they don't control an entry point that could do this
mkwst: we look at whether it was
parser inserted
... we whitelist non-parser inserted scripts
dveditz: this is my least
favorite part of the proposal
... could we make it its own directive that says ignore the
script-src directive?
<wseltzer> bhill2: dynamic-nonce-enforcement?
<wseltzer> artur_janc: it's similar to unsafe-eval
<tanvi> document.createElement allowed, but document.write and innerHTML inserted scripts not allowed
artur: it is unsafe similar to eval, as an attacker if you control something in createElement, you get more control because you are not bound by the whitelist
hillbrad: this is good, calling it unsafe will stigmatize its use
mkwst: don't care about changing
the name, creating a new directive complicates the policy
parsing and enforcement mechanism
... would have to special case
... not sure it's any more convoluted than the current
system
... I don't like special cases
dev: you have to use a nonce this
way; a different directive you could specify a specific named
entrypoint
... would have to be a separate directive
... to specify it
... personally just a fan of a new header
... in our case we would just list require.js
artur: you could already do that
with a hash right now
... give a hash to the loader script
... many apps have a lot of both inline and referenced
scripts
... e.g. sensitive data that can't be externalized, like inline
JSON blocks
dev: you can still use a nonce, but just not a fan of forcing you to use that
hillbrad: but a hash?
dev: would still have to use
inline there
... number of hashes might go up
dveditz: you could just put in a little inline bootstrap script
artur: it is a bit ugly, there is some precedent with nonces/hashes, etc..
hillbrad: don't think it's "ugly" just principle of least surprise
dveditz: syntax ratholes can go
on forever
... seems like everyone likes it
tanvi: a separate directive would be easier to implement
mkwst: for chrome it was significantly easier to not do a separate directive
dveditz: if a page has two policies, how does this combine?
mkwst: has to pass both
dveditz: that's right we don't combine, we just evaluate serially
mkwst: we've already implemented a few things, but easy to fix
artur: one thing we want to avoid is necessity to sniff user agents
artur: we experimented with the
dynamic policies and it was easy because google is set up to
add these nonces in its framework
... two remaining blockers are javascript: uris and inline
event handlers
... didn't seem too crazy to allow hashes to also whitelist
event handlers (js uris are less popular)
... could remove a lot of pain for authors
jww: how would you deal with backwards compat?
artur: have to have unsafe-inline
mkwst: not backwards compatible
as currently specified
... currently hash in policy turns off unsafe-inline
... then you put another policy that enables the inline event
handlers
... if you don't support this new directive, but do support
hashes it will break
dveditz: who supports hash now?
chrome, firefox, safari tech preview
artur: but it is already broken
mkwst: we could have a new type of hash to enable it
dev: but then we couldn't have nonces either
mkwst: if you add a nonce today
you don't get handlers
... no way to do this backwards compatible with browsers that
support nonce today
dveditz: could we make to also work with a nonce?
mkwst: could make first line of the onclick handler be a comment with the nonce
artur: benefit of a hash is we can pass over your app and tell you the hashes, but if you can make changes we can already outline scripts
<inserted> scribenick: wseltzer
[short break]
mkwst: from my opinion, we can
head to CR soon
... others should look at the stability
... remaining question whether we can use hashes to
whitelist
... as we discussed with SRI, I'd like to figure it out
... if we're going to make a new directive for unsafe-dynamic,
...
... Please look at the spec, see if you're happy with it.
... Spend some quality time with CSP
... Good to see that W3C HTML folks are doing integration
<mkwst> https://github.com/w3c/html/pull/387 <--
mkwst: Travis landed a big commit
closing lots of issues yesterday
... Let's get CSP to CR soon.
JeffH: if you implement CSP2, do you replace with CSP3?
mkwst: we've tried to design as backward-compatible
JeffH: Do you say that explicitly in the spec?
[it appears not]
[rename the spec? john suggested removing "security" yesterday]
bhill2: per the KFC precedent, we can just rename it CSP
bhill2: It would be good to discuss with Safari folks in the room
jww: SRI v1 is done
devd: lots of users
... for v2, requests include cross-origin caching
bhill2: Talk to your AC reps to support the PR
<JeffH> https://www.w3.org/TR/SRI/
https://www.w3.org/2002/09/wbs/33280/SRI-PR/
^ the poll, open to AC reps
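For reference, the integrity metadata SRI v1 validates is just a hash-algorithm prefix plus a base64-encoded digest of the resource bytes; a minimal sketch of generating one with the Python standard library (function name and default algorithm are my own choices):

```python
import base64
import hashlib

def sri_value(body: bytes, alg: str = "sha384") -> str:
    """Compute an SRI integrity value for a resource's bytes,
    e.g. to paste into a script tag's integrity attribute."""
    digest = hashlib.new(alg, body).digest()
    return f"{alg}-{base64.b64encode(digest).decode('ascii')}"

print(sri_value(b"alert('Hello, world.');"))
```

The resulting string goes in the element's `integrity` attribute; the browser refuses the resource if the fetched bytes don't hash to the same value.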
francois: there is an issue on
the list. One of the notes is probably wrong
... can we remove it?
mkwst: notes are non-normative
francois: regarding origins for data URIs. We'll remove it.
jww: Everything else is V2
... require-sri-for
freddyb: pull request
... I have a draft implementation for FF
... github interested
... they had integrity for all scripts minus one, they were sad
not to have a way to enforce
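The enforcement GitHub asked for would look roughly like this under the pull request's proposal (directive name and syntax were still in flux at this point):

```http
Content-Security-Policy: require-sri-for script
```

With that set, a script element lacking a valid `integrity` attribute would fail to load, which is exactly the "all scripts minus one" gap described above.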
jww: overall, I'm a fan, but
someone pointed out it's tricky to use without reporting
... because resources will stop loading unexpectedly
francois: but there's CSP reporting
https://github.com/w3c/webappsec-subresource-integrity/pull/32
devd: my concern is that
historically, SRI was for CDNs
... so I'm not sure what require-integrity gets us
... corporate proxy
dveditz: what if it were require-integrity for these domains?
jww: we've had people request straight require
devd: I don't think we have full
sample of the people who need it
... uncertainty about how it integrates in CSP
jww: what this gets is something
similar to upgrade-insecure. you don't need it in a perfect
world
... helps you transition
... it's not solely about CDNs, lots of people have different
uses
... we envisioned other future uses, like caching
... sounds useful, but it's also the kind of complexity I'm wary
of adding without user demand
devd: we should spend more time
thinking about it
... a better story around SRI-CSP integration
... think about cross-origin caching
jww: we're talking about experimental
freddyb: implementing doesn't
mean it's locked in
... require-sri-for scripts, does that include same-origin
scripts?
<JeffH> is pull/32 related to issues/23 ?
freddyb: what is it defending against?
devd: some sites with separate hosts
francois: lots more complexity
bhill2: edge caching with
integrity protection could be useful
... say that things on our CDN are OK so long as they're
integrity-tagged
devd: a set of issues that are
CSP-SRI integration; consider them together
... are lots more people interested in this directive?
bhill2: might be more useful in report-only than enforce mode
freddyb: maybe put it out behind a flag, get feedback
jww: make sure it's forward-compatible
dveditz: next on list, add
integrity attribute for processing of inline scripts
... Crispin or Daniel mentioned yesterday
... Integrity for downloads
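The downloads idea under discussion has no specified syntax; a purely hypothetical sketch of what it might look like (attribute placement and the placeholder digest are my own):

```html
<!-- hypothetical: integrity checking on a download link -->
<a href="https://cdn.example.com/app.tar.gz" download
   integrity="sha384-(base64 digest of the expected bytes)">Get the app</a>
```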
mkwst: firefox handled encoding differently
bhill2: if you block it, the user
will just copy the link to the URL bar anyhow
... how do you make the UI meaningful? do they get to bypass,
what signal do they get?
... usability
francois: we might have existing UI for safe browsing
jww: integrity is a network error, currently
mkwst: the browser's gonna
browse
... a download isn't represented in the page
bhill2: do you have any way to get integrity feedback on your downloads?
dveditz: it's a navigation
mkwst: what happens is opaque to
the page
... there should be no impact of failure, but agree with brad
that's a problem
devd: this emphasizes how hard it is to download safely on the web
jww: also cross-origin issues
mkwst: it would have to require CORS as well
devd: if we add anchor integrity, we also need to add CORS
francois: or we don't do the reporting
jww: there's lots of challenge here
francois: to summarize:
outstanding problems 1) reporting,
... UI is a question for the browser, "your download is
corrupt"
bhill2: layer 8 problem, knowing your download links are broken
dveditz: are we proposing that for a elements, you have download, integrity; enforce, if it fails say it's corrupt?
jww: not sure re telling
user
... if many of these things are corrupt, imagine all the errors
to the user
devd: rather than warning, just
fail it. no user decision
... up to UA to handle the failure
dveditz: apart from the warning, do we agree?
jww: not yet
dveditz: what we do with failures, what we tell users, CORS issue
JeffH: what's the use case?
dveditz: you link to downloads hosted elsewhere, on CDNs
bhill2: CDNs for downloading open source packages
JeffH: use case to make statement about the bag of bits, say "these two things need to match when downloaded"
freddyb: regarding reporting; if
we have no reporting, it's easier, but there's no way for
website to know there's a problem
... and we don't know how users respond
devd: I'm ok with that, because
if you're a site using anchor tags for download, you already
have that problem
... so many sites have iframe to do the download, iframe
post-messages to say network error, retry
... that's a better way to do it already
dveditz: add integrity to iframes?
mkwst: I wanted that
... would be nice for ads, "I audited this"
jww: you can do that with fetch
already
... modulo CORS
freddyb: performance issues if you wait for integrity
devd: integrity for downloads;
next steps, consider reporting
... and if not, just network error if integrity fails.
jww: only if we can find a way to
make onerror work
... I want the site to be able to see that there was a network
error
[then we have to do the CORS checks]
jww: let's talk more
JeffH: as CORS is currently wired into SRI
freddyb: this is a navigation
mkwst: navigations go through
fetch
... need to change HTML to pass attributes through
tanvi: you're saying don't do onerror, Dev?
devd: yes, that's easier
... it fails as a network error
... but no reporting
... as a site owner relying on a CDN for downloads, I get a
call if the CDN is down; no call if the CDN gets hacked,
today
... I'd rather get a call in the second instance too
JeffH: as a user, I'd like to see "integrity check passed"
tanvi: you could put it in the console
mkwst: dangerous to add UI that makes some links look better, new incentive for malware authors to decorate their links
JeffH: trying to get to the use case
devd: it gives value to the web
app author: users are getting what I sent them
... it's not a user-facing feature
JeffH: if you've downloaded and integrity fails, you shouldn't save to disk
francois: for safebrowsing, we don't move to download until checked
dveditz: you could send the hash
to malware scanning
... if it's the right format hash
mkwst: if it fails, send the failure to safebrowsing too
francois: if you tell authors to use the hash safebrowsing supports, can speed up the safebrowsing test
[lunch, resume at 1]
<tanvi> second call for dinner
<tanvi> anyone who didn't rsvp in the morning who wants to come?
<inserted> scribenick: bhill2
jww: agreement this is a good thing, but there are complexities
dveditz: clarify?
jww: yes, thing is in cache, skip network
dev: CSP bypass /
cache-poisoning: attacker injects content with a hash claiming
one origin, but it's not really on that origin
... but browser includes it as-if-from-that-origin because
there's a matching cache entry
johnwilander: but if resource has
been loaded once from that origin?
... but any subsequent one you are allowed to take the hash
dev: but then the biggest advantages are lost
jww: there is an advantage, but destroys the bootstrapping effect
johnwilander: but it's "really" the first time
dev: but caching algorithm here is forever
dveditz: how many hashes do you keep?
dev: probably everyone uses sha256 and just keep those hashes
hillbrad: can we just add this to the response to an HTTP HEAD request?
dev: we could, but since attack
is only for CSP, maybe we can add a new directive that says "I
accept this risk"
... or just add hashes to CSP policy, then it doesn't matter,
you're allowed to use that file anyway
johnwilander: but my load once suggestion was to avoid the privacy issue of seeing if someone's ever been there
francois: also works in reverse, reference files that don't exist on your server and see if they load
jww: yes, it's a timing attack but its a "use once"
francois: not in the case that I
referenced
... you can return a value that doesn't go into the cache, so
you don't destroy the signal
... <integrity="hash-of-jquery" src="do404.cgi">
<JeffH> potential descriptive names: cross origin content-based cache retrieval -- content aware storage policy -- cache aware storage
francois: explains attack at whiteboard
dev: only do it for crossorigin anonymous
jww: but the pattern of what's in your cache may allow fingerprinting collectively
dev: it already works, this makes
it easier
... attackers can work hard, developers don't want to
francois: not clear this will give us a real boost, real fingerprinting consequence
dev: but there is lots of developer interest
jww: also would be neat to do
shared-library style automatic upgrade
... also problems with crazy headers Eduardo found, e.g. what
if you set cache expiration to something low like 0
... set refresh url to javascript:doEvil()
... need to do research on this, don't even know what it means
to ignore a header
dveditz: there may be headers like CSP that come with a frame (e.g. if we do this for iframes)
artur: also content-encoding, same response may have different properties
mkwst: would this also require a preimage?
jww: no, you give a real thing but set bad headers on it
hillbrad: is there any data?
mkwst: talked to Elie Bursztein about it, what would it look like with an infinite cache
jww: also about how often do people see something the first time, request something expired
johnwilander: WebKit also has
cache partitioning to create distinct caches from different
origin requests; a compile time flag that safari uses
... it's a privacy feature
artur: firefox extension back in the day had "safe cache" but it had big performance concerns
francois: tor browser has this
wseltzer: this is a w3c process /
work mode discussion
... as part of the process when something reaches REC, it and
the things it refers to are stable and have gone
... through open standards processes and have compatible
intellectual property status, and are compatible with the open
web
... we do a lot of work here with WHATWG and have good
relations with them
... but Fetch and their HTML aren't being developed by the same
processes used at W3C
... no stable snapshots to assure that links always point at
the same place
... so for SRI going to PR we had to assert that the group
would continue to monitor Fetch and make sure that references
pointed to the current state of that document
... one approach of the i18n group is to take snapshots of the
Encoding spec and making references to the snapshots
... that would be one way to do that
... and also would give the patent commitments from this
group
... I am trying hard to make this not just be extra work and
procedural hurdles but also about assuring that these
important
... components of the stack are things we are looking at and
comfortable about the process by which they've developed
... we need to address these because as further specs move
forward we will get these questions from the Director and
need
... good answers on the normative reference policy
dveditz: CSP is for example more dependent on Fetch
<wseltzer> Normative Reference Policy
mkwst: I think they both have
similar relationships, there are mutual hooks
... we do have normative references to concepts in Fetch to
make those algorithms work
... so I think the relationship there should work
... HTML is more difficult in a number of ways, involves
bidirectional requirements
... is very duplicative and doing work in one place doesn't
flow into another without effort
... not the same as encoding spec, not just a snapshot where
there is one editors draft
... seems reasonable to do with Fetch
... less likely that it would work for HTML
dveditz: it explicitly didn't; it forked
mkwst: what I have been doing is
sending patches to WHATWG HTML and then filing bugs against W3C
saying "we landed this over here, please make it make sense in
your version"
... but documents have diverged in various ways that make it
difficult for me to assert the same patch has the same
meaning
... I don't want to commit to doing that review twice because
it is hard enough to do it once already
francois: and it would be a lot of work to patch W3C HTML
mkwst: we could go the other way,
but not the one I've chosen
... WHATWG HTML is the doc that I read and am familiar with how
it works, closer working relationships
dveditz: and things happen
mkwst: I think that model could work for Fetch if Anne is happy with it and somebody volunteers
wseltzer: do we have support for doing that?
deian: I'd like to try, would volunteer to do that
dveditz: is there w3c version of service workers?
bhill2: how much work is involved in porting between formats
mkwst: I think anne would support porting upstream to bikeshed
jww: are there other specs we will have to do this for?
room: DOM, WebSockets, URL....
dev: has there been concern about SRI and its references for implementers?
deian: I think there is still some disconnect between Firefox implementation and Fetch, e.g. the CSP integration points as described don't work that way yet in Firefox
dev: I don't remember stability of fetch being an issue at implementation time for SRI
johnwilander: I don't know the status of Fetch work, but I can find out; if we have any opinion we'd like to share I can find that out
mkwst: fetch API is behind a runtime flag in STP, but it is more than just the API
<wseltzer> scribenick: dveditz
johnw: I'm referencing shellshock
here so you can see the timeframe we started thinking about
this
... 3 common headers that don't cause a pre-flight in CORS, but
which aren't restricted in what characters can go into them
(in practice)
... these three headers could be used then to trigger the
shellshock vulnerability, for example
... I talked to the people involved and browser vendors decided
to go with a lax character definition
... that goes beyond the RFC of allowed header
characters.
... I would like to see browsers restrict the characters allowed
in these three header fields specifically, per RFC 2616 section 14.1
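The three CORS-safelisted request headers that skip preflight are Accept, Accept-Language, and Content-Language; a sketch of the concern (host names and payload are illustrative of the Shellshock pattern, not a working exploit):

```http
GET /cgi-bin/status HTTP/1.1
Host: victim.example
Origin: https://attacker.example
Accept-Language: () { :; }; /usr/bin/id
```

Restricting these fields to the RFC's field-value grammar at the Fetch layer, as proposed here, would reject such payloads before the request is sent.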
<mkwst> https://tools.ietf.org/html/rfc7231#section-5.3.2
JeffH: 2616 is obsolete now
mkwst: but the new one is the same thing wrt this issue
john: how should we proceed
<JeffH> https://en.wikipedia.org/wiki/Shellshock_%28software_bug%29
mkwst: I think you're asking that we verify the fetch "safe" headers against a stricter set, not all http headers
john: yes
mkwst: fetch seems like a reasonable place to do that
john: second topic: associated domains
[on whiteboard: { apple.com, icloud.com, filemaker.com} {google.com, google.ru, youtube.com}
scribe: we are struggling to make
SSO work within these groups. Technically SOP says they are
separate origins, but they are related with a single account
that works across them
... Users expect things to work across these because in their
mind (branding) they are the same company
mkwst: there's already a plist
format for
... Google solves the same problem differently. the Play store
keeps a list of origins
... we could solve the binding problem. My concern is the
degree of laxness between them. what features do you want to
enable between them?
john: we could (assuming all sites opt in) allow cookies across them, or our various storage partitions could group them
JeffH: we have 9 use cases in the
problem statement for DBOUND in the ietf
... it's a hard problem to get alignment on solutions.
<wseltzer> https://datatracker.ietf.org/doc/draft-sullivan-dbound-problem-statement/?include_text=1
JeffH: cookies are just one
piece. there's also various UI indicators. The only real notion
today is the "effective TLD" (aka public suffix list)
... maintained by Mozilla. Doesn't track the reality on the
web. there's a timelag
dveditz: the list doesn't bind
things together
... it's more to keep sites MORE separated than they might
otherwise be (e.g. no blogger.com cookies)
JeffH: it's not just the web,
there are other usecases (mail, tls... )
... the current scheme (eTLD) doesn't allow one to express
those relationships. There are 4 proposals in the DBOUND
working group
... IMHO two are [less workable]. the other two are one from
andrew sullivan and myself, and the other by John Levine,
who's also been around a long time
<wseltzer> https://datatracker.ietf.org/wg/dbound/documents/
JeffH: both approaches put
signalling in the DNS
... This is important and needs to get done, but we've been
struggling along without it for so long so people are working
on the "more important" daily urgencies
... It will take a bunch of work to get it done in terms of
spec and convincing people
... then the question is who will bother to deploy this. Andrew
and I are convinced this signalling needs to be in the DNS
mkwst: does this require DNSSEC?
JeffH: well, if you want to trust it. I'm not sure I'd put that into the spec because we've been able to muddle along without it
rbarnes: that's because we have TLS, and if you need real security you use that. if you're making security decisions based on it then you will need DNSSEC
JeffH: andrew is confident ICANN
will be interested in deploying this, mandating it for
top-level domains
... A huge chunk of what the public suffix list denotes is the
distinction between these domains and the public
... would places like blogger or appspot pick this up to make
the distinctions between delegated authorities?
mkwst: I'd like to see the problem statements and who would use this. There's much already baked into Chrome making distinctions between origins
deian: I think some of the disjointness and combination you are describing could be denoted by COWL labels
<JeffH> dbound problem statement https://datatracker.ietf.org/doc/draft-sullivan-dbound-problem-statement/
<JeffH> https://datatracker.ietf.org/wg/dbound/documents/
deian: we currently don't allow delegating privileges [not understanding]
john: that's a new thing. SOP is in place but COWL will let you do additional things?
deian: yes
jww: sounds like your idea is to
change the security boundary from "origins" to
"organizations".
... couldn't you do much of what you want with postMessage?
<JeffH> https://datatracker.ietf.org/wg/dbound/documents/
john: privacy is one of our
interests. there are 200 something orgs involved in loading
cnn.com
... if google.com is talking to google.ru it's the same
organization, if it's google and apple then they are
distinct
jww: it's hard, but you can do this today with postMessage
john: things like cache partitioning can't be done with postMessage!
bhill2: I've been thinking about
this for over a decade and I'm concerned about the implications
and economic incentives here
... In the beginning I worried that wanting additional privileges
(like SSO) would require letting big sites like google/facebook
set cookies or other things on your domain. market pressure on
small sites to comply
john: no, this is "does facebook own this domain" are they the same
bhill2: we need to make sure this
doesn't create financial incentives to collapse the privilege
separation on the web. we might end up with very large sets
that undo the security boundaries we are trying to set up
... We need to keep in mind the unintended consequences
_devd_: why isn't icloud.com
on icloud.apple.com ? we already have a mechanism for
sharing at the domain level
... why _do_ sites create .ru/.de versions?
mkwst: lawyers
jww: amazon/audible is an example of joining, but there are organizations that divorce, too. Ebay/paypal for example
JeffH: all that is supported in the current drafts
<Zakim> wseltzer, you wanted to comment on layer-crossing
wseltzer: I've been watching
DBOUND as it struggles to get cycles/traction
... Worth spending a little more time if this is
meaningful.
john: we could set up our own
format "you need to set up a manifest, maybe even signed, and
we'll allow you to do more things"
... and that might become a Safari only thing
mkwst: it would be really helpful if you documented what you want to get out of this. I understand the grouping together, but not what features you are trying to get
<JeffH> https://tools.ietf.org/html/draft-sullivan-domain-origin-assert-02
john: originally we were making
more distinctions, not less
... we don't want to kill Google SSO for example
JeffH: there's also more than one SOP
<JeffH> https://tools.ietf.org/html/draft-levine-dbound-dns
john: 3rd party cookie blocking
is a classic example here -- these aren't 3rd parties to each
other
... but to browsers they are
deian: the motivation for parts of COWL are wanting to split maps and search on the same domain, for example
mkwst: depends on which features are being enabled
john: we are not opposed to google.com and youtube.com working together flawlessly
bhill2: what about facebook connect and some random site?
john: those are true 3rd party relationships
bhill2: I'm as concerned about
the things you want to break as well as the things you are
unbreaking
... what are the things that will break that you know about
francois: we have a concept like that in Firefox where for tracking protection we don't block google-analytics on google.com or youtube.com, but on other domains we do block it as a tracker
jww: we don't need to tie these things together to ...
francois: brad's point was excellent. we have a built in list, but if it's a one-way mapping on sites themselves people could opt-in to all kinds of things
bhill2: if you block things that people want they will find workarounds. sites LIKE analytics, it pays the bills, tells them what's working
john: there's friction here -- browsers treat these sites as separate and they aren't organizationally or in user's minds
JeffH: this is not what the user wants, this sharing is what "google" wants or "facebook" wants
jww: the user may want the "fast
login" enabled by this, without knowing what's underneath
it
... there might be an incentive to group things (by sites) to
make things easier or more performant, but makes things less
secure for the user
john: we already see sites that
tell users to disable 3rd party cookie blocking because two
formerly separate sites joined, but now the user has disabled
3rd party blocking for the entire web
... this is a real case
JeffH: andrew and i and john
levine have been talking about this for a long time as a basic
enabler at the DNS level. how it gets used is another matter
and up to the implementations at levels above
... policy is hung on domain names currently, with a crude
understanding of what a domain name maps to. setting up this
basic concept could enable a bunch of things to be built on
top
... Establish capabilities at the most basic realm and see what
people can do with it. whether or not the "web" invents an
equivalent mechanism as you're talking about here, I think it
still needs to exist at a lower level too
deian: this seems interesting to me but I'm still not sure how it would be used. some cases may be solved in COWL and some might not
john: look at network traffic after a google login to see what they have to go through to make this work
_devd_: that's the opposite use case -- google WANTS it to be separate, on accounts.google.com, to protect that data from other related domains
john: we are not opposed to SSO on related Google domains, but when we turn the screws on 3rd party cookies they have to go through all sorts of gymnastics -- postMessage and iframes and such -- to make it work
you can only be owned by one domain -- so a tracker couldn't claim to be owned by everyone.
scribe: we envision a tree
structure with apple.com at the top
... I don't think we would go the "relaxing" route; maybe we
would
JeffH: I've heard from google folks that there's magic in chrome to make some of this work for them
wseltzer: sounds like the next steps have to happen at a lower level, the ietf group for example
JeffH: it doesn't have to, the web could have an equivalent
wseltzer: if it's established at a lower level we can then think about how we could safely use it at the web layer
JeffH: I put links in the minutes, comments appreciated
john: but it sounds like this is 3 ? years out? I heard DNSSEC mentioned....
JeffH: I don't think there should
be guidance necessarily that there must be DNSSEC to have a
SOPA record
... Ultimately you won't have a flag day to roll this out, but
you could build this list by walking the DNS and build it that
way
... instead of a manual interaction
john: another way to do this is to crowdsource it
JeffH: not sure I understand
john: if you're creating a new
restriction, like the Disconnect list -- 'ok this broke, so we
crowdsource this exception'
... could we define a mechanism so the sites could do it
officially rather than having to crowdsource it
bhill2: we're at time. I think next steps would be to define what privileges you want to enable. heard that request from several people
next topic: Credential management and web authentication
<wseltzer> Mike West, Wendy Seltzer
<wseltzer> mkwst: Credential Management spec
<wseltzer> ... Chrome is going ahead with current, imperative version
mkwst: there's a credential
management spec for a couple years. Chrome is going ahead with
the imperative spec, with good feedback from MattN and
others.
... all the same properties apply. the API is more or less the
same as last year
... fairly generic, high level
... store, get, require user intermediation
... shipping in Chrome 51(?), the one right after Google
I/O
... a number of external companies interested, kayak, the
guardian, clipboard... folks are experimenting
... seem unhappy it doesn't give them access to everything, and
requires a secure context
... and that it requires async fetch()
... they're excited about the idea of doing it async--hadn't
thought about it before--but short term pain due to some of the
restrictions
... seems successful for the folks trying it out
... hope the UI will be acceptable to users, will be
monitoring it carefully
... maybe in 6 weeks or so we can give you an idea how it's
going
... We'll talk about it a little at I/O and show examples of it
layered on top of existing sites (use fetch, then navigate to
the page where it would have posted the login)
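The "use fetch, then navigate" layering described above might be sketched as follows. The `/login` endpoint, field names, and `/account` destination are hypothetical, and `navigator.credentials` (the Credential Management API) only exists in supporting browsers, so `signIn()` is browser-only.

```javascript
// Encode a credential as a classic form POST body, so the server-side
// login handler doesn't have to change.
function buildLoginBody(cred) {
  const params = new URLSearchParams();
  params.set('username', cred.id);
  params.set('password', cred.password);
  return params.toString();
}

// Browser-only: fetch the login endpoint, then navigate to where the
// form would have posted. Names here are illustrative, not from the spec.
async function signIn() {
  const cred = await navigator.credentials.get({ password: true });
  if (!cred) return; // user picked nothing; fall back to the login form
  const resp = await fetch('/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: buildLoginBody(cred),
  });
  if (resp.ok) window.location.assign('/account');
}
```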
bhill2: does this work cross-origin?
mkwst: hm, no. within an eTLD+1
we'll allow a user to choose a stored password, but not to
automatically submit it until a user has chosen it once
... we still limit this to the top-level page, not
frames,
... we're trying to be restrictive at first and maybe we can
loosen things later
... the paypal in-frame case is totally legit but we want to
move slowly
... it's difficult to explain to users what's going on -- why
is something asking for my paypal credentials on store.com? is
it legit?
tanvi: you said if the user didn't have a login for that domain you'll prompt for it?
mkwst: no, related domains. if
you had a login stored for foo.example.com and then
bar.example.com wants a credential we'll show the user they
have a foo.example.com login and let them choose it
... not allowing example.com to say "hey, I accept the
google.com login!"
MattN: one of my concerns is the
multi-login account, when you have a twitter and a facebook
account and a site accepts either
... if you've used one you're only shown one, but it's unclear
that if you cancel you could then log in with the other
account
... I don't want to "cancel", I'm trying to log in. not
intuitive
mkwst: we're still working on the
UI here, talked about offering an "other login" option instead
of "cancel"
... the site is responsible for the UI. if it thinks you're in
the log in state it needs to distinguish between "I don't want
to log in" vs "I don't want one of these"
john: are you doing something about legacy HTTP auth in this group? Especially log-out is an unsolved problem
mkwst: I agree with richard, the less I think about http auth the happier I am
rbarnes: we just added telemetry and it looks like 1 in 4 times users use HTTP auth, it's over an insecure connection
john: I would like better, but still native, support for web authentication
wseltzer: you ought to join our web auth WG then :-)
john: cookies for auth is terrible and we've been trying to save it with HttpOnly and secure and other kinds of things.
bhill2: is there any API (now or future) to ask "am I eTLD with foo.com"
mkwst: no such proposal, but could be done
<tanvi> telemetry on type="password" security - http://mzl.la/1XxVG9H
<bhill2> mkwst: could add to URL spec
bhill2: currently the way to do that is for a site to pull in its own copy of the public suffix list, and there's no guarantee it's the same list used by the browser
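A toy sketch of that workaround, with a deliberately tiny stand-in for the public suffix list (a real PSL has thousands of entries plus wildcard and exception rules, so this is only an illustration of the lookup):

```javascript
// Toy subset of the public suffix list; a real copy would be fetched
// from publicsuffix.org and parsed, wildcards and exceptions included.
const TOY_PSL = new Set(['com', 'org', 'co.uk', 'appspot.com']);

// Return the registrable domain (eTLD+1) for a hostname, or null.
function registrableDomain(host) {
  const labels = host.toLowerCase().split('.');
  for (let i = 1; i < labels.length; i++) {
    const suffix = labels.slice(i).join('.');
    if (TOY_PSL.has(suffix)) return labels.slice(i - 1).join('.');
  }
  return null;
}

// The "am I eTLD+1 with foo.com" question bhill2 asks about.
function sameRegistrableDomain(a, b) {
  const ra = registrableDomain(a);
  return ra !== null && ra === registrableDomain(b);
}
```

Note how `appspot.com` being on the list keeps `a.appspot.com` and `b.appspot.com` apart, which is the "keep sites MORE separated" role dveditz described earlier.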
JeffH: using this API does not bring up browser-implemented UI?
mkwst: it does!
... the page calls get() in a way that will not show UI (log in
if log in available), or it can call it in a way that will show
UI to the user using native UX
<tanvi> 45% of login pages are over HTTP.
tanvi: pages with logins, or percent of times users submit logins?
<tanvi> pages with logins
tanvi: I ignore http logins, so I see every page on a site with a login box. If I logged in then those boxes wouldn't be there, lowering the count for an arguably worse situation
MattN: one of the other issues is
the initial registration. that's still being typed into the
form and reduces some of the security benefit
... since it's not going into browser UI
<Zakim> MattN, you wanted to discuss the initial login capture
MattN: Part of the point of this API is to have security properties. If we're not helping the initial web registration we're missing a benefit
mkwst: there was the "write-only" proposal that would help, too, but there didn't seem to be much interest in implementation
rbarnes: write-only?
mkwst: a field that couldn't be
read by the page, only submitted by forms. there were more
subtleties involved
... not sure what would happen with service workers (maybe
punt? bypass?)
MattN: I still find it unusual
how this async login works when sites could just submit a form
like they usually do
... maybe requiring write-only would be a way to add this to
the existing flow/model
... for sites who don't want to use the fetch/navigate
approach
bhill2: I was in a conversation
with accessibility folks about password fields saying "star
star star star".
... do we need to be concerned about that with this feature as
well?
... how often do people synthesize their own "password-looking"
field to work around this or other issues?
tanvi: you're proposing that for password cred. we fill into the existing forms rather than separate UI?
MattN: I think that would be better, yes.
<JeffH> https://w3c.github.io/webauthn/
mkwst: do we need to integrate credential management with Web AuthN or are we going our separate ways?
JeffH: here's the "unofficial"
draft (not FPWD yet, but what we're working on)
... impedance mismatches I found over lunch, and we both have a
credential object but the semantics are different
... I don't know the full backstory why we pulled out the
abstraction we did that was based on an earlier rev of
credential management
... need to resolve within the Web AuthN WG
... is the Web AuthN interface semantically the same as the
credential management interface? I'm not sure
... We used DOMString in various places that have gone to
BufferSource and ...
mkwst: you have a different
credential interface that's named the same as the credential
management interface -- we can't do that
... we can merge, or we can rename one of them
... one reason CM is so abstract is so it could take in things
other than just passwords
JeffH: we need to up-level this between the two groups. there's no technical reason they couldn't be more tightly bound
rbarnes: either they need to converge/align, or they need to really diverge
JeffH: agreed
mkwst: at least not using the same names for different things
rbarnes: to what extent does CM add value to asymmetric things like Web AuthN?
mkwst: having CM give you a representation of the credential and have future things hang off it. I'm not sure what's easiest for developers. my first instinct was that having one thing for "auth stuff" would be best, but maybe a different API for passwords vs. auth would be better
JeffH: doesn't seem like the impedance mismatch is that high -- they are similar.
mkwst: we should talk
JeffH: is there value in the
convergence?
... basic state is we're going to FPWD real soon now, call for
consensus is out. maybe later June
... polishing this summer
mkwst: Microsoft is already doing this?
JeffH: they have an
implementation in Edge. something that is demoable
... have you looked at Web Auth N, john?
john: no, we've talked to Mike
about CM. this isn't in webkit this is in Safari land,
interacting with keychain and so on
... I've been talking to the folks mike talked to. If this is
now an API it will move password management into webkit from
"safari"
... please send/re-send me emails
tanvi: are you interested in doing this?
john: I would say yes, but traditionally this is part of the safari team and as a webkit representative I can't say so definitively without talking to them
tanvi: is there website demand for this?
mkwst: I'm talking to a few folks who have implemented this on their sites. I have the impression sites like it when users log in, and making it more frictionless is liked better
JeffH: the notion of federated is a shared secret?
mkwst: no, currently we only save
which login the user used last time but we don't log in for
them
... would love to do something like that. need to come up with
some kind of trusted browser UI to be used for this
devd: is there a mode I can find out "if the user had a credential that could be used now"
mkwst: no, we don't share whether
the user has credentials or not
... lots of people want to know that (avoid prompting if there
isn't a login already) but I don't know a good (safe) way to
share that info
devd: do you know how many users store how many passwords?
MattN: this api is giving sites a new way to interfere with password management by storing bad information. is there a way to block sites that behave badly?
mkwst: we mitigate this by involving the user
MattN: but the user doesn't see the password, it could be junk and not the real password
jww: we have evidence they do this from autocomplete=off
bhill2: ok, next topic
<rbarnes> fwiw: Firefox password manager fills a password ~15% of the time
<bhill2> https://github.com/w3c/webappsec-cowl/wiki/May-17-F2F-meeting
<rbarnes> (and no, i'm not sure what the denominator is)
deian: joel and I talked about
what parts of COWL we could implement that would be most
useful
... COWL supports "labels" for information that can segregate
it
... pages can delegate privileges to share information
... once you read sensitive data you are tainted with that
label. read data from "b.com" and you can't leak it back to
"a.com"
... Reduces the authority that the page would have
... this isn't to deal with malicious code, but to reduce
leakage or hacking of well-intentioned origins
... Our last feedback was not using origins for our labels
<MattN> rbarnes: not sure where you got that data from but Fx has saved logins for 27% of login forms visited
deian: Now we're using origins,
unique origins, and @@
... The other main feedback is to make the serialized labels
look like [origins?]
... The new model requires an iframe to have taint, so
top-level pages can always be navigated
... when you create a combined iframe you lose access to some
APIs
... that's a simpler deployment scenario and more backwards
compatible if we change it later
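A toy model of the label-flow idea deian describes. Real COWL labels are conjunctions of disjunctions of origins and come with privileges, so this deliberately flattens a label to a simple set of origins just to show the taint/flow check:

```javascript
// Toy COWL-style confidentiality label: a set of origins. Data may flow
// to a context only if the destination's label covers the data's label.
class Label {
  constructor(origins = []) { this.origins = new Set(origins); }

  // True if data with this label may flow to a context labeled `other`.
  canFlowTo(other) {
    for (const o of this.origins) if (!other.origins.has(o)) return false;
    return true;
  }

  // Reading data taints the reader: join the two labels.
  and(other) { return new Label([...this.origins, ...other.origins]); }
}
```

In this model, a page that reads data labeled `https://b.com` becomes tainted and can no longer send to an unlabeled (public) context, matching the "read from b.com, can't leak back to a.com" description above.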
bhill2: gives us more future flexibility, we can add new features and say they only apply in certain circumstances
deian: if you do turn on confinement in a page mid-way through it's hard to remove access in the DOM in chrome, easier in Firefox
mkwst: it's easier to reason about if it's more static
deian: I'll keep that in mind as
we refine this
... briefly looked at the service worker spec. if we think of
it as a proxy we can make this work.
... there are other things brought up -- leaks by resizing
iframes. hard to address
... inclined to punt this to a future version since this is a
covert channel and there are others
... if we require it to be static that simplifies the
implementation
... Nicholas is about half way through rewriting the Firefox
patch to the new approach
... for Chrome if we can integrate COWL and suborigins we
might be able to get some of the features in pretty
easily
... With COWL google.com isn't isolated from
google.com/maps
jww: I think it's fair to say
sub-origins is a subset of COWL except for the asymmetry
... there are "bite sized" chunks of implementation we could
do, and some of them are similar and we could make them
match
deian: we required integrity and confidentiality, but many times the integrity label was empty. I propose that we set a sensible default and not make people specify it all the time
wseltzer: if we have a new editor we need to get him official as an invited expert
jww: sub-origins is a way of
defining a namespace within the scheme-host-port origin
tuple
... we have a draft spec that covers most of the issues that
needed to be covered. clearly some areas still need clean
up
<JeffH> https://w3c.github.io/webappsec-suborigins/
jww: Chrome has a spec-complete
implementation behind a flag (with a couple of holes --
cookies)
... cookies don't follow the SOP anyway and add
complexity
... there are times when you would want to share cookies. but
since one of the usecases is protection from less trusted
sections of the same domain we _don't_ want to share
cookies.
... there's no great short-term solution (can provide a doc of
the various attempts). Spec presents the option we think makes
the most sense
... cookies are attached to the same host/port and sent over HTTP,
but document.cookie would be empty
<bhill2> dveditz: is it a same origin XHR?
jww: you could simulate cookies using local storage
<bhill2> jww: no, a suborigin request is not same origin with the same scheme/host/port
jww: up to the server to not send cookies to a suborigin it doesn't want that suborigin to know
tanvi: couldn't you just set the cookie locally for that suborigin?
devd: but what network request to you attach it to?
tanvi: key the cookies off the whole suborigin
devd: what do you _send_ with the request? you don't know the suborigin until you get the response back
jww: we think for most cases this is the best option
tanvi: could do a pre-flight and then send only the right cookies
devd: we could, we'll look at feedback on this
jww: we have to trust the server, the server is what sends us the suborigin header
dveditz: what about the origin specified in postMessage?
jww: spec has a serialized format for that
devd: we also have
unsafe-postMessage
... that sends to the whole origin
artur_janc: we tested this on a crazy set of extensions with safe and unsafe cookies
jww: there are a large number of apps where the cookie is totally untrusted and it's fine to see all the cookies. server can specify unsafe cookies and then document.cookie will be populated
artur_janc: still respecting httpOnly, of course. with this approach we found most applications continued to work, whereas blanking document.cookie broke things
jww: there's symmetry with sub-origins
devd: could put a wordpress "admin" area in a sub-origin so a comment XSS can't reach into it and do bad things
john: I agree it would be nice to segregate cookies, but if you spent all day discussing it there are probably some gems in there we can learn from
jww: I'll take an action item to share our working document
artur_janc: there's the unsafe mode we talked about, and for safe mode we could treat it as a separate origin
john: If I'm example.com with a hardcoded resource
<inserted> scribenick: bhill2
johnwilander: if I'm example.com and have a hardcoded subresource to example.com and move my main page to a suborigin, what happens
<dveditz> ... main page is in a sub-origin. what happens?
dev: fetch will be cross-origin
johnwilander: in safari that will be 3rd party and cookie won't be sent
dev: yes, shouldn't be treated that way
johnwilander: yes but if it's a new origin...
jww: as I said, thanks to cookies a suborigin can't be the platonic ideal of a new origin
johnwilander: for our case would be equivalent of opening in a clean-slate private browsing tab
dev: but you want it to work with some cookie sharing
dveditz: can we differentiate between domain cookies and host cookies?
dev: for dropbox we don't set a domain
dveditz: you'd have to for suborigins to see it
jww: setting of cookies is also concerning, don't want suborigin to be modifying cookies of another origin which breaks the symmetry you'd like a bit
artur: right now subdomains can
mess with your cookies, suborigins are supposed to mirror that
someway with less effort
... we can debate how much more we want out of suborigins than
subdomains
wseltzer: why are we not trying to fix the brokenness of cookies?
artur: it is difficult to get
infinite subdomains; caching, dns latency, apps stop working on
a subdomain because urls break, links break
... in practice a very breaking change to move to a
subdomain
... having a more lightweight mechanism that lets you do a
server side change with a header and options is much lower bar
for engineering effort
wseltzer: so trying to port some things over and avoid fixing the cookies mess
dev: but I want it to be as much
like a completely different origin as possible
... don't want to recreate the subdomain mess around
cookies
dev: this is being fixed by the host cookie prefix
jeffh: you can't just change how
cookies work without breaking the web, I chaired the WG that
wrote the RFC
... we documented behavior, got noncompliance fixed and closed
it
... but going to reopen with Mike's work
jww: so there are efforts to fix,
but we're also trying to deal with current world in a backwards
compatible way here
... sandbox was too pure and harmed the ability to use it
johnwilander: would not be surprised if some of us end up betting on password/credential autofill and just wiping cookies every now and then
tanvi: you'll still need to deal with this in v2
dev: browser just treats all requests as cross-origin
tanvi: doesn't seem like a fixable problem
jww: goal was a set of behaviors
which we believe will help a lot of applications in a huge
number of cases
... wouldn't be surprised if there were other strategies needed
for other applications
johnwilander: have you considered adding an attribute to cookies, so only cookies with the suborigin attribute get shared?
jww: would be excited about that at some point, not sure about v1
dev: impacts adopting, needs more server changes, makes sense to build it up over time
artur: we've spent hours/days on
this, and I am 100% comfortable using this on google.com with
the unsafe-cookie mode
... I think we get substantial improvement even with cookies
being shared with suborigins
... good to have ways to make it more strict for some apps, but
in practice apps that use HttpOnly auth cookies will be
much safer with suborigins
dev: being able to write to a
cookie from a compromised suborigin allows only session
fixation, nothing else
... vs. e.g. getting access to csrf token if document.cookie
was readable
artur: current proposal addresses both modes
johnwilander: but Google would be fine with the cookie flag because you could set that and it would work
tanvi: Google doesn't care because auth cookie is http only
deian: why not use different cookie jars?
dev: if you have multiple cookies with same name which do you send?
artur: makes it very unclear to
developers what cookie is sent where
... difficult to reason about what will be sent, which will be
readable in document.cookie, may be mismatches from what you
see in document.cookie and what actually is sent to server
jww: from an implementer
perspective would be extremely difficult to do that in
chrome
... networking stack would need to be involved and taught about
suborigins, vs. just as a web platform layer feature
johnwilander: we already have such naming conflicts with subdomains
dev: and they do confuse people
jeffh: yeah, it's gnarly
... as long as changes are backwards compatible we can make
necessary modifications
jww: I don't think choices made about cookies are bad, they are the choices that were made
jeffh: there has been some thought on reworking idea of state management
<tanvi> we can call it crackers
jeffh: through perhaps lack of cycles, etc.
bhill2: there was an "HTTP State Management" WG in the IETF ~3-4 years ago, their work was ultimately irrelevant because no browser vendors showed up and cared
scribe: but you could go look at that and implement if you think ideas were good
bhill2: IETF effort I'm thinking of was HTTP Authentication not State Management
jeffh: and there were some experimental RFCs and it's in the record now and demonstrated that there was little browser interest in having a notion of authentication wedded to the HTTP protocol
jww: somebody brought up a use
case we can't support today
... edge CDN, superfast but less well protected as a physical
site
... if not all UAs support that, you don't want to load from
that CDN if integrity is not present
... you can do it programmatically, but has lots of
problems
... it would be cool to have a way to do it declaratively
... original spec had noncanonical-src which we didn't do
... freddy is not a fan
rbarnes: can you elaborate what this would look like
<jww> bhill2: original motivation was proxies messing with resources
johnwilander: did you consider
adding the hash to the URL
... if you want it to fail when not supported?
dev: no, developer wants it to work everywhere, but only use the sketchy CDN if you know about integrity
jww: issues with a programmatic polyfill outweigh the benefits
bhill2: it could just be a
canary, users with browsers that do it can report to protect
users that don't
... but you could just change based on user agent
dev: you can get preload scanner
benefit by doing a link rel preload at the head
... or just do UA detection
jww: person with this use case thinks user agent sniffing may work
freddyb: friends who are pentesters like it when more src-like attributes break more scanners
dev: also preload scanners won't catch it
francois: also, how people do
this in practice, we have two different files supposed to be
the same, they will get out of sync
... may not notice their fallback doesn't really work
johnwilander: UCBrowser is getting really big, is it webkit / blink based?
<JeffH> https://en.wikipedia.org/wiki/UC_Browser
deian: scariest thing is that versions may diverge
bhill2: page weight of hashes is a concern given the number of JS files in Facebook
dev: we share that concern, maybe
addressing signature support is a better first step
... so for module management, and dynamic includes, may need
the full list
johnwilander: who would issue certificates
rbarnes: it's your own page, just
do a public key, no certificate needed
... we did something similar for content going into about:
content
... we don't have any way to declare which public key, we just
require that everything that arrives is signed with a
particular public key
... defined with martin thompson's signature header:
https://tools.ietf.org/html/draft-thomson-http-content-signature
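[Roughly what a signed response along the lines of that draft looks like; the key id and signature value are placeholders, not exact draft syntax:]

```
HTTP/1.1 200 OK
Content-Type: application/javascript
Content-Signature: keyid="site-key-1"; p256ecdsa=BASE64_SIGNATURE_VALUE

/* body: the script itself; the UA verifies the body against the
   signature before executing, pinned to a known public key */
```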
dev: this is way easier and cleaner
jww: is there merkle tree dovetailing here?
johnwilander: is there replay
protection?
... if there is a vulnerable version with a valid signature
francois: you change the key?
dev: for sites with so much javascript there is a pipeline that makes this trivial
jww: content hashes will still be the easiest for most sites
rbarnes: this means the CDN can substitute any signed file (including one with a vulnerability)
bhill2: so like Artur's earlier
statement about issues with CSP and whitelisting for large sets
of JS
... you could put metadata covered by the signature that is
checked by a module loading system like require.js
... should this be coordinated with the new module system for
JS on the web?
dev: at some point all this packaging work becomes irrelevant
... so signatures are more valuable
bhill2: to prevent file substitution, should modules have standardized metadata that interacts with integrity/signature checks?
dev: attractiveness is not having to reinvent this stuff
francois: but its a very different use case
dev: Merkle?
rbarnes: the merkle proposal is just straight hash, no signatures
dev: merkle lets you stream it in verified chunks
jww: is it better for anything we do today?
dev: for downloads
johnwilander: my immediate thought was: is this going to mirror what we consider active content?
<JeffH> https://www.ietf.org/proceedings/95/slides/slides-95-httpbis-0.pdf -- Thomson's preso from IETF-95 on MICE
bhill2: do scripts start to parse before integrity is verified?
<freddyb> thx, JeffH
<JeffH> https://martinthomson.github.io/http-mice/
bhill2: would streaming help webasm?
dev: is there browser interest in mice?
freddyb: too early to tell
francois: not useful for script/style, only if we do other content types
dev: so for download
... dropbox can't do it without signatures, too
jww: so we should keep that on the table
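[A simplified Python sketch of the chunked-verification idea behind MICE: hashes are chained from the last record backward so one top-level digest commits to every chunk, and the receiver can release each chunk as soon as it verifies. The real mi-sha256 encoding in the draft differs in framing and record size:]

```python
import hashlib

CHUNK = 4  # tiny record size for the sketch; real MICE records are far larger

def encode(data: bytes):
    """Split data into records and chain hashes from the end.
    Returns (top digest, stream of (record, next_proof)) pairs."""
    records = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)] or [b""]
    proof = hashlib.sha256(records[-1] + b"\x00").digest()
    out = [(records[-1], None)]
    for rec in reversed(records[:-1]):
        out.append((rec, proof))
        proof = hashlib.sha256(rec + proof + b"\x01").digest()
    out.reverse()
    return proof, out

def verify_stream(top, stream):
    """Consume the stream record by record; each record checks out
    against the proof carried by the previous one."""
    expected = top
    for rec, nxt in stream:
        if nxt is None:  # final record
            if hashlib.sha256(rec + b"\x00").digest() != expected:
                return False
        else:
            if hashlib.sha256(rec + nxt + b"\x01").digest() != expected:
                return False
            expected = nxt
    return True
```

This is why streaming matters for downloads: a plain SRI hash can only be checked once the whole body has arrived, while chained proofs let the UA verify (and hand off) each chunk as it streams in.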
bhill2: signature sounds more like a csp directive than sri as it currently looks
dev: could be that; would make policy much simpler
johnwilander: also would make your page not need to change
deian: how often would you need to revoke your keys?
dev: just do it every day?
... have multiple keys to cover a window of validity
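[A sketch of the rotation scheme being described: publish a new signing key each day and accept signatures from any key still inside a validity window, so cached pages keep working. The key ids and window length are hypothetical:]

```python
import time

VALIDITY_WINDOW = 2 * 24 * 3600  # accept keys issued within the last two days

class KeyRing:
    """Tracks signing keys by issue time; verification accepts any key
    that is known and still inside the validity window."""
    def __init__(self):
        self.keys = {}  # key_id -> issued_at timestamp

    def add(self, key_id, issued_at):
        self.keys[key_id] = issued_at

    def is_acceptable(self, key_id, now=None):
        now = time.time() if now is None else now
        issued = self.keys.get(key_id)
        return issued is not None and now - issued <= VALIDITY_WINDOW
```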
<wseltzer> [adjourned]
Present: Dan_Veditz, Brad_Hill, Joel_Weinberger, Emily_Stark, Francois_Marier, Frederik_Braun, Tanvi_Vyas, Artur_Janc, Jeff_Hodges, John_Wilander, Deian_Stefan, Richard_Barnes, MattN, Mike_West, Wendy_Seltzer
Scribes: bhill2, wseltzer, dveditz
Agenda: https://docs.google.com/document/d/1KQ_TWHBc1QBn4Xf2yJ7AYDQumuJioaGDfxbzwIJjxOI/edit#
Date: 17 May 2016
Minutes: http://www.w3.org/2016/05/17-webappsec-minutes.html