WebAppSec TPAC 2016 F2F Day 1

22 Sep 2016


See also: IRC log


Present: dveditz, francois, Jungkee_Song, bhill2, tarawhalen, teddink, jochen__, JeffH, jcj_moz
Chairs: bhill2, dveditz
Scribes: bhill2, dveditz


<bhill2> Remote participation available at: https://talky.io/webappsec

<bhill2> Please ask here to have comments voiced to the room

<mkwst> bhill: Introductions!

<mkwst> ... We can use talky.io for some remote folks. Will set that up when we get there.

<mkwst> ... [Introduces self]

<mkwst> [Introductions]

<mkwst> [awkward walking around the table to get the mic back]

<mkwst> ... Agenda at https://docs.google.com/document/d/1yAsZiacMJ55JUPWC6kZAfBzzW1HNc0H7noPtEzcdxqI/edit?usp=sharing

<mkwst> ... [Walking through agenda]

<mkwst> ... Any additional agenda items?

<mkwst> [crickets]

<bhill2> scribenick: bhill2

mkwst: have various specs close to Rec
... CSP2 is one, been CR for a long time, trying to move to PR
... how is the PR request going?

Moving "finished" specs forward.

<mkwst> mkwst: CSP2 exists. Transition request from CR to PR sent.

<mkwst> ... Some concerns from director about test suite.

<mkwst> ... Brad's done heroic work to get the test suite into better shape.

<mkwst> ... Some TODOs causing concern.

<wseltzer> [Once the tests are complete, I believe Director will be ready to approve]

<wseltzer> [and yes, thanks Brad for pushing the tests so far]

teddink: we have some tests that we have pushed, or will soon push, to WPT

mkwst: don't think there is any value in editorial nits at this point; getting to REC is about patent protection
... let's focus on making sure that CSP3 is right, not on going back in time to 2012
... Mixed Content just went back to CR
... think we can move to PR

bhill2: Agree, we can go ahead and move to PR.
... solid implementations of everything in it, block-all-mixed-content was last thing and now implemented in Firefox

mkwst: upgrade insecure requests has been CR for ~1yr
... nothing left in this doc, moving to PR seems reasonable
... next is Secure Contexts, just went to CR
... chrome is missing some pieces

<wseltzer> [when drafting Transition Requests, please make reference to the Chair's declaration of consensus, not just the Call for Consensus. Thanks, from the team]

mkwst: good to work on a shared test suite to demonstrate interoperability
... I think we are a little bit far away from a PR at this point in terms of interop
... move to PR hopefully early next year

bhill2: Are there features AT RISK?

mkwst: a few, added late in the process
... mozilla wanted some sandbox behavior
... changed localhost behavior: the domain name 'localhost' is not a secure context, due to DNS issues that popped up
... there are some ongoing discussions on that, maybe browsers should bypass resolution for this name
... even though that is not what system resolvers would do

jungkees: we use localhost a lot for service workers to do testing

mkwst: we expect localhost to resolve to loopback, but it actually goes out to the network in various cases because of the system-level resolver
... given that ambiguity, the spec doesn't consider the name 'localhost' a secure context, even though loopback addresses are
... and browsers should provide a mechanism to declare that an origin is secure for testing
... further discussion here: https://github.com/w3c/webappsec-secure-contexts/issues/43

<JeffH> ... Allow "localhost" as long as it actually resolves to a localhost address #43

dakami: I think that localhost should bypass the resolver; it surprises fewer people

dveditz: doesn't seem totally terrible

mkwst: from my perspective this is about how we do DNS resolution, not an issue for secure contexts
... totally happy with secure contexts spec saying this is a secure context if that is true, but if there is ambiguity
... that is strange, surprising and bad and am less happy doing that
... don't know who the right people are to talk to about changing the underlying assumptions
... this is surprising for developers and would be good to change
... as written I think the spec is technically correct
... would be nice to make it more developer friendly, but it is accurate as-is
... can drop that if we figure out how to make it work during CR, hence marked at-risk
... concern is that going to a coffee shop with crazy DNS would expose a highly-privileged configuration to something that is not actually localhost
... CR period is a time for implementation, so hope browsers will implement or tell us what is bad to implement
... my goal is PR first half of next year

jungkees: a PR is open against HTML to disown the opener when a secure context is created from an insecure one

mkwst: CSP2, MIX ready for PR, Upgrade Insecure needs a test suite, and Secure Contexts should be ready some time next year
... any comments or feedback on that?

<mkwst> rbarnes: Priming!

<mkwst> ... https://mikewest.github.io/hsts-priming/

<mkwst> ... Implementation is in final review, should be landing soon.

<mkwst> ... Behind two prefs in Firefox:

HSTS Priming

<mkwst> ... 1. When the browser encounters something it would block as mixed content, it sends a priming request to check for HSTS header.

<mkwst> ... Caches the result, positive or negative, for next time.

<mkwst> ... Won't keep priming against the resource.

<mkwst> ... 2. Flips MIX and HSTS.

<mkwst> ... HSTS takes effect before checking whether a resource is mixed content.

<mkwst> ... This allows HSTS to upgrade the request, ensuring that it won't be treated as mixed content.

<mkwst> ... Plan right now is to release the priming request bits first. Should have zero compat impact.

<mkwst> ... The MIX/HSTS flip will ride out to Beta, won't ship until more experience and implementation from other browsers.

<mkwst> ... to avoid compat risk.

<rbarnes> https://bugzilla.mozilla.org/show_bug.cgi?id=1246540
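The priming flow described above can be sketched as pure decision logic (a hypothetical helper, not Firefox's actual implementation):

```javascript
// Hypothetical sketch of the HSTS-priming decision described above; not
// Firefox's real code. Given a blockable mixed-content request and a cache
// of earlier priming results for the host, decide what to do.
function decideMixedRequest(primingCache, host) {
  const cached = primingCache.get(host);
  // 1. No cached result yet: send a priming request to check for an HSTS
  //    header, and cache the answer (positive or negative) for next time.
  if (cached === undefined) return "send-priming-head";
  // 2. HSTS is consulted *before* the mixed-content check, so a known-HSTS
  //    host is upgraded to https:// and never treated as mixed content.
  return cached ? "upgrade-to-https" : "block-as-mixed-content";
}

const cache = new Map([["hsts.example", true], ["plain.example", false]]);
decideMixedRequest(cache, "unknown.example"); // "send-priming-head"
decideMixedRequest(cache, "hsts.example");    // "upgrade-to-https"
decideMixedRequest(cache, "plain.example");   // "block-as-mixed-content"
```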

mkwst: there was some issue about revealing information to a divergent host
... have now decided that I am not concerned about that, http/https should be considered the same, forbes aside

rbarnes: we have a bunch of telemetry so will be able to give data about how much difference this makes to the web
... so from a chair's POV we should be accelerating this spec

bhill2: are there errata necessary to HSTS re: switching ordering of MIX / HSTS checks?

mkwst: that's in Fetch, not in HSTS?

jeffh: not sure
... would write up changes the way cookie changes have been done and send to websec@ietf.org mailing list
... if we think a 6797bis is appropriate

mkwst: I don't think it is explicit enough that changes are needed
... if you look at fetch

<mkwst> https://fetch.spec.whatwg.org/#main-fetch

rbarnes: for priming, we are doing a head request to the resource
... spec says something different

mkwst: we were trying to avoid revealing information

bhill2: also some sites may not send HSTS header for all resources, only well-known entry points

mkwst: we could add telemetry, flip between actual resource and /

<rbarnes> looks like the patch sends the request to the resource itself, vs. the origin. so slight mismatch with the spec

adjourned until 10:30



Clear Site Data

mkwst: gives a website ability to remove all data it created on a user's machine
... e.g. logout, want to make sure everything is deleted, G+ wants this so locally stored images get wiped
... have to iterate through localstorage, cookies, cache today
... would like a more complete mechanism
... chrome has an incomplete implementation behind a flag that only works for navigational, not subresource requests
... enthusiastic about the future
... gives us all the infrastructure to do a "forget this site" feature similar to what Firefox has
... syntax is fairly terrible; header sent in response
... JSON object with list of types
... considering turning the types into bare keywords
... open question what kind of syntax we want
... flip the flag and play around in Chrome 53
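For illustration, the response-header syntax under discussion looks roughly like this (type names are examples; see the draft for the exact list):

```http
Clear-Site-Data: { "types": [ "cache", "cookies", "storage" ] }
```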

where is the conversation with the HTTP WG re: JSON in HTTP headers?

mkwst: they are reluctant to do so, deficiencies with JSON, inconsistencies in parsers for things like floats
... my perspective is that its a good thing for developers to create a readable header with some structure
... but there are challenges to using it in practice
... http wg considering a few things
... 1) reinvent the wheel with a new format

2) go with a "web safe" subset of JSON

mkwst: don't use floats; use ASCII, not Unicode strings
... I'm torn. A number of headers we'd like to define in forward compatible ways
... don't want to wait for this, will take some time
... want to make something compatible which will be regarded as not bad in a few years
... for this spec we can avoid the problem by simplifying
... but unquoted strings would require a new syntax if we go with JSON in the future
... but quoted, comma-separated strings are already a JSON list
... some folks seem to disagree, would love feedback on general issue
... referrer policy is without quotes, question whether to add quotes in spec
... referrer policy doesn't have a tree, just a value; need to define what happens when multiple values present, e.g. comma-coalescing multiple headers
... brian smith wanted more power in referrer policy header in the future
... we need more structure to get more power
... most today do like CSP, keyword: value; keyword2: value2
... but we can do better and not have unique snowflake parsers for each header
... being forward compatible with JSON is reasonable, but downside is if we don't do that, we have unneeded quotes
... don't see that as terrible, and would like to advocate for quotes
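To make the trade-off concrete (the header name here is hypothetical):

```http
# Bare words need a bespoke parser and are hard to extend:
Example-List: value-one, value-two
# Quoted, comma-separated strings are already a valid JSON array body:
Example-List: "value-one", "value-two"
```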

jeffh: trying to be forward compatible makes good sense

mkwst: do you or richard have opinions: where is issue going?

rbarnes: <shaking head>

<rbarnes> i do not have a good read on the HTTP WG with regard to this issue

mkwst: some open questions, specifically around cookies w/different origin model
... chrome's current impl is fairly draconian
... when origin asks for cookies to be cleared, clears eTLD+1 not just origin
... would be unclear what state an application would be in if we only deleted some of its cookies

dveditz: so would clearing cookies for maps.google.com also clear for foo.google.com?

mkwst: yes, would clear everything for google.com and ancestors

timbl: for github.io?

mkwst: different there because github.io is a public suffix

mkwst: though draconian, was at least a reasonable thing to deal with, incomplete clearing not so clear what an application should do

dveditz: kind of conflicts with the leave-secure-cookies-alone spec: that tries to preserve secure cookies, while this blows away all cookies

mkwst: clear site data is locked to secure contexts only

timbl: is it a long term goal of the group that github.io should not need to exist vs. protecting on a directory basis

mkwst: not really

bhill2: we have suborigins

mkwst: not the same, because suborigins are resource specific, and cookies
... Public Suffix List is one way to deal with this, but going to have problems soon
... afraid.org wanted to add 85k entries to the PSL recently
... all are similar to github.io in semantics
... but there is no way to support that scale the way it is implemented

timbl: does public suffix have an API?

dakami: should live in DNS

rbarnes, jeffh: working on it in IETF

mkwst: cookies are broken and we are stuck with it

<rbarnes> "working" might be too strong; the DBOUND WG was chartered, but i don't think it made any progress

people are "talking" about it and recognize a problem exists. :)

<rbarnes> https://tools.ietf.org/wg/dbound/

<rbarnes> ^^^ appears to be closed

facebook likes the subdomain-clearing properties of cookies; will there be an includeSubdomains for other things like localStorage?

mkwst: some concern about one origin controlling another's storage
... google working around by sending requests to those subdomains and they reply with statement to do it themselves
... this can be expensive
... clearing cache is expensive b/c no origin-based index

jungkees: did you consider service worker registration map as site data here?
... is removed when user clears all site data from the browser menus

mkwst: plan is we will discard service workers too
... spec needs fleshing out, but some specs like WebSQL don't have the necessary things to refer to

dveditz: does this clear service worker's cache?

mkwst: want to clear cache API as well as disk cache, but currently chrome's implementation only clears disk cache
... chrome presents to users "cookies and site data" which is everything, so being very granular doesn't align well with that
... but would be open to feedback from developers

dveditz: on mozilla's UI side we have "forget about this site" but developer could consider service workers differently

Credential Management

mkwst: shipped in Chrome 51 and getting feedback from developers
... hasn't been much change to this doc since last F2F
... a few new issues from Mozilla and others not yet addressed, trying to get password manager team to take this over at Google
... incremental improvements to the API
... but pretty unhappy with how API ended up, very high-level and generic in nature
... because we thought WebAuthn group would use it

jeffh: we tried

mkwst: I think we have an opportunity w/only one impl and only 30 websites using it at scale to take another look
... decide if there are uses for high level API or if we can shrink down to tight focus on password and federated auth
... we could make it a lot simpler if we just did something like navigator.passwordManager.get (not that but something like that)
... given that more generic use cases have gone a different direction or not materialized
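A rough sketch of the password/federated portion of the API as shipped (dictionary members and the fetch() integration reflect the Chrome 51 era design and may not match later drafts):

```javascript
// Hedged sketch of the Credential Management API, roughly as shipped in
// Chrome 51; member names may differ from later drafts. Browser-only code.
async function signIn() {
  // Ask the browser's credential store for something usable on this origin.
  const cred = await navigator.credentials.get({
    password: true,                                   // allow PasswordCredential
    federated: { providers: ["https://idp.example"] } // hypothetical provider
  });
  if (cred && cred.type === "password") {
    // That era's design let a PasswordCredential ride along with fetch();
    // "/login" is a hypothetical endpoint.
    await fetch("/login", { method: "POST", credentials: cred });
  }
  return cred;
}
```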

rbarnes: did federation go away?

mkwst: no, but current API is very generic with lots of indirection for extensibility, but could refocus for the only two cases we have today

timbl: does this give access to certificates?

mkwst: no it doesn't, provides only origin-locked credentials

<mkwst> https://developers.google.com/identity/casestudies/aliexpress-smartlock-casestudy.pdf

teddink: you said there were a small number of sites using it?

mkwst: I think like 30, I've worked directly with about 7, worked with a couple before I/O when we launched
... case studies are more for Android to this point

<mkwst> https://developers.google.com/identity/smartlock-passwords/case-studies

bhill2: apple was interested in app-level authentication features for the web platform

<JeffH> bhill2: johnW had noted that a desired feature is a "login, and be logged-in forever"

mkwst: platforms have this, passwords are terrible, but password managers make passwords much less terrible
... if people like it, it can go to CR, but if people want to propose something smaller, happy to work on that
... we have shipped in Chrome and there are deployments, so we would need a substantially better API to justify changes
... but not too late
... to suggest those kinds of changes
... Matthew at Mozilla suggested some syntactic sugar to just dump a form into the API

jochen: can we fix the dealbreakers with WebAuthn WG use cases?

jeffh: I didn't drive that, kinda happened organically, probably worthwhile to re-check, folks who drove that decision aren't here

CORS-RFC1918

mkwst: little to say beyond last F2F
... idea is to protect local services from potentially malicious access by the Web
... spotify, google, etc.
... those local servers are very often doing a terrible job in protecting themselves from the web
... see this especially recently in AntiVirus software packages
... problematic for things like routers that never update and are vulnerable often to CSRF
... proposal is to segregate local network and localhost from the web by creating a small onion
... (((localhost)RFC1918)internet)
... require a preflight to go deeper, accept origin and external request explicitly

<scribe> ... new header Access-Control-Allow-External

mkwst: implementation is problematic if public DNS names end up resolving to localhost or RFC1918
... because of layering issues with network and name resolution stack, hard problem I haven't solved yet
... no implementation outside blink
... sounded like Crispin and other MSFT folks were *very* enthusiastic about this idea, but not aware that experimentation has started yet
... would very much like to see something like this happen to solve a real class of problems, but it turns out to be hard
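A sketch of the preflight exchange: `Access-Control-Allow-External` is the header named above; the request-side header shown here is an assumption based on the draft.

```http
# A public page tries to fetch from an RFC1918 device:
OPTIONS /status HTTP/1.1
Host: 192.168.0.1
Origin: https://public.example
Access-Control-Request-External: true

# The local device must opt in explicitly:
HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://public.example
Access-Control-Allow-External: true
```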

dakami: enterprises ended up blocking previous work in this area, because they deliberately link from "public" content into their intranets

mkwst: this will break things period
... if you are not able to update, you shouldn't be accessible from the internet

dakami: this is how we ended up with IE6 stuck in so many environments for years and years

mkwst: localhost seems straightforward enough, also want to protect routers by looking at gateway
... would need some admin configuration, PAC files
... worry about that later, solve the complex cases first, IPv6 makes this even harder

<mkwst> https://www.chromestatus.com/metrics/feature/timeline/popularity/530

mkwst: according to Chrome's data, we see 1.3% of pageviews including something from an RFC1918 address which is significantly more than I want it to be
... fairly certain that a large portion of this is AV injecting references to localhost

teddink: doesn't dropbox?

mkwst: and spotify, google, etc..

wendy: can w3c help in information gathering?

(link shows % of pageview in blink that load RFC1918 content from a page on a non-RFC1918 address)

wendy: maybe information distribution about potential breakage

mkwst: for better or worse users will get caught in the middle

drogers: will be a problem with routers, internet may stop working

mkwst: synology does things that map public DNS to local hosts

Is Plex's famous local-media-server HTTPS implementation also broken by this?

No idea how it is implemented, but I wonder if any of e.g. Comcast's automatic authorization of access to paid content when you are coming from inside their network relies on behavior like this

mkwst: will have to be behind a switch, test and find breakage, try to fix in advance
... would like to coordinate with browsers so answer isn't just switch to a browser that doesn't do this

dveditz: Opera / Presto used to block like this, no longer does, but do they have implementation experience to share?

<wseltzer> [/me hears W3C can help with publicity, outreach on the need to update]

jochen: could also, for a while, disable the check for certain domains

dakami: is this like WebUSB?
... means there is momentum in the marketplace for this style of thing

<JeffH> .... the "property" that generates the graph mkwst points at is "MixedContentPrivateHostnameInPublicHostname"

mkwst: only a counter, no URL information here

<mkwst> bhill2: UI Security. Dan, you want to talk about this?

<mkwst> ... [hooking things up to other things]

UI Security

<mkwst> dakami: Hi.

<mkwst> ... This is the first standards body I've ever been part of. Yay.

<mkwst> ... Content on the interet doesn't know if it's visible to users.

<mkwst> ... This is a security property, SOP means you shouldn't know things about other origins.

<mkwst> ... That said, it's actually quite useful to understand when a framed page is loaded.

<mkwst> ... Advertisers, for instance, wish to understand when their content is viewable.

<mkwst> ... They generally use bad mechanisms to do so.

<mkwst> ... IntersectionObserver is a better solution. Shipping in Chrome.

<mkwst> ... That said, it doesn't solve the whole problem.

<mkwst> ... If I want to buy an LED, I go to one origin, then navigate to PayPal.

<mkwst> ... This is crazy. I don't want to go to a store, then go somewhere else to buy it.

<mkwst> ... PayPal embeds itself in eBay because they have a special relationship.

<mkwst> ... But if you do this in untrusted contexts, it's possible to change the displayed UI.

<mkwst> ... PayPal's embed could go from $1000 to $1 with an overlaid image, for instance.

<mkwst> ... There are tons of ways to manipulate pixels. Lots of new CSS features can be dangerous.

<mkwst> ... So, folks send users elsewhere, or pop up windows in order to ensure that they're in control of the rendered pixels.

<mkwst> ... I want to solve clickjacking.

<mkwst> ... Initial approach was to dive into the GPU, and move the frame's layer up to the top of the stack, to ensure that the pixels can't be manipulated.

<mkwst> ... Been working on an implementation for the last ~year.

<mkwst> ... Back and forth with browser folks.

<mkwst> ... IntersectionObserver is an interesting model.

<mkwst> ... But doesn't correctly report things in edge cases that can cause problems.


<mkwst> ... The active approach is destructive, and potentially bad for perf.

<mkwst> ... Perhaps a passive approach is more compatible with browsers and sites.

<mkwst> ... Status right now: Working with paint folks on the Chrome team.

<mkwst> ... There's clearly interest and excitement in folding this into IntersectionObserver v2.

<mkwst> ... It's not obvious that a passive approach can work, but the browser-side interest makes it appealing.

<mkwst> ... Some question about cases in which developers legitimately overlay transparent things for user interaction.

<mkwst> bhill2: Plan is to consider as part of IntersectionObserver v2.

<mkwst> ... Initially only the IO API. Create observer, receive events, deal with those events.

<mkwst> ... Need to maintain the stack of visibility events, check whether you were visible enough when a click is received.

<mkwst> ... Once we've proven that, we can progress to something more declarative.

<mkwst> ... "If the click happens when these conditions aren't met, do something."

<mkwst> ... Facebook applies not-so-reliable heuristics today, it would be great to get something from the browser.
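The observer-plus-click-check pattern described here might look like this (a minimal sketch against the shipped v1 API; threshold and timing values are invented):

```javascript
// Minimal sketch of the pattern described above: track when an element became
// (nearly) fully visible, and only honor clicks after it has stayed visible
// for a while. Threshold and delay values are invented for illustration.
const visibleSince = new Map(); // element -> timestamp it became visible

function makeObserver() { // browser-only: requires IntersectionObserver
  return new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.intersectionRatio >= 0.9) visibleSince.set(entry.target, entry.time);
      else visibleSince.delete(entry.target);
    }
  }, { threshold: [0.9] });
}

// On click, only act if the element has been visible long enough.
function clickLooksGenuine(element, now, minVisibleMs = 500) {
  const since = visibleSince.get(element);
  return since !== undefined && now - since >= minVisibleMs;
}
```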

<mkwst> timbl: A declarative version sounds like a good idea.

<mkwst> ... If you're in a state where the click isn't going to work, do we show that to the user?

<mkwst> ... Bad to confuse users by swallowing their clicks without letting them know.

<mkwst> dakami: It's fuzzy.

<mkwst> ... Amazon has different rules for one-click than Facebook for Like.

<mkwst> ... It's not clear that we can choose one rule that works for everyone.

<mkwst> ... The imperative approach allows those sites to define their own rules.

<mkwst> ... Given the variation, that seems like a good approach to start with.

<mkwst> ... [Demo of https://autoclave.run/ ]

<mkwst> ... Looking for use-cases, interested folks, help building a passive version.

<mkwst> jochen: Will this come with some kind of library? Sounds hard to get this right.

<mkwst> ... For instance, if I reveal the frame just before I think a click is going to happen, that seems hard to deal with.

<mkwst> dakami: The declarative model could work well for this. "Until the frame has been visible for X milliseconds, and Y, and Z, don't accept the click."

<mkwst> ... Looking for use-cases to flesh out what those declarations might be.

<mkwst> ... The IntersectionObserver-style API pushes that work down to the developer.

<mkwst> ... v1 doesn't give enough data, v2 might.

<mkwst> bhill2: In practice, this will be a complicated API to get right.

<mkwst> ... You'll have to deal with a stream of events.

<mkwst> ... I imagine libraries will spring up, and we can learn from them what kinds of things developers need.

<mkwst> ... Use those as a springboard to inform a declarative API in the future.

<mkwst> ... Browser vendors prefer an incremental approach.

<mkwst> dakami: Right now, the hill to climb is moving from an active approach to a passive approach.

<mkwst> bhill2: I think we want to leave UI considerations up to the browser vendors. We don't want to grey things out while scrolling, for instance.

<mkwst> ... Those decisions are delegated to the user-interface developers for each site.

<mkwst> ... They can make informed decisions about what user experience they want to provide.

<mkwst> dakami: We can polyfill declarative approaches.

<mkwst> bill: You and bhill2 have been working a lot on this. Pushing fewer things to the browsers.

<mkwst> dakami: Inability to see visibility information means that a lot of things are being loaded in the background that aren't needed for the actual page.

<mkwst> bhill2: IntersectionObserver came up because of bad performance implications of visibility checks that advertisers perform.

<mkwst> ... Polling, Flash, etc.

<mkwst> jochen: Those visibility scripts were generally injected into first-party contexts.

<mkwst> ... Which makes things even worse.

<mkwst> ... IntersectionObserver allows the network to prove visibility without injecting into the first-party context.

<mkwst> teddink: Also tiny Flash objects.

<mkwst> dakami: 'isTopRect' is literally one of the calls in Flash.

<mkwst> ... Flash throttles if it knows it's not on screen.

<mkwst> ... So check if it's throttling, kill battery, etc.

<mkwst> bhill2: Sounds like we might deprecate this spec to NOTE, and then do most of the work in IntersectionObserver v2 in WICG.

<mkwst> ... Don't know if we need to make it a joint deliverable. Perhaps can graduate IntersectionObserver into this group once it's baked in WICG.

<mkwst> ... Will add that to the rechartering discussion tomorrow.


<mkwst> [om nom nom]

agenda reminder: https://docs.google.com/document/d/1yAsZiacMJ55JUPWC6kZAfBzzW1HNc0H7noPtEzcdxqI/edit#

block of time for CSP is next at 13:00

about to begin again

remote participation reminder: https://talky.io/webappsec

Finalizing 'strict-dynamic' syntax

<mkwst> aaj: I work on Google's security team, and do two things relevant to this group:

artur: I work at Google, on the panel rewarding vulnerability reports

<mkwst> ... 1. I work on Google's VRP team, so I reward folks who attack Google.

artur: we reward more for impact and exploitability
... when we get it wrong, they complain, and so we have lots of knowledge of web vuln space and what google encounters, as well as our acquisitions with lots of different tech stacks
... also work on building better frameworks, scanning, etc.
... but will still have bugs
... so we want to have protection for our users even when mistakes are made
... we do "boring" things like moving apps to subdomains
... a more promising thing is looking at web platform features like suborigins, CSP, isolation proposals
... using things that web apps can't build themselves but have to rely on browsers for can help us a lot
... over 60% of rewards and 60% of high-risk bugs have been XSS over the last several years; over $1M paid
... we care about XSS a lot and CSP is the only standard that can help us with XSS right now
... strict-dynamic: drops the whitelist in script-src, propagates trust from scripts that already are allowed to execute, e.g. by a nonce or hash
... not a big feature but hugely changes how we deploy CSP
... some adoption stories from Google products over past several years
... deployed to 15 apps, about half of which are user-facing
... photos.google.com required maybe a 10 line change because the template system already had auto-noncing
... cloud console app that lets people push code to their cloud instances, XSS there could allow code execution
... had a plan a few years ago to add CSP, came up with 4 different policies depending on the part of the application
... talked with my team in Zurich, already shipping in production, worked quite easily
... a few other products already using whitelist based CSP were switched, developers are happy to not have to worry about the whitelist
... but still get XSS protection
... we can keep the whitelist and have a separate policy with strict-dynamic and the nonces
... 300M monthly users; no significant blockers so far
... did a study of over 100 XSS bugs in Google properties, about 3/4 would be mitigated by this kind of policy
... HTML injections, JavaScript URIs..
... 1/4 of bugs would not be mitigated, but even big projects like automated scanners can only identify ~30% of bugs due to diversity of environments and bug classes
... so this has gone really well and hoping to get more folks to implement it tonight and ship it tomorrow

mkwst: shipping in chrome since 52, two stable releases

francois: this is doing 2 things, propagating nonces and dropping whitelist, thought about splitting that?

artur: we could do that with two keywords, but almost any application would have to set both
... but those policies would be unsafe because whitelists are bypassable
... dropping whitelist but turning off trust propagation seemed like an edge case because there may be dynamic script loading in the future
... also simpler to have one thing than two, syntax-wise

mkwst: conversation linked: wanted ability to whitelist a loader file, and then have propagation
... some issues with that; isn't backwards compatible
... if browser doesn't understand new syntax, things blow up
... model so far has been to have policy mean something in an old browser, and new keywords can turn off some parts of policy and replace it with a stricter effect
... such as nonces and hashes turning off unsafe-inline
... I think there are good arguments that it is backwards compatible and shown to be deployable as-is today
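For instance, the kind of single backwards-compatible policy this design allows (the nonce value is illustrative):

```http
Content-Security-Policy: script-src 'nonce-r4nd0m' 'strict-dynamic' https: 'unsafe-inline'
```

A browser that understands 'strict-dynamic' ignores the https: whitelist and 'unsafe-inline', enforcing the nonce with trust propagation; an older browser ignores the source expressions it doesn't understand and falls back to the whitelist.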

artur: I worry about people using nonce propagation without dropping the whitelist
... people would not add the whitelist drop directive if not needed for things to work, and thus the documented bypasses would still exist

(when whitelists are too broad)

francois: the backwards compatibility aspect is super useful
... but someone who doesn't want dynamic propagation but would want to remove the whitelist seems useful
... still have whitelist for CSP1 browsers that don't support nonces/hashes

teddink: in a few months IE will be last significant browser...

mkwst: you can get that by a policy that is whitelist + nonce, and then a policy with nonce only
... depends on how implementation works
... if implementation throws away a policy with no directives it understands or if it interprets that as "nothing passes"

artur: trying to avoid the need for user-agent sniffing
... security wise the benefit of the proposed change is APIs that take tainted data and dynamically load scripts
... we have found very few vulnerabilities of that sort, nothing in the last year of 100+ real XSS bugs at Google
... so I think cases where the uses of this would produce a concrete benefit are rare enough to not warrant splitting it out

christoph: was worried that we were making CSP too complex, but if it is in a backwards compatible way, maybe better than versioning the header. I see the benefit, but where to go from here?
... do we find a year from now that we need something more?

mkwst: I hear from Francois it might be worth exploring a new keyword that gives whitelist dropping functionality without dynamic nonce propagation
... if it seems valuable, you can write it up

christoph: that google is already deploying it to good effect convinces me

artur: if we had found vulnerabilities related to trust propagation, but that's not in the data set we have

christoph: is there a reduction in the bugs filed?

artur: we pay full amounts even if only one browser exploitable, so things that CSP mitigates still result in a bug payout
... until we get full adoption
... if you can inject scripts you can probably inject phishing content, so maybe no bug reduction

christoph: can we put stuff in web platform tests to make sure we are interoperable on edge cases around parser inserted scripts

artur: so far only API where we had to manually pass the nonce was document.write

CSP3

mkwst: basically done from a feature perspective
... two things that are open questions whether to add
... generally speaking the things we've discussed as a group are in the doc
... there is polish work to really call it done
... but by and large the functionality is there that we want and a good job of explaining what the spec does in terms of HTML and Fetch
... most of the hooks we need are in those documents and spelled out in ways they weren't before

<JeffH> https://w3c.github.io/webappsec-csp/#changes-from-level-2

mkwst: some open questions about how to get that into W3C versions
... there is an issue filed against relevant github for each cases and then listed in this doc
... two things that are open questions are
... 1) what to do with inline script hashes?
... proposal in the doc, not backwards compatible wrt: older browsers

... 2) navigation

mkwst: there a few open issues in github having to deal with navigation
... antimalware team at google wants to use something like this for advertising
... would be work to hook into spec language for navigation
... we already do this for form posts, do we also want to do it for window.open, regular navigation, etc.
... good time to look at the doc and give comments
... a few 'breaking' changes like what does * mean for non-web content on custom schemes
... some questions about what are we hashing - UTF-8 conversion first?
... where native text mode is UCS-2
... both chrome and ff appear to do UTF-8 encoding first before hashing, would love help to know if that is correct
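As a concreteness check on that encoding question, a sketch of hash-source computation that UTF-8-encodes the script text before hashing, matching what Chrome and Firefox appear to do per the discussion:

```python
import base64
import hashlib

def csp_hash(script_text: str) -> str:
    """Compute a CSP 'sha256-...' source expression for an inline script.

    The script text is encoded as UTF-8 first, then hashed; the digest
    is base64-encoded as CSP expects.
    """
    digest = hashlib.sha256(script_text.encode("utf-8")).digest()
    return "sha256-" + base64.b64encode(digest).decode("ascii")

print(csp_hash("alert('hi')"))
```

The UTF-8 step is the interoperability question raised above: hashing the platform's native UTF-16 representation instead would produce a different digest for any non-ASCII script.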

bhill2: [Facebook hat on] would support navigation source restrictions; people are doing this today with nested frames and frame-src

... ok to not have it work on top-level browsing contexts

... also, re: exfiltration and confinement: postMessage is an explicit channel that Deian wanted to control in the context of COWL

mkwst: maybe better to solve by limiting when you have a handle

artur: reporting...
... never been usable because of extensions tampering with pages
... maybe not tractable, but maybe smart people have ideas
... I would like to see if a particular report was caused by an extension

dveditz: if we knew, it wouldn't be a violation

artur: for example, could the report tell us if it was in content added to the DOM by an extension

teddink: developers want line and column numbers for every single violation
... and that is really hard sometimes depending on the context when it happens

mkwst: is best effort

christoph: sometimes we put a script sample in the report / console

mkwst: we have argued about that because it can leak cross-origin data

artur: maybe something different for inline violations

mkwst: we currently set blocked uri to "inline" or "eval" but maybe there is metadata we can add there
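For reference, the shape of a violation report where that "inline" marker appears today (all values illustrative):

```python
import json

# Illustrative report body: when an inline script is blocked, the
# "blocked-uri" member carries the keyword "inline" rather than a URL.
report = json.dumps({
    "csp-report": {
        "document-uri": "https://example.com/page",
        "violated-directive": "script-src 'nonce-abc'",
        "effective-directive": "script-src",
        "blocked-uri": "inline",
        "line-number": 10,
        "column-number": 4,
    }
})

parsed = json.loads(report)
print(parsed["csp-report"]["blocked-uri"])
```

The metadata suggestion above would amount to additional members alongside "blocked-uri" in this structure.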

What to do long-term with CSP?

<dveditz> terri: ping -- did you want to join the talky.io channel?

<dveditz> scribenick: dveditz

mkwst: reflecting on what we've been doing w/CSP and what's been working and what doesn't
... do we carry on with "CSP4" in much the same way? or do we come up with a whole new "better" thing?
... on the one hand I don't want to do just another CSP, and on the other building something new just to be new isn't appealing either
... if we think about what we would have done differently maybe we can come to an answer

ckerschb__: do we rip stuff out then?

mkwst: browsers are slow to move, there's an advantage to tacking things on to existing CSP. then again if we didn't have historical baggage we could come up with some beautiful thing.
... Artur is interested mainly in XSS preventing. other than script-src the rest of CSP is just getting in the way (hyperbole)
... Jan is interested in stopping exfiltration
... if we had a header that was just a nonce it would be clean and simple. would that be good enough?
... would lose some of the good features we have such as containment and so on
... At the end of the day we could definitely come up with better syntax, but from a high elevation it could be effectively the same
... We put other things into CSP because it was a convenient place to put policy. upgrade-insecure-request, block-all-mixed-content, and so on
... it's convenient, but is it worthwhile complexity?
... whether we have 5 new headers or one header with 5 options it's probably about the same

dankaminsky: your proposal about isolation is beautiful.

mkwst: the nice thing about isolation is it's tied to an origin, rather than to a document like CSP. I think that's something we should build.
... one aspect of origin policy is the ability to set headers/policy on the entire site, even if the site forgets to cover all pages (e.g. forgetting CSP on an error page)
... maybe we could have done better with an origin-wide policy to start with

tbl: I'm always trying to find a way to do more rather than less. what if I wanted to turn on full errors for CORS -- normally seen as a security risk

mkwst: if the thing you're reading sets the policy "yes you can have error messages" then that would be OK. we could do something similar for "don't send me CORS pre-flights -- I understand CORS just go for it"
... feature-policy is another organization-wide policy
... could reduce threat surface by turning things off "I know I don't use WebRTC"
... having a place for a policy like that is good

tbl: then you're restricting the abilities of sites

mkwst: the way we enhance site's abilities is by involving users.

tbl: this is going in a good direction toward trusted apps on the web. you could do a tradeoff.

teddink: if I put my Crispin hat on, "don't ask the user security questions -- they don't know the answer or understand the consequences"

tbl: if you believe users can't be trusted then we can't ever replace native apps, because users have answered the questions there

artur: the messy state of CSP we have now is because developers didn't know what they were doing. they deployed policies they didn't understand that gave them things they didn't want. it's not just "users" who don't understand
... the "next thing" should not have the same potential for misuse and misunderstanding as CSP

mkwst: then we need a clear definition of what we're making. even CSP 3 doc doesn't have a clear definition of what it's good for

bhill2: there isn't one true way of web development, and not just one true way to secure web sites
... if I didn't have a mechanism for central enforcement a small security team could not keep thousands of developers from importing random code from various places.
... we have policies like don't load images from remote, because our traffic can melt sites. CSP enforces that we host a copy of the image/resource instead for example

artur: there's a good reason CSP is in the state it is -- different people want different things from it.
... some people want whitelists, and we have a bunch of new keywords for additional features
... we should examine our threat model (scheduled for tomorrow)
... tries to solve XSS, clickjacking (frame-ancestors), ???, and half a thing is site hygiene
... but it doesn't work for clickjacking -- need XFO as well because not all browsers support frame-ancestors
... mixed-content blocking keyword, but all modern browsers already do most of that already
... the two main things that help are XSS protection and origin hygiene. The security guarantees given by a whitelist are much lower than people assume.
... completely bypassable if you have a nonce, for one thing.
... there's value in having something like this, I'm just not sure these two features should be in the same mechanism
... the whitelist was not the goal of CSP as much as a side-effect of the anti-injection goal

mkwst: I've seen tweets like "the best thing about CSP is the marketing folks have to come to me when they launch a new campaign"

artur: but if there's a nonce then the marketing dept can just get around it

bhill2: maybe there's not a nonce in the policy

artur: what would facebook be vulnerable to if this didn't exist?

bhill2: we're a valuable enough target that if we loaded scripts from 500 domains people would hack many of those domains to get at us. We can't guarantee how secure those sites are
... if we don't have site control then it's equivalent to giving all those sites commit privs to our application.
... we've never been able to _rely_ on CSP for protection because not all browsers support it, but enough big browsers do that the CSP keeps the developers from doing things outside our parameters

artur: for us killing off XSS is intractable without something like CSP, but script inclusion is easily handled with integration tests. sounds like it's the opposite for you

bhill2: there's no "one true way of software development" and I appreciate that CSP supports multiple approaches
... how do we make it serve more people better without taking away some of the flexibility

francois: it's clear from artur's paper that the current policies are not effective against XSS, and if we want to focus on that then maybe a small simpler policy language would be better.

mkwst: if the problem we want to address is developer ergonomics maybe we need to develop a different language for that
... we have a bunch of code that implements security policies (not all CSP) and even if we come up with something different browsers still have all this code and it's not going away. we should leverage that.

bhill2: as you said, a new header that supports a simpler syntax (nonce-only, for instance) that is underneath implemented as CSP might be helpful, without getting rid of CSP.

rbarnes: we still have IndexedDB right? very developer hostile, but it's still there and people use it
... even if we factor things differently, if we end up with something that's effectively equivalent then is it worth making that change?

francois: there's probably a group of devs who can handle the complexity of CSP and want that flexibility, but others who would be served better by a simpler subset

mkwst: my thought in 2012/13 was CSP is a great place for everything -- it's got "security policy" in the name!
... for instance SRI -- should that be somewhere else?

artur: if we took out the whitelist aspect of CSP we'd be left with a set of flags of what we want the site to do and not do. would that be easier to understand? Put the whitelist in a separate place

mkwst: if we want to make a change with a different syntax, then we lose backwards compatibility and that gives us freedom. we might as well start over. Come up with something we think is pretty now, and in 5 years we'll be having this same conversation
... if I gave you an Artur: header that let you define a nonce and solves your problem, and defines it in terms of CSP, then you'd be happy.

artur: regardless of syntax, CSP can do whatever we want at Google. I worry about everyone else -- they aren't getting what they think they are getting from it
... if 3 years from now we ship the next big thing that's as hard to understand as CSP then we'll have the same results of people deploying policies they don't understand

francois: if you get the "Artur: Yes" header then you could ignore the complexity of CSP

artur: if there was something to replace CSP that was simpler and harder to misuse that would be easier to evangelize than "Here's CSP, but only use this bit of it and don't touch the rest if you don't know what you're doing"

ckerschb__: I'm torn. yesterday I thought we should rip everything out of CSP, today I feel differently, tomorrow who knows
... if we did take everything out, where would "require SRI" go? where would "upgrade insecure request" go? then I have to think of 10 things instead of one thing

artur: if there was one central policy then you could have a site expert manage it and the individual developers just have to live within it.

dankaminsky: this is a developer budget issue -- time budget, thought budget. if there's one thing CSP is really good at people should do that

bhill2: I like the idea of a simpler header (but don't _call_ it Simple because it never is)
... evangelize the simple solution, like a stereo with two simple buttons, and a glass slider that reveals dozens of knobs and switches that you rarely get into

dankaminsky: there are some JS frameworks that make security infeasible. yes they make development simpler but...

mkwst: nice thing about "Artur: yes" is it's polyfillable -- can rewrite it into a CSP policy automatically
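A minimal sketch of that polyfill idea, assuming a hypothetical nonce-only simplified header that gets rewritten into an equivalent CSP policy string (header name, function, and directive choices are all illustrative, not anything specified):

```python
# Hypothetical: rewrite the "Artur: yes" style of simplified, nonce-only
# header into a CSP policy string a browser already understands.
def polyfill_simple_header(nonce: str) -> str:
    return (
        f"script-src 'nonce-{nonce}' 'strict-dynamic'; "
        "object-src 'none'; base-uri 'none'"
    )

print(polyfill_simple_header("r4nd0m"))
```

The point of the remark is that such a header needs no new enforcement machinery: a server-side shim, or the browser itself, could expand it into existing CSP.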

ckerschb__: if we decided to start over from scratch and create something new and shiny, we'd keep adding new web features to it, and in a few years it wouldn't be so nice and shiny either

bhill2: with feature-policy and origin-policy we have more places to put these kinds of policy things

jochen___: referrer policy is one case of something that was removed from CSP into its own header

Referrer Policy

<bhill2> jochen: we hope that we can move to CR pretty soon, currently two blocking issues

<bhill2> ... one is what we touched on earlier today: whether value should be quoted or not

<bhill2> ... other issue is want to add some text about CSS. CSS spec doesn't talk about how it loads resources.

<bhill2> ... and browser implementations don't do the same thing according to my tests

<bhill2> ... after talking over dinner with Anne came up with a proposal of what do to

<bhill2> ... add some text to describe how it should be and wait for CSS to describe how they do it and update the spec

<bhill2> ... another question is modules: they can trigger loads, including nested loads; not really clear what the settings for fetch should be

<bhill2> ... in CSS loading a subresource, the stylesheet is sent as referrer in old implementation

<bhill2> annevk: pretty sure what you're saying doesn't match the HTML spec

<bhill2> ... once you run a module script, it creates its own environment, so any further fetch should use that environment's settings

<bhill2> jochen: would it send the module's origin?

<bhill2> annevk: origin is inherited from document

<bhill2> jochen: 2nd level inherits from where?

<mkwst> jochen: Now I have a microphone.

<mkwst> ... Modules: The loader spec isn't done yet. Would rather not address in the referrer policy spec.

<mkwst> annevk: The HTML spec defines how modules are loaded.

<mkwst> ... The loader spec is just hooks. Doesn't change how browsers load modules.

<mkwst> jochen: Using hooks is fine. The problem with CSS is that it doesn't have hooks.

<mkwst> ... Loader will integrate with Fetch, and then we're set because Fetch has a referrer policy on a request.

<JeffH> ie: https://html.spec.whatwg.org/#module-script ?

<mkwst> annevk: Not sure if modules should respect a policy set on them.

<mkwst> jochen: Differences in browsers are down to when inheritance takes place.

<mkwst> ... Insert a stylesheet, then use the history API to change the URL, and see what happens.

<mkwst> ... I think it makes sense to capture the state at the point at which the stylesheet is injected.

<mkwst> ... Similarly, it makes sense to snapshot at the time an `import()` statement is passed.

<mkwst> annevk: URL or referrer policy?

<mkwst> jochen: I'd like to capture both.

<mkwst> annevk: Why capture the URL?

<mkwst> jochen: <link rel="stylesheet">

<mkwst> ... #foo { background-image: url(); }

<mkwst> ... pushState()

<mkwst> ... appendChild(createElement().id="foo")

<mkwst> ... What happens?

<mkwst> annevk: Uses the URL of the stylesheet, not the document.

<mkwst> ... Because the stylesheet creates the fetch.

<mkwst> ... I think that's how Firefox works.

<mkwst> jochen: Not always.

<mkwst> annevk: [skeptical noises]

<mkwst> jochen: Point is, we should write this down. And that means teaching CSS about how fetching is supposed to work.

<mkwst> ... So this is why we need a section in the referrer policy document that describes the expectations.

<mkwst> annevk: You should convince the CSSWG to do the work.

<mkwst> ... Otherwise it's undefined.

<mkwst> jochen: Apparently people are looking into fixing this, but how long should we wait?

<mkwst> annevk: Referrer policy might not need any changes.

<mkwst> jochen: Either we don't say anything about CSS, leave it undefined, and then make sure that CSSWG puts in reasonable hooks for Fetch.

<mkwst> ... Or write something vague in referrer policy to specify what we'd like to see happen.

<mkwst> ... Should get URL and policy at the same point in time and from the same location.

<mkwst> annevk: I think bz disagrees.

<mkwst> ... Thinks it should be the Gecko model.

<mkwst> jochen: Gecko is weird, because gets the policy at a different point in time than the URL.

<mkwst> ... You can change the page's policy in the document, and see that it isn't captured.

<mkwst> annevk: Sure, might make sense to capture the policy, but always use the URL of the stylesheet.

<mkwst> jochen: I think all browsers agree on the referrer URL.

<mkwst> annevk: I don't think Gecko uses the referrer of the document.

<mkwst> jochen: Might be wrong. We can look.

<mkwst> ... Regardless I don't want to specify that in this specification.

<mkwst> annevk: If you're going to say "We haven't figured out CSS yet." that's fine. If you say "We haven't figured CSS yet, so just do what Chrome does." that might not be fine.

<mkwst> jochen: We could just not talk about CSS.

<mkwst> mkwst: I think we should say something high-level, but we shouldn't say nothing.

<mkwst> jochen: Requirements are possible in both models.

<mkwst> annevk: We inherit other things from the document, so why not the policy?

<mkwst> jochen: Why not the document's URL?

<mkwst> annevk: Because we have something better.

<mkwst> ... [discussion]

<mkwst> ... Talk to bz.

<mkwst> ... He has opinions.

<mkwst> jochen: Ok.

<mkwst> ... That's it.

<mkwst> bhill2: What resolution do you want on quotes or no quotes.

<mkwst> ... Show of hands?

<mkwst> jochen: Sure.

<mkwst> ... My personal opinion is that our policies are already pretty tree-like. We encode several complex ideas into a single keyword.

<mkwst> ... If we want something more complex with a JSON representation, we might as well scrap all the existing values.

<mkwst> ... From that point of view, no reason to be forward compatible.

<mkwst> ... If there's a strong belief that it should be JSON-like, just put quotes there. Whatever.

<mkwst> annevk: You need to consider extensibility. How are you going to extend the API and attribute?

<mkwst> ... Back-compat?

<mkwst> ... The header idea of doing headers in JSON.

<mkwst> ... Mike seems to be the only one doing that.

<mkwst> jochen: Without an indication of what the future looks like, a new header is the right answer.

<mkwst> annevk: With an older header and a newer header, the old one still works for older browsers.

<mkwst> dakami: Of the tree-structured formats, JSON has a simple parser.

<mkwst> annevk: The HTTPWG hasn't done their homework on headers and parsing.

<mkwst> ... Was an informal gathering, and folks were opposed to JSON.

<mkwst> dakami: Between binary and JSON, JSON looks great.

<mkwst> ... Between JSON and status-quo, I don't know.

<mkwst> annevk: New headers? Maybe 5-10 a year. Usually very simple.

<mkwst> dakami: JSON encapsulates some of the garbage around header escaping and splitting.

<mkwst> ... Some standardization there is of value.

<mkwst> annevk: Header-combining intermediaries.

<mkwst> ... First half of a value in the first header, and the second header has the last bit.

<mkwst> ... If not combined, maybe or maybe not a valid value.

<mkwst> dakami: Requires a clear definition of how to combine headers.

<mkwst> annevk: If we define a list of headers parsed in the old way, and the rest in the new way, great.

<mkwst> ... Look at token binding, for instance.
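The intermediary hazard annevk describes can be sketched: HTTP allows repeated header fields to be joined with a comma, so a parser expecting a single token sees a different value than a list-aware one (header name and values illustrative):

```python
# Two logically separate fields for the same header name, as they might
# arrive at an intermediary that combines them per the HTTP rules.
headers = [
    ("Referrer-Policy", "no-referrer"),
    ("Referrer-Policy", "unsafe-url"),
]

# Combining intermediaries join repeated fields with ", ".
combined = ", ".join(value for _, value in headers)
print(combined)

# A naive parser expecting exactly one token now sees an unrecognized
# value; a list-aware parser can instead take the last valid policy.
```

Whether the combined form is still valid depends entirely on the header's grammar, which is why annevk's point about defining old-style vs. new-style parsing per header matters.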

<mkwst> mkwst: [complains about the bay area]

<mkwst> bhill2: Folks are sending unquoted values today.

<mkwst> ... If we introduce quoted values, then we'd have folks sending two formats, even before a new format.

<mkwst> ... If we need to add more values, not sure if we actually get forward compatibility.

<mkwst> ... So maybe don't add quotes?

<mkwst> ... That makes it simpler inside a meta tag, slightly.

<mkwst> dakami: I like the idea of some standardization of parsing.

<mkwst> ... Loose parsing is bad over time.

<mkwst> bhill2: I think that's a decision made not here.

<mkwst> ... The decision to make here is whether it's worth trying to anticipate that decision in this and other headers.

<devd> From the implementor side, worth pointing out all the pain that quoting, CSP's syntax leads to

<mkwst> jochen: I hear that the easiest way forward is to stick with the plain value.

<devd> so if we can go straight to JSON instead, that would be best

<mkwst> dakami: How don't you end up with an attack vector? Request splitting, etc.

<mkwst> bhill2: We have a fixed set of known values here.

<JeffH> A JSON Encoding for HTTP Header Field Values: https://tools.ietf.org/html/draft-ietf-httpbis-jfv

<mkwst> ... If we quote them, great. If we don't, we need a new header.

<mkwst> annevk: Note that if something more elaborate than JSON comes along later, simple strings would be incompatible.

<bhill2> <some discussion omitted accidentally due to lack of scribe>

<bhill2> tl;dr: forward compat story for e.g. HTML attributes and API is not clear in any case, they are enums today that accept a single string value

<bhill2> jochen: fundamentally the policy space for this header is not so large we couldn't just do it with an enum

<bhill2> ... only possibility is to be able to do something like map parts of url space to policies instead of annotating all anchors, etc. individually

RESOLUTION: The group doesn't care.

<mkwst> But is going to run with unquoted strings.

CORS for Developers

<mkwst> bhill2: This is an attempt to write an explainer for CORS, which is sometimes somewhat difficult to understand.

<mkwst> ... [CORS] isn't exactly an up-to-date reference.

<mkwst> ... Published this note as an explanatory reference that's more up to date.

<wseltzer> https://w3c.github.io/webappsec-cors-for-developers/

<mkwst> ... Advice for developers. History lesson.

<mkwst> ... [Chat roulette interlude.]

<mkwst> ... Attempts to explain more simply with tables and etc.

<mkwst> ... Aimed at web developers, not browser implementers.

<mkwst> ... Anne had some comments, took those into account. Looking for more feedback.

<mkwst> ... Seemed to be support for producing this kind of documentation, aim to CfC to publish as a note.

<mkwst> ... Note, not a normative spec. Attempt to help developers out.

<rbarnes> "pre-Javascript browsers" ??

<mkwst> mkwst: Yay. More of this, please.

<mkwst> artur: Does it make sense to do something similar for CSP?

<mkwst> bhill2: I think it's a general problem that we write specs for browsers, but expect users to somehow understand them.

<mkwst> ... We need to do a better job explaining things to different audiences.

<mkwst> teddink: Reading Stack Overflow, it's clear that developers _do_ read the specs, but often don't understand the nuance, specialized language.

<mkwst> wseltzer: We hear from other groups that more explainers would be helpful.

<mkwst> bhill2: Folks still want a solution to "No, really, this is public even if you sent cookies."

<mkwst> ... It's not clear that we could do that without repeating the issues of the past (`crossdomain.xml`, etc).

<mkwst> annevk: We do leak timing information with `*` with `Timing-Allow-Origin`.

<mkwst> ... Which is actually somewhat important.

<mkwst> ... Inconsistent. Not clear how it got past security review, but it did.

<mkwst> ... Since no one is paying attention, maybe we can do that here too?</sarcastic>

<mkwst> rbarnes: What's the issue with "*"?

<mkwst> bhill2: Can't send `*` in `Allow-Origin` for credentialed request.
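A sketch of the rule bhill2 states: a server that allows credentials cannot answer with `*` and must instead echo a specific origin (and vary on it). Function and names are illustrative:

```python
# CORS response-header sketch: "*" is not a valid Access-Control-Allow-Origin
# for a credentialed request, so the server echoes the request's Origin
# (and adds Vary: Origin so caches keep responses separate per origin).
def cors_headers(request_origin: str, with_credentials: bool) -> dict:
    if with_credentials:
        return {
            "Access-Control-Allow-Origin": request_origin,
            "Access-Control-Allow-Credentials": "true",
            "Vary": "Origin",
        }
    return {"Access-Control-Allow-Origin": "*"}

print(cors_headers("https://example.com", True)["Access-Control-Allow-Origin"])
```

This per-origin echoing is exactly the dynamic response the "truly public even with cookies" discussion is trying to avoid requiring.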

<mkwst> annevk: A lot of the complexity resulted from a security review in 2006.

<mkwst> ... Might be worth reevaluating things with the folks paying attention these days.

<mkwst> ... On the other hand, the requirement isn't onerous.

<mkwst> annevk: [Chrome has problems. Probably Mike's fault.]

<mkwst> ... Also, Appcache is bad.

<mkwst> bhill2: Maybe an opt-in to refetch without credentials in some cases?

<mkwst> annevk: A library could do that.

<mkwst> ... Also, T-A-O allows multiple values.

<mkwst> ... We could do that too.

<mkwst> ... Could address a CDN that works with 5 domains, doesn't address the "truly-public" case.

<mkwst> annevk: If you make the request from a unique origin, the server can make a static response, as `null` is a reasonable non-`*` response.

<mkwst> bhill2: What about redirects? Do we no longer set the origin to `null`?

<mkwst> annevk: I don't think so?

Summary of Action Items

Summary of Resolutions

  1. The group doesn't care.
[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.144 (CVS log)
$Date: 2017/02/15 22:32:50 $