WebAppSec F2F Berlin, 14-July-2015

14 Jul 2015


See also: IRC log


Present: dveditz, freddyb, wseltzer, bhill2, dbaron, Axel, JonathanKingston, rbarnes, francois, mkwst, Yan, mnot, dka (Dan Appelquist), slightlyoff
Chairs: bhill2, dveditz
Scribe: Dan Veditz


<bhill2> Scribe: Dan Veditz

<bhill2> Scribenick: dveditz

Upgrade Insecure Requests

mkwst: upgrade insecure requests spec -- implemented in Chrome and just recently Firefox

<wseltzer> https://w3c.github.io/webappsec/specs/upgrade/

mkwst: getting good feedback from ppl in Google experimenting with this
... trying to migrate over historical content
... washington post is interested, w3c is interested. good to see. TAG review a couple months ago, suggestions for improvements
... only one not implemented is changing the flat token to a source-style directive (to upgrade specified hosts)
... this would allow continuing to serve insecure images from a particular site while upgrading your own content
... this would be consistent with other parts of CSP and might be relatively trivial to add. Will probably add that in the relatively near future
... Discussion going on right now about the name and the name of the signalling header
... Sending a header "Https: 1" turns out to break sites for various reasons (infinite redirect loops, or sites try to redirect to https even though they don't support it)
... some have suggested just breaking those sites but I don't want to do that
... current proposal is to use the current name upgrade-insecure-requests. bsmith wants "non-secure"
... I don't want to argue for months
... would like this group to bless a name so we're done
... proposal to merge this feature spec into the Mixed content spec. Don't think it's a great idea
... bsmith's argument is this is a mitigation strategy for mixed content, so it's totally related
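For reference, the two headers under discussion (as named in the draft at the time; the directive is sent by the server, the signalling header by upgrade-capable browsers) look like this:

```http
# Response header (server -> browser): opt this page's insecure
# subresource requests into being upgraded to https:
Content-Security-Policy: upgrade-insecure-requests

# Request header (browser -> server): advertises that this client
# implements the upgrade mechanism
Upgrade-Insecure-Requests: 1
```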

wseltzer: I think we have other ways of doing that

francois: hypothetically if we did MIX-2 with opportunistic upgrade would it be appropriate to merge at that time?

mkwst: sure, nothing preventing us from doing that
... spec seems to be doing good work, implementations are coming out -- have two compatible (or mostly) implementations for the core of it

dveditz: I don't think the Firefox patch included the signalling header yet. didn't see it in the patch (but could have missed it)

mkwst: looking through the github issues...

bhill2: for navigation requests... those are only upgraded for same-origin requests?

mkwst: correct

bhill2: how would that work with a source list

mkwst: could have two lists, or use the same lists for both resources and navigations
... or could have a list of resources and then if you navigate to those sites that would be covered
... but '*' would be problematic in that sense because it will likely break for some sites.

<mkwst> https://github.com/w3c/webappsec/issues/184

mkwst: splitting would explain the current behavior so maybe that's a good way to do it

bhill2: that seems interesting to me to have those addressable independently
... I can see a site having a constellation of hosts that will be upgraded together
... or ebay and paypal upgrading together
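A purely hypothetical sketch of the split being discussed -- neither the source-list form nor a separate navigations directive was specified at this point, so the directive names, syntax, and hosts below are illustrative only:

```http
Content-Security-Policy: upgrade-insecure-requests images.example.com; upgrade-insecure-navigations 'self' checkout.example.com
```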

mkwst: that's the only real outstanding issue from my perspective
... couple of editorial issues
... concept of a navigation set inside nested browsing contexts
... that's an additive thing in nested contexts. This may be poorly explained in the spec (Yan, among others, was confused by it)
... if I frame example.com and example.com says it can be upgraded, maybe we should propagate the navigation upwards as well
... not clear we can do that for '*'. not sure it can be safe to support resources pushed up, but navigations should be.
... pde wanted this in the **** client because they want to support Safari, which supports HSTS but doesn't support upgrade

<mkwst> dveditz: confused about preloadable hsts hosts.

mkwst: we send a signalling header, want to reduce noise

<rbarnes> i totally do not understand the let's encrypt argument. there is currently no browser-facing component to LE, except for the certs.

mkwst: we need to send it to https because site needs to know if it's safe to set HSTS
... let's say we go to w3c and if they don't get this header, they can redirect back to http because they can't support the upgrade
... no browser supports this feature that doesn't already support hsts. it would be more complicated if that were not true
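A server-side sketch of that check (a hypothetical helper, not from the spec): the server commits to redirecting http: traffic to https: only when the client has advertised upgrade support via the signalling header.

```python
def should_redirect_to_https(request_headers):
    """Return True if the client advertised that it can upgrade
    insecure requests, making an http: -> https: redirect safe."""
    # HTTP header field names are case-insensitive, so normalize first.
    normalized = {name.lower(): value.strip()
                  for name, value in request_headers.items()}
    return normalized.get("upgrade-insecure-requests") == "1"
```

A site could gate both the redirect and the decision to set HSTS on this predicate, avoiding the redirect loops mentioned above for clients that don't implement the upgrade.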

slightlyoff: it's conceivable that our preload list will get so large we'd use a bloom filter or some other lossy format. why wouldn't we send this header then?

mkwst: chrome doesn't actually implement this yet
... I basically agree with you that the scenario when we can remove the signalling header is distant, but I'd like to remove it someday. I didn't want to send the header at all, but was convinced otherwise.
... would still like to be able to remove it in the future, or reduce its use

bhill2: anything else on the topic?

mkwst: would like a rubber stamp on the name of the header.... "upgrade-insecure-requests: 1"

bhill2: is that still what you want if we split the directives?

mkwst: yes, that's still the name of the feature so the second directive name is irrelevant wrt the signalling header

<wseltzer> [no objections]

<rbarnes> bhill2 for parliamentarian

bhill2: any objections? unanimous consent

mkwst: any objections to splitting the directive?
... advantage is if you supported multiple navigation targets, but maybe ??
... there was a CSP3 proposal to block navigations, but we were concerned about breaking navigations on the web (malware sites preventing people from leaving)

bhill2: this would be less restrictive and more useful

mkwst: upgrade-insecure-navigations? does that need a source list or is * good enough

yan1: the use case would be if you know one of your sub-sites doesn't support https yet

mkwst: that makes sense for the subresource case. does it make sense to make a distinction between subresources and navigations?
... one pedantic reason is we can't explain the current distinction between upgrading everything but only upgrading navigations to self
... but it's easier to reason "upgrade all the resources i might use" vs. "I know everywhere I might navigate to is upgradable"

bhill2: it's useful to have them treated separately, is it useful to have something more granular than 'self' vs '*'

mkwst: splitting them in chrome would be pretty trivial
... one option would be the current directive is magic and makes the distinction between subresources (all) and navigations (self), but if you add a source list
... then the source list applies to both subresources and navigations. Makes sense, but makes the bare directive pretty magic and hard to explain
... so in that sense two directives would be easier to explain
... can I take the silent majority to be agreeing with what we've been talking about?

rbarnes: rather than using this upgrade-insecure for navigations we could rely on HSTS on the destination to upgrade navigations
... I'm also fine splitting them out

mkwst: I will fiddle with splitting those out to see what they look like, and I'll change the name of the signalling header in the spec
... I think that will take care of the feedback we've gotten so should be able to go to CR with those changes

wseltzer: thank you

<bhill2> https://tools.ietf.org/html/draft-hoffman-trytls-02

bhill2: is it worth taking a moment to consider interactions with other mechanisms such as various opportunistic encryption mechanisms
... not sure how much status this has other than "crazy idea"

mnot: haven't heard much talk about it, just an idea

mkwst: expired in october, right?

rbarnes: there was strong resistance to doing anything fancy in dns like this because of the last-mile problems

mnot: there are people in ietf very enthusiastic about opportunistic encryption and don't understand the resistance

bhill2: an http header isn't appropriate for upgrading other protocols like IMAP or SNMP
... ignoring last mile problems, if we magically know some domain wants to be upgraded what do we do?

axel: would upgrade impact currently unreachable URLs like chrome:// or about:

bhill2: this wouldn't impact other URL types

mkwst: the spec covers only http: and ws:, nothing else

bhill2: back to the previous questions... do we think it's appropriate for a web browser to accept signals not through the web channels to get this information

mkwst: we have this already, the hsts pre-load list, so using other sources (if we trust the mechanism) seems reasonable
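The HSTS preload list mentioned here is fed by sites that serve an HSTS header including the `preload` token; a typical value (max-age shown is one year, a common choice) looks like:

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```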

bhill2: is there a need to define the order of operations for all these different things so a new specification can define where in the process it is injected

mkwst: we currently have this in fetch. I have poked Annevk when I've needed hooks added to the fetch algorithm. adding steps to the fetch alg seems like the right general thing to do for this

<mkwst> https://fetch.spec.whatwg.org/#main-fetch

bhill2: that wraps up that topic, taking a quick break and then we can introduce new people and have a more freeform discussion of upgrading the web

Securing the Web

wseltzer: sorry, I can do it if no one else steps forth

<wseltzer> 100% HTTPS: Roadmap for the entire Web

<missed some>

bhill2: I'm interested in places where WASWG and the TAG can cooperate. specs, social, political... let's put it all on the table

slightlyoff: the TAG can make more sweeping statements on what's good and bad, would be good to have WASWG backup on @@

mkwst: from my perspective it would be good to understand what we can do in the browser to make it easier for developers to make this switch. upgrade is one, I don't know if there are others

bhill2: WASWG can create technical specs, can do reviews, harder for us to "set policy" for other areas in w3c in terms of mandatory https and nuances in their specs

<rbarnes> 1?

bhill2: the TAG is in a better position as an elected body to make some of those statements
... what is the security model of the web platform and what will advance the platform
... feature parity with mobile platforms which have a very different model

wseltzer: from the w3c perspective these groups are good places to help us know where the web should be and how we can get there
... give pushes to those who have not yet realized their customers are demanding greater security
... we've also been talking about stepping up the web security interest group to fill in some of those guidance pieces to users and developers
... all these pieces coming together are very helpful

mnot: that sounds good. the focus on making https easier to use is good, lots more to do there
... we haven't talked much about securing http, that's a controversial topic
... getting parity with https on http. if we want to sacrifice some of those guarantees that's where we get controversy. would love to hear that play out
... to take into our discussion with TBL later in the week

bhill2: worth discussing in this group whether it's possible to do in some way

mnot: there's opportunistic encryption in httpbis, with at least one browser vendor interested in it
... feedback we've heard is that subsetting those https guarantees is harmful. is there any subset that would be useful?
... our customers are interested in http/2 but pushback about switching to https. you explain it and they're eventually OK

dka: when I go out and talk to developers at conferences about the move to https, people get interested in the possibility of a "secure http". maybe they're not thinking things all the way through, but the easier migration is interesting to them
... As far as the conceptual model goes, the difference between web and mobile platforms is that on the web you have the user agent enforcing rules as a trusted 3rd party
... this came up on edgeconf in the discussion about "dropping the lock"
... one of the things I talk about is that the web has a visible indication of security. I don't know if the visual indicator is as important as the concept that the neutral 3rd party is enforcing

yan: I don't have much to add to what mark said except to agree opportunistic encryption is controversial

mnot: trying to channel ekr, users don't know or care about the lock...

<mkwst> https://www.chromium.org/Home/chromium-security/marking-http-as-non-secure, btw.

dka: when I was talking about the lock that was just a prompt to me thinking about the role the UA plays vs a native application

mnot: ... user experience in these discussions is a red herring. if you have secure http you always have the possibility of a downgrade attack

wseltzer: our goal is to serve as many of these users as we can, the ones who don't know what a lock is but want security, and those who would prefer "no web" to an insecure one

dbaron: I was wondering about partitioning storage, cookies being separate for secure and insecure, is that an obstacle to migration?

mkwst: yes, that's an obstacle. don't know how major, but it is an issue that comes up. a challenge

dbaron: could secure-http be helpful in that migration, where each site could have its own flag day

mkwst: would it be useful for a site to send a header/signal to say "please upgrade my insecure crap"

mnot: but it's not "his", it's the other origin's, that might not actually be his

bhill2: unfortunately the alternative is you suck everything out of one window and up to the server, or postMessage to the "other" origin to convert.
... that's no less secure than the proposal

mkwst: a one time conversion that neuters the data in the http origin
... forbes.com proves the point -- one site is not the same as the other one

freddyb: if we have hsts, maybe on those sites we could suck over the data from http
... it would have to be opt-in

mkwst: but the http: site could have bogus poisoned data populated by a local wifi attack.

dka: when it comes to flag days, one topic in the STRINT workshop is that cert warnings are broken and when there's an invalid cert users are encouraged to click through because of social factors
... such as the Apple employee who went to their health care provider's site, called them about the cert error, and was told "that error's OK, that's how you know it's the right site"

bhill2: finding more progress through data driven research about populations rather than us smart people sitting in a room trying to imagine what users will/would do.

mkwst: apf [Adrienne Porter Felt] has done research with data-driven results.

<wseltzer> http://research.google.com/pubs/AdrienneFelt.html

bhill2: hsts does some of that... if there's an invalid cert give users no recourse, no click through
... I'm not interested in making it easier to bypass warnings
... the evidence I've seen is that (hard thing to measure) when you make it harder the abuse goes away but when you make it easier the abuse comes back
... making it harder pushes the fraud to another part of the system

mnot: do you think UX is in scope for this group in any way?

bhill2: maybe a little?

slightlyoff: I have concerns about putting UI in scope for groups that are making specs. The Chrome team has a principled objection to standardizing UI in W3C; it doesn't allow us to adapt to user needs and changing situations

<Zakim> dka, you wanted to mention certs

<slightlyoff> more importantly, we understand that things are _likely_ to change in UIs

wseltzer: I have zero interest in standardizing broken UI

<slightlyoff> and as a result, we need the flexibility to do better, improved UI without breaking conformance

<slightlyoff> Adrienne's research is proof of this principle

JonathanKingston: it's good to suggest consistent behavior between browsers in certain cases, even if the appearance is different

<wseltzer> https://github.com/w3c/webappsec/blob/master/admin/100_percent_https_roadmap.md

bhill2: this outline is kind of brainstormy
... trying to avoid flag days where we chop off parts of the internet that haven't converted
... URLs as data, urls as stable identifiers
... urls that start with http:// might be a stable identifier; could be hard to go through a codebase and just use sed to convert them all
... if we had a system that supported all the capabilities of https in http
... an http application can pull in http resources but not from an https instance of that application
... starting assumptions: 1) users can't deal with nuanced security models. secure or not secure is about it
... site authors would rather be completely insecure than partially secure where users see an indication of "not secure"

dka: but we just said users don't see indicators

bhill2: maybe the vast majority don't care, but a big enough minority that do notice and do care

dka: but that minority might be smart enough to know the diff between fully secure and partially secure

mkwst: I would be upset if browsers were making the assertions you were just making [NSA-proof vs pretty secure]
... chrome is looking to see if we can get rid of the lock. tell users when they're insecure, not say when they're secure

dka: if we're trying to strengthen the encryption and make more things encrypted, isn't there a tension there?

mkwst: the browser can make a limited number of assertions. I'm talking to a server who is authenticated and the connection is encrypted
... anything short of that should be a warning.

bhill2: even if I'm able to understand those nuances the effort to make subtle distinctions is just not worth it
... big PITA to figure out if I should worry about this yellow triangle thing

axiom 2) not subsetting the guarantees of https (CIA)

[please see notes]

bhill2: [see notes The Invariants]
... tranquility property is especially interesting

[No Read Up]

[No Read Down/No Write Up]

[No Write Down]

scribe: not enforced by today's browsers


scribe: doesn't get talked about a lot. application doesn't change from secure to insecure in the middle of a transaction
... browser will enforce the concept of tranquility on this transaction
... becomes an issue in any kind of scheme where downgrade is possible
... such as an http app opportunistically upgraded, but then the next request fails and falls back to insecure
... going back to the "open data" application example. code is loaded over https but some of the data is from insecure sources
... could be perfectly safe, but the browser can't really tell the difference
... no way for the app to say "I'm loading over https but don't show any security guarantees so that I don't get downgraded later"

mnot: as a user, if I type in https or hover over a link and see https:// the contract is already made
... having https: mean "secure, unless the server doesn't want it to be" gives me the heebie-jeebies
... often the server has one threat model in mind but the user has a different model in mind

wseltzer: and we may be imposing a model that matches neither

bhill2: currently mixed content blocks before hsts upgrades happen. we agreed to this yesterday

mkwst: I think it will be difficult to change this behavior
... I'm skeptical we will come to another decision if we talk it through again

bhill2: what does it mean to be able to upgrade some of these
... we have the upgrade draft to update insecure resources.
... csp form-action can help some, upgrading navigations can help
... we have issues with existing local data
... what if I can upgrade entirely to https and never send any data in the clear
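The `form-action` directive referenced here restricts where forms may submit; for example, a policy requiring all form submissions to go to secure endpoints (the CSP2 syntax):

```http
Content-Security-Policy: form-action https:
```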

mkwst: would I be treating a page differently if I navigate it vs framing it?
... that would be complicated

bhill2: how do we handle tranquility, what if we're assembling component parts and a later resource doesn't upgrade successfully?

<scribe> ... new primitives: opportunistically secure without promising tranquility. could be a header required on every resource, or domain-wide flag

UNKNOWN_SPEAKER: browsers wouldn't make any guarantee including not showing the 's' in https:

<rbarnes> for the record, this idea that we might selectively discard tranquility makes me really uncomfortable

UNKNOWN_SPEAKER: what does it mean for the same origin policy

[see notes on details for new SOP and data flow rules]

mkwst: what's the problem we're trying to solve? seems the concern TBL and others have is that everyone has to do work for us to be able to grab their data

<wseltzer> [one problem solved by optimistic upgrade is some protection against the passive eavesdropper, but not an active attacker]

mkwst: these proposals are similar to today's model except no one needs to change the scheme. doesn't seem like the problem we're being asked to solve
... still blocks us from moving to https, worse makes us weaken the guarantees that we would otherwise give
... this doesn't seem to solve the problem (still requires everyone else to do work) and if it does it does by reducing rather than increasing security

bhill2: the web exists out there with lots of http: links. What if Let's Encrypt rolls out and everyone upgrades Apache and is magically https guaranteed?

mkwst: if you're replacing the server anyway then you can add redirects

bhill2: but that violates tranquility ... I have to go out unencrypted first then get upgraded
... what if I can try https first and get a better indication that we can do that?
... the insecure web can interact with itself as a remnant and the general case is everything else can be upgraded

yan: a side comment. a lot of these upgrade things assume http/https are equivalent. HTTPS Everywhere shows that's really not true. Lots of special case rules

mkwst: I understood a lot of those rewrites were because of cert errors

yan: that's one of the cases.

mnot: I want to know how upgrade-insecure-request goes and adoption rates. will it help? will it not be adopted?
... I worry people won't understand, or understand just enough to abuse it
... and not understand the tradeoffs being made
... let's let upgrade-insecure bake for a while first

bhill2: who wants their name associated with the Why We Can't Have Nice Things documents

axel: http/2 solves part of the problem doesn't it?
... Chrome warns me "this is a potential phishing site". why doesn't it have a collection of rules for "it's OK to use https on this site"
... would it make admins lazy by helping them too much

<wseltzer> [the flip side of invisible upgrades is invisible failures]

yan: no... I used to be the maintainer of https everywhere. It was constantly breaking, site changed all the time
... would be hard to incorporate into browsers for the general public

mkwst: we have a mechanism by which sites can opt into this -- hsts preload lists
... that puts it in the control of the site operator, something like https everywhere takes the control away

axel: I used this for years and haven't experienced things breaking
... when browsers say "I won't give you access to this powerful feature unless you're secure" doesn't that conflict with saying sites should be in control

mkwst: the definition of "not breaking" is not simply "the site works". The site might not be ready to switch, maybe they know they're releasing a new application that doesn't work over https
... for powerful features we prioritize users over the sites, but we shouldn't prioritize the browser itself over the site
... in some cases taking control from sites is justified, but I don't think this is one of those cases

mnot: the akamai folks dislike https everywhere because https everywhere "fixes" people who have not bought certs

(some of the akamai folks)

dbaron: another factor is we don't want browsers to be gatekeepers. we want people to be able to publish on the web on an equal footing
... things like the anti-phishing you mentioned is kind of a step away from that, but we have a set of known bad behaviors


scribe: we've talked of lots of problems with silent automatic upgrades (sites that don't match)

wseltzer: difference between users saying (via an addon) and browsers just doing it on their own

bhill2: server operators need to opt in, or at least, doing so would help us know better when it's safe to do it

JonathanKingston: would you hard fail if security guarantees were not maintained?

bhill2: sometimes yes and sometimes no, depending on whether the site has opted into/out of Tranquility
... you might have an upgraded site that has opted out of tranquility, but some other site that has not could use resources automatically upgraded from that site without breaking their own guarantees

<wseltzer> [brad draws on the whiteboard]

<wseltzer> [two state differences: lock/no lock; secure+tranquil/no-tranquil]

<wseltzer> mnot: right now, alt-services is non-blocking, so you're still sending lots of data in the clear

<wseltzer> ... we could come up with a blocking Alt-Svc, but it would be something different

<wseltzer> [lunch]

<wseltzer> [resume]

<mkwst> scribenick: mkwst

bhill2: agenda bashing!

dka: Uhhh.
... What would you like the TAG to talk about tomorrow?
... (Ha ha.)
... But seriously, what can the TAG do?

bhill2: Checking in on specs yesterday. MIX. SRI. Privileged contexts.

dka: Outreach mechanism to web developers?
... Move to HTTPS, secure contexts, etc.
... Is there some coordinated effort we could make within W3C to get the word out?
... At Edgeconf, we talked to an audience of developers who kinda know what's going on.
... In Transylvania, this wasn't the case.
... How do we address the awareness gap?
... Working groups need to take some responsibility for advocacy.
... TAG runs outreach events, what can WGs do?

JonathanKingston: Hurdles to adoption (CSP, for instance).
... How do we get the message out?
... Could carry on bashing out the HTTP->HTTPS discussion.
... Lock icon, broken locks.

dveditz: Don't know concrete plans, but Chrome and FF have talked about degrading the HTTP iconography.
... Negative for insecure rather than positive for secure.

slightlyoff: Discussion of a place folks could go to understand the landscape.
... Statistics. Trends.
... HTTPArchive? No good public place to centralize this information.
... TLS needs a posse.

rbarnes: areweencryptedyet.com?

slightlyoff: who's registering it?

rbarnes: How do powerful feature restrictions fit into migrations.
... Process statements enforcing requirements onto new documents?
... Would be useful to have clear discussion for requirements, use cases, etc. that are broken today by TLS.

mnot: Whatever Dan said.
... Useful to continue with Brad's discussion.
... Document that explains why these decisions aren't easy ones.
... Don't understand how browser's power is being used.
... Roles of the TAG are to do advocacy, talk about policy. Feedback about how they could help.

freddyb: Powerful features is interesting.
... What is a powerful feature? Upcoming API integration.

dveditz: Deprecation?

francois: What about people that don't want to migrate? What do we do with that legacy content?

dveditz: Right. Abandoned sites, for instance.
... TBL's concern involves those sites.
... Using data from abandoned sites === lost data.

dbaron: Powerful features is interesting.
... What are the barriers to folks migrating today.
... What problems are hardest for them, what can we do to fix those?
... Is there a list of those things?

<rbarnes> firefox historical telemetry on HTTPS usage, by pageview https://ipv.sx/telemetry/ssl-page-historical.html and by transaction https://ipv.sx/telemetry/ssl-tx-historical.html

wseltzer: Where do we want to be giving guidance to other groups?
... Interested in whether we get to a point to where we can distinguish between user needs that we address. Thinking about sophisticated users with specific concerns. Distrust local network, censoring proxy, etc. Escaping those.

bhill2: How do we get to the end state, what are the barriers, how can we encourage.

dveditz: Nothing new to add.

francois: Some folks that come from various orgs. Akamai, for instance, might have specific problems that we could help solve.
... Mike comes up with specs that help Google, perhaps others have distinct needs?

dveditz: Concerned most about legacy.
... How we can help sites that want to use legacy data. Is there some way to allow them to do so?
... "No, really, this is just anonymous data."
... -1 powerful features. we agree that the concept is good, but individual feature decisions shouldn't be made by us.
... we can all individually have opinions, probably agreement on that, might want to lobby on behalf.

mkwst: Make the TAG make groups do things. Way around charter objections.

<dveditz> mkwst: was disappointed last week at edgeconf that all we talked about was https. once a site has moved to https it's not done, there's more to do

<dveditz> ... alex talked about storage isolation, are there other types of isolation that would be useful to provide for web developers

<dveditz> ... to reduce their attack surface or exposure to risk. Future looking planning

<dveditz> dka: would the summary of that be "now we have TLS everywhere, what now?"

<dveditz> mkwst: at Google we're mostly(?) all TLS, but we're not done. what more do we need to do

yan: What if you want to migrate, but subresources won't migrate.
... Ads.
... There's possibly more we can do in the spec space.

slightlyoff: Example?

yan: Stronger SRI on the resource, opportunistically encrypted? Perhaps that's enough to not block it. Strawman. :)

dveditz: Harder to use SRI now, requires CORS.
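The SRI-plus-CORS requirement being discussed: an integrity-checked cross-origin script must be fetched in CORS mode, and the serving origin must allow it. A minimal illustration (the URL and hash below are placeholders):

```html
<!-- Cross-origin SRI requires CORS: the crossorigin attribute puts the
     fetch in CORS mode, and the CDN must respond with an appropriate
     Access-Control-Allow-Origin header for the integrity check to run.
     The digest value is a placeholder, not a real hash. -->
<script src="https://cdn.example.com/lib.js"
        integrity="sha384-PLACEHOLDER_BASE64_DIGEST"
        crossorigin="anonymous"></script>
```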

wseltzer: Deprecate CORS.
... Seems like an anti-pattern.

slightlyoff: Why?

wseltzer: Website has to state that it wants to be public, rather than the other way around. Opposite of the webby approach.

dveditz: Aspect of CORS is positive: enable one party to do something new, but only if the other party opts in. 'document.domain'
... Imposition is bad. Maybe CORS isn't the best, but it's safer than potentially breaking things.

mnot: Totally the right approach. When surprise is possible, opt-in is better.
... The way that CORS works on the wire might be optimized, but the fundamentals are fine.
... To the browser folks: can you imagine trading latency for a new security option?

rbarnes: To a degree that it's acceptable, it would depend on where the latency accrues.

mnot: Just first time I go to a website? Add a round-trip for first visit?

rbarnes: No hard answers!

mnot: Sure. Fuzzy answer?

rbarnes: Maybe?

dveditz: Reduce latency with QUIC/SPDY; some vendors would say no.

dbaron: how much latency, how much security?

rbarnes: accrue latency to insecure sites?
... Open TLS connection, only open insecure if that fails.

bhill2: what are we voting for?

mnot: The thing on the board that bhill2 drew. Possible HTTP->HTTPS thing.

bhill2: What are the concrete problems?

[rbarnes is writing on the board]

[because bhill2 told him to]

[still writing on the board]


* Changing links

* Subresources having to upgrade

* Data in storage

* Certificates

* Different content (forbes)

* UX incentives

* SNI / IP addresses

* Administration, key management

* Debugging (wireshark, telnet, etc)

* anonymity

bhill2: Things most interesting for us in this room to work on?
... formal objections to working on UI/UX in this group. Also already in flight.
... Debugging is interesting for "secure contexts"
... SNI is probably outside our scope.
... Anonymity is out of scope.

So we're left with:

* Changing links

* Subresource having to upgrade

* Data in storage

* Different content (forbes)

mnot: Different content is interesting. TBL seems to aggressively think that we shouldn't allow that.

<rbarnes> fun fact: i filed a bug to fix "cypher" in NSS, and found that their API guarantees prevent fixing it

dveditz: Someone said that Google's internal study said 97% of HTTP/HTTPS were similar (not subresource, just top-level, etc.).

mnot: One approach is to assert that they're the same (HSTS).
... The other approach is to assert that they're not the same

yan: Which is "being forbes.com"

<rbarnes> Forbes: 1

mnot: Making it opt-out seems "not cool".

bhill2: HSTS is like an opt-in already.

dveditz: Would it be valuable to have a weaker assertion than HSTS?

slightlyoff: 99.7%. Not 97%.

mnot: HSTS adoption is low for top 10k sites.

slightlyoff: 99.7 includes the things that Google can find. Images, PDFs, top level documents. Not scripts, etc. Things that are in search.

<mnot> http://trends.builtwith.com/docinfo/HSTS

slightlyoff: Adjust error bars accordingly.

<rbarnes> dbaron: fwiw, >90% of HTTPS transactions are TLS 1.2

<dbaron> rbarnes, yeah, but there are a bunch of sites I depend on that aren't... (though I didn't have that many problems with allowing only 1.2 and 1.1)

slightlyoff: [hedges wildly]

<rbarnes> dbaron: sounds like your problem :)

slightlyoff: Doesn't include sites that don't have HTTP _and_ HTTPS. Doesn't include sites excluded via robots.txt.

bhill2: As percentage of traffic, small number?

mnot: What we do in the address bar is one question. What we do for migration is another.
... What's the plan here? Open 443 first? See if you get a handshake?

[discuss of latency; races]

scribe: timeouts, downgrade, DoS.

mkwst: None of this is interesting unless there's hard failure somewhere.

[UC browser appeared somehow]

bhill2: that's the core of this proposal. outlining those situations in which hard-failure is appropriate.
... not lots of latency outside the first request.

rbarnes: connection reuse might mean that h/2 is a net decrease in latency.

mnot: and coalescing.

bhill2: another problem is involuntary upgrade.
... I haven't bought the fancy Akamai package yet.
... Can't serve HTTPS to everyone, etc.

rbarnes: mechanism in H/2 for disclaiming coalescing.

dveditz: another type of involuntary upgrade. framing with upgrade-insecure-requests.
... Possibly problematic. Might be useful to do this as an attack.
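[scribe note: the directive in question is delivered via CSP; per the draft, a framing document's policy also applies to nested browsing contexts, which is the involuntary-upgrade concern here:]

```http
Content-Security-Policy: upgrade-insecure-requests
```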

<slightlyoff> areweencryptedyet.org has been registered :-)

[discussion of frame-ancestors, origin header, advocacy]

dveditz: would it be helpful if we didn't block credentialless XHR?

[discussion of mixed content]

[sorry, mkwst was talking]

bhill2: Can introduce non-tranquil HTTPS, remove it later to address the issue of upgrading content.

rbarnes: sounds to me less like non-tranquil HTTPS, but like HTTP that was loaded over TLS when you could.

bhill2: would do that to get telemetry on what else is going on, how it feels to live in the world where everything is loaded securely.

mkwst: we should make people feel bad about sending insecure content.

rbarnes: order of operations: first do brad's thing, then make people feel bad about doing it.
... two models: for a given application, how do you get into one of those two states?
... how do we determine whether TLS or TCP, how does the browser determine which security policy to apply.
... currently tied together.
... brad's proposal decouples scheme from that.
... maybe allow one more combination of states: load over TLS, but treat as low-sec.

bhill2: yes, as a transitional state.

mkwst: I've missed something, explain?

bhill2: [ explains ]

<mnot> Just to make sure people are aware: https://httpwg.github.io/http-extensions/encryption.html

<slightlyoff> dka: further inconveniences: sed -i doesn't actually fix things; as likely to break content = (

<slightlyoff> mnot: so then like 1/3'd of an ad?

Data in storage

<inserted> scribenick: wseltzer

bhill2: should we add "migrate insecure data" to HSTS as a one-time option?

mkwst: we should do it as a separate header, with requirements on HSTS longevity and @@
... do we need more protection against bad data?
... either manual, or event-triggered?
... it seems we need remove data from http, replace or merge
... the problem with a read API is differences in data storage mechanisms

dveditz: exposing cookies would be problematic because some cookies are explicitly hidden from scripts

mkwst: use case is "I've been working for a while offline and now the site upgraded to HTTPS"
... cookies don't matter, but localStorage, IndexedDB, WebSQL

freddyb: permissions?

mkwst: probably not

slightlyoff: a note of caution - sites will have to test pretty stringently
... whereas they could do a post-message

bhill2: but once they've set HSTS, they lose the HTTP access

[more discussion whether this is a sufficiently common use case to worry about]
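[scribe note: absent a dedicated read API, the post-message approach slightlyoff mentions can be sketched roughly as below; origins and the helper name are hypothetical, and JSON-serializing localStorage only captures its enumerable string entries:]

```javascript
// On the legacy HTTP page, loaded in a hidden iframe by the HTTPS page,
// something like:
//   parent.postMessage(JSON.stringify(localStorage), "https://example.com");

// On the HTTPS page: import the posted data, merging without clobbering
// anything the secure origin has already written.
function importLegacyStorage(event, storage) {
  // Only accept data from our own old HTTP origin.
  if (event.origin !== "http://example.com") return;
  const data = JSON.parse(event.data);
  for (const key of Object.keys(data)) {
    if (!(key in storage)) storage[key] = data[key]; // merge, don't clobber
  }
  return storage;
}
```

[as noted above, this only works before HSTS is set, since afterwards the HTTP frame is no longer reachable]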

<dveditz> slightlyoff: is the reason people want to use hsts preloading because we don't provide another way to do it?

<dveditz> ... like there was no way to set the header at first?

<dveditz> mkwst: that was HPKP, we always had the header for hsts. preloading just solves the initial window vuln

<dveditz> bhill2: do we discourage people from using hsts by not having a transfer mechanism, or is it small enough number of sites to not worry about

<dveditz> francois: it's just sites that have offline data that need to take a slower translation path, and they can use upgrade-insecure as an interim step

<dveditz> ... should our group create a guide for people converting their sites to https?

<dveditz> mkwst: yes, we should do that, and showcase the various specs we have produced that can help them

<dveditz> ... we currently have the TAG, this group, and the web security IG (which hasn't been doing much lately)

<dveditz> ... that could do something like this. Someone should, maybe it should be us

<dveditz> ... I was unhappy with our recent chartering process that had members in this room objecting to our group making recommendations

<dveditz> wseltzer: interest groups don't have to be "black holes". they can bring in invited experts and have fairly broad leeway

<dveditz> ... I invite people to help revitalize the security interest group

<scribe> ACTION: JonathanKingston to write Same-Origin documentation [recorded in http://www.w3.org/2015/07/14-webappsec-minutes.html#action01]

<trackbot> Error finding 'JonathanKingston'. You can review and register nicknames at <http://www.w3.org/2011/webappsec/track/users>.

bhill2: I will semi raise my hand to write up "Why we can't have nice things?" why upgrades are complicated, origin model, etc.
... and that dovetails to evangelism, outreach, education, TAG role

dka: for publicity purposes, the press seems to be looking for a package of things with a story

mkwst: without a good mobile story, without Apple, it's hard to tell the package

bhill2: we're close to rec on CSP2, Mix, Powerful Features, SRI

We can invite Safari to implement

dka: W3C could do a press release, get statements of support from implementers

wseltzer: Great. W3C comms would like help in telling the story

dka: I can help write stuff

mkwst: CSP2 has implementations in chrome and firefox
... formally, we need to republish CR and wait, then move ahead

bhill2: everyone's on Mix
... SRI has ff and chrome
... upgrade is pretty close
... powerful features needs a test suite
... TAG can take it forward

<bhill2> mixed content

<bhill2> referrer policy

mkwst: referrer policy is in FF, chrome, safari
... not sure about IE
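[scribe note: at the time, the Referrer Policy draft's primary delivery mechanism was the meta element, e.g.:]

```html
<meta name="referrer" content="origin-when-cross-origin">
```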

bhill2: "W3C did 6 things" sounds press-worthy

mkwst: do we need a formal CfC to republish CSP2


bhill2: any objections to removing unsafe-redirect and republishing CSP2 as discussed on list?
... discussion was open, we heard no objections, and hence

RESOLUTION: Republish CSP2 CR without unsafe-redirect

mkwst: I'll republish
... Was there interest in this group in "clear site data"?
... proposal for a header to reset state

dveditz: secure?

mkwst: still debate about it
... someone said, "if attacker can remove your data, reason to migrate"
... fits into scope of charter
... attack surface mitigation
... e.g., remove data on logout, gplus would like to store data locally, iff they can remove
... so providing this header seems good way to mitigate risk
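[scribe note: a sketch of the logout use case under the proposal; the header name is from the draft, but the exact value syntax was still in flux at the time:]

```http
Clear-Site-Data: "cookies", "storage"
```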

bhill2: sounds within scope

mnot: I like it, perhaps option for cookies-only

slightlyoff: what about the pwnd service worker case, where you want to lock out an old service worker
... API missing to prevent writes from existing contexts
... this is necessary predicate

mkwst: sandbox all currently-open windows of that origin, then resume after clear event
... where the thread wound up

slightlyoff: that sounds like good behavior

mkwst: Great, I'll do it in this group.

bhill2: I like that spec too, as Facebook

rbarnes: powerful features, and what to ask the TAG to do
... as a strawman, ask TAG to set minimum security policy across W3C specs
... e.g. limitation to secure contexts

dka: carrot and stick, both "here are some findings" and "here are some people who can help you meet them"
... and implied threat of non-approval of Rec if you don't meet them

mkwst: what do we do with WGs who won't change, such as geolocation?

dka: stage an intervention, i.e., call them together in the same room at TPAC
... if the right people aren't at TPAC, find another opportunity

rbarnes: also a difference between existing features like geoloc and new things
... re advocacy, we can send a stronger message if implementations are coordinated
... so you don't have to look at a chart to know where your feature will be supported

mkwst: inconsistent implementation is problematic for some features
... e.g. nonces, where we have to take care that it doesn't break non-implementations

rbarnes: powerful features, impact on other specs

mkwst: current spec defines secure contexts
... not much on what a powerful feature is
... that's for TAG, others
... it would be testable if we added an API for "is secure context?"


mkwst: feature detection seems like a reasonable thing to want

slightlyoff: perhaps permissions API
... TAG has reviewed and said it isn't done yet
... could extend to include secure context detection
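[scribe note: the "is secure context?" API under discussion could be feature-detected defensively; the scheme-check fallback below is an illustrative assumption, not part of the spec, and deliberately incomplete (localhost, file:, etc. are omitted):]

```javascript
// Sketch: use the proposed boolean API when present, otherwise fall
// back to a crude scheme check. `win` is a Window-like object.
function inSecureContext(win) {
  if (typeof win.isSecureContext === "boolean") {
    return win.isSecureContext;
  }
  // Fallback heuristic only; the real definition covers more cases.
  return win.location.protocol === "https:";
}
```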

dbaron: the easiest feature detection should be tied to the feature

rbarnes: I was initially queasy about powerful features because it seemed to imply that some features weren't sensitive
... when we want to encourage the entire web to move to secure context
... I'd like to suggest that *all* new things should start from secure contexts

mnot: give WGs the tools and vocabulary to make good decisions

mkwst: get people in WG on-board

mnot: we have Securing the Web finding, we have security/privacy review, we have Powerful Features/Secure Contexts

rbarnes: IETF will publish a consensus document "things ought to be this way"
... groups can then use that as a starting point

mnot: Securing the Web did put thumbs on the scale
... we're starting to explore fingerprinting

rbarnes: "Secure Contexts" should ref Securing the Web http://www.w3.org/2001/tag/doc/web-https

mkwst: I worry that the privacy/security questionnaire is getting too long
... need to compress, not expand
... haven't yet reviewed feedback from PING in detail
... current structure is ask a question, talk about it from perspective of what other specs have done
... it would be great if someone wanted to provide supplemental information based on answers

mnot: TAG would prefer if people had already reviewed the questionnaire when they come to TAG for review

bhill2: AOB?

JonathanKingston: How's Credential Management?

mkwst: mostly implemented in chrome
... Axel is doing an implementation in firefox

<rbarnes> axel's credentials API: https://github.com/AxelNennker/firefox_credentials

JonathanKingston: Generating password missing

mkwst: sounds reasonable to add. I haven't yet come up with an API

[discussion about passwords' special-characters requirements]

mnot: Mixed content on localhost?

mkwst: browsers currently block mixed content on localhost

mnot: talk of a CORS-dance. is that on the roadmap?
... if you've got attacker code running on your local machine, it's game-over

[discussion adjourned to beer]


Summary of Action Items

[NEW] ACTION: JonathanKingston to write Same-Origin documentation [recorded in http://www.w3.org/2015/07/14-webappsec-minutes.html#action01]
[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.140 (CVS log)
$Date: 2015/08/04 02:20:27 $