<dveditz> scribenick: dveditz
<inserted> HSTS Priming slides
ckerschb__: [introduction of
himself] The web is migrating to https and more sites are
moving
... mixed content blocker stops content from being upgraded by
HSTS (by current spec).
... A embeds a script from B. MCB would block, but instead we perform an HSTS priming (HEAD) request
... if site is HSTS we upgrade load and future loads will be
upgraded
... need to cache results for perf reasons
... [presents algorithm on slide]
... if HTTPS -- ok; if HTTP then if in priming cache return
cached value, else send priming request
... we thought this would help a lot but in the end not so
much.
... 0.6% of the time it's in the priming cache. 15.6% of the
time we send a priming request.
... results: HSTS 0.5% of the time; HSTS cached 14.5%; negative
HSTS cache 23.2%; timeout 18.9%; no HSTS found 43%
... [back to the original numbers, 83.8% were in the negative cache. the previous line gives the percentage results from the priming requests]
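For concreteness, a minimal TypeScript sketch of the priming logic described above; the cache shape and function names are illustrative, not from the spec or slides:

```typescript
// Hypothetical sketch of the HSTS priming decision described above.
// "primingCache" and the function names are illustrative, not normative.
type PrimingResult = "hsts" | "no-hsts";

const primingCache = new Map<string, PrimingResult>();

async function decideMixedLoad(url: URL, pageIsHttps: boolean): Promise<URL | "blocked"> {
  if (!pageIsHttps || url.protocol === "https:") return url; // not mixed content
  const cached = primingCache.get(url.host);
  if (cached === "hsts") return upgrade(url);
  if (cached === "no-hsts") return "blocked"; // negative cache: fall back to MCB
  // No cache entry: send a HEAD request and look for an HSTS response.
  const result = await sendPrimingHead(url);
  primingCache.set(url.host, result);
  return result === "hsts" ? upgrade(url) : "blocked";
}

function upgrade(url: URL): URL {
  const u = new URL(url.href);
  u.protocol = "https:";
  return u;
}

async function sendPrimingHead(url: URL): Promise<PrimingResult> {
  // A real implementation sits inside the network stack; fetch() is a stand-in.
  const res = await fetch(`https://${url.host}/`, { method: "HEAD" });
  return res.headers.has("strict-transport-security") ? "hsts" : "no-hsts";
}
```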
arturjanc: in this example you'd need the HSTS response on the resource requested? is it possible a site set HSTS on a renderable page but not on the subresource you're requesting?
ckerschb__: we tried different
things, pinging the main page, the resource, -- didn't see a
lot of difference
... success: median 683ms, Failure median 1223ms. Large peak at
2s is the timeouts
... 10 months, 1 engineer, <missed> quite a lot of effort
for what turned out to be not useful, but we learned from
it
mkwst: is that pie chart all requests?
ckerschb__: no, that's the percent of mixed content requests
mkwst: how much mixed content do you see? chrome sees 0.82% of page loads with blockable mixed content
dveditz: firefox see about the same amount, don't remember the exact numbers
mkwst: and very small number <missed> of blocked content allowed by users
estark: proposed mixed content
level 2 roadmap
... what problem are we trying to solve?
... make it easier for site owners to move to https?
... we're in a pretty good place for cross-browser consistency
on MCB
... "optionally-blockable present" state is hard to explain. as
a user what do I do when I see the warning?
... frequency is about 2.4% of page views in chrome -- still
pretty high. how do we drive it down?
<wseltzer> Mixed Content Level 2 Roadmap
estark: We'd like to get rid of
the mixed content shield -- the UI widget that allows you to
enable active mixed content
... chrome doesn't have it on mobile, just desktop. firefox is
the same
johnwilander: safari just drops the padlock
tanvi: firefox doesn't have the shield anymore, the control is buried in the site information dropdown
estark: we'd like to get rid of
the shield one way or another, but we probably want to keep the
ability. I broke it once and people were very angry
... proposals -- UAs should UPGRADE rather than BLOCK requests that are currently treated as "blockable mixed content". more controversial than we thought.
... worry that it might not be the same resource (we have some numbers.... basically it's the same resource)
mkwst: andy investigated. best case it's the same, sometimes not there, sometimes it's a web page saying "this content isn't available". rarely is it a different resource of the same type (e.g. script)
johnwilander: we should note the fact that we're blocking resources is already breaking stuff.
ckerschb__: instead of upgrading insecure requests (which is opt-in) you're proposing an automatic equivalent. Will there be an opt-out?
johnwilander: but we could be fixing broken sites by doing something like this, too
mkwst: there's a large disparity between shield usage and blocking numbers. possible that people are using the command-line flag to disable the blocker
estark: also to some extent
people don't notice sites are "broken" when it's incidental
things
... proposal 2 -- UAs should treat optionally-blockable content
as blockable by default. if we accept proposal #1 that means
we'd upgrade these, too. There are real use cases such as image search on Google and Bing where they have chosen not to proxy the insecure images, making them mixed content
... the assumption is a lot of these are loaded accidentally or
not important to the site's functionality. Requires sites to
opt in to loading of mixed content. will dramatically reduce
the amount of mixed content people will see
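A rough sketch of what proposals 1 and 2 would mean in the fetch path, assuming a hypothetical hook (the real change would live inside the mixed content checker):

```typescript
// Illustrative only: proposals 1 and 2 combined would rewrite all mixed
// content (blockable and optionally-blockable) to https: instead of
// blocking it or loading it insecurely.
function handleMixedContent(request: URL, documentIsSecure: boolean): URL {
  if (documentIsSecure && request.protocol === "http:") {
    const upgraded = new URL(request.href);
    upgraded.protocol = "https:"; // may 404 or time out if no TLS endpoint exists
    return upgraded;
  }
  return request;
}
```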
angelo: what's the benefit to blocking the insecure images?
estark: depends on the context.
an image can be harmless or dangerous depending on site
... my motivation is not necessarily security but reducing the
amount of this mixed state that users have to try to
understand
tanvi: if we think blocking optional mixed is too much breakage we could just strip cookies from them
mkwst: depends on what our threat
is. if the image is part of the site's UI this could give an
attacker control over your UI (replace a "transfer money"
button with "cancel" or something innocuous)
... puts pressure on ads using mixed images, but not clear that
will improve other sites
johnwilander: loading things in the clear lets people see a little what you're doing on a secure page. eliminating loading mixed content also removes the ability to use HSTS as a super cookie
mkwst: also reduces privacy threats. You do a secure search on google image search, but the loaded images reveal what you were searching for.
angelo: in Edge we ran experiments on whether users understand https and the security they get out of it, and they don't. Surprised the mixed content UI causes confusion because I don't think they notice it
estark: is that an argument for
or against...?
... that's the problem we're trying to solve -- to simplify the
number of states so that when it _does_ matter it's
noticed
... proposals 3&4 are dependent on #1. if we do #1 we don't
need upgrade-insecure-requests.
... proposal 4 remove the shield UX more broadly, perhaps
removing it entirely like Safari
tanvi: users are already used to the web being broken with mixed active content, maybe we should just break passive images as well unless sites opt in
mkwst: EFF had a project where they upgraded mixed blockable content, but left optionally blockable stuff alone. they were trying to make the transition to HTTPS sites easier and the former broke sites but the latter was still working
<inserted> HTTPS Everywhere (EFF)
angelo: <couldn't hear>
johnwilander: is there a way to say "upgrade didn't work before, try without"
mkwst: my concern is that if you have a fallback mechanism then an attacker just has to block the secure path and then they force the fallback
johnwilander: can we clear things on the insecure requests? no referrer, cookies, etc
mkwst: what's the feel in the room on these proposals? I'm in favor
tanvi: I think they're too aggressive
johnwilander: i'd like to try speculatively upgrade passive content
angelo: I think the auto upgrade feels like it's not worth the cost
mkwst: it's not a complex cache, it's just automatically doing UIR. the cost might be timeouts and breakage
ckerschb__: might get different content
mkwst: hopefully rare
... hopefully lower cost connections than HSTS priming
angelo: worried auto upgrade will
be a perf hit
... lot of enterprise sites have really old applications
mkwst: if we don't force them they always will
jeffh: when mkwst says he's in favor you mean all four as a bundle?
mkwst: yes
ckerschb__: what things are blocked
mkwst: it's often not scripts --
fonts, and other types of resources. sometimes youtube frames
left over from long ago
... what's a reasonable next step? (obviously continue on the
mailing list, but other than that)
jeffh: the proposal is to update the MCB spec?
danbates: just do it and see if people yell?
johnwilander: we (safari) were slow on doing blocking, but when we did it we provided no way to override and didn't get a lot of complaints.
... worried some site may be relying on us blocking old http script resources that no longer exist. If we start upgrading, someone could buy that old domain, put a cert on it, and use it to attack that site
ckerschb__: Let's Encrypt reports https content grew from 42% in April 2016 to 64% in 2017
mkwst: lgarron is looking for
feedback on the preload list. paper lays out points of
concern
... useful data, and important questions
estark: not prioritizing pruning right now over other things, but feedback welcome
mkwst: next step formalize the proposal, and get some perf data as Mozilla did for hsts priming
johnwilander: broken down by resouce type would be great
ckerschb__: another thing we considered was doing the priming request in the background. block the current request as usual to avoid timeouts, but future requests would be unblocked
mkwst: early stage, soliciting feedback
https://github.com/wicg/trusted-types
mkwst: see the "securing the
tangled web" paper -- really good
... the core idea is google will compile all its javascript and
use that to impose controls on data types
... if you put a raw string into .innerHTML you're probably going to execute script
... use the compiler to sanitize strings by putting them into a type system. "Safe String"
... by default the compiler will stop you from using a raw
string on HTML sinks like .innerHTML and force authors to use
the safe types
... you may have .innerHTML throughout your application, but
only a few places where those strings are put into the safe
types. the sec team can inspect those chokepoints and make sure
they're safe.
... doesn't work for the web (no compilation), but maybe we can
create something that will work
... so create "TrustedHTML" and change .innerHTML to take a TrustedHTML object as well as a raw string. Could also give devs the opportunity to turn off the "raw string" variants
... we've sketched out a number of these types. maybe 10?
ArturJanc: those are the main ones. in a few contexts we may have a few more but the paper talks about the main ones
mkwst: TrustedType has
escape(DOMString html) and unsafelyCreate(DOMString url)
... TrustedScriptURL, TrustedJavascript
... alter the DOM sinks. there are hundreds of sinks. this document is not exhaustive. if we pursue this we'd want to change all of them
... want to give developers a better option than strings, and
give them the opportunity to turn off the unsafe variants
... give security auditors the ability to tell whether string
creation is safe rather than having to go to all the places
where they are used
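A sketch of the shape being discussed, based only on the names mentioned here (TrustedHTML, escape, unsafelyCreate); the actual wicg/trusted-types proposal may differ:

```typescript
// Hypothetical shape, following the names mentioned in the discussion.
class TrustedHTML {
  private constructor(private readonly value: string) {}
  toString(): string { return this.value; }

  // escape(): always-safe constructor that HTML-escapes its input
  static escape(html: string): TrustedHTML {
    const escaped = html.replace(/&/g, "&amp;").replace(/</g, "&lt;")
                        .replace(/>/g, "&gt;").replace(/"/g, "&quot;");
    return new TrustedHTML(escaped);
  }

  // unsafelyCreate(): the auditable chokepoint security reviewers inspect
  static unsafelyCreate(html: string): TrustedHTML {
    return new TrustedHTML(html);
  }
}

// With the "raw string" variants turned off, only TrustedHTML reaches the sink:
function setInnerHTML(el: Element, html: TrustedHTML): void {
  el.innerHTML = html.toString();
}
```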
ArturJanc: to me the one
additional benefit is requires explicitly doing something
unsafe to introduce DOM XSS. everything is safe by
default
... if it's explicit to the person writing the code that
they're doing something bad they may avoid it
brent: we do something similar internally at github, homegrown. would be nice to have it in the DOM.
dev: this is purely a DOM XSS mitigation, why can't it be done by devs already?
mkwst: it can -- we've done it at google
ArturJanc: csp tries to mitigate the effect of an injection, this tries to prevent the injection
mkwst: at google we've built it into the compiler. putting it in the client is defense in depth because there's always code that didn't go through the compiler for some reason
dev: this seems to go against the grain of modern applications, where the injection is in a framework like Angular and not in the developer's own code
ArturJanc: we've had feedback from angular that they're using their own "safe types" internally, and having it built into the browser could be useful.
dev: angular and react are already doing things like this, so why duplicate it?
mkwst: if we see frameworks doing useful things we should build them into the platform so everyone can benefit and not have to implement it themselves
... I'm very aware this is generalized from google's experience and want feedback on whether it generalizes or not. If this doesn't get adopted by libraries then it will be difficult for a developer to use
... we currently have all these DOM sinks. if you do everything right on the sanitizing then great. but the platform ability to make the DOM sinks simply not take raw strings accidentally is a hardening.
... some can be polyfilled like innerHTML, but location.href
can't be
dev: what attacks does this prevent that CSP can't?
ArturJanc: this prevents the injection. CSP can prevent some script execution
dev: we couldn't really rely on this because not all clients will support it, so it will have to be polyfilled anyway. In which case we rely on libraries which sanitize
... I'm not too worried about location.href because csp will
block javascript: urls
... what other ones aren't polyfillable
jeffh: this would be changes to the dom parsing specs?
mkwst: yes.
jeffh: and the escaping would have to be specified?
mkwst: basic escaping is trivial,
but it gets complex
... the escaper we have is very blunt. would be nice to use a more sophisticated filter that people are using, but it's very library-specific. DOMPurify has specific rules for <@@> for instance
... we could provide hooks for a sanitizer, or build one in to
the browser
... need to figure out what it would look like and what the
requirements are.
dev: DOMPurify is a good example of why this shouldn't be in the browser -- they keep updating it, adding rules for new libraries and library changes
ArturJanc: we could tune it to
the CSP bypasses we're aware of, for example, would be
useful
... say you have CSP strict-dynamic you could use this to
sanitize the script loading so there's no bypass to load an
unintended script
[ coffee break ]
<jeffh> scribe: jeffh
mkwst: seeing deployment of SRI, integrity protection being used
... current guarantees are all hash-based
... works for a lot of deployment scenarios, but for others not so much
... particularly for resources that change often or that you don't control
... for those scenarios, have ack'd there is a deployment issue. more difficult to use content hashes than they had thought.
... if we move to a sig-based mech, ie make a guarantee as to the content creator rather than the content itself
... have proposal for this
... see: https://github.com/mikewest/signature-based-sri
... this is a different guarantee -- ie can verify at runtime that the code running in the browser is what they (the website) created
... so this in combo with CSP3 is interesting...
... can then have a terse policy that says that all the code
needs to come from a given entity
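To make the provenance check concrete, a sketch using Web Crypto with ECDSA as a stand-in; key distribution, signature transport, and algorithm choice are all assumptions here, not part of the proposal as presented:

```typescript
// Illustrative client-side check: fetch a script, then verify a detached
// signature against a public key pinned in the page. Where the signature
// travels (e.g. a response header) is an assumption in this sketch.
async function verifySignedResource(
  url: string,
  publicKey: CryptoKey,   // the key named in the page's integrity metadata
  signature: ArrayBuffer  // detached signature shipped with the response
): Promise<ArrayBuffer> {
  const res = await fetch(url);
  const body = await res.arrayBuffer();
  const ok = await crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    publicKey,
    signature,
    body
  );
  if (!ok) throw new Error(`Signature check failed for ${url}`);
  return body; // unlike a hash, any content signed by this key is accepted
}
```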
dev: hashes painful -- have lots
of scripts
... being able to say "just verify with this public key" will
be way easier
... so w/sig-based, can just change key pair and effectively
revoke the signed content in user's browser cache
... thinks this is powerful primitive, would allow dropbox to
put it on their main page
mkwst: wrt impl complexity, put it into chrome in about two weeks
... relied on already having hash-based SRI impl'd
... is in chrome behind a flag
... a "provenance-based model" rather than "content-based
model"
danbates: asks dev why hash-based approach wont work for dropbox
dev: because they have tens of scripts that load and adding hashes everywhere causes a perf hit, is hard to manage, etc....
mkwst: see this as additive to the hash-based SRI v1 -- would not deprecate hash-based SRI
dev: we focused on hash-based first cuz it was easier. there's been a bug open for sig-based since the beginning
dveditz: thx
hadleybeeman: asked ques wrt whether CDNs have issues with this
mkwst: two types of CDNs - first
one where they are just a pipe (?) ...
... 2nd type: if you give them content and they serve it, yes
they can change your resources
hadleybeeman: there's some assumptions in this work and am not sure all the CDNs are ok with those assumptions
mkwst: goog model is the priv key is offline, the build system does all the signing, can do another model where signing occurs on the fly (priv key online)
hadleybeeman: ok, so seems the CDNs revenue model is not at risk...
mkwst: currently SRI addresses
<script> and <style> - both are loaded in whole,
made impl'g easier
... and spec'g
... would be good to look at all the other resource types to
see if we can apply SRI to them
... eg use AGL's merkle-tree approach to things that load
incrementally
... general idea: need a verification mechanism that you can apply progressively
... so far chrome has not looked at this in much detail because
lack use cases -- tho maybe y'all could look at resource types
you care about and whether we need to take a look at
them....
johnw: would the site I'm looking at now load the same if I loaded it from another place in the world -- would be nice to have that guarantee
mkwst: that brings up content-based caching -- the whale browser is doing something along those lines ?
<dveditz> (I think)
Hyungook: we always download jquery in whole and verify that it is the same as before
scribe: we can share the content in many scenarios if its SRI hashes match
mkwst: is there a set of resources you know you can do SRI checks on, or?
Hyungook: they only do it on scripts that have the SRI property
mkwst: you mentioned jquery -- bhill's thoughts on content-based hashing
... have you identified a set of resources that are amenable to this?
Hyungook: right now they are collecting data on common web resources, and based on that data can bake knowledge into the browser to make it load faster
mkwst: is curious how you are thinking about these problems
sangwhan: thanks :)
mkwst: <points to old version of SRI spec> asks: do you have similar concerns to what's mentioned in that old draft spec?
<dveditz> https://www.w3.org/TR/2014/WD-SRI-20140318/#caching-optional-1
Hyungook: whale based on chromium, interested in upstreaming their work
sangwhan: will help facilitate discussion
mkwst: now discussing: require-sri-for
https://w3c.github.io/webappsec-subresource-integrity/#require-sri-for
mkwst: seems useful, github asking for this
... wonders if extending it would be interesting
... like automatically apply hashes to the page resources
... eg if script-src has 3 hashes, have the browser auto-check the hashes against all loaded scripts
... when we go to sig-based, would help with key rotation
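A sketch of the auto-check idea: the browser hashing each loaded script and comparing it against the digests already present in the policy (the wiring into CSP is hypothetical):

```typescript
// Illustrative: check a fetched script body against a set of allowed
// SRI-style sha256 digests, as the browser would do automatically.
async function matchesPolicyHashes(
  body: ArrayBuffer,
  allowed: Set<string> // e.g. the hashes listed in script-src
): Promise<boolean> {
  const digest = await crypto.subtle.digest("SHA-256", body);
  const b64 = btoa(String.fromCharCode(...Array.from(new Uint8Array(digest))));
  return allowed.has(`sha256-${b64}`);
}
```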
dev: likes this. makes it more simple to deploy
mkwst: seems like an alternative
to strict-dynamic
... this model might allow us to say -- here's pubkey, you can
load scripts that are signed with this key
dveditz: maybe goog can do that, but most sites perhaps cant do that...
mkwst: is alternative to
strict-dynamic, not replacement
... does not have origin model, has key model
... wrt malvertising, can give advert provider a priv key and
only their ads get loaded
... applying the integrity attribute to every script is just overhead, we ought to be able to automate it and have the browser take care of more of it
patrick: relative to content-hashing - does not see a huge win for sig-based because there's not much hashed stuff in his browser cache... (?)
mkwst: chicken & egg problem ?
dev: will work on this. curious about what sig algs we should specify
... having help from other browsers and sites would be
great
mkwst:
wicg.github.io/origin-policy/
... for example: some portion of a site can serve STS header
and break site
... so having a manifest/config file for origin at well-known
location
... would have opportunity to fold in current things like HSTS, new stuff like expect-ct
... have a prototype impl of origin manifest as a patch for chrome
... chipping away at landing it piece-by-piece
... because this config is sec-relevant, have to ensure it applies to the page you are loading
... similar in nature to the policy URI moz had proposed for CSP
... server asserts that it needs a manifest, UA gets it and loads the site, UA caches the manifest for the origin, applies it subsequently
... this is sort of a special case of app manifest
... app manifests are somewhat lax sec-wise. scoped to path, etc. can exist "anywhere"
... msft has an "app store" model where the UAs get it from the app store when getting the app
... this is different in that it is scoped to an origin
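A sketch of the flow as described; the well-known path and manifest fields below are invented for illustration, as the real format is still being worked out in wicg/origin-policy:

```typescript
// Illustrative shape of an origin-wide configuration file and how a UA
// might fetch and cache it. All field names here are hypothetical.
interface OriginManifest {
  "strict-transport-security"?: string; // fold in HSTS
  "expect-ct"?: string;                 // fold in newer headers
  "content-security-policy"?: string;
}

const manifestCache = new Map<string, OriginManifest>();

async function loadOriginManifest(origin: string): Promise<OriginManifest> {
  const cached = manifestCache.get(origin);
  if (cached) return cached; // applies to subsequent loads
  const res = await fetch(`${origin}/.well-known/origin-manifest`);
  const manifest: OriginManifest = await res.json();
  manifestCache.set(origin, manifest);
  return manifest;
}
```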
johnw: would gate this on whether the user has interacted with the site before loading the manifest
jeffh: there will be subtle complexities in doing this (just a headsup) -- overall likes the idea -- it harks back to the "cohesive web security policy" paper
mkwst: in modern world, now have
h2 which can push the manifest to the UA
... hopes that h2 push makes it acceptable in perf sense
dev & mkwst: < back & forth wrt impl details of this sort of thing>
mkwst: this is in wicg, doc is not so good, but encourages to look thru the open issues to get sense of functionality folks are looking for
<dveditz> scribenick: dveditz
mkwst: a block on two branches of
the same conversation. Using secure contexts, and CORS
... we've defined a notion of a secure context. Gives other
specs the ability to restrict features to this kind of
context
... chrome started deprecating "powerful" features, can't be
used over HTTP, only in secure contexts
... this was backwards looking, deciding about features we had
already shipped
... when thinking about when to TAKE AWAY a feature we looked
at "powerfulness"
... wanted a higher bar because removing causes pain
... I think powerfulness became too ingrained. Good bar for
taking things away, way too high when considering new features.
We should not ship new features, no matter how powerless, in an
HTTP context. no one is using new features by definition,
preventing it is not causing breakage or pain.
<scribe> ... new features are carrots.
mkwst: not everyone agrees with me
danbates: I disagree with it --
there's an ulterior motive to a spec, to motivate people to go
to https
... I would love for people to go to https, but I don't think
it's right to use an unrelated spec as a lever to cause
that
<jeffh> JoelW's msg: https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/2LXKVWYkOus/gT-ZamfwAKsJ
danbates: when an API has implications about privacy or security, sure, we can restrict those. But just any algorithm? not the right way to move that conversation
nathan: I agree with [danbates]
mkwst: I'm not saying secure context spec itself needs to change, but should be considered when writing new specs.
danbates: that's different from what you said before
mkwst: my personal opinion is that all new specs should restrict features to https. but the actual decision is up to people writing those specs
<jeffh> chromium proposal: https://www.chromium.org/Home/chromium-security/deprecating-powerful-features-on-insecure-origins
estark: my sense is that we should have a security story for why any API is restricted to a secure context. otherwise it's an arbitrary cost. why this one and not another? slowing down http pages is another cost we could hypothetically impose
mkwst: there's no cost (unlike deprecation which clearly is). this new thing doesn't exist, preventing its existence isn't a cost to developers
estark: punishment is a better word? your peers are using a feature and you can't
mkwst: you call it a stick but to me it's a carrot
danbates: why do you want to do that
mkwst: I have an ulterior motive: when the web is over HTTPS it will be more secure
danbates: if that's an unshakable belief then why talk about it if you won't change your mind
mkwst: what google ultimately does is not up to just me
<npdoty> we could recommend case-by-case decision-making but still change the bar to be about security but not necessarily "powerful"
mkwst: the ultimate result for me is to switch the default option. Not "should we restrict this to https" but "why should this feature be unrestricted?"
npdoty: I've heard the concern here and elsewhere about not wanting to have a secret motive. setting that aside I'd like every feature to have a security and privacy consideration. In the privacy group we haven't seen a feature that doesn't ship data around
danbates: example: a new CSS pseudo-element (I've been working on one). so only secure pages would apply that style? what's the benefit?
mkwst: in isolation I think you're correct, pseudo-elements are not powerful and shouldn't be taken away, and any single one may not make sense. But a story that says "new features can't be used on http" is a simple and clear story in aggregate
<npdoty> I think there are lots of CSS examples where a malicious intermediary can abuse them for ways that affect the security of the user, but it might not be significant for every CSS feature
danbates: you're holding developers hostage
mkwst: I'm not saying converting to https is free. when I say there's no cost I mean we're not taking anything away. When we restrict geolocation to https we are breaking stuff and taking it away -- that's a cost
danbates: we should look back at
recent specs and see "does it make sense to restrict these?"
and I think we'll find for lots of them it doesn't
... there's no reason for the restriction, it's just to serve
your own purposes
ArturJanc: a small counter argument for pseudo-elements. Seems definitely unpowerful. but if it allows developers the ability to provide powerful features in their site, the user's data is now available over an insecure channel
danbates: that seems like bad programming. if the user data is that valuable then they should put their site on https as a feature of their site
mkwst: my suggestion is that TLS is foundational. if you don't have that you don't have integrity
<npdoty> it's not the programmer/developer, but the attacker who inserts malicious code that will transmit user-entered data
dev: I agree with daniel. that
argument leads to "why show warnings -- everything slightly
wrong is an error"
... even think we should allow FIDO on insecure sites
jeffh: no, that's not allowed. TLS is required to use FIDO
<jeffh> TLS is foundational to the security model of FIDO & WebAuthn
mkwst: concentrating on
individual specs misses the bigger picture.
... if we can do things to make developers provide TLS
guarantees (CIA) we should
estark: why shouldn't we.. <missed it?>
<npdoty> estark: why just new APIs? why not other changes, like performance improvements or any other feature
<npdoty> it seems like maybe there is interest in reviewing particular specs case-by-case, even if there isn't consensus on using SecureContext as an adoption encouragement
mkwst: web apis are potentially low-hanging fruit. make all new ones require a secure context. could switch the default and require the exceptions to put "not secure" when appropriate
<jeffh> https://wicg.github.io/cors-rfc1918/
mkwst: one thing floating around
is CORS-1918 -- require CORS to make connections on a local
network
... e.g. home routers are almost always on 192.168.1.1. many
they have CSRF or other problems, because they think of
themselves as running in a safe internal environment.
... attackers can through a simple image request (in the right
cases) change settings on the router
... proposal makes 3 rings. "outside", rfc1918 "private"
ranges, and loopback. This doc suggests requiring CORS not only
cross-origin, but cross-network
... from outside to RFC-1918? use CORS (with preflight) even if
not required by current cors specs
... the server would have to explicitly opt-in to the case of
allowing an outside host to talk to them
... your router, which never considered this possibility,
wouldn't respond to cors and would be safe
... same with loopback. happens all the time. Spotify, dropbox,
etc install local servers.
<npdoty> how many things are broken if 1918-style addresses simply couldn't be referred to from outside-web pages?
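A sketch of the three-ring classification, IPv4 only to match the "punt on IPv6" comment below (ranges per RFC 1918 plus loopback):

```typescript
// Illustrative classification of a remote IPv4 address into the three
// "rings" described above. A real implementation works on the resolved
// socket address, not the hostname, to catch DNS rebinding.
type AddressSpace = "loopback" | "private" | "public";

function classifyIPv4(addr: string): AddressSpace {
  const [a, b] = addr.split(".").map(Number);
  if (a === 127) return "loopback";
  if (a === 10) return "private";                        // 10.0.0.0/8
  if (a === 172 && b >= 16 && b <= 31) return "private"; // 172.16.0.0/12
  if (a === 192 && b === 168) return "private";          // 192.168.0.0/16
  return "public";
}

// A request from a less-private space into a more-private one would require
// a CORS preflight that the target server must explicitly answer.
function requiresPreflight(from: AddressSpace, to: AddressSpace): boolean {
  const rank = { public: 0, private: 1, loopback: 2 };
  return rank[to] > rank[from];
}
```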
mkwst: those would have to reply
with CORS headers, but are actively maintained so they can do
that
... tried an implementation in blink. we make connections on ipv6 and ipv4 and race them, which makes it harder to implement. moving this up the stack will make it easier to implement in the future
... depends on your network, but for enterprises it's common to
map corporate resources to internal addresses . But they may
have a public facing login/portal thing that then references
internal thing. more common than I'd like
... if we shipped this we'd first announce it, then start
throwing warnings, and only later start blocking
dveditz: what happens in an ipv6 world when you can't tell?
mkwst: punting on that, let's
solve ipv4 where we can because it's going to be around. 2)
give enterprises an ability to specify what's internal and
what's not
... we'll need an enterprise carveout or people will stop
upgrading
... but maybe some kind of server opt-in, which they won't like
because it's work to get back to what you had before, but at
least doable
johnwilander: would it be nice if this were https only?
mkwst: yes. if insecure you can't connect to http rfc1918
estark: but then evil guy just launches the attack from their secure server
mkwst: that's OK -- now you know
who they are. at least an anonymous MITM at your coffeeshop
can't do it
... we want to catch rebinding attacks so we have to use the IP
address and not the hostname
<jeffh> fyi: https://stackoverflow.com/questions/21158036/how-does-port-scanner-portal-works
dev: I'm not sure the amount of breakage is worth the security gains
estark: there are a lot of web
features that are CORS violations. csp reports are one.
expect-CT
... several long threads, mostly on fetch, about these
exceptions. can't turn on cors for many of these because it
will break things
... leaning towards documenting the existing carveouts, and
saying new features need to come to us to avoid CORS
mkwst: are there other things
internal to other browsers? we found stuff in chrome for
example
... if the mime types differ from this list can they be made to
match?
UNKNOWN_SPEAKER: TAG has some comments about it
slightlyoff: serviceworker issue
719 is related to this
... SW reveals what resources a "no-cors" CSS stylesheet is
loading. can include confidential URL leaks
... I can't represent timbl but I can try to summarise our
conversations
... obviously * doesn't mean * because developers don't know
what they're doing, vs. I'm a responsible developer and give me
*
mkwst: the problem with * is it doesn't allow credentials, so they try to work around it
slightlyoff: the TAG hasn't gathered the evidence to figure out whether this is being used wrong, or whether this is an effective roadblock to bad behavior
mkwst: counter example is old flash crossdomain.xml with abuse of "*" -- it was easy, but gave unintended power
slightlyoff: different resources
have different defaults (credentials or not) and sometimes you
can't tell why a load failed to know if a retry could
work
... and sometimes no control to try a different way
... for instance resources requested from a CSS stylesheet --
no way to change the mode
... same difficulty as finding out if all my requests are https
or http. if we had a mode like upgrade-insecure-requests that
does "retry with no credentials on failure" that might kind of
work
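A sketch of what such a fallback could look like as a client-side wrapper; the platform opt-in mechanism itself is the open question:

```typescript
// Illustrative fallback: try a credentialed cross-origin fetch first, and
// if it fails, retry once with credentials omitted. Shown as a wrapper;
// the proposal would build this into the platform instead.
async function fetchWithCredentialFallback(url: string): Promise<Response> {
  try {
    const res = await fetch(url, { mode: "cors", credentials: "include" });
    if (res.ok) return res;
  } catch {
    // network/CORS failure: fall through to the uncredentialed retry
  }
  return fetch(url, { mode: "cors", credentials: "omit" });
}
```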
mkwst: that puts the onus on
every client rather than on the server that can just fix
things
... retry seems easier than adding a keyword on all the content
that would need it
dev: would like a way to say certain internet resources have certain policies
mkwst: what I hear from you is
that a mode that says "retry w/out credentials on fail" would
be helpful
... how would you spec that?
dev: another CSP directive?
slightlyoff: don't have a strong opinion
mkwst: the frequency this comes
up means we ought to do something about it. don't think Anne is
too happy but we'll find a way to convince him
... is this actually a priority, or something we're doing
because it makes timbl happy?
slightlyoff: like other footguns.
<scribe> ACTION: mkwst to take a stab at specifying a CORS switch "retry without creds on failure"
<trackbot> Created ACTION-222 - Take a stab a specifying a cors switch "retry without creds on failure" [on Mike West - due 2017-11-14].
<scribe> scribenick: mkwst
estark: https://wicg.github.io/isolation/
... Provides an isolation mechanism for particularly sensitive
sites.
... Not intended as a general-purpose mechanism.
... Targeted at apps that are willing to give up some
functionality in exchange for isolation properties.
... The idea is to deliver this via origin manifests.
Origin-wide configuration subjecting the origin to various
restrictions.
... One might be similar to Entry Point Regulation (https://w3c.github.io/webappsec-epr/)
... Restricts navigation to the origin from cross-origin
entry-points.
... Restricted to same-site cookies.
... Setting the isolation flag would give you your own process
in browsers that support that kind of thing.
... (These restrictions documented at https://wicg.github.io/isolation/#strategy)
... We put this proposal together a while ago, but haven't had
a lot of active work since then.
... It would be helpful to understand the priorities from
y'all.
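To make the shape concrete, a hypothetical opt-in as it might appear in an origin-wide configuration; none of these field names are from the spec:

```typescript
// Hypothetical: what an isolation opt-in might look like inside an
// origin-wide configuration. All field names are invented for illustration;
// see wicg/isolation for the actual strategy list.
const isolationConfig = {
  isolation: {
    "restrict-entry-points": ["/login"], // EPR-style navigation restriction
    "same-site-cookies": true,           // treat all cookies as SameSite
    "process-isolation": true            // hint: dedicated process if possible
  }
};
```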
ArturJanc: I think this would be
useful, and we'd use it.
... Not on `www.google.com`, but on high-value
subdomains.
... Account management sub-interfaces, passwords, etc.
... The navigation restrictions are particularly meaningful to
us.
dveditz: Would GitHub use this?
GitHub: Maybe? Need to think about it.
Dev: I find SameSite cookies to
be flexible enough to write my own policies.
... I generally tend not to be a fan of declarative policies
that I can't control.
... SameSite seems like a better, more flexible
primitive.
... That said, I can understand how this is useful if you can't
control cookies.
... If Firefox doesn't have SameSite, I'd prioritize those.
dveditz: Working on it.
Dev: Basic, flexible primitives
are more exciting. This is more of all-in-one, tries to solve
specific problem specifically.
... Dropbox probably won't use this.
ArturJanc: One primitive we could
tease out of this are the navigation restrictions.
... Might be useful in themselves.
Dev: Sure, extracting primitives from this proposal seems like a good plan.
estark: I think the main things
that don't already exist as primitives are
... process isolation and navigation restrictions.
... Process isolation is something everyone would turn
on.
... Also hard to spec out. Some browsers don't have it.
mkwst: Right. Not really part of the platform, not web-exposed.
[Process isolation discussion: Site Isolation is awesome. We should ship it.]
estark: Something of a tragedy of the commons. Finite resources, need a strong signal that an origin should be isolated.
npdoty: What about installed PWAs?
mkwst: I think the site isolation team would appreciate feedback on heuristics.
ArturJanc: For a security reviewer, it gives you a lot of things out of the box.
???: What happens if there aren't resources to spin up a new process when you navigate to the isolated page?
mkwst: We can never guarantee that you get a new process. But tabs die all the time on mobile. Maybe we just unload some other tab. Strong hint for the browser.
???: We could just error out the navigation if you can't get a new process.
scribe: This should just be part of site isolation. The browser should know what's important, and give the important things their own process.
ArturJanc: This proposal is more than process isolation. There are other aspects to the mechanism that are important.
npdoty: Is there value beyond
having these as separate features?
... Helps the security review process?
ArturJanc: It's a little at odds
with what Dev was saying. For me, the ideal model for this is
that it could be composed of primitives that could be used
separately, but there's a single switch to turn them all on at
the same time.
... Know they always apply to the origin.
Dev: Sure, but I'd suggest
starting with the primitives, then building the bundle, not the
other way around.
... I worry that the current approach bundles things up
first.
<npdoty> knowing they apply to the entire origin makes you confident that it applies to every endpoint, not just that some engineer missed something
Dev: but I agree that bundling things up is better for security.
???: It would be unfortunate if folks want to be isolated, but just can't adopt one of the primitives.
dveditz: What happens if a third-party tries to load resources from an isolated origin? Images, etc?
estark: I think that the current
proposal allows you to load cross-origin resources, but the
cookies are SameSite.
... Can't frame it, can load, but without credentials.
dveditz: What happens if the server tries to set a cookie on that response?
estark: It's in the origin manifest, so the policy would be sticky.
<tanvi> https://docs.google.com/presentation/d/1dIKMAf3saJraBqQoVx-pnZ1Me1Tc1ATAOhez_u9IMAw/edit#slide=id.p
estark: Like suborigins, this is
a complicated proposal because it introduces a new property of
the origin.
... Implementation would be a big investment unless we had
something like origin attributes.
dveditz: Wait, how could you have isolated and non-isolated versions of the site?
<jeffh> npdoty: i think so
estark: It might have data, then
ask for isolation.
... Need to shift from non-isolated to isolated.
tanvi: Origin Attributes are
something we're using at Mozilla to add properties to
origins.
... We have a few things in Firefox to extend past
scheme/host/port
... Firefox containers, first-party isolation, private
mode.
... Could imagine suborigins, isolate-me, other new things
falling into the same category of thing.
... Apply SOP on not just scheme/host/port, but on all of these
properties
... If an attribute doesn't match, the origin doesn't match.
... Example of containers.
... Logged into Twitter in a personal account.
... Load Twitter in a new tab with a new container with my work
account.
... Tabs would be cut off from each other.
... Containers are great.
... Almost everything that requires SOP is separated.
... Cookies, cache, localStorage, IndexedDB, HTTP auth, DOM access, TLS connections, service workers, broadcast channels, user certs.
... Haven't (yet) segregated:
... certificate overrides, permissions, locally stored data not
accessible to the web: history, bookmarks, saved search and
form data, saved passwords.
... Depends on origin attribute:
... HSTS and HPKP, separated for private mode, not
containers.
... OCSP responses by first party, but not containers.
... Is this a framework other vendors are interested in
adopting?
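A sketch of the extended same-origin check tanvi describes, using attribute names loosely modeled on Firefox's for illustration:

```typescript
// Illustrative: the origin tuple extended with attributes; two origins are
// same-origin only if scheme/host/port AND every attribute match.
interface OriginAttributes {
  userContextId?: number;     // containers
  firstPartyDomain?: string;  // first-party isolation
  privateBrowsing?: boolean;  // private mode
}

interface ExtendedOrigin {
  scheme: string;
  host: string;
  port: number;
  attrs: OriginAttributes;
}

function sameOrigin(a: ExtendedOrigin, b: ExtendedOrigin): boolean {
  return (
    a.scheme === b.scheme && a.host === b.host && a.port === b.port &&
    a.attrs.userContextId === b.attrs.userContextId &&
    a.attrs.firstPartyDomain === b.attrs.firstPartyDomain &&
    a.attrs.privateBrowsing === b.attrs.privateBrowsing
  );
}
```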
npdoty: Is this exposed to the site?
tanvi: Nope.
... But some features (suborigins?) might be exposed in some
way due to their nature.
npdoty: I guess I was asking whether you need to spec this out, or whether it's a browser feature?
estark: We got interested in this because, when looking at suborigins, having something like this as a spec concept would be convenient.
estark: Otherwise, speccing
suborigins means looking at every spec that does origin
checks.
... It's not clear whether that's the right path if we only
have one or two use cases.
deian: I'm super-excited about
this.
... I realize that it's somewhat painful to adopt.
... Suborigins, COWL, opens up the floor for adding new
security spec work.
... Gives us a hook to tie into from a spec perspective.
[discussion of cache that the scribe missed]
tanvi: For first-party isolation, all storage mechanisms are segregated per first-party origin.
mkwst: Are we really saving work?
It looks like even in the existing examples, we already have
exceptions.
... Perhaps we'd really need to look at all the SOP checks
after all?
<tlr> estark: could also imagine speccing in over-general way
<tlr> ... where every attribute gets to decide which individual feature is segregated
tanvi: From an implementation perspective, I'd rather not have edge cases.
<tlr> tanvi: would rather not have bunch of edge cases
tanvi: Checking against every origin attribute to find its policy.
... Error prone.
deian: Would it be possible to
encode these things as attributes?
... And relationships between attributes.
... So the check wouldn't be "equality", but a spec-specific
check for the given attribute.
... Is having a function we could override too much to ask?
<tlr> mkwst: if I can repeat, think he's suggesting that current check by Firefox is equality
<tlr> ... compares each attribute, if all are same, then origins are same
<tlr> ... dan suggests more granular checks for each type
<tlr> ... delegate to that spec for this check
<tlr> ... suborigin would define comparison, isolation would define comparison algorithm
<tlr> estark: think that is what just talked about
<tlr> tanvi: policy by policy
<tlr> npdoty: suggesting way to implement - functions you override
<tlr> estark: would add complexity
tanvi: We can figure out
implementations.
... Looking for additional use cases.
... Are there other things here that could justify this
concept?
ArturJanc: Coming back to
finer-grained origins.
... Could treat an execution context after a certificate
warning as distinct from the origin with a valid cert.
... Would avoid leaking data to MITM.
... Another possibility (that I don't endorse!) would be EV vs
non-EV.
... Probably a bad idea, but could be exposed as origin
attributes.
estark: We've poked at that a bit
in the past.
... Could also just wipe the browsing data.
battre: I wonder if this offers opportunities for login mechanisms.
tanvi: You could imagine Twitter
using a login origin attribute.
... For multi-user signin.
npdoty: Some of the things on
this list don't seem like SOP things.
... cache?
Dev: The argument is that we should extend SOP to include these.
npdoty: Some are orthogonal.
... the cache is distinct in a container, but it's one cache per container, right?
Dev: Cache is a thing that's
across the full browser.
... Shared.
... Now we're saying that it's not shared.
... This is this origin's cache.
... The cache-key includes the attributes.
npdoty: But the cache doesn't include scheme/host/port, right?
tanvi: Creating more caches.
npdoty: A few of these have similar properties.
ArturJanc: You'd likely want to share a cache between a suborigin and the rest of the browsing profile.
npdoty: I think Containers is a great feature, but if you spec it out, might want to separate some of these things out.
estark: Do you mean that there are some things in the web platform that would be expanded to include origin attributes, and that browsers might choose to do more with particular attributes?
tanvi: This is the framework
built in firefox.
... Not what Jochen specced out in an explainer doc.
[scribe was checking the time, sorry]
scribe: Chrome does this with profiles today. Nothing to do with origin.
estark: the question on the table is whether it's useful to take this concept and define it in a way that's useful for other web features.
<tlr> npdoty: all features suggested in this proposal client-introduced. Others server-introduced
npdoty: The ones in this framework are client-introduced, as opposed to server introduced?
<tlr> tanvi: what we have so far is client-introduced. came together with emily to talk about server-introduced
<tlr> wilander: can site code know?
<tlr> tanvi: no
tanvi: Right now the site has no knowledge of this.
npdoty: Similar to isolate
me?
... Maybe client-side separation?
... These aren't same-origin checks, but maybe the site would
like isolated cache?
tanvi: Not just process
isolation, but everything.
... Nothing talks to me, I talk to no one.
... Separates everything, seemed to fit nicely into this
model.
... Process isolation on top of that, if supported.
npdoty: Would we expect servers that opt into this to want cache isolation, etc?
tanvi: Probably. Not even a spec at this point...
wilander: WebKit partitions caches and storage today.
tanvi: Sounds like we're unsure if it's useful to spec this, and won't come to a decision right now.
<tlr> mkwst: agree. also think that this is a conversation we should move html folks into.
<tlr> ... need to talk to them about kinds of checks doing today, kinds of checks need for origin attributes
<tlr> ... annevk wasn't enthusiastic when talked a couple weeks ago
<tlr> ... need to touch all entry points anyway
<tlr> ... maybe do feature-by-feature, unify later when worth it (?)
<tlr> ... think need to make decisions about the various checks; not sure equality always the right answer
<tlr> ... haven't looked at all the places with origin checks
<tlr> ... don't have firm reaction, vague feeling is that this is a good primitive that doesn't necessarily save work
<tlr> ... but might be the right primitive to put into specs to make it easier to make changes in future
<tlr> tanvi: yesterday, didn't really come to conclusion on path forward for suborigins
<tlr> ... discuss this and suborigins together, and how model would fit together?
<tlr> mkwst: conversation would be valuable
<tlr> arturjanc: think this kind of framework enables really important things in the platform
<tlr> ... don't know what all of those would be
<tlr> ... separation could be useful in the future, even if for things we don't know yet
<tlr> ... based on discussion yesterday, don't know whether suborigins without this framework that makes origin more flexible would go very far
<tlr> ... this might enable suborigins and other features
<tlr> ... don't think we have conclusion, but think it's a useful primitives.
<deian> I agree with arthurjanc on this
<tlr> ... needs more work, but like a lot
<deian> https://presefy.com/#/channels/deian
<tlr> ScribeNick: tlr
<deian> presefy.com/deian
deian: with information sharing
(postMessage) assume other end behaves itself
... also, developers want to privilege-separate
... also, drop privilege / least privilege patterns
... prevent bugs where you might be leaking data
deian: use cases - password checkers, encryption services, etc
deian: run code with least
privilege or no privilege
... COWL framework: data can be sensitive to a finer-grained entity
... explicit control over particular namespaced-origin
... changes against last year: only confined iframes
... within iframes, treat code as potentially untrusted
... feature policy used to disable a bunch of unsafe APIs
... still have postMessage, XHR, fetch
... shouldn't be able to send data to a.com when you have seen
data from b.com
... if you've seen data from a.com and b.com, can't
communicate
... privileges -- a.com iframe that has a.com privilege can
communicate arbitrarily when it has read a.com data
... when it has b.com data, can only communicate to b.com
because doesn't have that privilege
... developers: sec-cowl http response header
... labeledObjects in the browser
... privileges: sec-cowl header (which can include privilege of
origin); or JS objects
... eg, password checker iframe that can only talk to
network
... implementation in chrome close to done
... firefox needs more work
... issues -- disabling APIs ad hoc -- would be nice to specify
things in feature policy
mkwst: feature policy folks are
interested in covering lots of things
... if have suggestions for things feature policy should cover,
please file bugs
... they are likely to add on the side of restricting more
things
... main concern, level of granularity - because it automatically extends into child frames and controls other origins, want to make sure granularity isn't too fine
... want large features to be disabled -- break pages, don't
expose security problems
deian: most fine-grained is message (scribe didn't get)
... currently using iframe sandbox flags - would like to avoid
those
arturjanc: there is proposal for
document.domain = 'null'
... intended to disallow dom access
mkwst: that proposal didn't meet
with a whole lot of enthusiasm
... nice thing was: trivial to implement, based on extant
infrastructure
... using document.domain in purely destructive fashion
... it's behind a flag in chrome
... submitted patch to html; that requires more extensive than
anticipated discussion
dveditz: csp sandbox?
mkwst: this seemed easy.
deian: protectedObjects as
promise?
... final - monkey-patching fetch is easier according to
avk
... want MIME types to match when dealing with sensitive
things
... maybe can just expose fetch for now because easier to
fetch, figure out XHR later?
mkwst: focus on fetch seems reasonable
Abdul: have been working on
chrome implementation -- pretty much done
... building on existing webapps to see how to
compartmentalize, use cowl
... thinking about how much can automate through frameworks -
make easier for people
... think there might be people in google security team who
might be interested
arthur: isolated realms
ideas
... interested in examples of applications. This is different
from other things on the web
... how to apply to existing framework, or example application
that resembles other things on the web
... could help clarify how others might use it
<deian> I have to run, thanks all!
palmer: blink-dev post about deprecating and removing key pins
wilander: nod
Huakai: yeah
wilander: never thought it was really good
palmer: as primary causer of this
foolishness, can only agree
... key thing is it's a thing in the web platform
... but it's managed outside the web platform
... we in this room do not have power over what CAs are up
to
... we do not have power over what OS / platform vendors will
include
... too many moving parts and too many different
interests
... cannot present to developers a serious, stable API
... we cannot write a unit test for it
... bad deal for web developers
... they haven't adopted it
... numbers are low - and that's good
... nip it in the bud
... it does cause some protection now
... protection is exactly the same as the risk: you can break
your site
... market has spoken
... what we want instead -- certificate transparency provides
not the same hard-fail runtime protection
... it does provide a lot of protection
... lots of faith, have seen it work
... less of a footgun, easier to reason
... better ecosystem interest alignment -- eg, "we will monitor
your certs"
... who will write the shell script to do the grepping?
... lots of people, for $$$
... most of the safety for close to none of the rest
... I want to kill my baby.
Angelo: reason why edge doesn't have it is that chrome was talking about deprecating
dev: confused. Isn't this
IETF?
... think it's fine to kill
[ ... ]
estark: Expect-CT doesn't have a subdomain option - have to add that
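For reference, a server opting into Expect-CT as discussed; the directives shown (max-age, enforce, report-uri) are the existing ones, and the subdomain option estark mentions is the missing piece:

```typescript
// Illustrative server-side snippet: emitting the Expect-CT header. Note
// there is no includeSubDomains-style directive here; per the discussion,
// that is the option that would have to be added.
function addExpectCT(headers: Map<string, string>): void {
  headers.set(
    "Expect-CT",
    'max-age=86400, enforce, report-uri="https://example.com/ct-report"'
  );
}
```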
dev: there's some time lag in
there
... what is chrome's response time when there's a mis-issuance
... need to get coherent logical story about how it's
secure
... also ok with deprecating
... don't know we have answer that's better than HPKP
jeffh: not a bad idea, but not
practical in practice
... key pinning works well in mobile apps
palmer: if you can update your
app!
... there might be things that get into your way
jeffh: it's a good idea, but a footgun in some contexts
wilander: was excited about PKP when google did it for its own services
... found terrible CAs doing things
... is every kind of pinning going out?
palmer: proposing to also get rid
of preloads at some future date
... launch CT more, turn down dynamic pins, then turn down
preloads
... [ interpretive dancing ]
wilander: preloads are something you can have control over
palmer: even then, we have had mishaps
<npdoty> (are there preloads of pinned certs, not just preloads of HSTS?)
estark: don't think of CT as
drop-in replacement for same security properties of
pinning
... CT deters mis-issuance
... hoping to drive down ecosystem wide incidence of
mis-issuance
dev: deprecating on PKP - risk is taken on by app authors, risk is borne by CAs
... agree ecosystem needs more investment
... think long-term is good plan
estark: interest in expect-CT has
10xed
... since announcing intent to deprecate PKP
dev: PKP is key pinning. lots of the gotchas are around pinning keys
... CA can still be useful
mkwst: doesn't caa solve that?
dev: dns - not really
secure
... and a mis-issuing CA can ignore caa
mkwst: doesn't solve malicious; does solve accidental misissuance
dev: feel like lots of places where people put themselves in a hole was key pinning, not CA pinning
mkwst: dangers to CA pinning as well
palmer: always had that as
guidance. people didn't follow or understand
... helps, but doesn't get you to level of proper API
guarantee
... neither does naming....
... things get revoked
... [ Mars landers, legs, sand ]
... not hearing internal screaming
dev: only internal
wseltzer: anybody interested in
that feature from smaller site perspective? somebody not likely
exposed in way that CT would help detect MITM?
... self-published sites?
palmer: expect-CT header would,
in beautiful future, make CT mandatory
... couldn't get a mis-issued cert without noticing
... CT is level playing field
npdoty: is it that small site would have to keep up with CT logs?
wseltzer: have auditors ...
npdoty: what do you do when you get a notification?
palmer: call CA to ask for
revocation?
... in some cases, CAs might call you
... future is moving toward automated tooling, alerts -- got
one last week
... expect all providers will start catching up
dev: browsers stopped doing CA
crls, no?
... revoke should mean revoke
palmer: hmmmm.
Huakai: revocation code is nasty
palmer: once we get increasing alerts, updates - that goes hand in hand with short-lived certs
... somewhat mitigates the revocation nightmare
mkwst: would like to transition to broader deprecation topic
<npdoty> I agree that revoking/upgrading your cert is easy when you're notified, but I'm still not sure what the small dev does when a different CA publishes a cert for your domain
mkwst: wondering if can extract
some principles about what should consider when deprecating
features
... blink - have done deprecations successfully, and some that
were painful and maybe not worth it
... deprecations are important part of maintaining platform -
remove something that was bad
... also, potentially painful - developer cost, in particular
when features are used
... worst example of deprecation in blink was
showModalDialog
... was bad feature - caused nested execution contexts that
depended on each other
... getting rid of it was security & understandability win
- good for platform, good for browser
... missed lots of enterprise usage, caused a lot of pain for
enterprises relying on it
... tried to take away from that - need to be careful about how
to examine thresholds / usage
... there are areas of the web that are metrics misses
... no data sharing - or looks like a small subset, but it's
the entirety of a small set
... try to have some balance between value of deprecation and
look at what deprecation does to web
... wondering if others have thought about same problem, and
what approach they take
wilander: had a couple
discussions with chrome devrel. set of ideas - maybe joint blog
posts, communicating in developer tools / web inspector
... didn't really reach consensus.
... example - plugins. both chrome and safari were trying to
deprecate flash; were chatting about silverlight; had troubles
with quicktime
... it was hard
... open to trying to do it
... think joint blog post could be a way
mkwst: for sth like flash, that makes a lot of sense
wilander: synchronous XHR.... might have fringe cases that we don't expect
Huakai: appreciated chrome giving heads-up about turning down smaller features -- eg ftp
mkwst: when blink deprecates
things, sending intent to deprecate to blink-dev
... two releases with deprecation warning
... maybe have a list
... like to find people who care - how to find?
wilander: floated idea that, if user enabled developer mode, inform about intent to deprecate in browser UI?
mkwst: in blink, dev tool focused. lots of warnings in dev console
estark: wanted that mechanism to exist....
arthur: webmaster tools?
estark: about to run experiment using search console to get site owners to fix cert problems
wilander: similar things on native platform -- telling users that 32bit apps won't work
mkwst: does exert pressure on developer, but fairly indirect
dveditz: do it; it's hard and tricky to know you've told everyone and then you find out you haven't
mkwst: concrete example -- would
like for appcache to not exist
... feature policy should be mechanism by which site should
turn off features
... would like you to not have to choose not to have
feature
... it's actively dangerous and a bad thing
dveditz: make sites slower when
they use it?
... to deprecate, actively harm performance of feature rather
than killing
palmer: insanity wolf for the web!
wilander: people want it to be around in safari until have service workers
??: if have service worker, won't have app cache
mkwst: didn't mean to talk
specifically about appcache
... if we are to deprecate a feature, what do we do as a
group?
... somebody gets to say "it's friday and I hate synchronous
XHR"?
Patrick: is meta issue developer awareness, or updates?
mkwst: awareness. some sites will never update; if below some threshold, should accept breakage
[...]
wilander: willitbedeprecated.w3.org?
mkwst: deprecation messages? report observer API? If you listened for these messages, would get deprecation messages
??2: earlier discussion - what's right way to contact us?
Patrick: intent to deprecate twitter..... maybe have some way to circulate intents to deprecate
... doesn't feel like large enough a problem that have to
engineer solution
... yet another list?
mkwst: take that to dinner
mkwst: anything to make monthly calls better?
dev: used to go - but
incentivizes decisions by those who make the calls
... have call only when ML thread is getting so insane it needs
resolution?
... once stopped attending, couldn't tell how and why decisions
were made
mkwst: ml traffic has died off.
tried to move stuff to github - that was intentional
... experiment was successful, but is it actual success or
failure in disguise?
wilander: like mailing list
... lower threshold for people to engage
... github feels like working in bugs; I do that all day
arthurjanc: +1
dev: webauthn subscribes list to github
mkwst: find that annoying, but also follow the github issues.....
dev: their pattern is more welcoming to newcomers; experienced people can filter
mkwst: maybe pause calls; move
traffic from github to list for some discussions
... delegating details to staff contact
wseltzer: happy to help
mkwst: -> mailing list
wseltzer: keep very occasional calls as checkpoints?
mkwst: not opposed, but if have
calls, must have a purpose
... -> mailing list
dveditz: let's cancel next week's meeting
jeffh: thanks mike, dan
mkwst: want to talk less
... also, thanks!