Permissions and User Consent Workshop

26 Sep 2018



weiler, jnovak, Andrew Hughes, https://www.w3.org/Privacy/permissions-ws-2018/papers.html


<Ted_Drake> here ted_drake

<tomlowenthal-unsafe> 👋🏻

<wseltzer> github issues link

<wseltzer> [introductions from the program committee]

<wseltzer> https://www.w3.org/Privacy/permissions-ws-2018/cfp.html#program-committee

<wseltzer> https://www.w3.org/Privacy/permissions-ws-2018/papers.html

Context, Jo Franchetti

<ParLannero> Hi, I'm Pär Lannerö, joining remote from Stockholm, worked on meaningful consent since 2010 in the CommonTerms.org project, and BiggestLie.com. Not sure if you can hear me.

<npdoty> scribenick: npdoty

jo: values of the Web, including new technologies and information, including APIs for hardware, sensors and other data
... being blocked from this data until permission is granted is essential
... trust users have in the Internet is important, can devalue the Internet as a whole
... how and why we should ask for permissions is unclear, no standard, Permissions API just provides a query
... specs typically require disclosing how much data is being collected, for how long, how it's being used, how to delete it, etc.
... a "crisis for permissions" frustrates users and reduces trust in the Web as a whole
... bad examples could be because of misunderstanding or malice
... mistrust in popups discourages adoption of tools that could be really helpful
... "are we the baddies?
... look at some of the bad examples, a group therapy session for sufferers of bad permissions
... immediate permissions requests on first load
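Jo's point that the Permissions API "just provides a query" can be illustrated with a short sketch. This only defines a function (it needs a browser to actually run); the API lets a page inspect permission state but gives it no say in how or when the browser prompts:

```javascript
// Hedged sketch: navigator.permissions.query reports the current state of a
// permission ("granted", "denied", or "prompt") -- it does not standardize
// how, when, or with what wording the browser should ask the user.
async function checkGeolocationPermission() {
  const status = await navigator.permissions.query({ name: 'geolocation' });
  console.log(`geolocation permission is currently: ${status.state}`);
  // A page can listen for changes, but still gets no control over prompt UI.
  status.onchange = () => console.log(`changed to: ${status.state}`);
  return status.state;
}
```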

<wseltzer> [excellent cat gif for "jump on the user as soon as they've landed"]

<mt_____> we see >90% rejection for geolocation prompts

jo: often don't even know what the website is about at that point, but location and notification permission prompts often appear then
... instead, we could ask for permission only when in use
... ambiguous wording (double negatives, confusing language) in some permission prompts
... long lists of partners or permissions: it's easy to accept everything, but hard to select only what they want
... in one example, a single "I Agree" stood for more than 300 checkboxes
... I would feel cheated, wouldn't trust the app that I was using
... frustration of full-page takeovers, with all content blocked until permission is granted
... landing page includes a full page of text
... asking for unnecessary permissions (maybe for future use?)
... GDPR requires consent with certain information; for Web permissions I don't know how long they're granted or have an easy way to manage them
... no explanation is given for why information is being requested
... do users know where to go to change a setting after the fact?
... dark patterns about misuse of sensor data
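Jo's suggested alternative to prompting on first load, asking "only when in use", can be sketched as follows. This defines a function only (browser required to run it); `findNearbyStores` is a hypothetical page function, not from the transcript:

```javascript
// Sketch of "ask only when in use": request geolocation from inside the
// click handler of a feature the user actually invoked, instead of on load.
function wireUpLocationButton(button) {
  button.addEventListener('click', () => {
    navigator.geolocation.getCurrentPosition(
      (pos) => findNearbyStores(pos.coords.latitude, pos.coords.longitude),
      (err) => console.log('User declined or lookup failed:', err.message)
    );
  });
}
```

Because the prompt arrives in response to the user's own action, its purpose is self-evident in a way an on-load prompt never is.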

<mt_____> did you know that with microphone and speaker access, you can build a sonar and locate someone's finger?

jo: let's gather examples / patterns on the whiteboard

wendell: what is required by law and what is chosen by the design team?

moneill: GDPR requires it be as easy to withdraw consent as to give it

serge: Context. a user might agree in one particular setting, but not want it to be used later / other purposes

jo: case-by-case basis on picture geotagging, for example

Frauke: users might read but not actually understand text. are there good examples?

tomlowenthal: I think users do read and understand, but if it's especially complicated, users can't understand because of the complexity of the system

Ted Drake: sites use inaccessible low-contrast, small-font text, issues that have been addressed in WCAG

scribe: applying a11y to these permissions could help

Christine_Utz: research on consent notifications in the EU; they are typically from the IAB framework, with a lot of information provided and yet not enough information
... might display all possible vendors, not the vendors actually in use

signal/noise ratio

aleecia: users do not trust us. Web is a hostile environment users try to navigate
... more users assumed an opt-out was actually a malware/spam vector

serge: Android/iOS have settings to opt out of long-term tracking -- the same persistent identifiers are still sent, along with a Do Not Track preference
... have to take it on faith that the data recipient is respecting that

robin: unless you're doing something extremely simple/specific, could simply be engaged in fraud
... took a team a long time in order to actually understand what is being done and how to demonstrate it's being used correctly
... transparency into how data is flowing

jnovak: sensors (without permissions) being used for browser fingerprinting, surprises users
... that might put more things behind prompts, yet another prompt
... what should we be prompting for?
... how do the first and third parties prompt on a particular web page?

<wseltzer> npdoty: concerned about problems where users throw up their hands

<wseltzer> npdoty: because they don't see impact from making choices, give up

<wseltzer> ... permissions fatalism

<jnovak> +1 to the phrase "permissions fatalism"

hta: mixing up too many topics -- where website is interacting with the user, and the website interacting with the browser which itself interacts with the user
... be clear about which space, or move questions from the browser to the site or vice versa

<jnovak> https://sensor-js.xyz/webs-sixth-sense-ccs18.pdf < Paper on sensors referenced by me earlier

giri: in Geolocation WG, there was a suggestion that a site be able to explain, in the browser chrome, how it was using a permission
... a concern about how honest vs dishonest brokers would use that functionality
... but can a browser evaluate the intentions of a site

[yes, that was definitely me, npdoty, that suggested that years ago]

moneill: well-known resource in Do Not Track to provide transparency information, that a browser could use to inform users

<mt_____> the presumption that the same .well-known information is presented to every requester is a little laughable

moneill: where to opt-out, etc.

<Zakim> achughes, you wanted to discuss notifying users about breaches and keeping track

achughes: if there's a breach that I as a user care about, I'd want to know whether I gave permission to that site somehow

jo: should the settings for what permission you've given be in the website or in the browser?

achughes: just want to know if something bad happened after the fact, a breach notification

Bobby_Richter: is there a list of good design patterns? good examples of what has been done on the web so far
... show other people how it's been done so far

Kantara: trying to learn from websites that are working

<wseltzer> npdoty: privacypatterns.org

daly: re needing college-level education, even plain language doesn't necessarily describe all the potential implications of how data (geolocation, camera) can be used
... patterns and inferences from things that might seem innocuous

mt: a problem with the premise that we need to obtain permission: do we need to have these features/functionality at all?

robin: should browsers be sharing it in the first place?

mt: turning off (or limiting) access to storage in third-party contexts might be a good place to end up, discourages asking altogether

[some discussion of Storage Access API]

mt: what features do we believe the web platform needs?

robin: and should there be a baseline as to what the browsers don't share?
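The Storage Access API pattern referenced in the discussion above can be sketched briefly. This is a hedged illustration (function definition only; needs a browser), showing how gating third-party storage behind an explicit request discourages asking at all unless the access is genuinely needed:

```javascript
// Sketch of the Storage Access API: an embedded third-party context must
// explicitly request access to its cookies/storage, typically from a user
// gesture, rather than getting it ambiently.
async function ensureStorageAccess() {
  if (await document.hasStorageAccess()) return true;
  try {
    await document.requestStorageAccess(); // browsers may require a user gesture
    return true;
  } catch {
    return false; // denied: proceed without cross-site storage
  }
}
```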

<tomlowenthal> +q

<achughes> has a data permissions patterns catalog here https://catalogue.projectsbyif.com

nell: enabling augmented/virtual reality experiences, we can build a restricted API based on user consent, but developers may polyfill using the open-ended camera

<achughes> projects by if has the catalog...

nell: all just computer vision, such that camera access implies these other capabilities
... working on a privacy threat model document (what camera gives, what depth gives, etc.)

<mt_____> and my response to nell (in the interests of time), maybe MR doesn't belong on the web

nell: how to communicate what is implicitly given to sites

<bobby> NellWaliczek: where is that risk analysis for computer vision? can we read it?

<mt_____> hober-ws: ++

hober-ws: regarding good examples, I think the best features are implicit, like input-file and drag-and-drop

<mt_____> even clipboard access

<mt_____> mostly

hober-ws: added features to the Web where we know users want to do that because they are trying to do it

Thomas_N: I think we should do all these things, and many more things in the future
... thinking of different groups: 1) saints, always doing the right thing; 2) devils, just trying to corrupt; 3) everyone else, providing a legitimate service but also want to grab data for financial purposes
... just websites trying to make a profit

robin: not necessarily that we want to get as much data as possible, but that we are stuck in an advertising ecosystem that requires getting as much data as possible
... would rather that browser vendors just made it impossible anyway

tomlowenthal: at Brave trying to make an advertising ecosystem that is targeted without requiring as much data collection
... if data collection weren't needed for advertising, a lot of this would be less of a problem
... wouldn't need to embed lots of third parties as well
... want the Web to do everything, because otherwise it'll just be local software on the machine that might not have a functional permissions model
... have guardrails that we can enforce

jo: have considered implementing things like micropayments as an advertising alternative

bradkulick: Web has grown through organized chaos, and business models were built on that
... companies aren't trying to do malicious things; we shouldn't let bad examples prevent us from seeing when companies are trying to build a good user experience

<Zakim> wseltzer, you wanted to discuss "web platform"

bradkulick: reputation matters in how users choose where to go

wseltzer: W3C talks about the "web platform"; to what extent is the platform a consistent set of expectations vs. a free-for-all of portals onto the wild web
... in the latter, are we giving up too much on helping users have a consistent way to navigate

ryo: regarding features in the Web platform, should we give access to sensors and devices, discovery of devices
... discovering devices might give out other data (like where you are)
... how to ask for permissions for devices that don't have user interfaces in the first place (Internet of Things, Web of Things)

<mt_____> for "things" we first need to address the question: for whom is this device an agent?

ryo: should have some standardized vocabulary about what a device might be sending
... can make i18n easier

harjot: even if permissions are clear and readable, still hard to capture the downstream uses
... especially if it's not clear from the vendors

wendell: who owns this problem?
... consumer's device/software; publishers (app, media, web) often copyrighted media for sale; third-party ecosystem
... each will provide a completely different set of solutions if they own the problem
... governments move slowly, but it's hard to get past those requirements once set

aleecia: study of user perception of responsibility (government, browsers, advertisers in that order?)
... dynamism makes it more complicated, so that no party actually knows the complete set of recipients

<wseltzer> aleecia: "consent is a lie"

aleecia: users considered Cambridge Analytica a data breach, don't realize that the Web is designed in that way
... this problem is an economic problem more than a technical challenge. if advertising went away, there would still be value in surveillance/data

<Zakim> mt_____, you wanted to discuss role of apps

mt: what is inherently good about the Web and how to translate to the Internet of Things
... the Web is inherently *casual*

<tomlowenthal> +q

mt: the ceremony of installing an application (downloading and permissions and running) is important, apps still have a place in that

jnovak: regarding who owns the problem: law, regulations, standards, browsers all move at different speeds
... when GDPR enforcement has a particular date, if other solutions aren't available, then people will come up with their own way of compliance
... not sure if the level of coordination that's needed to hit those deadlines is possible
... distinction between ads and tracking, ads will still be valuable

wendell: if standards are to be a part of the solution, something needs to happen there

tomlowenthal: happy to talk about most effective ways to develop standards, but as a given, we are going to keep building more stuff on the Web, and we should try to do permissions well

jo: transparency mentioned by a lot of people

aleecia: problem of w3c not doing UI?

wseltzer: we are a group of people gathered under those auspices, so let's just see what we can do, don't have to be limited to w3c standards

weiler: best features gather consent implicitly
... limited UI devices could be an interesting topic for a breakout

[snacks, coffee, break]

<mt_____> slides for the next session: https://docs.google.com/presentation/d/1-NBjGTXU42A6q9S1yTXFQYltUDVyZs4POklw7c1kscw/edit?usp=sharing

<ParLannero> Hi, hope you are having great coffee break discussions! I believe the challenge of meaningful consent in digital environments needs to be split up and adressed in several separate ways. Some of those could be coordinated by a standards body such as the W3C, others by regulators, yet others by browser developers, others by content publishers, educators and so on. Perhaps the W3C could publish a model of the problem space, to facilitate splittin[CUT]

<ParLannero> See two page elaboration at: http://www.meaningfulconsent.org/reports/mcde_report_lannero01.pdf I need to log off for a while, will re-join in an hour or so.

Accountability and Provenance, Martin Thomson

<weiler> scribenick: achughes

Session: to whom do sites answer

<wseltzer> scribenick: achughes

mthomson: Accountability is the topic

<wseltzer> Martin's slides: Making Sites Accountable

mthompson: wikipedia definition: accountability practices - governance focus
... actions need to be reported and misconduct needs to be punished (redress) - a feedback loop
... slides run through accountability aspects at different web eras

<wseltzer> mt_____: how many people run noscript routinely? [~6 hands]. Oh, that crowd.

<wseltzer> wendell: difference between features of the out-of-box browser, and plugins?

<wseltzer> mt_____: that gets us into question of agency

<wseltzer> serge: fundamental problem with click-wrap or blink-wrap claims to permission

<wseltzer> wendell: there's a Conde Nast webpage claiming your visit authorizes them to run JS

mt_____: contracts of adhesion exist - for example to run javascript - by loading the page with scripts, the person's redress is only to leave the site

<wseltzer> aleecia: GDPR differentiates between implied consent and explicit consent

<wseltzer> ... GDPR says implicit consent is insufficient

<wseltzer> ... in the US, contracts of adhesion are still viable

<wseltzer> jnovak: is the user-agent the user's agent?

jn: is the browser/user agent actually the user's agent and it can do whatever it wants to do - or should the user agent be a faithful renderer of the content?
... either model requires permissions in some form

mt_____: <input type=file> - uses implied permission because it directly relies on a user action. There's no redress (but probably not necessary)
... the user interaction model changes as browser/html features are invented and lessons are learned and the interaction models are evolved
... for example popup cookie permissions for every single cookie - that user interaction has been optimized away for the most part
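The implied-permission model mt describes for `<input type=file>` can be sketched as follows (function definition only; needs a browser to run). No prompt is ever shown: picking a file *is* the consent, scoped to exactly that file, one time:

```javascript
// Sketch of implicit permission via <input type=file>: the page only ever
// sees the file the user explicitly chose in the OS file picker, so no
// separate permission prompt is needed.
function wireUpFileInput(input) {
  input.addEventListener('change', () => {
    const file = input.files[0];
    if (!file) return;
    console.log(`user granted access to: ${file.name} (${file.size} bytes)`);
  });
}
```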

<wseltzer> doorhanger

mt_____: 'doorhanger' straddles the browser chrome and content area - asks a question and puts indicators in the chrome

<npdoty> I think it's not differential privacy, technically, and I think there's low granularity that isn't fuzzed precise lat/lon, but maybe we don't need to repeat that old conversation immediately

comment: 'accountability' in the slides and 'redress' is relative to the person (not other actors)

Diane: another issue with geolocation - if you are not moving and suddenly revoke geolocation permission, it has no effect - the data is already gone & there is some persistence

npdoty: this might be a characteristic of sensors

Diane: in the realm of Mixed Reality, this issue of location has more significance

Nell: cameras and VR have similar issues as with MR/location - the images of your living room are already out there and cannot be clawed back

tom: note that sensor data does not have to leave the web app to be acted on. Many of us are concerned about sharing information - but the sharing is not always required

mt_____: the current state is that once a permission has been granted to a script, the permission cannot be clawed back (there might be complex tech available, but it is complex and not prevalent)
... redress can happen at legal, technical, web, server, and other places
... notifications are a doorhanger condition - and have an extended temporal characteristic - the typical configuration is that the notification setting is supposed to be long-term but doing UX for this is hard and complex - hard to figure out what the right behaviour is supposed to be
... push has similar characteristics
... users cannot be put on the spot to be entirely responsible for calling sites to account
... Princeton - has started crawling the web looking for bad behaviour - to help make sites accountable.
... but tying that to actual consequences is tricky for the obvious reasons

<wseltzer> [mt's general template: feature: condition, accounting, redress]

gmandyam: why not take a malware prevention approach?

tom: malware prevention is a backwards strategy - reactive, not proactive. This is a big issue when the first act is the act that needs to be stopped.

gmandyam: but isn't reactive better than no strategy?

<Zakim> wseltzer, you wanted to discuss individual vs ecosystem

tom: sure, but overall it doesn't work very well

wseltzer: interesting thing about the Princeton work is that by using the crowd to find and shine light on the bad actors, users that follow are able to avoid getting hit as well

Harald: in security, 0-day exploits are by definition, not preventable - the first observation triggers action - trying to prevent all up front might lead to analysis paralysis

tom: there are design choices, like memory-unsafe languages, that cause vulnerability - better choices can prevent a lot

Harald: with WebRTC address harvesting - needed to see usage patterns to be able to spot the problem and then fix it

tom: note that the design of WebRTC includes disclosure of IP addresses - that design choice has consequences

<Zakim> npdoty, you wanted to comment on notifications and annoyance

Nick: does the accountability analysis format let us figure out when to use which mitigation path?
... e.g. preventative, redress, condition, could help decide

<Zakim> avadacatavra, you wanted to discuss memory safe languages

Nick: which things can we get effective redress by doing pattern analysis? and which things are immediately harming the user and need proactive

avadacatavra: notes that there is an overlap between web vulnerabilities and memory-unsafe languages - but that's not the only source of vulnerabilities

<npdoty> what is the "visited" case? CSS rendering or something else?

avadacatavra: note for 'visited' - lots of the problems would have been avoided by just using Rust 6 years ago when it was first written :) but new vulns arose because of newly missed security checks

tom: break things fast and patch is a bad approach

<mt_____> Visited rendering differences are just nasty. The approach is to ask whether we need the feature.

tom: because the user interface design choices on the web platform are going to be pervasive, we need to be thoughtful and careful

<npdoty> xfq, thanks, I think that's right, various browser sniffing vulnerabilities related to CSS :visited

jnovak: reinforcing diane's point - must do security and privacy design/engineering during specification design

<avadacatavra> mt_____: yes visited is terrible

jnovak: if redress is pushed to the law, it will not be satisfactory

mt_____: 'the internet lacks mechanisms for small justice' - to handle the small impacts that fall through
... but at some point lots of small impacts/losses add up and add friction / burden to the system

<wseltzer> [that's what class actions do, much as we may hate their implementation]

achughes: is the call to work on design patterns and language for accountability?

<Zakim> achughes, you wanted to discuss design patterns for accountability

<jnovak> re: "redress is pushed to the law, it will not be satisfactory", I think the issue is less "satisfactory" and more "timely"

npdoty: wonders if we have learned the lessons described in the slide analysis - e.g. window.open
... some lessons have been learned - for example the storage api is gated on engagement

mt_____: a challenge with engagement - it is hard to know what was 'going on behind the click' - what if the click was accidental?

npdoty: is OnLoad really a good time to ask for permissions?

tom: can't disable useful APIs because people will just code workarounds

<npdoty> I wonder whether a five-second timeout that people would abuse wouldn't still be an improvement on onload terrors

wbaker: perhaps there is already a design language for this - spans the whole stack - tied up in establishment of businesses & the browser is a very small piece

<npdoty> wbaker, I think it would be helpful to document that, where you see non-technical design measures that are effective

<avadacatavra> what is the lunch situation?

<npdoty> is "brightly lit" a known category of website?

wbaker: maybe look at 'brightly lit sites' and copy the patterns that are implemented there

harald: engagement is problematic - if the browser infers that a commonly visited site should have certain permissions, havoc ensues

<<end of session>>

<npdoty> engagement (of regular visits) vs. gesture of engagement (click or tap)

<inserted> scribenick: wseltzer

mt_____: some post-lunch thoughts, role of Electron applications


mt_____: electron is chrome-in-a-box
... looks like the web, for development, and like an app, to user

<npdoty> embedded Web views on Android / iOS might have similar questions about asking for permission

robin: installable web as distinct from casual web

Thomas_N: and how that incorporates powerful capabilities
... do we want to expose some of those to the Web? which *not*?

Ted_Drake: electron could be exposing that someone is using assistive technologies

wbaker: TSA discussion emerged from conversation about "manufactured consent"
... can we come up with alternatives, how consent can be done right?

Helen Nissenbaum on consent

<ParLannero> Practical question: The Webex teleconference is quiet. Did you resume the workshop?

serge: airport is not the right analogy; even reading the privacy policies doesn't tell us what will happen

Frauke: meaning it's too complex?

serge: and there's ambiguity that even the complexity doesn't tell users what will happen

<avadacatavra> tea consent -- https://www.youtube.com/watch?v=oQbei5JGiT8

<mt_____> please, borrow my USB charger... https://twitter.com/_MG_/status/949684949614907395/video/1

<weiler> scribenick: weiler

mt: what should browsers do re: persistence of permissions?
... some browsers remember forever. others are 'til you close browser. others are 'navigate away from page'.
... unclear to users.

duration and timing

mt: notifications make a good example because permission and consequences are disconnected.
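mt's point about the disconnect between the notification permission and its consequences can be sketched as follows (function definition only; needs a browser). The grant happens once, but notifications keep arriving indefinitely afterwards:

```javascript
// Sketch of why notifications disconnect permission from consequence: the
// user is asked once, but the grant persists, so notifications can appear
// long after the user has forgotten which site they granted.
async function subscribeToNotifications() {
  const result = await Notification.requestPermission(); // asked once...
  if (result === 'granted') {
    // ...but this can now fire at any future point, even weeks later.
    new Notification('Hello from a site you visited weeks ago');
  }
  return result;
}
```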

<moneill> +q

thomas: we might be showing prompts too much; which is separate from persistence.
... can browsers lean on OS?

<tomlowenthal> +q

<tomlowenthal> -q

wendell: previous proposal to simplify ... private browsing... there's some signal. Could elaborate on that.

<tomlowenthal> +q

<tomlowenthal> -q

<npdoty> transparency about which origin and ambient notice about when it's happening seems important for permissions-until-revoked-later

<mt_____> private browsing is so fantastically well understood: https://www.blaseur.com/papers/www18privatebrowsing.pdf

moneill: need to make info available to user to review later... 2 hr / N days should be part of original request.

mt: cookies have that: all cookies expire in 2035.

moneill: then at least users see it.

mt: i agree in principle: could sign a contract (a privacy policy), but they're not as effective as we need

<tomlowenthal> +q

serge: "we MAY use the info...."

[digression into 'we'll do anything the law doesn't prohibit]

npdoty: notifications is a good example because of distance in time. maybe we're getting toward one output I want: guidance to new APIs re: which category you fall into.
... Tess mentioned file input. implicit permissions: one-time, distinct.
... geolocation, as implemented now, is subscription model... must be duration.
... notifications must be duration.
... important pieces: transparency: need to know what sent the notification. ambient notice: e.g. crosshairs for location
... described 'duration' as a special category
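npdoty's distinction between one-time and subscription-style permissions is visible in the Geolocation API itself. A hedged sketch (function definitions only; needs a browser), showing that a single grant covers both a one-shot read and an ongoing watch, which is why duration and ambient notice matter:

```javascript
// One-shot read: the position is delivered once and that's the end of it.
function oneTimePosition(onPosition) {
  navigator.geolocation.getCurrentPosition(onPosition);
}

// Subscription model: the same permission grant now covers an ongoing
// stream of position updates until the watch is explicitly cleared.
function subscribePosition(onPosition) {
  const watchId = navigator.geolocation.watchPosition(onPosition);
  return () => navigator.geolocation.clearWatch(watchId); // caller can stop it
}
```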

jason: duration has to be proportional to value. geolocation in safari is 24 hours. storage access: is user gesture and 30 days since access @5.
... should we have a list of assets & timeframes? or a way to evaluate assets.

nick: needs to be proportionate.

jason: can we leverage OS level controls?

<mt_____> here's some reading material: https://docs.google.com/spreadsheets/d/1xWK4uf5O3v7xTo85U3X0gGVNfJLD8_W8Zt93BO37we4/edit#gid=0

jason: what makes the web browser unique is that most people have one OS installed but multiple browsers.

<mt_____> not that I agree with the assessment there, but the tabulation is valuable

<Hana> +q

wendy: another aspect: users get to understand these features as they interact with them. they might be better able to understand a feature after they use it for a while. being asked a week later "are you happy with this?" might do a better job of capturing people's wishes.

tom: i don't think the OS is likely to provide useful signals to browsers, for the same reason that I don't think the browser can provide useful signals to web sites.
... the browser should treat all websites as malicious. i don't see value to that sort of signaling.
... if I'm in do-not-disturb mode, I still might want to set permissions for notifications for when I'm not in do-not-disturb.

<jnovak> Expectations re: private browsing mode: https://www.blaseur.com/papers/www18privatebrowsing.pdf

<npdoty> signals of OS mode (like Do Not Disturb) not a good indication for granting permissions, but maybe the OS-level interaction of wanting to block notifications of a certain type would be responsive

tom: instead, we'll assume the prompt is a problem and default to "no"

<npdoty> frequently-denied permissions a good indication that a prompt might not be necessary at all -- can just deny

<serge> Presumably the sites know exactly how often their users decline these permission requests?

<serge> Are any of the browser vendors collecting anonymized statistics on this?

<tomlowenthal> I think that all permissions are frequently-denied.

<serge> It would be interesting to see actual data on this

<tomlowenthal> But they're still useful for the things for which they're useful.

hana: context: I'd expect a map site to ask; I'm not sure I'd expect a newspaper to ask. when we consider timing, expectations play a role.

<mt_____> I note here the recent trend from big sites to offer all their "applications" on the same origin, meaning that multiple properties share permissions state.

Ted Drake: if you're constantly turning off xyz, having the browser default to off ... seems like it actually violates user consent. shouldn't it ask the person?

tom: Diane

Jason: in case where standard is implemented, then a problem found, then we realize it needs to be behind prompt.... is there a difference between @2 and users making an active choice?
... there's a trend that platforms should take actions on behalf of the user, which might be in conflict with ideas re: transparency and control.

<frauke> Tom: decision fatigue is real

<npdoty> someone else can read these telemetry dashboards better than me, but maybe this suggests that notification permission is granted more often than rejected? https://telemetry.mozilla.org/new-pipeline/dist.html#!cumulative=0&end_date=2018-08-30&include_spill=0&keys=__none__!__none__!__none__&max_channel_version=beta%252F62&measure=WEB_NOTIFICATION_PERMISSIONS&min_channel_version=null&processType=*&product=Firefox&sanitize=1&sort_keys=submissions&start_dat[CUT]

<mt_____> npdoty: This is not entirely surprising. People have been trained to click the blue button.

<ryo-k> maybe people who reject permission often might not have opted in for the telemetry in the first place

gmandyam: jeffrey's spreadsheet doesn't include EME. connection between CDM and license can be opaque.
... the reason we can tolerate that is because, in some cases, it's from the same source.

<tomlowenthal> +q

<tomlowenthal> Sounds like an argument against DRM engines 🤔

<tomlowenthal> -q

gmandyam: overprompting for normal access (e.g. media playback) will not ...

<mt_____> our DRM engine runs in a very restricted sandbox, though I might concede that it is impossible to stop exfiltration of information

<wseltzer> gmandyam: can the browser pre-vet some feature-access?

plinss: we're talking about the browser mediating these requests - there is an opportunity to mess up. when the user is trying to make something happen and the browser isn't even letting the app ask, that's doing a disservice.

<mt_____> there have been various attempts at fixing this problem with form autofill

<gmandyam> Clarification: certain API access (e.g. EME/DRM) can be unprompted if the browser already has a trust relationship with the external entity (e.g. Chrome Browser with Widevine DRM engine)

<npdoty> plinss, that seems like a bug that maybe we can fix, if we more often gave them the option to provide location

<Thomas> Apologies for this being late breaking, but I just chatted with Alex Russell and he realized he forgot to post the position paper he wrote for this workshop: https://docs.google.com/document/d/1Vp-7N5PiBq9mCFTbSLdR-aLGnkiAForxZ4b-rB3TI40/edit#

plinss: example: user trying to take a photo and browser says no....

tom: icon informing user of request - discreetly, but discoverably.

<Thomas> should be accessible now :)

<Thomas> https://docs.google.com/document/d/1Vp-7N5PiBq9mCFTbSLdR-aLGnkiAForxZ4b-rB3TI40/edit#

aleecia: I think tom's point was "if users are frequently saying 'no'", then the number of cases of brokenness is smaller...
... there are both false positives and negatives, but you can do a good job of understanding user expectations.
... I don't think it's a huge problem.

<Thomas> try now

<Thomas> <permissions for documents are hard....>

plinss: it may not be #1 problem in this room. if we don't solve it, we've killed a big use case for web apps.

Permissions in New Contexts, Nell Waliczek

<inserted> scribenick: gmandyam

<wseltzer> Nell's slides

Nell Waliczek: (presenting on "The Immersive Web")

NellWaliczek: ... defines VR ...
... [describes different types of VR HW]
... Tethered VR - no onboard compute
... Mobile VR - smartphone based. Can only track head turns; 3 degrees-of-freedom
... Standalone VR - fully mobile, 6 degrees-of-freedom

<mt_____> Thomas: weiler: I've opened a PR for that paper: https://github.com/w3c/permissions-ws-2018/pull/63

<npdoty> like all great acronyms, it doesn't stand for anything

NellWaliczek: ... defines XR ...
... XR requires environment awareness
... "Hit testing" - can shoot a virtual ray and determine where it hits in the virtual world

<wbaker> https://en.m.wikipedia.org/wiki/X_Reality_(XR) has an origin derivation story set

NellWaliczek: XR continued ..., User Awareness - 3D location, eye tracking, facial expression
... ... defines Immersive Web ...
... gives examples of browser-enabled XR e.g. shopping for a couch and determining if it fits in a room

<xfq> https://www.w3.org/immersive-web/

NellWaliczek: WebXR is now a Working Group; Community Group will go on to incubate new ideas. CG has a repo for privacy/security concerns

<wseltzer> https://www.w3.org/immersive-web/

<wseltzer> https://github.com/immersive-web

<wseltzer> https://github.com/immersive-web/privacy-and-security

NellWaliczek: WebXR API is imperative (not declarative). Can discover available VR/AR devices, query capabilities, render on devices.
... considerations for privacy/permissions: fingerprinting during bootstrapping, real world geometry, camera access, object/image identification

<npdoty> I like the progressive enhancement model, and it seems hard to not have developers claim that they really really need all the permissions, or a particular device

NellWaliczek: fingerprinting during bootstrapping - mitigations include consent, inline vs exclusive data restrictions with user intervention, HW bootstrapping occurring last after consent provided

<avadacatavra> npdoty: right now they definitely are just asking for everything

<scribe> scribe: gmandyam

NellWaliczek: real world geometry - can determine PII such as specific location, size of user's home or determining user demographic, credit card #'s, gait analysis

<wseltzer> "you can estimate the size of a user's house by creating a game that makes them run from room to room"

<serge> credit cards are the least of the concerns here...

NellWaliczek: camera access and associated user perception. Issues are see-through/pass-through, inferring real-world geometry, polyfill implementation

<npdoty> +1, privacy of non-users of the device, we haven't tried to provide permissions for that

NellWaliczek: Object/image identification. Can be used to profile users, even blackmail users
... XR Permissions: how are they bundled?, can they be bundled with non-XR permissions?, can they be obtained upfront vs. JIT?, what should the permission duration be?


<wseltzer> thinking of "synthetic permissions" or "synthetic information"

<avadacatavra> https://www.nytimes.com/2017/07/25/technology/roomba-irobot-data-privacy.html

avadacatavra: Legal frameworks lag tech developments. All the data mentioned is not covered legally.

rnovak: To what extent is bystander privacy considered in XR?

<npdoty> bystanders seem relevant both on camera but also the depth/other environmental sensors

<avadacatavra> there is a mention of bystander privacy in the repo, i think in the real world objects issue

<tomlowenthal> Isn't all the info about your home protected in the EU?

<wseltzer> gmandyam: why are issues of geometry, etc. different for XR than getusermedia?

NellWaliczek: Bystander privacy not being considered currently

<npdoty> I think it's not necessarily that gUM was done badly, just that AR has the potential for functionality while requiring *less* permissions, which is a good minimization

<moneill> +q

"CV" = computer vision

tdrake: Passerby privacy - accessibility may impact this, e.g. vision app to inform user about person entering room

<avadacatavra> tomlowenthal: i'm not sure if gdpr covers it--home data might not be considered personal data? especially since it can be transferred/leased/etc. it's unclear how to manage home transfer and home data afaik

NellWaliczek: (in response to tdrake) role-playing apps, such as an app to teach a person how to use a cane. Bad experiences may also endanger the user.

<aleecia_> Other applications: http://www.abc.net.au/news/2018-09-18/china-social-credit-a-model-citizen-in-a-digital-dictatorship/10200278

tdrake: Certain XR experiences can result in memories.

<Zakim> avadacatavra, you wanted to discuss tensions between problem and feature

NellWaliczek: harassment in VR can be perceived as worse than in the real world

avadacatavra: future versions (e.g. implants) may result in people being unable to fully leave experience

<npdoty> yeah, there are non-data-protection privacy issues that VR illustrates better than a lot of examples we've considered

avadacatavra: Desired experiences could work against privacy, e.g. shared AR clouds.

NellWaliczek: (in response) for every concern I mentioned, there is a valid use case

<frauke> how to communicate consequences of choices people make with respect to their data

<npdoty> in theory, user permissions is the response to that kind of problem: that some people genuinely want something and other people really don't

NellWaliczek: (cont.) I don't have a clear sense of how to communicate to the average user the consequences of sharing their data. Permission fatigue/fatalism can result.

<Thomas> Is the topic of "permission fatigue / fatalism" on the agenda as its own discussion topic?

<npdoty> Thomas, I don't think it is, but we have open space on the agenda tomorrow

<avadacatavra> nearly forgotten thought--i've been told by some of the user researchers that people don't respond well to the worst case horrible scenarios that security/privacy people tend to talk about

<avadacatavra> which doesn't help the whole communication thing

<aleecia_> Thomas, next discussion takes it up somewhat, if the slides are to be believed

<frauke> writing question down instead: what theoretical framework can help us to sort through these use cases? Has anyone seen useful frameworks to measure perceived privacy?

<npdoty> avadacatavra, that's useful to know, thanks. maybe we should make a practice of considering both worst case and more typical case so we can plan for both

NellWaliczek: Linking immersive worlds - a use case where a user can navigate from one virtual world to another. This will be addressed in the WG/CG in the future, but same origin issues may apply.

Permission Bundling, Harald Alvestrand

<npdoty> scribenick: tomlowenthal

<jnovak> let me google this for ... myself? https://www.w3.org/2008/04/scribe.html

<aleecia_> avadacatavra, in studies I've done users tell me common practices are so unreasonable as to disbelieve they happen. "You sound just like my paranoid friend" was one comment that stayed with me. Her paranoid friend was a web programmer.

<wseltzer> Harald's slides

<ParLannero> Thanks for very interesting talk! It is now midnight in Stockholm and I need to go to bed. (I did not consent to nightmares.) Hope to tune in again tomorrow!

<avadacatavra> aleecia_: i've been looking at a lot of user privacy studies (generally) and concluded that we need a lot more in the xr space

<aleecia_> avadacatavra, this is relevant to my interests :-) let's talk more? am40@andrew.cmu.edu

<NellWaliczek> I was asked to post the github org for Immersive Web: https://github.com/immersive-web/

Harald: is proposing a mechanism to combat permission fatigue maybe
... existing permissions are disconnected, satisfying neither the people who use the web nor web developers

<ryo-k_> question - Is the wording/terms for what permissions to ask for agreed upon developers and users? We had different ideas on what XR is, and non-expert users may not know what allowing "6dof" "real-world geometry" even means. I also wanted to ask if the consequences of these technical terms can even be agreed on, but it was mentioned in response to the last question (nice point!).

Harald: [digression about E911]

<NellWaliczek> No, wording is not finalized. We haven't published our first Working Draft yet, so there's plenty of time to figure out how to make it clear. We are starting the conversations early so that we haven't accidentally designed ourselves into a corner

Harald: proposing a new prompt: "I'm a video-phone, can I do video-phone things?"
... consent involves three parties: the user (axiomatically good), the platform (which we have to rely on), and the application (here be dragons).
... apps do not start out having a trust relationship with the person browsing the web
... option: standardize several roles with specific capabilities
... sadly, this prevents apps from going above and beyond
... option: let app ask for a bunch of permissions, and the browser guesses what's needed
... this doesn't give a lot of detail to the person
... this probably poorly serves applications which are unusual
... this requires asking upfront, which creates a channel for annoying people
... option: you decide your general "trust level" in a page, and your browser manages the permissions

<Leonard> q

leonard: this sounds like an earlier question about *standardized vocabulary* and it relies on rapidly adding new nouns

<Zakim> npdoty, you wanted to comment on progressive enhancement

Nick: This makes things harder for developers to progressively enhance / degrade. Perhaps that's the right way to go.

<Zakim> achughes, you wanted to discuss about progressive permission grants

achughes: What about progressively asking for permissions over time? Combine that with time-based expiry?

<aleecia_> One of the questions is what a "video phone" should be expected to do. Are data for ads part of the bundle? Showing ads? Data for "research and development?" Fraud prevention & security? Analytics? &c.

<aleecia_> Being able to define what is part of a standard video phone turns out to have a lot of tensions built in

Harald: Let us consider camera permissions. Developers hated non-persistent permissions, and some people found those frustrating. This militated towards stickier permissions.

achughes: Stickiness makes sense when you're sure about the choice the first time. Expiry is a more human nuance.

Harald: Once the data leaves the browser, it's out of control…

achughes: But maybe it doesn't need to leave the browser.

Leonard: Just-in-time doesn't work in immersive environments where there's no trusted chrome.

NellWaliczek: That's not how it works in XR.
... The browser still has control of everything.

<achughes> Data flow analysis needs some emphasis in the privacy discussion

NellWaliczek: What you *can* spoof is navigation/chrome. So you can phish, but not force clicks.

<wseltzer> but you often can't guarantee that users can distinguish non-trusted from trusted UI

<aleecia_> (fwiw, users currently do not particularly distinguish browser chrome and space under publisher control.)

NellWaliczek: Progressive enhancement doesn't really work for XR, as articulated in the previous session.
... and progressive requests… y tho? If you know you need some permission, why wait to ask for it?

Martin: Will articulate the history of bundling.
... The unwritten contract is that browsers will prompt whatever they want however they want. Sites have an obligation to ask for what they need when they need it.

<NellWaliczek> Specifically the progressive enhancement thing... if you want eye tracking and surface meshes, why wait to ask for them?

Martin: But sites reasonably need several permissions, and tend to ask for those upfront. Example: a video meeting app which asks for camera, microphone, and notifications, way before the call actually starts.
... There are great dark UI paradigms for getting people to interact with doorhangers. Like lightboxing the whole page with an arrow at the hanger.

<npdoty> would browsers be interested in active measures for discouraging the intrusive, dark permissions requests?

Martin: We have a permission policy framework which is mutable, and it includes a transparency dashboard where we'd like to show the *purpose* for which a permission was requested

<mt_____> npdoty: I'm sure that we'd be interested, modulo limited resources

<npdoty> tomlowenthal, mt -- great! if we have agreement on what is good/bad, that might help with what can/needs to be discouraged

<Zakim> wseltzer, you wanted to discuss abstract intents and to discuss high-level/low-level

achughes: Sensor streams… ask for the microphone and use it for something unexpected. Unlimited secondary uses is ⚠️

Wendy: Bundling — if it makes for more intuitive prompts for people 👍🏻 And could we abstract the things sent over the wire so that some of the most granular info never leaves the browser.

<npdoty> we've talked sometimes about using capabilities, rather than particular sensors, to help with the problem of what can be inferred

jnovak: With respect to dark patterns for prompting. On iOS, we have seen every variation of approach for forcing people to answer questions "yes". And we offered an easy API to let apps get to the config pane for those settings.

Harald: There are three types of users. If given a "privacy slider", 90% will leave it alone, 5% will throw it to private, and 5% will throw it to public.

Serge: How do we tell people about all the possible consequences of a permission? Let's talk about *inference*. We normally see what info is collected, not *what it can be used for*. Tell people the latter!

<npdoty> +1 on inference (or "capability", I've been calling it)

Harald: Sites can't say why they want permissions.

Serge: If sites are going to lie, that sounds enforceable.

<mt_____> in response to hta's claim that we don't give them a chance to explain themselves: they have the whole viewport on which to make their case

Robin: Yes, but those backend systems are difficult to audit.

<npdoty> mt, but we don't force them, or even prompt them to do so, like with a required explanation parameter

Robin: at corporate scale, the uses of data get… complex

<mt_____> npdoty: the only way to manage that is to give them space in Chrome, which is a disaster

Robin: so, tl;dr: might not be enforceable in practice.

Serge: GDPR! Minimization!

Robin: In practice, that's still hard.

<npdoty> mt_____, I remember that problem, but is the current situation less disastrous?

Aleecia: Enforcement is still hard, providing purposes only makes enforcement easier.
... Bundling gets dark fast. Here is an example of some dark bundling (describes a UI).

<npdoty> that's a bundling of purposes, rather than bundling of sensors/capabilities

Aleecia: How about we use bundling for *purposes*, rather than just capabilities. We could even standardize!

<npdoty> scribenick: jnovak

research presentations on permissions and context

Primal: Postdoc at UC Berkeley. Talking about custom permission model for Android

goal of project was to build a permission model that meets user expectations

this permission model should only let an app access a sensitive resource when expected by the user

when we talk about privacy expectations, what matters is context (Helen Nissenbaum's contextual integrity reference)

Influence of context in decision making in different domains (mobile, etc.)

How to incorporate context back into the platform so that it can protect users' data?

Before building custom model, how many people know the existing model of "ask on first use"?

Whenever an app tries to access sensitive data, OS mediates and presents a prompt w/ Allow and Deny

This is an improvement over ask on Install where the permissions were listed before install

Problems with this model: (1) don't know how often they are accessing this information; (2) can use a resource (e.g. location) when the app is not running in the foreground. Android permission model doesn't capture that

Ran four different studies. General model was built across three Android versions. Can see what the applications are doing when they access a sensitive resource.

Prompt a user while doing the study -- this application accessed this resource, if you were prompted, what would you have done, allow or deny?

Collected 2m+ data points

30k hours of Android usage

During the four studies, found that 80% of participants wanted to block at least one request. Some were allowing everything and some were denying everything but majority were being selective

Two factors that came into decision making: visibility of the app (e.g. foregrounded / other visual cues or not) and foreground application (the application being used at the time of the ask)

Frequency of decision making was ridiculously high -- cannot ask every time a sensitive resource is accessed, because requests were made at 4 per minute in 2014 and at 6 per minute now

(Had 24 sensitive resources that were observed)

(if you look at all resources, then dealing with 30? requests a minute)

Given the above, wanted to see if could predict user decisions

Based on privacy decisions collected, found that ask on first use had an error rate of 15.4% but that an ML model had an error rate of 3.2%

tomlowenthal: What was the source of truth for error rate?

<npdoty> is the "ML model" just an approximation of "deny all background requests"?

Primal + Serge: During the study, people would see prompts a couple of times a day and sometimes the prompt would be shown for same app and asset as previously

Primal: App, permission, visibility is the tuple used for tracking the responses

Serge: On the ask on first use system would only see the request once; in the study, saw that prompt multiple times to see if every decision was consistent or not
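The (app, permission, visibility) tuple Primal describes could be sketched as a tiny per-user model. This is a hypothetical illustration of the idea, not the study's actual classifier; the class name, the counting scheme, and the "deny background requests" bootstrap heuristic are all assumptions:

```python
# Hypothetical sketch of a contextual permission predictor keyed on the
# (app, permission, visibility) tuple from the talk. Names and the
# bootstrap heuristic are illustrative, not the study's actual ML model.
from collections import defaultdict


class ContextualPermissionModel:
    def __init__(self):
        # (app, permission, visible) -> [allow_count, deny_count]
        self.counts = defaultdict(lambda: [0, 0])

    def record(self, app, permission, visible, allowed):
        """Store one observed user decision for this context tuple."""
        self.counts[(app, permission, visible)][0 if allowed else 1] += 1

    def predict(self, app, permission, visible):
        """Predict allow/deny; fall back to a visibility heuristic when unseen."""
        allow, deny = self.counts[(app, permission, visible)]
        if allow + deny == 0:
            # Bootstrap heuristic: participants in the study mostly denied
            # requests from apps that were not visible in the foreground.
            return visible
        return allow >= deny


model = ContextualPermissionModel()
model.record("maps", "location", True, allowed=True)
model.record("flashlight", "location", False, allowed=False)

model.predict("maps", "location", True)        # -> True (user allowed before)
model.predict("flashlight", "location", False) # -> False (user denied before)
model.predict("weather", "location", False)    # -> False (bootstrap: background)
```

Personalization, as Primal describes later, would amount to seeding `counts` from a population-trained model and then updating it with the individual user's own decisions.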

tomlowenthal: If an app prompts you in context with reasons why it is asking for the data, going to give a different answer than if I am asked later in a different context. Doesn't seem like it means that they would have answered incorrectly?

Serge: Every time the app accesses the data, sample, and on some frequency prompt.

If the first time hit allow would always be allowed or denied.

Martin: Not controlling for the priming

jnovak: differentiation between app prompt for access and the apps accessing the data. does that challenge the model?

tomlowenthal: Would show a different thing on the screen when prompting the user for permission

Thomas: Currently I'm primed for microphone access.

Primal: If you account for context, have to ask for fewer questions and increase privacy protection

npdoty: Is the learning per user?

Primal: system starts with bootstrapping; the ML is trained on data collected previously from 130 users.
... When we collect the user's privacy decisions in different scenarios, then get personalized models

npdoty: Would be different from what we've learned about permissions prompts?

Primal: Yes

npdoty: helpful for further efforts

?: Where does the model run?

Primal: On the phone.

Serge: Next up: Hana Habib, who has been working on other permission related stuff.

Then Aleecia for forward thinking things

Hana: An Empirical Analysis of Website Data Deletion & Opt-Out Choices (CMU and UMich)

A challenge of a new online permissions model is that it has to accommodate different types of permissions: data deletion, email opt-outs, and marketing opt-outs

all of these fall under pre-GDPR regulations (CAN-SPAM, etc.)

With GDPR, explicit consent is now required for data collection, along with the requirement to withdraw consent at any time and request deletion of information. GDPR also explicitly puts an emphasis on usability

through intelligible language and visualization

Goal of the work is to design a new permissions model / come up with recommendations for one that is more user friendly than what is currently out there.

Methodology: (1) figure out where websites are offering permissions and what sort of permissions they are; (2) are these realistic permissions; (3) identify best practices

Pilot sample from Alexa's list of top 50 US sites. Full analysis will be global and include the tail.

For each website, collect: Location; Shortest Path; and description in privacy policy

<wseltzer> whether or not there was a link to an advertising policy, and whether that link actually worked

Based on preliminary work already identified a few things that would be beneficial: standardize terminology so users can find it consistently across policies;

standardize functionality: simplify the options a user has / decisions to make (e.g. opting out of email marketing from one marketer required: finding the opt-out email control page, selecting the correct ones of 76 boxes to uncheck, and then seeing that there's an opt-out-of-all-76 option at the bottom);

standardize placement: websites offer different choices on different pages. For example, to opt out of targeted advertising, there was a choice in the account settings, but the opt-out page had five choices, and the privacy policy had two of the five. Unclear how to make sure you checked them all.

What is an ideal permission model?

1. Needs to be user friendly;

2. One click permissions;

3. Centralized location

Robin: one way to make this easier is to put it into the browser

Hana: Yes, alternatively, can use a privacy template (e.g. financial services industry)

<tomlowenthal> +q to reply about browser-mediated permissions


<Zakim> tomlowenthal, you wanted to reply about browser-mediated permissions

jnovak: Financial notices standardize because of incentives

tomlowenthal: Browser mediated permission requests sound great but can imagine a browser that says users never want to be tracked, so sites will never use API to determine status

not convinced that it will resolve in the way we would have liked.

Aleecia: One other way this has been done before was layered policies: there were seven specific choices; even with the titles kept the same, what you were looking for migrated. User comprehension went down -- if it was not in one of the seven boxes, users assumed the company was not doing that data collection

SarahSquire: When implementing RTBF, have to deal with sync and backups so repopulating after restores / syncs so had to create a 'graveyard' list to remember not to repopulate.

Hana: Have heard workarounds like encrypting data or hashing it to handle.

<mt_____> hashes FTW - then put them in a bloom filter
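SarahSquire's "graveyard" workaround, combined with Hana's and mt's hashing suggestion, could be sketched as follows: keep only a salted hash of each deleted identifier, so that restores and syncs can check "was this deleted?" without retaining the personal data itself. All names, the salt scheme, and the set-based store are illustrative assumptions:

```python
# Illustrative sketch of a deletion "graveyard": store only salted hashes of
# deleted user IDs, so a backup restore or sync can check whether a record was
# deleted without retaining the identifier itself. All names are hypothetical.
import hashlib

GRAVEYARD = set()
SALT = b"per-deployment-secret"  # assumption: a fixed service-side salt


def _digest(user_id: str) -> str:
    """Salted SHA-256 of the identifier; the raw ID is never stored."""
    return hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()


def mark_deleted(user_id: str) -> None:
    """Record a right-to-be-forgotten deletion."""
    GRAVEYARD.add(_digest(user_id))


def should_repopulate(user_id: str) -> bool:
    """Called when restoring from backup/sync: skip users in the graveyard."""
    return _digest(user_id) not in GRAVEYARD


mark_deleted("alice@example.com")
should_repopulate("alice@example.com")  # -> False: stays deleted after restore
should_repopulate("bob@example.com")    # -> True
```

A Bloom filter, as mt suggests, trades exactness for space: it can yield false positives (wrongly suppressing a repopulation) but never resurrects a deleted record, which is the safe failure direction here.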

Serge: Aleecia is going to talk about forward looking issues to law

Aleecia: Self-plagiarization

<npdoty> I'm curious about counting/measuring the usability regarding deletion .. because I would think the best practice would be deleting the data in the interface where you post it, rather than through a policy document page

is the topic / slides

PICS - used to strike down much of the Communications Decency Act

P3P - PICS for privacy

DNT - five bills for DNT at the time when W3C took it up

DNT did something interesting -- could say yes it is fine to track me, no please don't track me, and the 'unset'. The unset was going to mean something different in the US and EU -- US would be okay to track, EU would not track because lack of consent. Effectively swapping opt-in and out based on where you were

California AB 375 (DNT, is dead, long live DNT)

scribe: history of AB 375 follows

Ballot measure put forward by Alastair Mactaggart et al. at a cost of $3M; with 80% public support, companies responded.

Ballot measure replaced by a bill.

Ballot measure -> Bill transition included weaker enforcement yet broader coverage of what is included.

Ballot measure would have amended the California constitution, which means it would have been harder to change. Something slightly more changeable might be preferable

Law comes in to force in 2020

<wseltzer> "Mr. McTaggart did not spend $3M just to burn it"

There may be federal law that preempts the California law

Giri: what about foreign companies?

Aleecia: last part on this slide

What's in the bill?

<mt_____> "consumer" is perhaps the only word worse than "user"

Applies to any company world wide that is big (couple of definitions) or is a data broker

jurisdiction based on the consumer being in california

Robin: Seen a lack of clarity around the in california point. Suggestion that it is california taxpayer?

?: GDPR is the same

Robin: CCPA is different from GDPR in a different way, as it is not as clear

Aleecia: That is a good candidate for the cleanup bill?

Giri: Getting back to federal preemption, why wouldn't the preemption be covered under multilateral trade agreements?

Aleecia: Haven't seen anything yet under trade, but, there are people thinking about it. No one is working down that path yet to best of knowledge. More likely to be straight up federal law

CAN-SPAM was a federal law passed the day before a stronger California law would have given people more rights to limit the unsolicited email that came to them

Wiretapping is another example. Federal Preemption is "okay if you do two party consent"

<wseltzer> preemption can be a floor or a ceiling on state laws

Robin: The three-page report that was provided to the Senator said they should do something that preempts GDPR applicability, and maybe do something via the WTO?

Aleecia: as to what is anticompetitive... it would not be surprising if this goes down that road at some point.

get into human rights versus trade, would be messy

about the rights that would be in there: know what is collected; know whether sold or disclosed; say no to the sale; access; equal service and price; deleting data and data portability.

over 16, opt-out, under 16 opt-in

under-16 the parent can have a role.

Non-discrimination concept: if you say no to the data being used for a secondary purpose, you must still be able to use the service

E.g. Amazon gets to share data with Amex and UPS to purchase and ship a book, but must still let you opt out of it selling the information that you bought a book about cancer to a healthcare company

however, can ask the consumer to pay for the value of the data that you cannot use

(e.g. if the value is $17, can ask for $17 but not $1017)

In terms of what is personal information, there is a european-ish approach

Aleecia's question is What is a verifiable request from a consumer?

answer in the law is "The AG will figure that out".

Robin: The way that has been treated from GDPR is to look at data protection perspective -- if it is low risk request, handle with low level of identification. If it is a high risk request, do a high level of verification (e.g. an HR request triggers a call to you etc.)

Aleecia: Sounds reasonable, but, may not be what we get here.

Catalog Choice was an option to stop receiving junk mail -- they would send out information to companies sending mail to consumers and opt-them out; the mailing companies said they had no way to know that a human was behind the request and opted out

<wseltzer> https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180AB375

Anti-competitive nightmare of having company X getting a list of company Y customers and then going to company Y and opting them all out.

don't know where it is going.

exemptions from DNT-land

Enforcement includes a minimal private right of action: theft of unencrypted data, where you have to show that it was removed; there are 30 days to fix it, and the AG has a chance to enforce

Everything else goes through AG office

20% go to AG's office to offset enforcement costs

Initial requests: 57 positions and ~11m dollars

<achughes> achughes: one of the use cases we are thinking about for Kantara Consent Receipts is that the receipt is some evidence that you are the consumer that granted permission in the first place. There are issues that would need to be worked out in that model, but we are discussing in the WG.

Problems the NGO folks saw: they wanted opt-in to data collection (likely wouldn't survive US court challenges); a stronger right of private action; something more like GDPR with more rights, penalties, and limits.

Before and after GDPR, third party cookies have dropped ~20% in European news sites

room: 👏

Summary of Day 1

<wseltzer> scribenick: wseltzer

npdoty: Jo and I will offer some takeaways
... and then some planning for tomorrow's breakouts

Jo: "trust is dead"
... three levels: "lawful good", "chaotic evil", and "neutral"
... three levels of management: user, browser, app/website
... transparency of data
... and its use
... what counts as engagement?
... should the browser be prompting user to remind them of grants?
... should browser be intermediary, sometimes blocking requests?
... Nell's amazing intro to immersive web
... points about accessibility raised
... Harald re bundled permissions; should bundles be specified
... economics of data collection

moneill: storage access API, opportunities to standardize there?

mt_____: Feature policy, and how that interacts here

tomlowenthal: disuniformity of permission APIs and extent to which that's right or wrong

NellWaliczek: status of permissions API

ted: Blockchain!

wseltzer: browser uniformity or differentiation in permissions-handling

SarahSquire: OAuth and OpenID Connect are offering unique opportunities and ropes by which users can hang themselves; what do we think?

Thomas_N: How do we explain scary permissions understandably?

npdoty: We currently have a scheduled session re platforms, and another re policy.

<jnovak> platform discussion will be Apple + Google + Mozilla + Brave

npdoty: The "Who" question has been coming up a lot
... who's going to own, fix?
... split the discussion about what sites, browsers, policy, users can do

Christine: Purpose

npdoty: any other requests for breakouts?

Thomas_N: make the request for permission itself require a permission
... so users can stop sites from spamming them with permission prompts

tomlowenthal: sending a signal from user to say when to send permission prompt

Ted: hear more from Consumer Reports re position paper

wseltzer: @@

<xfq_> wseltzer: from a W3C perspective I'd like to discuss the next steps for our conversations, not only about standardization, but @@

hta: when do you need to trust other users, not only websites
... e.g. webrtc
... what does that do to your trust model?

jnovak: 1st and 3rd parties

npdoty: use cases or domains: XR, automotive, OAuth

weiler: with which should we begin tomorrow?

Jo: Transparency keeps coming up


Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2018/09/27 00:05:12 $
