<wseltzer> scribenick: wseltzer
[radioactive snakes]
NellWaliczek: Topics I'd love
feedback on
... * to what degree do we need to worry about fingerprinting
concerns at session setup?
... * directions to prior art, other specs that might have
common mental models
... to support good consistency or avoiding bad examples
... * immersive mode
... as an exclusive mode, what that means for permissions
... We're pro progressive enhancement
... notion that you can embed 3D experience in a page
... when you click the "please use my hardware" button, XR
becomes exclusive experience, like fullscreen
... a number of reasons, including how cameras work, how
headsets work
... current thinking, our permissions and security model is
wrapped around exclusive mode
... when you're inline, very restricted in data
... not even orientation without interaction
... and no other info
... but once you consent to immersive,
... then think about what it means to bundle permissions, e.g.
eye tracking, room geometry
... "AR permission"
... since once you have camera or geometry, lots of it is
inferrable
... bundle it as a single "AR permission"
... experimental "AR lite" mode
... can we enable the hit-testing on a synthetic AR? @@
... making sure we've got the actual security model in mind
hana: could you clarify orientation?
NellWaliczek: 3 degrees of
freedom, vs 6 degrees of freedom
... pixels that come from the camera are never exposed to the
website
... but in order to reposition objects, need hit testing
... a whole set of features unavailable in inline mode
... but inline mode is where you get the button
hta: We had a similar experience
in WebRTC
... wanted to let users select the camera they wanted
... without exposing all the cameras
... "constrainable"
... Section 11 of the media capture main spec.
NellWaliczek: pass ordered list of constraints when you request access
<xfq> https://www.w3.org/TR/mediacapture-streams/#constrainable-interface
NellWaliczek: can specify whether your request is exact, range
hta: you get back a media stream with camera that fits what you asked for, or a failure if no fit
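[A minimal sketch of the "constrainable" pattern from Section 11 of the Media Capture spec referenced above. The constraint values are illustrative, not from the discussion: the page expresses hard requirements and preferences, and the browser picks a matching camera (or fails) without exposing the full device list.]

```javascript
// Sketch of a Media Capture constraints object: exact values are hard
// requirements, min/ideal express ranges and preferences.
const constraints = {
  video: {
    facingMode: { exact: 'environment' },  // hard requirement: rear camera
    width: { min: 640, ideal: 1280 },      // range with a preference
    frameRate: { ideal: 30 },              // preference only
  },
};
// In a browser this would be used as:
//   navigator.mediaDevices.getUserMedia(constraints)
//     .then(stream => { /* a stream from a camera that fits */ })
//     .catch(err => { /* OverconstrainedError if nothing fits */ });
```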
NellWaliczek: that's the pattern
we're looking for, the only question is "do you support
AR"?
... Support session: do you support? then Request session
hta: we generated device list,
because if there's no camera, it makes no sense to
request
... you just get a list of identifiers
NellWaliczek: we've chosen to remove names, because people were misusing them massively
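[A hedged sketch of the two-step session flow described above: first a cheap yes/no capability probe, then the actual request, which is where consent and permissions come in. The `xr` parameter stands in for the WebXR Device API's `navigator.xr`; the feature names below are illustrative, not a fixed list.]

```javascript
// Two-step WebXR-style flow: "do you support AR?" then "request session".
// Passing `xr` in as an argument keeps the sketch testable.
async function startImmersiveAR(xr) {
  // Capability probe: reveals only a boolean, limiting fingerprinting surface.
  const supported = await xr.isSessionSupported('immersive-ar');
  if (!supported) return null; // stay in the inline (non-immersive) experience

  // The actual request: like getUserMedia constraints, the page declares
  // what it needs versus what it would merely like to have.
  return xr.requestSession('immersive-ar', {
    requiredFeatures: ['local-floor'], // fail the request if unavailable
    optionalFeatures: ['hit-test'],    // proceed without it if denied
  });
}
```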
npdoty_: re prior art, consider
fullscreen, as well as webrtc
... fullscreen has lots of security issues, e.g. user deception
and lack of context
... without URL bar, how do you know who's asking for
permissions?
hta: latest change, if you get a popup for any reason, break out of fullscreen mode
NellWaliczek: that doesn't work
for XR, so we'll need a different solution
... blur and focus; browsers talking about secure UI, and how
they could make that mode something websites can't spoof
... it doesn't work well in e.g. bank websites
... how do you communicate secure contexts in XR?
... Navigation environment
... tunnel
npdoty_: who's asking for permission, how will data be used?
avadacatavra: people in immersive web community would like to be able to traverse links without exiting immersive mode
wseltzer: any talk of hardware channel for trusted UI?
Primal: opportunities for overlay, spoofing
NellWaliczek: the hardware can assure that the composition engine, browser has the last word
npdoty_: lots of attacks, "punch the clown"
avadacatavra: re trusted UI,
mozilla has been talking about a sigil
... but making that non-spoofable opens up new problems
... e.g. getting a side-channel on the renderer to extract and
spoof the sigil
NellWaliczek: I'll add clickjacking example to the considerations doc
npdoty_: does the user even know where they are when they're being asked these questions?
NellWaliczek: if we say XR is
just one bundled permission, what does that mean for other
capabilities that have been permissioned differently?
... e.g. geographic alignment.
... or if they're not granted at the start, how do you get them
later?
... 2 things that come up a lot: Geolocation and Media
Capture
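[One way a page can check later whether a capability like geolocation was granted is the Permissions API pattern; a minimal sketch, with `permissions` standing in for `navigator.permissions` so the flow is visible.]

```javascript
// Query the current state of a permission after session setup.
// 'geolocation' is a standard permission name in the Permissions API.
async function geolocationState(permissions) {
  const status = await permissions.query({ name: 'geolocation' });
  // status.state is 'granted', 'denied', or 'prompt'
  return status.state;
}
```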
<npdoty_> maybe a separation between compass orientation and geolocation
NellWaliczek: 1st round of XR
won't have image recognition, so people will want to use the
camera to polyfill
... Data streams
... enumerate everything; or say entering AR turns on all the
sensors
... or something in between
... v1 won't have mesh data
... eye tracking, environmental meshes, object recognition,
image recognition
... trying to structure flow so as to be extensible
Primal: some could be annoyances;
some could raise real privacy concerns
... can we get default settings right?
NellWaliczek: eye tracking
avadacatavra: consider what
kindle unlimited is doing with pageview tracking
... authors were paid based on pages read
... gaming ensued
... 1st, authors created 6-page pamphlets
... then when pageview tracked, they turned to massive
tomes
[adversaries are infinitely creative]
NellWaliczek: we know there's lots of data, that's why we're restricting to immersive mode
Primal: when they approve a prompt, is it 1st party or 3rd party?
wseltzer: in immersive mode, how many parties have access to your experience?
NellWaliczek: there are lots of
different examples, some good, some questionable
... there will be third-party providers of 3D components
avadacatavra: ARlite, where page
doesn't get the data
... while it's tempting to push consent onto the user, we
shouldn't
... "do you consent to having your eyes tracked?"
<xfq> https://github.com/immersive-web/privacy-and-security
NellWaliczek: Thanks! We have a
repo for webxr
... and will invite people to continue this conversation
there
... We will be meeting at TPAC
ted: want to raise accessibility
concerns before we leave
... if I'm blind, I want eye tracking to help read what I'm
looking at
... if I'm deaf, I want access to stereoscopic microphones to
know where sound is coming from
Leonard_: in prior art look at Niantic re Pokemon Go
npdoty_: that seems a counter-example, you had a black screen with "please grant all these permissions"
<inserted> scribenick: xfq
Primal: do the users say
"allow"
... prompt the users
... if you're an evil developer, how do you get the users to
approve your permissions?
<npdoty> wb: make permission required for the functionality that the user is looking for right now
Alisa: level of engagement, trust
wb: if i'm allowed to do this to
you, then blah blah
... it can only be based on reputation etc.
nick: see what permissions the
apps are using right now
... the purpose of the app is important
<npdoty> people can have a mental model of what the purpose is, if they see a map and are asked for their location, a purpose is clear to the end user (even if later it might be used differently)
?: formulating policies
scribe: servers define the policies
<inserted> scribenick: xfq
wb: the scope of time sounds good
<npdoty> to Alisa's point, asking people after the fact (asynchronous notice, or retrospective auditing) might help confirm the relevance of consent later on, because upfront duration is a challenge
wb: we talk about "on the
internet, nobody knows you're a dog" kind of things
... but in Auto, people must have IDs
@@: when data can be processed and when it can't be
scribe: @@
wb: do you have a vocabulary for that?
@@: there's a rule processor
scribe: e.g., we're interested in the total mileage
ryo: what brand of devices you
have
... temperature etc.
... easily fingerprintable
... mostly done with a web service
... the question is how you can trust the web service is using
the data correctly
hober: here's an example for
media
... on the tv, @@
... the way web apis works is @@
... it does not get the device list
alisa: to make things difficult,
two questions
... 1) there might be conflict between different
permissions
... 2) when it's a shared use, how to handle?
... e.g., coffee machine
gm: it's not always easy to
identify shared use
... you don't even always have a UI
... you mean for IoT devices, you need a way to identify
users?
[Martin presenting Privacy and Transparency Interoperability, Standards and Vocabularies]
<weiler> scribenick: weiler
<scribe> scribenick: christine_utz
mk: need a common language to
enable communication between all of the different parties
... -> semantic stack
... previous work has focused on levels 2-4
... once we have a common vocabulary, we can proceed to do
automatic compliance checking
@@@@@@@@@@ can this vocab be used by advertisers?
mk: interesting question [no idea
how to sum up the answer, sorry]
... components of personal data processing. we need vocabulary
/ policies for all of these
... use case once we have a common vocabulary: regulatory
compliance for big companies
... problem: purpose of data processing may be different for
different departments. Deutsche Telekom's (big company) use
case: keep track of user's consent
... smaller companies often lack privacy staff, need for a tool
set to automatically check policies for compliance
<weiler> mk: goal is being able to automatically prove gdpr compliance.
mk: report about w3c workshop on
data privacy controls and ?? (vienna)
... most important topic: taxonomy of regulatory privacy terms
(including GDPR terms)
... another outcome: w3c community group -> collect use
cases, existing vocabularies & aligning them
... points to use cases and vocabularies lists on slide
... use case: sharing of location data
... slide: use cases/vocabularies from SPECIAL example.
sentence with multiple highlighted aspects
... call for ACTION: join group, it's open for
everyone
... need to agree on a common language, the base of which is a
common vocabulary
wendell: is your language rich enough to capture gdpr concepts and match it with a set of purposes?
mk: yes, the language is powerful
enough. still working on how to do automatic consent
checking
... currently consent can be presented to the user in
transparency tool. policy check doesn't work yet
wendell: is there a way to combine two expressions and get the residual?
mk: yes, there is a combination operator
wendell: are there tools for enforcement yet?
mk: it's on the roadmap. tools
are mainly written by one industry partner + university
partners. this is a long-term EU project
... tool is not stable enough yet to give it to the general
public
primal: how do you intend to deal with purpose?
mk: service provider must provide purpose. "use the data" / "making business" is too general. "providing personalized ad" is concrete enough. it's still up to the humans to provide valid purposes
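[A hypothetical sketch of what automatic consent checking against a shared vocabulary could look like, in the spirit of the semantic stack described above. The field names and vocabulary terms are invented for illustration; real projects such as SPECIAL define these formally, and also need the combination/residual operators over policy expressions mentioned earlier.]

```javascript
// Invented consent record, decomposed into aspects of personal data
// processing: data categories, concrete purposes, processing operations.
const consentRecord = {
  data: ['location'],
  purpose: ['personalized-ads'], // concrete; "use the data" would be too general
  processing: ['profiling'],
};

// A processing request complies if every term it declares, for every
// aspect, is covered by the user's consent record.
function compliesWithConsent(request, consent) {
  return Object.keys(request).every(aspect =>
    (request[aspect] || []).every(term => (consent[aspect] || []).includes(term))
  );
}
```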
brad: there has already been work
on a classification of purposes (IAB etc.)
... legal framework: no bundling of purposes; suggests joining
forces to work on this
wendell: these were developed by industry in an emergency situation (looming GDPR deadline). there still is room for improvement and the industry is open to improving their frameworks / taxonomy
robin: IAB is aware of their framework being not very good and there's lots of room for improvement, will be happy for support
gmandyam: could include consent vocabulary into requests for ads
wendell: clarification what IAB
is (there are 3 different IABs)
... each region has their own IAB
... ex: IAB EU
gmandyam: has been wondering whether streaming media should require a different set of permissions
wendell: there's video-oriented
standards for this (ads in videos???). consent will happen
outside of the video on the page
... industry has to continue operation despite new legal
requirements. is open to improvements to short-term
solutions
... don't want consent info to show up in the middle of a
video
... need to figure out when to best ask for consent without
interrupting user experience
... standards need to find out what they wanna do and figure
out how to do it in the minimally intrusive way
mk: wraps up session
<weiler> scribenick: weiler
thomas: risks: 1) fingerprinting, 2) malicious actors
leonard: these options give
tremendous access to the device/content. very scary. should we
give these?
... what are we giving up?
thomas: some of these might not
be very explainable.
... camera might be. clipboard might not be explainable.
weiler: and we might not understand them either
thomas: we're looking at
engagement scores.
... e.g. fonts. exposing list of fonts behind an engagement
check
nick: maybe we can do something
in capabilities rather than resources.
... if we're explicit about process, we might find
similarities.
... maybe users understand re: filesystems.
... maybe this will allow persistent tracking... describe the
capabilities, not the resource.
sarah: ping has talked about this
as worst-case scenarios. if you're giving access to files...
"this is your tax return". "your family's health info".
... try to explain worst case.
primal: there was a study re: android re: how to frame the text. worst case scenario will push people away.
weiler: i worry that we won't get it right - e.g. vehicle case.
thomas: malicious actors, and good
actors that act badly/wrongly. fingerprinting is feasible with
just allowed permissions. e.g. fonts. users can't reason about
that.
... a news site has no need for that. but they could use it for
fingerprinting since it passes the engagement check.
nick: i'm less willing to give up
on fingerprinting. for fingerprinting... we're thinking re:
mitigations, not solutions...
... need to think about whether it's available in a drive-by
scenario. if anyone can get it, 99% case it will be used for
f-p.
nell: we've been thinking about it in terms of permissions... does average person think about it in terms of "do I trust the site"?
sarah: just because I go to site often doesn't mean I want to give them more ability to track me.
[***]
nell: can I just have a "i trust
this site"?
... do we wind up designing systems that fit the model of the
privacy conscious and not the rest?
Alisa: why should we expect them
to care? we expect a product to not kill us.
... I would love it if permissions died in 10 years; if we
didn't have to ask.
sarah: as an individual, I want to be able to write software. so I don't want a gatekeeper on who can write software.
thomas: [intro to progressive web
apps - installing an app]
... ceremony of installing
plinss: they expect that an app store has vetted it. and an "install" link doesn't give them that.
sarah: ultimate indicator is :have I already told this site who I am?
primal: bigger Q: do they have different mental models for apps and websites?
nick: danger in "trusting the site" model. @@ this doesn't capture the concerns I hear.
tess: when I navigate to an article sent to me, and they want location... I trust them for reading article, but why would they need to know geolocation?
thomas: I wonder if there are
capabilities that could not be misused by good actors.
... take-away: different permissions can be misused
differently.
nick: danger on "good site" model: perverse incentives ... anti-competitive practice... if a site becomes well-known, now it's easier for them to get scarier permissions.
plinss: 'crowd deny' might be more useful than 'crowd allow'
ryo: if doing dangerous site model, should be transparent. might lead to wrong optimization [scribe: gaming?] .
nell: might end up with too many sites marked bad
tess: well, if most are bad.
plinss: I get that UA's need to protect users from malicious sites.. if you get put in the same bucket and have to fight to get capability...
sarah: this is why transparency is important.
weiler: I wonder if these trust issues would be better addressed through policy, contracts, whitelisting rather than automated technical enforcement
mt: the mere existence of these features has already eroded trust in the web. geolocation...
thomas: we should discuss whether we should be doing things. web v. native.
leonard: how do you treat a chromebook.
thomas: these are now interesting
because they're running android.
... I worry about this [walled gardens and applications]
happening to entire web
tess: i think in the rush to add capabilities, we're destroying the village in order to save it.
nell: I think there's unique
value of the web that aren't in native platform. I don't want
to install every hotel's app to view their room in VR.
... we do ourselves a disservice by not enabling features. and
leaving this to only native.
plinss: PWA have blown native apps out of the water.
jo: an easier to implement
nick: we have safe browsing. you can go to a website and not be afraid.
tess: I can go to a website and not fear.
nick: maybe this will happen in native?
mt: this is starting to happen in native.
[discussion of Microsoft]
mt: web/casual... can just go to any site. maybe there's a way to download a VR file SAFELY. it's not "VR web" - it's "web as transport for VR"
nell: imperative v. declarative
API.
... both tool sets need to exist on the web.
mt: disagree.
[Thomas doing take-aways]
sarah: OAuth enables massive data
sharing. DHHS project: an app store
... they wanted non-gov't to build these apps, and allowing
anyone to build an app, and let medicare patients import data
into app.
... then cambridge analytica happened. app had used OAuth to
connect to fb. users had given some data. data was sold and
used to target them for ads.
... news called this a 'breach'. identity community said 'they
got consent to get the info they got'.
thomas: it was data OF YOUR FRIENDS
sarah: identity community trying
to grapple with this mismatch of user expectations. how to make
it clearer; make better decisions; prevent them from doing
this. (I don't support the latter - treating them as children.
I think you should do as
... much informed consent as you can. but not let them share
re: others. by law or tech.)
... q: how do we prevent this mismatch of expectations given
how hard it is to get people to understand what they're
sharing?
... how do we prevent cambridge analytica w/o treating people
as children.
frauke: as an academic, i would think that when I tell users I'm doing academic research, I'm not allowed to sell data.
sarah: as FB implemented it, they
were allowed to sell.
... 'academic' was self-identified.
frauke: who owns the problem?
sarah: do we want to allow academics more latitude?
tom: problem is that you can't
determine in advance who will misuse data.
... would be good to criminalize it. attach legal
penalties.
... user of data can make promise...
... to data subjects
weiler: distinguish this from other consent prompts?
sarah: more powerful. can act as
the user.
... can be hard to explain the application. ex: an app that
pretended to be google drive.
... consent fatigue.
... was actually malicious. got access to mail, and spread to
contacts that way.
weiler: oauth allows constrained tokens.
tom: if I get a token from you,
does oauth allow me to communicate about me? or my
purposes?
... could do slick UI things.
sarah: that string does not exist
in the protocol.
... we have conflated login/sign in with "act on my behalf".
it was designed for the latter, but used for the former.
ryo: is there a way to communicate the purpose? no human readable text?
sarah: no/right.
tom: maybe do standards work to communicate promises and
sarah: might be hard because .e.g they're a bank that does MANY things.
tom: list the worst things that can happen.
dlieberman: no standardized vocabulary?
<ryo-k> OpenID's standard vocabulary about profile information https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims
sarah: there is a standard vocab for application permissions but not to DESCRIBE, as user string.
weiler: [medical consent as context]
<ryo-k> (as far as I know, there is no vocabulary that specifies what a bearer token does, because the capability depends on what API the service has
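[A sketch of building an OAuth 2.0 authorization request, illustrating Sarah's point above: the `scope` parameter carries machine-oriented permission identifiers, and the protocol defines no standard parameter for a human-readable purpose string shown to the user. The endpoint, client and scope values are made up.]

```javascript
// Build an OAuth 2.0 authorization-code request URL (parameter names
// per RFC 6749; concrete values here are hypothetical).
function buildAuthorizeUrl({ endpoint, clientId, redirectUri, scopes }) {
  const params = new URLSearchParams({
    response_type: 'code',
    client_id: clientId,
    redirect_uri: redirectUri,
    scope: scopes.join(' '), // e.g. 'openid profile' -- identifiers, not explanations
  });
  return `${endpoint}?${params}`;
}
```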
thomas: @3
sarah: [OIDC]. if there is a
secure key store, device can act as a yubikey... site doesn't
know anything about user. CTAP & WebAuthn.
... [this was a 'what can browsers do' answer]
... we try to avoid going through the browser as much as we
can.
thomas: can the browser tell when someone is logged in?
sarah: logout is another
issue.
... is it about breaking the link, or logging out from the
identity provider, or.....
... half of US shares devices.
plinss: if I use FB to log into fitbit, if I log out of FB....
sarah: you stay logged into fitbit
ryo: so single sign-on, but not sign-out?
sarah: yes.
... take-away: should attach a human readable string to the
scope
dlieberman: should/could also be machine parseable.
<wseltzer> Meeting: Permissions Workshop day 2, breakout room
Present: NellWaliczek avadacatavra Thomas Ted Primal hta xfq plinss Leonard Hana Martin npdoty hober wseltzer Alisa
Scribes: wseltzer, xfq, weiler, christine_utz