W3C

– DRAFT –
Voice Interaction

10 April 2024

Attendees

Present
debbie, dirk, gerard, hugues
Regrets
Dirk
Chair
debbie
Scribe
ddahl

Meeting minutes

debbie: explains interoperability

hugues: this is dangerous because your personal assistant knows a lot of information that is private
… I don't know how to teach a machine the different layers of privacy

gerard: need to anonymize the question

hugues: I love the idea

hugues: I am developing two things; one is a simple, rule-based dialog system, in the interest of getting something working rapidly
… when I get an answer from the person, I analyze it with an embedded AI
… from that I build a Knowledge Graph (KG) of the person; then I can query the KG about the person to reconstruct their life, so as to write their biography
… you can tell things to a ghostwriter that you don't want disclosed to certain people. A human ghostwriter can understand that, but I don't know how to tell that to the machine.
… if I give my companion my credit card PIN, it can buy something, but I don't want anyone else to get my PIN
… people are not supposed to access my companion without authorization, or understand different levels of privacy
… you can't be sure that people will respect different levels of confidentiality

gerard: we have to trust our majordomo "personal assistant"
… like Knowledge Navigator

hugues: how to define rules to keep information secure
… the KG can mark information as private by adding links to each vertex to indicate level of privacy
… from 1-10
… the issue is in a dialog not everything is private, and it's hard to say "this is private"
… the issue is that you might develop too much confidence in the companion
… this happens with humans, you might say things that you forget are private.
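
A minimal sketch of the idea as described, assuming a flat fact list rather than hugues' actual KG; every name, fact, and level below is hypothetical:

    # Hypothetical sketch: each fact carries a privacy level from 1 to 10,
    # and a query is answered only up to the asker's clearance.
    from dataclasses import dataclass

    @dataclass
    class Fact:
        subject: str
        relation: str
        obj: str
        privacy: int  # 1 = public, 10 = most private

    kg = [
        Fact("hugues", "birthplace", "Paris", 1),       # fine for a biography
        Fact("hugues", "worked_in", "defense", 7),      # not for everyone
        Fact("hugues", "credit_card_pin", "****", 10),  # never disclosed
    ]

    def query(kg, subject, clearance):
        """Return only the facts the asker's clearance level allows."""
        return [f for f in kg if f.subject == subject and f.privacy <= clearance]

    print(query(kg, "hugues", clearance=5))  # biography-level facts only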

debbie: link?

hugues: no, but arxiv paper to be presented next week

gerard: presented this work at Conversational Interaction two years ago with Speech Morphing

hugues: what is new is that now we are taking advantage of embedded AI
… now we are asking for weather, Wikipedia information
… my own data are not public
… I've been thinking about this kind of interaction for a long time
… there are semantics attached to privacy; for example, I know some things from the defense industry that are not to be talked about at home

https://hal.science/hal-04378982/document

hugues: this is a very complex subject
… for example, in the defense industry, they compartmentalize information
… this is a recurring question

https://dumas.ccsd.cnrs.fr/TELECOM-SUDPARIS/hal-04392089v1

debbie: two problems: how to designate private information in the KG, and how to tell the companion that some information is private

hugues: should the companion ask the user whether information is private?

debbie: sometimes you want the companion to buy things for you
… for example, Alexa can buy things for you
… could you just ask the user to tell the companion to keep certain information private?

hugues: yes, within a session
… there is a big problem with the definition of privacy

debbie: a lot of people want to tell you about AI and privacy, but I don't know if they have a clear idea

hugues: you can keep all information in a closed box

hugues: the simple fact that your companion connects to something means something
… I'm now working on a closed system that doesn't connect to the cloud; everything is kept inside the box
… mostly I query the system myself
… from that information the system can build a biography of the user; when it writes the biography, certain information needs to be kept out
… some information can be shared with my wife and children

dirk: do you label the information?

hugues: the different kinds of information are recorded in the KG
… in a graph there are many roads to a given point; you might reach the information by an unprotected path (see the sketch below)
… the user has to say "this information has to be protected"
… I used to work for the defense industry; we knew that everything said in a room was confidential, and it stayed confidential after we left the room
… how can a machine understand that?
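
A toy illustration of the "many roads" problem, assuming a plain adjacency-list graph (the vertices and labels are made up): protection has to be enforced at every vertex visited, not only at the entry point, or an unprotected path leaks the protected fact.

    # Hypothetical sketch: breadth-first traversal that refuses to cross
    # into vertices the user has marked as protected.
    from collections import deque

    edges = {
        "hugues": ["employer", "hobby"],
        "employer": ["defense_project"],  # the sensitive vertex
        "hobby": ["sailing"],
    }
    protected = {"defense_project"}  # marked "this has to be protected"

    def reachable_facts(start):
        seen, queue, result = {start}, deque([start]), []
        while queue:
            node = queue.popleft()
            for nxt in edges.get(node, []):
                if nxt in protected:
                    continue  # the check must happen on every path
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
                    result.append(nxt)
        return result

    print(reachable_facts("hugues"))  # ['employer', 'hobby', 'sailing']

Without the per-vertex check, the same traversal would reach "defense_project" through "employer", which is exactly the unprotected-path leak hugues describes.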

https://hal.science/hal-00611090

dirk: also have to consider trusted environment

hugues: in a trusted system we can create "rooms" so that the system cannot disclose certain things

hugues: say I want to buy trousers; the seller will recommend trousers based on my skin color and eye color, information that I would like to keep confidential

dirk: you can derive confidential information from other information that you already know (see the sketch below)
… privacy is a lot more than on/off
… very interesting work
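
One way to make dirk's point concrete, as a sketch only (the levels and the combination rule are invented for illustration): a fact inferred from other facts should be at least as private as its most private source, and some combinations deserve a higher level than any source alone.

    # Hypothetical sketch: lower-bound the privacy of a derived fact by its
    # sources, and let a combination rule raise it further.
    def derived_privacy(source_levels, combination_bonus=0):
        return min(10, max(source_levels) + combination_bonus)

    # Skin color (3) and eye color (3) are each mildly private, but together
    # they narrow down the person, so the combination rule raises the level.
    print(derived_privacy([3, 3], combination_bonus=4))  # -> 7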

https://hal.science/hal-04181551

https://www.conversationalinteraction.com/_files/ugd/dbc594_69b6e727f5c64020afdc801291a133fc.pdf

debbie: we should take a few weeks to take a look at these papers and invite hugues back

dirk: how can we make use of this work?

hugues: the fact that we don't have the answers doesn't mean we shouldn't do the work
… it is a matter of semantics

debbie: can send hugues the link to subscribe

hugues: this is the most important subject in AI

Minutes manually created (not a transcript), formatted by scribe.perl version 221 (Fri Jul 21 14:01:30 2023 UTC).


All speakers: debbie, dirk, gerard, hugues

Active on IRC: ddahl