W3C

- DRAFT -

Virtual Assistants Breakout

25 Sep 2016

Attendees

Present
Remy_Francois(Microsoft), Kaz_Ashimura(W3C), Milan_Patel(Huawei), David_Costa(Thomson_Reuters), Jason_White(ETS), Mark_Hakkinen(ETS), Sabrina_Kirrane(WU), John_Kirkwood(IE), Bruno_Javary(Oberthur_Technologies), Debbie_Dahl(IE), Lisa_Seeman(IBM), Chat_Hage(Nielsen), JP_Abello(Nielsen), Kazuaki_Nimura(Fujitsu), Ted_Guild(W3C)
Regrets
Chair
Debbie
Scribe
Kaz

Contents


<scribe> Scribe: Kaz

dahl: mentions there is a Community Group (CG) on voice interaction

bruno: any APIs available?

dahl: Amazon provides methods to access

david: you can

dahl: there was a standard format in the 90s
... a very constrained model
... but now N-gram LMs are available
... vocabularies
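As background on the N-gram LMs mentioned above, here is a minimal sketch of a maximum-likelihood bigram model (the function names and toy sentences are illustrative, not from any particular toolkit):

```python
from collections import defaultdict

def train_bigrams(sentences):
    """Count bigram and preceding-unigram frequencies from tokenized sentences."""
    bigram_counts = defaultdict(int)
    unigram_counts = defaultdict(int)
    for tokens in sentences:
        padded = ["<s>"] + tokens + ["</s>"]  # sentence boundary markers
        for prev, cur in zip(padded, padded[1:]):
            bigram_counts[(prev, cur)] += 1
            unigram_counts[prev] += 1
    return bigram_counts, unigram_counts

def bigram_prob(bigram_counts, unigram_counts, prev, cur):
    """Maximum-likelihood estimate of P(cur | prev)."""
    if unigram_counts[prev] == 0:
        return 0.0
    return bigram_counts[(prev, cur)] / unigram_counts[prev]

# Toy domain vocabulary, e.g. navigation commands
bg, ug = train_bigrams([["turn", "left"], ["turn", "right"]])
```

Given the two toy sentences, `bigram_prob(bg, ug, "turn", "left")` is 0.5, since "turn" is followed by "left" in one of its two occurrences. Real systems smooth these counts and train on far larger corpora.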

david: three years ago, building an LM for financial services was a nightmare

bruno: tools to fill in sentences are available these days

dahl: there are various viewpoints on virtual assistants

david: you'll lose context very quickly
... very disconnected experience

mark: sometimes there is no response
... depending on the design

david: very narrow experience
... maybe you get

dahl: Alexa is an app on mobile

sabrina: have been working on ARIA
... speech recognition is difficult

bruno: defining some keywords, e.g., "where am I", would be useful

david: sounds like accessibility discussion

lisa: there is no standard way to handle keypads, etc.

Flipchart:

+ Best practices in voice interaction design

+ AVIxD

dahl: will find a link and put it on the summary

bruno: it's kind of becoming a standard
... keywords for virtual assistants

lisa: semantics
... and GPS
... next turn left or right
... terms should make sense to users

Flipchart:

+ Customize vocabulary

+ screen reader techniques

jason: there is a best practice
... interest to standardize the grammar
... other areas to standardize

dahl: there is no standardization around avatars yet
... don't know if there is a possibility
... several markup languages developed outside W3C
... high-level concepts like "smile"
... BML (Behavior Markup Language), etc.

lisa: robots are one of the early adopters

dahl: what's the difference between virtual assistants and robots?

mark: Siri, etc., are user-driven

Flipchart:

+ Robot assistants

david: push model initiated by user

mark: we have modes for announcements, e.g., "polite", in WAI-ARIA

dahl: guidelines for safety, security, and privacy for cars

Flipchart:

+ Safety, privacy, security

jason: standard formats for representing information

david: Eliza had a model of conversation

dahl: Eliza appeared in 1963

david: is there any chance of considering context capture?

dahl: there are many opportunities these days

david: context includes location, motion, ...

dahl: other people as well

sabrina: there is something in the semantic web area

dahl: some of the tools have a notion of "context"
... context is very useful

Flipchart:

+ adaptation standards

david: emotion, action, etc.
... not so complex at the abstraction layer

lisa: using RDF triples

sabrina: semantic

lisa: if you want to
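As an illustration of the RDF-triple idea raised here, context facts can be modeled as (subject, predicate, object) tuples and queried by pattern matching. This is a minimal sketch mirroring the RDF model without an RDF library; all names and facts below are invented for illustration:

```python
# Context facts as (subject, predicate, object) triples
triples = {
    ("user", "locatedIn", "kitchen"),
    ("user", "movingToward", "livingRoom"),
    ("kitchen", "partOf", "home"),
}

def query(triples, s=None, p=None, o=None):
    """Return the triples matching the given pattern; None acts as a wildcard."""
    return {
        (ts, tp, to)
        for (ts, tp, to) in triples
        if (s is None or ts == s)
        and (p is None or tp == p)
        and (o is None or to == o)
    }

# e.g. everything known about the user:
user_facts = query(triples, s="user")
```

A real system would use an RDF store with SPARQL queries and shared vocabularies, which is what makes context shareable across assistants; the pattern-as-wildcard query above is the same basic operation.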

Flipchart:

+ context

dahl: concept vs. context
... getting back to the topic of virtual assistants

david: some push model might be useful

dahl: we're running out of time
... good discussion on virtual assistants
... there is a CG on voice

lisa: voice authentication would be possible

dahl: users may get benefit from a virtual assistant for accessibility purposes

Flipchart:

+ accessibility

mark: some kind of language for experiments would be interesting
... interesting to see how tolerant a virtual assistant would be
... Amazon recently released a virtual assistant service which listens to the user

jason: error tolerance is another point

dahl: I use Dragon speech recognition, which uses deep learning and works well with my accent
... Microsoft also does deep learning
... these are not standards, but conventional machine-learning systems used by researchers

[adjourned]

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.147 (CVS log)
$Date: 2016/09/25 14:44:50 $