Virtual assistants support voice and/or natural language interaction. The best known are the generic virtual assistants, such as Siri, Cortana, Google Now, and Alexa, which are typically used for everyday information -- weather, news, sports scores, and so on. We are now starting to see development tools for enterprise- or application-specific agents -- for example, the Alexa Skills Kit, Nuance Mix, Microsoft LUIS, wit.ai (Facebook), and api.ai (Google) -- as well as text-based chatbots, often encountered on websites and used for customer support or shopping.
Do standards have a role?
This session asked what role standards might have in this ecosystem -- for example, in promoting interoperability and generic development tools. What would be required to run an application developed for one of these platforms on another?
Some questions we discussed
- Best practices in voice interaction design; the AVIxD professional voice user interface designers' organization has published some guidelines.
- How to incorporate common web idioms like "contact us"?
- How to customize vocabulary like product names?
- What about avatars?
- Does interaction with robots add requirements, for example, the need to model physical space?
- If systems engage in more system-initiated (proactive) interactions, they will need information about when it is appropriate to interrupt the user.
- Virtual assistants adapt to their users; is there a role for standards in adaptation?
- Context (location, time, application context, context-based access control, product usage) is important in all of these systems. How should context be represented? Some tools are available in the Semantic Web work.
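The last point above, representing context, could be approached in the spirit of Semantic Web modelling by flattening a context record into subject-predicate-object triples. The sketch below is purely illustrative -- the `InteractionContext` class, the `ctx:` predicate names, and the field choices are all assumptions, not part of any standard discussed in the session:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class InteractionContext:
    """Hypothetical context record for one virtual-assistant request."""
    user_id: str
    location: str      # e.g. a place name or a geo URI
    timestamp: datetime
    application: str   # which application or skill the user is in

    def to_triples(self):
        """Flatten the record into RDF-style (subject, predicate, object)
        triples, using made-up ctx: predicate names for illustration."""
        subject = f"context:{self.user_id}"
        return [
            (subject, "ctx:location", self.location),
            (subject, "ctx:time", self.timestamp.isoformat()),
            (subject, "ctx:application", self.application),
        ]

ctx = InteractionContext(
    user_id="user42",
    location="geo:48.8566,2.3522",
    timestamp=datetime(2017, 3, 1, 9, 30, tzinfo=timezone.utc),
    application="shopping",
)
for triple in ctx.to_triples():
    print(triple)
```

A real system would likely serialize such triples in a standard RDF syntax (e.g. Turtle) with proper IRIs, so that existing Semantic Web tooling could query and reason over the context.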
Voice Interaction Community Group
This group just started -- join if you're interested in this topic. https://www.w3.org/community/voiceinteraction/