Target audience

The target audience of the Multimodal Interaction Working Group (member-only link) is vendors and service providers of multimodal applications, spanning organizations in a range of industry sectors such as:

Mobile and hand-held devices
As a result of increasingly capable networks, devices, and speech recognition technology, the number of multimodal applications, especially mobile applications, is growing rapidly. Multimodal Voice Search in particular is a relatively new and compelling use case, and has been implemented in applications by a number of companies, including Apple, Google, Microsoft, Yahoo, Nuance, SpeechCycle, Novauris, AT&T, Openstream, Vocalia, Metaphor Solutions and Sound Hound. Speech offers a welcome means of interacting with smaller devices, allowing one-handed and hands-free operation, and users benefit from being able to choose whichever modalities they find convenient in a given situation. The Working Group is of interest to companies developing smart phones, tablets, and personal digital assistants, as well as to those providing tools and technology to support the delivery of multimodal services to such devices.
Please note that a related effort was recently completed in the W3C by the HTML Speech Incubator Group (HTML Speech XG). The focus of the XG was developing proposals for accessing speech recognition and speech synthesis from HTML5 browsers, and Voice Search and Speech Command Interfaces are possible use cases for these technologies in the browser. However, the XG did not attempt to address modalities other than speech, such as handwriting, emotion, or the wide variety of present and future input modalities, nor did it address non-browser contexts. In contrast, the Multimodal Architecture provides a generic framework for modality integration and control; speech in the browser can be seen as a special case of the kind of modality integration covered by the MMI Architecture.
Home appliances, e.g., TV, and home networks
Multimodal interfaces are expected to add value to the remote control of home entertainment systems, and to find a role in other systems around the home. Companies involved in developing embedded systems and consumer electronics should be interested in W3C's work on multimodal interaction.
Enterprise office applications and devices
Multimodal interaction has benefits for desktops, wall-mounted interactive displays, multi-function copiers and other office equipment, offering a richer user experience and the chance to add modalities such as speech and pen input to existing ones such as keyboards and mice. W3C's standardization work in this area should be of interest to companies developing client software and application authoring technologies who wish to ensure that the resulting standards meet their needs.
Intelligent IT ready cars
With the emergence of dashboard-integrated high-resolution color displays for navigation, communication and entertainment services, W3C's work on open standards for multimodal interaction should be of interest to companies developing the next generation of in-car systems.
Medical applications
Mobile healthcare professionals and practitioners of telemedicine will benefit from multimodal standards for interactions with remote patients as well as for collaboration with distant colleagues.