The capabilities of mobile devices are increasing at a remarkable rate, not only in basic hardware but also in the ways we can interact with these devices. Interaction through speech, touch, gesture, camera, and swipe is ubiquitous on smartphones and tablets, and newer forms of input are becoming available at a rapid pace. Beyond the capabilities built into devices, innovative add-on hardware that further expands those capabilities continually appears. The opportunities are clear, but so are the challenges: how can application developers manage the constantly increasing variety and number of capabilities to create natural, seamless, compelling user experiences?
The W3C Multimodal Interaction standards are designed to let developers manage this complexity through a generic, high-level, modality-independent architecture. The core Multimodal Architecture concepts of encapsulated modality components, interaction management, and life-cycle events provide a high-level, extensible interface to an extremely diverse range of device capabilities and forms of input.
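To make the life-cycle event idea concrete, here is a minimal sketch (not taken from the announcement) of how an Interaction Manager might construct a StartRequest event asking a modality component to begin processing. The element and attribute names and the namespace follow the MMI Architecture specification; the `Source`, `Target`, `Context`, and `RequestID` values below are purely hypothetical.

```python
import xml.etree.ElementTree as ET

# Namespace defined by the W3C MMI Architecture specification.
MMI_NS = "http://www.w3.org/2008/04/mmi-arch"
ET.register_namespace("mmi", MMI_NS)

def start_request(source, target, context, request_id):
    """Build an mmi:StartRequest life-cycle event as an XML string."""
    root = ET.Element(f"{{{MMI_NS}}}mmi", {"version": "1.0"})
    ET.SubElement(root, f"{{{MMI_NS}}}StartRequest", {
        "Source": source,         # URI identifying the Interaction Manager
        "Target": target,         # URI identifying the modality component
        "Context": context,       # identifies the interaction session
        "RequestID": request_id,  # correlates this request with its response
    })
    return ET.tostring(root, encoding="unicode")

# Hypothetical identifiers for illustration only.
event = start_request("im-1", "speech-component", "ctx-42", "req-1")
print(event)
```

Because every modality component speaks the same small set of life-cycle events (StartRequest/StartResponse, DoneNotification, and so on), the Interaction Manager can coordinate speech, touch, or camera input without knowing the internals of any of them.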
Learn more about the Multimodal Interaction standards at a 90-minute webinar, “Developing Portable Mobile Applications with Compelling User Experience using the W3C MMI Architecture”. The webinar will interest both developers of multimodal applications and platform and device providers. It will be presented by the Multimodal Interaction Working Group at 11:00 Eastern time on January 31, 2013, and will focus on leveraging W3C standards such as HTML5, the W3C Multimodal Architecture, and State Chart XML (SCXML) to develop compelling multimodal applications.
For more information, and to register, please see the webinar registration page.