W3C

Multimodal Architecture and Interfaces

W3C Working Draft 14 April 2008

This version:
http://www.w3.org/TR/2008/WD-mmi-arch-20080414/
Latest version:
http://www.w3.org/TR/mmi-arch/
Previous version:
http://www.w3.org/TR/2006/WD-mmi-arch-20061211/
Editor:
Jim Barnett, Aspect Software
Authors:
Deborah Dahl, Invited Expert
Ingmar Kliche, Deutsche Telekom AG, T-Com
Raj Tumuluri, Openstream
Moshe Yudkowsky, Invited Expert
Michael Bodell (until 2006, while at TellMe)
Brad Porter (until 2005, while at TellMe)
Dave Raggett (until 2007, while at W3C/Volantis)
T.V. Raman (until 2005, while at IBM)
Andrew Wahbe (until 2006, while at VoiceGenie)

Abstract

This document describes a loosely coupled architecture for multimodal user interfaces, which allows for co-resident and distributed implementations, and focuses on the role of markup and scripting, and the use of well-defined interfaces between its constituents.

Status of this Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This is the 14 April 2008 Working Draft of "Multimodal Architecture and Interfaces". The main difference from the previous draft is a more thorough specification of the events sent between the Runtime Framework and the Modality Components, including both schemas for the individual messages and ladder diagrams showing message sequences. Future versions of this document will further refine the event definitions, while related documents will address the issue of markup for multimodal applications. In particular those related documents will address the issue of markup for the Interaction Manager, either adopting and adapting existing languages or defining new ones for the purpose.

This document is the fourth Public Working Draft for review by W3C Members and other interested parties, and has been developed by the Multimodal Interaction Working Group of the W3C Multimodal Interaction Activity.

Comments on this specification are welcome and should have a subject line starting with the prefix '[ARCH]'. Please send them to www-multimodal@w3.org, the public email list for issues related to multimodal interaction. This list is archived, and acceptance of this archiving policy is requested automatically upon first post. To subscribe to this list, send an email to www-multimodal-request@w3.org with the word subscribe in the subject line.

For more information about the Multimodal Interaction Activity, please see the Multimodal Interaction Activity statement.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

Table of Contents

1 Abstract
2 Overview
3 Design versus Run-Time considerations
    3.1 Markup and The Design-Time View
    3.2 Software Constituents and The Run-Time View
    3.3 Relationship to Compound Document Formats
4 Overview of Constituents
    4.1 Run-Time Architecture Diagram
5 The Constituents
    5.1 The Runtime Framework
        5.1.1 The Interaction Manager
        5.1.2 The Delivery Context Component
        5.1.3 The Data Component
    5.2 Modality Components
    5.3 Examples
6 Interface between the Runtime Framework and the Modality Components
    6.1 Event Delivery Mechanism
        6.1.1 Event and Information Security
        6.1.2 Multiple Protocols
        6.1.3 System and OS Security
    6.2 Standard Life Cycle Events
        6.2.1 NewContextRequest
            6.2.1.1 NewContextRequest Properties
        6.2.2 NewContextResponse
            6.2.2.1 NewContextResponse Properties
        6.2.3 PrepareRequest
            6.2.3.1 PrepareRequest Properties
        6.2.4 PrepareResponse
            6.2.4.1 PrepareResponse Properties
        6.2.5 StartRequest
            6.2.5.1 StartRequest Properties
        6.2.6 StartResponse
            6.2.6.1 StartResponse Properties
        6.2.7 DoneNotification
            6.2.7.1 DoneNotification Properties
        6.2.8 CancelRequest
            6.2.8.1 CancelRequest Properties
        6.2.9 CancelResponse
            6.2.9.1 CancelResponse Properties
        6.2.10 PauseRequest
            6.2.10.1 PauseRequest Properties
        6.2.11 PauseResponse
            6.2.11.1 PauseResponse Properties
        6.2.12 ResumeRequest
            6.2.12.1 ResumeRequest Properties
        6.2.13 ResumeResponse
            6.2.13.1 ResumeResponse Properties
        6.2.14 ExtensionNotification
            6.2.14.1 ExtensionNotification Properties
        6.2.15 ClearContextRequest
            6.2.15.1 ClearContextRequest Properties
        6.2.16 ClearContextResponse
            6.2.16.1 ClearContextResponse Properties
        6.2.17 StatusRequest
            6.2.17.1 Status Request Properties
        6.2.18 StatusResponse
            6.2.18.1 StatusResponse Properties
7 Open Issues

Appendices

A Contributors
B Examples of Life-Cycle Events
C Event Schemas
D Ladder Diagrams
    D.1 Creating a Session
    D.2 Processing User Input
    D.3 Ending a Session
E Glossary
F Use Case Discussion
G References


1 Abstract

This document describes a loosely coupled architecture for multimodal user interfaces, which allows for co-resident and distributed implementations, and focuses on the role of markup and scripting, and the use of well-defined interfaces between its constituents.

2 Overview

This document describes the architecture of the Multimodal Interaction (MMI) framework [MMIF] and the interfaces between its constituents. The MMI Working Group is aware that multimodal interfaces are an area of active research and that commercial implementations are only beginning to emerge. Therefore we do not view our goal as standardizing a hypothetical existing common practice, but rather providing a platform to facilitate innovation and technical development. Thus the aim of this design is to provide a general and flexible framework providing interoperability among modality-specific components from different vendors - for example, speech recognition from one vendor and handwriting recognition from another. This framework places very few restrictions on the individual components or on their interactions with each other, but instead focuses on providing a general means for allowing them to communicate with each other, plus basic infrastructure for application control and platform services.

Our framework is motivated by several basic design goals:

Even though multimodal interfaces are not yet common, the software industry as a whole has considerable experience with architectures that can accomplish these goals. Since the 1980s, for example, distributed message-based systems have been common. They have been used for a wide range of tasks, including in particular high-end telephony systems. In this paradigm, the overall system is divided into individual components which communicate by sending messages over the network. Since the messages are the only means of communication, the internals of components are hidden and the system may be deployed in a variety of topologies, either distributed or co-located. One specific instance of this type of system is the DARPA Hub Architecture, also known as the Galaxy Communicator Software Infrastructure [Galaxy]. This is a distributed, message-based, hub-and-spoke infrastructure designed for constructing spoken dialogue systems. It was developed in the late 1990s and early 2000s under funding from DARPA. This infrastructure includes a program called the Hub, together with servers which provide functions such as speech recognition, natural language processing, and dialogue management. The servers communicate with the Hub and with each other using key-value structures called frames.

Another recent architecture that is relevant to our concerns is the model-view-controller (MVC) paradigm. This is a well-known design pattern for user interfaces in object-oriented programming languages, and has been widely used with languages such as Java, Smalltalk, C, and C++. The design pattern proposes three main parts: a Data Model that represents the underlying logical structure of the data and associated integrity constraints, one or more Views which correspond to the objects that the user directly interacts with, and a Controller which sits between the data model and the views. The separation between data and user interface provides considerable flexibility in how the data is presented and how the user interacts with that data. While the MVC paradigm has been traditionally applied to graphical user interfaces, it lends itself to the broader context of multimodal interaction where the user is able to use a combination of visual, aural and tactile modalities.
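
The MVC separation described above can be illustrated with a short sketch. This is purely illustrative (the class and method names are invented for the example, not drawn from this specification): two views, standing in for a visual and an aural modality, observe a shared data model through a controller.

```python
class DataModel:
    """Holds the logical data and notifies the controller of changes."""
    def __init__(self, controller):
        self._data = {}
        self._controller = controller

    def set(self, key, value):
        self._data[key] = value
        self._controller.model_changed(key, value)

    def get(self, key):
        return self._data.get(key)


class View:
    """A user-facing surface; here it just records what it was told to show."""
    def __init__(self, name):
        self.name = name
        self.displayed = {}

    def render(self, key, value):
        self.displayed[key] = value


class Controller:
    """Mediates between the model and the views."""
    def __init__(self):
        self.views = []
        self.model = DataModel(self)

    def attach(self, view):
        self.views.append(view)

    def model_changed(self, key, value):
        # Every attached view is kept in sync with the model.
        for view in self.views:
            view.render(key, value)


controller = Controller()
visual, aural = View("visual"), View("aural")
controller.attach(visual)
controller.attach(aural)
controller.model.set("city", "Boston")
```

Because the views never touch the model directly, either one could be replaced or distributed without affecting the other, which is the property the multimodal setting exploits.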

3 Design versus Run-Time considerations

In discussing the design of MMI systems, it is important to keep in mind the distinction between the design-time view (i.e., the markup) and the run-time view (the software that executes the markup). At the design level, we assume that multimodal applications will take the form of multiple documents from different namespaces. In many cases, the different namespaces and markup languages will correspond to different modalities, but we do not require this. A single language may cover multiple modalities and there may be multiple languages for a single modality.

At runtime, the MMI architecture features loosely coupled software constituents that may be either co-resident on a device or distributed across a network. In keeping with the loosely-coupled nature of the architecture, the constituents do not share context and communicate only by exchanging events. The nature of these constituents and the APIs between them are discussed in more detail in Sections 3-5, below. Though nothing in the MMI architecture requires that there be any particular correspondence between the design-time and run-time views, in many cases there will be a specific software component responsible for each different markup language (namespace).

3.1 Markup and The Design-Time View

At the markup level, an application consists of multiple documents. A single document may contain markup from different namespaces if the interaction of those namespaces has been defined (e.g., as part of the Compound Document Formats Activity [CDF]). By the principle of encapsulation, however, the internal structure of documents is invisible at the MMI level, which defines only how the different documents communicate. One document has a special status, namely the Root or Controller Document, which contains markup defining the interaction between the other documents. Such markup is called Interaction Manager markup. The other documents are called Presentation Documents, since they contain markup to interact directly with the user. The Controller Document may consist solely of Interaction Manager markup (for example a state machine defined in CCXML [CCXML] or SCXML [SCXML]) or it may contain Interaction Manager markup combined with presentation or other markup. As an example of the latter design, consider a multimodal application in which a CCXML document provides call control functionality as well as the flow control for the various Presentation documents. Similarly, an SCXML flow control document could contain embedded presentation markup in addition to its native Interaction Management markup.

These relationships are recursive, so that any Presentation Document may serve as the Controller Document for another set of documents. This nested structure is similar to the 'Russian Doll' model of Modality Components, described below in 3.2 Software Constituents and The Run-Time View.

The different documents are loosely coupled and co-exist without interacting directly. Note in particular that there are no shared variables that could be used to pass information between them. Instead, all runtime communication is handled by events, as described below in 6.2 Standard Life Cycle Events.

Furthermore, it is important to note that the asynchronicity of the underlying communication mechanism does not impose the requirement that the markup languages present a purely asynchronous programming model to the developer. Given the principle of encapsulation, markup languages are not required to reflect directly the architecture and APIs defined here. As an example, consider an implementation containing a Modality Component providing Text-to-Speech (TTS) functionality. This Component must communicate with the Runtime Framework via asynchronous events (see 3.2 Software Constituents and The Run-Time View). In a typical implementation, there would likely be events to start a TTS play and to report the end of the play, etc. However, the markup and scripts that were used to author this system might well offer only a synchronous "play TTS" call, it being the job of the underlying implementation to convert that synchronous call into the appropriate sequence of asynchronous events. In fact, there is no requirement that the TTS resource be individually accessible at all. It would be quite possible for the markup to present only a single "play TTS and do speech recognition" call, which the underlying implementation would realize as a series of asynchronous events involving multiple Components.
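
The TTS example above can be reduced to a few lines of code. This is an illustrative sketch only; the event names, the queue mechanics, and the synchronous facade are all hypothetical, invented for the example rather than defined by this specification.

```python
import queue

class TTSComponent:
    """Stand-in Modality Component: answers a start event with a done event."""
    def __init__(self, event_bus):
        self.event_bus = event_bus

    def handle(self, event):
        if event["name"] == "tts.start":
            # A real engine would synthesize speech here; for the sketch we
            # immediately report completion back onto the event bus.
            self.event_bus.put({"name": "tts.done",
                                "context": event["context"]})


def play_tts_sync(component, text, context="ctx-1"):
    """Synchronous facade: raises the start event, then blocks until the
    matching done event arrives. The author sees only this one call."""
    component.handle({"name": "tts.start", "text": text, "context": context})
    while True:
        ev = component.event_bus.get(timeout=1)
        if ev["name"] == "tts.done" and ev["context"] == context:
            return True


bus = queue.Queue()
tts = TTSComponent(bus)
ok = play_tts_sync(tts, "hello")   # author-facing synchronous call
```

The asynchronous start/done exchange is entirely hidden behind `play_tts_sync`, which is the point the paragraph makes: the markup-level programming model need not mirror the event-level architecture.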

Existing languages such as XHTML may be used as either the Controller Documents or as Presentation Documents. Further examples of potential markup components are given in 5.3 Examples.

3.2 Software Constituents and The Run-Time View

At the core of the MMI runtime architecture is the distinction between the Runtime Framework and the Modality Components, which is similar to the distinction between the Controller Document and the Presentation Documents. The Runtime Framework interprets the Controller Document and provides the basic infrastructure which the various Modality Components plug into. Individual Modality Components are responsible for specific tasks, particularly handling input and output in the various modalities, such as speech, pen, video, etc. Modality Components are black boxes, required only to implement the Modality Component Interface API which is described below. This API allows the Modality Components to communicate with the Framework and hence with each other, since the Framework is responsible for delivering events/messages among the Components.

Since the internals of a Component are hidden, it is possible for a Runtime Framework and a set of Components to present themselves as a Component to a higher-level Framework. All that is required is that the Framework implement the Component API. The result is a "Russian Doll" model in which Components may be nested inside other Components to an arbitrary depth. Nesting components in this manner is one way to produce a 'complex' Modality Component, namely one that handles multiple modalities simultaneously. However, it is also possible to produce complex Modality Components without nesting, as discussed in 5.2 Modality Components .

The Runtime Framework is itself divided up into sub-components. One important sub-component is the Interaction Manager (IM), which executes the Interaction Manager markup. The IM receives all the events that the various Modality Components generate. Those events may be commands or replies to commands, and it is up to the Interaction Manager to decide what to do with them, i.e., what events to generate in response to them. In general, the MMI architecture follows a 'targetless' event model. That is, the Component that raises an event does not specify its destination. Rather, it passes it up to the Runtime Framework, which will pass it to the Interaction Manager. The IM, in turn, decides whether to forward the event to other Components, or to generate a different event, etc. The other sub-components of the Runtime Framework are the Delivery Context Component, which provides information about device capabilities and user preferences, and the Data Component, which stores the Data Model for the application. We do not currently specify the interfaces for the IM and the Data Component, so they represent only the logical structure of the functionality that the Runtime Framework provides. The interface to the Delivery Context Component is specified in [DCCI].
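
The 'targetless' event model can be sketched as follows. This is a hypothetical illustration, not normative: the component names, event names, and routing policy are invented for the example. The key point is that the raising component never names a destination; all routing decisions live in the Interaction Manager.

```python
class InteractionManager:
    """Receives every raised event and alone decides where it goes."""
    def __init__(self):
        self.components = {}   # name -> handler callable
        self.log = []          # record of (source, event name) pairs

    def register(self, name, handler):
        self.components[name] = handler

    def raise_event(self, source, event):
        # The event carries no destination; the source just hands it up.
        self.log.append((source, event["name"]))
        target = self.route(source, event)
        if target is not None:
            self.components[target](event)
        # Events with no matching policy are simply ignored (the default).

    def route(self, source, event):
        # Hypothetical policy: forward recognition results to the display.
        if event["name"] == "recognition.result":
            return "display"
        return None


im = InteractionManager()
shown = []
im.register("display", lambda ev: shown.append(ev["text"]))

# A speech component raises two events; it has no idea who will consume them.
im.raise_event("speech", {"name": "recognition.result", "text": "hello"})
im.raise_event("speech", {"name": "speech.started"})   # no policy: ignored
```

Because the policy is concentrated in one place, modalities can be added or replaced without the other components knowing anything about each other.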

Because we are using the term 'Component' to refer to a specific set of entities in our architecture, we will use the term 'Constituent' as a cover term for all the elements in our architecture which might normally be called 'software components'.

3.3 Relationship to Compound Document Formats

The W3C Compound Document Formats Activity [CDF] is also concerned with the execution of user interfaces written in multiple languages. However, the CDF group focuses on defining the interactions of specific sets of languages within a single document, which may be defined by inclusion or by reference. The MMI architecture, on the other hand, defines the interaction of arbitrary sets of languages in multiple documents. From the MMI point of view, mixed markup documents defined by CDF specifications are treated like any other documents, and may be either Controller or Presentation Documents. Finally, note that the tightly coupled languages handled by CDF will usually share data and scripting contexts, while the MMI architecture focuses on a looser coupling, without shared context. The lack of shared context makes it easier to distribute applications across a network and also places minimal constraints on the languages in the various documents. As a result, authors will have the option of building multimodal applications in a wide variety of languages for a wide variety of deployment scenarios. We believe that this flexibility is important for the further development of the industry.

4 Overview of Constituents

Here is a list of the Constituents of the MMI architecture. They are discussed in more detail in the next section.

  • the Runtime Framework, which provides the basic infrastructure and controls the communication among the other Constituents.
  • the Delivery Context Component, which is a sub-component of the Runtime Framework. It provides information about platform capabilities.
  • the Interaction Manager, which is a sub-component of the Runtime Framework and coordinates the different modalities. It is the Controller in the MVC paradigm.
  • the Data Component, which is a sub-component of the Runtime Framework. It provides the common data model and represents the Model in the MVC paradigm.
  • the Modality Components, which provide modality-specific interaction capabilities. They are the Views in the MVC paradigm.

5 The Constituents

This section presents the responsibilities of the various constituents of the MMI architecture.

5.1 The Runtime Framework

The Runtime Framework is responsible for starting the application and interpreting the Controller Document. More specifically, the Runtime Framework must:

  • load and initialize the Controller document
  • initialize the Component software. If the Component is local, this will involve loading the corresponding code (library or executable) and possibly starting a process if the Component is implemented as a separate process, etc. If the Component is remote, the Runtime Framework will load a stub and possibly open a connection to the remote implementation.
  • generate the necessary lifecycle events
  • handle communication between the Components
  • map between the asynchronous Modality Component API and the potentially synchronous APIs of other components (e.g., the Delivery Context Interface)

The need for mapping between synchronous and asynchronous APIs can be seen by considering the case where a Modality Component wants to query the Delivery Context Interface [DCCI]. The DCCI API provides synchronous access to property values whereas the Modality Component API, presented below in 6.2 Standard Life Cycle Events, is purely asynchronous and event-based. The Modality Component will therefore generate an event requesting the value of a certain property. The DCCI cannot handle this event directly, so the Runtime Framework must catch the event, make the corresponding function call into the DCCI API, and then generate a response event back to the Modality Component. Note that even though it is globally the Runtime Framework's responsibility to do this mapping, most of the Runtime Framework's behavior is asynchronous. It may therefore make sense to factor out the mapping into a separate Adapter, allowing the Runtime Framework proper to have a fully asynchronous architecture. For the moment, we will leave this as an implementation decision, but we may make the Adapter a formal part of the architecture at a later date.
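
The Adapter idea can be illustrated with a short sketch. This is illustrative only: the DCCI stand-in, its method name, and the event names are hypothetical, and the real DCCI interface is defined in [DCCI], not here. The adapter catches an asynchronous property-request event, makes the corresponding synchronous call, and emits a response event back toward the requesting component.

```python
class FakeDCCI:
    """Stand-in for a synchronous Delivery Context interface."""
    def __init__(self, properties):
        self._properties = properties

    def get_property_value(self, name):   # hypothetical method name
        return self._properties[name]


class SyncAsyncAdapter:
    """Bridges the event-based Modality Component API to a synchronous API."""
    def __init__(self, dcci, send_event):
        self.dcci = dcci
        self.send_event = send_event      # callback delivering events back

    def on_event(self, event):
        if event["name"] == "dcci.propertyRequest":
            # Synchronous call on behalf of the asynchronous requester.
            value = self.dcci.get_property_value(event["property"])
            self.send_event({
                "name": "dcci.propertyResponse",
                "property": event["property"],
                "value": value,
            })


responses = []
adapter = SyncAsyncAdapter(FakeDCCI({"screenWidth": 320}), responses.append)
adapter.on_event({"name": "dcci.propertyRequest", "property": "screenWidth"})
```

Factoring the bridging logic into such an adapter keeps the rest of the framework purely asynchronous, which is the design option the paragraph above leaves open.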

The Runtime Framework's main purpose is to provide the infrastructure, rather than to interact with the user. Thus it implements the basic event loop, which the Components use to communicate with one another, but is not expected to handle by itself any events other than lifecycle events. However, if the Controller Document provides presentation markup as well as Interaction Management markup, the Runtime Framework will execute it just as the Modality Components do. Note, however, that the execution of such presentation markup is internal to the Runtime Framework and need not rely on the Modality Component API.

5.1.1 The Interaction Manager

The Interaction Manager (IM) is the sub-component of the Runtime Framework that is responsible for handling all events that the other Components generate. Normally there will be specific markup associated with the IM instructing it how to respond to events. This markup will thus contain a lot of the most basic interaction logic of an application. Existing languages such as SMIL, CCXML, SCXML, or ECMAScript can be used for IM markup as an alternative to defining special-purpose languages aimed specifically at multimodal applications. In a future draft of this specification, we may define the interface between the IM and the Runtime Framework, with the goal of making it easy to plug different IM languages into a given Framework. However, the current draft does not specify such an API, so the Runtime Framework and IM appear as a single unit to the Modality Components.

The IM fulfills multiple functions. For example, it is responsible for synchronization of data and focus, etc., across different Modality Components as well as the higher-level application flow that is independent of Modality Components. It also maintains the high-level application data model and may handle communication with external entities and back-end systems. In the future we may split these functions apart and define different components for each of them. However, for the moment, we leave them rolled up in a single monolithic Interaction Manager component. We note that state machine languages such as SCXML are a good choice for authoring such a multi-function component, since state machines can be composed. Thus it is possible to define a high-level state machine representing the overall application flow, with lower-level state machines nested inside it handling the cross-modality synchronization at each phase of the higher-level flow.

Due to the Russian Doll model, Components may contain their own Interaction Managers to handle their internal events. However these Interaction Managers are not visible to the top level Runtime Framework or Interaction Manager.

If the Interaction Manager does not contain an explicit handler for an event, any default behavior that has been established for the event will be respected. If there is no default behavior, the event will be ignored. (In effect, the Interaction Manager's default handler for all events is to ignore them.)

5.1.2 The Delivery Context Component

The Delivery Context [DCCI] is intended to provide a platform-abstraction layer enabling dynamic adaptation to user preferences, environmental conditions, device configuration and capabilities. It allows Constituents and applications to:

  • query for properties and their values
  • update (run-time settable) properties
  • receive notifications of changes to properties

Note that some device properties, such as screen brightness, are run-time settable, while others, such as whether there is a screen, are not. The term 'property' is also used for characteristics that may be more properly thought of as user preferences, such as preferred output modality or default speaking volume.

5.2 Modality Components

Modality Components, as their name would indicate, are responsible for controlling the various input and output modalities on the device. They are therefore responsible for handling all interaction with the user(s). Their only responsibility is to implement the interface defined in 6 Interface between the Runtime Framework and the Modality Components . Any further definition of their responsibilities must be highly domain- and application-specific. In particular we do not define a set of standard modalities or the events that they should generate or handle. Platform providers are allowed to define new Modality Components and are allowed to place into a single Component functionality that might logically seem to belong to two different modalities. Thus a platform could provide a handwriting-and-speech Modality Component that would accept simultaneous voice and pen input. Such combined Components permit a much tighter coupling between the two modalities than the loose interface defined here. Furthermore, modality components may be used to perform general processing functions not directly associated with any specific interface modality, for example, dialog flow control or natural language processing.

In most cases, there will be specific markup in the application corresponding to a given modality, specifying how the interaction with the user should be carried out. However, we do not require this and specifically allow for a markup-free modality component whose behavior is hard-coded into its software.

5.3 Examples

For the sake of concreteness, here are some examples of components that could be implemented using existing languages. Note that we are mixing the design-time and run-time views here, since it is the implementation of the language (the browser) that serves as the run-time component.

  • CCXML [CCXML] could be used as both the Controller Document and the Interaction Manager language, with the CCXML interpreter serving as the Runtime Framework and Interaction Manager.
  • SCXML [SCXML] could be used as the Controller Document and Interaction Manager language.
  • In an integrated multimodal browser, the markup language that provided the document root tag would define the Controller Document while the associated scripting language could serve as the Interaction Manager.
  • XHTML [XHTML] could be used as the markup for a Modality Component.
  • VoiceXML [VoiceXML] could be used as the markup for a Modality Component.
  • SVG [SVG] could be used as the markup for a Modality Component.
  • SMIL [SMIL] could be used as the markup for a Modality Component.

6 Interface between the Runtime Framework and the Modality Components

The most important interface in this architecture is the one between the Modality Components and the Runtime Framework. Modality Components communicate with the Framework via asynchronous events. Components must be able to raise events and to handle events that are delivered to them asynchronously. It is not required that components use these events internally since the implementation of a given Component is a black box to the rest of the system. In general, it is expected that Components will raise events both automatically (i.e., as part of their implementation) and under markup control. The disposition of events is the responsibility of the Runtime Framework layer. That is, the Component that raises an event does not specify which Component it should be delivered to or even whether it should be delivered to any Component at all. Rather, that determination is left up to the Framework and Interaction Manager.

6.1 Event Delivery Mechanism

We do not currently specify the mechanism used to deliver events between the Modality Components and the Runtime Framework, but we may do so in the future. We do place the following requirements on it:

  1. Events must be delivered reliably. In particular, the event delivery mechanism must report an error if an event cannot be delivered, for example if the destination endpoint is unavailable.
  2. Events must be delivered to the destination in the order in which the source generated them. There is no guarantee on the delivery order of events generated by different sources. For example, if Modality Component M1 generates events E1 and E2 in that order, while Modality Component M2 generates E3 and then E4, we require that E1 be delivered to the Runtime Framework before E2 and that E3 be delivered before E4, but there is no guarantee on the ordering of E1 or E2 versus E3 or E4.
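
Requirement 2 can be made concrete with a small sketch. This is an illustration of the ordering guarantee only; the class and method names are invented, and no delivery mechanism is prescribed by this document. One FIFO queue per source preserves per-source order while leaving the interleaving across sources unconstrained.

```python
from collections import deque

class DeliveryMechanism:
    """Per-source FIFO queues: order within a source is preserved."""
    def __init__(self):
        self.queues = {}          # source name -> FIFO of pending events

    def send(self, source, event):
        self.queues.setdefault(source, deque()).append(event)

    def deliver_next(self, source):
        q = self.queues.get(source)
        if not q:
            # Requirement 1: failures must be reported, not silently dropped.
            raise RuntimeError("no event to deliver for " + source)
        return q.popleft()


dm = DeliveryMechanism()
dm.send("M1", "E1"); dm.send("M1", "E2")
dm.send("M2", "E3"); dm.send("M2", "E4")

# Any interleaving across sources is legal; within a source, order is fixed.
delivered = [dm.deliver_next("M2"), dm.deliver_next("M1"),
             dm.deliver_next("M1"), dm.deliver_next("M2")]
```

Here E3 arrives before E1, which the requirement permits, but E1 still precedes E2 and E3 still precedes E4.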

6.1.1 Event and Information Security

Events will often carry sensitive information, such as bank account numbers or health care information. In addition, events must also be reliable for both sides of a transaction: for example, if an event carries an assent to a financial transaction, both sides of the transaction must be able to rely on that assent.

We do not currently specify delivery mechanisms or internal security safeguards used by the Modality Components and the Runtime Framework. However, we believe that any secure system will have to meet the following requirements at a minimum:

The following two optional requirements can be met by using the W3C's XML-Signature Syntax and Processing specification [XMLSig].

  1. Authentication. The event delivery mechanism should be able to ensure that the identity of components in an interaction are known.
  2. Integrity. The event delivery mechanism should be able to ensure that the contents of events have not been altered in transit.

The remaining optional requirements for event delivery and information security can be met by following other industry-standard procedures.

  3. Authorization. A component should provide a method to ensure only authorized components can connect to it.
  4. Privacy. The event delivery mechanism should provide a method to keep the message contents secure from any unauthorized access while in transit.
  5. Non-repudiation. The event delivery mechanism, in conjunction with the components, may provide a method to ensure that if a message is sent from one constituent to another, the originating constituent cannot repudiate the message that it sent and that the receiving constituent cannot repudiate that the message was received.

6.2 Standard Life Cycle Events

The Multimodal Architecture defines the following basic life-cycle events which must be supported by all Modality Components. These events allow the Runtime Framework to invoke Modality Components and receive results from them. They thus form the basic interface between the Runtime Framework and the Modality Components. Note that the 'Extension' event offers extensibility since it contains arbitrary XML content and can be raised by either the Runtime Framework or the Modality Components at any time once the context has been established. For example, an application relying on speech recognition could use the 'Extension' event to communicate recognition results or the fact that speech had started, etc.

The concept of 'context' is basic to the events described below. A context represents a single extended interaction with one (or possibly more) users. In a simple unimodal case, a context can be as simple as a phone call or an SSL session. Multimodal cases are more complex, however, since the various modalities may not all be used at the same time. For example, in a voice-plus-web interaction, e.g., web sharing with an associated VoIP call, it would be possible to terminate the web sharing and continue the voice call, or to drop the voice call and continue via web chat. In these cases, a single context persists across various modality configurations. In general, we intend for 'context' to cover the longest period of interaction over which it would make sense for components to store state or information.

For examples of the concrete XML syntax for all of these events, see Appendix B, Examples of Life-Cycle Events.

6.2.3 PrepareRequest

An optional event that the Runtime Framework may send to allow the Modality Components to pre-load markup and prepare to run. Modality Components are not required to take any particular action in response to this event, but they must return a PrepareResponse event.

6.2.5 StartRequest

The Runtime Framework sends this event to invoke a Modality Component. The Modality Component must return a StartResponse event in response. If the Runtime Framework has sent a previous PrepareRequest event, it may leave the contentURL and content fields empty, and the Modality Component will use the values from the PrepareRequest event. If the Runtime Framework includes new values for these fields, the values in the StartRequest event override those in the PrepareRequest event.

6.2.5.1 StartRequest Properties
  • Context. A unique URI designating this context. Note that the Runtime Framework may re-use the same context value in successive StartRequests if they are all within the same session/call.
  • ContentURL. Optional URL of the content that the Modality Component should execute. Includes standard HTTP fetch parameters such as max-age, max-stale, fetchtimeout, etc. Incompatible with content.
  • Content. Optional inline markup for the Modality Component to execute. Incompatible with contentURL. Note that it is legal for both contentURL and content to be empty. In such a case, the Modality Component will either use the values provided in the preceding PrepareRequest event, if one was sent, or revert to its default behavior, which could consist of returning an error event or of running a preconfigured or hard-coded script.
  • Data. Optional additional data.
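The precedence rules above (values in a StartRequest override those from an earlier PrepareRequest; empty StartRequest fields fall back to the prepared values) can be sketched as a small helper. This is a non-normative illustration; the function name and dictionary shape are our own, not part of the specification.

```python
# Illustrative sketch (not part of the specification): resolving the
# effective content for a StartRequest that may follow a PrepareRequest.
# Each argument is a dict with optional 'contentURL' and 'content' keys.

def effective_content(prepare, start):
    # A non-empty field in the StartRequest always wins.
    for field in ("contentURL", "content"):
        if start.get(field):
            return field, start[field]
    # Otherwise fall back to the earlier PrepareRequest, if one was sent.
    if prepare:
        for field in ("contentURL", "content"):
            if prepare.get(field):
                return field, prepare[field]
    # Both empty and nothing prepared: default hard-coded behavior applies.
    return None, None

# Example: the StartRequest is empty, so the prepared URL is used.
prepared = {"contentURL": "http://example.com/dialog.vxml"}
print(effective_content(prepared, {}))
```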

If the Interaction Manager sends multiple StartRequests to a given Modality Component before it receives a DoneNotification, each such request overrides the earlier ones. Thus if a Modality Component receives a new StartRequest while it is executing a previous one, it should cancel the execution of the previous StartRequest, producing a suitable DoneNotification, and begin executing the content specified in the most recent StartRequest. If it is unable to cancel the execution of the previous StartRequest, the Modality Component should reject the new StartRequest, returning a suitable failure code in the StartResponse.
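The cancel-or-reject behavior just described might be modeled as follows. This is a hypothetical sketch: the class, method names, and event tuples are illustrative, and a real Modality Component would send actual lifecycle event messages rather than append tuples to a list.

```python
# Illustrative sketch (not part of the specification): how a Modality
# Component might react when a new StartRequest arrives before the
# DoneNotification for the previous one.

class ModalityComponent:
    def __init__(self, can_cancel=True):
        self.can_cancel = can_cancel
        self.running = None       # requestID currently executing, if any
        self.events_out = []      # events "sent" back to the Interaction Manager

    def on_start_request(self, request_id):
        if self.running is not None:
            if self.can_cancel:
                # Cancel the previous request, producing a suitable
                # DoneNotification for it...
                self.events_out.append(("doneNotification", self.running, "cancelled"))
            else:
                # ...or, if cancellation is impossible, reject the new
                # request with a failure StartResponse.
                self.events_out.append(("startResponse", request_id, "failure"))
                return
        self.running = request_id
        self.events_out.append(("startResponse", request_id, "success"))
```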

7 Open Issues

A Contributors

The following people contributed to the development of this specification.

  • Brad Porter
  • T.V. Raman

B Examples of Life-Cycle Events

In this specification we use elements from a fictional "dcont" namespace in some examples. The W3C Ubiquitous Web Application Working Group (UWA-WG) is developing such an ontology and expects to define a "dcont" namespace. The examples below are informative only and may, unintentionally, be incompatible with the work of the UWA-WG. For authoritative information on a (future) "dcont" namespace, please consult the Delivery Context Ontology specification.

1. newContextRequest (from MC to IM)

(The definition of "media" and the details of the media element will be discussed in the next draft.)

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns="http://www.w3.org/2008/04/mmi-arch" version="1.0">
   <mmi:newContextRequest source="someURI" requestID="request-1">
      <media id="mediaID1">media1</media>
      <media id="mediaID2">media2</media>
      <mmi:data xmlns:dcont="http://www.w3.org/2008/04/dcont">
         <dcont:DeliveryContext>
         ...
         </dcont:DeliveryContext>
      </mmi:data>
   </mmi:newContextRequest>
</mmi:mmi>
  

2. newContextResponse (from IM to MC)

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns="http://www.w3.org/2008/04/mmi-arch" version="1.0">
   <mmi:newContextResponse source="someURI" requestID="request-1" status="success" context="URI-1">
      <media>media1</media>
      <media>media2</media>
   </mmi:newContextResponse>
</mmi:mmi>
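The event envelopes above are plain XML, so they can be produced with any XML library. A non-normative sketch using Python's standard library follows; the function name and argument values are illustrative.

```python
# Illustrative sketch: serializing a newContextResponse event with the
# standard library. The namespace URI matches the examples in this
# appendix; the attribute names follow the event properties above.
import xml.etree.ElementTree as ET

MMI_NS = "http://www.w3.org/2008/04/mmi-arch"

def new_context_response(source, request_id, context, status="success"):
    ET.register_namespace("mmi", MMI_NS)
    root = ET.Element(f"{{{MMI_NS}}}mmi", {"version": "1.0"})
    ET.SubElement(root, f"{{{MMI_NS}}}newContextResponse", {
        "source": source,
        "requestID": request_id,
        "status": status,
        "context": context,
    })
    return ET.tostring(root, encoding="unicode")

xml_text = new_context_response("someURI", "request-1", "URI-1")
```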
 

3. prepareRequest (from IM to MC, with external markup)

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
   <mmi:prepareRequest source="someURI" context="URI-1" requestID="request-1">
      <mmi:contentURL href="someContentURI" max-age="" fetchtimeout="1s"/>
   </mmi:prepareRequest>
</mmi:mmi>

4. prepareRequest (from IM to MC, inline VoiceXML markup)

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
   <mmi:prepareRequest source="someURI" context="URI-1" requestID="request-1">
      <mmi:content>
         <vxml:vxml xmlns:vxml="http://www.w3.org/2001/vxml" version="2.0">
            <vxml:form>
               <vxml:block>Hello World!</vxml:block>
            </vxml:form>
         </vxml:vxml>
      </mmi:content>
   </mmi:prepareRequest>
</mmi:mmi>
  

5. prepareResponse (from MC to IM, success)

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
   <mmi:prepareResponse source="someURI" context="someURI" requestID="request-1" status="success"/>
</mmi:mmi>
 

6. prepareResponse (from MC to IM, failure)

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
   <mmi:prepareResponse source="someURI" context="someURI" requestID="request-1" status="failure">
      <mmi:statusInfo>
         NotAuthorized
      </mmi:statusInfo>
   </mmi:prepareResponse>
</mmi:mmi>
 

7. startRequest (from IM to MC)

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
   <mmi:startRequest source="someURI" context="URI-1" requestID="request-1">
      <mmi:contentURL href="someContentURI" max-age="" fetchtimeout="1s"/>
   </mmi:startRequest>
</mmi:mmi>

8. startResponse (from MC to IM)

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
   <mmi:startResponse source="someURI" context="someURI" requestID="request-1" status="failure">
      <mmi:statusInfo>
         NotAuthorized
      </mmi:statusInfo>
   </mmi:startResponse>
</mmi:mmi>

9. doneNotification (from MC to IM, with EMMA result)

The requestID of a doneNotification matches the requestID of the startRequest event that initiated the processing.

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
   <mmi:doneNotification source="someURI" context="someURI" status="success" requestID="request-1">
      <mmi:data>
         <emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
            <emma:interpretation id="int1" emma:medium="acoustic" emma:confidence=".75" emma:mode="voice" emma:tokens="flights from boston to denver">
               <origin>Boston</origin>
               <destination>Denver</destination>
            </emma:interpretation>
         </emma:emma>
      </mmi:data>
   </mmi:doneNotification>
</mmi:mmi>
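A receiving Interaction Manager can pull the EMMA payload out of a doneNotification with namespace-aware lookups. The non-normative sketch below uses Python's standard library on a trimmed copy of the flight example; the variable names are our own.

```python
# Illustrative sketch: extracting the EMMA interpretation from a
# doneNotification, using namespace-aware element lookups.
import xml.etree.ElementTree as ET

NS = {
    "mmi": "http://www.w3.org/2008/04/mmi-arch",
    "emma": "http://www.w3.org/2003/04/emma",
}

DONE = """\
<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <mmi:doneNotification source="someURI" context="someURI"
                        status="success" requestID="request-1">
    <mmi:data>
      <emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
        <emma:interpretation id="int1" emma:confidence=".75"
                             emma:tokens="flights from boston to denver">
          <origin>Boston</origin>
          <destination>Denver</destination>
        </emma:interpretation>
      </emma:emma>
    </mmi:data>
  </mmi:doneNotification>
</mmi:mmi>"""

root = ET.fromstring(DONE)
# Find the interpretation anywhere under the notification.
interp = root.find(".//emma:interpretation", NS)
origin = interp.find("origin").text                      # child in no namespace
confidence = interp.get(f"{{{NS['emma']}}}confidence")   # namespaced attribute
```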
 

10. doneNotification (from MC to IM, with EMMA "no-input" result)

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
   <mmi:doneNotification source="someURI" context="someURI" status="success" requestID="request-1">
      <mmi:data>
         <emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
            <emma:interpretation id="int1" emma:no-input="true"/>
         </emma:emma>
      </mmi:data>
   </mmi:doneNotification>
</mmi:mmi>
   

11. cancelRequest (from IM to MC)

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
   <mmi:cancelRequest context="someURI" source="someURI" immediate="true" requestID="request-1"/>
</mmi:mmi>
 

12. cancelResponse (from MC to IM)

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
   <mmi:cancelResponse source="someURI" context="someURI" requestID="request-1" status="success"/>
</mmi:mmi>
 

13. pauseRequest (from IM to MC)

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
   <mmi:pauseRequest context="someURI" source="someURI" immediate="true" requestID="request-1"/>
</mmi:mmi>

14. pauseResponse (from MC to IM)

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
   <mmi:pauseResponse source="someURI" context="someURI" requestID="request-1" status="success"/>
</mmi:mmi>

15. resumeRequest (from IM to MC)

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
   <mmi:resumeRequest context="someURI" source="someURI" requestID="request-1"/>
</mmi:mmi>
   

16. resumeResponse (from MC to IM)

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
   <mmi:resumeResponse source="someURI" context="someURI" requestID="request-1" status="success"/>
</mmi:mmi>
   

17. extensionNotification (formerly the data event, sent in both directions)

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
   <mmi:extensionNotification name="appEvent" source="someURI" context="someURI" requestID="request-1">
      <applicationdata/>
   </mmi:extensionNotification>
</mmi:mmi>
    

18. clearContextRequest (from the IM to the MC)

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
   <mmi:clearContextRequest source="someURI" context="someURI" requestID="request-2"/>
</mmi:mmi>
   

19. statusRequest (from the IM to the MC)

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
   <mmi:statusRequest requestAutomaticUpdate="true" source="someURI" requestID="request-3"/>
</mmi:mmi>
   

20. statusResponse (from the MC to the IM)

<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
   <mmi:statusResponse automaticUpdate="true" status="alive" source="someURI" requestID="request-3"/>
</mmi:mmi>
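An Interaction Manager correlates statusResponse events with its outstanding statusRequests by requestID, treating a "dead" status (or a missing response) as a lost component. A minimal non-normative sketch of that bookkeeping, with hypothetical class and method names:

```python
# Illustrative sketch (not part of the specification): tracking
# statusRequest/statusResponse pairs on the Interaction Manager side.

class StatusTracker:
    def __init__(self):
        self.pending = set()   # requestIDs awaiting a statusResponse
        self.alive = {}        # requestID -> True if status was "alive"

    def sent_request(self, request_id):
        self.pending.add(request_id)

    def got_response(self, request_id, status):
        # Only responses matching an outstanding request are recorded.
        if request_id in self.pending:
            self.pending.discard(request_id)
            self.alive[request_id] = (status == "alive")
```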

C Event Schemas

mmi.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.w3.org/2008/04/mmi-arch">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 Top-level schema for MMI Life cycle events version 1.0
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="NewContextRequest.xsd"/>
	<xs:include schemaLocation="NewContextResponse.xsd"/>
	<xs:include schemaLocation="ClearContextRequest.xsd"/>
	<xs:include schemaLocation="ClearContextResponse.xsd"/>
	<xs:include schemaLocation="CancelRequest.xsd"/>
	<xs:include schemaLocation="CancelResponse.xsd"/>
	<xs:include schemaLocation="CreateRequest.xsd"/>
	<xs:include schemaLocation="CreateResponse.xsd"/>
	<xs:include schemaLocation="DoneNotification.xsd"/>
	<xs:include schemaLocation="ExtensionNotification.xsd"/>
	<xs:include schemaLocation="PauseRequest.xsd"/>
	<xs:include schemaLocation="PauseResponse.xsd"/>
	<xs:include schemaLocation="PrepareRequest.xsd"/>
	<xs:include schemaLocation="PrepareResponse.xsd"/>
	<xs:include schemaLocation="ResumeRequest.xsd"/>
	<xs:include schemaLocation="ResumeResponse.xsd"/>
	<xs:include schemaLocation="StartRequest.xsd"/>
	<xs:include schemaLocation="StartResponse.xsd"/>
	<xs:include schemaLocation="StatusRequest.xsd"/>
	<xs:include schemaLocation="StatusResponse.xsd"/>
	<xs:element name="mmi">
		<xs:complexType>
			<xs:choice>
				<xs:sequence>
					<xs:element ref="mmi:newContextRequest"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element ref="mmi:newContextResponse"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element ref="mmi:clearContextRequest"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element ref="mmi:clearContextResponse"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element ref="mmi:cancelRequest"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element ref="mmi:cancelResponse"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element ref="mmi:createRequest"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element ref="mmi:createResponse"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element ref="mmi:doneNotification"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element ref="mmi:extensionNotification"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element ref="mmi:pauseRequest"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element ref="mmi:pauseResponse"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element ref="mmi:prepareRequest"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element ref="mmi:prepareResponse"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element ref="mmi:resumeRequest"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element ref="mmi:resumeResponse"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element ref="mmi:startRequest"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element ref="mmi:startResponse"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element ref="mmi:statusRequest"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element ref="mmi:statusResponse"/>
				</xs:sequence>
			</xs:choice>
			<xs:attributeGroup ref="mmi:mmi.version.attrib"/>
		</xs:complexType>
	</xs:element>
</xs:schema>

mmi-datatypes.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" targetNamespace="http://www.w3.org/2008/04/mmi-arch">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 general Type definition schema for MMI Life cycle events version 1.0
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-attribs.xsd"/>
	<xs:simpleType name="versionType">
		<xs:restriction base="xs:decimal">
			<xs:enumeration value="1.0"/>
		</xs:restriction>
	</xs:simpleType>
	<xs:simpleType name="mediaContentTypes">
		<xs:restriction base="xs:string">
			<xs:enumeration value="media1"/>
			<xs:enumeration value="media2"/>
		</xs:restriction>
	</xs:simpleType>
	<xs:simpleType name="mediaAttributeTypes">
		<xs:restriction base="xs:string">
			<xs:enumeration value="mediaID1"/>
			<xs:enumeration value="mediaID2"/>
		</xs:restriction>
	</xs:simpleType>
	<xs:simpleType name="sourceType">
		<xs:restriction base="xs:string"/>
	</xs:simpleType>
	<xs:simpleType name="targetType">
		<xs:restriction base="xs:string"/>
	</xs:simpleType>
	<xs:simpleType name="requestIDType">
		<xs:restriction base="xs:string"/>
	</xs:simpleType>
	<xs:simpleType name="contextType">
		<xs:restriction base="xs:string"/>
	</xs:simpleType>
	<xs:simpleType name="statusType">
		<xs:restriction base="xs:string">
			<xs:enumeration value="success"/>
			<xs:enumeration value="failure"/>
		</xs:restriction>
	</xs:simpleType>
	<xs:simpleType name="statusResponseType">
		<xs:restriction base="xs:string">
			<xs:enumeration value="alive"/>
			<xs:enumeration value="dead"/>
		</xs:restriction>
	</xs:simpleType>
	<xs:simpleType name="immediateType">
		<xs:restriction base="xs:boolean"/>
	</xs:simpleType>
	<xs:complexType name="contentURLType">
		<xs:attribute name="href" type="xs:anyURI" use="required"/>
		<xs:attribute name="max-age" type="xs:string" use="optional"/>
		<xs:attribute name="fetchtimeout" type="xs:string" use="optional"/>
	</xs:complexType>
	<xs:complexType name="contentType">
		<xs:sequence>
			<xs:any namespace="http://www.w3.org/2001/vxml" processContents="skip" maxOccurs="unbounded"/>
		</xs:sequence>
	</xs:complexType>
	<xs:complexType name="emmaType">
		<xs:sequence>
			<xs:any namespace="http://www.w3.org/2003/04/emma" processContents="skip" maxOccurs="unbounded"/>
		</xs:sequence>
	</xs:complexType>
	<xs:complexType name="anyComplexType" mixed="true">
		<xs:complexContent mixed="true">
			<xs:restriction base="xs:anyType">
				<xs:sequence>
					<xs:any processContents="skip" minOccurs="0" maxOccurs="unbounded"/>
				</xs:sequence>
			</xs:restriction>
		</xs:complexContent>
	</xs:complexType>
	
</xs:schema>

mmi-attribs.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" targetNamespace="http://www.w3.org/2008/04/mmi-arch" 
				attributeFormDefault="qualified">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 general attribute definition schema for MMI Life cycle events version 1.0
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-datatypes.xsd"/>
	<xs:attributeGroup name="media.id.attrib">
		<xs:attribute name="id" type="mmi:mediaAttributeTypes" use="required"/>
	</xs:attributeGroup>
	<xs:attributeGroup name="mmi.version.attrib">
		<xs:attribute name="version" type="mmi:versionType" use="required"/>
	</xs:attributeGroup>
	<xs:attributeGroup name="source.attrib">
		<xs:attribute name="source" type="mmi:sourceType" use="required"/>
	</xs:attributeGroup>
	<xs:attributeGroup name="target.attrib">
		<xs:attribute name="target" type="mmi:targetType" use="optional"/>
	</xs:attributeGroup>
	<xs:attributeGroup name="requestID.attrib">
		<xs:attribute name="requestID" type="mmi:requestIDType" use="required"/>
	</xs:attributeGroup>
	<xs:attributeGroup name="context.attrib">
		<xs:attribute name="context" type="mmi:contextType" use="required"/>
	</xs:attributeGroup>
	<xs:attributeGroup name="immediate.attrib">
		<xs:attribute name="immediate" type="mmi:immediateType" use="required"/>
	</xs:attributeGroup>
	<xs:attributeGroup name="status.attrib">
		<xs:attribute name="status" type="mmi:statusType" use="required"/>
	</xs:attributeGroup>
	<xs:attributeGroup name="statusResponse.attrib">
		<xs:attribute name="status" type="mmi:statusResponseType" use="required"/>
	</xs:attributeGroup>
	<xs:attributeGroup name="extension.name.attrib">
		<xs:attribute name="name" type="xs:string" use="required"/>
	</xs:attributeGroup>
	<xs:attributeGroup name="requestAutomaticUpdate.attrib">
		<xs:attribute name="requestAutomaticUpdate" type="xs:boolean" use="required"/>
	</xs:attributeGroup>
	<xs:attributeGroup name="automaticUpdate.attrib">
		<xs:attribute name="automaticUpdate" type="xs:boolean" use="required"/>
	</xs:attributeGroup>
	<xs:attributeGroup name="group.allEvents.attrib">
		<xs:attributeGroup ref="mmi:source.attrib"/>
		<xs:attributeGroup ref="mmi:requestID.attrib"/>
		<xs:attributeGroup ref="mmi:context.attrib"/>
	</xs:attributeGroup>
	<xs:attributeGroup name="group.allResponseEvents.attrib">
		<xs:attributeGroup ref="mmi:group.allEvents.attrib"/>
		<xs:attributeGroup ref="mmi:status.attrib"/>
	</xs:attributeGroup>
	
</xs:schema>

mmi-elements.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" targetNamespace="http://www.w3.org/2008/04/mmi-arch" 
				attributeFormDefault="qualified">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 general elements definition schema for MMI Life cycle events version 1.0
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-datatypes.xsd"/>
	
	<!-- ELEMENTS -->
	<xs:element name="statusInfo" type="mmi:anyComplexType"/>
	<xs:element name="media">
		<xs:complexType>
			<xs:simpleContent>
				<xs:extension base="mmi:mediaContentTypes">
					<xs:attributeGroup ref="mmi:media.id.attrib"/>
				</xs:extension>
			</xs:simpleContent>
		</xs:complexType>
	</xs:element>
</xs:schema>

NewContextRequest.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:dcont="http://www.w3.org/2008/04/dcont" xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns:xs="http://www.w3.org/2001/XMLSchema" 
				targetNamespace="http://www.w3.org/2008/04/mmi-arch" attributeFormDefault="qualified" elementFormDefault="qualified">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 NewContextRequest schema for MMI Life cycle events version 1.0
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-datatypes.xsd"/>
	<xs:include schemaLocation="mmi-attribs.xsd"/>
	<xs:include schemaLocation="mmi-elements.xsd"/>
	<xs:import namespace="http://www.w3.org/2008/04/dcont" schemaLocation="dcont.xsd"/>

	<xs:element name="newContextRequest">
		<xs:complexType>
			<xs:sequence>
				<xs:element ref="mmi:media" maxOccurs="unbounded"/>
				<xs:element name="data">
					<xs:complexType>
						<xs:sequence>
							<xs:element ref="dcont:DeliveryContext"/>
						</xs:sequence>
					</xs:complexType>
				</xs:element>
			</xs:sequence>
			<xs:attributeGroup ref="mmi:group.allEvents.attrib"/>
		</xs:complexType>
	</xs:element>
</xs:schema>

NewContextResponse.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:dcont="http://www.w3.org/2008/04/dcont" xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns:xs="http://www.w3.org/2001/XMLSchema" 
				targetNamespace="http://www.w3.org/2008/04/mmi-arch" attributeFormDefault="qualified" elementFormDefault="qualified">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 NewContextResponse schema for MMI Life cycle events version 1.0
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-datatypes.xsd"/>
	<xs:include schemaLocation="mmi-attribs.xsd"/>
	<xs:include schemaLocation="mmi-elements.xsd"/>
	
	<xs:element name="newContextResponse">
		<xs:complexType>
			<xs:sequence>
				<xs:element ref="mmi:media" minOccurs="0" maxOccurs="unbounded"/>
				<xs:element ref="mmi:statusInfo" minOccurs="0"/>
			</xs:sequence>
			<xs:attributeGroup ref="mmi:group.allResponseEvents.attrib"/>
			<xs:attributeGroup ref="mmi:target.attrib"/>
		</xs:complexType>
	</xs:element>
</xs:schema>

PrepareRequest.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:dcont="http://www.w3.org/2008/04/dcont" xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns:xs="http://www.w3.org/2001/XMLSchema" 
				targetNamespace="http://www.w3.org/2008/04/mmi-arch" attributeFormDefault="qualified" elementFormDefault="qualified">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 PrepareRequest schema for MMI Life cycle events version 1.0. 
			 The optional PrepareRequest event is an event that the Runtime Framework may send 
			 to allow the Modality Components to pre-load markup and prepare to run (e.g. in case of 
			 VXML VUI-MC). Modality Components are not required to take any particular action in 
			 response to this event, but they must return a PrepareResponse event.
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-datatypes.xsd"/>
	<xs:include schemaLocation="mmi-attribs.xsd"/>
	
	<xs:element name="prepareRequest">
		<xs:complexType>
			<xs:choice>
				<xs:sequence>
					<xs:element name="contentURL" type="mmi:contentURLType"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element name="content" type="mmi:anyComplexType"/>
					<!-- only vxml permitted ?? -->
				</xs:sequence>
				<!-- data really needed ?? -->
				<xs:sequence>
					<xs:element name="data" type="mmi:anyComplexType"/>
				</xs:sequence>
			</xs:choice>
			<xs:attributeGroup ref="mmi:group.allEvents.attrib"/>
			<xs:attributeGroup ref="mmi:target.attrib"/>
		</xs:complexType>
	</xs:element>
</xs:schema>

PrepareResponse.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:dcont="http://www.w3.org/2008/04/dcont" xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns:xs="http://www.w3.org/2001/XMLSchema" 
				targetNamespace="http://www.w3.org/2008/04/mmi-arch" attributeFormDefault="qualified" elementFormDefault="qualified">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 PrepareResponse schema for MMI Life cycle events version 1.0
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-datatypes.xsd"/>
	<xs:include schemaLocation="mmi-attribs.xsd"/>
	<xs:include schemaLocation="mmi-elements.xsd"/>
	
	<xs:element name="prepareResponse">
		<xs:complexType>
			<xs:sequence>
				<xs:element name="data" minOccurs="0" type="mmi:anyComplexType"/>
				<xs:element ref="mmi:statusInfo" minOccurs="0"/>
			</xs:sequence>
			<xs:attributeGroup ref="mmi:group.allResponseEvents.attrib"/>
			<xs:attributeGroup ref="mmi:target.attrib"/>
		</xs:complexType>
	</xs:element>
	
</xs:schema>

StartRequest.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:dcont="http://www.w3.org/2008/04/dcont" xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns:xs="http://www.w3.org/2001/XMLSchema" 
				targetNamespace="http://www.w3.org/2008/04/mmi-arch" attributeFormDefault="qualified" elementFormDefault="qualified">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 StartRequest schema for MMI Life cycle events version 1.0. 
			 The Runtime Framework sends the event StartRequest to invoke a Modality Component 
			 (to start loading a new GUI resource or to start the ASR or TTS). The Modality Component 
			 must return a StartResponse event in response. If the Runtime Framework has sent a previous
			 PrepareRequest event, it may leave the contentURL and content fields empty, and the Modality
			 Component will use the values from the PrepareRequest event. If the Runtime Framework includes 
			 new values for these fields, the values in the StartRequest event override those in the 
			 PrepareRequest event.
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-datatypes.xsd"/>
	<xs:include schemaLocation="mmi-attribs.xsd"/>
	
	<xs:element name="startRequest">
		<xs:complexType>
			<xs:choice>
				<xs:sequence>
					<xs:element name="contentURL" type="mmi:contentURLType"/>
				</xs:sequence>
				<xs:sequence>
					<xs:element name="content" type="mmi:anyComplexType"/>
					<!-- only vxml permitted ?? -->
				</xs:sequence>
				<!-- data really needed ?? -->
				<xs:sequence>
					<xs:element name="data" type="mmi:anyComplexType"/>
				</xs:sequence>
			</xs:choice>
			<xs:attributeGroup ref="mmi:group.allEvents.attrib"/>
			<xs:attributeGroup ref="mmi:target.attrib"/>
		</xs:complexType>
	</xs:element>
</xs:schema>

StartResponse.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:dcont="http://www.w3.org/2008/04/dcont" xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns:xs="http://www.w3.org/2001/XMLSchema" 
				targetNamespace="http://www.w3.org/2008/04/mmi-arch" attributeFormDefault="qualified" elementFormDefault="qualified">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 StartResponse schema for MMI Life cycle events version 1.0
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-datatypes.xsd"/>
	<xs:include schemaLocation="mmi-attribs.xsd"/>
	<xs:include schemaLocation="mmi-elements.xsd"/>
	
	<xs:element name="startResponse">
		<xs:complexType>
			<xs:sequence>
				<xs:element name="data" minOccurs="0" type="mmi:anyComplexType"/>
				<xs:element ref="mmi:statusInfo" minOccurs="0"/>
			</xs:sequence>
			<xs:attributeGroup ref="mmi:group.allResponseEvents.attrib"/>
			<xs:attributeGroup ref="mmi:target.attrib"/>
		</xs:complexType>
	</xs:element>
</xs:schema>

DoneNotification.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:dcont="http://www.w3.org/2008/04/dcont" xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns:xs="http://www.w3.org/2001/XMLSchema" 
				targetNamespace="http://www.w3.org/2008/04/mmi-arch" attributeFormDefault="qualified" elementFormDefault="qualified">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 DoneNotification schema for MMI Life cycle events version 1.0. 
			 The DoneNotification event is intended to be used by the Modality Component to indicate that
			 it has reached the end of its processing. For the VUI-MC it can be used to return the ASR
			 recognition result (or the status info: noinput/nomatch) and TTS/Player done notification. 
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-datatypes.xsd"/>
	<xs:include schemaLocation="mmi-attribs.xsd"/>
	<xs:include schemaLocation="mmi-elements.xsd"/>
	
	<xs:element name="doneNotification">
		<xs:complexType>
			<xs:sequence>
				<xs:element name="data" type="mmi:anyComplexType"/>
				<xs:element ref="mmi:statusInfo" minOccurs="0"/>
			</xs:sequence>
			<xs:attributeGroup ref="mmi:group.allResponseEvents.attrib"/>
			<xs:attributeGroup ref="mmi:target.attrib"/>
		</xs:complexType>
	</xs:element>
	
</xs:schema>

CancelRequest.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:dcont="http://www.w3.org/2008/04/dcont" xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns:xs="http://www.w3.org/2001/XMLSchema" 
				targetNamespace="http://www.w3.org/2008/04/mmi-arch" attributeFormDefault="qualified" elementFormDefault="qualified">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 CancelRequest schema for MMI Life cycle events version 1.0. 
			 The CancelRequest event is sent by the Runtime Framework to stop processing in the Modality 
			 Component (e.g. to cancel ASR or TTS/Playing). The Modality Component must return with a 
			 CancelResponse message. 
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-datatypes.xsd"/>
	<xs:include schemaLocation="mmi-attribs.xsd"/>
	
	<xs:element name="cancelRequest">
		<xs:complexType>
			<xs:attributeGroup ref="mmi:group.allEvents.attrib"/>
			<xs:attributeGroup ref="mmi:target.attrib"/>
			<xs:attributeGroup ref="mmi:immediate.attrib"/>
			<!-- no elements -->
		</xs:complexType>
	</xs:element>
</xs:schema>

CancelResponse.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:dcont="http://www.w3.org/2008/04/dcont" xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns:xs="http://www.w3.org/2001/XMLSchema" 
				targetNamespace="http://www.w3.org/2008/04/mmi-arch" attributeFormDefault="qualified" elementFormDefault="qualified">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 CancelResponse schema for MMI Life cycle events version 1.0. 
			 The CancelResponse event is returned by the Modality Component in response to a 
			 CancelRequest from the Runtime Framework, and indicates whether processing was 
			 successfully stopped. 
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-datatypes.xsd"/>
	<xs:include schemaLocation="mmi-attribs.xsd"/>
	<xs:include schemaLocation="mmi-elements.xsd"/>
	
	<xs:element name="cancelResponse">
		<xs:complexType>
			<xs:sequence>
				<xs:element ref="mmi:statusInfo" minOccurs="0"/>
			</xs:sequence>
			<xs:attributeGroup ref="mmi:group.allResponseEvents.attrib"/>
			<xs:attributeGroup ref="mmi:target.attrib"/>
		</xs:complexType>
	</xs:element>
</xs:schema>

PauseRequest.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:dcont="http://www.w3.org/2008/04/dcont" xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns:xs="http://www.w3.org/2001/XMLSchema" 
				targetNamespace="http://www.w3.org/2008/04/mmi-arch" attributeFormDefault="qualified" elementFormDefault="qualified">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 PauseRequest schema for MMI Life cycle events version 1.0. 
			 The PauseRequest event is sent by the Runtime Framework to pause processing in the 
			 Modality Component (e.g. to pause ASR or TTS/Playing). The Modality Component must 
			 return with a PauseResponse message. 
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-datatypes.xsd"/>
	<xs:include schemaLocation="mmi-attribs.xsd"/>
	
	<xs:element name="pauseRequest">
		<xs:complexType>
			<xs:attributeGroup ref="mmi:group.allEvents.attrib"/>
			<xs:attributeGroup ref="mmi:target.attrib"/>
			<xs:attributeGroup ref="mmi:immediate.attrib"/>
			<!-- no elements -->
		</xs:complexType>
	</xs:element>
</xs:schema>

PauseResponse.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:dcont="http://www.w3.org/2008/04/dcont" xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns:xs="http://www.w3.org/2001/XMLSchema"
				targetNamespace="http://www.w3.org/2008/04/mmi-arch" attributeFormDefault="qualified" elementFormDefault="qualified">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 PauseResponse schema for MMI Life cycle events version 1.0. 
			 The PauseResponse event is returned by the Modality Component in response to a 
			 PauseRequest from the Runtime Framework, and indicates whether processing was 
			 successfully paused. 
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-datatypes.xsd"/>
	<xs:include schemaLocation="mmi-attribs.xsd"/>
	<xs:include schemaLocation="mmi-elements.xsd"/>
	
	<xs:element name="pauseResponse">
		<xs:complexType>
			<xs:sequence>
				<xs:element ref="mmi:statusInfo" minOccurs="0"/>
			</xs:sequence>
			<xs:attributeGroup ref="mmi:group.allResponseEvents.attrib"/>
			<xs:attributeGroup ref="mmi:target.attrib"/>
		</xs:complexType>
	</xs:element>
</xs:schema>

ResumeRequest.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:dcont="http://www.w3.org/2008/04/dcont" xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns:xs="http://www.w3.org/2001/XMLSchema" 
				targetNamespace="http://www.w3.org/2008/04/mmi-arch" attributeFormDefault="qualified" elementFormDefault="qualified">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 ResumeRequest schema for MMI Life cycle events version 1.0. 
			 The ResumeRequest event is sent by the Runtime Framework to resume a previously suspended 
			 processing task of a Modality Component. The Modality Component must return with a 
			 ResumeResponse message. 
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-datatypes.xsd"/>
	<xs:include schemaLocation="mmi-attribs.xsd"/>
	
	<xs:element name="resumeRequest">
		<xs:complexType>
			<xs:attributeGroup ref="mmi:group.allEvents.attrib"/>
			<xs:attributeGroup ref="mmi:target.attrib"/>
			<xs:attributeGroup ref="mmi:immediate.attrib"/>
			<!-- no elements -->
		</xs:complexType>
	</xs:element>
</xs:schema>

ResumeResponse.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:dcont="http://www.w3.org/2008/04/dcont" xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns:xs="http://www.w3.org/2001/XMLSchema" 
				targetNamespace="http://www.w3.org/2008/04/mmi-arch" attributeFormDefault="qualified" elementFormDefault="qualified">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 ResumeResponse schema for MMI Life cycle events version 1.0. 
			 The ResumeResponse event is returned by the Modality Component in response to a 
			 ResumeRequest from the Runtime Framework, which requests that a previously 
			 suspended processing task be resumed. 
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-datatypes.xsd"/>
	<xs:include schemaLocation="mmi-attribs.xsd"/>
	<xs:include schemaLocation="mmi-elements.xsd"/>
	
	<xs:element name="resumeResponse">
		<xs:complexType>
			<xs:sequence>
				<xs:element ref="mmi:statusInfo" minOccurs="0"/>
			</xs:sequence>
			<xs:attributeGroup ref="mmi:group.allResponseEvents.attrib"/>
			<xs:attributeGroup ref="mmi:target.attrib"/>
		</xs:complexType>
	</xs:element>
</xs:schema>

ExtensionNotification.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:dcont="http://www.w3.org/2008/04/dcont" xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns:xs="http://www.w3.org/2001/XMLSchema" 
				targetNamespace="http://www.w3.org/2008/04/mmi-arch" attributeFormDefault="qualified" elementFormDefault="qualified">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 ExtensionNotification schema for MMI Life cycle events version 1.0. 
			 The extensionNotification event may be generated by either the Runtime Framework or a 
			 Modality Component and is used to communicate (presumably changed) data values to the 
			 other component. For example, if the VUI-MC has signaled a recognition result for a field 
			 displayed on the GUI, the Runtime Framework will use this event to send a command to the 
			 GUI-MC to update the GUI with the recognized value. 
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-datatypes.xsd"/>
	<xs:include schemaLocation="mmi-attribs.xsd"/>
	
	<xs:element name="extensionNotification">
		<xs:complexType>
			<xs:sequence>
				<xs:element name="data" type="mmi:anyComplexType"/>
			</xs:sequence>
			<xs:attributeGroup ref="mmi:group.allEvents.attrib"/>
			<xs:attributeGroup ref="mmi:target.attrib"/>
			<xs:attributeGroup ref="mmi:extension.name.attrib"/>
		</xs:complexType>
	</xs:element>
	
</xs:schema>
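A non-normative instance of the extensionNotification event might look as follows. The mmi:name attribute is assumed to be contributed by mmi:extension.name.attrib, and the payload inside mmi:data is arbitrary application markup, since the schema types it as mmi:anyComplexType; the field element shown is purely hypothetical.

```xml
<mmi:extensionNotification xmlns:mmi="http://www.w3.org/2008/04/mmi-arch"
                           mmi:context="URI-1"
                           mmi:source="IM-1"
                           mmi:target="GUI-MC-1"
                           mmi:name="fieldUpdate">
  <mmi:data>
    <!-- application-specific payload; structure is not constrained by the schema -->
    <field name="destination">Chicago</field>
  </mmi:data>
</mmi:extensionNotification>
```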

ClearContextRequest.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.w3.org/2008/04/mmi-arch" 
				attributeFormDefault="qualified" elementFormDefault="qualified">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 ClearContextRequest schema for MMI Life cycle events version 1.0
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-datatypes.xsd"/>
	<xs:include schemaLocation="mmi-attribs.xsd"/>
	<xs:include schemaLocation="mmi-elements.xsd"/>

	<xs:element name="clearContextRequest">
		<xs:complexType>
			<xs:sequence>
				<xs:element ref="mmi:media" minOccurs="0" maxOccurs="unbounded"/>
			</xs:sequence>
			<xs:attributeGroup ref="mmi:group.allEvents.attrib"/>
			<xs:attributeGroup ref="mmi:target.attrib"/>
		</xs:complexType>
	</xs:element>
</xs:schema>

ClearContextResponse.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:dcont="http://www.w3.org/2008/04/dcont" xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.w3.org/2008/04/mmi-arch" 
				attributeFormDefault="qualified" elementFormDefault="qualified">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 ClearContextResponse schema for MMI Life cycle events version 1.0
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-datatypes.xsd"/>
	<xs:include schemaLocation="mmi-attribs.xsd"/>
	<xs:include schemaLocation="mmi-elements.xsd"/>
	
	<xs:element name="clearContextResponse">
		<xs:complexType>
			<xs:sequence>
				<xs:element ref="mmi:media" minOccurs="0" maxOccurs="unbounded"/>
				<xs:element ref="mmi:statusInfo" minOccurs="0"/>
			</xs:sequence>
			<xs:attributeGroup ref="mmi:group.allResponseEvents.attrib"/>
			<xs:attributeGroup ref="mmi:target.attrib"/>
		</xs:complexType>
	</xs:element>
</xs:schema>

StatusRequest.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.w3.org/2008/04/mmi-arch" 
				attributeFormDefault="qualified" elementFormDefault="qualified">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 StatusRequest schema for MMI Life cycle events version 1.0
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-datatypes.xsd"/>
	<xs:include schemaLocation="mmi-attribs.xsd"/>
	<xs:include schemaLocation="mmi-elements.xsd"/>

	<xs:element name="statusRequest">
		<xs:complexType>
			<xs:attributeGroup ref="mmi:group.allEvents.attrib"/>
			<xs:attributeGroup ref="mmi:target.attrib"/>
			<!-- no elements -->
		</xs:complexType>
	</xs:element>
</xs:schema>

StatusResponse.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:dcont="http://www.w3.org/2008/04/dcont" xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.w3.org/2008/04/mmi-arch" 
				attributeFormDefault="qualified" elementFormDefault="qualified">
	<xs:annotation>
		<xs:documentation xml:lang="en">
			 StatusResponse schema for MMI Life cycle events version 1.0
		</xs:documentation>
	</xs:annotation>
	<xs:include schemaLocation="mmi-datatypes.xsd"/>
	<xs:include schemaLocation="mmi-attribs.xsd"/>
	<xs:include schemaLocation="mmi-elements.xsd"/>
	
	<xs:element name="statusResponse">
		<xs:complexType>
			<xs:sequence>
				<xs:element ref="mmi:statusInfo" minOccurs="0"/>
			</xs:sequence>
			<xs:attributeGroup ref="mmi:group.allResponseEvents.attrib"/>
			<xs:attributeGroup ref="mmi:target.attrib"/>
		</xs:complexType>
	</xs:element>
</xs:schema>

D Ladder Diagrams

D.1 Creating a Session

The following ladder diagram shows a possible message sequence for session creation. We assume that the Runtime Framework and an Interaction Manager session are already up and running. The user starts a multimodal session, for example by starting a web browser and fetching a given URL.

The initial document contains scripts which provide the modality component functionality (e.g. understanding XML-formatted life cycle events) and message transport capabilities (e.g. AJAX, though this depends on the exact system implementation).

After loading the initial documents (and scripts), the modality component implementation issues an mmi:newContextRequest message to the Runtime Framework. The Runtime Framework may load a corresponding markup document, if necessary (which could be SCXML), and initializes and starts the Interaction Manager.

In this scenario the Interaction Manager logic issues a number of mmi:startRequest messages to the various modality components. One message is sent to the graphical modality component (GUI) to instruct it to load an HTML document. Another message is sent to a voice modality component (VUI) to play a welcome message.

The voice modality component must (in this example) create a VoiceXML session. As VoiceXML 2.1 does not provide an external event interface, a CCXML session is used for external asynchronous communication. The voice modality component therefore uses the session creation interface of CCXML 1.0 to create a session and start a corresponding script. This script then makes a call to a phone at the user device (which could be a regular phone or a SIP soft phone on the user's device). This scenario illustrates the use of a SIP phone, which may reside on the user's mobile handset.

After successful setup of the CCXML session and the voice connection, the voice modality component instructs the CCXML browser to start a VoiceXML dialog, passing it a corresponding VoiceXML script. The VoiceXML interpreter executes the script and plays the welcome message. After execution of the VoiceXML script has finished, the voice modality component notifies the Interaction Manager using the mmi:done event.
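The mmi:startRequest messages in this sequence might, as a non-normative sketch, look as follows. The mmi:contentURL child and the qualified attribute names are assumptions by analogy with the other life cycle events; the normative definitions live in the StartRequest schema and mmi-attribs.xsd.

```xml
<mmi:startRequest xmlns:mmi="http://www.w3.org/2008/04/mmi-arch"
                  mmi:context="URI-1"
                  mmi:source="IM-1"
                  mmi:target="VUI-MC-1"
                  mmi:requestID="request-7">
  <!-- URL of the VoiceXML script to execute; in-line content could be sent instead -->
  <mmi:contentURL href="http://example.com/welcome.vxml"/>
</mmi:startRequest>
```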

[Figure: session creation ladder diagram]

D.2 Processing User Input

The next diagram gives an example of the possible message flow while processing user input. In this scenario the user wants to enter information using the voice modality component. To start voice input, the user presses the "push-to-talk" button. The "push-to-talk" button (which might be a hardware button or a soft button on the screen) generates a corresponding event when pushed. This event is issued as an mmi:extension event towards the Interaction Manager. The Interaction Manager logic sends an mmi:startRequest to the voice modality component. This mmi:startRequest message contains a URL which points to a corresponding VoiceXML script. The voice modality component again starts a VoiceXML interpreter using the given URL. The VoiceXML interpreter loads the document and executes it. Now the system is ready for user input. To notify the user of the availability of the voice input functionality, the Interaction Manager might send an event to the GUI upon receiving the mmi:startResponse event (which indicates that the voice modality component has started to execute the document); note, however, that this is not shown in the picture.

The VoiceXML interpreter captures the user's voice input and uses a speech recognition engine to recognize the utterance. The speech recognition result is represented as an EMMA document and sent to the Interaction Manager using the mmi:done message. The Interaction Manager logic then sends an mmi:extension message to the GUI modality component to instruct it to display the recognition result.
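A done notification carrying such a recognition result might, as a non-normative sketch, look as follows. The mmi:data container and the qualified attribute names are assumptions by analogy with the schemas above; only the EMMA namespace and the general shape of an EMMA interpretation are taken from the EMMA specification, and the destination element is a hypothetical application slot.

```xml
<mmi:done xmlns:mmi="http://www.w3.org/2008/04/mmi-arch"
          xmlns:emma="http://www.w3.org/2003/04/emma"
          mmi:context="URI-1"
          mmi:source="VUI-MC-1"
          mmi:target="IM-1">
  <mmi:data>
    <emma:emma version="1.0">
      <!-- recognition result with confidence score and modality annotations -->
      <emma:interpretation id="int1" emma:confidence="0.92"
                           emma:medium="acoustic" emma:mode="voice">
        <destination>Chicago</destination>
      </emma:interpretation>
    </emma:emma>
  </mmi:data>
</mmi:done>
```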

[Figure: user input processing ladder diagram]

E Glossary

  • CCXML: CCXML is designed to provide telephony call control support for dialog systems, such as VoiceXML.
  • Controller Document: A document that contains markup defining the interaction between the other documents. Such markup is called Interaction Manager markup.
  • Data Component: The Data Component is a sub-component of the Runtime Framework which is responsible for storing application-level data.
  • DCCI: Platform and language neutral programming interfaces that provide Web applications access to a hierarchy of dynamic properties representing device capabilities, configurations, user preferences and environmental conditions. (http://www.w3.org/TR/DPF/)
  • Interaction Manager: The Interaction Manager (IM) is the sub-component of the Runtime Framework that is responsible for handling all events that the other Components generate. It is responsible for synchronization of data and focus, etc., across different Modality Components as well as the higher-level application flow that is independent of Modality Components.
  • Life cycle events: The Multimodal Architecture defines basic life-cycle events which must be supported by all modality components. These events allow the Runtime Framework to invoke modality components and receive results from them. They form the basic interface between the Runtime Framework and the Modality components.
  • Modality Component: Modality Components are responsible for controlling the various input and output modalities on the device. Modality components may also be used to perform general processing functions not directly associated with any specific interface modality, for example, dialog flow control or natural language processing.
  • Nested components: A Runtime Framework and a set of Components can present themselves as a Component to a higher-level Framework. All that is required is that the Framework implement the Component API. The result is a "Russian Doll" model in which Components may be nested inside other Components to an arbitrary depth.
  • Runtime Framework: The Runtime Framework is responsible for starting the application and interpreting the Controller Document. It provides the basic infrastructure which the various Modality Components plug into and controls the communication among the other Constituents.
  • SCXML: "State Chart extensible Markup Language". SCXML provides a generic state-machine based execution environment based on CCXML and Harel State Tables.
  • Software Constituent: An architecturally significant entity in the architecture. Because we are using the term 'Component' to refer to a specific set of entities in our architecture, we will use the term 'Constituent' as a cover term for all the elements in our architecture which might normally be called 'software components'.
  • VoiceXML: VoiceXML is designed for creating audio dialogs that feature synthesized speech, digitized audio, recognition of spoken and DTMF key input, recording of spoken input, telephony, and mixed initiative conversations.

F Use Case Discussion

This section presents a detailed example of how an implementation of this architecture might work. For the sake of concreteness, it specifies a number of details that are not included in this document. It is based on the MMI use case document [MMIUse], specifically the second use case, which presents a multimodal in-car application for giving driving directions. Three languages are involved in the design view:

  1. The Controller/Interaction Manager markup language. We will not specify this language but will assume that it is capable of representing a reasonably powerful state machine.
  2. The graphical language. We will assume that this is HTML.
  3. The voice language. We will assume that this is VoiceXML. For concreteness, we will use VoiceXML 2.0 [VoiceXML], but will also note differences in behavior that might occur with a future version of VoiceXML.

The remainder of the discussion involves the run-time view. The numbered items are taken from the "User Action/External Input" field of the event table. The appended comments are based on the working group's discussion of the use case.

  1. User Presses Button on wheel to start application. Comment: The Runtime Framework submits to a pre-configured URL and receives a session cookie in return. This cookie will be included in all subsequent submissions. Now the Runtime Framework loads the DCCI framework, retrieves the default user and device profile and submits them to a (different) URL to get the Controller Document. UAPROF can be used for standard device characteristics (screen size, etc.), but it is not extensible and does not cover user preferences. The DCCI group is working on a profile definition that provides an extensible set of attributes and can be used here. Once the initial profile submission is made, only updates get sent in subsequent submissions. Once the Runtime Framework loads the Controller, it notes that it references both VoiceXML and HTML documents. It therefore makes sure that the corresponding Modality Components are loaded, and then sends a Prepare event to each Component. These events contain the Context ID and the Component-specific markup (VoiceXML or HTML). If the markup was included in the root document, it is delivered in-line in the event. However, if the main document referenced the Component-specific markup via URL, only the URL is passed in the event. Once the Modality Components receive the Prepare event, they parse their markup, initialize their resources (ASR, TTS, etc.) and return PrepareResponse events. The IM responds with Start events and the application is ready to interact with the user.
  2. The user interacts in an authentication dialog. Comment: The Runtime Framework sends the Start command to the VoiceXML Modality component, which executes a Form asking the user to identify himself. In VoiceXML 3.0, the Form might make use of speaker verification as well as speech recognition. Any database access or other back-end interaction is handled inside the Form. In VoiceXML 2.0, the recognition results (which include the user's identity) will be returned to the IM by the <exit> tag along with a namelist. This would mean that the specific logical Modality Component instance had exited, so that any further voice interactions would have to be handled by a separate logical Modality Component corresponding to a separate Presentation Document. In VoiceXML 3.0, however, it would be possible for the Modality Component instance to send a recognition result event to the IM without exiting. It would then wait for the IM to send it another event to trigger further processing. Thus in VoiceXML 3.0, all the voice interactions in the application could be handled by a single Markup Component (section of VoiceXML markup) and a single logical Modality Component.

    Recognition can be done locally, remotely (on the server) or distributed between the device and the server. By default, the location of event handling is determined by the markup. If there is a local handler for an event specified in the document, the event is handled locally. If not, the event is forwarded to the server. Thus if the markup specifies a speech-started event handler, that event will be consumed locally. Otherwise it will be forwarded to the server. However, remote ASR requires more than simply forwarding the speech-started event to the server because the audio channel must be established. This level of configuration is handled by the device profile, but can be overridden by the markup. Note that the remote server might contain a full VoiceXML interpreter as well as ASR capabilities. In that case, the relevant markup would be sent to the server along with the audio. The protocol used to control the remote recognizer and ship it audio is not part of the MMI specification (but may well be MRCP.)

    Open Issue: The previous paragraph about local vs remote event handling is retained from an earlier draft. Since the Modality Component is a black box to the Runtime Framework, the local vs remote distinction should be internal to it. Therefore the event handlers would have to be specified in the VoiceXML markup. But no such possibility exists in VoiceXML 2.0. One option would be to make the local vs remote distinction vendor-specific, so that each Modality Component provider would decide whether to support remote operations and, if so, how to configure them. Alternatively, we could define DCCI properties for remote recognition, but make it optional for vendors to support them. In either case, it would be up to the VoiceXML Modality Component to communicate with the remote server, etc. Newer languages, such as VoiceXML 3.0, could be designed to allow explicit markup control of local vs remote operations. Note that in the most complex case, there could be multiple simultaneous recognitions, some of which were local and some remote. This level of control is most easily achieved via markup, by attaching properties to individual grammars. DCCI properties are more suitable for setting global defaults.

    When the IM receives the recognition result event, it parses it and retrieves the user's preferences from the DCCI component, which it then dispatches to the Modality Components, which adjust their displays, output, default grammars, etc. accordingly. In VoiceXML 2.0, each of the multiple voice Modality Components will receive the corresponding event.

  3. Initial GPS input. Comment: DCCI configuration determines how often GPS update events are raised. On the first event, the IM sends the HTML Modality Component a command to display the initial map. On subsequent events, a handler in the IM markup determines if the automobile's location has changed enough to require an update of the map display. Depending on device characteristics, the update may require redrawing the whole map or just part of it.

    This particular step in the use case shows the usefulness of the Interaction Manager. One can imagine an architecture lacking an IM in which the Modality Components communicate with each other directly. In this case, all Modality Components would have to handle the location update events separately. This would mean considerable duplication of markup and calculation. Consider in particular the case of a VoiceXML 2.0 Form which is supposed to warn the driver when he goes off course. If there is an IM, this Form will simply contain the off-course dialog and will be triggered by an appropriate event from the IM. In the absence of the IM, however, the Form will have to be invoked on each location update event. The Form itself will have to calculate whether the user is off-course, exiting without saying anything if he is not. In parallel, the HTML Modality Component will be performing a similar calculation to determine whether to update its display. The overall application is simpler and more modular if the location calculation and other application logic is placed in the IM, which will then invoke the individual Modality Components only when it is time to interact with the user.
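To make the IM's role concrete, the off-course logic sketched above could be expressed as an SCXML fragment along the following lines. This is a sketch only: the state, event, and target names, and the movedEnough() condition, are all hypothetical, and the mapping of life cycle events onto SCXML send/event names is an assumption.

```xml
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="driving">
  <state id="driving">
    <!-- routine position updates: decide whether the map needs redrawing -->
    <transition event="gps.update" cond="movedEnough(_event.data)">
      <send target="GUI-MC-1" event="mmi.extensionNotification" namelist="position"/>
    </transition>
    <!-- off-course alert: trigger the warning dialog in the voice component -->
    <transition event="gps.offcourse" target="warning">
      <send target="VUI-MC-1" event="mmi.startRequest"/>
    </transition>
  </state>
  <state id="warning">
    <!-- return to normal monitoring once the warning dialog completes -->
    <transition event="mmi.done" target="driving"/>
  </state>
</scxml>
```

Placing this logic in the IM means neither Modality Component needs to evaluate location updates itself; each is invoked only when user interaction is actually required.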

    Note on the GPS. We assume that the GPS raises four types of events: On-Course Updates, Off-Course Alerts, Loss-of-Signal Alerts, and Recovery of Signal Notifications. The Off-Course Alert is covered below. The Loss-of-Signal Alert is important since the system must know if its position and course information is reliable. At the very least, we would assume that the graphical display would be modified when the signal was lost. An audio earcon would also be appropriate. Similarly, the Recovery of Signal Notification would cause a change in the display and possibly an audio notification. This event would also contain an indication of the number of satellites detected, since this determines the accuracy of the signal: three satellites are necessary to provide x and y coordinates, while a fourth satellite allows the determination of height as well. Finally, note that the GPS can assume that the car's location does not change while the engine is off. Thus when it starts up it will assume that it is at its last recorded location. This should make the initialization process quicker.

  4. User selects option to change volume of on-board display using touch display. Comment: The HTML Modality Component raises an event, which the IM catches. Depending on the IM language, it may be able to call the DCCI interface directly (e.g. as executable content in SCXML). If it cannot, the IM would generate an event to modify the relevant DCCI property, and the Runtime Framework (Adapter) would be responsible for converting it into the appropriate function call, which has the effect of resetting the output volume.
  5. User presses button on steering wheel (to start recognition) Comment: The interesting question here is whether the button-push event is visible at the application level. One possibility is that the button-push simply turns on the mike and is thus invisible to the application. In that case, the voice modality component must already be listening for input with no prespeech timeout set. On the other hand, if there is an explicit button-push event, the IM could catch it and then invoke the speech component, which would not need to have been active in the interim. The explicit event would also allow for an update of the graphical display.
  6. User says destination address. (May improve recognition accuracy by sending grammar constraints to server based on a local dialog with the user instead of allowing any address from the start) Comment: Assuming V3 and explicit markup control of recognition, the device would first perform local recognition, then send the audio off for remote recognition if the confidence was not high enough. The local grammar would consist of 'favorites' or places that the driver was considered likely to visit. The remote grammar would be significantly larger, possibly including the whole continent.

    When the IM is satisfied with the confidence levels, it ships the n-best list off to a remote server, which adds graphical information for at least the first choice. The server may also need to modify the n-best list, since items that are linguistically unambiguous may turn out to be ambiguous in the database (e.g., "Starbucks"). Now the IM instructs the HTML component to display the hypothesized destination (first item on n-best list) on the screen and instructs the speech component to start a confirmation dialog. Note that the submission to the remote server should be similar to the <data> tag in VoiceXML 2.1 in that it does not require a document transition. (That is, the remote server should not have to generate a new IM document/state machine just to add graphical information to the n-best list.)

  7. User confirms destination. Comment: Local recognition of grammar built from n-best list. The original use case states that the device sends the destination information to the server, but that may not be necessary since the device already has a map of the hypothesized destination. However, if the confirmation dialog resulted in the user choosing a different destination (i.e., not the first item on the n-best list), it might be necessary to fetch graphical/map information for the selected destination. In any case, all this processing is under markup control.
  8. GPS Input at regular intervals. Comment: On-Course Updates. Event handler in the IM decides if location has changed enough to require update of graphical display.
  9. GPS Input at regular intervals (indicating driver is off course) Comment: This is probably an asynchronous Off-Course Alert, rather than a synchronous update. In either case, the GPS determines that the driver is off course and raises a corresponding event which is caught by the IM. Its event handler updates the display and plays a prompt warning the user. Note that both these updates are asynchronous. In particular, the warning prompt may need to pre-empt other audio (for example, the system might be reading the user's email back to him.)
  10. N/A Comment: The IM sends a route request to server, requesting it to recalculate the route based on the new (unexpected) location. This is also part of the event handler for the off-course event. There might also be a speech interaction here, asking the user if he has changed his destination.
  11. Alert received on device based on traffic conditions Comment: This is another asynchronous event, just like the off-course event. It will result in asynchronous graphical and verbal notifications to the user, possibly pre-empting other interactions. The difference between this event and the off-course event is that this one is generated by the remote server. To receive it, the IM must have registered for it (and possibly other event types) when the driver chose his destination. Note that the registration is specific to the given destination since the driver does not want to receive updates about routes he is not planning to take.
  12. User requests recalculation of route based on current traffic conditions Comment: Here the recognition can probably be done locally; the recalculation of the route is then done by the server, which sends updated route and graphical information to the device.
  13. GPS Input at regular intervals Comment: On-Course updates as discussed above.
  14. User presses button on steering wheel Comment: Recognition started. Whether this is local or remote recognition is determined by markup and/or DCCI defaults established at the start of application. The use case does not specify whether all recognition requires a button push. One option would be to require the button push only when the driver is initiating the interaction. This would simplify the application in that it would not have to be listening constantly to background noise or side chatter just in case the driver issued a command. In cases where the system had prompted the driver for input, the button push would not be necessary. Alternatively, a special hot-word could take the place of the button push. All of these options are compatible with the architecture described in this document.
  15. User requests new destination by destination type while still depressing button on steering wheel (may improve recognition accuracy by sending grammar constraints to server based on a local dialog with the user) Comment: Local and remote recognition as before, with the IM sending the n-best list to the server, which adds graphical information for at least the first choice.
  16. User confirms destination via a multiple interaction dialog to determine exact destination Comment: Local disambiguation dialog, as above. At the end, user is asked if this is a new destination.
  17. User indicates that this is a stop on the way to original destination Comment: Device sends request to server, which provides updated route and display info. The IM must keep track of the original destination so that it can request a new route to it after the driver reaches his intermediate destination.
  18. GPS Input at regular intervals Comment: As above.

G References

CDF
"Compound Document by Reference Framework 1.0", Timur Mehrvarz et al., editors. World Wide Web Consortium, 2006.
CCXML
"Voice Browser Call Control: CCXML Version 1.0" , R.J. Auburn, editor, World Wide Web Consortium, 2005.
DCCI
"Delivery Context Interfaces (DCCI) Accessing Static and Dynamic Properties" , Keith Waters, Rafah Hosn, Dave Raggett, Sailesh Sathish, and Matt Womer, editors. World Wide Web Consortium, 2004.
EMMA
"Extensible MultiModal Annotation markup language (EMMA)", Michael Johnston et al., editors. World Wide Web Consortium, 2005. EMMA is an XML format for annotating application-specific interpretations of user input with information such as confidence scores, time stamps, input modality and alternative recognition hypotheses.
Galaxy
"Galaxy Communicator" Galaxy Communicator is an open source hub and spoke architecture for constructing dialogue systems that was developed with funding from Defense Advanced Research Projects Agency (DARPA) of the United States Government.
MMIF
"W3C Multimodal Interaction Framework" , James A. Larson, T.V. Raman and Dave Raggett, editors, World Wide Web Consortium, 2003.
MMIUse
"W3C Multimodal Interaction Use Cases", Emily Candell and Dave Raggett, editors. World Wide Web Consortium, 2002.
SCXML
"State Chart XML (SCXML): State Machine Notation for Control Abstraction" , Jim Barnett et al. editors. World Wide Web Consortium, 2006.
SMIL
"Synchronized Multimedia Integration Language (SMIL 2.1)" , Dick Bulterman et al. editors. World Wide Web Consortium, 2005.
SVG
"Scalable Vector Graphics (SVG) 1.1 Specification" , Jon Ferraiolo et al. editors. World Wide Web Consortium, 2003.
VoiceXML
"Voice Extensible Markup Language (VoiceXML) Version 2.0" , Scott McGlashan et al. editors. World Wide Web Consortium, 2004.
XHTML
"XHTML 1.0 The Extensible HyperText Markup Language (Second Edition)" , Steven Pemberton et al. editors. World Wide Web Consortium, 2004.
XMLSig
"XML-Signature Syntax and Processing" Eastlake et al., editors. World Wide Web Consortium, 2001.