Multimodal Authoring systems
Discussion of what makes a valuable tool and what is desired.
How clients and servers choose a specific modality, and what that choice implies.
Timelines: semantic support in the markup.
Note: this is to develop APPLICATIONS.
Dialogue steps: the storyboard approach
Video, audio, etc.
Binding input modalities to events, interactions, etc.
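A minimal sketch of what such a binding layer might look like. The `Binder` class, modality names, and input tokens are illustrative assumptions, not part of any standard markup or API:

```python
class Binder:
    """Maps (modality, input token) pairs to application event handlers."""

    def __init__(self):
        self._bindings = {}

    def bind(self, modality, token, handler):
        # e.g. bind("voice", "preview", on_preview)
        self._bindings[(modality, token)] = handler

    def dispatch(self, modality, token, *args):
        handler = self._bindings.get((modality, token))
        if handler is None:
            return None  # no binding declared for this input
        return handler(*args)


# Usage: the same application event can be reached from several modalities.
binder = Binder()
binder.bind("voice", "preview", lambda item: f"previewing {item}")
binder.bind("pointer", "click-preview", lambda item: f"previewing {item}")
print(binder.dispatch("voice", "preview", "jazz CD"))  # previewing jazz CD
```

The point of declaring bindings as data rather than procedural code is that the system (client or server) can later decide where each binding lives.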
Action rules for state control
Introduction: entry to the shop
Selection of items: choose CD type
Preview of choice
Add to shopping basket
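The storyboard steps above can be sketched as action rules over states. The state and action names below mirror the four dialogue steps; the "reject" back-edge is an assumption added for illustration:

```python
# Action rules for state control: each rule maps (state, action) -> next state.
TRANSITIONS = {
    ("introduction", "enter_shop"): "selection",
    ("selection", "choose_cd_type"): "preview",
    ("preview", "accept"): "basket",
    ("preview", "reject"): "selection",  # assumed back-edge to re-select
}

def step(state, action):
    """Apply one action rule; unknown actions leave the state unchanged."""
    return TRANSITIONS.get((state, action), state)

# Walk the happy path through the storyboard.
state = "introduction"
for action in ["enter_shop", "choose_cd_type", "accept"]:
    state = step(state, action)
# state is now "basket"
```

Expressing flow control as declarative rules like this is one way to avoid the per-application "magic" glue code noted below.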
AS's, or run-time agents, all seem to require some "magic" to link inputs, data, flow control, etc.
This is typically done by writing specific code (scripts, Java, ...), and no tooling help exists.
There may be other, equally valid AS models.
Persistent semantic information across layers or states.
Procedural code limits the system's ability to decide where bindings live: in the server, the client, etc.
Who controls binding?
System: client / server.
Must respond to, and understand, connected/disconnected states.
Lack of modality preferences:
Fixed single modality (e.g., blind users)
Mix: visual first, aural next, etc.
Choice of sync points, presentation, or action rules, by profile.
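A sketch of how per-user profiles could drive the modality choice. The profile names and fields are assumptions for illustration, not a defined profile format:

```python
# Each profile lists output modalities in preference order.
PROFILES = {
    "blind": {"output": ["aural"]},              # fixed single modality
    "default": {"output": ["visual", "aural"]},  # visual first, aural next
}

def choose_output(profile_name, available):
    """Pick the first preferred modality that the client supports."""
    prefs = PROFILES.get(profile_name, PROFILES["default"])["output"]
    for modality in prefs:
        if modality in available:
            return modality
    return None  # no supported modality matches the profile

choose_output("blind", {"visual", "aural"})  # "aural"
choose_output("default", {"aural"})          # falls back to "aural"
```

The same lookup could equally select sync points or action rules by profile.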
Output should allow transcoders to add value.
Also requires message passing, including push capabilities.
Authoring tools will also need support for handling asynchronous events on clients, and for parameterising senders.
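A minimal sketch of server-push message passing with asynchronous client handlers, using Python's standard `asyncio`. The `Channel` class and message shape are assumptions, not an existing API:

```python
import asyncio

class Channel:
    """Lets a sender push messages to subscribed asynchronous client handlers."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, handler):
        self._subscribers.append(handler)

    async def push(self, message):
        # Deliver to all subscribers concurrently, so one slow
        # client does not block the sender.
        await asyncio.gather(*(h(message) for h in self._subscribers))

async def main():
    received = []
    channel = Channel()

    async def client_handler(msg):  # async event handler on the client side
        received.append(msg)

    channel.subscribe(client_handler)
    await channel.push({"event": "item-added", "item": "jazz CD"})
    return received

print(asyncio.run(main()))
```

Parameterisation for senders could then mean attaching metadata (modality, sync point, profile) to each pushed message.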