My Notes from the Hong Kong W3C/WAP public workshop
These are my personal reflections on the meeting, and nobody else's
opinion. To get a less idiosyncratic idea of what happened, consult the
minutes.
The workshop was 5-6 September. There was a lot of participation from
groups who are members of both consortia, and a few from groups who are only
members of one or the other. A major focus for many participants was
Interactive Voice Response systems - at the moment typically automated
services provided by telephone.
Major discussion topics were
- the integration of the WAP content architecture (small screens, text and
basic images, expected to have spec-based style implementations, but with
variation in the properties available) with the voice content architecture
(voice only).
- Single authoring (write once, read on any kind of device) - what it
means, whether it is possible, and how to do it.
- Multiple modality - having text, graphics, sound, and other forms
available in different systems.
- The need for use cases and usability research and data to drive
development in the right direction for users.
- Where the work should be done (it was agreed that getting it done is
more important than where).
My perspective on the results was
- that single authoring is seen as a goal, but there is a perception among
many of the participants (who are in general large organisations) that it
would only be available to large organisations who could afford to build
special-purpose tools for their own needs. It was agreed that the ability
to provide extra presentation control for specific media is important for
large content providers.
- Architecturally it is possible to converge the WAP and Voice areas, but
the typical use cases for each are different - for example voice output
needs to be short, whereas it is easy for users to provide a lot of input
in one transaction. For mobile devices it is possible to provide more
output in a usable format, but it is difficult to do anything but very
simple input transactions. So it is not clear whether the goal is a single
language that can be directly used, a convergence between WML and VXML so
that they can be used in both milieux, or a different kind of source and
use negotiation (for example CC/PP) to select an appropriate
transformation (eg XSLT) for the target medium and user requirements;
there is a rough sketch of this approach after this list. (My preference
is strongly for this last approach, since it scales into general
accessibility and device independence. It puts some load on the author to
learn to produce better content, and a lot of load on tool developers to
make authoring tools with better support for device independence, but
that is what I am trying to get anyway...)
- Multi-modality is seen by many of the participants as essentially new.
However, work has been done in this area for a long time, in fields such
as education and accessibility for people with disabilities.
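To make the transformation idea concrete, here is a minimal sketch of the
last approach. It assumes a hypothetical device-neutral source vocabulary
(page and item elements, which are not any real W3C language) and shows an
XSLT 1.0 stylesheet that renders it as WML for a WAP browser; a server
would select this stylesheet, or a VoiceXML-producing equivalent, after
inspecting the client's CC/PP profile.

  <?xml version="1.0"?>
  <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:output method="xml"/>

    <!-- The whole page becomes one WML deck with a single card,
         since WAP output needs to stay short and simple -->
    <xsl:template match="page">
      <wml>
        <card id="main" title="{@title}">
          <xsl:apply-templates/>
        </card>
      </wml>
    </xsl:template>

    <!-- Each source item becomes a short paragraph -->
    <xsl:template match="item">
      <p><xsl:value-of select="."/></p>
    </xsl:template>
  </xsl:stylesheet>

A voice version would be a second stylesheet over the same source, mapping
the same page and item elements to VoiceXML prompts; the source stays
single-authored and the media-specific control lives in the
transformations.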
So... WAI has quite a lot to offer - in the area of authoring for
multi-modality, and usage scenarios / usability. We should follow this work,
because mobile devices are important to people who are not housebound, and
because for people who are deaf or blind, for example, they provide
opportunities that were never available when people relied on a fixed public
telephone. From a wider
perspective it is important to make sure this stuff fits well with the
architecture we know well, and to look at what we can learn from it to apply
to that architecture.
Charles McCN, $Date: 2000/09/12 10:50:44 $