W3C/WAP Workshop on the Multimodal Web
The convergence of W3C and WAP standards, and the emerging
importance of speech recognition and synthesis for the mobile Web.
We have received position papers from the following
people/organizations. If you have sent a paper and it does not
appear below, please contact Jim Larson, <firstname.lastname@example.org>.
Jim Larson, Intel Architecture Lab
Dr. Volker Steinbiss, CTO Philips
Ted Wugofski, Phone.com
Shin'ichi Matsui, Matsushita
Casper Harnung, ZoomON AB
Steve Ehrlich, Peter Monaco and Rajesh
Scott McGlashan, PipeBeach
Kazunari Kubota, NTT DoCoMo
John Graff, Parlant Technology
Stéphane H. Maes and T. V. Raman, IBM
Toshihiko Yamakami, ACCESS Co.
Daniel K. Appelquist, Digital Channel Partners
Sachiko Yoshihama, SEC Co. Ltd.
Andrew Scott, Telstra Research
Adrian Lincoln, Chris Bryant &
Charlie Debney, Vodafone
Naoko Ito, NEC
Doug Dominiak, Motorola
Hidetaka Ohto, Matsushita
Marianne Hickey, Hewlett Packard
Thomas Lee, William W. Song, Ken
Sung, Frank Tong, E-Business Technology Institute, The
University of Hong Kong
Jon C.S. Wu, et al., Philips
Toni Penttinen, Ericsson
Charles Hemphill & Mike Robin, Conversa
Bernhard Suhm, BBN Technologies -
Speech and Language Processing Department
Bennett Marks, Nokia
David Pearce et al., Motorola & ETSI Aurora
Adam Goodfellow & David Hitchman, Microsoft
Here are the position statements you have given us:
Alastair Angwin, IBM UK Laboratories
- I am very interested in enabling the Internet in all its
aspects for mobile devices. I hope to contribute in this meeting
to a better understanding of mobile/wireless requirements, and to
see ubiquitous access to the Internet move a little closer
through subsequent actions.
Daniel Appelquist, Digital Channel Partners
- Digital Channel Partners is an E-Business consultancy that
provides consulting services across the spectrum of business
strategy, design and engineering. Our client base of ".coms" and
".corps" in the publishing, financial and entertainment sectors
needs to be exposed to the latest thinking about where the "Web"
is heading. How does their ownership of a "brand" and proprietary
content mesh with the new paradigms for content distribution born
in an "Internet Everywhere" world? What new software development
and deployment methods must be explored in order to meet these
changing needs? A position paper will be forthcoming which will
attempt to flesh out these and other relevant questions.
- I represent Sun in the WAP Forum and on the HTML WG, where I am
a co-editor of XHTML Basic. I have a keen interest in Sun's
position that interoperability with the existing Web and Internet
should be the primary goal of any mobile-targeted solution, and
that this rarely means that something new has to be invented.
David Bevis, IBM UK
- Mobile communications, specifically as it relates to WAP and
future multimedia applications within one or more application
contexts where asynchronous events require interaction.
Ruben Brave, Catchy.net
- Catchy.net is a Dutch mobile internet solutions provider,
which means that our organisation provides consultancy,
project management and application services. Catchy.net is
convinced that voice applications will play a significant role in
creating usable and successful mobile Internet solutions.
Thomas Bridgman, IBM Research
- I am a member of the Pervasive Computing Solutions group at
IBM Research, and one of IBM's representatives to the WAP Forum's
Wireless Applications working group. I am interested in the
convergence work between WAP and W3C standards.
Doug Dominiak, Motorola
- Motorola is interested in furthering the convergence of the
WAP Application Environment with the application environment of
the Web. There are many opportunities to leverage existing
technologies, to preserve application developers' investment and
leverage existing expertise. We also feel that voice browsing,
and, generally, multimodal applications, present a great
opportunity to help realize the full potential of the Web.
Steve Ehrlich, Nuance
- Interested in speech-in, visual-out interfaces from both a
technology and a user-interface perspective.
Adam Goodfellow, Microsoft
- Member of WAP Forum's Wireless Application Environment
(WAP-WAG-WAE) Next Generation core team defining next generation
WAE specifications. Active interest in convergence of WAP
specifications with Internet (W3C/IETF/ECMA) standards.
John Graff, Parlant Technology Inc
- Development of technology for parental involvement in K-12 education.
Tao Huang, Intel China Research Center
- In Interactive Voice Response systems: how to exploit mobility
and the opportunities for multimodal systems, and what challenges
are presented by the convergence of mobile and Interactive Voice
Response systems.
Naoko Ito, NEC
- NEC submitted a Note to the W3C introducing the concept of
navigation, which specifies a scenario for exposing data to the
user, and proposing XDNL (XML Document Navigation Language),
which describes the navigation. XDNL can produce a document flow,
or a dialog, from an XML document. Together with XSLT, it can
produce HTML, WML, or VoiceXML. XDNL itself may still be immature,
but we believe it can be a good starting point for addressing the
issue of device-independent authoring.
Soon Kon KIM, Microsoft
- I'm responsible for solution development under WAP and other
related wireless data applications at Microsoft, and this
workshop will help me share information.
Kazuhiro Kitagawa, W3C/Keio
- A multimodal interface is very important for Web appliances.
W3C should keep in touch with these technologies.
Kazunari Kubota, NTTDoCoMo
- We are interested in the convergence of W3C technologies and
WAP technologies, such as the XHTML Basic and CSS2 development
approach, and in how to integrate VoiceXML and SMIL into WML in
the next stage.
Charles Hemphill, Conversa
- Voice Enabling the Internet: The Web has become successful due
to standards, the simple point-and-click interface, the visual
nature of the content, and the ability to search for content. Due
to this success, we have begun to see the emergence of devices
designed to access this content. Many of these new devices
include reduced screens and limited support for the
point-and-click paradigm. WML was created specifically to address
devices with limited bandwidth and small screens. VoiceXML was
created for devices with no screen at all. What is the role of
voice for these devices, and what is its impact on the
corresponding markup languages? How can we best leverage existing
content? How can we best share approaches across the various
markup languages? We consider these questions and others as we
explore how we might voice-enable the Internet. (longer position
paper previously submitted via e-mail)
Marianne Hickey, Hewlett Packard Laboratories
- Chair of the multimodal sub-group within the W3C Voice Browser
activity. I am interested in the interplay between spoken
dialogue and other input and output modalities, such as a
graphical user interface (GUI). How can different modalities
complement each other, as alternatives or with simultaneous
input/output? How do we make it easy to create multimodal
services? How do we build services where people can move
seamlessly between using speech, a GUI, or a combination of both?
David Hitchman, Microsoft
- Program Manager for Microsoft Mobile Explorer, currently a
dual-mode (WAP and HTML) browser as shipped on the Sony Z5;
working on future product roadmaps and specifications.
Chen-Ning Hsi, Philips Research East Asia - Taipei
- Our main interest in content adaptation is ensuring proper
content delivery according to terminal capabilities/restrictions,
user characteristics/preferences, and user location/context.
Jim Larson, Intel Architecture Lab
- Chairman, W3C Voice Browser Working Group; member, ETSI Aurora
project on Distributed Speech Recognition; manager of Advanced
Human I/O at the Intel Architecture Labs. I teach courses in
building speech applications at the Oregon Graduate Institute and
at Portland State University.
Adrian Lincoln, Motorola Australian Research Centre
- A key focus of my activities is the delivery and presentation
of content to, but not limited to, mobile-oriented devices. My
interest is in helping Vodafone understand the issues that are
being considered and how we can work together to form a broadly
adopted standard.
Haoling Liu, NTT DoCoMo
- We are interested in the convergence of W3C technologies and
WAP technologies, such as the XHTML Basic and CSS2 development
approach, and in how to integrate VoiceXML and SMIL into WML in
the next stage.
Stephane Maes, IBM T.J. Watson Research Center
- My research interests and professional responsibilities
include speech, multi-channel and multi-modal user interfaces,
technologies, middleware and development environments. My core
specialty is in speech technologies: conversational engines and
algorithms (speech recognition, speaker recognition, natural
language technologies and dialog management). I have several
years of technical expertise in the domains directly addressed by
the W3C/WAP workshop on the multi-modal web. I believe that I can
contribute and significantly guide the directions, requirements
and next steps that will be discussed.
Bennett Marks, Nokia
- I am working on multi-modal presentation of combined text and
audio, using WAP screens as context for audio interactions.
Larry Masinter, AT&T Labs
- I am responsible for the design and implementation of
device-independent services. I have been active in the
development of many of the technologies for multipurpose content
and content adaptation.
Shin'ichi Matsui, Matsushita Electric Industrial Co., Ltd.
- Matsushita (Panasonic) manufactures and sells Web-aware
appliances such as mobile phones and digital TV sets. I am
interested in the standards environment for those products,
especially XHTML Basic, CC/PP, SMIL, etc.
Charles McCathieNevile, W3C
- I am a member of all the WAI guidelines working groups, and I
am interested in authoring practices, device-independent
interaction design, multimodal multimedia, and general
accessibility.
Hidetaka Ohto, W3C/Panasonic
- My area of interest is how to apply Web technologies to home
appliances such as mobile devices and TV sets. Therefore I am
interested in the convergence between WAP and W3C. In particular,
I would like to clarify the similarities and differences between
them, and to find more generic solutions as much as possible.
David Pearce, Motorola (& ETSI Aurora)
- I am the chairman of the ETSI Aurora working group, which is
developing standards for distributed speech recognition (DSR).
DSR will enable speech-driven interfaces to the mobile Web; its
architecture allows the computationally intensive parts of a
multimodal interface to be performed on a remote server. I will
represent the Aurora working group, which now has a subgroup in
Applications and Protocols looking at issues similar to the
workshop's: how to build end-to-end services for the mobile Web
with speech and other interfaces. We have already had very useful
liaisons with the W3C Voice Browser working group; we share a
common vision and wish to build on this.
Dave Raggett, W3C/HP
- I am very interested in multimodal for WAP 3G and beyond. How
does multimodal interact with SMIL? How can we move forward on
this?
Andrew Scott, Telstra
- I have led the WML Generic Content Authoring Guidelines
(GCAG) effort within the WAP Forum's Developer Group (WDG). I'm
based at the Telstra Research Labs, and have worked on
WAP-related activities since 1997. In addition to WAP, I've
worked on transcoding generic web content, voice browsers, and
with the usability team at Telstra.
William Song, E-Business Technology Institute, Hong Kong
- We are developing a wireless markup language which is based on
XML and compatible with WML. Will write a position paper.
Peter Stark, Ericsson
- Active in the WAP WAE working group. For my position, see the
email@example.com mailing list.
Jiming Sun, Intel
- I am interested in supporting multi-type data streams under the
WAP infrastructure, in particular the integration of voice and
ink (pen) data. Should there be an inkXML? How do we collect and
generate simultaneous voice and ink data, which will be used
extensively in the wireless industry in the near future?
Yoichiro Tomari, Mitsubishi Electric Corporation
- We are developing browsers for cellular phones and car
navigation systems. I am interested in convergence between WML
and XHTML, and in multimodal user interfaces.
Babar Uddin, AllToSearch
- See http://alltosearch.com
Markku Vartiainen, Phone.com
- I'm working on WAP-Web convergence issues. In particular, my
aspirations are in the convergence of WMLScript with the ECMA
standards. It is assumed that the next-generation WMLScript will
be more compliant with ECMAScript 3rd Edition.
- ECMAScript 3rd Edition Mobile Profile.
William Wang, InfoTalk Corporation Ltd.
- About our company: our company develops multilingual
conversational speech recognition technology and natural language
technology to facilitate easy communication of information
between humans and machines over the telephone. We focus mainly
on the development of telephony speech recognition systems for
various Asian languages and dialects. Our systems and
technologies have been successfully deployed by companies such as
Cable & Wireless Hong Kong Telecom, Taiwan Paging Networks,
Bank of China, and the Hong Kong and China Gas Company. For more
information on our company, please browse our web site at
http://www.infotalkcorp.com
About our participation in the Workshop: voice browsing is a
relatively new concept in Asia. However, with the rapid growth of
Internet and mobile phone usage in various Asia-Pacific countries
in general, and China in particular, we foresee tremendous growth
of this market segment in the near future. In fact, just
recently, the first Chinese voice portal in Asia was successfully
launched using our Chinese conversational speech recognition
technology. We are also working with many partners in various
Asia-Pacific regions to launch similar services in the near
future. In this workshop, we seek to discuss with and to learn
from other industry experts how we can contribute to the
standardization and growth of this market.
Wieslawa Wajda, Alcatel
- The ETSI STQ Aurora Project would like to cooperate with W3C
and the WAP Forum on harmonising protocol elements and a
multimodal markup language for Distributed Speech Recognition
applications.
Wei Wei, Microsoft
- WAP's protocol development status and evolution; the W3C
forum's development status and evolution; how we can match W3C
and WAP, especially cooperating with our partners (Ericsson,
Samsung, Sony, and other WAP/W3C ICPs), to provide smoothly
developed networks.
Jon C.S. Wu, Philips Research East Asia
- Interested in content adaptation for multimodal Web/WAP pages
concerning terminal capabilities, user preferences, and user
context.
Toshihiko Yamakami, ACCESS
- ACCESS is a leading solution provider for non-PC embedded
Internet software. With 17 years of experience in embedded
network software, ACCESS licenses its network software to
cellular phones, game consoles, PDAs and so on. For the next
generation of information appliances, we are interested in how
the multimodal Web will evolve with non-PC devices.
Sachiko Yoshihama, SEC
- As I've been working on WAP products, I'm especially interested
in the convergence between VoiceXML and WML. I'm also interested
in content transformation, which I believe is one of the most
realistic approaches to encourage migration to new content.