MBUI Use Cases

The text of this page is outdated and will be elaborated further here: Introduction to Model-Based User Interface Design


This section presents a set of compelling use cases for which model-based design of UIs is particularly suitable.

Enabling Advanced User-Service Interactions in a Digital Home (1)

A digital home is a residence with devices that are connected through a computer network. It has a network of consumer electronics, mobile, and PC devices that cooperate transparently and simplify usability at home. All computing devices and home appliances conform to a set of interoperable standards so that everything can be controlled by means of an interactive system. A digital home system can offer different electronic services, including but not limited to:

  • Manage, synchronize and store personal content, family calendars and files.
  • Upload, play and show music, videos and pictures on a Home Screen, Home PDA or TV, using a mobile phone as a remote control.
  • Use a mobile phone as a remote control for other devices, for example a thermostat.
  • Know who is at home, see your home through a webcam and be notified if the intruder alarm goes off.
  • Be alerted by a heat detector if there is a fire.

These functionalities should be made available through context-sensitive front-ends. Such front-ends have to be capable of adapting to different computing platforms (touch points, web, mobile, TV, Home PDA, DECT handset, voice portal …), users (children, teenagers, adults, the elderly, disabled people, …) and environments or situations (at home, away, at night, while music is playing, …). The final aim is to provide a seamless and unified user experience, which is critical in the digital home domain. In this respect, different automatic UI adaptations are possible (see the sketch after the following list):

  • Zooming the UI if the user has visual problems.
  • Enabling multimodal interactions (e.g. voice interaction) because the user is doing another task at the same time.
  • Distributing the UI to use different devices at the same time, for example, interacting with the TV while using a mobile phone.
  • Any combination of the above.
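
As an illustration, the following is a minimal sketch (in TypeScript, with hypothetical names such as UserContext and chooseAdaptations that are not part of any MBUI standard) of how a digital home front-end might derive such adaptations from the context of use:

```typescript
// Hypothetical context model for a digital home front-end.
// All names are illustrative, not part of any MBUI standard.

interface UserContext {
  visualImpairment: boolean;   // user dimension
  handsBusy: boolean;          // environment/situation dimension
  availableDevices: string[];  // platform dimension, e.g. ["tv", "mobile"]
}

type Adaptation = "zoom-ui" | "enable-voice" | "distribute-ui";

// Derive the set of automatic UI adaptations from the context of use.
function chooseAdaptations(ctx: UserContext): Adaptation[] {
  const adaptations: Adaptation[] = [];
  if (ctx.visualImpairment) adaptations.push("zoom-ui");
  if (ctx.handsBusy) adaptations.push("enable-voice");
  if (ctx.availableDevices.length > 1) adaptations.push("distribute-ui");
  return adaptations; // any combination of the above is possible
}

// Example: an adult cooking (hands busy) with TV and mobile phone available.
console.log(chooseAdaptations({
  visualImpairment: false,
  handsBusy: true,
  availableDevices: ["tv", "mobile"],
}));
// -> ["enable-voice", "distribute-ui"]
```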

Dynamic variations in the computing platform, user and environment dimensions of the context of use will be automatically accommodated, thus supporting users in a more effective, personalized and consistent way. Furthermore, the engineering costs of developing context-sensitive front-ends for the digital home will be reduced and the time to market will be shortened.

Migratory User Interfaces (2)

Migratory user interfaces are interactive applications that can transfer among different devices while preserving their state, thereby giving the user the sense of an uninterrupted activity. The basic idea is that devices that can be involved in the migration process should run a migration client, which allows the migration infrastructure to find such devices and know their features. Such a client is also able to send the trigger event to the migration server when it is activated by the user. At that point the state of the source interface is transmitted to the server in order to be adapted and associated with the new UI automatically generated for the target device.
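
The message flow described above could be sketched as follows; the message types (DeviceAnnouncement, MigrationTrigger, StateTransfer) are hypothetical and only illustrate the roles of the migration client and server:

```typescript
// Hypothetical message types for the migration flow described above.

// 1. A migration client announces its device to the migration server.
interface DeviceAnnouncement {
  deviceId: string;
  capabilities: { screenWidth: number; modalities: string[] };
}

// 2. The user activates migration on the source device.
interface MigrationTrigger {
  sourceDeviceId: string;
  targetDeviceId: string;
}

// 3. The state of the source UI is sent to the server, which adapts it
//    and associates it with the UI generated for the target device.
interface StateTransfer {
  sessionId: string;
  uiState: Record<string, unknown>; // e.g. form fields, scroll position
}

function migrate(trigger: MigrationTrigger, state: StateTransfer): string {
  // A real infrastructure would adapt the UI here; this sketch only
  // reports what would happen.
  return `Session ${state.sessionId} moved from ${trigger.sourceDeviceId} ` +
         `to ${trigger.targetDeviceId}, preserving ${Object.keys(state.uiState).length} state entries`;
}

console.log(migrate(
  { sourceDeviceId: "desktop-1", targetDeviceId: "phone-7" },
  { sessionId: "s42", uiState: { searchQuery: "pizza", scrollY: 300 } },
));
```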

Replacing minimalistic UIs with a universal interaction device in production environments (3)

Mobile and multimodal user interfaces will play an even more central role in human-machine-interaction in future production plants. Mobile operator systems will replace the device-specific, minimalistic and unimodal user interfaces that are mostly used today for maintenance and servicing, and will make heterogeneous field device functionalities and information available to the user via a homogenized multimodal user interface.

The challenge in the design of such future operator systems is the interplay of the interactive and physical actions which the user has to carry out during maintenance tasks. Human-machine-interaction is influenced by this fact: for example, while performing a physical maintenance task, the user has to put the operator system out of his or her hands. To offer the user a situation- and task-oriented multimodal presentation of information, this interplay has to be considered while developing a user interface. The realization of such support requires context-sensitive user interfaces which recognize the usage context of the user (e.g., the current environment of use). Based on this, these user interfaces select the most suitable input/output modality for information processing (e.g., GUI) and present the information relevant to the current task (interactive or manual) in a target-oriented manner (e.g., by starting the appropriate dialog screen).
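
A minimal sketch of such context-driven modality selection, assuming a simplified context model with hypothetical names (MaintenanceContext, selectModality) rather than any specific operator-system API:

```typescript
// Hypothetical context-driven selection of the output modality for a
// mobile operator system; names are illustrative only.

interface MaintenanceContext {
  handsOccupied: boolean;  // the user is performing a physical task
  ambientNoiseDb: number;  // loud plant floor vs. quiet control room
  currentTask: string;     // id of the maintenance task
}

type Modality = "gui" | "speech" | "audio-cues";

function selectModality(ctx: MaintenanceContext): Modality {
  if (ctx.handsOccupied) {
    // The operator system is out of the user's hands: avoid GUI input.
    return ctx.ambientNoiseDb < 70 ? "speech" : "audio-cues";
  }
  return "gui";
}

// The dialog (screen or speech) for the current task can then be started
// in the selected modality.
const ctx: MaintenanceContext = { handsOccupied: true, ambientNoiseDb: 62, currentTask: "replace-filter" };
console.log(`Task ${ctx.currentTask}: use ${selectModality(ctx)}`);
```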

MBUID for efficient industrial Usability Engineering (4)

Especially in the industrial sector, users are faced with complex machine tools and production utilities. Users need to interact with the machines through different sorts of human-machine-interfaces (e.g. control panels). These interactive systems often show a lack of usability: they are not very user-friendly and offer only limited support for accomplishing tasks in an effective and efficient way. Usability Engineering aims at a user-centered development process for interactive systems in order to create human-machine-interfaces of high usability, so that users may achieve their goals with effectiveness, efficiency, and satisfaction in their respective context of use.

An essential part of each industrial usability engineering project is a deep analysis of both the product or machinery and its users. A usability engineer needs to fully understand the users and their environment (the context of use) as well as the tasks that the users have to perform using the product or machinery.

The concepts of MBUID may help to formally model the information gathered in the analysis phase (e.g. as a task model). The models could then be used to derive user requirements and to iteratively refine and transform the models into abstract and concrete user interfaces. The objective would be to enable early prototyping, as prototypes are needed for the continuous refinement of user requirements. Prototypes and the underlying models could be reused for transformations and code generation.
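
The following minimal sketch illustrates one such transformation step, from a task model to abstract interaction units; the structures (Task, AbstractInteractionUnit) are deliberately simplified and hypothetical, and real MBUID languages (e.g. CTT task models or CAMELEON-style abstract UIs) are considerably richer:

```typescript
// Simplified, hypothetical task model and abstract UI structures.

interface Task {
  name: string;
  kind: "input" | "selection" | "output";
  children?: Task[];
}

interface AbstractInteractionUnit {
  label: string;
  interaction: "free-input" | "choice" | "presentation";
}

// One possible transformation step: map leaf tasks to abstract
// interaction units, which can later be refined to concrete widgets.
function toAbstractUI(task: Task): AbstractInteractionUnit[] {
  if (task.children && task.children.length > 0) {
    return task.children.flatMap(toAbstractUI);
  }
  const mapping = {
    input: "free-input",
    selection: "choice",
    output: "presentation",
  } as const;
  return [{ label: task.name, interaction: mapping[task.kind] }];
}

const calibrateMachine: Task = {
  name: "Calibrate machine",
  kind: "output",
  children: [
    { name: "Select axis", kind: "selection" },
    { name: "Enter target value", kind: "input" },
    { name: "Show calibration result", kind: "output" },
  ],
};

console.log(toAbstractUI(calibrateMachine));
```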

HMI development in the automotive industry to handle HMI variants and to increase the efficiency of HMI development processes (5)

Car infotainment systems are currently developed using huge textual specifications that are refined iteratively while being implemented in parallel. This approach is characterized by diverging specification and implementation versions, change request negotiations and very late prototyping with cost-intensive bug fixing. The number and variety of involved actors and roles lead to a huge gap between what the designers and ergonomists envision as the final version, what they describe in the system specification and how the specification is understood and implemented by the developers.

Model-based user interface development could speed up the iterative implementation while reducing implementation efforts due to automatic generation of prototype interfaces. Different models could be used to establish a formal and efficient communication between designers, functionality specialists (e.g. Navigation, Telephone and Media), developers and other stakeholders. The resulting reduction of development time would make car infotainment systems more competitive and would narrow the gap to innovation cycles in the field of consumer electronics.

Another major aspect of modern car infotainment systems is the quality assurance performed by the vendors. The complexity of modern infotainment systems (more than 1000 different screens and different modalities) requires large efforts to develop formal test models on the basis of the system specification. The test models are then used by the vendor to test the implementation coming from its supplier. This procedure results in another time- and cost-intensive gap that could be bridged by performing consistency checks on the models used to generate the infotainment system instead of testing the implementation.
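
As an illustration of such a model-level check, the following sketch validates that every transition in a (hypothetical, highly simplified) screen/transition model refers to screens that actually exist; the structures are invented for this example and real checks would cover far more (reachability, modality coverage, wording, ...):

```typescript
// Hypothetical, highly simplified infotainment HMI model: screens and
// the transitions between them.

interface HmiModel {
  screens: string[];
  transitions: { from: string; to: string; event: string }[];
}

// Consistency check: every transition must reference screens that
// actually exist in the model.
function findDanglingTransitions(model: HmiModel): string[] {
  const known = new Set(model.screens);
  return model.transitions
    .filter(t => !known.has(t.from) || !known.has(t.to))
    .map(t => `${t.from} --${t.event}--> ${t.to}`);
}

const model: HmiModel = {
  screens: ["home", "navigation", "media"],
  transitions: [
    { from: "home", to: "navigation", event: "press-nav" },
    { from: "home", to: "telephone", event: "press-phone" }, // inconsistent
  ],
};

console.log(findDanglingTransitions(model)); // ["home --press-phone--> telephone"]
```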

Meta User Interfaces (6)

Smart environments are equipped with a heterogeneous set of interconnected appliances and interaction devices, such as displays in form factors ranging from a smartphone up to wall-sized displays. They offer a broad range of interaction modes, such as remote controls, speech and gesture recognition, or touch- and pen-based input styles.

Meta user interfaces enable the users in such environments to understand and configure the interaction modes and media based on their preferences and their context. The following five basic properties characterize user interfaces for smart environments and should be considered by a meta user interface (see the sketch after the list):

Adaptivity: User interfaces for smart environments rely much more on the ability to adapt to the context of use. On the one hand, this is because interaction happens in various situations and under different circumstances; on the other hand, multiple different devices might be used for the interaction.

Session Management: Users tend to pursue various tasks in parallel. Whereas some tasks can be accomplished in the short term, long-term tasks might be interrupted by more important ones and continued later on. This requires handling interruptions and offering the possibility to resume tasks at a later time.

Migration: As user interaction takes place using a variety of devices, the possibility to migrate between different devices and even modalities has turned out to be an important aspect [1]. This is useful on the one hand to support a broad range of usage situations and on the other hand to best meet user preferences and usability criteria for certain tasks and applications.

Distribution While switching devices during the usage of an application is important in some cases, using multiple devices at the same time is also an important feature. This could simply be two monitors that are used simultaneously or the combination of a mobile phone and a touchscreen with gesture and speech recognition.

Multimodality: Supporting multiple devices at the same time also allows the creation of multimodal interaction if the different devices support different modalities. As humans usually combine multiple modalities during interaction, the use of multimodality can make interaction very natural and robust.
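
The following minimal sketch shows one way a meta user interface could expose these properties as a configuration the user can inspect and change; all names (MetaUiConfiguration, migrateDevice, ...) are hypothetical and only illustrate the five properties above:

```typescript
// Hypothetical data structure a meta user interface could expose so that
// users can inspect and reconfigure how interaction is set up.

interface SessionInfo {
  id: string;
  task: string;
  suspended: boolean;            // session management: interrupted tasks
}

interface MetaUiConfiguration {
  activeDevices: string[];       // distribution: devices used at the same time
  activeModalities: string[];    // multimodality: e.g. ["gui", "speech"]
  adaptationEnabled: boolean;    // adaptivity: adapt to the context of use
  sessions: SessionInfo[];
}

// Migration: move all interaction from one device to another while the
// sessions are kept.
function migrateDevice(cfg: MetaUiConfiguration, from: string, to: string): MetaUiConfiguration {
  return {
    ...cfg,
    activeDevices: cfg.activeDevices.map(d => (d === from ? to : d)),
  };
}

const cfg: MetaUiConfiguration = {
  activeDevices: ["wall-display"],
  activeModalities: ["gui", "speech"],
  adaptationEnabled: true,
  sessions: [{ id: "s1", task: "photo browsing", suspended: false }],
};

console.log(migrateDevice(cfg, "wall-display", "smartphone").activeDevices); // ["smartphone"]
```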

Post-WIMP Widgets (7)

With the introduction of technologies like HTML5, CSS3, and SVG, the way people can interact with the web has been fundamentally enhanced. Designers are no longer required to use a pre-defined set of basic widgets. Instead, interaction can nowadays be driven by self-designed widgets specifically targeted to a certain user need or a specific application requirement. These Post-WIMP widgets are designed to support different combinations of modes and media and can guarantee a certain quality-in-use upon context changes.

Based on the ubiquitous availability of browsers and the corresponding standardized W3C technologies, Post-WIMP widgets can be easily designed and manipulated (e.g. SCXML, ECMAScript), reflect continuous context changes (e.g. WebSockets) and support different multimodal setups (e.g. XHTML+Voice, SMIL, and the MMI Architecture).
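
A minimal sketch of a self-designed widget that reflects context changes pushed over a WebSocket; the WebSocket and DOM APIs are standard browser APIs, while the endpoint URL, element id and message format are hypothetical:

```typescript
// Sketch of a Post-WIMP widget that reflects context changes pushed
// over a WebSocket (browser environment assumed).

interface ContextUpdate {
  ambientLight: "bright" | "dark";
  preferredModality: "touch" | "voice";
}

function attachContextAwareWidget(element: HTMLElement, endpoint: string): void {
  const socket = new WebSocket(endpoint);

  socket.onmessage = (event: MessageEvent<string>) => {
    const update: ContextUpdate = JSON.parse(event.data);

    // Adapt the widget's presentation to the pushed context.
    element.style.background = update.ambientLight === "dark" ? "#222" : "#fff";
    element.style.color = update.ambientLight === "dark" ? "#eee" : "#111";
    element.dataset.modality = update.preferredModality;
  };
}

// Usage (in a browser): bind the widget to a hypothetical context service.
const widget = document.getElementById("dial-widget");
if (widget) {
  attachContextAwareWidget(widget, "wss://example.org/context-stream");
}
```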

List of assignments between participants and use cases

  • CERTH/ITI (Nikolaos Kaklanis): 4,5
  • CNR-ISTI (Fabio Paternò, Carmen Santoro, Lucio Davide Spano): 1,2
  • CTIC (Javier Rodriguez, Cristina Gonzalez, Ignacio Marin): 2
  • DFKI (Gerrit Meixner, Marc Seissler, Marius Orfgen, Moritz Kuemmerling): 3,4,5
  • Fraunhofer FIT (Jaroslav Pullmann): AAL (Use Case has to be specified!)
  • Fraunhofer IESE (Kai Breiner): 3
  • Robert Bosch GmbH (Ran Zhang): 5
  • Université catholique de Louvain (Jean Vanderdonckt, Vivian Genaro Motti, François Beuvens): 2
  • University "La Sapienza" Rome (Paolo Bottoni): AAL / Cultural Heritage (Use Cases have to be specified!)
  • University of Dresden (Annerose Braune): 1,4,5
  • Universidade Federal de São Carlos (Sebastian Feuerstack): 2,6,7
  • Remote participants have to add themselves to the list