An XML-based approach for designing nomadic applications

Giulio Mori, Fabio Paternò, Carmen Santoro

I.S.T.I.-C.N.R., Via G. Moruzzi 1, 56100 Pisa, Italy

{g.mori, f.paterno, c.santoro}@cnuce.cnr.it

Abstract

The wide variety of devices currently available, which is bound to increase in the coming years, poses a number of issues for the design cycle of interactive software applications. Model-based approaches can provide useful support in addressing this new challenge. In this paper we present and discuss our XML-based approach for designing nomadic applications and the related tool that we have developed, TERESA (Transformation Environment for inteRactivE Systems representAtions), whose aim is to provide a complete semi-automatic environment supporting the presented method and transformations. Particular attention will be paid to designing a high-level control panel for designers so that they can focus on the main design aspects and choices through effective representations, without needing to know the underlying mechanisms and concepts that support the possible transformations.

Introduction

Designing applications that exploit new multi-platform technology is often a difficult problem. For software developers it introduces the problem of constructing multiple versions of a single application and endowing these versions with the ability to respond dynamically to changes in context. Creating different versions of applications for different devices engenders extra development effort and the expensive maintenance of cross-platform consistency, complicates configuration management and dilutes the resources available for usability engineering. Technological advances solve some of the problems of engineering applications for multiple devices: XML (eXtensible Markup Language) documents supported by XSL (eXtensible Stylesheet Language) stylesheets allow the creation of customised presentations for different devices or users. The Wireless Markup Language (WML) makes it possible to produce device-independent presentations for a range of small-display devices. Wireless Internet gateways automatically translate HTML documents into WML documents (although they may produce unusable results if the originals rely on large displays). However, XSL and related technologies help with user interface presentation but are limited by the fact that a different interaction design may be necessary when moving between radically different device types. More generally, while these solutions address parts of the problem, they do not provide high-level guidance for guaranteeing quality across multiple versions of an application. In this paper we present our approach, which aims to support the design and development of nomadic applications by providing general solutions that can be tailored to specific cases, and we show the tool we developed, TERESA (Transformation Environment for inteRactivE Systems representAtions), which supports the main phases of this method.

1. The Proposed Method

Our starting point is the identification of useful abstractions highlighting the main aspects that should be considered when designing effective interactive applications. Of the relevant models, task models play a particularly important role because they indicate the logical activities that an application should support. A task is an activity that should be performed in order to reach a goal. A goal is either a desired modification of state or an inquiry to obtain information on the current state. For example, querying the available flights from Pisa to London is a task that must be performed in order to book a flight to London (the corresponding goal). Tasks can range from a very high abstraction level (such as deciding a strategy for solving a problem) to a concrete, action-oriented level (such as selecting a printer).

 

Our method is composed of a number of steps that allow designers to start with an overall envisioned task model of a nomadic application and then derive concrete and effective user interfaces for multiple devices:

·         High-level task modelling of a multi-context application. In this phase designers develop a single task model addressing the possible contexts of use and the various roles involved, together with a domain model aiming to identify all the objects that have to be manipulated to perform tasks and the relationships among such objects. These models are specified using the ConcurTaskTrees (CTT) notation, which allows designers to indicate the platforms suitable to support each task.

·         Developing the system task model for the different platforms considered. Here designers have to filter the task model according to the target platform and, if necessary, further refine it depending on the specific device considered, thus obtaining the system task model for the platform in question.

·         From system task model to abstract user interface. Here the goal is to obtain an abstract description of the user interface composed of a set of abstract presentations that are identified through an analysis of the task relationships and structured by means of interactors combined through various composition operators.

·         User interface generation. This phase is completely platform-dependent and has to consider the specific properties of the target device. For example, if the considered device is a cellphone, knowing the platform alone is not sufficient: we also need to know the type of micro-browser supported and the number and types of soft-keys available.
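As an illustration of the filtering step above, a nomadic task model can be pruned to a platform-specific system task model by keeping only the tasks annotated as suitable for that platform. The data model and names below are our own sketch, not TERESA's actual implementation (which is written in Java):

```python
# Hypothetical sketch of platform filtering; TERESA's actual data model differs.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    platforms: set                                # platforms the task is suitable for
    children: list = field(default_factory=list)

def filter_for_platform(task, platform):
    """Return a copy of the task tree keeping only tasks suited to `platform`."""
    if platform not in task.platforms:
        return None
    kept = (filter_for_platform(c, platform) for c in task.children)
    return Task(task.name, task.platforms, [c for c in kept if c is not None])

# A toy nomadic task model: the artwork description is desktop-only.
model = Task("Access museum", {"desktop", "pda"}, [
    Task("Show title", {"desktop", "pda"}),
    Task("Show description", {"desktop"}),
])

desktop = filter_for_platform(model, "desktop")
pda = filter_for_platform(model, "pda")
# desktop keeps both subtasks; pda keeps only "Show title"
```

The platform annotations correspond to those the designer attaches to tasks in the CTT model.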

1.1 From the Task Model to the Abstract User Interface

Starting with the task model of the system, we aim to identify the specification of the abstract user interface in terms of its static structure (the “presentation” part) and dynamic behaviour (the “dialogue” part): this abstract specification will be used to drive the implementation. By analysing the temporal relationships of a task model, it is possible to identify the sets of tasks that are enabled over the same period of time according to the constraints indicated in the model (enabled task sets). Thus, the interaction techniques supporting the tasks belonging to the same enabled task set are logical candidates to be part of the same presentation, though this criterion should not be interpreted too rigidly in order to avoid excessively modal user interfaces.
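As a simplified illustration of how enabled task sets can be derived (real CTT semantics involve many more temporal operators), consider a flat task expression in which interleaving keeps tasks concurrently enabled and enabling starts a new set:

```python
# Illustrative only: derives enabled task sets from a flat task expression
# where "|||" (interleaving) keeps tasks concurrently enabled and
# ">>" (enabling) starts a new set. Real CTT semantics are much richer.
def enabled_task_sets(expression):
    sets, current = [], []
    for token in expression:
        if token == ">>":          # enabling: the previous tasks must finish first
            sets.append(current)
            current = []
        elif token != "|||":       # interleaved tasks stay in the same set
            current.append(token)
    sets.append(current)
    return sets

ets = enabled_task_sets(["SelectSection", ">>",
                         "ShowArtworks", "|||", "SelectArtwork", ">>",
                         "ShowArtworkInfo"])
# ets == [["SelectSection"], ["ShowArtworks", "SelectArtwork"], ["ShowArtworkInfo"]]
```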

This shift from task to abstract interaction objects is performed through three steps:

- Calculation of enabled task sets: in this phase we identify the tasks that are enabled over the same period of time.

- Heuristics for optimisation in terms of presentation sets and transitions: these heuristics help designers group tasks into presentation sets that are better candidates for mapping into user interface presentations.

- Mapping presentation task sets and their transitions into sets of abstract interaction objects and dialogue, thus obtaining a specification of the abstract user interface.

1.2    Identification of Presentation Task Sets

The first step is to calculate the Enabled Task Sets (ETSs) according to the system task model. The CTTE tool (freely downloadable at http://giove.cnuce.cnr.it/ctte.html) automatically identifies these sets. Only application and interaction tasks are considered in ETSs, because user tasks (those associated with internal cognitive activities) are not directly relevant to this transformation. The ETSs identify a number of potential presentations, and the connections among different ETSs are represented by transition tasks. Once the ETSs have been defined, we need to specify some rules to reduce their number (which can sometimes be very high) by merging two or more ETSs into new sets, called Presentation Sets or PSs.
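A minimal sketch of one such merging rule (our own simplification; the heuristics actually used in the method are richer) is to absorb any ETS that is contained in another:

```python
# Illustrative merging heuristic: an ETS that is a subset of another ETS is
# absorbed into it, reducing the number of presentation sets.
def merge_into_presentation_sets(etss):
    sets = [set(e) for e in etss]
    return [s for s in sets if not any(s < other for other in sets)]

pss = merge_into_presentation_sets([
    {"show_title"},                       # subset of the next ETS: absorbed
    {"show_title", "show_description"},
    {"select_section"},
])
# pss == [{"show_title", "show_description"}, {"select_section"}]
```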

1.3    The Language for Abstract User Interfaces

The set of PSs obtained is the initial input for building the abstract user interface specification, which will be composed of interactors (abstract interaction objects) associated with the basic tasks. Such interactors are high-level interaction objects that are classified first according to the type of task supported, then according to the type and cardinality of the associated objects, and lastly according to presentation aspects.
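A hypothetical sketch of this classification follows; the interactor names and the cardinality threshold are illustrative assumptions, not TERESA's actual taxonomy:

```python
# Hypothetical classification of a basic task into an abstract interactor;
# the names and the cardinality threshold are illustrative assumptions.
def choose_interactor(task_type, object_type=None, cardinality=0):
    if task_type == "selection":
        # the cardinality of the choice influences the abstract interactor
        return "single_choice_low" if cardinality <= 5 else "single_choice_high"
    if task_type == "editing":
        return {"text": "text_edit", "number": "numerical_edit"}.get(
            object_type, "object_edit")
    if task_type == "only_output":
        return "description" if object_type == "text" else "object_output"
    return "activator"  # e.g. a task that triggers a functionality
```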

Figure 1. Tree-like representation of the language for specifying abstract user interfaces




The above figure provides a tree-like representation of the abstract language used for specifying the abstract user interface; the language itself is defined in XML. As the figure shows, an interface is composed of one or more presentations, and each presentation is characterised by a structure and zero or more connections. The basic idea is that the structure describes the static organisation of the user interface, whereas the connections describe the relationships among its various presentations. Generally speaking, the set of connections identifies how the user interface evolves over time, namely its dynamic behaviour.
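As a sketch of what a document in this language might look like, a minimal abstract interface can be built as follows. The element names follow the description above; the attribute names are our own assumptions:

```python
import xml.etree.ElementTree as ET

# A minimal abstract-interface document: one presentation with a structure
# and one connection. Element names follow the description in the text;
# the attribute names are illustrative assumptions.
interface = ET.Element("interface")
presentation = ET.SubElement(interface, "presentation", name="TS1")
structure = ET.SubElement(presentation, "structure")
ET.SubElement(structure, "interactor", type="single_selection")
ET.SubElement(presentation, "connection",
              transition_task="t1", next_presentation="TS2")

print(ET.tostring(interface, encoding="unicode"))
```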

1.4    From Presentation Task Sets to Abstract User Interface Presentations

The abstract user interface is mainly defined by a set of interactors and the associated composition operators. The type of task supported, the type of objects manipulated and their cardinality are useful elements for identifying the interactors. In order to compose such interactors we have identified a number of composition operators that capture typical effects that user interface designers actually aim to achieve. The operators are: i) Grouping (G), whose idea is to group together two or more elements, so this operator should be applied when the involved tasks share some characteristics; ii) Ordering (O), which is applied when some kind of order exists amongst the elements; iii) Relation (R), which should be applied when a relation exists between n elements yi, i=1,…,n, and one element x; iv) Hierarchy (H), which means that a hierarchy exists amongst the involved interactors.
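A minimal sketch of how composed structures could be represented (an illustrative encoding of the four operators, not TERESA's internal format):

```python
# Illustrative encoding of composed structures: an operator applied to a
# list of elements, where each element is an interactor name or a nested
# composition. Not TERESA's internal format.
def compose(operator, *elements):
    assert operator in {"G", "O", "R", "H"}  # grouping, ordering, relation, hierarchy
    return (operator, list(elements))

# Group two related outputs, then relate the group to one control element:
artwork_info = compose("G", "show_title", "show_image")
structure = compose("R", artwork_info, "select_next_artwork")
# structure == ("R", [("G", ["show_title", "show_image"]), "select_next_artwork"])
```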

At this point we have to map each task of the presentation set considered into a suitable interactor and build a presentation structure in which the relationships among tasks are reflected in the relationships between such interactors, expressed using the composition operators. In order to derive the presentation structure associated with a specific presentation set, and to deduce the operators that should be applied, we have to consider the part of the task model regarding the tasks belonging to that presentation set. In this process we have to consider that the temporal relationships existing between tasks are also inherited by their subtasks. It is worth noting that the transformation mapping presentation sets into structures of abstract interaction objects has been implemented in Java.

1.5      The Dialogue Part

Once the static arrangement of the abstract user interface has been identified, we have to specify its dynamic behaviour. To this aim, an important role is played by the so-called transition tasks. For each task set T, we define the transition tasks of T as the tasks whose execution makes the abstract user interface pass from the current task set T to another task set T’. For each task set T, a set of rules (transition_task, next_TS) should be provided, whose meaning is: when transition_task is executed, the abstract user interface passes from T to next_TS.

For example, if a task set TS1 has two transition tasks, t1 and t2, then in order to express that via the transition task t1 the abstract interface passes from TS1 to TS2 we can use the following XML rule:

<task_set>TS1</task_set>
<behaviour>
  <rule>
    <transition_task>t1</transition_task>
    <next_TS>TS2</next_TS>
  </rule>
</behaviour>
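The same kind of rules can be encoded as a simple transition table; the following sketch is illustrative and not TERESA's actual representation:

```python
# Illustrative transition table for the dialogue part: executing a
# transition task in the current task set moves the interface to the
# next task set.
transitions = {
    ("TS1", "t1"): "TS2",
    ("TS1", "t2"): "TS3",
    ("TS2", "back"): "TS1",
}

def next_task_set(current, transition_task):
    # stay in the current task set if no rule matches
    return transitions.get((current, transition_task), current)
```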

1.6 From the Abstract User Interface to Its Implementation




Once the elements of the abstract user interface have been identified, every interactor has to be mapped into the interaction techniques supported by the particular device configuration considered (operating system, toolkit, etc.), and the abstract operators also have to be appropriately implemented so as to highlight their logical meaning: a typical example is the set of techniques for conveying grouping relationships in visual interfaces through presentation patterns such as proximity, similarity and continuity [6]. A possible implementation of the presentation corresponding to TS2 is shown in Figure 2, assuming that the current room concerns Roman archaeology.
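A hypothetical sketch of such a platform-dependent mapping for the Grouping operator follows; the techniques listed are plausible choices for each platform, not TERESA's actual mappings:

```python
# Hypothetical platform-dependent realisation of the Grouping operator;
# the techniques listed are illustrative assumptions.
GROUPING_TECHNIQUES = {
    "desktop": "fieldset",   # labelled box exploiting proximity and similarity
    "pda":     "list",       # compact vertical list
    "wap":     "links",      # plain list of links on a card
}

def realise_grouping(platform, elements):
    return {"technique": GROUPING_TECHNIQUES[platform],
            "elements": list(elements)}
```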

 


Figure 2. One presentation of the PDA user interface

The suggested scenario makes it possible to show how the same task can be supported differently depending on the platform used. Consider accessing the list of artworks grouped by the material used: in the desktop application this task could be supported by buttons whose colour resembles the material in question, together with a long textual introduction to the section (right part of Figure 3). However, this solution cannot be implemented in the WAP interface (left part of Figure 3), which does not support colours and has no room for buttons and long texts, so a list of links is used on this platform.

Figure 4 shows another example of how the same visual information is presented differently on a desktop system and on a WAP phone: as the bottom part of the figure shows, while on the desktop system it is possible to show details of the work (title, type, description, author, material and date of creation), in the WAP interface we can present only low-resolution images, which give just a rough idea of the work of art, together with its title and the related museum section.

Figure 3. An example of different support for the selection task (XHTML and HTML)

The top part of Figure 4 shows how in the CTTE tool it is possible to specify for each task that each platform (PDA, desktop system, cellphone or other platforms) can support different sets of objects manipulated during the task performance.  For example, for the “Show artwork info” task considered in the picture, the designer has specified that the title is available on all the platforms considered (all the checkboxes labelled “PDA”, “Desktop”, and "Cellphone" have been selected), whereas the description is available only for the desktop system.

Figure 4. Example of different presentations of a work of art

This information has been used to generate different user interfaces for the cellular phone and the desktop system: Figure 4 shows that the title is available on both platforms, whereas the description is available only on the desktop system.
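The per-platform availability of objects can be sketched as a simple filter; the encoding below is our own illustration, not the actual CTTE or TERESA file format:

```python
# Illustrative per-platform availability of the objects manipulated by the
# "Show artwork info" task, following the example in the text.
objects = {
    "title":       {"pda", "desktop", "cellphone"},
    "description": {"desktop"},
}

def objects_for(platform):
    return sorted(name for name, plats in objects.items() if platform in plats)
# objects_for("cellphone") -> ["title"]
# objects_for("desktop")   -> ["description", "title"]
```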

Conclusions

In this paper we have shown the main phases of our XML-based approach for the design of nomadic applications. This approach is supported by a tool, TERESA, whose aim is to provide a complete semi-automatic environment supporting the presented method and transformations.