See also: IRC log
<Steven> Model-Based UI XG Final Report
<dsr> scribe: dsr
Jose introduces the MBUI area and the XG
He presents the CAMELEON reference framework as the core architecture, which derives from previous research.
<Gerrit> José gives a brief introduction of the work of the W3C MBUI XG group
Jose summarises the questions arising from the MBUI XG Report. (see slides)
Jose suggests standardization of baseline meta-models for the different abstraction layers in the Cameleon reference framework.
This would facilitate tools for interchange between different MBUI formats and tools.
Dave asks about the origins of the Cameleon framework.
Fabio: it dates back a few years
to an EU project that has now closed.
... do we think it is practical to put together a roadmap for
standardization?
Can we find agreement at the meta-model level, starting with the task/domain level and the AUI level?
This seems practical and would feed into other work.
Jose: seeing similarities and differences in task models, e.g. different kinds of relationships.
Fabio/Gerrit: these are extensible.
Fabio: we can define a core plus a means for adding extensions
Jose: modularity also
We can discuss XForms later which at first glance covers both AUI and the CUI levels.
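As a quick illustration of why XForms can be seen as spanning both levels, a control such as select1 states the abstract intent (choose one value) while the appearance attribute only hints at the concrete rendering. A minimal sketch (the data reference and labels are invented for illustration):

```xml
<!-- Abstract intent: choose one value; "appearance" is only a
     hint to the concrete rendering (radio buttons, list, etc.) -->
<xf:select1 ref="payment/method" appearance="full"
            xmlns:xf="http://www.w3.org/2002/xforms">
  <xf:label>Payment method</xf:label>
  <xf:item>
    <xf:label>Credit card</xf:label>
    <xf:value>cc</xf:value>
  </xf:item>
  <xf:item>
    <xf:label>Cash</xf:label>
    <xf:value>cash</xf:value>
  </xf:item>
</xf:select1>
```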
<inserted> scribenick: kaz
-> http://www.w3.org/2005/Incubator/model-based-ui/wiki/UsiXML UsiXML
<inserted> UsiXML slides
(dave leads discussion about UsiXML)
* longevity of MBUI XG site is one issue
Note: "AUI" means "Abstract User Interface" and "CUI" means "Concrete User Interface"
toru: is UsiXML part of MDA?
dave: yes
fabio: the point here is considering abstract level
daniel: our scope is different from OMA's broader picture. it would be nice to see here what would fit best
@@@: thinking about the relationship with the broader picture would make sense
(some more discussion about Web engineering community and W3C)
<hiroyuki> kaz, Toru asks if MBUI is part of MDA
* how does UsiXML relate to other model based approaches?
* use cases?
* this workshop will provide some more concrete use cases
* much better organized research communities as well
* we want to understand industry demands
* not only UI but also other use cases
<Daniel> the observation is about the *lack* of industry uptake on using model-based approaches in general, as well as *methods*
<Daniel> this includes UI models
* question about experimental data on which approach is better than another approach
(comparison between approaches)
[ slides: TBD ]
(Gerrit introduces DFKI)
(MBUID Use Cases)
* several dozen different devices are managed
(Useware Engineering Process)
* four phases: analysis/structuring/design/realization + evaluation
(Useware Markup Language (useML))
(Udit - useML-Editor)
* editor + simulator
(MBUID toolchain)
* process, tools and languages
* export from DISL to VoiceXML/SCXML (not implemented but should be useful)
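As an illustration of what such an export target looks like, a minimal SCXML document describing a two-state dialogue flow (a sketch only; the state names and events are invented, and this says nothing about DISL itself):

```xml
<!-- Minimal SCXML: start in "welcome", move to "main" on
     user.confirm, and finish on user.exit -->
<scxml xmlns="http://www.w3.org/2005/07/scxml"
       version="1.0" initial="welcome">
  <state id="welcome">
    <transition event="user.confirm" target="main"/>
  </state>
  <state id="main">
    <transition event="user.exit" target="done"/>
  </state>
  <final id="done"/>
</scxml>
```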
(MBUID@Run-time)
* devices have to be maintained
* but their physical configuration can make them hard to access
(SmartMote)
* remote control for intelligent production environments
* task-centered, adaptive and wireless
(Indoor Positioning Systems)
* Ubisense UWB-realtime positioning system
* RFID Grid
* Cricket Ultrasonic Indoor location system
(Norms, Standards and Guidelines)
- Q&A:
* how to handle multiple users?
-> user group managed by supervisor or administrator
* difference between useML and CC/PP?
-> 5 elements assigned to specific task
-> can be extended depending on concrete projects/requirements
-> the comparison is with CTT, not CC/PP
* any patent?
-> no
* would like to follow the 2.0 version
[break]
<Steven> Scribe: Steven
Daniel: I've been working on
hypermedia since the 90's
... now Semantic Hypermedia Design Model
... I will do some reflections on what we have been looking
at
... and important points we should discuss
Pablo: Any relation to NCL?
Daniel: In early stages; they are
more lower level
... Why do we use Model based? What do we want?
... I want to leverage abstractions
... find a way to describe abstractions in a concise way,
ignoring some details
... providing a model, abstracting an artefact
... provides concise abstract language
Steven: What do you mean by artefact?
Daniel: The concrete thing we
want to produce
... we are not just observing artefacts, but engineer
them
... the models will abstract them, we need several models to
completely characterise what you want to produce
... translate between models, to get an executable one
xxx: Is a UI a representation of what a user wants to do?
Daniel: I come to that
later.
... We have looked at many types of models [slide lists
them]
... SHDM Models
... our meaning of 'navigation' is different from most
usage
... it is a conceptual layer, where you travel from node to
node in a hypermedia graph
... this is missing from the task level of Cameleon
... our abstract interface is looking at widgets only from the
role, not the form
... another model is the rhetorical model
... for mapping between models, especially time-dependent
ones
... gives timing and ordering of events
... especially when communicating between people
... for instance we used it to generate animated
transitions
... not just from an artistic point of view
yyy: How do you capture the semantics in such a precise way?
Daniel: We propose a semantic
model in terms of which we describe the interaction
... though not the dynamic semantics
Pablo: Is it right that the rhetorical model is the timing model?
Daniel: Yes, roughly
Pablo: You weren't talking about hypermedia
Daniel: No, only as an
issue
... on the web the hypermedia is built in
Fabio: I will provide an overview
of our work
... the point is how we use the concurtasktrees as
support
... Why another model-based language?
... well technologies develop fast
... such as mobile, gestures, voice
... need a language to address new issues
... and to clean up, make more usable
... MARIA uses features from existing languages
... data model
... events
... dialogue model
... support for Ajax scripts
... Many techniques are used on current websites
[Diagram of AUI Meta Model]
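An abstract description of the kind such a meta-model enables might look like the following. This is a hypothetical sketch, not actual MARIA syntax; all element and attribute names are invented for illustration:

```xml
<!-- Hypothetical AUI fragment: interactors describe role
     (text input, activator), not concrete widgets or layout -->
<interface name="login">
  <grouping id="credentials">
    <interactor type="text_edit" id="username"/>
    <interactor type="text_edit" id="password" masked="true"/>
  </grouping>
  <interactor type="activator" id="submit" triggers="doLogin"/>
</interface>
```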
<Daniel> Discussion points from Daniel's presentation:
Fabio: Composition is of two
types
... grouping
<Daniel> 1 - Refinements and improvements to Cameleon's model: Navigation Model (as a relevant part of the "Task and Domain" model)
Fabio: [Scribe misses second]
<Daniel> 2 - A more precise characterization of "Abstract Interface"
Fabio: How can we support
service-oriented apps?
... at the service level
... at the app level
... at the UI level
<Daniel> 3 - Need for a Rhetorical Model to guide the mapping from Abstract to Concrete Interface, especially for time-dependent interaction
Fabio: we like to address
composition at the app level
... The service developer creates service annotations to provide
hints on implementation
... the hints are independent of the UI implementation
language
Fabio: We needed an informal
phase of task analysis, to formalise in model
... we transform to concrete UI
... methodology is not top-down, but has a first bottom-up
first step
Daniel: I need to have an analysis of what I need first, surely
Fabio: Sure
... but we have to think about what functionalities are
available before we can decide how to use them
... webservices impose certain constraints
[Demo]
Fabio: Starts with task model,
services and annotations, and task binding
... currently we have languages for desktop, mobile, [see slide
for full list]
<Daniel_S> please note my affiliation is PUC (Pontifical Catholic University)
Pablo: You mentioned 'nomadic'. How do you do that?
Fabio: By adaptation, see the demo
Pablo: You mentioned SMIL. Do you integrate, or just generate?
Fabio: Generate HTML+SMIL
<scribe> [Postponed till later]
Carmen: Continuing from Fabio's
talk
... motivation is multi-device, without having to restart when
changing device
... domains such as shopping, bidding in auctions, games,
making reservations
... our system does a migration request
... and then there is transformation to obtain a UI adapted for
the new device, while keeping state
... we generate the UI at runtime, automatically
... migrating does a reverse step
... to obtain the semantics and state, which is used to
generate the new interface
Steven: Why do you need the reverse step?
Carmen: We need to reason about
the web page being used
... Migration does a device discovery,
... this uses a proxy server
... that captures the state
Steven: So you are migrating any app on the web, not just your own?
Fabio: That is right.
Steven: OK, now I understand the answer to my earlier question
[Diagram of semantic redesign stage]
Carmen: This is followed by a
splitting step if needed to reduce the amount of screen space
needed
... the user can customise the transformation step if
needed
[Example migration of a pacman game]
Carmen: On the mobile device
there is a dialogue requesting the migration
... and the game is represented on two pages
... It is possible to migrate only parts of an
application
... this needs interaction from the user
... to identify which parts are migrated
[Example partial migration]
Carmen: The user selects parts of
the page for migration
... and only those parts are migrated
Steven: So the parts that are not migrated are hardwired into the migrated version?
Fabio: Yes
[Video of migration support]
Carmen: This is migrating the W3C
home page partially to a mobile device
... as the user selects parts of the page for migration, they
get highlighted on the screen
... the generation produced more than one page.
Pablo: This works for HTML, how about Flash?
[laughter]
Dave: How about simultaneous interaction with more than one device?
Carmen: We are working on this
zzz: I didn't understand the
semantic reengineering
... how do you do it?
Carmen: We have rules to map concrete description to semantics
zzz: Can this be standardised?
Dave: Let's make that a discussion point for tomorrow
[LUNCH]
<dsr> scribe: dsr
Keep the design time models at run time to support adaptation.
One kind of adaptation is moving an application dynamically from one device to another.
This can even involve "following" the user around the home.
This occurs without losing the interaction state of the user interface
This involves propagating user interface events up the abstraction layers, and similarly reifying actions down through the layers. This requires considerable flexibility in the models.
This creates challenges for what can be defined at design time given the need for adaptation at run time. e.g. the size of buttons
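One way to keep such values open at design time is to state constraints rather than fixed properties, and let the run-time resolve them. A hypothetical sketch (the element and attribute names are invented for illustration):

```xml
<!-- The concrete width is resolved at run time from the device
     profile and available screen space, within the designer's
     stated constraints -->
<button id="ok">
  <constraint property="width" min="40px" preferred="80px"/>
</button>
```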
Jaroslav: we seem to be missing something at the concrete to final UI levels.
<fpatern> Question: what about the final user interface in the models considered?
Grzegorz: it is hard to define the boundary between the two...
Users can influence this e.g. distributing different parts of the UI to different devices.
Designers can set some constraints on the preferred UI, but this is not hard and fast.
Daniel: your models don't have the values.
Jose: the models have variables for say button sizes, right?
Grzegorz: yes, but the value is determined at run-time
<fpatern> Fabio: the use of reverse engineering techniques would allow the models to know the actual values of the interface element properties
Daniel: most designers today don't think in terms of preferences and constraints.
Steven: they think in terms of pixel perfection which is a problem for adaptation.
Question: to what level do you model the user?
Grzegorz: we have a limited set. e.g. adult vs child. or left vs right handed.
Does the approach learn from experience?
Not yet, that could be done in future work.
<Steven> My point is that we need a new generation of designers who understand fluid design
<Steven> where they think in terms of 'house style' rather than pixel perfection
<scribe> New devices provide new services (adds Jaroslav) as a point of extension
Question: performance problems?
Grzegorz: currently we rely on a central server which tracks the whole environment, the devices are treated as being fairly stupid
Kaz: who picks the manager?
Dave: or it could even be in the "cloud"
Cloud computing can help with automatic Web UI migration.
Analogy with VNC which distributes UI as image tiles + UI events
Cloud based models can then be used to update the UI as needed.
Our lab has developed a virtual smart phone which runs in the cloud.
We now want to see how MBUI approaches can be used with this approach.
Fabio: doesn't this introduce latency issues?
The mobile devices have good processing, so what use cases are particularly suited to the cloud-based approach?
Answer: good for security, as well as less dependent on device capabilities
This reduces the burden of getting applications to work on different devices.
We can take advantage of different device sensors, e.g. accelerometer, compass, temperature, location etc.
Jaroslav: this also makes it easier for users, since they don't have to download and install new apps.
(downside is lack of support for offline apps)
Heterogeneity of mobile devices presents challenges to developers, also users want to do different things from desktop users.
MyMobileWeb is an open source framework for rapid development of mobile web apps and portals
We make use of W3C specs such as SCXML and XForms.
We support synchronization for the delivery context between devices and servers.
We use DISelect and an XHTML-like syntax, but at a higher level.
CSS is used to map the abstract UI to concrete UI levels
Our root element is <ideal>. The content model starts with resources and is followed by the ui description.
Different controls e.g. for date/time
Steven: asks for more details on the UI controls in relation to XForms.
The set of UI controls are oriented to the needs of mobile web apps.
e.g. chained menus for a set of mutually dependent menus.
Many extensions in IDEAL2, e.g. maps, media, graphs
SCXML used to specify MyMobileWeb application flow.
We define new IDEAL elements for new UI controls and then map these to delivery formats as appropriate to the delivery context.
<Steven> I asked why there were inputtime and inputdate controls necessary, when you know from the data that it is a date or time
<Steven> so that a simple input control should be enough
Answer was syntactic convenience for expressing the options involved.
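Putting the description above together, an IDEAL document might be structured as follows. This is a hypothetical sketch: only the `<ideal>` root, the resources-then-UI content model, and the inputdate/inputtime control names come from the talk; everything else is invented for illustration:

```xml
<ideal>
  <!-- resources come first in the content model -->
  <resources>
    <resource id="logo" src="logo.png"/>
  </resources>
  <!-- followed by the UI description -->
  <ui>
    <inputdate id="departure" label="Departure date"/>
    <inputtime id="pickup" label="Pickup time"/>
  </ui>
</ideal>
```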
<kaz> scribenick: kaz
[ slides: TBD ]
(Levels of Application Integration)
(SoKNOS Project)
* how to sync different UIs?
(Why Formal Semantics are Needed?)
* system with modular UIs
* with semantic support!
* no manual adjustment needed
<dsr> wrap events with semantic annotations to enable UI components to keep working when raw events change
* semantically annotated events provide mutual understanding
<dsr> Pablo: we had similar problem, and we used a common data model as a solution.
<dsr> Florian: common data model doesn't scale.
* avoid cross-application dependencies
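Wrapping a raw event with a semantic annotation, as described above, might look like the following. This is a hypothetical sketch; the element names, concept, and ontology URI are invented for illustration:

```xml
<!-- A raw UI event carrying a semantic annotation, so consumers
     depend on the meaning of the event rather than its raw shape -->
<event source="mapView" type="click">
  <annotation concept="emergency:Incident"
              ontology="http://example.org/soknos#"
              instance="incident-42"/>
</event>
```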
(Ontology of User Interfaces and Interactions)
<dsr> Florian: does an ontology for user interfaces and interactions already exist that we can use as starting point for semantic annotations?
(Separating "Real World" from "System World")
<dsr> Strict separation between domain models and interaction models.
(Research Challenges)
* there are so many models and description languages, so implementers have to learn various models/languages...
* on the other hand, semantic models for UIs enable dynamic exchange of UIs
<dsr> We've found good performance (<2 seconds) even for reasoning over large ontologies.
Q&A:
jose: @@@ (sorry missed the question)
florian: domain ontology commits abstract models
<dsr> Fabio: this use of ontologies could be cumbersome, perhaps we could short cut that with some standard models, no?
<dsr> What is the value of formalizing the ontology?
<dsr> Florian: I am quite doubtful about many ontologies and prefer semantic rigor
fabio: we don't want to use ontology...
florian: we don't stick with
ontology
... UIs are human-created things
... ontology is rather formal
19:30 at Via Cavour close to Termini station (northwest side)
[ slides: TBD ]
(W4: Presentation)
* LEONARDI GUI
<dsr> (W4.eu not the next generation of W3C, last W is for workflow!)
(Issues to solve)
* GUI is very important for end users
<dsr> We're involved in EU projects e.g. Serface and Serenoa, our aim is to add features to our products
* but implementing GUI is complex and delivering it is expensive
<dsr> 50% of development cost is related to GUIs (source IEEC)
(LEONARDI Scope)
* MVC construction
(Alternatives)
* programming GUI is expensive
* 4GLs/MDA solutions
* but still have several issues
(Vision)
* proposal: LEONARDI
* driven by business model
<dsr> W4's product (Leonardi) developers focus on business model, Leonardi deals with technical underpinnings
* (not 0 but) less code
* run-time execution
<dsr> Java based framework, UI generated by app engine, not a code generation approach
(how does it work?)
* model: XML description of business world
* compose: generate table of action and navigation tree
* specialize: adding dynamic portion using Java (links to Java code)
* deploy and execute
- on-the-fly generation of screens
(Architecture)
* various data resources
(Benefits)
<dsr> Dave thinks about role of RDF triples as abstraction over different data model frameworks
* cheaper and quicker
* also simpler from technical/design viewpoints
(Application types)
(Customers)
Q&A:
* this is destruction of data and behavior
* there is no workflow engine
* distinguish where to transition based on context
[break]
<Steven> scribenick: fpatern
<Steven> Scribe: Fabio
No question
question on accessibility of xforms
there is a person
question about possible convergence between xforms and the model-based group
positive answer: xforms is going to be re-chartered and xbl needs improvements
had model-based approaches been adopted, we would not need ARIA
question whether the aria taxonomy and roles are motivated by usage in practice
<dsr> Fabio: we are using X+V in our work on MARIA, is X+V dead?
<dsr> Is the approach you described (MMI architecture) going to be supported in browsers?
<dsr> Kaz: Opera has lost interest in X+V and these days is more interested in HTML5 and adding new device APIs to browsers
<dsr> I am trying to interest browser vendors in multimodality, and not only in HTML5
<dsr> Jaroslav: role of EMMA as packaging format, right?
<dsr> Kaz: we need to apply EMMA to wider range of interaction types
<dsr> EMMA needs to handle different kinds of user input, including binary sensor data
Scribes: dsr, Steven, Fabio
ScribeNicks: dsr, kaz, Steven, fpatern
Present: Steven_Pemberton Fabio_Paterno Pablo_Cesar Dave_Raggett José_Manuel_Cantera_Fonseca Gerrit_Meixner Daniel_Schwabe Kaz_Ashimura Katsuhiko_Kageyama Toru_Kobayashi Hiroyuki_Sato Claudio_Venezia Pavel_Kolkarek Jochen_Fiey Nacho_Marin Javier_Rodriguez Javier_Munoz Michael_Nebeling Yogesh_Deshpande Jean-Loup_Comeliau Grzegorz_Lehmann Carmen_Santoro Lucio_Davide_Spano Florian_Probst Patric_Girard Giorgio_Brajnik Jaroslav_Pullmann
Agenda: http://www.w3.org/2010/02/mbui/program.html
Date: 13 May 2010
Minutes: http://www.w3.org/2010/05/13-mbui-minutes.html