Meeting: W3C Model-based User Interfaces Workshop, Rome, Italy
Chair: Dave Raggett
Agenda: http://www.w3.org/2010/02/mbui/program.html

scribe: dsr

Present+Steven_Pemberton, Fabio_Paterno, Pablo_Cesar, Dave_Raggett, José_Manuel_Cantera_Fonseca, Gerrit_Meixner, Daniel_Schwabe

Topic: MBUI Incubator Group work and results, José Manuel Cantera Fonseca, Telefónica I+D

-> http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504/ Model-Based UI XG Final Report

Jose introduces the MBUI area and the XG.

The CAMELEON reference framework is the core architecture, derived from previous research.

José gives a brief introduction of the work of the W3C MBUI XG group.

Jose summarises the questions arising from the MBUI XG Report (see slides).

Jose suggests standardization of baseline meta-models for the different abstraction layers in the Cameleon reference framework.

This would facilitate tools for interchange between the different MBUI formats.

Dave asks about the origins of the Cameleon framework.

Fabio: it dates back a few years to an EU project that has now closed.

Fabio: do we think it is practical to put together a roadmap for standardization?
... can we find agreement at the meta-model level, starting with the task/domain level and the AUI level?
... this seems practical and would feed into other work.

Jose: we are seeing similarities and differences in task models, e.g. different kinds of relationships.

Fabio/Gerrit: these are extensible.

Fabio: we can define a core plus a means for adding extensions.

Jose: modularity also.

We can discuss XForms later; at first glance it covers both the AUI and the CUI levels.
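[To make the meta-model discussion above concrete: a minimal sketch of what a shared task-model serialization could look like, in the spirit of ConcurTaskTrees. The element names, attributes and the extension mechanism shown here are illustrative assumptions, not taken from CTT, useML or MARIA.]

  <!-- hypothetical "core" task-model vocabulary that tools could exchange,
       with an explicit extension point as discussed above -->
  <taskModel name="MakeHotelReservation">
    <task id="t1" name="SelectDates" category="interaction"/>
    <!-- temporal operator between sibling tasks, e.g. enabling (">>" in CTT) -->
    <operator type="enabling"/>
    <task id="t2" name="CheckAvailability" category="application"/>
    <operator type="enabling"/>
    <task id="t3" name="ConfirmBooking" category="interaction">
      <!-- extension point for tool- or project-specific metadata -->
      <ext:priority xmlns:ext="http://example.org/ext" value="high"/>
    </task>
  </taskModel>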
scribenick: kaz

Topic: Session on Model-Based Approaches for Interactive Application Design

Topic: UsiXML - Dave Raggett

-> http://www.w3.org/2005/Incubator/model-based-ui/wiki/UsiXML UsiXML
-> http://www.w3.org/2005/Incubator/model-based-ui/wiki/images/e/ef/UsiXML-MBUI-W3C2009.pdf UsiXML slides

(Dave leads a discussion about UsiXML)

* longevity of the MBUI XG site is one issue

Note: "AUI" means "Abstract User Interface" and "CUI" means "Concrete User Interface".

Toru: is UsiXML part of MDA?

Dave: yes

Fabio: the point here is considering the abstract level

Daniel: our scope is different from OMA's broader picture; it would be nice to see what fits better here

@@@: thinking about the relationship with the broader picture would make sense

(some more discussion about the Web engineering community and W3C)

* how does UsiXML relate to other model-based approaches?
* use cases?
* this workshop will provide some more concrete use cases
* much better organized research communities as well
* we want to understand industry demands
* not only UI but also other use cases

The observation is about the *lack* of industry uptake of model-based approaches in general, as well as of *methods*; this includes UI models.

* question about experimental data on which approach is better than another
(comparison between approaches)

Topic: UseWare engineering process - Gerrit Meixner, DFKI

[ slides: TBD ]

(Gerrit introduces DFKI)

(MBUID Use Cases)
* several dozen different devices are managed

(Useware Engineering Process)
* four phases: analysis/structuring/design/realization + evaluation

(Useware Markup Language (useML))

(Udit - useML editor)
* editor + simulator

(MBUID toolchain)
* process, tools and languages
* export from DISL to VoiceXML/SCXML (not implemented but should be useful)

(MBUID@Run-time)
* devices have to be maintained
* but there is bad accessibility caused by the physical configuration

(SmartMote)
* remote control for intelligent production environments
* task-centered, adaptive and wireless

(Indoor Positioning Systems)
* Ubisense UWB real-time positioning system
* RFID Grid
* Cricket ultrasonic indoor location system

(Norms, Standards and Guidelines)

Q&A:
* how to handle multiple users?
-> user groups managed by a supervisor or administrator
* difference between useML and CTT?
-> 5 elements assigned to specific tasks
-> can be extended depending on concrete projects/requirements
* any patents?
-> no

* would like to follow the 2.0 version

[break]

Scribe: Steven

Topic: Semantic Hypermedia Design Method (SHDM), Daniel Schwabe, PUC-Rio

Daniel: I've been working on hypermedia since the 90's
... now the Semantic Hypermedia Design Model
... I will offer some reflections on what we have been looking at
... and important points we should discuss

Pablo: Any relation to NCL?

Daniel: In early stages; they are more low-level

Daniel: Why do we use model-based approaches? What do we want?
... I want to leverage abstractions
... find a way to describe abstractions in a concise way, ignoring some details
... providing a model, abstracting an artefact
... this provides a concise abstract language

Steven: What do you mean by artefact?

Daniel: The concrete thing we want to produce
... we are not just observing artefacts, but engineering them
... the models will abstract them; we need several models to completely characterise what you want to produce
... translate between models, to get an executable one

xxx: Is a UI a representation of what a user wants to do?

Daniel: I come to that later.

Daniel: We have looked at many types of models [slide lists them]
... SHDM Models
... our meaning of 'navigation' is different from most usage
... it is a conceptual layer, where you travel from node to node in a hypermedia graph
... this is missing from the task level of Cameleon
... our abstract interface looks at widgets only from their role, not their form
... another model is the rhetorical model
... for mapping between models, especially time-dependent ones
... it gives the timing and ordering of events
... especially when communicating between people
... for instance we used it to generate animated transitions
... not just from an artistic point of view

yyy: How do you capture the semantics in such a precise way?

Daniel: We propose a semantic model in terms of which we describe the interaction
... though not the dynamic semantics

Pablo: Is it right that the rhetorical model is the timing model?

Daniel: Yes, roughly

Pablo: You weren't talking about hypermedia

Daniel: No, only as an issue
... on the web the hypermedia is built in

Topic: ConcurTaskTrees and MARIA languages for Authoring, Fabio Paternò, Carmen Santoro, Davide Spano, ISTI, CNR

Fabio: I will provide an overview of our work
... the point is how we use ConcurTaskTrees as support

Fabio: Why another model-based language?
... well, technologies develop fast
... such as mobile, gestures, voice
... we need a language to address new issues
... and to clean up, make things more usable

Fabio: MARIA uses features from existing languages
... data model
... events
... dialogue model
... support for Ajax scripts

Fabio: Many techniques are used on current websites

[Diagram of AUI Meta Model]

Discussion points from Daniel's presentation:
1 - Refinements and improvements to Cameleon's model: a Navigation Model (as a relevant part of the "Task and Domain" model)
2 - A more precise characterization of "Abstract Interface"
3 - Need for a Rhetorical Model to guide the mapping from Abstract to Concrete Interface, especially for time-dependent interaction

Fabio: Composition is of two types
... grouping
... [scribe misses the second]

Fabio: How can we support service-oriented apps?
... at the service level
... at the app level
... at the UI level
... we would like to address composition at the app level

Fabio: The service developer creates service annotations to provide hints on implementation
... the hints are independent of the UI implementation language

Fabio: We needed an informal phase of task analysis, to then formalise in the model
... we transform to a concrete UI
... the methodology is not top-down, but has a bottom-up first step

Daniel: I need to have an analysis of what I need first, surely

Fabio: Sure
... but we have to think about what functionalities are available before we can decide how to use them
... web services impose certain constraints

[Demo]

Fabio: Starts with the task model, services and annotations, and task binding
... currently we have languages for desktop, mobile, [see slide for full list]

Pablo: You mentioned 'nomadic'. How do you do that?

Fabio: By adaptation, see the demo

Pablo: You mentioned SMIL. Do you integrate, or just generate?

Fabio: We generate HTML+SMIL

Topic: Session on Model-based Support at Run-Time (Adaptation, Migration)

Topic: Run-time role of UI models, Grzegorz Lehmann, DAI-Labor, TU Berlin

[Postponed till later]

Topic: Migratory User Interfaces with MARIA, Fabio Paternò, Carmen Santoro, ISTI, CNR

Carmen: Continuing from Fabio's talk
... the motivation is multi-device use, without having to restart when changing device
... domains such as shopping, bidding in auctions, games, making reservations
... our system makes a migration request
... and then there is a transformation to obtain a UI adapted for the new device, while keeping state
... we generate the UI at runtime, automatically
... migrating does a reverse step
... to obtain the semantics and state, which are used to generate the new interface

Steven: Why do you need the reverse step?
Carmen: We need to reason about the web page being used

Carmen: Migration does device discovery
... this uses a proxy server
... that captures the state

Steven: So you are migrating any app on the web, not just your own?

Fabio: That is right.

Steven: OK, now I understand the answer to my earlier question

[Diagram of the semantic redesign stage]

Carmen: This is followed by a splitting step, if needed, to reduce the amount of screen space required
... the user can customise the transformation step if needed

[Example migration of a Pacman game]

Carmen: On the mobile device there is a dialogue requesting the migration
... and the game is represented on two pages

Carmen: It is possible to migrate only parts of an application
... this needs interaction from the user
... to identify which parts are migrated

[Example partial migration]

Carmen: The user selects parts of the page for migration
... and only those parts are migrated

Steven: So the parts that are not migrated are hardwired into the migrated version?

Fabio: Yes

[Video of migration support]

Carmen: This is migrating the W3C home page partially to a mobile device
... as the user selects parts of the page for migration, they are highlighted on the screen
... the generation produced more than one page

Pablo: This works for HTML, how about Flash?

[laughter]

Dave: How about simultaneous interaction with more than one device?

Carmen: We are working on this

zzz: I didn't understand the semantic reengineering
... how do you do it?

Carmen: We have rules to map the concrete description to semantics

zzz: Can this be standardised?

Dave: Let's make that a discussion point for tomorrow

[LUNCH]

Topic: Run-time role of UI models, Grzegorz Lehmann

scribe: dsr

Present+Kaz_Ashimura, Katsuhiko_Kageyama, Toru_Kobayashi, Hiroyuki_Sato, Claudio_Venezia, Pavel_Kolkarek, Jochen_Fiey
Present+Nacho_Marin, Javier_Rodriguez, Javier_Munoz, Michael_Nebeling, Yogesh_Deshpande, Jean-Loup_Comeliau
Present+Grzegorz_Lehmann, Carmen_Santoro, Lucio_Davide_Spano, Florian_Probst, Patric_Girard, Giorgio_Brajnik, Jaroslav_Pullmann

Keep the design-time models at run time to support adaptation.

One kind of adaptation is moving an application dynamically from one device to another.

This can even involve "following" the user around the home.
This occurs without losing the interaction state of the user interface.

This involves propagating user interface events up the abstraction layers, and similarly reifying actions down through the layers. This requires considerable flexibility in the models.

This creates challenges for what can be defined at design time, given the need for adaptation at run time, e.g. the size of buttons.

Jaroslav: we seem to be missing something at the concrete-to-final UI levels.

Question: what about the final user interface in the models considered?

Grzegorz: it is hard to define the boundary between the two...

Users can influence this, e.g. by distributing different parts of the UI to different devices.

Designers can set some constraints on the preferred UI, but this is not hard and fast.

Daniel: your models don't have the values.

Jose: the models have variables for, say, button sizes, right?

Grzegorz: yes, but the value is determined at run-time

Fabio: the use of reverse engineering techniques would allow the models to know the actual values of the interface element properties

Daniel: most designers today don't think in terms of preferences and constraints.

Steven: they think in terms of pixel perfection, which is a problem for adaptation.

Question: to what level do you model the user?

Grzegorz: we have a limited set, e.g. adult vs child, or left- vs right-handed.

Does the approach learn from experience?

Not yet; that could be done in future work.

My point is that we need a new generation of designers who understand fluid design, where they think in terms of 'house style' rather than pixel perfection.

New devices provide new services (adds Jaroslav) as a point of extension.

Question: performance problems?

Grzegorz: currently we rely on a central server which tracks the whole environment; the devices are treated as being fairly stupid

Kaz: who picks the manager?

Dave: or it could even be in the "cloud"

Topic: How can cloud computing ... (NTT)

Cloud computing can help with automatic Web UI migration.

Analogy with VNC, which distributes the UI as image tiles + UI events.

Cloud-based models can then be used to update the UI as needed.

Our lab has developed a virtual smart phone which runs in the cloud.

We now want to see how MBUI approaches can be used with this approach.

Fabio: doesn't this introduce latency issues?

The mobile devices have good processing power, so what use cases are particularly suited to the cloud-based approach?

Answer: good for security, as well as less dependence on device capabilities.

This reduces the burden of getting applications to work on different devices.

We can take advantage of different device sensors, e.g. accelerometer, compass, temperature, location, etc.

Jaroslav: this also makes it easier for users, since they don't have to download and install new apps.
(downside is the lack of support for offline apps)

Topic: MyMobileWeb: A pragmatic approach to MBUIs, Ignacio Marín

The heterogeneity of mobile devices presents challenges to developers; also, mobile users want to do different things than desktop users.

MyMobileWeb is an open source framework for rapid development of mobile web apps and portals.

We make use of W3C specs such as SCXML and XForms.

We support synchronization of the delivery context between devices and servers.

We use DISelect and an XHTML-like syntax, but at a higher level.

CSS is used to map the abstract UI to the concrete UI levels.

The content model of the root element starts with resources and is followed by the UI description.

Different controls, e.g. for date/time.

Steven asks for more details on the UI controls in relation to XForms.

The set of UI controls is oriented to the needs of mobile web apps,
e.g. chained menus for a set of mutually dependent menus.

Many extensions in IDEAL2, e.g. maps, media, graphs.

SCXML is used to specify MyMobileWeb application flow.

We define new IDEAL elements for new UI controls and then map these to delivery formats as appropriate to the delivery context.

I asked why inputtime and inputdate controls were necessary when you know from the data that it is a date or time, so that a simple input control should be enough.

The answer was syntactic convenience for expressing the options involved.

Topic: Session on Model-based Approaches in Industrial Contexts

scribenick: kaz

Topic: Formal ontologies and UI standards - Florian Probst, SAP Research

[ slides: TBD ]

(Levels of Application Integration)

(SoKNOS Project)
* how to sync different UIs?

(Why Formal Semantics are Needed)
* a system with modular UIs
* with semantic support!
* no manual adjustment needed

Wrap events with semantic annotations to enable UI components to keep working when raw events change.

* semantically annotated events provide mutual understanding

Pablo: we had a similar problem, and we used a common data model as a solution.

Florian: a common data model doesn't scale.

* avoid cross-application dependencies

(Ontology of User Interfaces and Interactions)

Florian: does an ontology for user interfaces and interactions already exist that we can use as a starting point for semantic annotations?

(Separating "Real World" from "System World")

Strict separation between domain models and interaction models.

(Research Challenges)
* there are so many models and description languages that implementers have to learn various models/languages...
* on the other hand, semantic models for UIs enable dynamic exchange of UIs

We've found good performance (<2 seconds) even for reasoning over large ontologies.
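[As an illustration of the semantically annotated events mentioned above, a minimal sketch follows. The wrapper format, the ontology URI and the concept name are invented for illustration; they are not taken from SoKNOS or SAP's implementation.]

  <!-- hypothetical event wrapper: the raw UI event is accompanied by a stable
       reference to an ontology concept, so consuming UI components keep
       working even if the raw event format changes -->
  <event source="mapWidget" timestamp="2010-05-13T13:00:00Z">
    <raw type="selection" target="polygon-42"/>
    <semantics ontology="http://example.org/ontology/emergency#"
               concept="FloodedArea"
               relation="userSelected"/>
  </event>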
Q&A:

Jose: @@@ (scribe missed the question)

Florian: the domain ontology commits the abstract models

Fabio: this use of ontologies could be cumbersome; perhaps we could short-cut that with some standard models, no?

What is the value of formalizing the ontology?

Florian: I am quite doubtful about many ontologies and prefer semantic rigor

Fabio: we don't want to use an ontology...

Florian: we don't stick with the ontology
... the UI is a human-created thing
... an ontology is rather formal

Topic: dinner logistics

19:30 at Via Cavour, close to Termini station (northwest side)

Topic: LEONARDI, a model based framework for automating GUIs - Jean-Loup Comeliau, W4-Lyria

[ slides: TBD ]

(W4: Presentation)
* LEONARDI GUI

(W4.eu is not the next generation of W3C; the last W is for workflow!)

(Issues to solve)
* the GUI is very important for end users

We're involved in EU projects, e.g. Serface and Serenoa; our aim is to add features to our products.

* but implementing a GUI is complex and delivering it is expensive

50% of development cost is related to GUIs (source: IEEC)

(LEONARDI Scope)
* MVC construction

(Alternatives)
* programming GUIs is expensive
* 4GLs/MDA solutions
* but these still have several issues

(Vision)
* proposal: LEONARDI
* driven by the business model

With W4's product (Leonardi), developers focus on the business model; Leonardi deals with the technical underpinnings.

* (not 0 but) less code
* run-time execution

Java-based framework; the UI is generated by the app engine, not by a code generation approach.

(How does it work?)
* model: an XML description of the business world
* compose: generate a table of actions and a navigation tree
* specialize: add dynamic portions using Java (links to Java code)
* deploy and execute
- on-the-fly generation of screens

(Architecture)
* various data resources

(Benefits)

Dave thinks about the role of RDF triples as an abstraction over different data model frameworks.

* cheaper and quicker
* also simpler from technical/design viewpoints

(Application types)

(Customers)

Q&A:
* this is destruction of data and behavior
* there is no workflow engine
* distinguish where to transition based on context

[break]

scribenick: fpatern
Scribe: Fabio

Topic: Usage of open standards in a document-oriented service framework, Jaroslav Pullmann

No questions.

Topic: XForms in the context of MBUI, Steven Pemberton, CWI

Question on the accessibility of XForms.

there is a person ...

Question about possible convergence between XForms and the model-based group.

Positive answer: XForms is going to be re-chartered and XBL needs improvements.

Topic: MBUI and accessibility, Dave Raggett, W3C/ERCIM

http://webdirections.org/

If model-based approaches had been adopted, we would not need ARIA.

Question: whether the ARIA taxonomy and roles are motivated by usage in practice.

Topic: MBUI for multimodal applications, Kazuyuki Ashimura, W3C/Keio

Fabio: we are using X+V in our work on MARIA; is X+V dead?
... Is the approach you described (the MMI architecture) going to be supported in browsers?

Kaz: Opera has lost interest in X+V and these days is more interested in HTML5 and adding new device APIs to browsers.
... I am trying to interest browser vendors in multimodality, and not only in HTML5.

Jaroslav: the role of EMMA is as a packaging format, right?

Kaz: we need to apply EMMA to a wider range of interaction types.
... EMMA needs to handle different kinds of user input, including binary sensor data.
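[To illustrate EMMA as a packaging format for diverse input types, a minimal sketch of an EMMA 1.0 document wrapping a non-voice input follows. The emma:emma and emma:interpretation elements and their namespace come from the EMMA 1.0 specification; the application namespace and the command element inside the interpretation are assumptions for illustration.]

  <emma:emma version="1.0"
      xmlns:emma="http://www.w3.org/2003/04/emma"
      xmlns="http://example.org/app">
    <!-- one interpretation of a user input; emma:medium and emma:mode describe
         the input channel, the body carries the application-level semantics -->
    <emma:interpretation id="int1"
        emma:medium="tactile" emma:mode="gui"
        emma:confidence="0.9">
      <command action="migrate-ui" target="mobile-device"/>
    </emma:interpretation>
  </emma:emma>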