Tuesday afternoon - first session

Rainer Simon - The MONA Project

Fabio: could you use your technique to add metadata?

Rainer: not a metadata specialist, but could use an approach where the document is the metadata

Wendy: re. your emulator, would be cool if you emulated assistive technologies

Oskari (to Wendy): is there a way to test a site for accessibility?

Wendy: evaluation and repair tools, show rendering order, text viewers, etc.

Lisa: hard to "simulate" reading disabilities using a screen...

Rotan: could be an issue for a break-out group. Simulate the metadata or emulate the client.

Rhys: you have an implementation of DISelect?

Rainer: no

Rhys: so you could participate in the DI Activity... At WWW2003, there was a notion of breaking down a dialogue into subdialogues, with pagination. How do you do that?

Rainer: you assign groups of widgets to individual tasks
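
[Illustrative sketch: one way to express "groups of widgets assigned to individual tasks", using plain XForms groups. This is not the MONA markup, and the field names are invented; an adaptation layer or small-screen client could present each group as its own page, giving the pagination Rhys mentions.]

  <!-- each group corresponds to one task / subdialogue -->
  <xf:group xmlns:xf="http://www.w3.org/2002/xforms">
    <xf:label>Passenger details</xf:label>
    <xf:input ref="name"><xf:label>Name</xf:label></xf:input>
    <xf:input ref="email"><xf:label>E-mail</xf:label></xf:input>
  </xf:group>
  <xf:group xmlns:xf="http://www.w3.org/2002/xforms">
    <xf:label>Payment</xf:label>
    <xf:input ref="card"><xf:label>Card number</xf:label></xf:input>
  </xf:group>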

------------

Barry Haynes - Orange Activity in Content Adaptation

Stephane: is there a major difference between your content selection and DISelect?

Barry: remarkably similar

Stephane: what are you using to instantiate the parameters of the delivery context, CC/PP?

Barry: we have a proprietary device database, or UAProf, which is where we're going

Stephane: dynamic device information, user prefs?

Barry: no
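
[Illustrative sketch: what selection against the delivery context looks like in DISelect terms (Barry's mechanism is described as "remarkably similar", not identical). The element names follow the DISelect draft; the sel namespace URI and the dc:cssmq-width() access function are quoted from memory and may be inexact, and the dc prefix binding is omitted. The values themselves would come from UAProf or a device database.]

  <div xmlns:sel="http://www.w3.org/2004/06/diselect">
    <sel:select>
      <!-- delivery-context access function; exact name and prefix binding may differ -->
      <sel:when expr="dc:cssmq-width('px') &gt;= 320">
        <img src="map-large.png" alt="Route map"/>
      </sel:when>
      <sel:otherwise>
        <p>Text directions instead of the map.</p>
      </sel:otherwise>
    </sel:select>
  </div>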

Rotan: about themes, is it similar to CSS?

Barry: mostly CSS, but we introduce markup to decouple ourselves from things, and use a preprocessor approach.

Rotan: there's element level CSS, right Bert?

Bert: there's discussion about it. We're trying to find out whether we want it to work.

Oskari: content adaptation?

Barry: we use OML in terms of application. People don't type it. We provide tools, like the MONA tool set. In the long term we want to point surface developers at it so they can provide content using this markup

Phil: you have to give people a tool to get the metadata, or pay them.

Rotan: 2 ways: legal requirement or pay them (demonstrate a benefit)

-------------

Owen Conlan - The Multi-model, Metadata-driven Approach to Content and Layout Adaptation

Joost: how do you author narratives?

Owen: we started by hand, which wasn't scalable. Then we worked on an authoring tool, and it gives us insight into the strategy itself.

Joost: layout?

Owen: we have adapted the tool, but we don't understand the layout. Same for everything, etc.?

Fabio: how do you adapt to the user?

Owen: we can do it dynamically, but don't want to confuse them. Or we can ask the user questions first and try to remain consistent.

Rotan: conflicts. A noisy environment: the user says it's useless, the server says it's essential. How do you resolve that?

Owen: we don't yet. There are conflicting decisions. Should there be different strategies, one that has precedence, etc.? We don't know yet.

Rotan: relationship with @role?

Mark:

Owen: @role is closer to the content. Our bigger issue is vocabulary and taxonomies. @role may facilitate it, but the selection will still need a common vocabulary.

Lisa: it's important to link the author to a strategy, not to the reader themselves.

Mark: I see your approach as a multilevel "role", operating at any level. With XForms, widgets are device-independent: you define the presentation in an abstract sense. That's a pattern. You could...
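
[Illustrative sketch of the abstract widget Mark describes: a plain XForms select1 with no concrete presentation attached. A desktop browser might render it as a drop-down, a phone as a softkey menu, a voice browser as a spoken prompt. The "class" field is invented for the example.]

  <xf:select1 ref="class" xmlns:xf="http://www.w3.org/2002/xforms">
    <xf:label>Travel class</xf:label>
    <xf:item><xf:label>Economy</xf:label><xf:value>economy</xf:value></xf:item>
    <xf:item><xf:label>Business</xf:label><xf:value>business</xf:value></xf:item>
  </xf:select1>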

Daniel: authors shouldn't know about the device, but because of themes I have the impression that they do.

Rhys: it's certainly the aspiration, and some markups get close to it.

--------------

Roland: when I start thinking about humans, they want a degree of predictability. Although people do different things, they are often the same, and the adaptation should take into account the fact that the user has done something before, often, etc.

Rotan: there's a lot to be said about predictive adaptation. Do we think that could be done with a model of the user that's more than what the user is like now, but also adds information on what the user has done before?

Rhys: "leave the corn flakes in the same aisle"

Rotan: thinking about client-side adaptation. You can have control over the adaptation, because it's happening closer to you. But then there may be conflicts between what the provider would like you to see and what you want to see.

Phil: but I don't want to be given many choices in presentation.

Roland: there are people who switch to the Amazon web services interface so they can use whatever interface they like.

Rotan: yes, it's like RSS feeds.

Mark: in XForms you have the binding of a control to instance data. If the data is a date, you render it with a calendar widget, for instance. If you take out the type information, then you don't have presentation information.
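
[Illustrative sketch of the binding Mark describes: the control itself is generic, and it is the xsd:date type on the bound node that lets a client choose a calendar widget; remove the bind and the cue disappears. The instance and field names are invented.]

  <html xmlns="http://www.w3.org/1999/xhtml"
        xmlns:xf="http://www.w3.org/2002/xforms"
        xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <head>
      <title>Typed binding</title>
      <xf:model>
        <xf:instance>
          <booking xmlns=""><depart/></booking>
        </xf:instance>
        <!-- the type is the only presentation cue the author gives -->
        <xf:bind nodeset="depart" type="xsd:date"/>
      </xf:model>
    </head>
    <body>
      <xf:input ref="depart">
        <xf:label>Departure date</xf:label>
      </xf:input>
    </body>
  </html>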

Rotan: markup is easy, you can just look at it and edit it

[discussion on device-independent authoring]

Bert: the answer depends on the author and on the subject. Robert Cailliau explains that HTML liberated him from layout. But that may not be the case for everybody. If you want to go further in adding metadata, it needs to be done automatically. Computers should infer the metadata, using AI techniques.

Lisa: is this the right question? I'm happy to write using Notepad, but people won't. People use Word. When we're collecting metadata, we make it simple, don't expose markup, etc. The algorithm is only going to be right 90% of the time. You want to pass it back to humans to reach 99%.

Bert: interesting that you don't try to reach 100%

Rotan: if the author provides models (like "news article"), the computer could infer the metadata from the structure.

Mark: even if this example is right, there are plenty of examples where it breaks. I could write about writing news articles, but it wouldn't be a news article.

Rotan: the author could put the context information at the beginning, by ticking a form or something.

Roland: many people don't use that. They go for changing the presentation directly (fonts, colors). The journalist uses RulesML and knows it, so that's ok. But someone else could use a RulesML editor to write a letter and just see that it looks nice. Sometimes "bold" is just "bold".