Transcript for User Modeling for Accessibility
Online Symposium 15 July 2013

[Introduction]

>> SHADI ABOU-ZAHRA: Okay. Let's get started. Welcome, everybody, to the Research and Development Working Group Online Symposium on User Modeling for Accessibility.

The Research and Development Working Group is one of the working groups of the W3C Web Accessibility Initiative and focuses on looking at trends and developments in the research arena in order to inform the development of W3C standards and to help promote the accessibility field altogether.

This particular symposium is being chaired by Yehya Mohamad and Christos Kouroupetroglou. They will begin by explaining the symposium and its goals.

Just very briefly, the logistics: your lines are all muted to avoid noise on the call. If you want to speak up, please type the number 41 and then the pound key, sometimes also called the hash key, the one that looks like a grid, and that will put you on the speaker queue.

The chairs of the call will be monitoring the queue and will try to take your questions as they go along or where it best fits.

So please feel free to engage. Also, you can use the IRC channel to put in questions and comments, and again, the symposium chairs will be watching for that and trying to maybe address some of these as they go along.

Finally, we also encourage you to send follow-up thoughts and comments by email as is listed on the symposium page so that we can get all those inputs to develop the research reports that will be the outcome of this symposium.

Without further ado, I hand over to Yehya and Christos.

Please go ahead.

>> YEHYA MOHAMAD: Yes, hello. This is Yehya. Hello, everybody, and welcome again. So this symposium, as mentioned, will be dealing with user models as explicit representations of user properties, including their needs and preferences, as well as physical, cognitive, and behavioral characteristics.

So these characteristics are generally represented as variables, and user models are instantiated by the declaration of these variables for a particular user or group of users. Such instances of user models are usually called user profiles. Just a clarification of the terminology.
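A minimal illustration of this terminology (the variable names below are hypothetical, not taken from any of the symposium papers): the user model declares the variables, and a user profile is one instantiation of them.

    # Illustrative sketch: a user model as a set of declared variables,
    # and a user profile as one instantiation of those variables.
    from dataclasses import dataclass

    @dataclass
    class UserModel:
        visual_acuity: float          # physical characteristic (1.0 = normal)
        short_term_memory_items: int  # cognitive characteristic
        prefers_large_text: bool      # preference

    # A user profile: the model instantiated for one particular user.
    profile = UserModel(visual_acuity=0.4, short_term_memory_items=5,
                        prefers_large_text=True)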

So that was a brief introduction to the Online Symposium, and I ask my colleague Christos: can you tell us why we selected user modeling as the topic for this symposium?

>> CHRISTOS KOUROUPETROGLOU: Yes. Thank you. Welcome from me too. User models are generally used to address particular user needs and preferences. User models are also known as user profiles or personal archetypes, and they can be used for personalization purposes and to increase the usability and accessibility of products and services. However, modeling for accessibility is still quite a young technology and needs to be discussed and addressed in order for future development and actions in the area to have more impact on the accessibility of products and services.

This is why we selected it as the topic for this symposium.

Yehya, can you tell us more about the symposium?

>> YEHYA MOHAMAD: Yes, this symposium will last for approximately two and a half hours. We will use a format closer to a structured panel discussion than a presentation model and will follow a question-and-answer approach. We will ask questions, addressing them directly to the authors of the papers relevant to each question. Any comments, please post them to the IRC channel, as Shadi said, and we will try to consider them in the discussion. So the symposium will consist of the questions to the authors plus questions from the audience via the IRC chat channel.

The answers to the questions should not exceed the limit of two minutes. And I remind you again of what Shadi said: to get on the speaker queue, you enter 41 pound, and 40 pound to get off the list.

So now we come to the organisational details of the symposium. Please, Christos.

>> CHRISTOS KOUROUPETROGLOU: Okay. Well, the Online Symposium is divided into three sessions. The first one will tackle the following points: the impact on user interfaces; gaps in user needs research; other technical issues and privacy; and dynamic user models and methods for learning and updating the user models.

Then we will have a five-minute break, followed by the second session of the symposium, which will tackle the following points: approaches in user modeling, the declarative versus the semantic approach; approaches in user modeling, user preferences versus user models; virtual user models, personas, and similar practices; and finally, standardization issues.

After that in the third session, we will have a general discussion, conclusion, and next steps.

Before moving on with the discussion of the first session, let us introduce the authors of the accepted papers who will be taking part in the discussion. Yehya, could you start the introductions of the authors?

>> YEHYA MOHAMAD: First, I will introduce Pradipta from the University of Cambridge. He will represent the first two papers, on inclusive user modeling and on the development of its applications. And the second author is Markus Modzelewski from the University of Bremen, who is representing the authors of the paper: Application of abstract user models as customer involvement in product development.

And Nikolaos Kaklanis from the Information Technologies Institute in Greece will be presenting the paper: Personalized Web accessibility assessment using virtual user models. Christos?

>> CHRISTOS KOUROUPETROGLOU: Yes. Silvia Mirri from the University of Bologna will be representing the authors of the paper: Profiling users from users' behavior. Matthew Bell from Loughborough University will be presenting the paper: Increasing the flexibility of accessibility modeling through the use of semantic relationships.

Whitney Quesenbery from WQusability and Usability in Civic Life is the author of the paper: Personas can tell the story behind the model. And Philip Ackermann from Fraunhofer Institute for Applied Information Technology, FIT, will be introducing the paper: Developing a semantic user preferences and device modeling framework that supports adaptability of Web applications for people with special needs.

[Session 1]

[Impact on user interfaces]

>> YEHYA MOHAMAD: We will now start the first session of the symposium with questions about the impact on user interfaces.

So my first question goes to Pradipta. You describe a simulator based on Web services. How does your approach change the user interface? Is it an automatic update or user driven? Does it happen at the design phase or at run time, or both?

>> PRADIPTA BISWAS: So addressing your question: in the inclusive user modeling approach at Cambridge, we try to approach adaptation at design time as well as at run time. In our user model, we have a set of statistical models and algorithms which can relate user parameters with interface parameters, and we implement this user model in two ways. One is the simulator, the inclusive user modeling simulator. That simulator is aimed to be used during design time, and it can be run even on a paper/pencil drawing, and it can show you the effect of visual, hearing, and mobility impairment and, to some extent, also cognitive impairment.

And then we moved on and developed a set of Web services, which is also in the second paper, if you check that out. This set of Web services, or the algorithms, can adapt interfaces at run time, and by adapt we mean they can automatically adjust the color contrast, change the spacing between different screen elements, and adapt the font size based on device and screen type. That means it is intelligent enough to take care of the screen resolution, type of device, and so on.
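A rough sketch of this kind of runtime adaptation (the function, parameter names, and scaling constants are illustrative assumptions, not the Cambridge Web services themselves):

    # Sketch only: relate user and device parameters to interface parameters.
    def adapt_interface(visual_acuity, screen_height_px, viewing_distance_cm):
        # Scale font size up as acuity drops and viewing distance grows;
        # the constants here are purely illustrative assumptions.
        base_pt = 12
        font_pt = base_pt * max(1.0, (1.0 / max(visual_acuity, 0.1))
                                * (viewing_distance_cm / 50.0))
        # Widen element spacing on small screens to keep targets reachable.
        spacing_px = 16 if screen_height_px < 800 else 8
        return {"font-size": f"{font_pt:.0f}pt",
                "element-spacing": f"{spacing_px}px",
                "filter": "contrast(1.2)"}  # CSS-style contrast boost

    print(adapt_interface(visual_acuity=0.4, screen_height_px=640,
                          viewing_distance_cm=60))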

And we applied this user model in a number of applications. We used it in the recently concluded EU GUIDE project, where we adapted interfaces for television, and presently we are using this user model for the Indian rural population, where we are developing mobile phone and smartphone applications as well as a desktop application, and we are tweaking these interfaces in terms of size, color contrast, inter-element spacing, and so on.

But saying that, we are not imposing any designs on users. The interfaces are dynamically adapted, but we also have preference settings where we ask for the user's preferences, in which case the interface can also be changed explicitly by user preference.

So we try to provide a default interface to the user, which is adapted according to our model and according to the range of abilities of the user -- maybe his visual acuity, his mobility impairment, and even situational impairment, like for a smartphone or mobile phone, we -- am I exceeding time? Sorry.

>> YEHYA MOHAMAD: Yes. So you will get more questions. This was just the first question.

>> PRADIPTA BISWAS: That's more or less what I wanted to tell, and thanks for the time.

>> YEHYA MOHAMAD: Thanks. Now I go to Markus. Markus, you presented an approach in your paper, and I am asking: who are the users of your approach or system, and at which stage of user interface design or usage does it take place? At the design phase or at run time?

>> MARKUS MODZELEWSKI: Hello. Yes, the users of our software are two different kinds of user. The first users are the designers -- product designers, physical product designers -- who will use the software. We have created a software framework for designers to help them design inclusive products. The others are the end users of the product.

Wait. This was the first question; wasn't it? Is this enough?

>> YEHYA MOHAMAD: Yes, and at which stage do changes to the user interface take place? At design phase or at runtime?

>> MARKUS MODZELEWSKI: Ah, okay. We are only focusing on the design phase. We start very early, in what we call the sketch design phase, the first phase, where the designers are creating drafts or paper sketches of the prototype. And later on, we go further into the computer-aided design phase, where designers are creating virtual prototypes in which they already have a (Inaudible) environment.

So we start from the very beginning, the sketch draft phase, and we end after the virtual prototype phase.

>> YEHYA MOHAMAD: Okay. Thanks, Markus.

>> YEHYA MOHAMAD: So I move now to Nikolaos. You are developing websites according to WCAG 2.0 and ARIA, and additionally according to virtual user models. How and when does your approach influence the user interface?

>> NIKOLAOS KAKLANIS: Okay. Good afternoon from me also.

So we are using (Inaudible) in order to provide personalized Web accessibility assessment. In order to assess the accessibility of a website, it is sometimes very important to perform a personalized Web accessibility assessment, because it is very difficult and needs a huge effort to build a website that is accessible for everyone.

So in many cases, we have some websites that have a specific target group of users, so we need accessibility for this target group.

Our paper presents an approach that performs personalized Web accessibility assessment using virtual user models. Our user models represent mainly people with disabilities -- various types of disabilities, like visual, hearing, cognitive, et cetera. And we are building these virtual user models at design time only, which means we have to know from the beginning, before the assessment, which is our target group and the corresponding virtual user model.

So we make a connection, a mapping, between the virtual user models and the WCAG 2.0 guidelines. So for each virtual user model, we know which guidelines have to be tested for a website.
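A hedged sketch of such a mapping (the success criterion numbers are real WCAG 2.0 identifiers, but the mapping and the group names are assumed examples, not the actual VERITAS tables):

    # Illustrative mapping: virtual user model -> WCAG 2.0 success criteria.
    VUM_TO_WCAG = {
        "blind":      ["1.1.1", "1.3.1", "2.1.1", "4.1.2"],
        "low_vision": ["1.4.3", "1.4.4", "1.4.8"],
        "hearing":    ["1.2.2", "1.2.4"],
    }

    def criteria_for(target_groups):
        # Union of success criteria to check for a site's target groups.
        checks = set()
        for group in target_groups:
            checks.update(VUM_TO_WCAG.get(group, []))
        return sorted(checks)

    print(criteria_for(["low_vision", "hearing"]))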

So is this enough? Do you need more details on this?

>> YEHYA MOHAMAD: No. Thanks for the feedback. We will come later to more details and other questions.

>> YEHYA MOHAMAD: So I am moving now to Philip. In your paper, you are developing an approach for a semantic user preference model. Can you describe the effect of your tools on the final user interface?

>> PHILIP ACKERMANN: Fine. So hello, everybody. To answer your question, we have different models in our framework: the Web technology model, the device model, and the user preference model. And inside the user preference model, we store different adaptation rules. Those adaptation rules can be changed at runtime, and they are also integrated by the application provider at runtime.

And the user has an interface called the Model Management System, and with this system, he can change these adaptation rules and then continue browsing the Web application. Okay?
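A minimal sketch of the idea (the rule format is a hypothetical assumption, not the actual Fraunhofer FIT framework): rules live in the preference model and are applied to the page at runtime, so the user can edit a rule and keep browsing.

    # Sketch: adaptation rules stored with the user preference model.
    adaptation_rules = [
        {"selector": "body", "property": "font-size", "value": "140%"},
        {"selector": "a", "property": "text-decoration", "value": "underline"},
    ]

    def rules_to_css(rules):
        # Render stored rules as a CSS override injected at runtime.
        return "\n".join(f'{r["selector"]} {{ {r["property"]}: {r["value"]}; }}'
                         for r in rules)

    # The user edits a rule via the management interface, then keeps browsing:
    adaptation_rules[0]["value"] = "160%"
    print(rules_to_css(adaptation_rules))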

[Gaps in user needs research]

>> YEHYA MOHAMAD: So that was the topic of the impact on user interfaces. We move now to the topic of gaps in user needs research, and I start with a question to Silvia.

How do you think your approach complements existing models and approaches, like the ones you mentioned: IMS ACCLIP, CC/PP, or ISO? And will your model cover all possible disabilities or only some of them?

>> SILVIA MIRRI: So first of all, the main idea of our project, of our approach, is to let our system learn user preferences as the user asks for adaptations. So our user profile is composed not only of user preferences, but also of what the user has discarded. We keep user preferences and what the user has discarded at the same level.

This can be very useful because we can understand what the user doesn't want. And this is an improvement over similar profiling systems, such as IMS ACCLIP and some other similar profiling systems.

And our model can be used by any kind of user. Our first prototype is only for users who need adaptation of textual characteristics, but it can be extended, and it can also be used by other kinds of users with other kinds of needs.
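A small sketch of such a profile (assumed field names, not the authors' actual schema): preferences and discarded adaptations are kept at the same level, so the system also knows what not to re-apply.

    # Sketch: a profile that records acceptances and rejections alike.
    profile = {
        "preferences": {"font-size": "150%", "font-family": "sans-serif"},
        "discarded":   {"color-inversion": True},  # offered, then rejected
    }

    def should_offer(adaptation):
        # Never automatically re-apply what the user has discarded.
        return adaptation not in profile["discarded"]

    print(should_offer("color-inversion"))  # False: the user rejected it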

>> YEHYA MOHAMAD: Okay. Thanks, Silvia.

>> YEHYA MOHAMAD: So I would go now again to Pradipta. In your approach, you describe models covering several user groups and disabilities. Can you tell us more about the level of detail of your models, their granularity, and which gaps exist in them?

>> PRADIPTA BISWAS: Yeah, right. So thanks for the question. In our model, we cover visual, hearing, and mobility impairment at this stage. Initially we developed some statistical models which are in conformance with the existing literature from psychology -- like the way visual perception and visual search work, or the auditory system works, or the details of people's pointing movements. So we did some research on that and tried to develop models with which we can simulate those processes.

And then we validated these models through a series of user trials: for the visual perception model using an eye tracking study; for the hearing impairment model, we did an audiogram study; and for the mobility impairment model, we conducted ISO 9241 tests. Once we were happy with these validations, we used these models to adapt interfaces at runtime. And then we also conducted a study comparing adapted and non-adapted versions of an interface to understand what the benefit of using the user model is.

We reported results from one of those studies in the second paper, but we are still conducting similar studies in different contexts, like when users are using an interface while working, and different things.

>> YEHYA MOHAMAD: So I move now to Markus, in a similar direction. In your paper, you are focusing on mild to moderate disabilities.

Which benefits would your model gain if it were extended to cover more user groups, like severely disabled persons, or abilities like cognition and emotions? And how could you do that?

>> MARKUS MODZELEWSKI: Yes, in our model, we used nominal but also categorical values to support the users. We had different target groups of people, like mildly impaired groups -- mildly visually impaired or something like that.

If we would extend it, this would mean that in our database, we would have more granular levels; there would simply be more attributes for each user model and more values and so on. So it's quite easy to extend it to more values.

In our design, the target was to create products for a wider group of people, a wider group of customers. If we would include all of those different impairments, it would result in more accurate -- I think it would result in a clearer view of recommendations. It would be a good idea for future work to extend it, for example, to cognitive impairments, like -- okay, I don't remember any cognitive --

>> YEHYA MOHAMAD: Okay, Markus. Thank you for your answer.

>> MARKUS MODZELEWSKI: It would be a good idea for future work, yes.

>> YEHYA MOHAMAD: So I move on to Matthew. Matthew, which assistive technologies and disabilities do you think lack research in terms of user needs generally, and especially in your profiles?

>> MATTHEW BELL: Right. Rather than specific technologies, I feel that the (Inaudible) are the idea of (Inaudible) of the data and the (Inaudible) with which a profile is produced.

The approach we are taking takes any device or piece of technology or user, in fact, and models them in the same way. This allows communication to be mapped or modeled between a user or device or assistive technology.

[Other technical issues and privacy]

>> YEHYA MOHAMAD: Thanks. We will come to you later, Matthew, to ask more questions about your approach.

So that was gaps in user needs research. We move on to the topic of technical issues and privacy, which is very important with regard to user modeling. In the light of current news, and in general, I think this is one of the crucial issues of having user data on government computers; therefore, the questions and answers here are very important for research in the area of user modeling.

So the first question goes to Nikolaos. In your approach, you are also using Web services. What measures do you provide to prevent unauthorized access?

>> NIKOLAOS KAKLANIS: Okay. Actually, in our approach, we don't have to care much about privacy, because our virtual user models do not contain any personal information -- I mean name, telephone, et cetera. So for us, a virtual user model is like a persona which contains realistic values, but it does not actually contain information about a specific user.

Someone having our virtual user model cannot trace the real user that is represented by the specific virtual user model. So we don't actually have to care about privacy.

>> YEHYA MOHAMAD: Okay. And you are not using it at runtime; you are using it more at design time; right?

>> NIKOLAOS KAKLANIS: Yes, yes, of course.

>> YEHYA MOHAMAD: So I have similar questions for Pradipta again. You are also using Web services. What information do these Web services expose, and how do you prevent unauthorized access?

>> PRADIPTA BISWAS: So what we keep about the user is the preferred font size or color preference, something like that, which is stored in an XML file on a secured Web server, which we believe is secure. It's a third-party server.

But for some other implementations, like in the EU GUIDE project, we implemented it stand-alone, so the whole profile is never stored on the Web. It's stored on the individual device, and it's protected by username/password.

>> YEHYA MOHAMAD: So Philip now.

You are exchanging preferences via HTTP headers. How do you ensure that no unauthorized access to the user data happens?

>> PHILIP ACKERMANN: So before the user can use the adaptations, he needs to log in to the system, either via a browser plug-in or a small local server.

So the first step is to log in, and after the login, the user has access to his user preference model, which is stored as RDF in the store, and he gets a unique ID. He sends this ID with every request going out to an adaptive application, and this identifier is only available to him after he logs in.
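A hedged sketch of the mechanism (the header name is an assumption, not the project's documented protocol): after login, the client attaches the opaque ID to every outgoing request so the adaptive application can fetch the matching preference model.

    # Sketch: attach an opaque identifier to each request after login.
    import urllib.request

    def fetch_adapted(url, session_id):
        req = urllib.request.Request(url)
        # Opaque identifier only; no personal data travels in the header.
        req.add_header("X-User-Preference-ID", session_id)
        return urllib.request.urlopen(req)

    # response = fetch_adapted("https://example.org/app", "a3f9...")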

>> YEHYA MOHAMAD: So do you think it's enough, or do you think there is a need to work more on this area in the future?

>> PHILIP ACKERMANN: For the moment, we think that it's enough, because we don't store any private information in the user model; we only store the transformation rules, the user interface transformation rules. So we don't have any personal information like name or age, and we don't even store the disabilities if the user has any.

>> YEHYA MOHAMAD: So I move on to Matthew. Matthew, how do you think privacy issues can be considered when using an approach like the one you are proposing?

>> MATTHEW BELL: Well, privacy is paramount. Without privacy controls on the user model, the user won't trust it, and that leads to abandonment.

The model we subscribe to is the one approved by the medical profession. So we look for a capacity to provide consent and then ask for consent before collecting data. Collection processes are then transparent, and should be transparent about any information that's collected.

It's become apparent through work with older people, which I have been involved in, that they don't identify as being disabled and therefore are not looking for support; they choose to see themselves as incompetent as opposed to it being a computer-based problem. And therefore, there's a need to take data that is of a sensitive nature in order to provide them with assistance, as opposed to allowing them to volunteer only the data that they want to volunteer.

>> YEHYA MOHAMAD: So I move on in the same topic to Silvia. You are using machine learning algorithms. Usually such algorithms need a lot of statistical information about the user's behavior. How do you cover privacy issues?

>> SILVIA MIRRI: Okay. Yeah, it is a very interesting question. For now, we do not need privacy measures, because we just store in our profile preferences in terms of text size, font type, colors, and so on. So there is no private data: no name, no age of the user, no information about disabilities, and so on. But in the future, we will need some privacy measures, and we have to study the best ones, because our intention is to cluster profiles just to better provide adequate recommendations to users.

>> YEHYA MOHAMAD: Okay. I think some encryption may be needed, because even from the preferences alone you can somehow infer the person and the disabilities or something like that. So I think even if we only store preferences, we may have to implement some privacy measures in user modeling.

So I move on again to Philip. How have you gathered the data of your user profiles or preferences, and how many profiles have you created?

>> PHILIP ACKERMANN: Okay. Let's see. I am sorry. Can I delegate this question to my colleague, Carlos?

>> YEHYA MOHAMAD: Yes. Shadi, can you unmute Carlos, please, Carlos Velasco.

>> CARLOS VELASCO: Okay. So the data was gathered, first, in a user study with people with disabilities -- around 25 persons were in the study -- and we clustered the results and came to a set of preferences.

We do not have profiles per se. We have a set of preferences for adaptation. We don't get involved in the classic profiles. That's how we get the information.

And from that, there are around 10 core typical adaptations for these user groups, and there are many variations of them. As we have said, in the user studies we identified these core adaptations, but the models support different variations and combinations of those. And in fact, the applications that have implemented those adaptations could adapt in real time to these different conditions.

I don't know if that answers your question.

>> YEHYA MOHAMAD: Yes, yes, it answers it very well. Thank you, Carlos.

So I move on to Pradipta with the same question. How have you gathered the data of your profiles, and how many profiles have you created?

>> PRADIPTA BISWAS: We collected data from approximately 50 users in Spain, the UK, and Germany. Then we clustered the data separately based on the visual, hearing, cognitive, and motor parameters, which gave us a set of clusters in the form of, say, mild visual impairment, mild cognitive impairment, or mild motor impairment, and then we create a profile by taking any combination of these clusters. In our recent work, we also conducted a similar survey in India, and now we have a database of about 70 or 80 users, where we have collected visual, motor, and cognitive parameters. We can cluster them separately or together.
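An illustrative sketch of the clustering idea described here, on synthetic data with assumed feature names (not the project's actual pipeline): cluster each ability dimension separately, then form a profile as a combination of one cluster per dimension.

    # Sketch: per-dimension clustering of survey data.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    visual = rng.random((50, 1))  # e.g. visual acuity per user
    motor = rng.random((50, 1))   # e.g. grip strength or tapping rate

    visual_clusters = KMeans(n_clusters=3, n_init=10).fit_predict(visual)
    motor_clusters = KMeans(n_clusters=3, n_init=10).fit_predict(motor)

    # A profile is any combination of per-dimension clusters, e.g.
    # (mild visual impairment, severe motor impairment):
    profile = {"visual": 0, "motor": 2}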

>> YEHYA MOHAMAD: So Markus, the same question. How have you gathered the data of your user profiles, and how many profiles do you have?

>> MARKUS MODZELEWSKI: Hello? Yes, we made a user study including 58 elderly people with impairments, from age 65 to about 95, the oldest. So we gathered much data from this, from which we implemented about six user models, and then afterwards different user models with a mixture of impairments. So from the study with 58 participants, we gathered attributes and data, and then we created a mixture from them.

What was the second question?

>> YEHYA MOHAMAD: This was the question. We wanted to know how you did it with user studies and how many you had, so you said 58. Okay. Thank you, Markus.

>> YEHYA MOHAMAD: I would like to go to Nikolaos with the same question. How have you gathered the data of your user profiles, and how many profiles do you have in your database?

>> NIKOLAOS KAKLANIS: Okay. First of all, we took measurements from about 130 people with various disabilities, mainly motor-impaired and elderly people.

We didn't use these measurements to make personas. Instead, we performed statistical analysis, regression analysis, on these measurements. Because the sample size was relatively small, we also used information from the literature, and we developed a novel regression method that is able to handle small sample sizes. This regression method enabled the development of virtual user models that correspond to specific population groups. I mean, this enables us to make virtual user models that correspond, for instance, to the 8% of users having arthritis. And then we extracted the values of the virtual user models using this approach, and we have developed a repository, which is online -- I don't have the link right now, but I can find it -- that has the implementations of all the virtual user models, and it is available to anyone who wants to use it.

>> YEHYA MOHAMAD: Can you send it later to the mailing list maybe, Nikolaos?

>> NIKOLAOS KAKLANIS: Yes.

>> YEHYA MOHAMAD: So I see that Andy has an opinion about the privacy issue. Shadi, can you please unmute Andy so that he can tell us his opinion on that.

>> ANDY: Right. It was just the point that all the groups working with preferences have all hung back on this privacy thing. And I think it's because you need more general solutions than are appropriate for just preferences.

An example: if you're in an educational system and you are doing stuff with assessment data, you've got (audio breaking up) -- requirements to adhere to there, and you are going to build systems to manage those things. And it's perfectly possible for accessibility preferences to ride on the same systems. I think the crucial thing is that (audio breaking up) -- you can ride on the same system where the data is being protected.

That's all. Thank you.

[Dynamic user models and methods for learning / updating user models]

>> YEHYA MOHAMAD: So we close the topic of privacy and technological issues and go on to the last topic, dynamic user models and methods for learning and updating user models. And I start with Matthew.

You have identified in your paper the problem of the continuous update of user profiles for new contexts and the associated acquisition of information. Can you please elaborate a little bit on this point?

>> MATTHEW BELL: Yes. All of the models that are going to be used at runtime are going to require continual updating. Going back to older people as a user group -- a very good example, because they are in continuous decline of capabilities -- as long as they use a device, they are going to need to update their profile, as capabilities will change not only on a day-to-day basis, but potentially more often than that, given the equipment that they are using at specific times, moving from one device to another.

As every device is independent and different, different user agents or technology agents can be used to update the profiles for a specific device. This gives us the advantage that we can work with legacy devices, as agents can be developed for specific platforms and tie in to the same generic storage format.
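A minimal sketch of that idea (the file format and names are assumptions): each platform-specific agent merges its observations into one generic profile store.

    # Sketch: device agents writing to a shared, generic profile store.
    import json, time

    def agent_update(store_path, device_id, observations):
        # Merge one device agent's observations into the shared profile.
        with open(store_path) as f:
            profile = json.load(f)
        profile.setdefault(device_id, {}).update(observations)
        profile["last_updated"] = time.time()
        with open(store_path, "w") as f:
            json.dump(profile, f, indent=2)

    # agent_update("profile.json", "tv-livingroom", {"font-scale": 1.5})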

>> YEHYA MOHAMAD: I move on now to Philip. You describe a set of tools to support the user in expressing her preferences and in detecting assistive technology on her computer. How are these changes managed and communicated to the server?

>> PHILIP ACKERMANN: Okay. Yeah, we have a system on the client side that's called the Model Management System, and this is running on a local server. As soon as the user performs changes in his user preference model, we synchronize those changes through a Web service with a remote RDF store, and the application provider, who makes adaptations to his or her application, reads those preferences from the global store.

So we have a synchronization mechanism between the client and the server.

>> YEHYA MOHAMAD: And can the user update or change her preferences at any time?

>> PHILIP ACKERMANN: Right.

>> YEHYA MOHAMAD: So I move on now to Nikolaos. How easy would it be for your approach to be extended to formats and technologies other than the Web? For example, desktop applications, native apps, et cetera?

>> NIKOLAOS KAKLANIS: Our user models have actually been developed within the VERITAS project, which is not targeted at the Web. So they can be used in any possible application domain, and we have already tested them in automotive, entertainment, and workplace scenarios -- in many, many scenarios. They are not targeted at the Web, but they can be used in Web accessibility assessments.

>> YEHYA MOHAMAD: Okay. They are somehow generic?

>> NIKOLAOS KAKLANIS: Yes.

>> YEHYA MOHAMAD: So I move on to Markus. You are describing an approach to support the design mainly of CAD prototypes. Would it be easily possible to adapt your approach to other formats and technologies?

>> MARKUS MODZELEWSKI: Yes, we are using (Inaudible), and this could easily be included in a different format and for different things. For example, there is some future work ongoing about how to create something more adaptive and more adaptable, as in one paper with a similar idea from this conference -- I don't know which one it is. So it is already planned how to extend the whole thing to a different strategy, something like that. So there is also something else ongoing.

Okay.

>> YEHYA MOHAMAD: So will you create an API for another format, or how will you do it?

>> MARKUS MODZELEWSKI: Currently we have a base ontology, and from this base ontology we derive regional ones; afterwards we will have something completely different. I'm proposing this right now as my PhD, so afterwards it will be released, and then I can publish it differently. So it's ongoing.

>> YEHYA MOHAMAD: Okay. Fine. Thanks, Markus.

>> YEHYA MOHAMAD: Okay. I move on to Silvia. Your system generates user profiles automatically based on a machine learning algorithm. How do you prevent the system from making unwanted adaptations which may confuse the user and jeopardize her trust in the system?

I ask because experience with automatic adaptations shows that many users don't like them.

>> SILVIA MIRRI: Yes. We provide some thresholds for the adaptations in the profile -- different kinds of thresholds for preferences and for things the user has discarded.

The system works in this way: when the system finds a discarded characteristic in a webpage, it evaluates the threshold associated with that characteristic, and then it can either apply an automatic adaptation or just suggest that adaptation with a sort of pop-up window. And according to the user's feedback, the system assigns a reward or a punishment to that kind of adaptation, to the associated discarded characteristic, and to the (Inaudible).

And so the system learns how to adapt some characteristics according to user's behavior.

We are conducting some experiments just to understand which thresholds to use in our system. So we are still studying how to fix these thresholds and whether this threshold mechanism can work in a useful way or not.
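A sketch of the threshold-plus-feedback mechanism as described (the score values and step size are assumptions): apply automatically only above a threshold, otherwise suggest, then reward or punish on feedback.

    # Sketch: learned score per adaptation, gated by a threshold.
    scores = {"enlarge-text": 0.4}
    AUTO_THRESHOLD = 0.7

    def on_page_characteristic(adaptation):
        if scores.get(adaptation, 0.0) >= AUTO_THRESHOLD:
            return "apply automatically"
        return "suggest via pop-up"

    def on_feedback(adaptation, accepted, step=0.1):
        delta = step if accepted else -step  # reward or punishment
        scores[adaptation] = scores.get(adaptation, 0.0) + delta

    on_feedback("enlarge-text", accepted=True)
    print(on_page_characteristic("enlarge-text"))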

>> YEHYA MOHAMAD: Okay. Thank you. So you don't have only automatic adaptation; you are asking the user for feedback as well, which is, I think, a good feature.

>> SILVIA MIRRI: Yes, user feedback just to try to better tune the behavior of our system.

>> YEHYA MOHAMAD: So I go now to Whitney. You describe an approach using storytelling to illustrate user profiles, which will in general create static profiles. Do you think that a profile initially created using your approach could be combined with a semi-automatic or automatic approach to ensure dynamic updates of the profile according to new circumstances or contexts of usage?

>> SHADI ABOU-ZAHRA: Whitney, would you like to type the answer in IRC, and we will read it out loud?

In IRC, Whitney writes: I think that the idea of storytelling is good for understanding variation and context. It is information to feed into technical development and dynamic profiles. It makes sure that we have good assumptions about what users will need and how they can interact with the systems well.

>> YEHYA MOHAMAD: Okay. Thank you very much, Whitney. Now to Pradipta, the last question in this section. Do you think there are possibilities to enhance the quality of automatically generated profiles like those used by your approach?

>> PRADIPTA BISWAS: Yeah, I mean, I always think that user modeling, or any automatic adaptation, should be accompanied by subjective preference as well. But I'd just like to share one of our experiences: in the early stages of the project made for elderly people, we found that elderly users very much liked a consistent interface -- I mean, where the interface is the same as they found it before -- and that's why initially we dropped the idea of automatically adapting or changing the interface. We made it only explicitly possible: if the user changes his preference explicitly, only then does it change; otherwise, it remains as they found it last time. That said, I don't think that automatic adaptation or including the storytelling persona is a bad idea, and our future research will definitely explore that. But at this moment, our user model doesn't support dynamic adaptation during a session, and we do not (Inaudible) something like storytelling or personas; rather, we trust the profiles we created during our user survey.

>> YEHYA MOHAMAD: So we are now at the end of session 1. We will have a break of a few minutes, and then we'll come back for session number 2, which will start with approaches in modeling: declarative versus semantic.

Stay online; if you want to go away for a few minutes, we will start again in a few minutes.

(Break taken)

[Session 2]

[Approaches in modeling: Declarative vs. Semantic]

>> YEHYA MOHAMAD: Okay. Christos, it's your turn.

>> CHRISTOS KOUROUPETROGLOU: Okay. Hello. We will start with the first topic of the second session, which is about approaches in user modeling: the declarative versus the semantic approach. I would like to start this topic with Silvia for the first question. As I understand from your paper, Silvia, you are following a declarative approach based on XML to store the user preferences. Why did you choose this approach over a semantic one?

>> SILVIA MIRRI: Okay. Perfect. So we chose this approach because we started studying some similar profiling systems, such as the IMS profile and the related one from ISO, and also the CC/PP profile.

So we decided to try to enhance the IMS profile, in particular the display section. And so we chose an XML version just to be compliant with this profile.

>> CHRISTOS KOUROUPETROGLOU: Okay. I would like to go to Nikolaos now. Nikolaos, as I understand from your paper, you are also using an XML-based approach to describe your models. Why did you choose that approach over a semantic one?

>> NIKOLAOS KAKLANIS: Okay. First of all, we are using both approaches, declarative and semantic. This cannot be seen in the present paper, but in the VERITAS project, we have, let's say, a categorization of user models. We start from the abstract user models describing the disability, and these abstract virtual user models are represented using (Inaudible).

In order to have specific instances of a user, like the virtual user models presented in this paper, we used the XML language, and more specifically the UsiXML language, because it has primary support for user description, and we wanted to make an extension to UsiXML in order to support the description of the users in a more detailed way.

And our choice of UsiXML was also because UsiXML has very good support for task analysis. When we are talking about disabled users that have some affected tasks, some problematic tasks due to their disabilities, this means that apart from the description of the user, we also need the definition of the specific tasks. So we used the UsiXML approach in order to have a unified approach.

>> CHRISTOS KOUROUPETROGLOU: I see. I see. Very well, I would like to follow with Pradipta.

I think that in your paper you are following a semantic approach in building your user models, and I would like to go to the other side and ask: why did you choose this kind of approach instead of the declarative one that we see in the other papers?

>> PRADIPTA BISWAS: I mean, I am not exactly sure what you mean by the semantic and declarative approach.

So what I --

>> CHRISTOS KOUROUPETROGLOU: You are using an approach based on ontologies instead of simple XML.

>> PRADIPTA BISWAS: Right, I got it. I mean, we store a static profile, but we give more emphasis to simulating psychological functions rather than to the storage format of the profile -- the simulation even of rapid eye movements, or in the case of the mobility impairment simulator. Our emphasis on the particular storage format was small, so we stored the user profile in an XML file.

But when we used this user model in the GUIDE project, we converted this user profile into RDF format; the GUIDE framework is using this, and the internal classes convert the XML user profile into RDF format pretty easily.

>> CHRISTOS KOUROUPETROGLOU: So you are using both, actually?

>> PRADIPTA BISWAS: Yeah, we kept it as simple as possible, as an XML file, so that it can be converted into other formats easily.

>> CHRISTOS KOUROUPETROGLOU: I would like to follow up with Markus now. I believe you're also following a similar approach based on the Semantic Web. What are the reasons for choosing this approach, or do you combine the two approaches, as we saw with Nikolaos and Pradipta earlier?

>> MARKUS MODZELEWSKI: Yeah, I have almost the same answer as Nikolaos already gave about VERITAS. We have used an ontology in which a classification was needed, a classification for each user model, to have some kind of level of impairment. Ontologies have good support for this kind of classification. So this is our main reason for using them.

>> CHRISTOS KOUROUPETROGLOU: I would like to continue with Matthew. In your paper, you are proposing the use of semantic relationships to increase the flexibility of user modeling. How do you deal with the interoperability of your approach in terms of the two different approaches? Can you use models described in a semantic-based way and in a declarative-based way equally?

>> MATTHEW BELL: No. The individual models on a specific device can, by all means, be declarative, and there are a number of advantages to using preferences; for example, they are specific and tell a device exactly what needs to be changed. The issue comes with moving between devices, as preferences are not always directly mappable. And where they can be mapped, a large number of mappings will be needed.

The advantage of the semantic approach, therefore, is to allow the declarative statements on each device to be inferred into another vocabulary at a higher or a lower level; the granularity can be specified.

And therefore, by mapping to a central vocabulary, you can map again to multiple different devices without having to declare the number of mappings you would need if you were mapping pairwise between all the different devices.
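A small sketch of the hub-vocabulary idea (the vocabularies here are invented examples): N devices then need N mappings to the central vocabulary instead of pairwise mappings between every pair of devices.

    # Sketch: translate a preference between devices via a central term.
    to_central = {
        "tv":    {"subtitle-size": "text-scale"},
        "phone": {"font-scale":    "text-scale"},
    }
    from_central = {
        "tv":    {"text-scale": "subtitle-size"},
        "phone": {"text-scale": "font-scale"},
    }

    def translate(pref, src, dst):
        central_term = to_central[src][pref]
        return from_central[dst][central_term]

    print(translate("subtitle-size", src="tv", dst="phone"))  # font-scale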

>> CHRISTOS KOUROUPETROGLOU: Thank you very much. I would like to continue the discussion with Philip. In your paper, you are proposing an approach based on Semantic Web technologies as well. How easy would it be for it to interoperate with the other semantic-based and declarative solutions we see in the rest of the papers?

>> PHILIP ACKERMANN: Okay. Can you repeat the question, please?

>> CHRISTOS KOUROUPETROGLOU: I see that in your paper you are proposing an approach based on Semantic Web technologies. I would like to know how easy it would be for it to interoperate with the other semantic-based solutions and the declarative-based solutions that we saw in the other papers.

>> PHILIP ACKERMANN: Okay. So one central component of our framework is the Web technology model, which semantically describes Web languages, for example HTML. This is also the reason why we chose semantic models: we already reuse this Web technology model inside the other models, for example the device model.

Using this Web technology model, you can define things like a browser -- Chrome, or whatever is able to render HTML5 and CSS3. So therefore, I think the other semantic approaches can also use, for example, this model in their ontologies.

>> CHRISTOS KOUROUPETROGLOU: And finishing this topic, I would like to go to Nikolaos for an answer to the same question. How can declarative and semantic approaches be combined? You already told us you are combining both approaches, but can you tell us a bit more about how they can be combined in order to increase the interoperability of solutions?

>> NIKOLAOS KAKLANIS: As I already said, we use both approaches, but for different levels of detail in the user modeling approach.

I mean, for the description of the disabilities, we have some abstract user models, and for the definition of virtual user models that can be simulated directly within a simulation platform, we use the declarative approach. I think our approach is generic enough, given the fact that, within the (Inaudible) effort, we made converters in order to support the exchange format -- which is an exchange format also using a declarative approach for the definition of the users -- and we have developed converters from our virtual user models to other virtual user models, and vice versa.

So I don't know if it's clear, but I think our approach is quite generic.

[Approaches in modeling: User Preferences vs. Models]

>> CHRISTOS KOUROUPETROGLOU: Okay. Thank you very much.

I would like now to continue with the next topic about approaches in modeling: storing user preferences versus user models, two different approaches in the papers of the symposium.

And I would like to start the discussion there with Silvia. Silvia, in your approach, you work with user preferences instead of characteristics. Can you tell us a few things about your choice there? Why did you choose to store preferences instead of characteristics?

>> SILVIA MIRRI: Well, we decided to choose preferences instead of characteristics because we started by thinking about some visual disabilities, such as low vision and color blindness, but also difficulties related to elderly people.

And we know there is no single set of textual characteristics that really meets the needs of all users. So we thought it would be better to try to understand the different preferences instead of providing the same adaptation to a specific kind of user, in order to better meet each single user's needs.

>> CHRISTOS KOUROUPETROGLOU: I understand. I would like to continue with Philip or Carlos; I don't know, I think Philip can answer that. In your approach, you're also proposing the modeling of user preferences instead of characteristics. What do you think are the benefits of such an approach?

>> PHILIP ACKERMANN: Okay. I think the main reason why we did this is because user profiles can get very large and can contain a large number of user characteristics which we actually don't need for the adaptations, and we wanted to concentrate on the user interface adaptations. Therefore, we came to the point that we only describe the user interface adaptations inside the user preference model. And for our purpose, it's enough, because we didn't want a large model, a very large ontology, since we need to synchronize it quickly from the client to the server. So this seemed to be the best approach for us.

>> CHRISTOS KOUROUPETROGLOU: I see.

Well, I would like to continue with Markus. In your paper, you are using user characteristics to group users into user groups, and then you infer preferences and make recommendations for designers.

Could your approach benefit from a system that focuses on user preferences, like Silvia's or Philip's, which we already discussed?

>> MARKUS MODZELEWSKI: Hello? These are different user models with different properties and preferences and different aims. So it is quite different for our approach: our approach is the only one focusing on physical products or physical user interfaces. Most of the papers refer to user models which rely on variables that are very important when you are creating a Web user interface, for example. So we have quite different preferences -- we have more (Inaudible) preferences and so on. It's a little bit similar to the GUIDE project, where they also use this, but we are concentrating more on an abstract level, just to give the designers recommendations based on the target group of people.

So it would be interesting to have a very big model from all of those approaches, but for each different aim of the systems, I think it's good to have a different solution, because in some of the papers you need different target groups, and in some of them they need to be individual. So there's a good variation of virtual user models, including different attributes, in the different papers. So I think it would be quite interesting to have a very big model, something like a white paper -- which was already mentioned by Pradipta, I think -- where we collect all the attributes that we use; but to use them, we are currently not able to do that. Probably in the future.

>> CHRISTOS KOUROUPETROGLOU: Thank you very much, Markus.

I would like to continue with Matthew now.

In your paper, you describe that the user profile needs to be updated through some kind of procedure. Do you think that gathering user characteristics could be a mechanism for updating these profiles?

>> MATTHEW BELL: In part, yes. User preferences -- preferences in the dictionary definition of what the user wants -- are a very clear way to understand what the user can do.

Unfortunately, they don't give us the whole story. With a user who is not proficient or an expert in the capabilities of their device, there will be settings and preferences that they cannot access due to inexperience. So there's a need for additional assistance in suggesting preferences that they may like to try.

>> CHRISTOS KOUROUPETROGLOU: I would like to continue with Philip. As we already discussed, you are focusing on preferences. I would like to ask, how easy would it be to infer user characteristics from the preferences that you gather?

>> PHILIP ACKERMANN: Yeah. Since we use a semantic approach, and we store all the models and the instances of the models as RDF triples, we have different ways of making queries to this database. Yeah. I think that --

>> CHRISTOS KOUROUPETROGLOU: So you think it would be easy to do it because you are using a semantic approach?

>> PHILIP ACKERMANN: Yes.
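A hedged sketch of such an inference query with rdflib (the namespace and property names are invented for illustration, not the project's actual ontology): a stored preference can hint at a characteristic.

    # Sketch: query stored preference rules to infer a likely characteristic.
    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/prefs#")
    g = Graph()
    g.add((EX.user1, EX.fontScale, Literal(1.6)))

    results = g.query("""
        PREFIX ex: <http://example.org/prefs#>
        SELECT ?user WHERE {
            ?user ex:fontScale ?s .
            FILTER (?s > 1.5)       # large text may suggest low vision
        }""")
    for row in results:
        print(row.user)  # users whose preferences suggest low vision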

>> CHRISTOS KOUROUPETROGLOU: I would like to go to Whitney now.

In your paper, you are describing how personas are used in user experience studies. Do you think that gathering user preferences could be tied to personas to enhance the information personas can give to designers?

>> WHITNEY QUESENBERY: So most of the people who have been speaking, I think, are really grappling with the hard technical questions of how to implement things, and I live in a more user-research world, where I am thinking about how to understand people. Not that these are separate -- these are things that we all do. And so for me, a persona is a way of wrapping up the data that we've learned about people, what we have learned from the research, into sort of coherent packages.

I've heard people say things which match in a more technical way what I am about to say, which is: if we start from an understanding of the kinds of things that users need -- and that would include preferences -- then we can use those as a starting point, either so that we can make additional suggestions technically, or so that people can make additional adaptations or express additional preferences.

So personas -- it's one of those words for which there are many definitions, but for me they are neither purely data driven nor purely storytelling driven. They start with data, and the stories wrap a context around that data to try to explain what the implications are in the real world for what we've heard.

So for example -- this is a story from a friend of mine -- if I am working on a device where I can authenticate and the door will open, some people might need more time to get from the authentication device through the door: because they walk slowly, or because they need time to pick things up. So there might be many reasons -- as many characteristics of the users -- that would lead to the same preferences.

So I think that we need to be very careful about not using preferences as a way of not having to grapple with the mass of characteristics of people, or using characteristics as a way of not understanding that people overlap.

Maybe -- I am not sure if that answers your question, but maybe it does.

>> CHRISTOS KOUROUPETROGLOU: Yes, it does answer my question very well.

I would now like to continue with Silvia. In your future work, you think that it would be interesting to combine your approach with user models describing user and device characteristics. How helpful would machine learning be as a technology in that direction?

>> SILVIA MIRRI: Well, we are thinking about combining our user profile, our preferences profile, together with a profile which also describes device characteristics. And we are thinking about how to carry user behavior from one device to another.

For instance, we are thinking that some adaptations the user can ask for can be taken into account in a different way when the user exploits another device.

For instance, when I use a laptop, I can ask for a font size increase, and then probably I would need the same adaptation with a small-screen device. So we are trying to understand if there is some relationship between the adaptations we provide with our prototype and the different characteristics of the devices the user can exploit.

So we thought that the machine learning algorithm can be adjusted. In particular, the algorithm we exploited is based on the idea of reward and punishment starting from user feedback, so we thought it could be useful to adjust the weights of reward and punishment of the machine learning algorithm we are using.

>> CHRISTOS KOUROUPETROGLOU: Thank you very much, Sylvia.

>> SILVIA MIRRI: You're welcome.

[Virtual User Models, Personas and similar practices]

>> CHRISTOS KOUROUPETROGLOU: That concludes the discussion about user models and user preferences, and I think it was a very interesting one. We can now continue with the next topic about virtual user models, personas, and similar practices in design.

And I would like to start the discussion there with Whitney. In your paper, you are talking about personas as an extension of user modeling. How do you see user modeling benefiting from personas and other techniques used in user experience work?

>> WHITNEY QUESENBERY: So I think one important aspect is what I refer to in the paper from Isobel Frean.

When we work on something technical, a technical standard, there's a language, and we understand what we are talking about. But when we then ask the people in that context to weigh in on the standard, often, I think, they approve or just say yes because they think it sounds good, but they don't really understand the implications.

I have friends who are working on a standard for health communications, HL7, and they brought in nurses, and the nurses said: we have no idea what you are talking about. And what they did is they wrote essentially a narrative use case, a story that said: here's the context, here's a description of the problem in narrative, real-world terms, and now let's show how that maps to the technical models we are discussing. And I think that it has a benefit for both sides. For the people who are not particularly technical, it allows them to take part in the conversation; it allows them to be part of the standards-making process or the consistency-making process. And I think for the people on the technology side, it allows them not only to hear users better, but to stay grounded in a real-world context.

Okay?

So it keeps reminding us that it has to work technically and it has to work in the real world at the same time.

I think it also addresses, if I may, one more problem, which is that people are not all just mono-dimensional. Even if we look at our practices over time, or our characteristics and preferences over time, we speak with different registers. In linguistics terms, that refers to social contexts, but we might be talking about social contexts, or we might be talking about device contexts. But we might also be talking about, for instance, the difference between how I want my screen to be when I'm reading a technical paper versus how I want my screen to be when I am SMSing with friends.

And so one of the things that I think stories can be helpful in is starting the process of articulating that the difference is not only in user characteristics and user preferences, but in user context.

>> YEHYA MOHAMAD: Whitney, there is a question from Andy for you. He is asking: do you have a feeling for how we can prevent the misuse of personas as stereotypes? For example, in his experience, he knows it's a model and not accurate to an individual, but in the commercial world, there is much pressure to do stuff at the cheapest possible cost and avoid an idealized view.

>> WHITNEY QUESENBERY: Yes, I think like almost every good idea, there's misuse of the idea by people who want to pay lip service to it but not think deeply about it. And I think that's probably true across the board.

I think that there's not a particularly good answer to that except to speak up about it. I certainly have seen personas that are little more than a cheap tagline -- you know, Josie the buyer. But I also think that personas are a really interesting way to merge different views. So for instance, we might merge a market research approach with user characteristics and begin to bring them together in a way that lets the persona be the embodiment of the conversation between the data. It doesn't create it on its own.

If you are having a lousy conversation, you are going to have a lousy persona. If you are having a deep and rich conversation, then your personas and your stories for those personas can be a way to embody that deep and rich conversation.

>> YEHYA MOHAMAD: Okay. Thanks. And this is a nice answer.

So okay. Then let Pradipta speak, Shadi.

>> PRADIPTA BISWAS: I have a question for Whitney. The question is: how do you make sure that your stated persona is adequate to cover the whole range of your intended users, or that you have the whole picture of the story that will happen when the product is launched in practice? Or in technical terms, how do you make sure the sampling is adequate?

>> WHITNEY QUESENBERY: So the question of sampling is one that applies, I think, to any research and to any modeling, not just qualitative research, but I think it gets asked a lot about qualitative research, in part because we tend to work with smaller sample sizes.

But you could have a survey that collects data from 10,000 people and still not have covered the diversity of experience, and I think you could have research with 10 to 12 people that comes much closer. So this is really a question about data quality and research quality, and it is not particularly unique to personas.

I think one thing to say about personas is that I have done projects where we started from a small set and then gradually expanded them. I've done projects where we started from a very big set, and over time we began to learn how we could combine them. For instance, in one project for a university, we saw that one student persona, although different in some ways, had a lot of overlap and similarity with others, and we began to be able to combine our models.

The challenge, I think, we have here in the accessibility community is that people with disabilities are often the people that are left out at the beginning and then never get included again because the needs are more extreme and, therefore, it becomes problematic.

So for me, the political shift is to say we must get a good range of abilities into the base model, not leave them out at the beginning.

Does that address that question?

>> YEHYA MOHAMAD: Yes. Thank you, Whitney.

So we move on now to Nikolaos.

In your approach, you are using virtual user models, VUMs. What would you say about using personas in combination with the VUM?

>> NIKOLAOS KAKLANIS: First of all, personas are a good approach to help developers and designers understand users' needs and to create more accessible products and services. But the drawback of personas is that they cannot be directly simulated in a simulation framework. They do not have a formal definition, for example an XML description that can be processed using an XML parser.

So in our approach, we use personas as a starting point to see which are our target groups concerning disabilities, but then we pass to the virtual user models that we propose, which enable the direct mapping between disabilities and the WCAG 2 guidelines, and then the automatic accessibility assessment of Web applications using virtual user models.

So we needed a more formal definition of the user. That's why personas by themselves do not fit our needs.
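
To illustrate the kind of direct mapping described here, a minimal sketch in Java; the disability categories and the selection of WCAG 2 success criteria are invented for illustration and are not the project's actual model:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: mapping disability categories from a virtual user
// model to WCAG 2 success criteria that an automatic assessment would check.
public class VumToWcag {
    static final Map<String, List<String>> MAPPING = Map.of(
        "lowVision", List.of("1.4.3 Contrast (Minimum)", "1.4.4 Resize Text"),
        "blindness", List.of("1.1.1 Non-text Content", "2.4.1 Bypass Blocks"),
        "motorImpairment", List.of("2.1.1 Keyboard"));

    public static void main(String[] args) {
        // For a virtual user model instance tagged "lowVision", report the
        // success criteria the automatic assessment should evaluate.
        MAPPING.getOrDefault("lowVision", List.of())
               .forEach(sc -> System.out.println("Check: " + sc));
    }
}
```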

>> YEHYA MOHAMAD: Okay. Thanks, Nikolaos.

>> CHRISTOS KOUROUPETROGLOU: Okay. Whitney, I see that personas are a technique that has been used in user experience for many years. Could you see it becoming a vehicle for taking user modeling into the core of the design process for products and services?

>> WHITNEY QUESENBERY: Yes, absolutely I could see that. I think that we really need a way to bridge the qualitative work and the modeling work. And whether or not personas are the perfect solution, they are one that has worked for me, and there's enough knowledge of how to use them that we could make that happen.

>> CHRISTOS KOUROUPETROGLOU: All right. I would like now to continue with Markus. You have developed a system aimed at designers using virtual user models. Could the representation of virtual user models through personas help in the uptake of these virtual user models in the design process?

>> MARKUS MODZELEWSKI: In what we are doing, we are concentrating only on using abstract models to give recommendations to the designers, so we don't really have the need to use personas or to have as much user detail as possible, except in the first phases.

In the later phases, we are creating a simulation framework, so in that phase it would be interesting to extend it with user personas to have more detail in it.

But currently, I don't see the benefit of using personas instead of virtual user models, because with virtual user models we can use more abstract values and relationships between different objects in the models. So there are different issues that are handled in a more suitable way for us if we are using virtual user models instead of personas.

>> CHRISTOS KOUROUPETROGLOU: Okay. Thank you very much. And final question for that topic would be for Whitney.

Do you think that in the future -- I think you already implicitly answered this -- we could have personas being backed up by user models, so that they can be used as a more friendly interface for designers and users to communicate with each other?

>> WHITNEY QUESENBERY: Yes, I think so. And I'll add that we talked about the front end, which is going from user research to technical models. I saw a presentation from the University of Trento. They were working with a local government on a very complicated question of whether you needed a work permit, and they took their personas and put them online as a starting point. They said pick the person closest to you, and then allowed the users to adjust the story to match their own personal situation.

Behind the scenes, all they were doing was filling in a very complicated search form. But the users experienced the search form as adjustments to a story, which felt sort of like having done an interview with a clerk. That let them say: here are the circumstances; what forms do you need to fill out? And so on. And I thought it was really nice the way they started with a technical model, developed user stories and personas, used those to refine the technical models, used those to present the starting-point story to the users, and then let the users use that as a vehicle to express technical needs.
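
A minimal sketch of this pattern, with a persona pre-filling a form and the user adjusting the story; all names and field values here are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a persona acts as a friendly front end that pre-fills
// a complicated search form, which the user then adjusts as a "story".
public class PersonaAsFrontEnd {
    record Persona(String story, Map<String, String> formDefaults) {}

    public static void main(String[] args) {
        Persona anna = new Persona("Anna, a seasonal worker from abroad",
            Map.of("residency", "non-resident", "contractType", "seasonal"));

        // The user picks the closest persona, then adjusts one detail.
        Map<String, String> form = new HashMap<>(anna.formDefaults());
        form.put("contractType", "permanent"); // "Actually, my job is permanent."

        System.out.println("Submitting search: " + form);
    }
}
```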

>> CHRISTOS KOUROUPETROGLOU: Thank you very much, Whitney.

I would like to go on with the next topic about --

>> CHRISTOS KOUROUPETROGLOU: Oh, okay. Andy, do you want to add something to the conversation?

>> ANDY: Okay. I put myself on the queue because I was going to answer that point, which Whitney in fact answered extremely well. There are lots of examples out there of systems involving preferences where one starts with something very similar to a persona, that is, a set of preferences that match some particular group of people, and then adapts them to one's own personal needs.

An example is the accessibility solution on the BBC site in England: essentially, you pick a persona and then you adapt it. It's a common way to do it, and a good bootstrapping process.

[Ontologies alignment and standardization issues]

>> CHRISTOS KOUROUPETROGLOU: Thank you very much for the input and the example there.

Okay. Is there anyone else on the queue? I don't see anything else. Anyone else?

Okay. So I would like to go on with the next topic in this symposium about the ontologies alignment and standardization issues.

I would like to start the discussion there with Matthew. Matthew, in your paper, you describe an approach whereby semantic relationships can allow capabilities defined in various standards to be used together. Could you elaborate more on that? In particular, have you tested the approach? Which standards have you used, and how successful was it?

Matthew?

>> MATTHEW BELL: Sorry -- we are still testing at the moment, and we are using a variety of standards that are all based around human capabilities.

>> CHRISTOS KOUROUPETROGLOU: Okay.

>> MATTHEW BELL: We are very interested in both of the major approaches at the moment, ISO 24751 and ISO 24756, which take different directions: one takes a more human approach, the other a very preference-based approach. Mapping between them would allow information in one format to be transferred into the other.

The question is the granularity at which you want your data. If we break any task down into its constituent parts, we will find that many tasks are related. And therefore, the mapping between ontologies is reliant on understanding what the similarities between the ontologies are.

I have seen an approach -- I will have to provide the paper later -- which maps via one of two methods. The first is to map directly between ontologies, and the second is to use a go-between, and I think I've already described our preference for a go-between.
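
To make the go-between idea concrete, a minimal sketch in which two hypothetical vocabularies are each mapped to a shared intermediate concept rather than directly to each other; every term name here is invented for illustration:

```java
import java.util.Map;

// Hypothetical sketch of mapping via a go-between: terms from two standards'
// vocabularies are each mapped to a shared intermediate concept, so only
// 2*N mappings are needed instead of N*N direct pairwise mappings.
public class GoBetweenMapping {
    // Standard A (capability-style term) -> intermediate concept.
    static final Map<String, String> A_TO_COMMON =
        Map.of("visualAcuityReduced", "needs-enlarged-text");
    // Intermediate concept -> Standard B (preference-style term).
    static final Map<String, String> COMMON_TO_B =
        Map.of("needs-enlarged-text", "fontSize=large");

    static String translate(String termInA) {
        String common = A_TO_COMMON.get(termInA);
        return common == null ? null : COMMON_TO_B.get(common);
    }

    public static void main(String[] args) {
        System.out.println(translate("visualAcuityReduced")); // fontSize=large
    }
}
```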

>> CHRISTOS KOUROUPETROGLOU: I would like to continue with Pradipta. In your approach, you are using a set of Web services to allow communication with user models. Could such a Web service be the basis for communication between user profiles from different information sources? What problems do you think such an approach could have?

>> PRADIPTA BISWAS: Good. In our system, the user profile can be stored online, and when it's stored online, it's (Inaudible). For example, when the user gives his preferred font size on a laptop, we calculate the minimum visual angle, which can then be converted when he is using a television or maybe a smartphone. And once this profile is created from a single device, the same profile is seamlessly used for any device the user uses; presently, we already use this model on smartphones, digital TV, and normal laptop computers. So does that answer your question?
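
A minimal sketch of this device-independent conversion; the pixel pitches and viewing distances below are invented example values, not the project's actual parameters:

```java
// Hypothetical sketch: store a preferred font size as a visual angle, then
// reconstruct an equivalent font size on a device with different geometry.
public class VisualAngleProfile {
    /** Visual angle (degrees) subtended by a character of the given physical
     *  height viewed at the given distance (both in millimetres). */
    static double visualAngleDeg(double charHeightMm, double viewingDistMm) {
        return Math.toDegrees(2 * Math.atan((charHeightMm / 2) / viewingDistMm));
    }

    /** Font size in pixels on a target device that reproduces the stored
     *  visual angle, given its viewing distance and pixel pitch (mm/pixel). */
    static double fontSizePx(double angleDeg, double viewingDistMm, double pitchMm) {
        double heightMm = 2 * viewingDistMm * Math.tan(Math.toRadians(angleDeg) / 2);
        return heightMm / pitchMm;
    }

    public static void main(String[] args) {
        // Laptop: 18 px preferred size, ~0.2 mm pixel pitch, viewed at ~500 mm.
        double angle = visualAngleDeg(18 * 0.2, 500);
        // Television: ~0.5 mm pixel pitch, viewed at ~3000 mm.
        System.out.printf("Equivalent TV font size: %.0f px%n",
                          fontSizePx(angle, 3000, 0.5));
    }
}
```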

>> CHRISTOS KOUROUPETROGLOU: Partly. Can you tell me if your system could use information from other sources?

>> PRADIPTA BISWAS: Yeah, definitely. So we are trying to standardize a format through the EU VUMS cluster, and presently I am taking it through ITU, the ITU focus group on smart TV. Now, if any source can store the information in this specific standardized format, then we can read it using the Web service, and we can populate our user model and predict interface parameters or user preferences based on that particular profile.

>> CHRISTOS KOUROUPETROGLOU: Okay. Thank you very much.

I would like to continue with Philip.

In your approach, you are using different models, communicating with each other. Do you provide a specific API for communicating and updating these profiles? And what would the benefit be from such an approach?

>> PHILIP ACKERMANN: We have an API that is provided -- if that is what you mean by API, we have a set of services where you can make queries to the models, create instances, and such things.

So yeah, we have an API.

>> CHRISTOS KOUROUPETROGLOU: Okay. Is this based on Web services, or is it an API for developers?

>> PHILIP ACKERMANN: No, it's based on Web services. What do you mean by an API for developers?

>> CHRISTOS KOUROUPETROGLOU: No, I mean an API in a library for Java or something like this, to develop similar applications.

>> PHILIP ACKERMANN: Okay. Well, we have developed a Java library for creating instances of those models because we also use those models in one of our Java tools. So yeah, we have an API like this as well.
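
For readers unfamiliar with this style of API, here is a minimal sketch of creating and querying a semantic user-model instance. It uses the Apache Jena library and an invented namespace purely as an illustration; it is not the authors' actual tooling:

```java
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.*;

// Hypothetical sketch: create a user-model instance and query a preference.
public class UserModelQuery {
    public static void main(String[] args) {
        String ns = "http://example.org/usermodel#"; // invented namespace
        Model model = ModelFactory.createDefaultModel();

        // Create an instance of a user profile with one preference.
        model.createResource(ns + "user42")
             .addProperty(model.createProperty(ns, "preferredFontSize"),
                          model.createTypedLiteral(18));

        // Query the model for the stored preference.
        String q = "PREFIX um: <" + ns + ">\n" +
                   "SELECT ?size WHERE { ?u um:preferredFontSize ?size }";
        try (QueryExecution exec = QueryExecutionFactory.create(q, model)) {
            ResultSet results = exec.execSelect();
            while (results.hasNext()) {
                System.out.println("Preferred font size: " + results.next().get("size"));
            }
        }
    }
}
```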

>> CHRISTOS KOUROUPETROGLOU: Okay. That covers it.

I would like to continue with Whitney. Would you see as a better solution a standard that could be used to describe users and user characteristics, or something that would allow communication between different sources of information, such as an API, so that we can combine profile information from various sources? Which kind of standard do you think would be more effective?

Whitney?

>> WHITNEY QUESENBERY: Okay. I don't know the answer to that. I think there's always been a wish in any field to say can't we just define the users once and then we are done? And it never seems to work.

And then the alternative seems to be well, we'll just let the users pick their own preferences. That doesn't work either.

So I think my fear about standardization is the long time it takes the standards process to work.

On the other hand, there's some advantage to that in providing some time for people to learn to work with that standard. So I don't really have a good answer. I am sorry.

>> CHRISTOS KOUROUPETROGLOU: Okay. No problem with that.

I would like to continue with Nickolaos, with a similar question.

First of all, how easy is it for your approach to use information from other sources? And would you see a standard for describing users, through an ontology maybe, or a standard for communicating between information sources, as the more effective way of going forward?

>> NIKOLAOS KAKLANIS: Yes. In order to standardize our virtual user models, our proposal, first of all, we are (Inaudible) our work within the EU (Inaudible) project, and we have made converters that support the transformation of a virtual user model in the VUMS format into our format and vice versa.

As a next step, we are also trying to promote our work within the W3C MBUI Working Group, and as I told you, our virtual user models are based on usiXML, which is also trying to become a W3C standard. It is quite possible that during the next months our extension regarding virtual user models will be integrated into the next version of usiXML.

So with these three approaches, we are trying to promote our work and standardize it, but currently the conversion between the VUMS format and our user models is the first attempt that has some implementation behind it.

I mean, we have made some converters for it.

>> CHRISTOS KOUROUPETROGLOU: I would like to continue with Markus. I know we are slightly over time for this session, but there are a couple more questions and then we are finishing.

Markus, how easy is it for your approach to use information from other sources, and would you prefer a standard for describing users, as an ontology maybe, or a standard for communication between information sources, such as the APIs we discussed earlier?

>> MARKUS MODZELEWSKI: Working with ontologies and all this data, including context data, there is also the question of comprehensible models, the sustainability of these models, and how to use them afterwards. The idea behind the VUMS cluster is exactly this: we are using ontology models from different areas in different projects, with different attributes and so on, and bringing them together in one big structure, such as a table with different variables and different attributes, that can be used afterwards in different projects. So this is a first step toward creating something that can be used for different purposes.

As for having an API, a connection to these ontologies: there are many standards and many different APIs for using ontologies already available. So I don't see the need to define yet another API just to access these ontologies. We can just focus, at a very low level, on having this information available.

>> CHRISTOS KOUROUPETROGLOU: Okay. Thank you very much. I would like to close this topic and session of the symposium with Sylvia.

The question is more or less the same. How easy is it for your approach to use information from other sources, from other systems? And would you say that a solution for the future could be an ontology for describing users, or something that could allow the communication between different information sources?

>> SILVIA MIRRI: Well, I think maybe the best solution would be a system that allows the communication between different information sources, because we can collect information from devices and from different users. So my opinion is that the best way would be to try to collect, store, and operate with information from different sources: descriptions of devices, and other information and profiles from other kinds of standards and sources. This is my thought.

[Session 3]

Open discussion

>> CHRISTOS KOUROUPETROGLOU: Okay. Thank you very much. This concludes the second session of the symposium. I don't know if Yehya has any questions, or if, with the help of Shadi, we can see whether there is anyone in the queue who wants to ask the authors something more.

>> YEHYA MOHAMAD: I saw Andy was in the queue. I think we can open the discussion now and exchange opinions. Anybody who wants to add anything or pose questions to the others, please put yourself on the queue, and then we can have a loose discussion.

Andy, you wanted to express your opinions?

>> SHADI ABOU-ZAHRA: Maybe just remind the people how to get on queue. 41 and the pound sign is to get on the queue. And we will look at the queue and call you.

So I see Pradipta already on queue, and then Andy.

>> PRADIPTA BISWAS: Oh, sorry. I have a question for Yehya and Christos. What do you see as the future of this symposium? Will it be continued next year?

>> CHRISTOS KOUROUPETROGLOU: I can tell you that this symposium is actually the start of the writing of a research note that will be circulated and published within the group and then to the audience, to the public. This procedure will lead to the publication of the research note with guidelines, an agenda for future research, and the outcomes of the symposium in general.

I think that Shadi can explain in more detail, if it is needed, how the procedure is going to be, but the actual outcome will be a research note based on the symposium and the comments that we will receive.

>> SHADI ABOU-ZAHRA: Yes, that's exactly correct. This is Shadi. It will be published as a working draft, and all authors and participants of the symposium will be contacted, so you have an opportunity to review the draft and provide input and further comments on it. And then we hope to close -- publish the final research notes by the end of this year. So hopefully early autumn you will get the first draft and the opportunity to review -- autumn, maybe I should qualify that because we have people in different parts of the world. So Yehya and Christos, I think you were planning maybe September timeline to have the first draft ready?

>> CHRISTOS KOUROUPETROGLOU: Yes.

>> Yes, possibly.

>> SHADI ABOU-ZAHRA: Right. And so that's the timeline you are looking at.

>> CHRISTOS KOUROUPETROGLOU: That's correct. Thank you.

>> Who else is in queue?

>> SHADI ABOU-ZAHRA: Next is Andy.

>> Andy.

>> ANDY: I wanted to respond after Matthew's points, really, because I think he touched on some very pertinent issues, and I just want to bring out what I see as a number of quite difficult issues but where we probably need to be working.

And the first problem is all these kind of disparate things that kind of don't fit together. Right? And I think you can see this in some of the questions that the organizers asked. And from my perspective, the problem is how to get a cohesive approach across standards, across projects, across vendors into some kind of critical mass thing. I mean, all of these approaches are good, but they all need some kind of broad-scale framework to work in. If you are going to interoperate with different kinds of information, how are you actually going to do that? And I think that is a very difficult problem.

And part of that is how you go from needs to solutions, and how needs and solutions fit together. And it's not completely clear, you know? I'd like to suggest a little area that I think really needs tackling and would be quite easy to tackle, and also a significant issue where I wonder whether people have any insight.

And the area that I think could be dead easy to tackle is this: we've all got different ideas of what a preference is, what a requirement is. For example, suppose you need a high-contrast or otherwise enhanced display. Is that a preference? Is that a requirement? Is that a need? Do you take that need and generate solutions that deal with it, which might be increased contrast or mucking about with fonts or whatever, depending on the technology? There's no general agreement about what these terms mean. They are defined in different ways in different standards. It's not that easy to define them, but I think you could. It's not easy because one man's need is another man's requirement, and it changes over time: what's a solution one day becomes built into the infrastructure the next day.

So that's one little thing that I think might actually be worth doing some work on.

Another thing that I wanted to mention as a general problem that I don't know the solution to, but which I think came out in what Matthew was saying about these two particular standards, one of which I'm an editor of, is that there seem to be a lot of people in the world -- including all the health services and all the people who design buildings and things like that -- who have what I would think of as a medical model approach to accessibility. It kind of classifies a disease. And it's very much a model, and the fact that it's a model touches on my earlier question to Whitney as well.

But the other approach is based on preferences, on how you actually get those preferences and how you actually use them, and it seems to me that these two approaches don't yet work together terribly well at the real technological detail level. Yet half the world uses one approach and the other half uses the other.

I would love it if we could get entirely away from a medical model approach, but we can't, because in some areas it's the best model of what actually goes on that we have and that we can have, and a lot of areas around cognitive disability would fit into that. The bit about needs, preferences, and requirements fits into this somehow, and I'm not completely sure how. But if we could figure out the relationship between these two things, we could go a long way. And that was just something that I wanted to say.

Thank you.

>> CHRISTOS KOUROUPETROGLOU: Thank you very much, Andy, for your position there. I think it touched on a lot of the issues that are under discussion in this symposium, and we are seeking answers in that direction.

I don't know if any of the authors has anything to add or anything to propose.

>> SHADI ABOU-ZAHRA: Matthew is on the queue.

>> CHRISTOS KOUROUPETROGLOU: Okay. Matthew.

>> MATTHEW BELL: Hi. Yeah. I liked a lot of what Andy said there. Those standards, as already implemented, are usable, and they talk about the same things. Just to reiterate what Andy said, there are different languages that different people use, and they are stacked hierarchically upon each other: where one person might say they need the text size increased, another person might say that they have a visual acuity of X. And the medical model is generally seen as a very bad thing because it victimizes the user. However, it is a very specific model in that it says: I have a problem doing this. And computers work in binary, yes or no. That's the answer we want.

We want to know if a specific person, given the context, can use a certain device. So starting at whatever level you are working on, you then need to be able to talk to others around you by saying: I have this information. I am not making a judgment; I am just telling you what I know, that the person is able to do this or not, or at a certain level. And you provide that information so it can be used either as it is, or abstracted up or down, to then be used across a different device.

Thank you.

>> CHRISTOS KOUROUPETROGLOU: Thank you, Matthew. I would also like to add to what you mentioned about the context of using a device or an interface.

We see the way things are going today across all devices: you can see it in mobile phones and tablets, and tomorrow you will see it in devices everywhere in your home. So I think this puts another parameter into the adaptation and modeling discussion, something Whitney mentioned earlier about the social context of use.

I mean, we might also need to start a discussion on this for the future: how, apart from the context of users, we can also model the social context of using a device. For example, am I using the TV sitting alone in the living room, or do I have company, am I using it with my family? This might change a lot of the preferences someone might want when using the TV or whatever other device it might be. It might be too early to discuss things like this, but I think we need to keep an eye open for that kind of modeling too.

Is there anybody else on the queue?

>> YEHYA MOHAMAD: Yeah, Pradipta is on the queue.

>> CHRISTOS KOUROUPETROGLOU: Okay.

>> PRADIPTA BISWAS: Hello. Hi. I actually agree with Christos, and I'd just like to add another point regarding Andy's points about the medical model and the preference model. I have a feeling that both are necessary in some contexts. For example, a user may know that he is finding it difficult to read the screen, so that if you increase the font size, that will be good for him, or maybe he has a particular preference about screen color. That the user knows, and we can use his preference. But say, for example, an elderly user has no idea what Sticky Keys means, or how to change the spacing in a Windows interface, and he also has no idea that it would actually make the interaction much better, because if you increase the spacing between buttons, then for a person having spasms or a tremor in a finger, it will reduce the chances of wrong selections.

So these are things that users do not know, and in these cases, if we have a kind of -- I don't want to term it a medical model, but a way by which we can map users' physiological or psychological characteristics to the interface, then it becomes more usable. We attempted to do that in the GUIDE project. On the GUIDE project website, there is an application which is freely downloadable, and this application tries to combine both the medical and the preference model together to develop adaptive interfaces. So thank you.
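
A minimal sketch of this kind of mapping rule; the tremor measure, threshold, and scale factor below are invented for illustration and are not taken from the GUIDE project:

```java
// Hypothetical sketch: deriving an interface parameter from a measured user
// characteristic, combining a medical-style measure with a stated preference.
public class AdaptiveSpacing {
    /** Button spacing in pixels, given a measured tremor amplitude
     *  (e.g. pointer jitter in pixels) and the user's stated preference. */
    static int buttonSpacingPx(double tremorAmplitudePx, int preferredPx) {
        int spacing = preferredPx;              // start from the preference...
        if (tremorAmplitudePx > 4.0) {          // ...widen when tremor is high
            spacing = Math.max(spacing, (int) Math.ceil(3 * tremorAmplitudePx));
        }
        return spacing;
    }

    public static void main(String[] args) {
        System.out.println(buttonSpacingPx(6.5, 8)); // 20: widened for tremor
        System.out.println(buttonSpacingPx(1.0, 8)); // 8: preference respected
    }
}
```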

>> CHRISTOS KOUROUPETROGLOU: Yes. Thank you very much, Pradipta.

I see discussion on IRC. I don't know if Whitney or Markus, who are participating there, can also tell us a few things.

>> SHADI ABOU-ZAHRA: If you want to speak, you need to get on the queue, and we have Matthew on the queue so far.

>> CHRISTOS KOUROUPETROGLOU: Okay. Matthew.

>> MATTHEW BELL: Yes. I would like to pick up on the point around context. As Whitney has described so fantastically, context is everything. And at the moment, without context in a model, the model may be applicable, but you don't know that it's applicable, and you don't know how widely it is applicable and in what situations it can be used.

The problem with context is that it is so wide and varied. Context for one person is simply the state of the person they are communicating with, along with the environment and many levels of social and various other contexts as well.

I think that as a field, user modeling is fairly immature, and it is gradually gaining emphasis and looking at things like, I think, the emotion that was talked about. That sort of thing is a level of context that shows through lower-level abilities.

So if my voice were to get louder, it might be because I was angry; it also might be because I couldn't hear what was going on. So that kind of context is gradually starting to be used. I think the important thing is to have the capacity to put it into the model, but also for it to be ignorable, in order to be compatible with lower-level, less sophisticated, or more specific models, for example interface models and the lower-level accessibility stuff.

>> YEHYA MOHAMAD: Okay. Thanks, Matthew. I think Andy is on the queue.

>> ANDY: Yeah. Well, there's a lot of misunderstanding here. I'm basically with Matthew on context, but a model is a model. The medical model is a model, but the preferences model is also a model. It just happens to be, I hope, a little bit closer to real user requirements than a medical model.

In fact, I think saying I am going to apply WCAG 2 to my content and therefore it will be exactly what you want is also a model. It's just further away from the user altogether.

There seems to be some issue of control that is different in the preferences kind of model than in the other models, and we haven't really drawn that out. I think it might be worth having a discussion another time that does draw that out.

And the other thing -- and this was my point to Whitney earlier on, really -- is that we know that these are models because we're intelligent. Okay? I go to my doctor's surgery, and he says: I think you've got this disease, but our knowledge of it isn't very good, and it's only as good as the model we've got, so it might not fit you precisely. And we know that. Okay? But out there, Joe Public -- a website designer, or a company that pays you to tell them how to make their stuff accessible -- they don't know this, and they are liable to misuse things very, very often.

>> YEHYA MOHAMAD: Okay. Thanks, Andy.

I think it is now 6:30 Central European Time. Shadi, do we have a hard stop, or can we extend a little bit?

>> SHADI ABOU-ZAHRA: We don't have any more captioning after this.

>> YEHYA MOHAMAD: Okay. Then we would like to thank everybody for joining us and having an interesting discussion about User Modeling for Accessibility.

Christos, do you want to add something?

>> CHRISTOS KOUROUPETROGLOU: Yes. I would also like to thank everybody, and also Shadi for the background work that he does on the meeting and with the people in the meeting, and for the organization. And I would also like to encourage authors and participants to follow up the discussion and provide further input on the email list.

The symposium webpage explains how you can participate through email, and this is something that you can do after the symposium ends. So feel free to provide further input after the symposium ends, and we will take it into account for our research note.

Thank you once again from me.

>> SHADI ABOU-ZAHRA: Just one final note from me. The text transcript of the captioning will be made available online. You will have this as well, hopefully very soon, in a week or two or something. And that will be additional input.

As Christos was saying, do provide your additional background or thoughts as we go along. Thank you very much for all your contributions, in particular the authors who made this happen.

Thanks, everyone. Good-bye.

>> YEHYA MOHAMAD: Okay. Bye-bye.

>> CHRISTOS KOUROUPETROGLOU: Bye-bye, everyone.

(Call ended at 11:23 a.m. CT.)

********

This text is being provided in a rough draft format. Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.

********