
Transcript for Mobile Accessibility
Online Symposium 25 June 2012

>> SHADI ABOU-ZAHRA: Let’s get started. This is the start of the mobile accessibility symposium. Thank you, everybody, for joining. My name is Shadi Abou-Zahra. I’m the W3C staff contact for the Research and Development Working Group that is organizing this online symposium on mobile accessibility. We have a great panel discussion today. Just before we get started, a couple of logistics. Your phone line was automatically muted when you joined. As explained in the email, in order to get into the speaker queue, if you have a question or want to bring in a comment during the discussion, please type 41 and then the pound or hash key on your dial pad. This way, you will get in the queue. When the symposium chairs then call on you, your phone line will be unmuted. Otherwise, only the speakers and symposium chairs will be unmuted for the respective session, so that we can keep the noise level on the call as low as possible. Most of you have identified your phone lines by typing in your personal pass code. Please do so if you haven’t, in order to identify your line so that we can mute and unmute you. Your pass code starts with the number 43, followed by a five-digit number. You can see whether you are on the phone if you join the live chat; we have an interface there that shows who is registered on the phone. That is a pretty helpful thing as well. So without further ado, I’m going to call on the symposium chairs, led by Yeliz Yesilada with Simon Harper and Peter Thiessen, who will get you started on the day. Go ahead.

>> YELIZ YESILADA: Hi, good afternoon. I’m Yeliz Yesilada, one of the chairs of the symposium, and my co-chairs are Simon Harper and Peter Thiessen. Welcome to the second Research and Development Working Group symposium. The focus of the symposium is mobile accessibility. Mobile devices provide many opportunities for people with disabilities to access the Web; however, as we know, there are also many accessibility barriers to mobile access. In this symposium, we aim to bring researchers and practitioners together to discuss these challenges and possible solutions, and hopefully to develop a roadmap for future research in the area. After this symposium, RDWG will develop a research report using its results, so along with the discussion, the papers, and the findings, we will actually create a research note. Today we have five papers, and hopefully we will have a great discussion. The way we will run this symposium: each paper will have 15 minutes. Each presenter will first answer three questions, and then we will have a discussion where all the participants will be able to ask their questions. If you would like to ask a question, please type Q+ in the chat room, or on your phone pad type 41 and the hash or pound key, and you will join the queue. Once all the presenters finish answering their questions, at the end we will also have a general discussion and close the symposium. Let’s get started. Our first paper is Accessible Security for Mobile, by Elizabeth Woodward and Brian Cragun. First, I would like to ask the presenters our generic question: How does your work relate to the symposium objective and research questions?

(Pause.)

Are you there?

>> ELIZABETH WOODWARD: Yes, Brian is going to be taking this one.

>> BRIAN CRAGUN: Yes, I’m unmuted now. Thank you for inviting us.

As the symposium attendees well know, mobile devices are increasingly important and are already the primary means of getting to the Web in many parts of the world. People with disabilities, unfortunately, have many difficulties when they try to access the Web with mobile devices. We have been working within our enterprise, IBM, to plan and pilot solutions for ourselves, because we want to be able to bring our own devices. We have a BYOD policy, and mobile security is an important part of that.

We adopted this about two years ago, and bring-your-own-device means that employees who want to work outside of the office, or who have their own devices, can bring their own smart phones into the corporation and use them.

We still provide BlackBerrys, which are the secure devices configured specifically for corporate use, to about 40,000 employees right now. But there are another 80,000 who use smart phones and tablets, including ones they purchase for themselves.

IBM ranks in the top ten of corporations for accessibility and for employing individuals with disabilities, and it’s really important to us that the productivity gains of mobile devices be available to all employees. I think many people here are aware that security and accessibility are often at cross-purposes, and this is especially true on mobile devices, because the environment, the small screens, and the mobile nature of the devices require specific security practices.

That is why we think it’s actually very applicable here. The topic we have been addressing is how to enable secure accessibility in general for mobile device users, particularly in a bring-your-own-device model.

We have already seen many practical issues in the implementation of mobile security and accessibility. One of the reasons we wanted to participate is to support and encourage investigation of the technical challenges in terms of APIs, platforms, browsers, applications, accessible security tests, authentication, and Web content accessibility as they relate to enterprise mobile security, and then, beyond that, how we can leverage security technology to actually enhance the accessibility of mobile devices for enterprise use.

>> YELIZ YESILADA: Thank you. We want to move on to the second question, from your abstract. What particular security concerns are raised by the combination of the mobile Web and users with disabilities, for example social engineering vulnerabilities? To be more specific, what is the connection between accessibility on mobile devices and the security dangers and problems specifically faced by people with disabilities?

>> ELIZABETH WOODWARD: Yes. One of the big problems that we have encountered is that devices are made secure, but secure at the expense of someone who is disabled or situationally disabled being able to actually use the device, which is a big problem.

As an example, we recently evaluated a containerization solution, which is a solution that partitions a smart phone so that you have both a work partition and a personal partition, letting users within the enterprise separate their work applications and data from their personal applications and data.

The very first challenge that we saw was that the container solution itself couldn’t be installed by someone who was blind. Right off the bat, we had a bit of a problem.

Then, if the solution had been deployed at this early stage of the review, a user would not have been able to use the Eyes-Free Shell that they were used to in order to access the content in their work partition.

Basically, they were given a choice between access to the Eyes-Free Shell and the work partition. Another challenge was that the navigation of the container system and the applications provided in the work partition were not accessible either.

Another thing that we saw was that when the assistive technology was running, the authentication to the container crashed. That meant you could no longer authenticate to get into the container.

All of these things indicate the serious challenges that arise if we haven’t considered accessibility when we develop and create our security solutions.

Beyond that example, the assistive technologies that we saw in that containerization solution had to be recompiled and installed under a separate namespace in each partition, so there was some effort surrounding that.

The likely outcome is that the device can be made secure, but it ends up being unusable by someone who has a disability. We can look at biometrics as well. If I can’t provide fingerprints and the authentication solution for my mobile device is fingerprint authentication, the mobile solution ends up being completely impractical for me and renders the device useless, unless there is an alternative.

From another perspective, we have seen assistive technologies that have themselves created some security issues. Consider Siri, which is currently blocked within IBM due to security concerns. First, IBM has issues with content that might be private and confidential being sent to the Apple Cloud. But also, by default, Siri could be used even when the smart phone was locked with a pass code. I’m not sure that is still true, but it’s an example of the impact that assistive technologies can have on security.

These are significant challenges. We need to consider security and accessibility in combination to make sure that we end up with a solution that works for those who have disabilities, and for others.

>> YELIZ YESILADA: Thank you, Elizabeth. Before we move on to the last question: if anybody has any questions, please put them in the chat room, and after we finish the third question we will put your questions to the presenters.

So the third question. In your abstract, you list key areas that need to be addressed in order to create accessible mobile security. Are these concerns specific to disabled users?

>> BRIAN CRAGUN: So, more and more, we are seeing with mobile that there is a blurring of the lines between those who have disabilities and those who don’t, because with mobile, almost everyone is experiencing at one point or another what I would term a situational disability. They get into a circumstance where their mobile device is not working very well for them. It may be that they have a little screen and small fonts, so people are now looking for the setting that helps them enlarge the fonts. They will get into situations such as a noisy airport, and suddenly they can’t hear their phone very well, and the assistive technologies that let them see the text become very important.

That’s a temporary hearing impairment, actually. If the lighting isn’t right, maybe you go outside in the bright sun, you have a challenge seeing what is on the screen. If you are traveling in the car, you are not supposed to use your hands or take your eyes off the road, so you need a hands-free, eyes-free solution. Those kinds of solutions are important both to those who have disabilities and to those who have these temporary limitations.

The techniques and concerns we focus on are for persons with disabilities, and for the impact on all people using mobile devices.

We have seen a case where a password restriction on mobile was too difficult for persons who were blind, so people stopped using the application, which was in a limited pilot.

I’d say the concerns we have identified apply to people in all of these circumstances.

>> YELIZ YESILADA: Okay. Thank you, Brian. So now I’m opening the floor to participants, if they would like to ask questions. And now I see Shadi in the queue. Shadi?

>> SHADI ABOU-ZAHRA: Sorry, I was actually just checking if anybody is on the queue. I was not getting into the queue. I see Christos typed a question in IRC.

>> YELIZ YESILADA: Yes, I will take that question. Sorry, I thought you would like to ask a question. I think it’s a good time to remind people: if you haven’t entered your personal code, please enter it.

Now I guess, Christos, would you like to ask a question?

>> BRIAN CRAGUN: While that question is coming up, I see a question in the queue that someone has asked: Is this not good news, that we need to mainstream accessibility in AT, something we can capitalize on?

(Echo.)

That is a very definite yes.

We have seen –

>> YELIZ YESILADA: Brian, that’s the next question. Christos is now unmuted, I think. Let’s get Christos’s question, and then we will move on to that one.

>> BRIAN CRAGUN: Okay.

>> CHRISTOS KOUROUPETROGLOU: So… given the security issues that you describe, do you think it’s best for the accessibility guidelines to take security into account? Or do you think the other way around is best, to have the security solutions address accessibility in the first place? Which is the best way to achieve accessible security?

>> BRIAN CRAGUN: At this time we are not really calling for additional security guidelines to be added to accessibility or vice versa.

As far as we can tell (echo), the guidelines for each of the various areas are clearly able to (inaudible) themselves. We think the problem is that people don’t bother to test for both security and accessibility in their applications.

(Echo.)

>> ELIZABETH WOODWARD: I’d like to add to that too, Brian. What we have been trying to do is get this message about the impact of accessibility on security out to the individuals who typically work on security, who may not have even thought about what the impact on accessibility is going to be. And in this case, because it does reach so many people, it’s been particularly valuable to them to really understand what accessibility means and what the impact of their actions might be, and to make sure that we bring persons with disabilities into the initial use cases and user stories, so that they can develop solutions that work for everyone.

We have taken it from the security perspective. We have also had similar conversations around Cloud computing, where we have gone out to people who are implementing Cloud computing solutions and talked about the impact in different areas of Cloud computing, whether creating the infrastructure, developing applications, or providing the management infrastructure.

So I think taking the message to the mainstream has a really positive impact too.

>> YELIZ YESILADA: Thank you, Brian and Elizabeth. I think we can move on; we have another question. I see the name in the chat room as E. Who is E? The question is: is this not good news, that we need to mainstream accessibility and AT, something we can capitalize on? Elizabeth, Brian, would you like to answer this question? I cannot see who E is. But…

>> BRIAN CRAGUN: Yes, definitely. What we have seen is that sympathy for accessibility in general varies; it depends on who in your enterprise, or who you are working with, whether they are very sympathetic to it or not.

But we have found that now that individuals have mobile devices, they are able to realize that in certain circumstances they have a hard time using their mobile phone, or they are experiencing aging eyes, right, and it’s harder to see things.

They are becoming aware that accessibility is really something that applies to everyone at one time or another. And so we have often used the word usability, or simply the word access; another word that has been very helpful is consumability, the ability for everyone who needs a solution to actually take it and use it. That is resonating with our management team.

It has allowed us to get much farther, to make much deeper inroads in terms of implementation.

>> YELIZ YESILADA: Thank you, Brian. Elizabeth, would you like to add to that?

>> ELIZABETH WOODWARD: No, I think he’s exactly right. The one thing that I would add is that people have a vested interest when they see themselves in the problem, and I think that now, with the pervasiveness of situational disabilities related to mobile computing, just about everybody can see themselves in the problem.

I think that is really helping. It’s also helping us to boost the business case in terms of dollars and potential risks.

It’s a magnifier, I think, that has been really helpful.

>> YELIZ YESILADA: Thank you. I think we can take one last question. Is there anybody who would like to ask a question of our presenters? If not, I have plenty of questions.

I see Andrew in the queue, I think. Andrew, would you like to ask a question? Or do you just want to say, I agree? You just wanted to say you agree with Elizabeth and Brian. Just agreeing, okay, thank you.

I think I can ask one last question, again related to your paper: what is (distorted audio) use of disabled users? This is a follow-up to a question which was asked previously.

>> ELIZABETH WOODWARD: We tell everyone the biggest step is to engage disabled users, or to engage people with the expertise to speak on behalf of persons with disabilities, and include them in the security solution.

We came into one pilot that had performed a user needs study. The first thing we saw was that they hadn’t included any people with disabilities. This is a problem for persons with disabilities. It’s also a problem for people who are going to experience the situational disabilities that Brian described, and it’s a problem for the IT staff, the chief security officer, and so on, who won’t get the input they need to make informed decisions about the security deployment.

The second big thing is building accessibility into the enterprise mobile security solution. We often see that accessibility is treated as something that gets done to the project after it’s complete: I’m now going to test, and then I’m going to remediate. But we know that making changes earlier is going to be less costly than trying to fix those systems after they have already been delivered.

We have several other recommended steps mentioned in the paper, which we can go over in a lot more detail, but in the interest of time, I’ll summarize just a few here.

First is identifying the appropriate standards, both security and accessibility, with which the mobile applications and solutions should comply; updating local instructions, guidance, and education related to accessible and secure mobile computing; and determining which combinations of operating system, device, carrier, assistive technologies, and work applications are going to be supported.

That is important because we see significant fragmentation in the Android market, and it can be very costly to try to support everything.

Also, making decisions about how you are going to support accessibility with assistive technologies, some of which may be third-party assistive technologies that need evaluating; building accessibility into any new security technologies, biometrics, containers, hypervisor solutions, and what have you; and determining how compliance monitoring, enablement, and enforcement will be done.

Brian had some interesting research thoughts around the compliance monitoring and enablement. Brian, would you like to share those?

>> BRIAN CRAGUN: Sure, if we have time here.

>> YELIZ YESILADA: Brian, we are actually running a bit over time.

>> BRIAN CRAGUN: We can come back later.

>> YELIZ YESILADA: If you can wrap it up, I think that would be good.

>> ELIZABETH WOODWARD: That is it. If anyone is interested in more discussion on any of the steps in the paper, we would be happy to go through those in greater detail.

>> YELIZ YESILADA: That’s great. Thanks a lot. Brian, would you like to say a few words?

>> BRIAN CRAGUN: All I would say is that we applaud all of this effort to research these things. We are finding that the limitations in testing applications and content, and in security tests that are accessible, are all impacting the ability to move into the mobile space and make sure it works for everyone.

So we really applaud this effort and we look forward to what the other speakers have to say. Thank you very much for inviting us.

>> ELIZABETH WOODWARD: Yes, thank you very much.

>> YELIZ YESILADA: Okay. Thanks, Elizabeth, Brian. A few reminders before we move to the next paper. You can follow the live captioning; I put the URL in the chat room. And if you can identify yourself in the chat room, asking questions will be much easier.

Also, if you would like to ask a question, remember to type Q+ into the chat room, or 41 and the hash or pound key on your dial pad. We will now move on to the next paper, Enabling Flexible User Interactions With Mobile Media (audio breaking up), by Mike Wald, E.A. Draffan, and Yunjia Li.

Again, I will start with our general question: How does your work relate to the symposium objective and research questions? I think Yunjia is going to answer our question.

>> MIKE WALD: I’m sitting here with Yunjia. Yunjia is our technical developer; he will come in on some of the technical details. E.A. Draffan is a long way away, in Scotland, on a mobile phone. Probably the easiest thing is that she will come in and add some comments as well.

That said, the best thing to do is explain why we are doing what we are doing, and what we are doing. One of the problems with using video for learning is that as soon as people see a video, they tend to go into what I call movie mode.

They tend to lean back and say, someone entertain me. We have been using annotation of video on desktop and laptop machines for some years: both captions that interact with transcripts, and teacher- and student-generated annotations and notes.

I’ll go on to the next question to explain the benefits for disability. But one of the problems we had with trying to do this on a mobile is that mobiles haven’t been designed to allow interactivity with video. You have already heard from the previous speaker about some of the issues of mobile Web accessibility, and some of the questions here are about the paradigms and interfaces and gestures. I’ll let E.A. talk about that when she comes in a bit later.

In our research we are more interested in the interaction with the video using the mobile: how you can cope with the fact that if you are annotating a video, you have got a lot of information. And we are looking at responsive design, how you can cope with the different mobile devices and the different ways you can interact with them.

And clearly, the size and the resolution of the device is one of the issues that can help in that situation. We are working under the W3C guidelines, and one of the things we are looking at is also issues to do with media fragments, how you actually handle fragments of media. I’ll have Yunjia quickly talk about some of the issues of handling media fragments.

>> YUNJIA LI: Hello, I’m going to talk about handling media fragments. For the mobile Web, mobile devices are limited in bandwidth, so it is not (inaudible) the whole video from the very start. Media fragments are a very good idea for mobile devices: you can save bandwidth (inaudible) and be more precise, pointing to the point you are really looking for. The problem with media fragments: there is a specification, and the Media Fragments Working Group has already produced it. But sometimes there are limitations, and they are format dependent, with different formats, so it’s quite difficult to deliver media fragments across different components of the video.

As a result, mobile support is quite limited and differs across the different varieties of mobile device, so they are subject to compatibility and accessibility problems for video.
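
For readers who want to see the mechanism Yunjia describes concretely, here is a minimal sketch of a temporal fragment request using the Media Fragments URI syntax; the video URL is a hypothetical placeholder.

```typescript
// Minimal sketch: requesting a temporal media fragment (Media Fragments URI)
// rather than loading a whole video. The URL is a hypothetical placeholder.
const video = document.createElement("video");
video.controls = true;

// "#t=30,60" asks the user agent to play only seconds 30 to 60 of the media.
// Supporting browsers seek to 30s and stop at 60s; combined with HTTP range
// requests, this can avoid fetching the full file over a mobile connection.
video.src = "https://example.org/lecture.mp4#t=30,60";

document.body.appendChild(video);
```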

>> MIKE WALD: Okay.

>> YELIZ YESILADA: Thank you. Now I’d like to ask you some specific questions about your abstract. First question: Do you have any sense of the scale of benefits we may expect if annotation of media is easy and able to appear in parallel with its rendering, and for which user groups, especially people with disabilities?

>> MIKE WALD: It’s interesting, because when we brought in our video annotation system about four years ago, it’s called Synote, the original idea was that it was designed for people with disabilities. One of the problems was that in order to get any funding for this, as soon as you put anything forward with a particular target group, such as people with hearing impairments for the captioning, the funding reviewers immediately think this is a minority.

I actually changed the way I approached this. I said annotation of video is an important issue for any user: the ability to search and find video. In the end, the best analogy I found was the textbook. I said, if you had a textbook and it had no page numbers and no section headings, it would be a storybook, and you couldn’t interact with it and use it for learning.

One of the things we stress all the time is that we are not addressing this issue just for people with disabilities, because otherwise we found it difficult to get universities or funding bodies to see this as a mainstream thing.

But the advantages for people with disabilities are immense. Clearly, captioning is one of the important issues for people with hearing impairments, and being able to have captions and synchronized transcripts appearing at the same time as the video is a big issue.

Being able to use captions as annotations, rather than having them embedded in the video, is quite important, because we are using speech recognition captioning, and that means editing; hence the value of the ability to bring these things together, rather than having them just embedded in the media stream.

The other issue is the ability to take notes, the ability to watch something and think about how it impacts your learning. And this applies across disabilities, particularly for people with memory issues, cognitive impairments, or learning disabilities.

But again, it’s something for all users: the ability to actually take notes, jump back, and search and find the parts or fragments of the media that you want. We also found that having transcripts, and being able to translate them, was useful for non-native speakers who find it difficult to understand the accents or the audio quality. If you have a transcript, you can understand it, and you can also use translations of the text.

The answer really is that we found value for all users, and for a wide range of people with different disabilities. Again, if you are a blind user, there is the ability to add a text description of some of the images or some of the video that can be read out by a screen reader.

The important thing is that media, whether text, video, audio, or images, has different properties, and it’s the ability to put them all together, and for the user to use whichever modality they find the most beneficial.

>> YELIZ YESILADA: Okay, thank you. Our last question: what are the accessibility benefits in HTML5 that led you to select it as a solution?

>> MIKE WALD: Just talking generally, and then I’ll hand over to Yunjia so he can talk in more detail. One of the issues was that we wanted people to be able to bring their own device, rather than building it for an iPhone or for a particular flavor of Android device or particular device; going for HTML5 means we are not having to repeat a build for every device.

Clearly, trying to go with a standard means that in theory it will be able to continue to be used and be extensible. But of course we have run into some difficulties: going for HTML5, where the specification is still fluid and the video formats devices can play vary, brings up certain issues.

Yunjia, I’ll hand over to you, you can talk in more detail.

>> YUNJIA LI: One issue with mobile is that it is cross-platform. Even though the standard is still a work in progress, I think (inaudible) HTML5 will be settled and then everyone is going to follow it. But on the other hand, there are quite a lot of mobile devices on the market now; we have (inaudible) iPhone, different platforms and standards.

For development reasons it is quite difficult to cope with every kind of platform.

But one unique feature to all the mobile devices is that they all (inaudible) without a platform (inaudible) differ across the different devices (background noise) might provide (inaudible) (beeps).

There are standards that could be very useful. For example, HTML5 has (inaudible) media fragments and other standards. These standards provide (inaudible) accessibility functions for applications. Going along with these, for annotation of video and audio under the HTML5 standard, we don’t have to develop anything more; and besides, HTML5 also defines accessibility functions.

I think HTML5 is a good solution in this context.
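
One concrete illustration of the HTML5 accessibility functions mentioned here: captions can be delivered as a separate, editable WebVTT text track rather than embedded in the media stream, which fits Mike’s earlier point about captions as annotations. A minimal sketch; the file names are hypothetical.

```typescript
// Minimal sketch: HTML5 video with captions as a separate text track,
// editable independently of the media stream. File names are hypothetical.
const video = document.createElement("video");
video.controls = true;
video.src = "lecture.mp4";

const captions = document.createElement("track");
captions.kind = "captions";
captions.src = "lecture.en.vtt"; // WebVTT caption file
captions.srclang = "en";
captions.label = "English captions";
captions.default = true;

video.appendChild(captions);
document.body.appendChild(video);
```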

>> YELIZ YESILADA: Great.

>> MIKE WALD: Could you bring in our third colleague, E.A. Draffan? Can you switch?

>> Hello.

>> E.A. is unmuted.

>> E.A. DRAFFAN: Hello, can you hear me?

>> Yes, go ahead.

>> E.A. DRAFFAN: I think what really enamored us of HTML5, even though you don’t have all the standards, as Yunjia was saying, is that one of the glories of it is that there are a lot of people working on the accessibility options and on strategies for making it more accessible, whereas the apps that we see are very often as closed as closed can be. And there is very little you can do other than depend on flaky screen reading on some mobile phones and extremely good screen reading on others.

And when we tested the iPad and the Samsung Galaxy and several other tablets against these options too, we found that by going for HTML5 we had a much more responsive, as Yunjia would say, type of development, in that very often, if you are using an app-type development, you don’t get the screen access that you can get with HTML5 and a Web application.

That isn’t to say that the Web application isn’t a little flaky so far. But our hope is that this is something that will change in the very near future.

We feel it’s a much more open application. We are very anti the idea that what one develops should be totally closed. We would like to see a lot of sharing going on while we are developing these sorts of things, so that we can improve them, hopefully more speedily than some of the apps. If we are to believe what is going to happen with, for instance, the iPhone 6 or iPad 5 or whatever, you are going to find accessories change and software changes at the whim of certain companies, and we really don’t want that to happen.

>> YELIZ YESILADA: Great. Thank you. I see we are actually almost over time, but I would like to hand over to Elizabeth to ask her question. Elizabeth?

>> ELIZABETH WOODWARD: Yes, it certainly does appear to be a very useful application for students and even more generally, and I really appreciate the sharing.

One clarification, though. You mentioned personal data stores. Is this only personal markup and access? Or are you also working on a collaborative model where multiple people transcribe and annotate? I wasn’t clear on that. I think that would be helpful.

>> MIKE WALD: Yes, the model the system uses is collaborative. When we set the original project up, it was a collaborative system. The annotation is collaborative. There is a way you can make your own private area, but the original idea was collaborative annotation.

>> YELIZ YESILADA: Okay, great. Are there any other questions anybody else would like to ask?

Okay. I see nobody on the queue, and we have actually used our 15 minutes. So I think we can move on to the next paper, Accessibility in Multi-Device Web Applications, by Cristina Gonzalez, Javier Rodriguez, and Caroline Jay. Again, I would like to start with our general question: How does your work relate to the symposium objective and research questions? I think Cristina will answer.

>> CRISTINA GONZALEZ: Yes, I will answer the question. But if any of my partners would like to add any further information, feel free.

Our work is focused on a specific technical challenge related to mobile accessibility. It tries to analyze techniques which help to solve or mitigate existing technical problems for mobile Web users. We show that the problems affect a wide variety of users, but of course people with motor impairments will be particularly affected.

The investigation will involve users who have trouble with (inaudible) or a mobile phone. We assume that by helping this group we will also be helping the research. One of the problems is text input interaction: new mobile interfaces have appeared, which has led to the development of a great number of different keyboards, each one presenting its own problems.

Going forward, our work will contribute to knowing the state of mobile accessibility and of flexible mechanisms, for instance for common problems in different families of devices, and to suggesting multiple techniques to solve those problems.

With regard to the relation to the Web accessibility baseline, we must say that our work keeps within the guidelines of the accessibility (inaudible).

>> YELIZ YESILADA: Thank you, Cristina. I would like to move on to our next question, specific to the abstract. In your abstract, you make the assertion that user needs on the desktop can be directly transported to the mobile context. Can you please elaborate on this? Isn’t this a strong assertion, and how do you know it is correct?

>> CRISTINA GONZALEZ: Yes, perhaps it is too strong. We should rewrite the sentence to say that some of the problems appear in either context; we state that similar problems apply to different user groups in different contexts.

More specifically, mobile end users are in some manner situationally impaired, by the keys or the environment in which the device is used, and experience problems like motor-impaired desktop users. This can be explored following the methodology: we can investigate pointing or typing errors for motor-impaired users. Experiments have been made for a set of devices, and now we are analyzing whether the same assertion applies to a broader range.

>> YELIZ YESILADA: Thank you, Cristina. I will move on to the next question. How close are we to seeing results, and do you have any preliminary results or examples available?

>> CRISTINA GONZALEZ: Right now, we don’t have any results, as we are still analyzing the data. Our next step will be to create a complete table that relates three issues: the mobile keyboards, the common errors that can occur when using these keyboards, and the possible techniques required to correct them.

The table will be elaborated from data extracted after analyzing a lot of different mobile users interacting with a basic prototype that we have developed using (inaudible). When this table is complete, we will establish priorities so as to implement the most useful correction techniques in the different families of devices.
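
A rough sketch of the kind of table Cristina describes, expressed as a data structure. All field names and entries are illustrative assumptions, not the authors’ actual data.

```typescript
// Hypothetical sketch of a table relating mobile keyboards, common input
// errors, and candidate correction techniques. All entries are illustrative.
interface KeyboardErrorEntry {
  keyboard: "numeric-keypad" | "hardware-qwerty" | "soft-qwerty" | "swipe";
  commonErrors: string[];
  correctionTechniques: string[];
}

const errorTable: KeyboardErrorEntry[] = [
  {
    keyboard: "soft-qwerty",
    commonErrors: ["adjacent-key press", "unintended double tap"],
    correctionTechniques: ["key-distance weighting", "dictionary lookup"],
  },
];
```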

>> YELIZ YESILADA: Thank you, Cristina. Any questions? I see nobody on the queue, but is there any question from our participants? Would you like to ask Cristina any question? I actually have plenty of questions.

But I just wanted to ask our participants first. Okay, anybody? Then I will ask my own question. In the meantime, if anybody has any questions, please remember to raise your hand by typing Q+ into the chat room or 41 and the hash key on your dial pad.

So, Cristina, how does your work in MyMobileWeb relate to people with disabilities and accessibility in general? I think it would be nice if you could answer this question.

>> CRISTINA GONZALEZ: Okay. MyMobileWeb (inaudible) relates to multi-device Web applications; so far it hasn’t paid special attention to accessibility apart from techniques such as, for instance, text alternatives for images. This work aims at a platform which facilitates the creation of accessible mobile Web applications by implementing the text correction techniques we have explained.

>> YELIZ YESILADA: Thank you, Cristina. Any questions from our participants? I will ask another question then. Can you please explain in more detail how you are intending to address the keyboard description problem? You mention this in your abstract and your work. Can you please give us more information about that?

>> CRISTINA GONZALEZ: Okay. So we need a way to describe the different keyboards to be analyzed. The first approach defines the mapping between each physical key and (inaudible) on a specific device model. In order to support many devices, we need to reduce the amount of information in these mappings and also (inaudible) a model. For instance, we may describe a general key mapping for a specific family, for example the S40 family, and then describe the different model variations, language variations, and so on.

The idea is to have a keyboard mapping model in our description.
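
A minimal sketch of what such a hierarchical keyboard description might look like: a general mapping for a device family, refined by per-model or per-language overrides. All names here are illustrative assumptions.

```typescript
// Hypothetical hierarchical keyboard description: a base mapping for a
// device family, with per-model or per-locale overrides. Names illustrative.
interface KeyMapping {
  [physicalKey: string]: string[]; // characters reachable from that key
}

interface KeyboardDescription {
  family: string; // e.g. "S40"
  baseMapping: KeyMapping;
  overrides?: { [modelOrLocale: string]: KeyMapping };
}

const s40: KeyboardDescription = {
  family: "S40",
  baseMapping: { "2": ["a", "b", "c"], "3": ["d", "e", "f"] },
  overrides: { "es-ES": { "2": ["a", "b", "c", "á"] } },
};
```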

>> YELIZ YESILADA: Thank you, Cristina. Would anybody like to ask any questions? Okay, I think I see Brian. Brian, would you like to ask your question?

>> BRIAN CRAGUN: Our experience is that even when you have a keyboard associated with a mobile device, sometimes the user interfaces are no longer set up to interact with a keyboard; you have to use touch in order to indicate which field should have focus.

Do you have any recommendations as to how to create an environment where user interfaces will respond to keyboard events, or at least be compatible with both keyboard events and touch events?

>> JAVIER RODRIGUEZ: This is Javier Rodriguez. We need to define mappings for both hardware keyboards and software keyboards. For instance, we will need to analyze different software keyboards, such as the swipe keyboard that is very popular on mobile devices, and we need to establish these mappings not only for hardware keys and hardware keyboards, but also for software keyboards.

The thing is that we need to establish both kinds of mappings: the ones corresponding to hardware elements, and the ones corresponding to the software implementations of the different attached keyboards.

>> BRIAN CRAGUN: Thank you.

>> YELIZ YESILADA: Brian, does that answer your question, or do you have any follow-up question?

>> BRIAN CRAGUN: Yes, thank you. And is your team aware of the independent user interface events standards group that is starting at W3C? That might be interesting to you.

>> YELIZ YESILADA: Javier, Cristina, would you like to respond to that?

>> JAVIER RODRIGUEZ: We are aware of that, and of course we will follow it in future in order to stay aligned with those solutions.

>> BRIAN CRAGUN: Very good. Thank you for your answers.

>> YELIZ YESILADA: Any other questions? We still have time. I would like to ask another question, related to Brian’s comment actually. We see that there is new smart phone technology coming up. Do you think smart phones already address some of the issues that you describe in your abstract and in your work? And what about the technologies that are coming up now?

>> CRISTINA GONZALEZ: Yes, some do, such as prediction when introducing text, or swipe input. But the main problem with these mechanisms is fragmentation: it is difficult for developers to know which features are supported by the keyboard of a given device.

>> YELIZ YESILADA: Thank you, Cristina. Javier, would you like to add to that?

>> JAVIER RODRIGUEZ: No, nothing to add, thank you.

>> YELIZ YESILADA: Thank you. So one last question. You are planning to use the MyMobileWeb platform. How many users will be involved, and what is the scope of the data that you will be collecting?

>> CRISTINA GONZALEZ: First of all, we need to run the test in an environment close to our users, so we prefer to make a small survey with eight people and base the test on that. After all, we are analyzing a lot of data, and we are still gathering data from other common users as well, not only those users.

>> YELIZ YESILADA: Okay. Thank you, Cristina. We look forward to seeing your results and the data collected. I would like to thank the speakers, Cristina and Javier, for being with us. Thank you.

I see in the chat room that Brian would like to ask a question. We still have a few minutes. Brian, would you like to ask your question?

>> BRIAN CRAGUN: Yes. One of the things that we have also seen with keyboards, which perplexes us, is that some people interpret the soft keyboard as being sufficient to meet standards.

I wondered if the authors have any opinions about whether standards should perhaps be changed, reinforced, or reworded to deal with soft keyboards, or is it their feeling that physical keyboards are the only way to solve these problems?

>> JAVIER RODRIGUEZ: My feeling is that it is not only a matter of producing new standards, but also of software manufacturers being compliant with them. Of course, the standards are very helpful in order to guarantee cross-platform compatibility. But the thing is that everyone should implement the standards in the same manner. This is also an issue.

>> BRIAN CRAGUN: So I understand from your answer that your feeling is that the standards are sufficient, and that mobile devices without physical keyboards, or unable to work with physical keyboards, would not actually be meeting the standards. Would that be your opinion?

>> JAVIER RODRIGUEZ: Well, let’s say that my opinion is that we need to work on standards, but the priority is not yet writing new standards; it is guaranteeing that existing standards get implemented. If we have one, two, or three manufacturers implementing these new standards, but a lot of them are not implementing them, we are in trouble. So of course we will need to work on the definition of guidelines and new standards for browsers, but it is more important first to guarantee that new browsers implement them.

>> BRIAN CRAGUN: Thank you very much.

>> YELIZ YESILADA: Thank you, thanks, Cristina and Javier, for the answers. I would also like to thank the second paper’s presenters; I’m sorry I did not explicitly thank them at the end.

Now we can move on to paper number 4, Assessment of Keyboard Interface Accessibility in Mobile Browsers, by Jan Richards, Alison Benjamin, Jorge Silva, and Greg Gay. Again, I would like to start with the general question to the presenters: How does your work relate to the symposium objective and research questions?

>> JAN RICHARDS: Yes. First of all, can you hear me?

>> YELIZ YESILADA: Yes, we can.

>> JAN RICHARDS: Great. Our paper focuses a little on the topic we were just discussing: something that seems simple and perhaps subtle, but is actually really dynamic in the mobile space, which is keyboard accessibility.

We all know that the evolution of smart phones is generally away from onboard physical keyboards, towards touch-screen-driven devices. Apple’s iPhone obviously started that; they don’t have any phones with keyboards. Android of course is going that way, Windows Phone too, and even BlackBerry, which is known for its physical keyboards, has announced that its first DX phone is not going to have a keyboard.

So that’s fine. But as this process has been going on, we have started to hear developers saying that keyboard accessibility shouldn’t be a requirement anymore because mobile platforms don’t support it.

And in fact, they do; it is fairly well supported. BlackBerrys have their physical keyboards, and you can attach a Bluetooth external keyboard. Android has excellent keyboard support, or used to; we will get to that in a second. And Apple has external keyboard support.

So from an accessibility perspective, to give my answer to Brian’s last question, I don’t think we need to require onboard physical keyboards, because of course they are very small anyway since the devices are small. But we need robust on-screen keyboards, and we have to have the ability to get external input into the devices, whether from external keyboards or scanning devices.

But the work of this paper grew out of the realization that while these keyboard capabilities existed on various platforms, that wasn’t widely appreciated, even by the developers of those systems. You can talk to people who are very closely associated with Android, and they might not know that Android has this.

Certainly not when you get to the third-party developers, right? If they get a test machine that doesn’t have an onboard physical keyboard, they likely won’t know that there is any support for keyboard navigation.

We really got deeply into these issues as we were developing our app called Tecla Access, which you can find on the Android market for free.

It is a scanning keyboard for Android, which allows people using even single switches to control the Android system.

Looping back to your question, I think almost all of the objectives of this meeting and of the research questions are touched on by this work. Right? We speak to the keyboard accessibility of platforms and browsers, we test some applications, and then let me talk about the W3C guidelines.

We draw on UAAG especially, because UAAG focuses on browsers; browsers, I guess, are split from OSs, but in some cases they can be platforms themselves. Some of the requirements that are in UAAG are very helpful in making sure that a mobile platform would be accessible to keyboards.

Just one more thought. There is a kind of purity, almost, to keyboard access that helps one think about the complexities of mobile device interaction. You can think of all these sophisticated input methods you might have, all these gestures, speech, you can imagine eye gaze, even direct brain interfaces, right? But there are always going to be users who won’t fit into those cool new input methods.

For those, you are going to have to have an alternative. And in the end, the most basic alternative is going to be the single switch. That is the purest: one bit of information. Can you control the system with one bit of information?

And that gets back to keyboard accessibility. If you have a solid control story around one-bit control, then you can build all sorts of interesting things on top of that.
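
To make the one-bit control idea concrete, here is a much-simplified sketch of single-switch scanning. Real scanners such as Tecla do row-column scanning and take the switch input over Bluetooth; neither is modeled here, and any keydown stands in for the switch.

```typescript
// Simplified sketch of single-switch scanning: items are highlighted in
// turn, and the one-bit switch selects whichever is currently highlighted.
function startScanning(
  items: string[],
  onSelect: (item: string) => void,
  intervalMs = 1000,
): () => void {
  let index = 0;
  const timer = setInterval(() => {
    index = (index + 1) % items.length;
    console.log(`highlighted: ${items[index]}`); // stand-in for a visual highlight
  }, intervalMs);

  // Any keydown stands in for the single switch (e.g. a puff switch).
  const onSwitch = () => onSelect(items[index]);
  document.addEventListener("keydown", onSwitch);

  return () => {
    // Call the returned function to stop scanning.
    clearInterval(timer);
    document.removeEventListener("keydown", onSwitch);
  };
}
```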

>> YELIZ YESILADA: Great, thank you. I think I was too quick; I really would like to ask all the questions. I’d like to ask you one specific question about your abstract. In your abstract, you say that mobile software developers will need to be aware of the important accessibility role played by keyboard interfaces, and will need to test that functionality as a routine part of Q/A.

In order to achieve this, what do you think can be done, and what needs to be done, Jan?

>> JAN RICHARDS: The best way for this to happen is for the platform developers, Android, Apple, to build these testing tools into their quality assurance suites, the developer tools that they provide to developers: basically to have some kind of review mode where stepping sequentially through your user interface and making sure everything works in that mode is a step that must be passed in order to move on.

I think it is in their best interests to add this, to maintain interoperability on the platform. When we point out the inconsistencies, the Apple team with VoiceOver and the Android team do address these things, and if it’s being broken by all the third-party app developers, that makes for an inconsistent user experience, which is not going to reflect well on their platform.

It is really in their interest to make this work.

>> YELIZ YESILADA: Okay. Related to this, my last question is actually a follow-up. What can be done to increase mobile developers’ accessibility awareness, then? What do you think would be the best approach to actually increase their awareness?

>> JAN RICHARDS: The best is to get to the developers. These developers don’t know it exists. For some of them, their first phone might even have been an iPhone; they may not really think that the iPhone supports keyboards. You need to get to the developers in the tools that they are using, and the only tools we know that they all use are the development environments, right?

But on top of that, you can do other things. You can make explicit statements around these features, rather than leaving them as hidden things that exist almost accidentally or as remnants. In the case of Android, keyboard support is almost a remnant of the fact that Android used to support a lot of physical keyboards, right? It’s still there because they haven’t broken it yet.

Another thing, from our community’s point of view: we can create a series of quick videos showing developers how the iPhone is used with a single switch. Can it be? Just to show that yes, it doesn’t take specialized stuff; it is almost all already there.

>> YELIZ YESILADA: You are suggesting that some education would be useful.

>> JAN RICHARDS: I am suggesting that, though I know these developers are not going to go to YouTube and search for single-switch access to iOS. That is why I’m saying, number one, it has to be in the developer tools as a Q/A tool, not as some separate special accessibility thing. It has got to be built into the mainstream Q/A process.

>> YELIZ YESILADA: So without them noticing that they are doing accessibility, you think we can encourage them to actually achieve accessibility.

>> JAN RICHARDS: Right. It will be sold as something like, you know, interoperability of the focus mechanism: step number 5 in the Q/A process, you press this button and it scans through. This is easier on Apple, where VoiceOver provides linear scanning: you can basically press down, down, down, and go through the entire sequence of controls.

And you just say to the developer: track the focus, make sure it shows up. If it’s not showing up, then you have broken something. It is a bit harder on Android, where there is a nonlinear navigation structure, but they should move to a sequential one anyway. That is something which is the platform’s responsibility.
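
A rough sketch of the kind of automated check Jan proposes: walk the page’s focus order and flag controls that cannot take focus or show no focus indicator. The indicator heuristic is a crude illustration, not a real Q/A tool.

```typescript
// Hypothetical Q/A check: step through the focus order and warn about
// controls that cannot take focus or render no outline while focused.
function checkFocusOrder(): void {
  const focusable = Array.from(
    document.querySelectorAll<HTMLElement>(
      'a[href], button, input, select, textarea, [tabindex]:not([tabindex="-1"])',
    ),
  );

  for (const el of focusable) {
    el.focus();
    if (document.activeElement !== el) {
      console.warn("cannot receive focus:", el);
      continue;
    }
    // Crude heuristic: does the focused element render any outline at all?
    const style = getComputedStyle(el);
    if (style.outlineStyle === "none" || style.outlineWidth === "0px") {
      console.warn("no visible focus indicator:", el);
    }
  }
}
```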

>> YELIZ YESILADA: Yeah, thank you. I see a question in the chat room now from Brian. Brian?

>> BRIAN CRAGUN: So, in your experience, do all of the platforms retain the concepts that currently exist on the desktop, such as focus order, or being able to navigate between different frames or major groups of content or fields?

>> JAN RICHARDS: No, not fully. But the iPhone of course has quite powerful navigation tools in VoiceOver, so it can do all that stuff; it is a matter then of how you control VoiceOver. iOS is a much more closed software environment, so for iOS we have a Tecla version which is a piece of hardware that sends, almost like arrow buttons, input to VoiceOver to control it. So we do have scanning on iOS. On Android, because it’s not a closed software system, we can actually replace the input method: we can do dynamic scanning with our own keyboard as our own input method.

But Android doesn’t have, as you probably know, as well-developed an accessibility API for the things you are describing, groupings and that kind of thing. So that is a downside there.

BlackBerry is closer to Android in terms of its accessibility API, but unfortunately it’s a closed system, so we can’t get our tool in there.

Windows we haven’t done as much testing with. From what I know so far, and maybe someone can correct me, I don’t know that it can actually take an external keyboard at this point.

>> BRIAN CRAGUN: Do we need to call for additional APIs or consistent APIs across platforms to be able to handle these basic inputs?

>> JAN RICHARDS: People are working on it; ATIA has a group that is trying to call for this. It would probably be something around Bluetooth support for HID or something like that.

The fact is, there is existing keyboard support in the major platforms, and I would be surprised if Windows didn’t catch up.

Somewhere, the expectation that these behaviors will be there should definitely exist, I suppose.

IndieUI is going to be interesting too, because it’s going to put another layer on top of this. Right now our control is all around left, right, up, down, back, enter. IndieUI hopefully means we will be able to send higher-order control signals: next page, search, that kind of thing.

Maybe that is a place for it. I’m not sure exactly which W3C guideline would deal with OS requirements. The functional requirements to support linear navigation, focus, and focus indication are all in the User Agent Accessibility Guidelines, which is what I draw on in the paper.

>> BRIAN CRAGUN: Thank you.

>> YELIZ YESILADA: Thank you. I also see Marina in the queue; she has a question as well. Marina? Are you on the queue, or was that a mistake?

>> Okay. Can you –

>> YELIZ YESILADA: Yes, we can hear you now.

(Echo)

>> Okay. So in your technology, to make keyboard, platforms a dynamic impact, I would like to know how did your technology, can you shape accessibility perspective. Thank you.

>> JAN RICHARDS: Sorry, I wasn’t able to catch that question very clearly.

(Echo.)

Something about shaping dynamic keyboard?

>> Use the tech, is there a technology that makes the screen having the keyboard, soft keyboard, the up, the down, the keys and so you can add perception or the numbers (inaudible) technology of USA announcing for the next year to be the first platform with this technology. I don’t know if you know this.

>> JAN RICHARDS: I think I heard a question about the tactile technology that some people are working on. I’ve heard of this for several years: people are working with things where they want to be able to touch the screen and feel actual keys. Yeah, it sounds great. Of course, it’s still for a certain user group.

Some of the users of our Tecla Access keyboard can’t touch anything. Even if the mobile phone were the size of a laptop, they still couldn’t touch it, because they can’t move their hands. They are using, let’s say, a single puff switch.

It still comes back to the fact that even if you have these great, powerful gesture systems and that kind of thing, you have to come back to the one-bit user, using an ability switch like a puff switch or a head switch, who needs to get that information into the system, via Bluetooth or whatever; Bluetooth is definitely the standard today. And they need to be as productive as possible.

That means scanning: in our case, a scanning keyboard on the platform so they can do row-column scanning and all that kind of thing. The fact that we can do that on Android shows it can be done. People who say this isn’t a need, or that the platforms are not strong enough or don’t have sufficient keyboard support: forget it, they do. It is a matter of doing it and not breaking things.

Most of our paper actually is about the fact that browsers are produced by third-party developers, and the browser manufacturers in some cases probably don’t know that the keyboard control is in there.

So they are leaving out the keyboard control and the focus indicators, that kind of thing. Then even if a mobile Website does the right thing and follows all of WCAG and ends up displayed by this browser, the keyboard-only user can’t use the site, because the focus indicators aren’t there. So everybody in that stack has to follow their guidelines.

And I don’t think it takes a lot of new guidelines. It just means the Web content people following WCAG, the mobile browsers following UAAG, and the platform guys taking a good long look at UAAG, but also acknowledging and taking strong steps, like I said, to defend keyboard accessibility on their platforms. It is already there.

Then we will be okay.

>> YELIZ YESILADA: Thank you. Is there any other question for our presenters? I see nobody on the queue, and we are actually on time.

So thank you very much. I would like to thank our speaker, Jan, again, and also the coauthors. Thank you.

So we can now move on to the next paper, which is our last paper, Inclusive Mobile Experience, Beyond the Guidelines and Standards, by Katja Forbes. I will ask our main question again to our presenter: How does your work relate to the symposium objectives and research questions?

>> KATJA FORBES: Hi, this is Katja. Can you all hear me?

>> YELIZ YESILADA: Yes.

>> KATJA FORBES: Brilliant. I’ve come from a user experience and design background. The way that I research and look at the challenges we have is from that human-centered point of view. I’m not as much of a technologist as some of the people on the call.

I'm always looking at the system from that angle, looking for human-centered accessibility guidelines. As for the objectives of this particular symposium, I've been looking at the coverage of mobile accessibility by the existing standards, primarily WCAG and the Mobile Web Best Practices and related documents, which touch on it as well.

The fundamental thing that I've discovered, and it's probably obvious to a lot of people as well, is that there is actually no user-perspective standard for mobile accessibility. And the concern I raise in my paper is that by the time we come up with something that might be relevant and get it accepted, the landscape of the mobile world will have moved along so fast that it will be irrelevant very quickly.

The other thing that I'm looking at is the standards work that has been done. The great work that was done on the WCAG and MWBP mapping was done at a point in time when we didn't have a lot of the devices that we have around today; in particular, we didn't have iPads or tablets at all. And so while it was doing the right thing in trying to find stable footing within WCAG that we could apply to mobile, things have just moved on so fast since the last of that research, which I believe was in 2009.

Additionally, the other things I'm looking at as part of my research are these new interaction models and new interaction paradigms.

There is a lot of opportunity to look at totally different interactions, because new devices are providing us with a lot of features, geolocation services and things like that. This brings up a huge opportunity for us to look at supporting accessibility with specific applications such as money readers and things like that, which use the camera to find the denomination of the money the person is holding.

And also I've been looking at ways that we can use the geolocation services to minimize all that physically intensive data input, such as typing addresses into phones with the tiny keyboard that we have been talking about in the earlier papers.
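
A minimal sketch of that idea, using the standard Geolocation API; the reverse-geocoding helper here is a hypothetical placeholder for whatever lookup service a real app would call.

```typescript
// Sketch: pre-filling a location with the standard Geolocation API instead of
// making the user type an address. reverseGeocode() is a hypothetical stand-in.
async function reverseGeocode(lat: number, lon: number): Promise<string> {
  return `Near ${lat.toFixed(4)}, ${lon.toFixed(4)}`; // placeholder result
}

function prefillAddress(input: HTMLInputElement): void {
  if (!("geolocation" in navigator)) return; // fall back to manual entry

  navigator.geolocation.getCurrentPosition(
    async (pos) => {
      // One confirmation step instead of dozens of keystrokes.
      input.value = await reverseGeocode(pos.coords.latitude, pos.coords.longitude);
    },
    () => {
      /* user declined or lookup failed: manual entry stays available */
    },
    { enableHighAccuracy: false, timeout: 10_000 },
  );
}
```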

The other thing I've been looking at, which goes to that last question we had, is that type of technology, the tactile layer that allows buttons to appear and disappear as required. As the very good point was made, it is relevant and useful to some people with accessibility needs, but there are other people to whom it is going to make absolutely no difference.

But the point is that I am trying to look ahead at things being put out there that could possibly be used and then form part of standards as well.

But yeah, coming at it from the user experience background, which I can talk about a little bit later, I've been trying to really take the user's point of view, and what their requirements are, and trying to let that drive our standards and the activity that we have in the standards. That's the end of my statement.

>> YELIZ YESILADA: Okay. Thank you, Katja.

I would like to move on to our question specific to your abstract. I already see some people in the queue. So: do you think there is a tension between increasing accessibility requirements resulting from increasing complexity (audio breaking up) and the need for less (inaudible) guidelines? If you think there is a tension, what suggestions do you have (audio breaking up) such tension, based on your –

>> KATJA FORBES: That is great, a two-part question. I think the answer to the first part is: absolutely, there is tension between increasing accessibility requirements and that increasing complexity.

I'd say it is not only due to the complexity of mobile applications, but also due to the state of development, and the individual nature of the user experience and design concepts that the different companies and manufacturers apply to the way they put apps together; it was really well referred to earlier on as the whims of certain companies as to how they approach their design.

I’m going to answer the first part of the question with a current example. And in this example I’m including tablets because as I’ve gone through my research, I’m always including tablets as part of my mobile devices.

The example that I can give you: look at the Windows 8 touch strategy and its design patterns, which are also going to be used in Windows Phone 8. All the user experience designers and touch developers are going to be confronted again with a new set of touch design constraints, and those are also going to present accessibility challenges.

Although, I had a sneak peek workshop which I was involved in, which was really great, and I discussed with the guys there how their touch-first approach was actually going to be accessible. I was assured that everything that was possible to do with touch on their tablets with Windows 8 was going to be possible to do by keyboard. But then I discovered certain gestures, such as rotation-based controls, that can't be replicated in the keyboard browser context.
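
For illustration only, this is the general shape of a keyboard fallback for a rotation control; the element wiring, step size, and ARIA role choice here are assumptions, not Microsoft's implementation.

```typescript
// Sketch of a keyboard equivalent for a two-finger rotation gesture
// (hypothetical element and step size; not any platform's actual code).
function addKeyboardRotation(el: HTMLElement, stepDeg = 15): void {
  let angle = 0;
  const apply = () => {
    el.style.transform = `rotate(${angle}deg)`;
    el.setAttribute("aria-valuenow", String(angle)); // report state to AT
  };

  el.tabIndex = 0;
  el.setAttribute("role", "slider"); // closest generic role for a rotary value
  el.setAttribute("aria-label", "Rotation");
  el.setAttribute("aria-valuemin", "0");
  el.setAttribute("aria-valuemax", "359");
  apply();

  // Arrow keys do what the rotation gesture does.
  el.addEventListener("keydown", (e: KeyboardEvent) => {
    if (e.key === "ArrowRight" || e.key === "ArrowUp") {
      angle = (angle + stepDeg) % 360;
    } else if (e.key === "ArrowLeft" || e.key === "ArrowDown") {
      angle = (angle - stepDeg + 360) % 360;
    } else {
      return;
    }
    e.preventDefault();
    apply();
  });
}
```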

I should caveat the statements I'm making on all of this with the fact that I have not done extensive testing on Windows 8 (inaudible). I have gone through all the materials that are available online, and I also have some additional Microsoft templates which I received during that workshop, where I had an opportunity to play with Windows 8 and do some design for it. What does this also mean for Windows Phone 8, which is code-named Apollo and apparently turning up mid 2012, which should be pretty soon? The industry knowledge has been that Windows 8 is going to be closely tied to Windows Phone 8, sharing ecosystems, components, and user experiences. They will both be using Internet Explorer 10. This does create a really complex situation.

On the other hand, there should be some improvement, specifically around speech to text, as Microsoft does plan to introduce a redesigned version of its speech-to-text tool. And we can't just focus on visual-impairment accessibility, which is a really good point that other speakers have made as well.

Then there are all these gesture-based interactions, and much of the Windows 8 touch approach relies on them. If we have (inaudible). Additionally, there are a lot of questions out there already about how Narrator is going to work with the whole concept of live tiles, which is one of the interaction mechanisms of Metro, where live content changes in one of the little tiles on your home screen.

These are integral to the Windows 8 and Windows Phone 8 experience. The problem I found in developing this work is that there are lots of explanatory articles about how to build accessible Metro apps, and most of those rely on implementing HTML5 with ARIA and JavaScript, or a combination of them in some way or another. I don't want to get too much into the technology here, but if we are looking at Windows 8 touch and then offering that as something for people to use with their keyboards as well, then screen readers such as JAWS, version 10 and above and even version 13, are having trouble with HTML5 and ARIA, which I've experienced recently in design and development work that I've done in the other role that I have.
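
One generic sketch of the HTML5-plus-ARIA approach those articles rely on, offered as an assumption-laden example rather than a Metro recipe: an ARIA live region so that live-tile-style content updates are announced by a screen reader at all. The tile element and update source are hypothetical.

```typescript
// Sketch: announcing live-tile-style content updates to a screen reader via
// an ARIA live region. The tile element and update source are hypothetical.
function makeAnnouncingTile(tile: HTMLElement): (newText: string) => void {
  tile.setAttribute("role", "status");      // a built-in polite live region role
  tile.setAttribute("aria-live", "polite"); // announce after current speech ends
  tile.setAttribute("aria-atomic", "true"); // read the whole tile, not the diff

  return (newText: string) => {
    tile.textContent = newText; // AT announces the change
  };
}

// Hypothetical usage:
// const update = makeAnnouncingTile(document.getElementById("mail-tile")!);
// update("Inbox: 3 new messages");
```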

And all of the guidelines that developers refer to for information, W3C, WAI, versions of HTML5, none of them are actually about mobile accessibility at all. They don't even refer to the WCAG and MWBP mapping. In their papers, they have a very technical set of 11 accessibility guidelines which are specific to their application development, but nothing about the user experience.

The other aspect as well: Microsoft is assuring the disability community that they are working with the assistive technology companies to ensure everything is integrated with their new touch-first approach, but there are other comments being made on the Microsoft Developer Network saying that they have already done a great job because certain mirror driver models have been removed in Windows 8. I don't know what that means, but it doesn't sound good.

I could go on with a lot more Windows 8 accessibility issues and challenges. But let that example answer the first part of the question, about whether there is tension between the complexity of app development and the need for less complex guidelines.

The second part of the question asks me to suggest an approach to these kinds of issues, and that is really hard. I certainly don't have all the answers. The way that I try to look at it: we have well-structured problems, we have ill-structured problems, and we have something called wicked problems, which is something Rittel and Webber came up with in 1973. Wicked problems are situations where there is no complete list of moves that we can make to get to a solution. And the solutions to wicked problems can only be good or bad.

There is always more than one explanation for these problems, every problem is the (inaudible), and every problem is unique. We are really in the waters of the wicked problem state.

But there are strategies we can approach these kinds of wicked problems with, and one that I like is a collaborative approach: bringing in the people that are affected so that they are actively involved in the planning process. I advocate this as a strategy for solving, or at least moving towards a solution of, the problem of trying to create standards in the mobile landscape. The mobile landscape is made of quicksand.

That being the case, I believe that user research should form the basis of guidelines, adding those findings to WCAG 2 where they are relevant to mobile and touch interaction. The challenge of this approach is achieving shared understanding, and it is a time-consuming process. But I think it's time that is really well spent.

I hope that answers that question.

>> YELIZ YESILADA: Yes, thank you, Katja. I would like to quickly ask our last question, and then I see that there are two people in the queue. Related to your comment, what strategy would you specifically recommend to help keep guidelines like WCAG up to date with mobile technology? You already briefly mentioned a few recommendations, but… (Audio breaking up).

>> KATJA FORBES: Sure, okay. This is what strategy would you recommend to keep guidelines up to date?

>> YELIZ YESILADA: Yes.

>> KATJA FORBES: Yes, okay. So the strategy that I put forward is similar to what I was just talking about, which is to continually engage with users about their accessibility requirements. That is the key to maintaining moving standards. In the first instance we need to establish a baseline, which is going to be possible through detailed user research. I can talk about that later on if we have time.

Another part of the strategy is cataloging all these new input mechanisms, such as gestures and speech input, and the utilization of the mobile devices' native functions such as the camera and geolocation, and identifying through that the accessibility challenges and the subsequent requirements for design and development to meet them. That defines the problem space, which provides a stage for us to get to solutions. Part of what we face is that this problem space isn't well defined: we haven't looked at all these new initiatives and new interactions and taken them apart.

Because we see it as such a huge issue, such a wicked problem, breaking it down into more manageable components would be a really good strategy for guiding future development.

Another part of the strategy is standardizing gestures across platforms, which would go a long way towards good guideline creation. (Inaudible) excellent reference guide (inaudible) touch tablets, not just mobile devices. However, like everything we have in this dynamic area, it is already out of date, because it only covers 7, not 7.5. And all that work will get chopped up again when 8 turns up, because that has a significantly reduced number of gestures and a lot of different concepts. It uses semantic zoom, which is activated using the pinch gesture, and currently that gesture is understood across the rest of the mobile devices as zooming in and out of a content area, like a picture or a page, rather than the semantic zoom concept of displaying all the tiles in the Windows 8 screen landscape.

So standardization of gestures would be great. Additionally, using basic (inaudible) heuristics would help keep the guidelines up to date, as long as they are technology agnostic rather than specific to one use.

For example, some of the gesture-specific heuristics we can look at come from Brian Cragun, who writes in UX Magazine. One of his heuristics: users shouldn't be required to perform gestures repeatedly or over a long period of time unless that is the goal, because (inaudible) increases and precision decreases, which affects performance. That is one technology-agnostic gesture heuristic that we can use.

Heuristics applied to general usability can be easily adapted to mobile accessibility, such as not requiring somebody to use speech to input a password, which goes back to the whole concept of security.

The last idea I had was that we could create a dedicated online area for users to input the challenges they are facing and the kinds of requirements they feel they have from an accessibility perspective; this goes back again to getting the participants involved in solving the problems. This is something WAI could consider as it takes changes to WCAG 2 forward to include mobile in its standards. There are a lot of forums online, but they are usually very technology specific and quite uncoordinated.

But there is an opportunity, I think, for crowdsourcing to form part of the input for the working drafts of these guidelines. If people want to participate, they just don't have a simple and well-publicized method of doing so. I'm not suggesting this would be a free-for-all; it would be run in a structured fashion, such as putting out a call for input around specific guideline areas over a fixed period of time. You don't want to get into a situation where you have 365 days of moderation of this kind of online area. So there are a bunch of things we can throw into the strategy.

>> YELIZ YESILADA: Thank you. I already see people who would like to ask questions. I think Shadi has a question. Would you like to ask a question?

>> SHADI ABOU-ZAHRA: Yeah, hi, this is Shadi Abou-Zahra. Very interesting paper. I was actually going to ask specifically about what you talked about later on there. First of all, there is the position that there are no mobile accessibility guidelines, and the question of whether we need separate mobile accessibility guidelines, versus better clarifying and improving the coverage of mobile accessibility in the existing accessibility guidelines, given that mobile access has overtaken, or is shortly before overtaking, traditional desktop access, and that platforms are moving towards not differentiating between mobile devices and others, not least because most devices meanwhile are mobile, or at least ubiquitous in some form.

So I guess this is one of the things that I wanted to check on with you.

>> KATJA FORBES: That is a tricky question. We have the Web Content Accessibility Guidelines, WCAG, which definitely have been based on browser-based requirements. We have also now got the other guidelines, the Mobile Web Best Practices and the Web Application Best Practices. Those focus more on usable experiences rather than accessible experiences.

I did have another question put to me about what is on my WCAG wish list, and my main statement was full coverage: a consolidated set of gesture-based interactions, with guidelines on designing them for people who have disabilities.

But then there are the things that you have also talked about, which I would call device convergence, because the Internet is no longer just a browser, a computer on a desk. It is now tablets. You can even look at it as eReaders. You can look at it as mobile smart phones across all of those different operating systems.

But the Web tomorrow is not going to be that either. It is probably going to involve your television further. It is going to involve your car further. So how big do those guidelines have to get, as we have all of these devices converging and giving us that information and that product? The other thing we have to take into account is the project that is going on, the Global Public Inclusive Infrastructure project, which is looking at allowing users to invoke and use the accessibility features that they need anywhere, any time, on any device.

So then how does that affect the standards we can put in place, when somebody has an inclusive infrastructure that they can call on at any time?

I think the guts of the answer to your question is that it has to be a step change. We can look at the stuff that we can try to solve now, which would be around including gestures, and including native opportunities to use cameras and things like that. And then I think we are actually going to have to take a wider view, with all the things that are coming down the pipe at us as well. Because WCAG doesn't address what happens on a television, and there we are getting a bit into the relevance of guidelines in their individual states.

But we do need to have them. And I think we are going to come to a point where the devices are going to be so diverse that we are going to have to come completely back up to a technology-agnostic level, and try to cover it with that. Whether that is called WCAG or whether it is called something else, I think it is the step change that we have to go through.

Does that answer your question?

>> SHADI ABOU-ZAHRA: Yes indeed. I think some of what you touched upon is also the layer, or the level, at which accessibility is addressed. For example, the discussion about gestures, making those accessible, and alternatives to them, may be considered a lower layer. Also what Jan was talking about: the keyboard, the APIs, the back ends, the platforms and what they offer, and so on. There is a lot of work that happens in there, and it is already happening, in making those technologies provide accessibility features, whereas the guidelines really address the interface aspect of those at a later point in time.

So, I feel from your discussion that there is a little bit of tension in there also, between what belongs in the guidelines, the mobile Web accessibility guidelines or WCAG, or UAAG for that matter, and what belongs in the technology, such as HTML5 or IndieUI and so on.

>> KATJA FORBES: Yeah, definitely, I would say that there is tension between both of those things. But I would say that we do have to look at consolidating some of the standards that we have out there, so that we do raise it up a level from, I think you are kind of structuring it as, things that are done in the technology, as opposed to cataloging gestures and things like that. And there may need to be some consolidation of the Mobile Web Best Practices, the Web Application Best Practices and things like that, to include accessibility as part of those best practices, and perhaps not as something that hangs off the side, if you get what I mean.

>> YELIZ YESILADA: Thank you, Katja. I also see Elizabeth in the queue. I think she would also like to ask you a question. But we are actually overrunning our time, so please answer this one briefly. Elizabeth, would you like to ask your question?

>> KATJA FORBES: I’ll do my best.

>> ELIZABETH WOODWARD: Sure. It is an unexpected treat to see you are using agile methods or at least user stories as a framework for research here. I noticed you described the natural language statements, which is the card part of the three Cs for user stories.

You haven’t talked much about the conversation piece, and about the confirmation.

I was looking at this. I was wondering, are you looking at potentially having the standards guidelines serve as the confirmation piece for the user stories? And the other part to that was, I’d be really curious to see how you would describe a user role in this scenario.

It looks like some, a really interesting avenue for research here.

>> KATJA FORBES: Yeah. It's something that is a really big part of what is going on in user experience in the industry at the moment, with lean UX and agile UX. I've gone through a number of steps in user research to understand what the users, the people who have the requirements, actually need: going through one-on-one structured interviews, and using contextual inquiry to see how they work in their environment. From that data, I create users along (inaudible). Another thing I create is a fictional representation of the user, a journey describing how they achieve their goals, actions (inaudible) touch points.

It is from there that I construct the user stories, as you said: as a user, I can do this task so that I get this benefit.

The user role in this instance, I think, is partly determined by the particular accessibility need, whether it's a visual-impairment accessibility need or a motor-impairment accessibility need. You can look at it through that lens, because when you are using user stories, saying "as a mobile user" just doesn't cover the spectrum of what we need.

I use those stories to make sure we understand what those requirements are. And if I understand your question correctly, the guidelines that would come out of those requirements, out of the user stories, would be the confirmation. I think that is how the guidelines could be looked at.

The other point of that is, I'm trying to look at user stories and gather requirements like that to keep the guidelines as moving guidelines, because with each new technology or each new exciting development, a new set of user stories will come out of it. And if you look at those user stories, you can see what the requirements and needs are, and so what guidelines we need to put in place.

I think that is how I'm trying to look at it: as something that we can keep alive and changing. Does that answer your question?

>> YELIZ YESILADA: Thank you, Katja. I think we are now over time. I would like to thank Katja again, and also for the great discussion, and the questions.

And now I would like to open the floor for questions and comments from our participants. If there is any question you would like to ask the presenters directly, you can do so, or you can raise general comments and questions. Would anybody like to comment or ask a question?

(Pause.)

Peter? I think I see Peter on the queue. Would you like to ask your question?

>> PETER THIESSEN: Sure. A quick one, I think. Picking up the consumability framing, using mobile as a sort of way of getting funding for an accessibility project: I guess I'm asking, has anyone had any success stories in using both mobility and accessibility as a strategy for making a business case? So, successes or failures. Thanks.

>> YELIZ YESILADA: Would anybody like to comment on that? Great question. Thanks, Peter.

(Pause.)

I think Jan Richards wants to comment on this. Jan? I see Brian as well. Jan?

>> JAN RICHARDS: There we go. There are many different ways you can come at this from. The tool I mentioned, Tecla Access, which we developed here, has been commercialized by a company called Komodo OpenLab; they are looking at it more from an assistive technology perspective. They have secured financing.

That’s been a success.

As well, we are working with a mobile phone company in Canada on how they can expand into and keep markets, the senior markets and that kind of thing, by doing a better job of presenting the accessibility of their mobile phone offerings. There definitely are business cases that can be made.

>> YELIZ YESILADA: Thank you, Jan. I see Brian on the queue. Brian? Would you like to comment on that?

>> BRIAN CRAGUN: Yes. I would just echo everything that Jan has said. We have found that the word "accessibility" sometimes tends to close people's minds. They tend to think of accessibility as being about a small group of people, and actually, I get the impression it is sometimes seen as an irritant to the system, because people want to move forward with new, exciting projects, and accessibility sometimes slows that down, because they have to accommodate these different needs.

But as you shift the view from accessibility to usability, the word that our management team has used on several occasions, for getting good software to customers, is consumability: in other words, the ability to actually get it into the hands of the users, the customers, and actually make them productive with it. This consumability message is resonating.

And we have felt, as we have presented to the people who do funding, that they listen much better when we start talking about usability and consumability.

And we illustrate to them that we are not talking about just a few people who have disabilities; actually, everyone has these kinds of problems, and nothing makes that more apparent than mobile, because almost all people can relate to being in a place where they can't hear their mobile phone, or being in a place where the sun is too bright and they can't see the screen. Or they are in a car and they want to be able to contact someone, and they can't do it safely because they can't take their eyes or hands off the road.

>> YELIZ YESILADA: Thank you, Brian. I see Elizabeth on the queue. But before we take Elizabeth's comment or question, I would like to remind everybody that if you would like to ask a question, please type Q+ into the chat room, or 41 hash on your phone dial pad. Elizabeth?

>> ELIZABETH WOODWARD: Yeah, so Brian and I work together, so I completely agree with everything that he has said, and the speaker before. A couple of things I would add: when we can start talking about the magnitude of the impact, how many people are potentially impacted by the inaccessibility, the inaccessible security or inaccessible Cloud, whatever it may be, when we can start talking about those numbers and people see the magnitude of the impact, it's no longer "just", and I put "just" in double quotes, about this one part of the business; all of a sudden we are affecting hundreds of thousands of users. That raises some interest.

I think too, when we start talking about legal inquiries and litigation, and are able to speak to some of the big names that are trying to be accessible and have run into challenges, and the magnitude of the legal actions that have been brought, those numbers really seem to catch attention.

I think too, when you can speak to the challenges in ways that the purse holders, the people who are making the money decisions, can identify with, if they see themselves in it, I think there is a greater chance that it will be successful.

I think too, if we can speak to the increase in productivity, the fact that we want people to be able to participate while they are in a noisy airport, for example. Those kinds of things seem to really resonate.

I think that the increase in scale of impact is the big thing now. Those numbers are catching attention.

>> YELIZ YESILADA: Thank you, Elizabeth. I see Katja in the queue. I think this will be our last question. We are about to finish. Katja?

>> KATJA FORBES: I was actually going to speak to that same question, and I think Elizabeth might have covered it. It's nearly 3 in the morning here, so I don't want to take up time (chuckles) if somebody else wants to ask another question. Mine was just that there is a litigious aspect to it, which does, unfortunately, make a business case, especially in large organizations.

But the other thing that I've found is that it helps to build empathy with the situation, with what gets put in place when we create something for someone who has accessibility needs. As part of that business case, I would definitely suggest that people look at building that empathy, whether by research, by testing, or by interview.

(Pause.)

>> YELIZ YESILADA: Thank you, Katja. I would like to ask a question to Shadi. Do we have time to take another question, or will the call automatically finish in six minutes?

>> SHADI ABOU-ZAHRA: No, the call won’t finish, but I guess people may start stepping off. So go ahead.

>> YELIZ YESILADA: I would like to take, I think (audio breaking up) I would like to take that question as the last question, and then we can finish the call. Simon?

>> SIMON HARPER: Hi. My question is simply this. Through all the pieces of work I've been hearing today, and they have been really interesting to listen to, I'm interested in how you feel about the information that's presented on mobiles, and about the information that people seem to want to use mobiles for. To me, people seem to want the presented information to be quite constrained, so that they can get an overview of all of it at a glance, especially with regard to the Microsoft live tiles.

And people seem to want to use the mobile for input in very constrained ways too: they don't want to be typing large amounts of text into it; they want to be doing this quite succinctly.

Do we think that accessibility work will be useful for everybody who is operating in these kinds of constrained modalities? And do we also think that, with mobile, we need to be thinking more intelligently about the kinds of things we present, and when we present them, and whether we can actually get more information into the typed message, intelligent typing if you like, than we currently do, if that makes any sense?

>> YELIZ YESILADA: Is there anybody who would like to comment on this, or ask a follow-up question? Anybody? I guess, Simon, your question was about how, when developers are actually trying to develop something, they may make assumptions about the user requirements, but those may conflict with the actual user requirements. Is that right? Was that your question?

>> SIMON HARPER: My thinking, more succinctly: people seem to want small amounts of information on their screen in the output, so that they can see what the last mail was, what the title of the last tweet was, and those kinds of things. And they also seem to want to type in only small amounts of information at a time.

They don't want to be typing great big E-mails, just a little bit. I wondered, with regard to the accessibility aspects, whether there is any way we can do this intelligently, so that we don't need to be typing so much, and so that we can look at maybe auditory glances and bits of information, catching fragments that allow us to form an understanding of the mobile environment without going into as much detail as we might on a desktop system. That was really what I was trying to get to.

>> YELIZ YESILADA: Okay. Gene says on the IRC, plus one for the comment that users want limited input and not extended typing. I see Jan on the queue.

>> JAN RICHARDS: That is a good comment. You are right that a lot of the interaction is probably small snippets. But then somebody will be on a mobile and realize they have to make an update to an entire Google document or they want to do something big.

So although that might be the frequent use case, I don't think we can make the assumption that that is the only way to go. It would of course be great work to optimize those small interactions while not cutting off the longer ones, is what I'm trying to say.

>> YELIZ YESILADA: Okay, thank you. Elizabeth? Would you like to comment on this?

>> ELIZABETH WOODWARD: Yes. I was just typing that, I agree. We saw the same with mobile education. Users didn’t want to read tons of information on their device. Our first step was a quick conversion of text to make it usable on the device. There is so much research and so much opportunity there on how to make the information more concise and valuable.

I think partly it depends on the form factor too, right? If we are talking about a smart phone, people are going to be interested in doing maybe one set of activities. But if they have a larger tablet, or as we evolve into devices with different form factors and different input devices, there is likely to be a different set of use cases, or user stories, of how people want to use the devices. So I agree.

>> YELIZ YESILADA: Thank you, Elizabeth. I see Katja on the queue.

>> KATJA FORBES: Hi, just to that point: one of the main things that I do suggest for accessible user experiences is that you don't just look at one way of getting the information in there. Look at the device that you have got, look at what its capabilities are, and see how you can utilize them for input: cameras, geolocation, speech, all those possibilities that have opened up to us with smart devices. Use everything that you can, so that you don't have to have people sitting there typing, with the tiring issues that causes.

That’s the comment that I wanted to put there.

>> YELIZ YESILADA: Thank you, Katja. Is there any other question? Or comment? I think we can now close the call. I would like to thank all our presenters for their great work. Thank you.

I would like to thank Shadi for all the technical support. I just wanted to note that, as I said at the beginning of the call, the results of this symposium will be published in a technical note. When the technical note is ready, we will publicize it widely. Thank you all. Thank you very much.

We also encourage you to participate in the upcoming research and development working group symposia. Thank you all.

It would also be great if you would like to join the research and development working group; if you are interested, we can send you further information about this. And please remember that if you have any questions that you did not have a chance to ask in the symposium today, you can send them to our public mailing list.

>> SHADI ABOU-ZAHRA: Thank you, Yeliz, thank you, Simon and Peter for chairing this symposium, and thank you, everyone for attending. Good-bye.

>> YELIZ YESILADA: Bye.