The mobile world is rapidly evolving with increasingly technical devices, platforms, Application Programming Interfaces (APIs), applications, and Web browsers. Further, the complexity of Web content and applications provided via mobile devices is likewise increasing year on year. At the same time, the many sensors and gauges deployed on mobile devices are being utilised to provide new interaction paradigms such as touch events and gestures. Mobile devices are becoming increasingly important and are already the primary form of accessing the Web in many parts of the world.

While these developments provide vast opportunities for people with disabilities they also entail accessibility challenges. In particular, it is not sufficiently clear how well existing standards such as Web Content Accessibility Guidelines (WCAG), User Agent Accessibility Guidelines (UAAG), Accessible Rich Internet Applications (WAI-ARIA), Mobile Web Best Practice (MWBP), Mobile Web Application Best Practice (MWABP), Touch Events, and others address accessibility in the mobile context, despite some initial work on mapping between WCAG and MWBP. There is a need to better understand these challenges and to develop a roadmap to guide further research and development activities.

This note presents the major findings of a symposium organised by the Research and Development Working Group (RDWG), which brought researchers and practitioners together to discuss these challenges and possible solutions, and to help develop a roadmap for future research and development in the field.

Status of this document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This [dd month yyyy] [First Public] Working Draft of the Research Report on Mobile Web Accessibility is intended to be published and maintained as a W3C Working Group Note after review and refinement. The note provides an initial consolidated view of the outcomes of the Mobile Web Accessibility Online Symposium held on 25 June 2012.

The Research and Development Working Group (RDWG) invites discussion and feedback on this draft document by researchers and practitioners interested in mobile Web accessibility, in particular by participants of the online symposium. Specifically, RDWG is looking for feedback on:

Please send comments on this Research Report on Mobile Web Accessibility document by [dd month yyyy] to public-wai-rd-comments@w3.org (publicly visible mailing list archive).

Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document has been produced by the Research and Development Working Group (RDWG), as part of the Web Accessibility Initiative (WAI) International Program Office.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

Table of Contents

  1. Introduction
  2. Related Work
  3. Current Research
  4. Emerging Themes
  5. Future Directions
  6. Conclusions
  7. References
  8. Proceedings
  9. Acknowledgements
  10. Appendix - Paper Questions and Answers

1. Introduction

Mobile devices are changing the way people access the Web, and it is no longer safe to say that a specific device or set of devices will be used to access content. We scope mobile devices as those that are conventionally carried by the user and act as that user's personal computation and connection resource. Indeed, we subscribe to the Wikipedia definition, which describes a mobile device (also known as a handheld device, handheld computer or simply handheld) as a small, hand-held computing device, typically having a display screen with touch input and/or a miniature keyboard and weighing less than 0.91 kg. Apple, HTC, LG, Research in Motion (RIM) and Motorola are just a few examples of the many manufacturers that produce these types of devices. Screen size and computational resources will vary, as will support for gestures, voice commands, and a variety of gadgets. Beyond the devices themselves, the software that allows a disabled user to access Web content is just as varied. Operating system, browser, API and so on are usually specific to a mobile device, with each resulting in a unique user experience. Vendors' approaches to accessibility also vary; for example, Android and iOS have different accessibility APIs, though this is more of a developer issue. Standardisation seems the only way to create a shared user experience and take much of the burden of learning new accessibility software and techniques from the user. Indeed, we note that directions for standards have already been published in this space, including the "Standards for Web Applications on Mobile: current state and roadmap" as well as those developed by the WAI and oriented to the practitioner.

Initial work has been done by the W3C on creating some mappings between WCAG (Henry 2012) and MWBP (Rabin and McCathieNevile 2005), but a gap remains for a more complete set of best practices for accessible mobile Web content. Indeed, as alluded to in the abstract, the mobile world is rapidly evolving with increasingly sophisticated devices, platforms, APIs and applications. While the mobile Web provides vast opportunities for everybody, especially people with disabilities, it also entails many accessibility challenges. There are many areas where further research needs to be conducted to understand the scale of accessibility issues on the mobile Web for people with disabilities. For instance, to address accessibility issues, mobile devices provide alternative interaction methods, and combinations of these such as voice and gesture interactions, but little is known about how these new interaction methods support accessibility. Furthermore, it is not clear how well existing standards such as Web Content Accessibility Guidelines (WCAG) (Henry 2012), User Agent Accessibility Guidelines (UAAG) (Jacobs et al. 2002), Accessible Rich Internet Applications (WAI-ARIA), Mobile Web Best Practice (MWBP) (Rabin and McCathieNevile 2005) and others address accessibility in the mobile context. Despite some initial work on mapping between WCAG and MWBP (Yesilada et al. 2008, Chuter and Yesilada 2008), there is relatively little work bringing together cross-platform standards. Conversely, there is a growing trend in the accessible and mobile communities towards more flexible and device-generic content solutions, to some extent addressed by the new WAI Indie UI Working Group. While a mobile Web practitioner might call this flexibility responsive design or progressive enhancement, with a focus on layout and bandwidth, an accessibility practitioner might call it ubiquitous design, or simply complete design, with a focus on content access. Both views aim at a generic client solution within a browser that can be reused across standards-supporting devices.

What can be seen in real-world use is that mobile devices have popularised a number of interaction models such as gestures and voice commands. With these new interaction models come new accessibility challenges and opportunities. Gestures, for example, open a previously flat and "lifeless" touch screen to a blind user by allowing quick swipe scanning for input, with feedback output via auditory summaries and cues from a screen reader. A recent study shows the growing popularity of screen reader use on mobile devices, especially on iOS (WebAIM 2012), which is also confirmed by a more recent study (WirelessRERC 2013). On iOS the default screen reader that comes with the OS is VoiceOver, which is well known in the accessibility community and offers a number of accessibility features (Apple 2012). However, with multiple platforms and devices the mobile landscape is fragmenting; Android, for example, has a different accessibility API (Android 2012) than iOS. In an effort to begin standardising accessible gestures provided by these devices, initial work is currently underway by the WAI Independent User Interface (Indie UI) Task Force to abstract the interaction events with Indie UI: Events 1.0.
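
As an illustrative sketch only, a swipe gesture of the kind discussed above can be detected with the W3C Touch Events interface and paired with a keyboard equivalent, so that the gesture is never the sole input method; the element, the 30-pixel threshold and the move() helper are hypothetical:

```html
<!-- Minimal sketch of swipe detection via W3C Touch Events.
     The element, threshold and move() helper are illustrative. -->
<div id="gallery" tabindex="0">Swipe or use arrow keys to browse</div>
<script>
  var gallery = document.getElementById('gallery');
  var startX = 0;

  gallery.addEventListener('touchstart', function (e) {
    startX = e.changedTouches[0].clientX;  // finger's starting position
  });

  gallery.addEventListener('touchend', function (e) {
    var deltaX = e.changedTouches[0].clientX - startX;
    if (Math.abs(deltaX) > 30) {               // ignore small jitters
      move(deltaX > 0 ? 'previous' : 'next');  // horizontal swipe
    }
  });

  // Keyboard equivalent, so the gesture is not the only input method
  gallery.addEventListener('keydown', function (e) {
    if (e.key === 'ArrowLeft')  move('previous');
    if (e.key === 'ArrowRight') move('next');
  });

  function move(direction) { /* application-specific */ }
</script>
```

Providing the keyboard path matters in part because screen readers such as VoiceOver and TalkBack repurpose many touch gestures for their own navigation.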

While these new interaction forms are gaining popularity, more traditional forms of interaction, such as keyboard input, continue to be used on mobile devices and have changed little from the original QWERTY keyboard layout. However, as with a small screen, a small keyboard poses many challenges for disabled users, especially users with motor and visual disabilities. A small keyboard may increase input errors through users accidentally hitting the wrong key, holding down a key too long, pressing two keys at once, and so on. Additionally, a visually impaired user using an on-screen keyboard will rely on scanning methods to determine the key to press, which will often take longer than text input by a sighted user, so applications that are time sensitive may be unusable by disabled users.

We can see that this novel mix of interaction paradigms creates a very specific set of problems and opportunities, and implies that further research is needed to understand the challenges associated with the mobile Web.

In order to investigate these challenges, the Research and Development Working Group organised a symposium on mobile Web accessibility. The primary objective of this symposium was to gather, analyse, and discuss practical experience with mobile Web accessibility. For an overview of existing related standards, please visit the W3C WAI Guidelines and Techniques and the W3C Mobile Web Initiative The Web and Mobile Devices resource pages. In particular, the objective was to investigate:

This research note aims to present the findings of this symposium, which constitute the basis from which to further explore a research and development roadmap for mobile accessibility. This note presents an overview of the state of the art regarding mobile Web accessibility, introduces the papers that were presented at the symposium and provides a roadmap for future research on mobile Web accessibility.

2. Related Work

Mobile Web access can be challenging for everybody (Brewster 2002, Yang and Wang 2003, Chae and Kim 2004, Roto and Oulasvirta 2005, Oulasvirta et al. 2005, Wobbrock 2006, Cui and Roto 2008). Compared to accessing the Web on the desktop, mobile devices can have many limitations and contextual constraints. Mobile devices have much smaller screens, which means the visual field is restricted (Reisel and Shneiderman 1987, Dillon et al. 1990, Chae and Kim 2004). They might be used in ‘off-desk’ environments (Carter et al. 2006) or in poor lighting conditions, such as sunlight reflecting on the screen, which may affect a mobile user’s colour perception and contrast sensitivity, increasing reading difficulty. They might also be used in noisy environments, where one might have problems hearing voice prompts. They might also have limited input capabilities, which can cause input difficulties (James and Reischel 2001, MacKenzie and Soukoreff 2002, Brewster 2002). Mobile users, who usually participate in multiple activities while accessing the Web (e.g., talking, walking, way finding, sidestepping (Oulasvirta et al. 2005, Lin et al. 2007)), also have to distribute cognitive resources between multiple tasks. These are specific examples of situational disabilities that anybody can experience when accessing the mobile Web (Sears and Young 2003).

Disabled Web users experience many difficulties due to inappropriately designed content (WAI, 2005), and these barriers can persist on desktop, mobile or any other browsing platform. However, considering the context of use discussed above, the difficulty for people with disabilities might be much worse on the mobile Web. Even though there are user studies that investigate the difficulties experienced by mobile users without disabilities (Nielsen Norman Group 2011), there is not much research that investigates the difficulties experienced by disabled mobile Web users (Trewin 2006). In this literature review, we first look at the state of the art in mobile Web technology, including accessibility APIs, assistive technologies on mobile devices, etc. We then look at the state of the art in mobile Web accessibility guidelines, and finally we review the new interaction paradigms on mobile devices.

2.1 Mobile Technology and Accessibility

One can simply define the mobile Web as being able to access Web content on mobile devices. With existing mobile technology, this can be achieved by using Web applications, native applications or hybrid applications. Web applications are browser-based applications built with technologies such as HTML, CSS and JavaScript. Native applications are downloaded applications that take advantage of phone capabilities, and hybrid applications are also built with technologies such as HTML, CSS and JavaScript, but are downloadable and can also access phone capabilities. Consequently, when one talks about the accessibility of the mobile Web, all of these application types have to be considered. Compared to Web accessibility on the desktop, this variety of application types therefore brings extra complexity. Mobile Web accessibility is not only about browsers and interaction through browsers, but also needs to consider mobile devices and their support.

Accessibility APIs are very important components for achieving accessibility. They are used to communicate accessibility information about an application to assistive technologies (Faulkner et al. 2012). For example, screen readers access the content of a given application through these APIs. Compared to the desktop, accessibility APIs on mobile devices are not as developed. For instance, iOS (Apple 2012) and Android (Android 2012) provide APIs, but there are many other mobile platforms that do not. However, most mobile platforms provide accessibility tools that in general support alternative input and output. For example, Android has TalkBack, a mobile screen reader (Android 2012); iOS has VoiceOver and other accessibility tools (Apple 2012); Blackberry has a screen reader (Blackberry 2012); and Nokia has Mobile Speak (Nokia 2012). These mobile platforms also have accessibility settings, for example zooming, font resizing, custom gestures, colour settings, haptics, etc. However, all these platforms differ in their accessibility design, and it can be challenging for developers to design a Web page or application that is accessible on all platforms. Regarding Web technologies, these platforms also support different technologies such as HTML, CSS, WAI-ARIA, Flash, etc. On the desktop the platforms are not so divergent; mobile platforms bring an extra challenge to developers because of differences in the technologies and platforms they support.
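
To make the role of these APIs concrete, the following minimal sketch (element names are illustrative) uses WAI-ARIA attributes, which browsers map onto the platform accessibility API so that screen readers such as TalkBack or VoiceOver can announce the control and its state:

```html
<!-- Sketch of a custom toggle exposed through the platform
     accessibility API via WAI-ARIA; names are illustrative. -->
<div id="mute" role="button" tabindex="0" aria-pressed="false">Mute</div>
<script>
  var mute = document.getElementById('mute');
  mute.addEventListener('click', function () {
    var pressed = mute.getAttribute('aria-pressed') === 'true';
    // Updating this state is what assistive technologies announce
    mute.setAttribute('aria-pressed', String(!pressed));
    // A complete widget would also handle Enter/Space via keydown
  });
</script>
```

Without the role and state attributes, the div is simply invisible to the accessibility tree; with them, each platform's API can expose it as a pressable button.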

2.2 Mobile Accessibility Guidelines

There is no unified, universal set of mobile accessibility guidelines that includes techniques and methods for developing accessible mobile Web pages and applications. The guidelines most closely related to mobile accessibility are obviously W3C’s Mobile Web Best Practices (MWBP) (Rabin and McCathieNevile 2005) and Web Content Accessibility Guidelines (WCAG) (Henry 2012). MWBP mainly provides techniques that can be used to design user-friendly Web pages on mobile devices, and WCAG includes guidelines for developing accessible Web content (Harper and Yesilada 2008). There is also some work that looks at the overlaps and relationships between WCAG and MWBP, which shows that there are significant overlaps (Yesilada et al. 2008, Chuter and Yesilada 2008). Platform-specific guidelines have also been developed; for example, Android (Android 2012), Blackberry (Blackberry 2012), iOS (Apple 2012), Nokia (Nokia 2012) and Windows Mobile (Microsoft 2012) have specific design guidelines for their platforms. Other organisations have also published mobile accessibility guidelines (AB 2012). However, a gap exists for a more complete, unified, universal set of best practices for accessible mobile Web content. A growing trend in the accessible and mobile communities is towards more flexible and device-generic content solutions. A mobile Web practitioner might call this responsive design or progressive enhancement (Marcotte 2011), with a focus on layout and bandwidth, while an accessibility practitioner might call it ubiquitous design (Ubiquitous Web 2012). Both views aim at a generic client solution that can be reused across standards-supporting devices. This raises several questions about the coverage of mobile accessibility by existing standards such as WCAG (Henry 2012), UAAG (Jacobs et al. 2002), WAI-ARIA (Craig and Cooper 2011), MWBP (Rabin and McCathieNevile 2005) and others.
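
The "generic client" idea can be sketched, with illustrative class names and breakpoint, as a single document whose layout adapts via a CSS media query rather than serving device-specific content:

```html
<!-- Sketch of responsive design / progressive enhancement:
     one document, layout adapted per viewport. Class names
     and the 480px breakpoint are illustrative choices. -->
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
  .nav     { float: left; width: 20%; }  /* two-column desktop layout */
  .content { margin-left: 22%; }
  @media (max-width: 480px) {            /* single column on small screens */
    .nav, .content { float: none; width: auto; margin: 0; }
  }
</style>
<div class="nav">…navigation…</div>
<div class="content">…main content…</div>
```

The same markup therefore serves every standards-supporting device, which is also the property the accessibility community values in ubiquitous design.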

2.3 Mobile Accessibility Evaluation

When we look at the evaluation of mobile Web accessibility, not much research has been conducted. There are many inspection tools and emulators that can be used for testing, but these are mainly vendor-dependent tools, for example the Opera emulator (Opera 2012) or iOS-based developer tools (Apple 2012). The W3C Mobile Web Initiative (MWI) also introduced machine test sets based on MWBP 1.0: W3C mobileOK Basic Tests 1.0 (Owen and Rabin 2008). Passing these tests means that the evaluated content provides a functional user experience for users of basic mobile devices whose capabilities at least match those of the Default Delivery Context (DDC). The DDC can be considered the minimum common denominator device profile. However, these tests are not based on the specific requirements of disabled mobile Web users. There are a number of automated mobile evaluation tools that use the W3C mobileOK Basic Tests, such as the W3C mobileOK Basic checker (Marin and Rionda 2008), the TAW mobileOK Basic checker (TAW MobileOK Checker 2010b) and ready.mobi (ready.mobi 2010a). There are also some tools that are directly based on MWBP 1.0, including EvalAccessMOBILE (Arrue et al. 2007). MWI also developed an open-source library which can be downloaded and tested using its Web interface, and some automated evaluation tools are based on this library, such as the MokE online evaluation tool (Garofalakis and Stefanis 2008). All of these automated evaluation tools evaluate Web pages against the DDC; however, there are also some tools that aim to consider different device specifications, such as the tool described in (Vigo et al. 2009). None of these tools particularly focuses on testing the accessibility of the mobile Web for disabled users. Mobile Web accessibility evaluation is thus also an area where further research needs to be conducted.

2.4 Mobile Interaction Models

Mobile devices such as the iPhone 5 and iPad, Google Nexus, BlackBerry Z10 and PlayBook, Nokia Lumia and so on are usually quite small and, as discussed above, are typically used in constrained environments. Traditional forms of interaction such as the keyboard or mouse sometimes do not exist on mobile devices, or are almost impossible to use (Butts and Cockburn 2002). Therefore, new interaction forms have emerged on mobile devices. Some of these techniques and methods were developed to improve accessibility for people with disabilities on the desktop and have migrated to mobile devices. Similarly, some techniques developed for mobile devices are now widely used by people with disabilities on the desktop.

To address the limitations of keyboards, different input techniques have been proposed, including the soft keyboard, joystick, voice input, tablet, touchscreen and multi-modal interfaces. A soft keyboard is a program that presents keyboard icons in a graphical user interface and allows a user to input text by tapping on-screen icons with a finger or stylus. Soft keyboards are very popular on mobile devices (Hinckley 2003) and can also be used to enter text by tapping the touchscreen with a stylus (Myers et al. 2002). In addition, joysticks, tablets and touchscreens are used for text entry as alternative methods to keyboards (keypads) (Silfverberg et al. 2000, Mankoff et al. 2002, Wobbrock et al. 2004, Chau et al. 2006). Especially with iOS, MultiTouch (Apple 2012) and soft keyboard technologies are very popular alternatives to conventional keyboards/keypads. Voice control, referring to both speech recognition and non-speech vocalisations, is also used on mobile devices for speech dialling or editing text messages (Karpov et al. 2006). Multi-modal interfaces, which combine a number of modalities such as head movement and speech for mobile users (Serrano et al. 2006), have also been suggested as possible input solutions and are gaining popularity (Koslover et al. 2012). Auditory feedback and haptic feedback are also used to improve text entry performance for mobile users (Brewster et al. 2007). Recently, some work has also been proposed that combines audio and vibration to allow users to explore graphical information on touchscreen interfaces (Giudice et al. 2012).

Regarding pointing, which on mobile devices must typically be performed without a mouse, a number of novel techniques and methods exist to address the problems of target acquisition. Auditory feedback (Brewster 2002) can be used to improve pointing performance for mobile users. Screen magnification software assists visually impaired users with pointer manipulation in two ways: by improving the pointer's visibility and by tracking and locating the pointer on the fly. Many user studies on small devices show the benefits of haptic feedback (Pirhonen et al. 2002, Poupyrev et al. 2002, Brewster et al. 2007). Similarly, vibro-tactile display feedback benefits mobile device users (Williamson et al. 2007). Novel techniques have also been proposed to address target acquisition on mobile devices. For example, the barrier pointing method was introduced by (Froehlich et al. 2007) to improve pointing accuracy. This method uses the screen edges, corners, and the screen surface to support faster and more accurate touch screen target acquisition. However, the evaluation of this method shows that overall target acquisition times were not statistically significantly different from the normal mode of stylus interaction on most mobile device touch screens. Further, as the memory and processing capacity of current small devices are sufficient to support client-side speech recognition, command and control (C&C) technology has been applied (Paek and Chickering 2007).

3 Current Research

The ecosystem of the accessible mobile landscape is fairly complex, involving the device, carrier, operating system, APIs, applications and, for disabled users, assistive technologies. The combination of these elements makes keeping pace with the rapid changes between them feel like chasing a moving target. Standards can go a long way towards solidifying, or at least abstracting, the changes in API, application and AT away from the developer and user. The papers presented at the Mobile symposium cover a broad range of topics including security, the Web as a platform, keyboard input, and Web standards. A common theme shared across the papers is that the rapidly changing mobile landscape creates both new challenges and opportunities for accessibility, with neither very well understood at present.

PAPER 1: Accessible Security for Mobile (E. Woodward and B. Cragun 2012)

A Bring Your Own Device (BYOD) policy allows for a broad range of mobile device usage and, with it, the need for an organization to support and secure all these mobile devices. The authors present several examples of organizations struggling with this problem, such as iOS's Siri exposing sensitive information without properly informing the user. Another example is a disabled user finding the software provided by the organization so difficult to use that s/he resorts to asking third parties, such as a friend, to enter authentication details. In this scenario both the disabled user and the organization are left vulnerable, as the intent of the security system - to verify the identity of, and restrict access to, a specific user - could then be circumvented by the third party. The authors also note that the need for accessible security reaches all users, as many mobile users now face situational disabilities from loss of focus, poor lighting conditions, keypad operation, and so on. Meeting the needs of users on one platform can be difficult enough, but the problem quickly grows in complexity when considering device, operating system, API, AT, and application, each potentially having accessibility and/or security issues. The authors suggest the solution is for organizations to create a well-defined strategy to cope with this dynamic and vast landscape. The strategy outlined includes defining and choosing appropriate standards, enforcing them, and especially testing them.

PAPER 2: Enabling Flexible User Interactions with Mobile Media (M. Wald, E.A. Draffan and Y. Li 2012)

Releasing a mobile application often requires targeting one or a few platforms, such as iOS or Android, because of the developer resources needed to build an often complex application. Targeting all mobile platforms for an app release has become difficult and often unrealistic as the number of devices and platforms continues to grow rapidly. The authors discuss this as a reason for taking advantage of the new native-like APIs/specifications found in HTML5 that allow cross-platform app releases. The intended benefit would be reaching a wider range of users rather than focussing development efforts on a single platform-specific app. The project in development is a mobile Web based video/audio annotation tool. The goal for the tool is to act as a learning aid for disabled students to, for example, take notes during lectures and later more efficiently navigate the video/audio using the annotations. The main limitation noted by the authors is the bandwidth available to mobile devices. To work past this limitation the authors use HTML5 media fragments to buffer only the small portion of the video/audio being watched by the user. The authors also mention frustrations with the current lack of consistent browser support for media fragments and the general state of ongoing flux in HTML5, making application development a struggle.
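
For readers unfamiliar with the technique, a media fragment is expressed in the URI itself; the following sketch (file name and time range are illustrative, not from the paper) uses the temporal dimension of the W3C Media Fragments syntax to request only one annotated segment:

```html
<!-- Sketch of a Media Fragments URI: #t=start,end identifies
     seconds 120-150 of the resource, so a supporting browser
     need only fetch and play that segment. The file name and
     time range are illustrative. -->
<video controls src="lecture.webm#t=120,150">
  <!-- Fallback content for browsers without HTML5 video support -->
  Your browser does not support HTML5 video.
</video>
```

An annotation tool of the kind described can thus map each note to a `#t=` range, letting a student jump straight to the relevant part of a lecture without buffering the whole recording.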

PAPER 3: Accessibility in Multi-Device Web Applications (C. Gonzalez, J. Rodriguez and C. Jay 2012)

The authors cover common types of errors users make while entering input on mobile devices. This work builds on past research from the RIAM project that looked at common input errors made by users on desktop computers, such as input timing, key combinations, miss-keying, and pointing/drag and drop. With the past findings from RIAM, the authors can apply these findings to mobile user data and, using a tool, extrapolate relationships between desktop and mobile input errors. The goal of the study is to better understand these errors and, in doing so, take a step towards identifying methods to reduce user error from mobile input. With a greater understanding of mobile input errors the authors hope to help all users improve mobile input efficiency, and in particular increase the mobile adoption rate of user groups such as elderly users.

PAPER 4: Assessment of Keyboard Interface Accessibility in Mobile Browsers (J. Richards, A. Benjamin, J. Silva and G. Gay 2012)

Both physical and virtual keypads can be accessible; virtual keypads, for example, use exploratory audio feedback to help a user navigate a UI. The authors use the UAAG2 standard as the basis for a checklist of mobile keypad accessibility in a study looking at combinations of device, OS, browser and popular Websites/apps representing the typical ecosystem for users accessing the Web. Requirements include checks such as focus indication, sequential navigation, operable controls, and so on. The authors' findings are summarized in a table outlining where different combinations failed the checklist. No combination of device, OS, etc. passes the test without at least some minor quirks, mainly to do with focus highlighting. The authors emphasize throughout their paper the need for awareness of mobile keypad accessibility and express fears that developer awareness appears to be trending away from this.
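
Checks such as focus indication and sequential navigation can be illustrated with a minimal sketch; the selectors, colour and control below are hypothetical rather than taken from the paper:

```html
<!-- Sketch of two keypad-accessibility checks: a visible focus
     indicator and a custom control that is reachable and operable
     from the keyboard. Selectors and colour are illustrative. -->
<style>
  a:focus, button:focus, [tabindex]:focus {
    outline: 3px solid #005a9c;   /* visible focus indication */
  }
</style>
<!-- tabindex="0" places the control in the sequential tab order -->
<span id="play" role="button" tabindex="0">Play</span>
<script>
  document.getElementById('play').addEventListener('keydown', function (e) {
    if (e.key === 'Enter' || e.key === ' ') {
      e.preventDefault();  // stop Space from scrolling the page
      this.click();        // same action as a tap
    }
  });
</script>
```

Native elements such as `<button>` provide the tab stop and key handling for free; the sketch shows what a custom control must add to pass the same checks.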

PAPER 5: Inclusive Mobile Experience: Beyond the Guidelines and Standards (K. Forbes 2012)

The W3C process is in place to create broadly applicable and authoritative standards, which go as far as defining legal requirements used by governments and organizations. The time needed to create consensus and test guidelines can be lengthy because of the need for accuracy and for guiding principles that can apply in many contexts. The author argues that W3C standards need to adapt and become more agile to keep current and continue to meet the needs of the end user. For example, she notes that mobile gestures are now a common interface on smart phones, yet WCAG2 has not yet incorporated gestures into the guidelines. Without defined accessible gestures, application developers are left with no choice but to pioneer accessible gesture techniques. The solution she presents is guidelines that move away from focussing on devices and applications and instead focus on more general principles, the idea being that these principles would apply in a more general, timeless way regardless of the speed of change in technology. The author also argues for the need for a more agile method of testing the guidelines, as they change, through user-based testing.

4 Emerging Themes

From the abstracts, the symposium discussion, and the authors' answers to questions (see Appendix), it is clear that there are three main interconnecting themes running through both the mobile and accessible domains. These three themes are mentioned, or alluded to, across all abstracts, so we can see that there is a degree of independent agreement between experts that they are important for the future. Firstly, we see that accessibility is possible on mobile devices, not just from a situational perspective but also from a conventional accessibility perspective. Secondly, we see the concept of device convergence being important across both domains. Finally, if this accessibility is possible and devices do start to converge, we see the need for a more inclusive and harmonised set of guidelines and standards.

4.1 Convergence of Fields

The overarching theme of the abstracts in this symposium was their discussion, both implicit and explicit, of the convergence of requirements, technologies, guidelines and best practice. We deal with the harmonisation of guidelines and best practice in the last section. However, it has become increasingly clear that the concerns of our community are beginning to run through those of the general mobile audience. As our phones, and the interfaces we use to control them, become increasingly complex; as the situations of their use become increasingly extreme; and as the technology becomes increasingly divergent; application developers are increasingly looking to harmonise. This desire for harmonisation is leading to the decoupling of program functionality from the lightweight interfaces which connect to it. This means that mobile convergence can be supported, and this convergence between accessibility and mobility is becoming increasingly real and attainable. Further, the requirements of use are also starting to mimic those faced in the accessibility domain: outside environments require a degree of auditory interaction; more complex interfaces on small screen real-estate increase the need for voice control; and multi-step interactions impose increasing cognitive load; our frames of reference are becoming increasingly conjoined. But our converging domains benefit each other: accessible developments can be transferred and repurposed in the mobile domain, while the mobile domain forms the convergence business case for all devices, accessible ones included.

4.2 Accessibility in Use

Our participants shared the realisation that there is a perception in the community of mobile practitioners and developers that practical mobile accessibility is not really possible. This is clearly not the case, with developments such as Apple's VoiceOver leading the way in accessible mobile technology. Further, smartphone accessibility technology is also making general tasks such as notetaking more accessible through the development of braille touch-typing technologies on multi-touch screens. These advances include novel gestural interactions as well as more conventional 'Perkins'-style inputs. Indeed, technologies for 'mainstream' interaction (more properly, to overcome situational impairments), such as voice control (e.g. Apple's Siri), are also having knock-on benefits for disabled users and more general accessibility applications.

However, there has been a perception that keyboard interaction is not possible when soft keyboards are displayed on smartphone touch-screen style devices. We feel that this is not the case; our participants have demonstrated that mobile keyboard access is both possible and effective. We also realise that, while practically possible, it may be necessary to provide a certain amount of guidance and best practice regarding keyboard accessibility in the mobile domain. We feel that platform developers need to include explicit statements about keyboard interface accessibility being part of the platform's intended look-and-feel, and to include keyboard interface testing tools in their Quality Assurance suites. Indeed, from a purely marketing perspective it is in the platform developers' best interests to maintain interoperability on their platforms. For instance, Google might want to remind its developers that some Android phones continue to include physical D-pads, and Apple might remind its developers that external Bluetooth keyboards are supported. Accessibility is one way of maintaining good interoperability, because if an application is accessible then it is also more likely to be interoperable and device independent.
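To illustrate the kind of guidance discussed above, the following sketch (in JavaScript; the function and handler names are our own, not from any platform SDK) shows one common pattern for keeping a custom Web control usable from both touch and keyboard input, assuming an Enter-or-Space activation convention like that of native buttons:

```javascript
// Decide whether a key press should activate a custom control, mirroring
// native buttons, which respond to Enter and Space. Older browsers report
// the space key as "Spacebar" rather than " ".
function isActivationKey(key) {
  return key === "Enter" || key === " " || key === "Spacebar";
}

// Route touch, mouse, and keyboard input through one shared activation
// handler, so the control keeps working with D-pads and Bluetooth keyboards.
function attachActivatable(element, onActivate) {
  element.addEventListener("click", onActivate); // fired for touch and mouse
  element.addEventListener("keydown", function (event) {
    if (isActivationKey(event.key)) {
      event.preventDefault(); // stop Space from scrolling the page
      onActivate(event);
    }
  });
}
```

A platform Quality Assurance suite could then exercise such controls with synthesised keydown events as well as taps, catching keyboard regressions automatically.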

4.3 Harmonised Standards

Our participants think that integrating findings from accessibility research into the mobile platform is timely, and that the opportunity to transfer lessons learnt and experience gained to the mobile domain should not be missed. Indeed, there is great potential for reciprocity and interoperability between the mobile and accessible domains, and many of the lessons learnt from accessibility will be relevant to the mobile platform. Accessibility practitioners have been researching, innovating, and building device-independent resources, in all but name, for many years; their expertise can, and should, be leveraged in efforts to develop the mobile platform. Conversely, the mobile platform is important from an accessibility standpoint, because it addresses many of the same issues and is likely to be the focus of a significant development effort. Our aim should be to identify this reciprocity and the ways in which mobility and accessibility can interoperate, developing a harmonised set of guidelines and standards to support it.

Work in the fields of accessibility and device independence (the interaction domain) is extensive. By gathering and summarising relevant research within these fields, we can provide the basis for future integration. By finding the intersection of current work from these different, but related, fields we can start to investigate just how mobility and accessibility can interoperate. Studying these commonalities will enable us to delineate the extent of the intersection and to identify techniques which may be useful in one field but missing in another. Because the mobile platform is younger than accessibility, guidelines and best practice may at first flow into the mobile domain, thereby speeding its development. However, as the mobile platform matures, an extended set will flow back into device and platform accessibility, owing to the greater research and development effort focused on the mobile domain.

5 Future Directions

We are interested in capturing relevant research questions for our future directions as a follow-up to our Web symposium on mobile accessibility, and in highlighting potential questions for future RDWG Web symposia. We note that other roadmaps have already been published in this space, including "Standards for Web Applications on Mobile: current state and roadmap" as well as those developed by the WAI and oriented to the practitioner.

Based on the findings of the RDWG Mobile Web symposium and the discussions conducted in the RDWG, here we provide a research roadmap in terms of areas that need attention and approaches that need to be taken. The areas that need attention can be summarised as follows:

Unified Web: There is a wide variety of devices used to access the Web, and this variety is envisioned to increase in the future. However, we do not know how Web accessibility will be affected by this variety of devices. People with disabilities have accessibility problems in accessing the Web on the Desktop, and they have similar problems when they access the Web with their mobile devices. It is not known how the new devices used to access the Web will affect accessibility for people with disabilities. Therefore, it is important to develop a research agenda that emphasises the development of guidelines, methods, and techniques such that a unified Web would be accessible. We need to better research the scope and limitations of the existing guidelines, tools, and techniques, and foresee how all these could be extended and adapted to support these new devices.

Convergence of Technologies: The research agenda must also consider that mobile platforms are becoming smarter and smarter. It is envisioned that mobile platforms will soon have the capabilities of Desktop devices, and will exceed them in the future. Our research agenda therefore needs to recognise that a convergence is occurring between the technologies available on Desktop and mobile platforms, and to investigate how well the existing guidelines, techniques, and methods are suited to such converging technologies.

Migration of Technologies: Many technologies were developed for the Desktop and then migrated to mobile devices. We also see a similar trend in the other direction: many technologies that were mainly developed for mobile devices are now migrating to the Desktop. Our research agenda needs to keep this in mind, foresee such migrations, and ensure that accessibility is always part of the migration process.

Based on these areas discussed above, we also need to consider taking the following approaches:

Look Ahead! The pace of development in the mobile Web field is quite fast, which means that, as a Web accessibility research community, we need to look ahead. There are many areas where we still need significant developments, for example text customisation, device-independent user interfaces, multimodal interaction support, alternative interfaces, and integration with the Open Web Platform. While these areas are developing, we need to look ahead and make sure that accessibility is considered from the beginning of these developments, rather than being an afterthought.

Don't Lag Behind! As an accessibility community we must always pay attention to newly developed technologies. As the mobile Web community looks ahead to future technologies in development for the mobile Web ("Standards for Web Applications on Mobile: August 2011 Current State and Roadmap"), we also need to keep a close eye on newly developed technologies. As a research community, we need to anticipate what will happen and make technology adoption shorter and quicker; we need to shorten the time it takes to develop accessibility support for newly developed technologies, to monitor the technologies that will be built, and to ensure that accessibility is well integrated into them.

Be Agile! Agile and rapid development techniques are very popular in the mobile Web field. This means that if we want accessibility to be part of such development, then as a Web accessibility research community we too need to provide rapid and agile guidelines, techniques, and methods to support such development processes. We therefore need to research how such agile methods, guidelines, and techniques can be provided to the mobile Web community, and also to the future ubiquitous Web community.

6 Conclusions

Mobile devices provide many opportunities for everybody to access the Web, especially people with disabilities. At the same time, there are many accessibility barriers to mobile access. The RDWG organised a symposium to bring researchers and practitioners together to discuss the accessibility challenges associated with the mobile Web. The primary objective was to gather, analyse, and discuss practical experience (accessibility in use) and in particular to investigate:

This note first discussed the state of the art in mobile Web accessibility and introduced the papers presented at the symposium. These five papers cover a broad range of topics, including input issues, security limitations, alternative views of the Web, the Web as a platform, and Web standards. Based on these papers and the discussions conducted at the symposium, three emerging themes came to light:

Based on these emerging themes, this note suggests that the research agenda needs to consider new future directions and adopt new approaches. In summary, the future directions need to consider the unified Web, the convergence of technologies, and the migration of technologies between platforms. Finally, the research agenda needs to look ahead to future developments, not lag behind, and be agile.

7 References

   Shawn Lawton Henry. Introduction to Web Accessibility. W3C, 2005. http://www.w3.org/WAI/intro/accessibility.php.

   Ready.mobi. dotMobi, 2010a. http://ready.mobi.

   TAW MobileOK Basic Checker. CTIC, 2010b. http://validadores.tawdis.net/mobileok/en/.

   Ubiquitous Web. W3C, 2012. http://www.w3.org/UbiWeb/.

   Funka Nu AB. Guidelines for the development of accessible mobile interfaces. Funka, 2012. http://www.funkanu.se.

   Android. Accessibility. Android, 2012. http://developer.android.com/guide/topics/ui/accessibility/index.html.

   Apple. Accessibility Programming Guide for iOS. Apple, 2012. http://developer.apple.com.

   M. Arrue, M. Vigo, and J. Abascal. Automatic evaluation of mobile web accessibility. In Universal Access in Ambient Intelligent Environments, UI4All 2006, LNCS 4397, pages 244–260. Springer, 2007.

   Blackberry. Understanding Accessibility. BlackBerry, 2012. http://docs.blackberry.com/en/developers/deliverables/20100/.

   Stephen Brewster. Overcoming the lack of screen space on mobile computers. Personal Ubiquitous Comput., 6(3):188–205, 2002. ISSN 1617-4909. doi: http://dx.doi.org/10.1007/s007790200019.

   Stephen Brewster, Faraz Chohan, and Lorna Brown. Tactile feedback for mobile interaction. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 159–162, 2007.

   Lee Butts and Andy Cockburn. An evaluation of mobile phone text input methods. Aust. Comput. Sci. Commun., 24(4):55–59, 2002. doi: http://doi.acm.org/10.1145/563997.563993.

   Scott Carter, Amy Hurst, Jennifer Mankoff, and Jack Li. Dynamically adapting GUIs to diverse input devices. Proceedings of the 8th international ACM SIGACCESS conference on Computers and accessibility, pages 63–70, 2006.

   Minhee Chae and Jinwoo Kim. Do size and structure matter to mobile users? an empirical study of the effects of screen size, information structure, and task complexity on user activities with standard Web phones. Behaviour and Information Technology, 23(3):165–181, 2004.

   Duen Horng Chau, Jacob Wobbrock, Brad Myers, and Brandon Rothrock. Integrating isometric joysticks into mobile phones for text entry. CHI ’06 extended abstracts on Human factors in computing systems, pages 640–645, 2006.

   Alan Chuter and Yeliz Yesilada. Relationship between Mobile Web Best Practices (MWBP) and Web Content Accessibility Guidelines (WCAG). W3C, 2008. http://www.w3.org/TR/mwbp-wcag/.

   James Craig and Michael Cooper. Accessible Rich Internet Applications (WAI-ARIA). W3C, 2011. http://www.w3.org/TR/wai-aria/.

   Yanqing Cui and Virpi Roto. How people use the Web on mobile devices. In WWW ’08: Proceeding of the 17th international conference on World Wide Web, pages 905–914, New York, NY, USA, 2008. ACM. ISBN 978-1-60558-085-2. doi: http://doi.acm.org/10.1145/1367497.1367619.

   Andrew Dillon, John Richardson, and Cliff McKnight. The effects of display size and text splitting on reading lengthy text from screen. Behaviour and Information Technology, 9(3):215–227, 1990. URL http://www.ischool.utexas.edu/~adillon/Journals/bitpaper_files/Display%20size.htm.

   European Union Policy Survey. eAccessibility of Public Sector Services in the European Union, November 2005. URL http://www.cabinetoffice.gov.uk/e-government/resources/eaccessibility/index.asp.

   Steve Faulkner, Cynthia Shelly, and Jason Kiss. HTML to Platform Accessibility APIs Implementation Guide. W3C, 2012. http://www.w3.org/TR/html-aapi/.

   Jon Froehlich, Jacob O. Wobbrock, and Shaun K. Kane. Barrier pointing: using physical edges to assist target acquisition on mobile device touch screens. In Assets ’07: Proceedings of the 9th international ACM SIGACCESS conference on Computers and accessibility, pages 19–26, 2007. ISBN 978-1-59593-573-1. doi: http://doi.acm.org/10.1145/1296843.1296849.

   John Garofalakis and Vassilios Stefanis. Moke: a tool for mobile-ok evaluation of web content. In W4A ’08: Proceedings of the 2008 international cross-disciplinary conference on Web accessibility (W4A), pages 57–64, New York, NY, USA, 2008. ACM. ISBN 978-1-60558-153-8. doi: http://doi.acm.org/10.1145/1368044.1368058.

   Nielsen Norman Group. Usability of mobile Websites and applications: 237 design guidelines for improving the user experience of mobile sites and apps. Nielsen Norman Group, 2011.

   Nicholas A. Giudice, Hari Prasath Palani, Eric Brenner and Kevin M. Kramer. Learning non-visual graphical information using a touch-based vibro-audio interface. In Proceedings of the 14th international ACM SIGACCESS conference on Computers and accessibility, pages 103-110, 2012.

   Simon Harper and Yeliz Yesilada. Web accessibility and guidelines. In Simon Harper and Yeliz Yesilada, editors, Web Accessibility: A Foundation for Research, Human-Computer Interaction Series, chapter 6, pages 61–78. Springer, London, 1st edition, September 2008. ISBN 978-1-84800-049-0. doi: http://dx.doi.org/10.1007/978-1-84800-050-6_6.

   Shawn Lawton Henry. Web content accessibility guidelines overview. W3C, 2012. http://www.w3.org/WAI/intro/wcag.php.

   Ken Hinckley. Input technologies and techniques, chapter 7, pages 152–164. Lawrence Erlbaum Associates, 2003.

   I. Jacobs, J. Gunderson, and E. Hansen. User Agent Accessibility Guidelines 1.0 (UAAG). W3C, 2002. http://www.w3.org/TR/UAAG10/.

   Christina L. James and Kelly M. Reischel. Text input for mobile devices: comparing model prediction to actual performance. In CHI ’01: Proceedings of the SIGCHI conference on Human factors in computing systems, pages 365–371, New York, NY, USA, 2001. ACM Press. ISBN 1-58113-327-8. doi: http://doi.acm.org/10.1145/365024.365300.

   E. Karpov, I. Kiss, J. Leppänen, and J. Olsen. Short message dictation on Symbian Series 60 mobile phones. In Proceedings of the 8th international conference on multimodal interfaces, pages 126–127, 2006.

   Rebecca L. Koslover, Brian T. Gleeson, Joshua T. de Bever, and William R. Provancher. Mobile navigation using haptic, audio, and visual direction cues with a handheld test platform. IEEE Transactions on Haptics, 5(1):33–38, 2012.

   Min Lin, Rich Goldman, Kathleen J. Price, Andrew Sears, and Julie Jacko. How do people tap when walking? an empirical investigation of nomadic data entry. Int. J. Hum.-Comput. Stud., 65(9):759–769, 2007. ISSN 1071-5819. doi: http://dx.doi.org/10.1016/j.ijhcs.2007.04.001.

   I. Scott MacKenzie and R. William Soukoreff. Text entry for mobile computing: models and methods, theory and practice. Human-Computer interaction, Volume 17:147–198, 2002.

   Jennifer Mankoff, Anind Dey, Udit Batra, and Melody Moore. Web accessibility for low bandwidth input. Proceedings of the fifth international ACM conference on Assistive technologies, pages 17–24, 2002.

   Ethan Marcotte. Responsive Web Design. A Book Apart, 2011. ISBN 978-0-9844425-7-7.

   Ignacio Marin and Abel Rionda. W3C mobileOK Basic Tests 1.0 Checker (beta release) user manual. Technical report, World Wide Web Consortium (W3C), http://dev.w3.org/2007/mobileok-ref/mobileOK-Basic-RI-1.0-UserManual.pdf, 2008.

   Microsoft. Designing Applications for Windows Mobile Platforms. Windows, 2012. http://msdn.microsoft.com/en-us/library/bb158602.aspx.

   Brad Myers, Jacob Wobbrock, Sunny Yang, Brian Yeung, Jeffrey Nichols, and Robert Miller. Using handhelds to help people with motor impairments. In Fifth International ACM SIGCAPH Conference on Assistive Technologies, ASSETS 2002, pages 89–96, 2002.

   Nokia. Nokia Accessibility. Nokia, 2012. http://www.nokiaaccessibility.com/.

   Opera. Opera Developer Tools. Opera, 2012. http://www.opera.com/developer/tools/mobile/.

   Antti Oulasvirta, Sakari Tamminen, Virpi Roto, and Jaana Kuorelahti. Interaction in 4-second bursts: the fragmented nature of attentional resources in mobile HCI. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 919–928, 2005.

   Sean Owen and Jo Rabin. W3C mobileOK Basic Tests 1.0. W3C, 2008. http://www.w3.org/TR/mobileOK-basic10-tests/.

   Tim Paek and David Maxwell Chickering. Improving command and control speech recognition on mobile devices: using predictive user models for language modeling. User modeling and user-adapted interaction, Volume 17:93–117, 2007.

   Antti Pirhonen, Stephen Brewster, and Christopher Holguin. Gestural and audio metaphors as a means of control for mobile devices. In CHI ’02: Proceedings of the SIGCHI conference on Human factors in computing systems, pages 291–298, New York, NY, USA, 2002. ACM Press. ISBN 1-58113-453-3. doi: http://doi.acm.org/10.1145/503376.503428.

   Ivan Poupyrev, Shigeaki Maruyama, and Jun Rekimoto. Ambient touch: designing tactile interfaces for handheld devices. In UIST ’02: Proceedings of the 15th annual ACM symposium on User interface software and technology, pages 51–60, New York, NY, USA, 2002. ACM Press. ISBN 1-58113-488-6. doi: http://doi.acm.org/10.1145/571985.571993.

   Jo Rabin and Charles McCathieNevile. Mobile Web Best Practices 1.0, 2005. http://www.w3.org/TR/mobile-bp/.

   J. F. Reisel and Ben Shneiderman. Is bigger better? the effects of display size on program reading. In Proceedings of the Second International Conference on Human-Computer Interaction, pages 113–122, 1987.

   Virpi Roto and Antti Oulasvirta. Need for non-visual feedback with long response times in mobile HCI. In WWW ’05: Special interest tracks and posters of the 14th international conference on World Wide Web, pages 775–781, New York, NY, USA, 2005. ACM Press. ISBN 1-59593-051-5. doi: http://doi.acm.org/10.1145/1062745.1062747.

   Andrew Sears and Mark Young. Physical disabilities and computing technologies: an analysis of impairments. In The Human-Computer Interaction Handbook, pages 482–503. Lawrence Erlbaum Associates, 2003.

   Marcos Serrano, Laurence Nigay, Rachel Demumieux, Jérôme Descos, and Patrick Losquin. Multimodal interaction on mobile phones: development and evaluation using ACICARE. In Proceedings of the 8th conference on Human-computer interaction with mobile devices and services MobileHCI ’06, pages 129–136, 2006.

   Miika Silfverberg, I. Scott MacKenzie, and Panu Korhonen. Predicting text entry speed on mobile phones. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 9–16, 2000.

   Shari Trewin. Physical usability and the mobile web. In W4A: Proceedings of the 2006 international cross-disciplinary workshop on Web accessibility (W4A), pages 109–112, New York, NY, USA, 2006. ACM Press. ISBN 1-59593-281-X. doi: http://doi.acm.org/10.1145/1133219.1133239.

   Markel Vigo, Amaia Aizpurua, Myriam Arrue, and Julio Abascal. Automatic device-tailored evaluation of mobile Web guidelines. New Review of Hypermedia and Multimedia, 15(3): 223–244, 2009.

   WebAIM. Survey of Preferences of Screen Reader Users, 2012. http://webaim.org/projects/screenreadersurvey/.

   Wireless RERC (Rehabilitation Engineering Research Center). Wireless Device Use by People with Disabilities, 2013. http://www.wirelessrerc.org/content/publications/2013-sunspot-volume-1-wireless-device-use-people-disabilities.

   John Williamson, Roderick Murray-Smith, and Stephen Hughes. Shoogle: excitatory multimodal interaction on mobile devices. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 121–124, 2007.

   Jacob O. Wobbrock. The future of mobile device research in HCI. In CHI’06 Workshop, Canada, 2006.

   Jacob O. Wobbrock, Brad A. Myers, Htet Htet Aung, and Edmund F. LoPresti. Text entry from power wheelchairs: edgewrite for joysticks and touchpads. In Proceedings of the 6th international ACM SIGACCESS conference on Computers and accessibility, pages 110–117, 2004.

   Christopher C. Yang and Fu Lee Wang. Fractal summarization for mobile devices to access large documents on the Web. In Proceedings of the Twelfth International World Wide Web Conference, 2003.

   Yeliz Yesilada, Alan Chuter, and Shawn Lawton Henry. Shared Web Experiences: Barriers Common to Mobile Device Users and People with Disabilities. W3C, 2008. http://www.w3.org/WAI/mobile/experiences.

8 Symposium Proceedings

Research Report on Mobile Web Accessibility

This document should be cited as follows:

S. Harper, P. Thiessen, Y. Yesilada, Research Report on Mobile Web Accessibility. 
W3C WAI Research and Development Working Group (RDWG) Notes. (2012)
Available at: http://www.w3.org/TR/mobile-accessibility-report

The latest version of this document is available at:


A permanent link to this version of the document is:


A BibTex file is provided containing:

@incollection {mobile-accessibility-report_FPWD,
  author = {W3C WAI Research and Development Working Group (RDWG)},
  title = {Research Report on Mobile Web Accessibility},
  booktitle = {W3C WAI Symposium on Mobile Web Accessibility},
  publisher = {W3C Web Accessibility Initiative (WAI)},
  year = {2012}, month = {December},
  editor = {Simon Harper and Peter Thiessen and Yeliz Yesilada},
  series = {W3C WAI Research and Development Working Group (RDWG) Notes},
  type = {Research Report},
  edition = {First Public Working Draft},
  url = {http://www.w3.org/TR/mobile-accessibility-report},
}

Contributed Extended Abstract Papers

The links provided in this section, including those in the BibTex files, are permanent; see also the W3C URI Persistence Policy.

     title = {W3C WAI Symposium on Mobile Web Accessibility},
     year = {2012},
     editor = {W3C WAI Research and Development Working Group (RDWG)},
     series = {W3C WAI Research and Development Working Group (RDWG) Symposia},
     publisher = {W3C Web Accessibility Initiative (WAI)},
     url = {http://www.w3.org/WAI/RD/2012/mobile/},

9 Acknowledgements

Participants of the W3C WAI Research and Development Working Group (RDWG) involved in the development of this document include: [Alphabetical List of Contributors]

RDWG would also like to thank the chairs and scientific committee members as well as the paper authors of the RDWG online symposium on Mobile Web Accessibility.

10 Appendix

10.1 Question and Answers

Written answers to the main questions posed in the Symposium.

PAPER 1: Accessible Security For Mobile

1. What particular security concern(s) are raised by the combination of the mobile Web and users with disabilities for social engineering vulnerabilities? What is the connection between accessibility in mobile devices and the security dangers and problems specifically faced by people with disabilities?

One of the big problems that we've encountered is that devices are made secure, but at the expense of someone who is disabled or situationally disabled being able to actually use their device. As an example, we evaluated a containerization solution -- a solution that partitions a smartphone into a work partition and a personal partition so that users and the enterprise can separate work applications and data from personal apps and data. The first challenge we saw was that the container solution couldn't be installed by someone who was blind. Had the solution been deployed at that stage, a user could not have used the Eyes-Free Shell they were used to in order to access the content in their work partition, because they were given a choice between the Eyes-Free Shell and the work partition. The navigation of the container system and the applications provided in the work partition were not accessible, and when the assistive technology was running, authentication to the container crashed, so you could no longer authenticate to get into the container. Beyond that, assistive technologies had to be recompiled and installed as a separate namespace in each partition. So, the problem is more likely that the device can be made secure but is unusable by someone who has a disability. Consider biometrics. If I can't use fingerprints and the authentication solution is fingerprint authentication, the mobile solution is completely impractical for me and renders the device useless unless there's an alternative.

From the other perspective, we've seen assistive technologies that created security issues. Consider Siri: Siri is being blocked within IBM due to security concerns. First, IBM has issues with content being sent to the Apple cloud, but also, by default, Siri can (or could) be used even when the smartphone is locked with a passcode. I'm not sure if that's still true, but it's an example of the impact.

2. Regarding the “key areas that need to be addressed in order to create an accessible mobile security” list mentioned: are these concerns specific to disabled users?

More and more, particularly with mobile, we see a blurring of the lines between serving those who have disabilities and those who don't. With mobile, everyone is experiencing situational disabilities. People who are driving their cars shouldn't be using their hands when they're trying to text and shouldn't be trying to look at their tiny screens and scroll around while driving. They can't use their hands, they can't use their eyes. They're experiencing situational impairments. Someone who is in a noisy airport can't hear their call very well. That's a situational hearing impairment. If the lighting isn't right, they may have challenges with seeing what's on their screen. If they're traveling, they may only have one hand free to use the device. The list goes on and on. Many of the techniques and concerns that we have in our focus on persons with disabilities are now impacting most people using mobile devices. We have already seen a case where password restriction was too difficult for persons who were blind or mobile. People stopped using the application which was in a limited pilot. So, I would say that the concerns we have identified are specific to disabled users and those with situational disabilities.

3. What steps towards including the needs of disabled users in cross-mobile-platform security standardization best practices would you recommend?

We tell everyone that the biggest step is to engage disabled users, or to engage people with the expertise to speak on behalf of disabled users. We came into one pilot that had performed a user needs study and had not included any people with disabilities. This is a problem for persons with disabilities, for people who will experience situational disabilities, and for the IT staff, chief security officer, and so on, who won't get the input they need to make informed decisions about the deployment.

Secondly, build accessibility into the enterprise mobile security solution. Often we see accessibility treated as something to be done to a project after it is complete. We know that making changes earlier is less costly than trying to fix systems after they've been delivered.

We have several other recommended steps as mentioned in the paper which we can go over in more detail, but in the interest of time, we can summarize here.

  Steps include:
  1. Updating any local instructions, guidance, and education related to accessible and secure mobile computing.
  2. Determining which combinations (OS, device, carrier, assistive technologies) will be supported.
  3. Making decisions about how accessibility will be supported with ATs, which may require third-party assistive technologies, and evaluating those technologies.
  4. Building accessibility into any new security technologies (biometrics, containers, hypervisor solutions, and so on).
  5. Determining how compliance monitoring, enablement, and enforcement will be done.

If anyone is interested in more discussion on any of the steps listed, we can go into those in greater detail.

4. Would you like to see more of a security focus/emphasis in guidelines like WCAG, or would following WAI guidelines (WCAG, WAI-ARIA) be sufficient as the guidelines/techniques to be applied?

At this time, we are not calling for any new, special mobile security focus in the WCAG guidelines. The guidelines for accessibility are developing, and IBM has made input to the process. Generally we see the WCAG and WAI-ARIA standards as the right direction. The newly started Independent UI standards effort will provide another important piece of accessibility on mobile devices.

5. Do you feel there's a particular security concern for disabled users in respect to GEO location tracking?

Not really, but there is definitely a concern in general. Geolocation apps that identify a user's physical location are widely used in many industries -- banking, retail, travel. They are even used by some law enforcement and to prevent fraud. They can be a good thing, but they also introduce new risks. If someone knows you, your gender, race, where you work, and so on, and can combine that with geolocation, they can figure out where you are, which can leave you open to burglary, stalking, or worse. I think it's an issue for everyone these days, and more so if you can't see who is stalking you, can't hear someone getting close, or have a mobility impairment that prevents you from reacting the way you would like.

6. What do you mean by "weakest links"? Which are they? [from Cristina Gonzalez Cachon]

This phrase "weakest links" comes from a quote by Kevin Mitnick, who was the world’s most wanted computer hacker back in the 1990s. The quote reads “Companies spend millions of dollars on firewalls, encryption and secure access devices and it’s money wasted because none of these measures address the weakest link in the security chain.” That weakest link is us. It speaks to the human side of security. We can build the technology, but if we haven’t looked at how the user interacts with the system and accounted for and addressed their interactions with the technology, we still haven’t addressed the security challenges. It further implies that if the security doesn't work well, people find a way to circumvent it, which lessens the security. The "yellow sticky" solution for secure passwords can make them less secure.

7. Which is the purpose of "remote monitoring" as commented in Open Research Avenue? [from Cristina Gonzalez Cachon]

There are many opportunities for research, including new accessible security authentication techniques, biometric authentication, remote monitoring and personalization of devices, and the creation of frameworks for delivering accessible and secure mobile applications. The area around remote monitoring focuses on how an enterprise can use mobile device monitoring capabilities, in combination with software inventory and distribution techniques, to understand the configuration of the mobile device: the hardware, the operating system, the assistive technologies installed and what is in use. With that information it can either provide guidance to the end user or actually make the mobile device more accessible. It could be that someone's device is configured perfectly for their personal daily use, but when they try to use it for work, they discover that it doesn't provide the accessibility needed. So the research here is in understanding the combinations, their interactions with work-related software, and how mobile device management and monitoring tools that help with security can be adapted to better meet the needs of users.
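To make the remote-monitoring idea concrete, the kind of check an enterprise tool might perform can be sketched as below. This is a hypothetical illustration, not an existing product API: the inventory fields, the work-profile requirements and the function name are all invented.

```javascript
// Hypothetical sketch: compare a device's reported configuration (as a
// mobile device management inventory might expose it) against a work
// profile's accessibility requirements and report the gaps.
const workProfile = { screenReaderRequired: true, minFontScale: 1.5 };

function accessibilityGaps(inventory, profile) {
  const gaps = [];
  if (profile.screenReaderRequired && !inventory.screenReaderEnabled) {
    gaps.push("screen reader not enabled");
  }
  if (inventory.fontScale < profile.minFontScale) {
    gaps.push("font scale below the required minimum");
  }
  return gaps; // guidance that could be shown to the user or auto-applied
}
```

An empty result would mean the device, as configured, meets the profile; anything else is a candidate for user guidance or automatic reconfiguration.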

Additional Questions Asked

8. General 1: Samantha Bird - Are there any recommended tools to use with mobile accessibility testing?

Automated tools for testing mobile accessibility are limited. Apple has the Accessibility Inspector, but it is interactive, not automated. Web-based apps can be tested with the help of user-agent switching. This is an area that needs research and development.

9.  General 2. James Carter - What issues does responsive design generate for accessibility?

This is not something we have researched, but our development teams are starting to use responsive design. Responsive design, in theory, should be compatible with and even help accessibility. We assume you're talking about Web apps (we are unaware of responsive design for native apps). Generally, the WAI-ARIA and HTML5 markup for main content, navigation, and search needs to be correct regardless of how the responsive design displays. The user still needs to navigate and find headings and other content properly marked up.

10. General 3: Shiv D - Importance of Mobile Accessibility and Effort to apply

One need only consider the tipping point at which more than half of Internet access comes from mobile devices. If mobile devices and apps are not accessible, we leave out a large user base.

11. General 4: Marina Buzzi - I'm very interested in accessibility of dynamic apps on mobile devices for blind users

When you say dynamic apps, do you mean responsive design? See General 2 above. Generally, the issues on mobile devices for blind users are finding the data, especially data that updates on the screen. In a model where the user has to touch content to hear it, we need to consider the HTML5/WAI-ARIA model that surfaces alerts to the user based on a tiered level of importance.
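The tiered-importance model mentioned above is expressed in WAI-ARIA with live regions. A minimal sketch (the element IDs and messages are illustrative):

```html
<!-- Routine updates: announced when the screen reader is idle. -->
<div id="status" aria-live="polite">3 new messages</div>

<!-- Urgent updates: role="alert" implies an assertive live region,
     so the screen reader announces the change immediately. -->
<div id="session-warning" role="alert">Your session expires in one minute.</div>
```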

PAPER 2: Enabling Flexible User Interactions With Mobile Media

1. Do you have any sense of the scale of benefits we may expect if annotation of the media is easy and able to occur in parallel to its rendering? and for what user group? especially for people with disabilities?

Streaming annotations synchronised with audio and video provide immediate access to key points of a subject. They would not only help the 80% of disabled people who have specific learning difficulties, by reducing the memory load, but also provide vital access to the media for those with visual and hearing impairments, and may reduce some complexities in content for individuals with other cognitive impairments. Realistically, streaming any text with audio and video can help all users and supports the view that multimodal learning can enhance the uptake of knowledge. This would be applicable to all those in a teaching and learning situation and to many who are viewing and listening to items for pleasure. Providing annotations that are synchronised with the media allows search engines to find information and thus researchers to find sections of media more speedily. Parallel rendering is implemented in some native apps, but on the Mobile Web it is still very difficult. Currently, it relies on the content provider to make the media file accessible on mobile devices; for example, they need to provide the metadata, closed captioning and a suitable format that can be delivered via lower bandwidth. Integrating user-generated content within this process will allow users to customise the media file and enhance the annotation-making experience. Users with different requirements, such as disabilities, can then choose how the media is delivered and presented.

2. What are the accessibility benefits in HTML5 that lead you to select it as a solution?

HTML5 is a cross-platform Web standard that enables the use of descriptive and navigational elements and accessible media players that should not be browser dependent. So HTML5 to some degree solves the compatibility problems across mobile devices, which is part of the problem that accessibility on mobile platforms is trying to address. HTML5, along with other standards such as Microdata, WebVTT, Media Fragment URI, etc., enriches the metadata about multimedia resources, so that applications can use this metadata (or these annotations) to make multimedia more accessible on mobile devices. It also opens possibilities of using semantic Web technology to improve the accessibility of multimedia apps on mobile platforms.
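As a small illustration of how these standards combine, the sketch below pairs an HTML5 video element with a WebVTT caption track (the file names are invented; a real page would also offer formats suited to lower bandwidth):

```html
<video controls>
  <source src="lecture.mp4" type="video/mp4">
  <!-- Captions delivered as a separate, searchable text resource. -->
  <track kind="captions" src="lecture.vtt" srclang="en" label="English" default>
</video>
```

with the WebVTT file containing timed cues such as:

```
WEBVTT

00:02:00.000 --> 00:02:04.000
Welcome to part two of the lecture.
```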

3. What makes you think mobile media annotation is necessary, especially if mobile viewing is mostly of fragments or shorts?

The fact that you can search for the fragments via the annotations and thus connect with chosen content more easily. Viewing a whole video from beginning to end, for example a 45-minute lecture, is most of the time not necessary, especially on mobile devices. It is more user-friendly if the video is indexed by annotations linking to specific time points, so users can choose which part they want to see. In addition, mobile access to multimedia resources is largely limited by bandwidth, battery consumption, screen resolution, etc. So delivering only the fragments also saves bandwidth and battery.
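The annotations linking to specific time points can be expressed with W3C Media Fragments temporal URIs. A sketch, with an invented file name (in supporting implementations, only the requested range need be fetched):

```html
<!-- Plays only seconds 720-930 of the lecture (12:00 to 15:30). -->
<a href="lecture.mp4#t=720,930">Chapter 3 (12:00-15:30)</a>
```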

4. What kinds of annotations (speech, textual, bookmark, etc) are you interested in and how can these be enacted in a constrained - and possibly noisy - mobile setting?

Textual annotations and bookmarks are not affected by so-called noisy mobile settings and are ideal for mobile presentations when enacted in a bite-sized way. We are interested in user-generated annotations, such as comments and tags. We also need to add transcripts to multimedia resources.

5. You say 'once the file is downloaded the media player mobile device is activated and no further interaction between the Web page or any other application can take place.' How does this apply to streaming media?

Mobile applications that include players tend to stream the media in isolation, with no chance to pause and add items whilst the process is taking place. Here the "interaction" mainly means that the media player will take up the whole screen and users will not be able to do any other operations until they stop the player. Certainly, mobile devices can play streaming media without the file being fully downloaded. However, when you play it, the player will still take up the whole screen. One example is the BBC iPlayer. Currently, there is no easy way to get around that except by developing native apps.

6. SMIL seemed to offer some solutions to this - why discount this, and why has SMIL fallen by the wayside - don't opportunities exist in the utilisation of these technologies in this new and growing domain?

Yes, SMIL offers some solutions to this problem. But as far as we know, SMIL is not well supported on mobile devices, even though the SMIL specification itself defines a mobile profile extension in version 2.1 (see http://www.w3.org/TR/SMIL2/smil21-mobile-profile.html). One big problem with SMIL is that it is not supported natively by Web browsers.

Additional Questions Asked

7. What limitations exist in the implementation of the W3C Media Fragments Specification? As far as I understand, you are proposing a mechanism to annotate media while playing it. [from Cristina Gonzalez Cachon]

The W3C Media Fragments Specification requires the cooperation of the client, proxy and server to deliver a certain byte range of the media file. There are many patterns by which the byte range can be delivered. The core part of this process is the mapping from time, spatial region and track to the correct byte range, which needs to be done on either the client or the server side. Unfortunately, there are currently only a few implementations, and they are limited to certain codecs and browsers. On the client side, Firefox 10+ supports the temporal fragment. Ninsuna is a server-side implementation of the Media Fragments Specification.

8. Is this use case out of the scope of the Media Fragments WG? Is it in the scope, but not yet implemented? [from Cristina Gonzalez Cachon ]

The Media Fragments WG doesn't address multimedia on mobile devices directly. But in the working group's requirements document (http://www.w3.org/TR/media-frags-reqs/#scenario3.5), it is clear that one important use of media fragments will be on mobile devices. Another important statement given by the working group is: "There is often a lot of textual information available that relates to the media resource. Enabling the addressing of media fragments ultimately creates a means to attach annotations to media fragments" (see http://www.w3.org/TR/media-frags-reqs/#uc4).

PAPER 3: Accessibility In Multi-device Web Applications

1. How does your work relate to the symposium objectives and research questions?

Our work is focused on a specific technical challenge related to mobile accessibility: analyzing techniques which help to solve or mitigate existing text input problems for mobile Web users. We take a broad view of accessibility: as the RIAM project showed, text entry problems affect all users on mobile devices, but people with motor impairments will be particularly affected, so a target group for our investigations will be older users who have trouble with text entry on mobile phones. We hypothesize that by understanding and helping this group, we will be helping users universally who experience similar problems to a lesser extent. This challenge is related to Guideline 3.3 ("Input Assistance: Help users avoid and correct mistakes") of WCAG 2.0; more specifically, it is focused on Success Criterion 3.3.3, "Error Suggestion". One of the main difficulties of this problem is dealing with the great diversity of hardware and software text input mechanisms. The appearance of new interaction models such as touch interfaces has led to the development of a great number of different keyboards, each of them presenting its own text input problems. Our work will contribute to understanding the state of mobile Web accessibility with regard to text input mechanisms, i.e. detecting common problems in different families of devices and suggesting possible techniques to solve them.
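As one illustration of the kind of error-suggestion technique this line of work surveys, correction candidates can be ranked with an edit distance that penalises a substitution less when the two keys are physically adjacent on the keyboard. This is a hypothetical sketch, not the project's actual algorithm; the partial QWERTY neighbour map and the tiny dictionary are invented for illustration.

```javascript
// Partial QWERTY neighbour map (illustrative, lowercase letters only).
const ADJACENT = {
  q: "wa", w: "qesa", e: "wrds", r: "etfd", t: "rygf",
  a: "qwsz", s: "awedxz", d: "serfcx", f: "drtgvc", g: "ftyhbv",
};

// Edit distance where an adjacent-key substitution costs 0.5 instead of 1.
function typingDistance(a, b) {
  const m = a.length, n = b.length;
  const dp = Array.from({ length: m + 1 }, (_, i) =>
    Array.from({ length: n + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)));
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      let sub = 1;
      if (a[i - 1] === b[j - 1]) sub = 0;
      else if ((ADJACENT[a[i - 1]] || "").includes(b[j - 1])) sub = 0.5;
      dp[i][j] = Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + sub);
    }
  }
  return dp[m][n];
}

// Suggest the dictionary word closest to the mistyped input.
function suggest(word, dictionary) {
  return [...dictionary].sort(
    (x, y) => typingDistance(word, x) - typingDistance(word, y))[0];
}
```

For example, suggest("tewt", ["text", "test", "best"]) prefers "test", because w and s are adjacent keys, whereas w and x are not.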


2. In your abstract you make an assertion that the user needs on the desktop can be directly transposed to the mobile context. Can you please elaborate on this? Isn't this a strong assertion and how do you know if it is a correct assertion?

We should rewrite the sentence to say that SOME of the desktop problems could also appear in the mobile context. We state that similar problems apply to different user groups in different contexts; specifically, situationally-impaired mobile users (small keys, distracting environment) experience similar problems to motor-impaired desktop users. The correctness of the assertion was established as part of the RIAM project (Reciprocal Interoperability of Accessible and Mobile Webs) [1], following the methodology proposed by Trewin & Pain [3] to investigate pointing and typing errors of motor-impaired users, and it has been formally stated in Chen's thesis [2]. They have proved it for a set of devices, and we are analyzing whether the same assertion applies to other families of devices.


3. How close are we to seeing results, and do you have any preliminary results or examples available?

At present, we don't have any results, since we are still analyzing the gathered data. The first step will be the creation of a complete table relating mobile keyboard types, the common errors that occur when using them, and the possible techniques required to correct them. The table will be built by analyzing logs of different mobile Web users interacting with a basic prototype we have developed. Once the table has been completed, we will establish priorities to implement the correction techniques in different families of devices.


4. How does your work in MyMobileWeb relate to people with disabilities and accessibility in general?

MyMobileWeb is a generic framework to build multi-device Web applications. So far, it has not paid special attention to accessibility issues apart from basic techniques (e.g. alternative text for images). This work is a first step towards a platform which facilitates the creation of accessible mobile Web applications by implementing the aforementioned text input correction techniques.


5. Expand on the way you intend to overcome the keyboard description problem.

We need a way to describe the different keyboards to be analysed. The first approach in RIAM was to define the mapping between physical keys and characters by enumeration, as the prototype was based on a specific device model. In order to support a wide range of devices, we need to reduce the amount of information needed to express these mappings and also make them more comprehensible. The initial idea is to use a hierarchical model: for instance, we may describe a general key-to-character mapping for a specific device family (for instance, S40) and then describe model variations, language variations, etc. The idea is to include the keyboard mapping model in a Device Description Repository.
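The hierarchical model described above might look like the following sketch: a family-level key-to-character mapping with per-model overrides, resolved with a simple fallback rule. All identifiers and mappings are invented for illustration; a real Device Description Repository would define its own vocabulary.

```javascript
// Family-level mapping with model-specific overrides (illustrative data).
const keyboardRepo = {
  s40: {
    mapping: { "2": "abc2", "3": "def3", "4": "ghi4" },
    models: {
      // A hypothetical language variation of one model in the family.
      "s40-variant-x": { "4": "ghı4" },
    },
  },
};

// Resolve a key's characters for a model, falling back to the family default.
function resolveKey(repo, family, model, key) {
  const fam = repo[family];
  const override = fam.models[model] && fam.models[model][key];
  return override || fam.mapping[key];
}
```

Only the differences from the family need to be stored, which is what keeps the description compact.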


6. Do new smart phones already address some of the issues described?

Some devices incorporate text prediction techniques that help to reduce input errors when typing dictionary words (traditional T9, Swype, etc.). The main problem is that, due to input-mechanism fragmentation, it is very difficult for developers to know which features are supported by the keyboard of each device.


7. Over how many users will the MyMobileWeb data harvest be?

So far, we have collected data from a small number of older users (8 users). The intention of this study is not to cover a large number of users but to study them in depth. Moreover, while we analyze this data in order to obtain some preliminary results, we are still gathering new data from general users (not only older users).


8. Could you expand on the benefits you see for all mobile users?

The benefit is very clear: mitigating the problems of mobile Web users when entering text content, thus improving text input throughput and reducing user frustration.



1. Web Ergonomics Laboratory – The University of Manchester. RIAM project: Reciprocal Interoperability of Accessible and Mobile Webs. http://wel.cs.manchester.ac.uk/research/riam

2. T. Chen (2011). Investigating retrospective interoperability between the accessible and Mobile Webs with regard to user input. PhD Thesis, The University of Manchester.

3. S. Trewin, H. Pain (1999). Keyboard and mouse errors due to motor disabilities. International Journal of Human Computer Studies 50:109-144. DOI:10.1006/ijhc.1998.0238

4. Fundación CTIC, Telefónica I+D et al. MyMobileWeb project. http://mymobileweb.morfeo-project.org

5. Media Informatics Group - University of Munich. UsaProxy project. http://fnuked.de/usaproxy/

6. Henze, N. and Rukzio, E. and Boll, S (2012). Observational and Experimental Investigation of Typing Behaviour using Virtual Keyboards on Mobile Devices. Proc. CHI, 2012.

PAPER 4: Assessment Of Keyboard Interface Accessibility In Mobile Browsers

Greg Gay, Inclusive Design Research Centre, OCAD University, Canada

1. In your abstract, you indicate that "mobile software developers will need to be aware of the important accessibility role played by keyboard interfaces and will need to test this functionality as a routine part of QA". In order to achieve this, what do you think can be (needs to be) done?
I think the best way for this to happen is for the platform developers to include keyboard interface testing tools in their QA suites. Frankly, it is in their best interest to maintain interoperability on their platforms. Android might want to remind its developers that some Android phones continue to include physical D-pads. Apple might remind its developers that external Bluetooth keyboards are supported.
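One check such a QA suite could automate is keyboard-trap detection: drive focus forward repeatedly and flag a cycle that never escapes a subset of the focusable elements. A sketch of that idea, assuming the platform supplies a nextFocus driver; the function and element names are invented.

```javascript
// Detect a keyboard trap: starting from `start`, follow nextFocus until an
// element repeats. If focus cycled before visiting every focusable element,
// the visited set is a trap; otherwise traversal is complete and healthy.
function findKeyboardTrap(allFocusable, start, nextFocus) {
  const visited = new Set();
  let current = start;
  // At most |allFocusable| + 1 moves are needed to revisit an element.
  for (let i = 0; i <= allFocusable.length; i++) {
    if (visited.has(current)) {
      return visited.size < allFocusable.length ? [...visited] : null;
    }
    visited.add(current);
    current = nextFocus(current);
  }
  return null;
}
```

A null result means focus traverses everything; a non-null result lists the elements focus is trapped among. (Other UAAG-related checks, such as verifying a visible focus indicator, remain harder to automate.)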

2. What can be done to increase the mobile developers' accessibility awareness?
  1. Platform developers need to include explicit statements that keyboard interface accessibility is part of the platform's intended look-and-feel.
  2. QA tools, as I already mentioned.
  3. A series of videos showing the same actions being accomplished in a variety of ways may help developers to better understand the issue.

3. In your abstract you cross reference requirements for keyboard accessibility on mobile devices with UAAG 2.0, how do these requirements relate to WCAG 2.0 from content perspective?
UAAG is a set of guidelines for user agents, but the line between user agents and operating systems can easily be blurred (e.g. Chrome OS). UAAG already takes into account that a user agent may even be constructed from Web content and run on another user agent (e.g. online PDF viewers that render into HTML). Assuming that the platform supports keyboard interface control (which we show is generally the case), the responsibility then shifts to content producers not to break that keyboard accessibility with their content (whether regular HTML, Web apps, or mobile apps).

Additional Questions Asked

4. How do you perform the tests? Would it be possible to automate them? To what extent? [from Cristina Gonzalez Cachon]
The tests were performed manually. I believe it would be possible to automate them, both on devices and within the development environments. Probably 100% automation would not be possible, due to factors such as keyboard traps and the requirement to ensure that a focus indicator is actually visible.

PAPER 5: Inclusive Mobile Experience: Beyond The Guidelines And Standards

1. Do you think there is a tension between the increasing accessibility requirements resulting from the increasing complexity of mobile applications and development, and the need for less complex guidelines? And if there is a tension, what suggestions do you have to address it?

There is a great deal of tension between accessibility guidelines and complexity of mobile/touch based interaction design and development. This is not only due to the complexity of mobile applications but also due to the speed of that development and its mercurial nature as different companies and manufacturers apply their own user experience (UX) and inclusive design concepts. I’d like to answer the first part of this question with a current example and include tablets in my discussion as they are also mobile devices.

Having looked at the Windows 8 "touch first" strategy and some Metro design patterns which will also be utilised in Windows Phone 8, user experience designers and touch developers are going to be confronted again with a new set of touch design constraints which will present accessibility challenges. Although Microsoft assures us that everything designed to be done by touch will also be possible by keyboard, there are certain gestures (such as some wrist-rotation-based controls) that simply cannot be replicated in a keyboard/browser context. My statements are caveated with the fact that I have not done extensive accessibility testing on Windows 8 Metro, Windows Phone 8 or tablets (given they are not publicly released yet) but have gone through the materials available online and some additional Microsoft design templates supplied during a sneak-peek workshop. I have had the opportunity to play with a Windows 8 tablet and do some app UX design for it.

What does this mean for Windows Phone 8 (Apollo)? Industry knowledge suggests that it will be tied closely to Windows 8 and will "share integrated ecosystems, components and user experiences" such as Internet Explorer 10. If that's not a complex situation then I don't know what is. That being the case, there should be an improvement in text to speech, as Microsoft is introducing a redesigned version of Narrator, its text-to-speech solution, for Windows 8, but the experience cannot focus on visual-impairment accessibility needs alone. Gesture-based interactions, on which much of the Windows 8, and therefore Windows Phone 8, "touch first" approach relies, present obstacles to anyone with motor-impairment accessibility requirements. Additionally, how will this Narrator work with the "live tiles" and "animated bubbles" that are so integral to the Windows 8 and Windows Phone 8 experience?

The Microsoft Developer Network (MSDN) has many explanatory articles about how to build accessible Metro apps, most of which rely on implementations of HTML5, ARIA and JavaScript, or a combination of these in some way or another. One of the most heavily used screen readers, JAWS, from at least version 10 and above, even the latest version, JAWS 13, has trouble with HTML5-ARIA. I'm not a techie by any stretch of the imagination, but even I can see many difficulties ahead here. The guidelines being referred to and added as references for this developer community are the W3C-WAI versions for HTML5, and nothing about mobile accessibility at all, not even the WCAG to MWBP mapping. They have a highly technical set of 11 "accessibility" guidelines specific to their application development. There is nothing about the user experience. Microsoft is also assuring the disability community that it will be working with the Assistive Technology (AT) companies to ensure everything is integrated, but acknowledges that Windows 8 and Windows Phone 8 will most likely require support to "ensure their ATs don't break with each release". However, a comment on the MSDN states "JAWS is once again broken now because XPDM mirror driver model was removed in Windows 8"5, and I don't particularly understand what that means… and so on and so on.

I could actually go on and on with more Windows 8 and Win Phone 8 accessibility issues and challenges but hopefully I have answered the question with this example about whether there is tension between complexity of app development and a need for technology agnostic or less complex guidelines.

Now, as to what to suggest for an approach to these kinds of issues. This is really hard and I certainly don’t have all the answers.

As defined by Horst Rittel in 1973, I’d say we have symptoms of a pretty ‘wicked problem’. Some things that make up wicked problems include but are not limited to:

  1. They have no definitive formulation.
  2. There are no criteria upon which to determine that they are solved.
  3. Solutions to wicked problems can be good or bad.
  4. There is no complete list of moves for a solution.
  5. There is always more than one explanation for a wicked problem.
  6. Every wicked problem is a symptom of another problem.
  7. No solution of a wicked problem has a definitive scientific test.
  8. Every wicked problem is unique.

Rittel also outlined some strategies to approach wicked problems: a collaborative approach, one which attempts "…to make those people who are being affected into participants of the planning process. They are not merely asked but actively involved in the planning process…"

I would definitely advocate this as part of a strategy for solving, or at least moving towards a solution of, the wicked problem of trying to create standards in a mobile landscape made of quicksand. That being the case, user research should form the basis of these guidelines, added to the existing stable pillars of WCAG 2 that are relevant to mobile and touch interactions.

The challenge with this approach is that achieving a shared understanding of, and commitment to solving, a wicked problem is a time-consuming process. That said, I believe it is time well spent.

I can make links to all of the Windows Phone 8 resources I have bookmarked online available to interested designers and developers.

2. What strategy would you recommend to help keep guidelines like WCAG up to date with mobile technology?

Continual engagement with users who have accessibility requirements is the key to maintaining living standards. In the first instance, a baseline needs to be established which is possible through detailed user research which I’ll talk about a bit later in answer to Question 4.

Catalogue the new input mechanisms, such as gesture, speech input and utilisation of mobile device native functions such as the camera or geolocation, and then identify the accessibility challenges and the subsequent design and development requirements to meet them. This first attempts to define the problem space, which then provides the stage for solutions. Half of the issue we face is that the problem space isn't well defined. We know it's a huge issue, but breaking it down into more manageable components would be a strategy for future standards development.

Standardising gestures across platforms would go a long way towards assisting with good guideline creation. Luke Wroblewski has an excellent reference guide that covers all devices with gestural controls, such as Microsoft Surface and Wacom touch tablets, not just mobile devices. However, like everything in this dynamic area, it's already out of date: only Windows Phone 7 is covered, not Windows Phone 7.5, and that will all get thrown out again when Windows Phone 8 turns up, with its significantly reduced number of gestures and different concepts such as "semantic" zoom. Semantic zoom is activated using the "pinch" gesture, currently identified as a gesture which zooms in and out of a content area, whereas semantic zoom instead displays all tiles in the Windows 8 screen landscape.

Using basic heuristics can also be of assistance with keeping guidelines up to date, as long as they are technology agnostic and specific to their sphere of use.

For example some gestural specific heuristics from Brian Pagan at UXMag:

  1. “Gestures must be coarse-grained and forgiving. People's hand gestures are not precise, even on a screen's surface. In mid-air, precision is even worse. Expecting too much precision leads to mistakes, so allow a wide margin and low cost for errors.”
  2. “Users should not be required to perform gestures repeatedly, or for long periods of time, unless that is the goal. Repetitive or prolonged gestures tire people out, and as muscle strain increases, precision decreases, affecting performance. If the idea is to make users exercise or practice movement though, this doesn’t apply.”
  3. “Gestural and speech commands need to be socially appropriate for the user’s environmental contexts. The world can see and hear a user’s gestures and speech commands, and users will not adopt a system that makes them look stupid in public. For example, an app for use in an office should not require users to make large, foolish-looking gestures or shout strange verbal commands. On the flip side, including silly gestures and speech in a children’s game could make it more interesting.”

These heuristics apply to general usability but can also easily be adapted to mobile accessibility, such as not requiring speech from a user for passwords.

Another idea would be to create a dedicated online area for users to input their challenges and requirements for mobile accessibility, to have them considered by the W3C-WAI as the foundation for guidelines or changes to WCAG 2. There are forums online, but they are usually technology specific and uncoordinated; AppleVis is one example. This kind of 'crowdsourcing' could form part of the input for a working draft of these guidelines. People want to participate; they just don't have a simple and well-publicised method of doing so. This is not suggested as a free-for-all but would be used in a structured fashion, such as putting out a call for input around a specific guidelines area over a fixed period of time, so as not to require 365-days-a-year moderation.

3. Was the mentioned WCAG to MWBP mapping approach beneficial in helping you get up to speed on related best practices and/or for research and development?

Yes, it was, in demonstrating that there are many aspects of WCAG 2 that can apply to mobile devices: colour considerations for colour blindness, contrast, and use of understandable language, all of which can be applied to the accessibility hints that can be included in iOS apps. The gaping hole in this mapping was the lack of a full set of guidelines to cover gestural input, which is required for us to manipulate what is effectively "pictures under glass".

Again, standardising gestures across platforms would be a great step forward to having something to base guidelines on.

4. A study is mentioned to help aid in the creation of new guidelines, could you describe your methodology in more detail?

The proposed study is a straightforward user research project: understanding what people do and why they do those tasks in that way. It will be looking for the challenges and pain points people with accessibility needs face, and from that we can possibly create some standards to alleviate the pain points.

Methods proposed to achieve this goal include:

  1. One-on-one semi-structured interviews with users who have accessibility needs for their touch devices. These allow the researcher to closely question how and why a person uses their device, and will be used as inputs into the contextual inquiry.
  2. A contextual inquiry research study, which involves 'shadowing' users in the environment in which they would use their mobile device. The intent is to observe the usage without intruding, ensuring the data gathered is as naturalistic as possible. Questions about things that have occurred are asked of the user only when absolutely necessary.
  3. Data analysis and synthesis to make meaning through abductive sense-making and reframing: the creation of personas (fictional representations of actual users) and their user journeys, which describe their actions, emotions, thought processes and touchpoints.

This research will be pan-disability to cover a broad spectrum of user experiences.

5. What strategy would you recommend for testing guidelines’ effectiveness with disabled users?

Exactly that. Initially lab-based usability testing would be undertaken with real disabled users utilising mobile apps or Websites that are deemed to be compliant. This would not just be ‘checkbox’ compliance testing (assuming there are agreed guidelines to test against) but also testing whether the product is usable and desirable for someone who has accessibility requirements.

The next stage is testing in the physical environment with users using observation techniques to see whether a design following particular guidelines is successful in delivering the right accessible experience in its actual context of use.

Additionally, if you as a designer or developer have a mobile in your pocket, you have a screen reader. Test it yourself first. There is no excuse.

6. What is your WCAG X.0 wish list?

Full coverage of a consolidated set of gesture-based interactions, with guidelines on designing them for people who have disabilities.


1 Paul Thurrott:         http://www.winsupersite.com/article/windows8/windows-phone-8-preview-142154

2 http://msdn.microsoft.com/en-us/library/windows/apps/hh452681.aspx                 

3 http://msdn.microsoft.com/en-us/library/windows/apps/hh700325.aspx                 

4 http://blogs.msdn.com/b/b8/archive/2012/02/14/enabling-accessibility.aspx                 

5 http://blogs.msdn.com/b/b8/archive/2012/02/14/enabling-accessibility.aspx - comment from user xpclient 15 Feb 2012 4:29 AM

6 Rittel 1972

7 http://en.wikipedia.org/wiki/Wicked_problem                 

8 Touch Gesture Reference Guide by Luke Wroblewski, April 20, 2010. http://www.lukew.com/ff/entry.asp?1071

9 http://uxmag.com/articles/new-design-practices-for-touch-free-interactions

10 http://www.applevis.com/applevis-forum/accessibility-advocacy                 

11 Brett Victor, Apple.

12 John Kolko, Thinktiv

13 Henny Swan, BBC