Keyboard Access/Modality Independent Control/IndieUI

From Mobile Accessibility Task Force

The WCAG success criteria use keyboard access as a proxy for universal access. How do we reconcile this with mobile, where the primary input is touch, a keyboard may not exist, and users rely on a growing number of input methods, including keyboard, mouse, speech, touch, gesture, and braille?

Resources

IndieUI

IndieUI: Events 1.0 Abstract

IndieUI: Events 1.0 is an abstraction between physical, device-specific user interaction events and inferred user intent such as scrolling or changing values. This provides an intermediate layer between device- and modality-specific user interaction events and the basic user interface functionality used by web applications. IndieUI: Events focuses on granular user interface interactions such as scrolling the view, canceling an action, changing the value of a user input widget, selecting a range, placing focus on an object, etc. Implementing platforms will combine modality-specific user input with user-idiosyncratic heuristics to determine the specific corresponding IndieUI event, and send that event to the web application in addition to the modality-specific input, such as mouse or keyboard events, should the application wish to process it.
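
As a rough sketch of what this looks like from the application side: the IndieUI: Events 1.0 working draft defined request events such as scrollrequest (carrying deltaX/deltaY) and a uiactions attribute for opting in. No browser shipped the draft, so the interface below is hand-declared, the element ID is hypothetical, and the exact uiactions keyword is an assumption about the draft syntax.

    // Hand-declared typing for the draft's scroll request event; this is
    // not in standard DOM typings because the draft was never implemented.
    interface UIScrollRequestEvent extends Event {
      readonly deltaX: number;
      readonly deltaY: number;
    }

    const view = document.getElementById("customScrollView")!;

    // The draft proposed a 'uiactions' attribute so an element can declare
    // which request events it handles (keyword value assumed here).
    view.setAttribute("uiactions", "scrollrequest");

    // One handler receives the inferred intent, whether it originated from
    // a keyboard, a touch gesture, a scroll wheel, or assistive technology.
    view.addEventListener("scrollrequest", (e: Event) => {
      const scroll = e as UIScrollRequestEvent;
      view.scrollTop += scroll.deltaY;
      view.scrollLeft += scroll.deltaX;
      e.preventDefault(); // signal to the UA that the request was handled
    });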

UAAG

Note at the beginning of Principle 2

Modality Independence Note

Principle 2: Ensure that the user interface is operable

Note: Users interacting with a web browser may do so using one or more input methods including keyboard, mouse, speech, touch, and gesture. It's critical that each user be free to use whatever input method or combination of methods works best for a given situation. If every potential user task is made accessible via modality independent controls that any input technology can access, a user can use what works best. For instance, if a user can't use or doesn't have access to a mouse, but can use and access a keyboard, the keyboard can call a modality independent control to activate an OnMouseOver event. See Independent User Interface: Events for additional information on APIs and techniques for modality independent controls.
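
The pairing described in the note can also be done today without IndieUI, by registering a keyboard-accessible counterpart for every pointer-specific event. This is an illustrative sketch, not text from UAAG; the element ID and handlers are hypothetical.

    const item = document.getElementById("menuItem")!;
    item.tabIndex = 0; // make the element keyboard-focusable

    // Hypothetical handlers for a hover/focus-triggered hint.
    function showHint(): void { item.setAttribute("data-hint-visible", "true"); }
    function hideHint(): void { item.setAttribute("data-hint-visible", "false"); }

    // Pair each pointer-specific event with its keyboard counterpart so the
    // same behavior is reachable from either modality.
    item.addEventListener("mouseover", showHint);
    item.addEventListener("focus", showHint); // keyboard equivalent
    item.addEventListener("mouseout", hideHint);
    item.addEventListener("blur", hideHint);  // keyboard equivalent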

Glossary References

keyboard

The letter, symbol and command keys or key indicators that allow a user to control a computing device. Assistive technologies have traditionally relied on the keyboard interface as a universal, or modality independent interface. In this document references to keyboard include keyboard emulators and keyboard interfaces that make use of the keyboard's role as a modality independent interface (see Modality Independent Controls). Keyboard emulators and interfaces may be used on devices which do not have a physical keyboard, such as mobile devices based on touchscreen input.

keyboard interface

Keyboard interfaces are programmatic services provided by many platforms that allow operation in a device independent manner. A keyboard interface can allow keystroke input even if particular devices do not contain a hardware keyboard (e.g. a touchscreen-controlled device can have a keyboard interface built into its operating system to support onscreen keyboards as well as external keyboards that may be connected). Note: Keyboard-operated mouse emulators, such as MouseKeys, do not qualify as operation through a keyboard interface because these emulators use pointing device interfaces, not keyboard interfaces.
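
The practical consequence of the keyboard-interface definition is that code listening for keystroke events does not need to know what produced them. A minimal sketch, assuming a custom slider widget (hypothetical ID and ARIA wiring):

    const slider = document.getElementById("customSlider")!;
    let value = 50;

    // Keystrokes arrive through the platform's keyboard interface, so this
    // handler behaves identically whether they come from a hardware keyboard,
    // a Bluetooth keyboard, an on-screen keyboard, or a keyboard emulator.
    slider.addEventListener("keydown", (e: KeyboardEvent) => {
      if (e.key === "ArrowUp" || e.key === "ArrowRight") {
        value = Math.min(100, value + 1);
      } else if (e.key === "ArrowDown" || e.key === "ArrowLeft") {
        value = Math.max(0, value - 1);
      } else {
        return; // leave unhandled keys to the platform
      }
      slider.setAttribute("aria-valuenow", String(value));
      e.preventDefault();
    });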

WCAG

Discussion, 14 March 2014

Difficulties

  • Confusion between keyboard and on-screen keyboard
  • Lots of input devices: Bluetooth keyboards, switch devices, touch, gesture, voice activation, cameras for tracking head movement, braille displays, braille input devices
  • There is even the iPortal device for powered wheelchairs, but we probably don't want to get into those types of details
  • Android and iOS are different
  • The available input methods differ depending on the actual device
  • On iOS, a physical keyboard can only be used with input controls, whereas Switch Control and VoiceOver (and AssistiveTouch) work through the programmatic aspects of an element: Switch Control requires that it can take focus, and VoiceOver makes it possible to interact with it (see the sketch below)
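
A sketch of those programmatic aspects for a web-based custom control: role, state, focusability, and activation are what switch access and screen readers hook into. The element ID and toggle semantics are hypothetical.

    const toggle = document.getElementById("customToggle")!;

    // Programmatic aspects that switch access and screen readers rely on:
    toggle.setAttribute("role", "button");        // what it is
    toggle.setAttribute("aria-pressed", "false"); // its current state
    toggle.tabIndex = 0;                          // focusable, for switch/keyboard access

    function activate(): void {
      const pressed = toggle.getAttribute("aria-pressed") === "true";
      toggle.setAttribute("aria-pressed", String(!pressed));
    }

    // 'click' fires for pointer input and for VoiceOver's double-tap;
    // the keydown handler covers keyboards and keyboard emulators.
    toggle.addEventListener("click", activate);
    toggle.addEventListener("keydown", (e: KeyboardEvent) => {
      if (e.key === "Enter" || e.key === " ") {
        activate();
        e.preventDefault();
      }
    });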


Practical Considerations

  • We can't change the success criteria; they are always going to reference the keyboard. Given that, do we define what we mean by keyboard, as UAAG did, or modify them all?
  • We can change the Understanding documents, and we can add a definition.
  • What does accessibility support look like?
  • How do we address it in the Understanding documents, sufficient techniques, or failures?
  • How do we go about saying what is sufficient for a particular technique?
  • Touch, if it's not available to a screen reader, is not equivalent. But what do we say to a developer: "you have to support this," or is it as simple as requiring an equivalent method for all users, so that if touch is available it is available both with and without a screen reader? And what about people with mobility impairments who can't use touch?
  • Do we want to say that the keyboard must also be supported: not exclusively, but in addition? Gestures should also be operable from a keyboard; users don't have to have a keyboard, but if they do, it should work (see the sketch after this list).
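
A sketch of the "also supported" idea: route a touch gesture and a keyboard handler to the same action, so the gesture is never the exclusive path. The element ID, handlers, and swipe threshold are all hypothetical.

    const carousel = document.getElementById("carousel")!;
    carousel.tabIndex = 0; // focusable, so keyboard events can reach it

    // Hypothetical actions shared by both modalities.
    function nextSlide(): void { /* advance the carousel */ }
    function prevSlide(): void { /* go back one slide */ }

    // Touch gesture: a simple horizontal swipe (50px threshold assumed).
    let startX = 0;
    carousel.addEventListener("touchstart", (e: TouchEvent) => {
      startX = e.touches[0].clientX;
    });
    carousel.addEventListener("touchend", (e: TouchEvent) => {
      const dx = e.changedTouches[0].clientX - startX;
      if (dx < -50) nextSlide();
      else if (dx > 50) prevSlide();
    });

    // Keyboard equivalent: a keyboard is not required, but works if present.
    carousel.addEventListener("keydown", (e: KeyboardEvent) => {
      if (e.key === "ArrowRight") nextSlide();
      else if (e.key === "ArrowLeft") prevSlide();
    });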


IndieUI

  • If we do have IndieUI, that would solve my concern.
  • Timeline for when browsers will implement IndieUI: I'm concerned that it may not be implemented for another two or three years (see the fallback sketch below).
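
One hedge against that timeline is feature detection with a modality-specific fallback. This is a hypothetical pattern: the global UIScrollRequestEvent constructor is an assumption about how an implementation might expose the draft, and the element ID and scroll step are made up.

    // Feature-detect a (hypothetical) IndieUI implementation; no browser
    // shipped the draft, so the constructor name is an assumption.
    const hasIndieUI =
      typeof (window as any).UIScrollRequestEvent === "function";

    const view = document.getElementById("scrollView")!;

    if (hasIndieUI) {
      // Let the UA translate any input modality into a single intent event.
      view.addEventListener("scrollrequest", (e: Event) => {
        const req = e as Event & { deltaX: number; deltaY: number };
        view.scrollTop += req.deltaY;
        e.preventDefault();
      });
    } else {
      // Fallback: wire up each modality explicitly in the meantime.
      view.tabIndex = 0;
      view.addEventListener("keydown", (e: KeyboardEvent) => {
        if (e.key === "ArrowDown") view.scrollTop += 40;
        if (e.key === "ArrowUp") view.scrollTop -= 40;
      });
      // Touch scrolling is usually provided natively by the platform.
    }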