Proposed revision of 2.5.4

From Mobile Accessibility Task Force

PROPOSED: 2.5.4 Target Size

Previous version as of 2016-02-18

A touch target is at least 44px by 44px of the visible display at the default viewport size. (Level AA)

Proposed 2.5.4

The size of the target in relation to the visible display at the default viewport size is at least: (Level AA)
  • 44px by 44px for touch inputs
  • 20px by 20px for mouse/pointer inputs

Note: In situations where both touch and pointer/mouse input mechanisms are present, without manual or automatic input detection, controls must follow the larger minimum dimensions for touch input.

Note: This success criterion applies when platform assistive technology (e.g. magnification) is not turned on.

Note: We are still researching the 20px value for mouse/pointer and the 44px value for touch, and we are seeking outside research and input. We also have to define the difference between a touch event and a mouse event, particularly in HTML and responsive environments.
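
To make the proposed minima concrete, the following is a minimal sketch of an automated check that measures the rendered target in CSS pixels via getBoundingClientRect(). The function name and the isTouchTarget flag are illustrative only; the 44px/20px values come from the proposed SC text above and are still under research.

    // Check whether a control's rendered size meets the proposed minima.
    // getBoundingClientRect() reports CSS pixels at the current layout.
    function meetsProposedTargetSize(element, isTouchTarget) {
      var minSize = isTouchTarget ? 44 : 20; // px, per the proposal above
      var rect = element.getBoundingClientRect();
      return rect.width >= minSize && rect.height >= minSize;
    }

    // Example: flag common interactive elements that are too small for touch.
    var targets = document.querySelectorAll('a, button, input, select');
    Array.prototype.forEach.call(targets, function (el) {
      if (!meetsProposedTargetSize(el, true)) {
        console.warn('Target smaller than 44px by 44px:', el);
      }
    });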

Proposal for a Glossary definition of pixel (px)

Add a definition of CSS pixel for OUR purposes.

[Proposed text for Understanding] Intent of this Success Criterion

The intent of this success criterion is to help users who may have trouble activating a small target because of hand tremors or other reasons. Mice and other pointing devices can be hard for these users to control, and if the target is too small it may be difficult to aim at; a larger target greatly improves their chances of success. This is further complicated on responsive sites that double as mobile content, where the same control will also be operated by touch. A finger is larger than a mouse pointer and needs a larger target. Touch screens are a primary method of user input on mobile devices, so user interface controls must be big enough to capture finger touch actions. The minimum recommended touch target size is 44px by 44px, but a larger touch target is recommended to reduce the possibility of unintentional actions. This is particularly important if any of the following are true:

  • the control is used frequently;
  • the result of the touch cannot be easily undone;
  • the control is positioned where it will be difficult to reach, or is near the edge of the screen;
  • the control is part of a sequential task.
Specific Benefits of Success Criterion 2.5.4
  • Users with mobility impairments, such as hand tremors
  • Users who find fine motor movements difficult
  • Users who access a device using one hand
  • Users with large fingers
  • Users with low vision, who may see the target more easily
Examples of Success Criterion 2.5.4
  • Interface elements that require a single tap or a long press as input will only trigger the corresponding event when the finger is lifted inside that element.
  • The user interface control performs an action when the user lifts the finger from the control rather than when the user first touches it.
  • A phone dialing application has number keys that are activated on touch down; this is acceptable because a user can easily undo a mistaken digit by pressing the backspace key.
Related Resources

Resources are for information purposes only, no endorsement implied.

(none currently documented)

Techniques and Failures for Success Criterion 2.5.4
  • M029(wiki) Touch events are only triggered when touch is removed from a control
  • FM001 Failure of SC 2.5.4 due to activating a button on the initial touch location rather than the final touch location
  • Failure: Actions are only available through long press or 3D touch
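
As a rough illustration of the M029 technique (and of avoiding the FM001 failure), a control can act on touchend, and only when the final touch point is still inside it. The element id and the activate() helper below are hypothetical:

    var control = document.getElementById('dial-button'); // hypothetical id

    control.addEventListener('touchend', function (event) {
      // On touchend, the lifted finger is reported in changedTouches.
      var touch = event.changedTouches[0];
      var rect = control.getBoundingClientRect();
      var inside = touch.clientX >= rect.left && touch.clientX <= rect.right &&
                   touch.clientY >= rect.top && touch.clientY <= rect.bottom;
      if (inside) {
        event.preventDefault(); // suppress the synthesized click that would follow
        activate();             // hypothetical action for this control
      }
    });

    function activate() { /* perform the control's action */ }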

Comments

Patrick says: Relating to the "visible display": how would this affect, say, scrolling...what happens if a control is only half scrolled into view (but the user CAN scroll further, i.e. it's not cut off)? Is it unambiguous enough that this (presumably) relates to things being fully reachable/scrollable into view?

If we're thinking ahead to techniques, worth mentioning some techniques for multi-input scenarios:

- provide some form of switching mechanism [ed: Office 2013 on Windows, for instance, has a slightly awkward way to switch the ribbon menus manually between touch and mouse optimised size - see the screenshot on one of my slides http://patrickhlauke.github.io/getting-touchy-presentation/#134]
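
A rough sketch of such a switching mechanism on the web (all names here are made up for illustration): a toggle that flips a class on the root element, which the stylesheet can then use to apply the larger touch minima.

    // Hypothetical manual switch between mouse- and touch-optimised sizing.
    document.getElementById('input-mode-toggle')
      .addEventListener('click', function () {
        document.documentElement.classList.toggle('touch-optimised');
        // A rule such as `.touch-optimised button { min-width: 44px;
        // min-height: 44px; }` would then enlarge the targets.
      });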

- use CSS Media Queries Level 4 Interaction Media Features [ed: but use them correctly/beware of false assumptions - see https://dev.opera.com/articles/media-features/ and my slides http://patrickhlauke.github.io/getting-touchy-presentation/#127 through to http://patrickhlauke.github.io/getting-touchy-presentation/#133]
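
For instance, the Interaction Media Features can also be read from script via matchMedia(); the queries below are real Level 4 features, while the class name is an assumption. Checking any-pointer alongside pointer guards against the false assumption that the primary input is the only one:

    // `pointer` describes the primary input only; a laptop with a
    // touchscreen may still report `fine`. `any-pointer: coarse`
    // also matches secondary, imprecise inputs such as touch.
    if (window.matchMedia('(pointer: coarse), (any-pointer: coarse)').matches) {
      document.documentElement.classList.add('coarse-pointer'); // assumed class
    }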

- use script-based input type detection, such as what-input [ed. http://patrickhlauke.github.io/getting-touchy-presentation/#135 through to http://patrickhlauke.github.io/getting-touchy-presentation/#137]
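
In the spirit of what-input (not its actual API), a minimal sketch records the most recently used input type on the root element so styles can adapt; a real library additionally has to ignore the mouse events that browsers synthesize after touch:

    function setInputType(type) {
      // Assumed attribute name; CSS can key off e.g. [data-input="touch"].
      document.documentElement.setAttribute('data-input', type);
    }
    document.addEventListener('touchstart', function () { setInputType('touch'); }, true);
    document.addEventListener('mousemove', function () { setInputType('mouse'); }, true);
    document.addEventListener('keydown', function () { setInputType('keyboard'); }, true);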

And of course the "Understanding" will also need quite a bit of background context about the whole viewport, ideal viewport, inability of authors to define actual physical sizes as measured on screen, variability due to device/screen size/OS/UA/user settings etc. I may be able to boil it down into something semi-usable soon.
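
As a small illustration of that variability, the same CSS pixel value maps to different numbers of device pixels, and therefore different physical sizes, depending on device, zoom, and OS scaling; all properties used below are standard:

    console.log('Device pixels per CSS pixel:', window.devicePixelRatio);
    console.log('Layout viewport width (CSS px):', document.documentElement.clientWidth);
    // A 44px target spans 44 * devicePixelRatio device pixels; its
    // physical size on screen still depends on the panel's density
    // and on any user or OS zoom, which authors cannot control.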

David Comments: "On platforms where there is no distinction between touch and pointer, the pointer size prevails unless it can't be used for touch..."

Patrick responds: I think the angle here should be more along the lines of what I grafted onto the SC above...talking about inputs, multi-inputs, etc.

Definition

Platform assistive technology that remaps touch gestures: Software that is integrated into the operating system, ships with the product, and/or is updated or installed via system updates. This software changes the characteristics of the touch interface when turned on. (For example, a system screen reader may remap a right-swipe gesture to move focus from item to item, instead of its default behaviour when the assistive technology is not turned on.)