Discussion: Touch and Force Touch

From Mobile Accessibility Task Force

This is a discussion of Force Touch and Requiring Touch and how it all relates to mobile, accessibility, general input, etc. It is a preliminary step toward including information on Force Touch in WCAG 2.1.

Comments

[Patrick] First of all, as discussed in one of our calls, I propose we drop the unhelpful Apple wording of "Force Touch" and use something more general, like "pressure-sensitive touch". More generally, there are also stylus/pen devices that support pressure, so perhaps "pressure-sensitive touch/stylus". Going even further, some stylus/pen inputs also support tilt/twist/orientation - where do these attributes fit in? More generalised hardware features? Fundamentally, this all boils back down to the extended "keyboard operable" idea that things can be controlled using a two-step "set focus to something, then activate it" approach - making all functionality (including any functionality mapped to pressure-sensitive touch/stylus interactions, tilt/twist, etc.) available to keyboard-like interfaces (which include the above-mentioned two-step interfaces).

Discussion 2016-06-09

  • We need a definition: "Force Touch" is Apple's name; we should call it pressure-sensitive input
  • Pressure-sensitive input could be done through touch or a stylus – the amount of pressure on a screen
  • Normal touch versus pressure-sensitive touch
  • Functionality needs to work with just touch and with just a keyboard; can't assume that an iPad Pro will have a Pencil
  • Pencil stuff – art, music, maybe games; many levels of touch that can't reasonably be translated to regular touch/keyboard, but productivity-type applications can provide alternatives
  • How many levels of pressure? It's only practical to provide alternatives for a few levels
  • The word "pressure" – we're reacting to the specific way it happens to work right now
  • WCAG exception for path and pressure

"...except where the underlying function requires input that depends on the path of the user's movement and not just the endpoints. (Level A)"

  • Idea of selecting something and navigating is not intrinsic
  • A navigation menu that has been built based on twists and tilts could be an example of what not to do
  • Many devices today don't support pressure-sensitive input
  • The actual technology isn't ubiquitous enough for widespread adoption
  • With the exception of Wacom tablets and the latest Android devices, the majority of touch-enabled or stylus-enabled devices out there do not provide pressure information
  • Two different things: Force Touch (just a couple of levels) versus Apple Pencil (many levels plus tilt)
  • Drawing is an exception just like keyboard has an exception for drawing
  • Continuous pressure information (0–1). The iOS core system decides that there will be certain pressure thresholds: pressure over this point is a touch, pressure over that point is a force touch. Nothing stops developers from accessing the raw touch information (see the sketch after this list).
  • Are we widening from pressure to tilt – and also sensors? Sensors would fall under device manipulation. There's a very hazy no-man's-land between the two, particularly with styluses that can recognize more than just pressure.
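
As a purely illustrative aside (not from the call), the same continuous-value-plus-thresholds idea exists on the web: DOM Pointer Events expose a normalised 0–1 pressure reading that content could bucket into a small number of discrete levels. The threshold below is an arbitrary example, not a platform constant.

```typescript
// Illustrative sketch: bucketing the continuous 0–1 pressure value from
// Pointer Events into two discrete levels, analogous to the OS-level
// "touch" vs "force touch" thresholds discussed above.
type PressureLevel = "touch" | "firm-touch";

const FIRM_THRESHOLD = 0.75; // arbitrary example value, not an OS constant

function classifyPressure(event: PointerEvent): PressureLevel {
  // Hardware without pressure support reports 0.5 while the contact/button
  // is active, so it will always classify as a plain "touch" here.
  return event.pressure > FIRM_THRESHOLD ? "firm-touch" : "touch";
}

document.addEventListener("pointerdown", (event) => {
  console.log(`${event.pointerType}: ${classifyPressure(event)}`);
});
```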

Simple Use Cases: just a couple levels

On iOS, just providing context menus: force touch gives you the distinction between activation and something else. A good, simple initial use case – having shortcuts. Maybe the alternative is three steps, but the user is not blocked from doing it.
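
A minimal sketch of that shortcut pattern, assuming hypothetical helper functions (activateItem and openShortcutMenu are placeholders, not real APIs): a firm press jumps straight to the shortcut menu, while the baseline path still reaches the same menu through an ordinary, focusable control, possibly in a couple of extra steps.

```typescript
// Sketch only: pressure acts as a shortcut to a context/shortcut menu,
// while the baseline path (plain activation plus a visible "More actions"
// control) still reaches the same menu in a few more steps.
const FIRM_THRESHOLD = 0.75; // arbitrary example value

function activateItem(target: Element): void {        // hypothetical
  console.log("activate", target.tagName);
}
function openShortcutMenu(target: Element): void {    // hypothetical
  console.log("open shortcut menu for", target.tagName);
}

document.addEventListener("pointerdown", (event) => {
  const target = event.target as Element;
  if (event.pressure > FIRM_THRESHOLD) {
    openShortcutMenu(target);   // pressure shortcut
  } else {
    activateItem(target);       // baseline activation
  }
});

// The same openShortcutMenu() should also be wired to a normal, focusable
// control so users without pressure input are not blocked.
```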

Complicated use cases: more levels and/or tilt + pressure

  • Drawing (tilt plus pressure = darkness and width at the same time)
  • Music (many different attributes)
  • Gaming (use the position/tilt of your stylus or pen as a joystick, for instance in a flight simulator) (mentioned in the call: my simple experiment using a pressure/tilt-capable stylus, pointer events, and WebGL to show pressure/tilt on screen – these values are available on devices, and can conceivably be plumbed into controlling a plane/character in a game: https://www.youtube.com/watch?v=1VlQzagF8PA ; see the sketch after this list)
  • Signature to detect fraud – using path, and path is an intrinsic, required part of the functionality (not really a pressure/tilt use case: only the path is important, not how thick the strokes are, etc.)
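
In the spirit of the stylus experiment mentioned in the gaming bullet (this is not the code behind the linked video, just a rough sketch), Pointer Events already expose pressure and tilt values that a drawing or gaming app could map to stroke width or a joystick-style control.

```typescript
// Rough sketch: reading pressure plus tilt from a pen pointer and mapping
// them to example values (stroke width, joystick-style bank/pitch).
document.addEventListener("pointermove", (event: PointerEvent) => {
  if (event.pointerType !== "pen") return;

  const strokeWidth = 1 + event.pressure * 9; // e.g. 1–10 px
  const bank = event.tiltX;                   // -90…90° left/right
  const pitch = event.tiltY;                  // -90…90° toward/away

  console.log(
    `width=${strokeWidth.toFixed(1)}px bank=${bank}° pitch=${pitch}°`
  );
});
```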

Challenges

  • Users who don't know what level of pressure they're applying
  • Tilt and pressure at the same time are very powerful – how do these translate into quick alternatives, rather than many impractical steps?
  • Dexterity issues – users who only have coarse control

Further thoughts from Patrick

I still think it all (pressure/twist/etc.) falls under the same large pot of "needs a special sensor/input, and the ability to then use it", same as device manipulation and co., because the end result for all these cases is in principle: yes, your content/app can use these (to provide shortcuts, context menus, etc.) but you should not RELY on them, and should provide something that's baseline keyboard accessible (with an expanded definition of "keyboard", i.e. to include things like moving focus sequentially and activating things, a la TAB/SHIFT+TAB + ENTER, or swipe left/right + double tap).

A distinction is needed between the pedometer/step counter use case and other pressure/tilt/twist sensors: it's still about a user doing something explicitly (i.e. tilting their phone, pushing harder on a stylus/touchscreen) vs sensors picking up activity that is not necessarily the user trying to interact – "ambient" user activity, if you will (the pedometer case). The user isn't "taking a step to activate something"; the device just happens to pick up the motion and interprets it as a step to count. However, when a user is actively trying to input/interact, sensors are of course still involved – but it's the user intent that matters, perhaps.

Maybe we need to change the name from "device manipulation" – not sure – to something like "additional input capabilities", i.e. extra hardware switches, tilting the phone, using a stylus with a pressure-sensitive tip, etc. Making it accessible would mean: a mechanism is available to achieve the same thing when these additional input capabilities are not present (i.e. the user DOESN'T have a pressure-sensitive touchscreen or a tilt-capable stylus), or the user is not able to use them (e.g. not being able to tilt/shake their phone, or lacking the dexterity required to use them properly).

The mechanism could be many things. For instance, if these additional input capabilities are only used to provide shortcuts to functionality/content that is actually in the app/site already, then one option is to say that they DON'T need to provide an alternative, as a keyboard/baseline user without these additional input capabilities can still access the same content/trigger the same functionality, albeit by going via a potentially more circuitous route (the point here being that, in the case of shortcuts, it's not functionality/content that's EXCLUSIVE to having those additional input capabilities AND being able to use them).

Another mechanism could be: provide an option to enable a different mode of operation. If the user has, and can use, say, a pressure-sensitive touchscreen, offer 2-3 different functions based on the pressure of a touch; for those who can't use one, or don't have one, they can enable a mode of operation where their first touch "sets the point", and a context menu then comes up to ask "which of these 2-3 functions did you want to activate on this point?" (a rough sketch follows below).

And yes, then writing exceptions, which will be the toughie. E.g. for the pedometer scenario, it'll be "it's not a user action". A drawing application has been discussed as something that is an exception, but I'm not quite sure: certainly, there are ways (though more laborious) to provide separate controls for line thickness etc. that a user without pressure-sensitive input can still operate. Will it be difficult for devs to do? Yes – but if they don't, they'll fail that SC. Now, do we want to provide them with an easy(er) get-out-of-jail clause where they can claim that having these additional input capabilities, AND being able to use them, is intrinsic to the activity / not having them or using them would negate the activity?
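
Here is a rough sketch of that alternative mode of operation, with hypothetical function names (showActionMenu and applyAction are placeholders): the first activation "sets the point", and an ordinary, keyboard-operable menu then asks which of the pressure-mapped functions to apply.

```typescript
// Sketch of the two-step fallback mode: set the point first, then choose
// the function from a menu instead of selecting it via pressure.
type PointAction = "light" | "medium" | "firm";
interface Point { x: number; y: number; }

let pendingPoint: Point | null = null;

function setPoint(x: number, y: number): void {
  pendingPoint = { x, y };
  showActionMenu(["light", "medium", "firm"]); // keyboard-operable menu
}

function chooseAction(action: PointAction): void {
  if (!pendingPoint) return;
  applyAction(action, pendingPoint);
  pendingPoint = null;
}

// Hypothetical stubs so the sketch stands on its own; a real app would
// render an accessible menu and perform the actual function here.
function showActionMenu(actions: PointAction[]): void {
  console.log("choose one of:", actions.join(", "));
}
function applyAction(action: PointAction, point: Point): void {
  console.log(`apply "${action}" at (${point.x}, ${point.y})`);
}

// Example flow: first activation sets the point, second picks the function.
setPoint(120, 80);
chooseAction("medium");
```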


Further Discussion 2016-06-16

Patrick summing up:

  • Cover the baseline – accessible, at a minimum, by being able to move focus sequentially.
  • Another SC, additional input capabilities – take advantage of these, but ensure that if
    • the device doesn't support them,
    • or the user can't take advantage of them,

there is still a different way to do the same things. Fundamentally, when the user can't use pressure, tilt, swiping, or gestures, some mechanism is available for them to still get to the same end result – whether it takes two or three menu steps to get there instead of triggering the shortcut, they fundamentally still have access.

  • Maybe a different aspect: you shouldn't have things that are functionally accessible but extremely difficult in practice.
  • Name? "Additional input capabilities"? If we could come up with something generic, it would handle new things. The definition is important.
  • "Not keyboard"? The name must be clear – clear that we are covering everything, including the future.
  • Essential functionality versus what isn't essential.
  • "Keyboard" is a loaded term and doesn't cover keyboard-like input. "Non-keyboard" also includes the mouse. "Non-pointer input"?
  • The vocabulary isn't there to distinguish well – people say "force touch" even though that's a brand term.
  • Basic pointer – literally the X and Y coordinates of something that the user invokes with a finger or stylus, versus any further information such as how hard and at what angle (see the sketch below).
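
One way to picture that distinction, purely as an illustration rather than a proposed definition: a "basic pointer" carries only the coordinates, while richer pointers add pressure and angle on top – roughly what the PointerEvent interface already models.

```typescript
// Illustrative only: "basic pointer" = just X/Y; richer pointers add
// pressure and tilt on top of the same coordinates.
interface BasicPointerInput {
  x: number; // clientX
  y: number; // clientY
}

interface EnhancedPointerInput extends BasicPointerInput {
  pressure: number; // 0–1
  tiltX: number;    // -90…90°
  tiltY: number;    // -90…90°
}

// e.g. derived from a PointerEvent:
function toEnhanced(event: PointerEvent): EnhancedPointerInput {
  return {
    x: event.clientX,
    y: event.clientY,
    pressure: event.pressure,
    tiltX: event.tiltX,
    tiltY: event.tiltY,
  };
}
```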

Looking at 2.6 in light of the above

  • 3 levels:
  1. Keyboard and keyboard-like, including swiping – explicitly include sequential access here
  2. Basic pointer/mouse
  3. Fancy pointer/mouse – needs to fall back to basic pointer, but also work with keyboard and keyboard-like input
  • Cascade: use 3), but make sure it works with 2); and for 2), also make sure it works with 1) (a rough sketch follows below)
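
A rough sketch of that cascade, under assumed wiring (openItem and openItemOptions are hypothetical app functions): the control works from the keyboard and from a basic pointer via a native button, and pressure only adds a shortcut to a secondary action that must also remain reachable through its own focusable control.

```typescript
// Sketch of the cascade: level 1 (keyboard) and level 2 (basic pointer)
// are covered by a native <button>; level 3 (pressure) is only a shortcut.
const FIRM_THRESHOLD = 0.75; // arbitrary example value

function openItem(): void { console.log("open item"); }           // hypothetical
function openItemOptions(): void { console.log("open options"); } // hypothetical;
// openItemOptions() should also be reachable via its own focusable control.

const button = document.createElement("button");
button.textContent = "Open item";
document.body.appendChild(button);

let handledByPressure = false;

// Level 3: a firm press jumps straight to the secondary action.
button.addEventListener("pointerdown", (event) => {
  if (event.pressure > FIRM_THRESHOLD) {
    handledByPressure = true;
    openItemOptions();
  }
});

// Levels 1 & 2: the native button already responds to Enter/Space and click.
button.addEventListener("click", () => {
  if (handledByPressure) {
    handledByPressure = false; // the firm press already handled this gesture
    return;
  }
  openItem();
});
```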