This document enumerates Use Cases (UC) and requirements (Reqs) for specifications being developed by the Web Events Working Group.
Some of the UCs and Requirements in this document may be out of scope of the WG's Charter; in that case, those UCs and Reqs will be explicitly identified as out of scope.
This document is non-normative (i.e., informative) and should, at the moment, be considered a Living Document that continues to evolve.
Comments on this document are welcome and should be submitted to email@example.com.
Touch Events Specification
Most websites and applications today use the metaphor of a mouse for user interaction. On a modern touch-enabled device, the user interacts with the application via one or more fingers. This interaction paradigm requires application developers to be able to react to multiple touch events.
For touch input, two levels of information may be required:
- Real-time tracking of the position of the finger(s), the pressure and/or size of the fingertip touching the screen, and velocity and direction.
- The action the user wants to perform on an element on the screen, for example rotate, zoom, or select.
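Of the low-level data in the first bullet, position, pressure, and fingertip size are surfaced directly on `Touch` objects (`clientX`/`clientY`, and, where supported, `force` and `radiusX`/`radiusY`), while velocity and direction must be derived by the application from successive samples. A minimal sketch of that derivation, assuming samples are taken from `touchmove` events (the `trackVelocity` helper name is ours, not from any specification):

```javascript
// Derive speed (px/ms) and direction from two successive touch samples.
// Sample shape: { x, y, t } — t in milliseconds (e.g. event.timeStamp).
function trackVelocity(prev, curr) {
  const dt = curr.t - prev.t;
  if (dt <= 0) return null;           // guard against out-of-order samples
  const dx = curr.x - prev.x;
  const dy = curr.y - prev.y;
  return {
    speed: Math.hypot(dx, dy) / dt,   // pixels per millisecond
    angle: Math.atan2(dy, dx),        // radians, in screen coordinates
  };
}

// In a browser, samples would come from touchmove listeners, e.g.:
// element.addEventListener('touchmove', (e) => {
//   const t = e.touches[0];
//   record({ x: t.clientX, y: t.clientY, t: e.timeStamp });
// });
```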
The following use cases illustrate the use of high-level APIs for direct element manipulation.
Use Case #1 - image application
An image application presents several images together in a stack and allows the end user to move them around, rotate them, and zoom them.
Additionally, the resolution of the image is controlled by the application: it initially provides low-resolution images and loads a higher-resolution image as the end user zooms in.
Examples of such an application: http://scripty2.com/demos/touch
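With only low-level touch data available, the zoom and rotate interactions in this use case are typically derived from the vector between two fingers across successive `touchmove` events: the zoom factor is the ratio of the distances, and the rotation is the change in angle. A hedged sketch of that computation (the function name is illustrative, not from any specification):

```javascript
// Given two fingers' start and current positions, derive the zoom factor
// and rotation to apply to the manipulated image.
// Each point has the shape { x, y } (e.g. from Touch.clientX / clientY).
function pinchTransform(startA, startB, currA, currB) {
  const vec = (p, q) => ({ x: q.x - p.x, y: q.y - p.y });
  const len = (v) => Math.hypot(v.x, v.y);
  const v0 = vec(startA, startB);
  const v1 = vec(currA, currB);
  return {
    scale: len(v1) / len(v0),                                   // zoom factor
    rotation: Math.atan2(v1.y, v1.x) - Math.atan2(v0.y, v0.x),  // radians
  };
}
```

An application like the one above would also watch the accumulated scale and swap in a higher-resolution source image once it crosses a threshold.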
Use Case #2 - children's puzzle
A children's puzzle game provides a stack of puzzle pieces and lets the child freely move and rotate the pieces to find where they fit.
Use Case #3 - mapping application
A mapping application provides a map pane that is constructed by tiling data from the map server. The user pans and zooms the web element containing the tiled map while the rest of the screen stays intact.
Zooming the map first zooms the existing image and initiates a fetch from the server for higher-resolution map tiles. Once the tiles are received, the image is replaced.
Panning the map draws adjacent tiles that have already been cached and initiates a fetch from the server for additional tiles to cache.
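The tiling behavior described above can be sketched as a pure function that, given the current pan offset and viewport size, returns the tile indices the pane must draw; the names and the simple column/row tile-addressing scheme here are assumptions for illustration, not part of any specification:

```javascript
// Compute the range of tile indices covering a viewport panned to
// (offsetX, offsetY). All values are in pixels; tiles are square and
// addressed by integer column/row.
function visibleTiles(offsetX, offsetY, viewportW, viewportH, tileSize) {
  const first = (o) => Math.floor(o / tileSize);
  const last = (o, extent) => Math.floor((o + extent - 1) / tileSize);
  const tiles = [];
  for (let row = first(offsetY); row <= last(offsetY, viewportH); row++) {
    for (let col = first(offsetX); col <= last(offsetX, viewportW); col++) {
      tiles.push({ col, row });
    }
  }
  return tiles;
}
```

On each pan step, the application would diff this list against its cache, draw cached tiles immediately, and request the missing ones from the server, typically along with a ring of adjacent tiles for pre-caching.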
1. The action that the user performs on a screen element (zoom, pan, rotate, select) can be implemented differently by different vendors. To mitigate this, the high-level API needs to be agnostic to the physical gesture and instead be tied to the action the end user wants to perform.
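One shape such an intent-level API could take is for the application to declare which actions an element supports and let the implementation map whatever physical gesture the platform uses onto them. This is purely an illustrative sketch; none of these names come from any specification under development:

```javascript
// Hypothetical intent-level registration: the application supplies handlers
// keyed by user intent ('zoom', 'rotate', 'pan', 'select', ...), and the
// returned dispatcher routes a recognized intent to the matching handler,
// regardless of which physical gesture produced it.
function makeManipulable(handlers) {
  return function dispatch(intent, detail) {
    const handler = handlers[intent];
    if (!handler) return false;   // intent not supported by this element
    handler(detail);
    return true;
  };
}
```

Usage might look like `makeManipulable({ zoom: (d) => img.scaleBy(d.scale) })`, where the platform, not the page, decides whether a pinch, a double-tap, or a keyboard command signals "zoom".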