TPAC/2011/Adjusting to explosion of input methods
Interests of the group members
- Input methods and bugs in WebKit
 - Input and operability with new input modalities
 - Mapping new input methodologies
 - Dealing with the web today and creating a hardware agnostic future
 - Less hacking!
 - Making software work with the input explosion
 - How to split the development work so that different groups make all input methods work
 - Interoperability across devices
 - Haptic feedback
 - W3C having a unified approach to standardizing input methods so different working groups address all input types without gaps.
 - Users have a choice
 - Input modalities in a TV remote: no touch when you are sitting 10 ft from the screen, and a mouse is difficult to use
 - Machine to human interface
 - Multi-screen input
 - User choice & consistency of user experience
 - Marriage of hardware and software
 - Getting speech on devices with standard APIs
 - Different modalities
 - Virtual mouse/virtual keyboard to handle legacy content. Will we have to write a virtual touch layer in the future for future input types?
 - Hardware level screen input standards
 
Examples of Specific Problems
- Single-letter shortcuts in Gmail: when using speech input, saying a word can trigger a cascade of unintended effects. Inputting text should not be the equivalent of pressing keys; that is a historical artifact of the virtual keyboard. A speech user wants to be able to change the keyboard shortcuts.
 - When applying a user stylesheet to the page, I may get a complete input object, or the objects may be crammed together unintelligibly; simple button clicks are lost.
 - Common navigation menus require hover to open the menu, but touch devices have no hover ability (see the sketch after this list)
 - focus problems from having unexpected input methods
 - Many people use combinations of assistive technologies, including speech, and there is a very strong need to be able to change the keyboard shortcuts. Users vary in sophistication; a whole taxonomy of user needs would help in addressing the broader requirements.
 - No keyboard shortcuts (no control keys) on mobile touch devices.
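
A minimal sketch of one workaround for the hover-menu problem above, assuming hypothetical .menu/.submenu markup (real pages vary widely):

```typescript
// Sketch: give hover menus a touch fallback. Mouse users still get
// hover; touch users (who have no hover state) get a tap-to-open toggle.
// The .menu and .submenu class names are hypothetical.
document.querySelectorAll<HTMLElement>(".menu").forEach((menu) => {
  const submenu = menu.querySelector<HTMLElement>(".submenu");
  if (!submenu) return;

  const open = () => { submenu.style.display = "block"; };
  const close = () => { submenu.style.display = "none"; };

  // Mouse users keep the traditional hover behavior.
  menu.addEventListener("mouseenter", open);
  menu.addEventListener("mouseleave", close);

  // On the first tap, open the menu instead of following the link;
  // a second tap behaves normally because the menu is already open.
  menu.addEventListener("touchstart", (event) => {
    if (submenu.style.display !== "block") {
      event.preventDefault();
      open();
    }
  });
});
```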
 
Brainstorming
- Would it be possible to unify all special operations so they are not interpreted at the hardware level? Would that reduce the ability to optimize a specific modality for certain users? Should we address both?
 
- We could use the HTML5 <command> element to define categories of commands, e.g. mouse click and key click.
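
The <command> element was never widely implemented, but the underlying idea can be sketched. This is a hedged illustration, not an established API: the "command" event name, the "save" command, and the #save-button element are all invented for the example.

```typescript
// Sketch: one abstract "command" event that any input modality can fire,
// so handlers stop caring whether it came from a mouse, a key, or speech.
function fireCommand(name: string): void {
  document.dispatchEvent(new CustomEvent("command", { detail: { name } }));
}

// A single handler per command, regardless of the triggering modality.
document.addEventListener("command", (event) => {
  const { name } = (event as CustomEvent<{ name: string }>).detail;
  if (name === "save") {
    console.log("saving..."); // hypothetical application action
  }
});

// Two of many possible triggers for the same command:
document.querySelector("#save-button")
  ?.addEventListener("click", () => fireCommand("save"));
document.addEventListener("keydown", (event) => {
  if (event.ctrlKey && event.key === "s") {
    event.preventDefault();
    fireCommand("save");
  }
});
```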
 
- How do we convert new input methods, i.e. map a gesture or speech to a mouse click? Everyone maps touch to mouse, but we all do it differently.
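
For illustration, here is one common but entirely ad hoc touch-to-mouse mapping; the divergence mentioned above comes from each implementation choosing different details, such as which mouse event to synthesize and when to call preventDefault.

```typescript
// Sketch: replay a touchend as a synthetic click at the same coordinates.
// This is one of many incompatible mappings found in the wild.
document.addEventListener("touchend", (event) => {
  const touch = event.changedTouches[0];
  if (!touch) return;
  event.preventDefault(); // suppress the browser's own click synthesis
  touch.target.dispatchEvent(new MouseEvent("click", {
    bubbles: true,
    cancelable: true,
    clientX: touch.clientX,
    clientY: touch.clientY,
  }));
});
```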
 
- Some of this is being addressed by Web Events and Intentional Events (Intentional Events needs a better name). Legacy content is our biggest problem. The concept of mapping is good, but it is not future-proof.
 
New Input Methods
- Trackpad and touch - many BlackBerry users use a combination of them.
 - Touch and keyboard - e.g. while learning the app.
 - Composite modality - handwriting and speech input - example of a teacher drawing on a virtual whiteboard and dictating at the same time. The input methods need to be aware of each other to assist in speech-to-text.
 - Need touch and voice control.
 - Even if we have higher-level commands that cross-map, I am afraid that we will break legacy pages.
 - Gestures are so complex that they become very hard to use. Let the user make their own gesture/keyboard shortcuts: make a good default, and let the user change it (see the sketch after this list).
 - Need a comprehensive taxonomy of user requirements
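
A minimal sketch of the "good default, user-changeable" idea from the list above; the action names, default keys, and the "user-bindings" storage key are invented for illustration.

```typescript
// Sketch: look shortcuts up in a user-editable table instead of
// hard-coding them, so a speech or switch user can remap or disable them.
const defaultBindings: Record<string, string> = {
  j: "next-message",     // hypothetical single-letter defaults
  k: "previous-message",
};

// Merge in any overrides the user has saved (e.g. from a settings page).
const saved = localStorage.getItem("user-bindings");
const bindings: Record<string, string> = {
  ...defaultBindings,
  ...(saved ? JSON.parse(saved) : {}),
};

document.addEventListener("keydown", (event) => {
  const action = bindings[event.key];
  if (action) {
    event.preventDefault();
    console.log(`run action: ${action}`); // hand off to the app's command layer
  }
});
```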
 
What's Next?
- [MultiModal EMMA 1.0] - adding semantics to events
 - Web Events
 - Documenting and reaching out to authors
 - Discussing and polishing the higher-level command standards
 - Look at the [Intentional Events Proposed Charter]
 
Comments and Links from IRC
- Could start by implementing the input methods defined in the abstracted text event from DOM3 Events: http://www.w3.org/TR/DOM-Level-3-Events/#events-textevents (see the sketch after these links)
 - [Intentional Events proposed charter]
 - Here's a link to the MMI Working Group: http://www.w3.org/2002/mmi/
 - EMMA represents the semantics of a user input in a modality-independent way
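
As a closing illustration of the DOM3 Events link above, a sketch of consuming the abstracted text event; the #comment field is hypothetical, and the textInput/TextEvent pair is the legacy WebKit-era form (modern code would use beforeinput and InputEvent.data).

```typescript
// Sketch: handle inserted text via the abstracted text event, so the
// same handler serves keyboard, IME, speech, and handwriting input
// without modeling text entry as a series of key presses.
const field = document.querySelector<HTMLInputElement>("#comment");
field?.addEventListener("textInput", (event) => {
  const text = (event as TextEvent).data; // the inserted text itself
  console.log(`text inserted: ${text}`);
});
```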