See also: IRC log
<Ryladog> Ryladog = Katie Haritos-Shea
<scribe> ScribeNick: Lachy
janina: the important thing is the intent: zoom, pan, etc. rather than click, key press.
… so we have James Craig from Apple and Rich from IBM who will dive into technical details
janina: Rich is first
rich: A few years ago we started with ARIA 1.0
… the original purpose was to make rich internet applications accessible.
… It put semantics into the DOM that the browser would then take and expose to the assistive technology
… One thing out of scope was platform independence of devices. e.g. pull up your mobile device and you don't have a keyboard.
… We're going to address this in Indie UI
… Second thing is user context.
… It's been external to the W3C, but has been worked on elsewhere for a few years
… as we go to mobile, where there's a definite need for the web app to adapt to the user's needs, how does content need to be delivered at the time it's being processed.
james: The next bit is how we've split the deliverables
… these are two independent sections, somewhat unrelated, but both needed.
… the first is the events module.
… Some background.
… events currently notify when something has happened. Click event, focus event, DOM Mutation events.
… These particular events are not intended to notify of a change that has already happened, but to convey the user's request to control the application
… One example in the current draft is a dismiss request.
… e.g. On a page with a modal dialog, the page might dismiss the dialog by listening for the Esc key
… It listens for different things on different platforms.
… That doesn't work on devices without keyboards, or without the specific keys.
… These events work in context where the device understands the intent in a device specific way, and so can convey that intent directly to the web page.
… e.g. the user wants the next page of the results, or change the value of a control.
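The dismiss-request example above could be sketched roughly as follows; the `dismissrequest` event name follows early IndieUI Events draft discussion and is an assumption, not a shipped API:

```javascript
// Hypothetical sketch: recognising a dismiss intent from either an
// abstract IndieUI-style event or the legacy keyboard fallback.
// "dismissrequest" is assumed from the draft, not a final API.
function isDismissIntent(evt) {
  if (evt.type === 'dismissrequest') return true;        // abstract intent
  return evt.type === 'keydown' && evt.key === 'Escape'; // keyboard fallback
}

// Usage in a browser might look like:
//   dialog.addEventListener('dismissrequest', e => {
//     if (isDismissIntent(e)) closeDialog();
//   });
```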
… the de facto key press bits defined in the ARIA authoring guide are a stopgap measure
… It relies on device specific events.
… some of the more complex parts of ARIA only work if you have these keyboard events.
… These events are intended to abstract out how the user controls the web app.
… Then the second deliverable is the user context module.
… part of this is to allow introspection into the user's needs.
… A common request that has real privacy concerns is whether or not the user is using assistive technology.
… we're aware of the privacy concerns, but some of the information is about which screen reader is being used, etc.
… type of colours they prefer, preferred fonts, etc.
… obviously, some of the things more related to disabilities need to have some way for the user to opt in
… The other part proposed is additions to the navigator object.
<Steven> Some history - https://www.w3.org/MarkUp/Forms/Group/Drafts/1.1/intent-based-events.html
… e.g. An event has happened at a particular location of the screen where the keyboard focus is elsewhere.
… Basically, the overview of these events are things that allow assistive technology and mainstream browsers to provide additional control to the web app, and provide some introspection into the user's needs.
… Rich wanted to mention some existing work.
rich: In the education space: Something called Access for All; APIP is used for learning assessment.
… Define a set of user needs, and have metadata about them
… e.g. might have a need to have something captioned.
… I have this video that is not captioned, but I have a related video that is, which could be a substitute.
… it doesn't have the world wide adoption.
… I may not have a fixed set of preferences. e.g. one of the things we can't do today: if I go into a noisy room, all of a sudden my device could detect that the noise is so high that the web app should know to turn captions on.
… I could have an HTML 5 video element. The device could adapt to the environment and turn on the captions for the user.
… The user didn't have to do it manually
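The noisy-room scenario might look roughly like the sketch below. The ambient-noise reading and the threshold are assumptions; `track.kind` and `track.mode` (with values like `'showing'` and `'disabled'`) mirror the real HTML5 text-track API a captions toggle would use.

```javascript
// Hypothetical sketch: enable captions on an HTML5 video when ambient
// noise (however the device measures it) crosses a threshold.
// The noise source and threshold are assumptions for illustration.
function updateCaptionsForNoise(tracks, noiseDb, thresholdDb) {
  const show = noiseDb >= thresholdDb;
  for (const track of tracks) {
    if (track.kind === 'captions' || track.kind === 'subtitles') {
      track.mode = show ? 'showing' : 'disabled';
    }
  }
  return show;
}

// Usage in a browser might look like:
//   updateCaptionsForNoise(video.textTracks, measuredNoiseDb, 60);
```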
… in terms of education, this would really help the education space as well.
… Those are some things we're trying to do.
… If we had other types of input, what are the use cases that assistive technology could do.
james: e.g. The first of two examples I mentioned earlier was an ARIA slider.
… The value change request could be done in a number of ways.
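A slider handling such a value-change request might look roughly like this; the request shape (`{direction: 'increase' | 'decrease'}`) is an assumption for illustration, not the draft's actual event interface:

```javascript
// Hypothetical sketch: an ARIA slider applying an abstract value-change
// request instead of interpreting arrow-key presses directly.
function applyValueChange(slider, request) {
  const step = slider.step || 1;
  let next = slider.value;
  if (request.direction === 'increase') next += step;
  else if (request.direction === 'decrease') next -= step;
  // Clamp to the slider's range, since an ARIA slider must stay
  // within aria-valuemin/aria-valuemax.
  return Math.max(slider.min, Math.min(slider.max, next));
}
```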
… The other I mentioned was escaping from a dialog
… Who is familiar with VoiceOver on iOS?
… It is a screen reader
… Because it is a touch screen, the screen reader intercepts the user's touch events.
… Knowing the user's intent is part of the operating system and the assistive technology
… So if a user did a gesture that indicates that they want to exit from this view, or speaks a command, as in the context of other assistive technologies like Dragon, the system could interpret that and send the appropriate event to the web app
… the web app wouldn't have to be concerned with physical events.
… One more example: some of the keypress events that were de facto defined as part of ARIA... (???)
rich: Advantages for mainstream developers. say my android device has voice recognition commands. I can move to that platform without having to recode my web app to respond to those events.
… I can have higher level commands that fire on different platforms
… regardless of the specific device or platform in use
james: I'd really like to address any confusion you may have.
magnus: I was wondering about the context. If that's something that could come from an external source, or is it always within the browser.
… Could it be injected to the browser, from an automotive context, or other use.
james: it's not just specific to the browser. e.g. the example I mentioned for the screen reader or voice control. Those ATs run outside the context of the browser, but use the browser as a gateway.
… The AT understands what the user wants to do and passes it to the browser, which in turn passes it to the web app.
Janina: the bottom line is that it doesn't matter how you collect the data.
james: specifically, how the OS or AT collects that data is out of scope.
magnus: It's all an abstraction you impose on the browser.
janina: yes, there is a way to inject that into the browser.
james: System level APIs for communicating between the AT and the browser or device is out of scope.
magnus: We have the WebDriver initiative, where we have a system to drive the browser itself, simulating a user
raman: One way is coming at it with a set of high level events, and it's up to the implementation to decide how to map to those
… The other end of it is that there are a set of frameworks that allow a system to inject a set of events (simulating a user)
… But at the end of the day, what the author hasn't captured in the application is what the user is actually trying to do.
james: So we did anticipate that, in the short term, the most accessible web apps would rely on a series of keyboard focus events or press events to respond to the user
… Web apps would want to respond to both those events and the indie ui events in the short term, but transition to more of these events in the long term.
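That transitional period, where a web app listens for both the legacy key events and the abstract request, might look like the sketch below; the `dismissrequest` event name is again assumed from the draft:

```javascript
// Hypothetical sketch: during the transition, handle both the legacy
// Escape keydown and an abstract dismiss request, ensuring the dialog
// is dismissed only once even if both events arrive.
function makeDismissHandler(dismiss) {
  let dismissed = false;
  return function handler(evt) {
    if (dismissed) return;
    const intent = evt.type === 'dismissrequest' ||
                   (evt.type === 'keydown' && evt.key === 'Escape');
    if (intent) {
      dismissed = true;
      dismiss();
    }
  };
}
```

The same handler could be registered for both event types, so the app responds to whichever the platform delivers first.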
markus: In terms of the user profile and the fields within it, how do you handle...
… There's potentially an infinite number of preferences.
… How do you handle the tension between the obvious ease of a finite set of fields vs. the need for extensibility
james: One of the things is the ability for a web app to make a js call on the navigator object to obtain a user preference.
… we want to keep the common events defined in the specification, but make it possible to incorporate the larger taxonomies from the groups that rich mentioned earlier.
… The specifics are yet to be worked out.
… maybe a namespaced key. Might allow for vendor specific keys.
… Safari has a check box to change the way, e.g. a tab works. That would probably need a vendor type prefix.
… The idea is to incorporate any preference, while defining the common ones.
… Certain preferences have privacy implications.
… The UA might ask the user to grant permission.
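The namespaced-key idea might work roughly as below. Since the specifics were explicitly not worked out, every key name and vendor prefix here is an illustrative assumption:

```javascript
// Hypothetical sketch: look up a user setting by key, falling back
// through vendor-prefixed variants. All key names and prefixes are
// assumptions for illustration only.
function lookupSetting(settings, key, fallback) {
  const candidates = [key, 'webkit-' + key, 'moz-' + key];
  for (const k of candidates) {
    if (Object.prototype.hasOwnProperty.call(settings, k)) {
      return settings[k];
    }
  }
  return fallback; // preference unset, or permission not granted
}
```

A privacy-sensitive setting would simply be absent until the user grants permission, so callers always need the fallback path.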
rich: this would be great for epub too.
gz: The two topics of Indie UI are events and context. On a high-level overview, there are two strategies.
… For input, we want to take the burden off the author and let them code in abstract ways. For output, we want to place the burden on the author and let them query the user's needs and tailor the output.
rich: yes and no. There could be services to do this for them, but at the end of the day, it's up to the application.
james: For input, we want to abstract as much as possible.
… Take the burden off the author.
… But there are a variety of documented reasons why we can't know all of the details.
… In cases where we know there is a gap, we want to provide a way for authors to get that information.
gz: In the responsive images session, one of the things we discussed was whether the author can foresee the needs of the user. There, we said no, probably not, because the author doesn't foresee all possibilities.
… media queries were ruled out there because the author dictates the outcome of the decision. Whereas if we postpone the decision to run time, we have higher flexibility.
… Especially in accessibility, if we rely on the author, we are lost.
janina: I don't think we're changing the requirements on authoring.
… YouTube videos mostly won't have captions.
… But education content, medical content, etc will become more accessible.
… The content will be driven as in the past with social action.
… What we're talking about is taking the burden off the user and letting the device and the web app suit the user's need.
james: Setting preferences in the user style sheet, such as colour, never works out well because the author doesn't know about it.
… Font size is one that's pretty easy to adapt to.
… you can easily adapt to this by specifying units relative to the font size.
… e.g. Changing the background and foreground colour. If the web app knew the user wanted high contrast, it could adapt to that.
… we want to provide that flexibility for the ones that need it.
janina: That flexibility is a curb cut. You don't need a disability to benefit. The machine could adjust the contrast based on the user's environment.
… Author provided only.
james: But a publisher that's publishing lots of epubs could develop something for this.
Magnus: are there any plans to have some kind of mapping between these events and physical events?
james: Not really. That's mostly
... ATs may change the way those key presses work.
rich: We could publish a note that was per-platform.
james: An informative listing of suggestions could be published.
RRSAgent: make minutes
RRSAgent: make public
RRSAgent: make logs public
Present: Steven_Pemberton TVRaman Katie_Haritos-Shea Gottfried_Zimmermann Lachy ddahl Sylvie David_MacD_Lenovo smaug Javi Gottfried richardschwerdtfeger Judy jcraig Ryladog cris Steven bjkim MichaelC_ ethan_ shepazu hober
Date: 31 Oct 2012
Minutes: http://www.w3.org/2012/10/31-indie-ui-minutes.html