Web APIs and Accessibility

02 Nov 2011

See also: IRC log


Raman, Charles_Chen


<JF> trackbot, start meeting

<JF> raman: the set of APIs that are available in the browser is growing fast

<JF> raman: speech input and output are coming

<JF> raman: there are web apps you can create that are innovative

<JF> raman: such as a lightweight, accessible Twitter client application

<JF> raman: in this generation user interfaces are very easy to build

<JF> raman: it used to be very hard to create UIs

<JF> raman: then you had hypercard and visual basic and things got a bit easier

<JF> raman: now for the web you can introduce multimodal UIs and we can build UIs in the cloud

<JF> raman: what are the interesting things we can build?

<JF> raman: I want to change the mindset about what UIs we can build

<jallan_> apps are UAs

<JF> raman: we don't need a one-size-fits-all approach

<JF> jfoliot: so more UIs targeted at multiple modalities?

<JF> raman: so you could build a very good speech UI or a simple HTML UI that speaks

<JF> jfoliot: so intentional events?

<JF> raman: don't pigeonhole the issue

<JF> raman: this is a building block that will help developers build a UI that runs in different contexts

<JF> raman: intentional events has been bubbling around for some time

<JF> raman: steven pemberton and I worked on a draft in Florida in 2003

<JF> raman: the API specs are moving very quickly

<JF> scribenick:rschwerdtfeger

<JF> raman: moving the web a step forward

<JF> david: how do you couch these in an MVC design?

<JF> raman: in xforms we did this and it scared a lot of people away

<JF> raman: the way google docs works is that it uses a very rich javascript architecture that maps itself onto the HTML canvas

<JF> rich: not to be confused with HTML canvas element

<JF> raman: these web apps can deliver a very rich experience

<JF> charleschen: Demos google docs speaking

<JF> raman: you don't get this massive soup of html

<JF> raman: spreadsheets are even more interesting as there is an MVC in the backend

<JF> raman: before we tried to do this with a rich xml stack

<JF> raman: what we are discovering is that web developers are trying to preserve their own sanity and have moved rich modeling into the JavaScript layer

<JF> raman: web services turned out to be a giant rat hole

<JF> raman: services, however, have not and are flourishing

<JF> raman: you can take the YouTube API and build a half-dozen UIs in 2-3 hours

<JF> raman: these things are very easy to build

<JF> raman: it is very empowering
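The pattern described above - one data service with several thin UIs layered on top - can be sketched as follows. This is an illustrative sketch only: the service, its fields, and the render functions are hypothetical stand-ins, not the actual YouTube API.

```python
# A hedged sketch of one data service powering multiple thin UIs.
# fetch_videos() is a stand-in for a call to a real JSON feed.

def fetch_videos():
    """Stand-in for a call to a data service; returns sample records."""
    return [
        {"title": "Intro to ARIA", "duration": 312},
        {"title": "Web Speech demo", "duration": 845},
    ]

def render_text_ui(videos):
    """A minimal visual list UI over the same data."""
    return "\n".join(f"{v['title']} ({v['duration']}s)" for v in videos)

def render_speech_ui(videos):
    """The same data phrased for a speech UI."""
    return ". ".join(
        f"{v['title']}, {v['duration'] // 60} minutes {v['duration'] % 60} seconds"
        for v in videos
    )

videos = fetch_videos()
print(render_text_ui(videos))
print(render_speech_ui(videos))
```

Because the UI layers share one service, adding a new modality (speech, simplified visual, captions) is a new render function, not a new application.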

<JF> raman: charles can you show a very quick demo?

<JF> charleschen: i am navigating through the various cells

<JF> raman: 5 years ago you would have had to fight a lot of issues as you were dependent on how it was drawn

<JF> charleschen: you can now drive this with javascript apis

<JF> raman: when you are building this you can control accessibility with the api

<JF> raman: today we can do much of this with ARIA markup
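A minimal illustration of the kind of ARIA markup referred to here - a div-based grid cell exposed to assistive technology, as in the spreadsheet demo. The roles and attributes are standard WAI-ARIA; the labels and values are illustrative only.

```html
<!-- Illustrative only: a script-driven grid whose semantics are
     conveyed to assistive technology through ARIA roles/states. -->
<div role="grid" aria-label="Expenses">
  <div role="row">
    <div role="gridcell" aria-selected="true">42.50</div>
    <div role="gridcell">Office supplies</div>
  </div>
</div>
```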

<JF> markfromATT: we need to define something that can deliver an app that can adapt to a broad range of users

<JF> raman: we have things for sip and puff interfaces

<jeanne> chair: Raman, Charles_Chen

<JF> raman: i believe accessibility by design will happen

<jeanne> Mark: then you plug in at the time of execution, you execute the UI options that you want.

<jeanne> raman: In 2003, when you asked developers to put in IDs, they laughed, but then they found that they needed those IDs.

<jeanne> ... when it doesn't work, it gets fixed.

<jeanne> ... when the developer can just address the API, they are more likely to do it.

<jeanne> Rich: Raman talked about building blocks. ARIA is a piece. Intentional Events is coming - where you can use different input methods across different devices. Next is creating the APIs that will give UI individualized for the user.

<jeanne> Mark: I am in HTML5 Speech group, and we have complaints from accessibility community of people who can't press a button without causing pain, and need to use speech for everything.

<jeanne> ... if you can't hear audio, why have the audio downloaded?

<jeanne> raman: It would save a lot of bandwidth if the blind person only downloaded the audio, not the video.

<jeanne> Rich: We need context aware applications, to give the person what they need at that time - a different UI to adapt to the users.

<jeanne> Matt: End user device discovery

<jeanne> Raman: Interesting to think it is device discovery. It is more capability discovery.

<jeanne> ... it gets boiled down to accessibility like a boolean value - which is not valid, because what is accessible to a deaf person may not be accessible to a blind person.

<jeanne> ... there is a situational context - if you are in a noisy location, you can't talk to your phone; it won't work. It is situationally determined.

<jeanne> ... when you have apps in the cloud and you know the user context, you can deliver a user interface that works for you in that context.

<jeanne> Matt: Device capability

<jeanne> raman: it's a combination of device capability and what you can do in that moment. You may be able to see a screen, but not be able to look at it at that moment.

<jeanne> Rich: This is some work we have been looking at for a while in Accessibility for All. It's a simple change to speak driving directions. Possibly through an API in the browser or local data storage in HTML5, so you can map to the capabilities turned on in the device, like captioning.

<jeanne> Raman: The platform is getting richer. Managing preferences sounds daunting, but it isn't that hard
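The capability-and-context discussion above can be sketched as a small mapping: accessibility is not a boolean, so model the capabilities available right now as a set and choose delivery modes that fit. All names here are illustrative assumptions, not a real platform API.

```python
# A hedged sketch of capability/context-based UI selection:
# "accessible" is not a boolean; pick output modes from what the
# user and situation support at this moment.

def choose_ui(capabilities):
    """Map the current capability set to suitable output modes."""
    modes = []
    if "can_view_screen" in capabilities:
        modes.append("visual")
    if "can_hear_audio" in capabilities and "quiet_environment" in capabilities:
        modes.append("speech")
    if "captioning_enabled" in capabilities:
        modes.append("captions")
    return modes or ["text_fallback"]

# Sighted user driving: a screen exists but cannot be looked at right now.
print(choose_ui({"can_hear_audio", "quiet_environment"}))  # ['speech']
# Deaf user with captions turned on.
print(choose_ui({"can_view_screen", "captioning_enabled"}))  # ['visual', 'captions']
```

Note that the same user gets different UIs in different situations, which is the point made above: the choice is situationally determined, not a per-user boolean.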

<JF> scribe:Rich

<JF> raman: the user interface in android delivers user targeted experiences

<JF> raman: you can buy the Chromebook on Amazon and you click the button and it adapts to you

<JF> raman: ultimately what you want is at the login screen you can specify what you want

<JF> raman: once you sign in and the system knows what you want you are on your way

<JF> raman: being able to do this in a generic machine is tremendously liberating

Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.136 (CVS log)
$Date: 2011/11/02 23:20:38 $
