W3C

- DRAFT -

User Agent Accessibility Guidelines Working Group Teleconference

25 Oct 2012

See also: IRC log

Attendees

Present
Jim_Allan, Jeanne, Greg_Lowney, Kim_Patch, +1.609.734.aaaa, Jan, Markku
Regrets
kford
Chair
JimAllan
Scribe
jallan

Contents


<trackbot> Date: 25 October 2012

<Jan> URI?

http://www.w3.org/WAI/UA/2012/ED-IMPLEMENTING-UAAG20-20121012/

News

<scribe> scribe: jallan

digital publishing is a new hot topic

do we have it covered in UAAG?

mh: ereaders are UAs
... ereaders using HTML rendering engines (WebKit, Mozilla)

jr: ones I've seen are all wrapped in security blankets that limit the flexibility and accessibility of the content

<mhakkinen> www.readium.org

open source ereader

mh: 1-day workshop on web and epublishing sponsored by W3C and IDPF

open item 1

<Jan> Starting at SC?

<Jan> JA: 2.1.1

http://www.w3.org/WAI/UA/2012/ED-IMPLEMENTING-UAAG20-20121012/

<Greg> 2.1.1 $$ Karen has dyslexia and uses gestures to navigate her mobile phone. As focus moves from one element to another, there is a visible focus indicator. -- Doesn't make clear what her dyslexia has to do with the focus indicator.

<Greg> Re 2.1.1 Jeremy example (etc.): is "webpage" as a single word the W3C standard term?

could be an 'attentional disorder'; the visible focus box helps Karen orient

<Jan> 2.1.4 "Ari uses VoiceOver on his iPhone"...seems very product-centric....would we say Ari uses TalkBack on his Samsung Galaxy?

2.1.1 Karen example seems to fit better in 2.1.2.

<mhakkinen> Alternate 2.1.1 Karen has muscular dystrophy and cannot easily use the onscreen keyboard to navigate Web pages on her mobile phone. Instead, she uses simple gestures to move between elements on the page. As focus moves from one element to another, there is a visible focus indicator.

2.1.4 could we just say screen reader on a smart phone to make it totally generic?

<Greg> Re 2.1.4 "$$ Ari uses VoiceOver on his iPhone to navigate a webpage. He selects an item and is able to activate the element using gestures. This requires sufficient screen real estate to perform gestures without changing focus." -- I don't think it makes clear how the screen real estate issue is tied to the rest. Perhaps say "He is able to use a series of gestures to select the correct item,...

<Greg> ...hearing feedback as he does so, and can then use gestures to activate or manipulate it without having to target it precisely."

<Jan> 2.3.1 Mary example....I think this may have gone too far into the operation of the mobile's OS

<Jan> 2.3.3 same comment as 2.3.1

2.1.4 Ari example seems irrelevant. In my use of VoiceOver, I have not seen it take up any more screen real estate than the original content.

+1 to Greg on 2.1.4 addition

<jeanne> 2.1.1 proposed: $$ Karen has dyslexia which often causes her confusion with directions. She uses gestures to navigate her mobile phone. As focus moves from one element to another, there is a visible focus indicator, which allows her to find the focus easily.

<Greg> Re 2.1.6 "$$ George is blind and uses the gestures on his mobile device to move focus to the top of the page, return to the previous web page and activate links." It's not explained how gestures relate to the "keyboard access" which is the topic of the SC. The term gesture occurs nowhere else in the discussion of 2.1.6.

<KimPatch> I think we should use both Mark and Jeanne's 2.1.1 examples

<mhakkinen> +1 to Jim's comment that 2.1.1 Karen example fits better with 2.1.2. Can we split the Karen example into 2.1.1 and 2.1.2: inability to use the onscreen keyboard and so use gestures (2.1.1), and focus highlight shifting via gestures (2.1.2)?

2.1.1 Karen has muscular dystrophy and cannot easily use the onscreen keyboard to navigate Web pages on her mobile phone. Instead, she uses simple gestures to move between elements on the page. As focus moves from one element to another, there is a visible focus indicator.

<Jan> 2.6.1 Maybe change as capitalized.... $$ Ingrid has low vision. When navigating a page with a smartphone, she can use both keyboard and gestures to OPERATE ALL OF THE CONTROLS within the page.

<jeanne> +1 to Jan's comment for 2.6.1

2.1.1 Karen has muscular dystrophy and cannot easily use the onscreen keyboard to navigate Web pages on her mobile phone. Instead, she uses simple gestures to move between elements on the page.

2.1.2 Karen has dyslexia which often causes her confusion with directions. She uses gestures to navigate her mobile phone. As focus moves from one element to another, there is a visible focus indicator, which allows her to find the focus easily.

+1 to Jan's 2.6.1

<mhakkinen> +1 to 2.1.1 and change 2.1.2 from Karen to Erin, then +1

<Jan> 2.7.4 seems like a stretch...that someone confused by new interfaces would be comfortable connecting a mobile to their computer

<KimPatch> editorial tweaks to Jim's:

<KimPatch> 2.1.2 Karen has dyslexia, which often causes her confusion with directions. She uses gestures to navigate her mobile phone. As focus moves from one element to another, a visible focus indicator allows her to find the focus easily.

<Greg> Re 2.2.4 "$$ Jeff has a mobility impairment. He uses gestures to navigate the page. When he reaches the last active element on the page there is an indicator that the end of the page is reached before changing focus (e.g. wrapping to the top, switching pages)." I'd recommend an actual example instead of the vague "an indicator", and something about why this helps him. For example, "Jeff...

<Greg> ...has a mobility impairment, and uses repeated gestures to move focus through the links and controls on the page so he won't have to accurately target the individual controls. If he makes the 'next' gesture when focus is already on the last link on the page, the browser "jiggles" the page contents briefly before visibly scrolling to the top of the document and then to the first...

<Greg> ...link. This is important because, unlike a user who flicks the screen upwards when they've reached the end, he would otherwise have no feedback indicating that he's started over at the top."

<jeanne> 2.7.4 - but on iOS with iTunes, it is plugging in one cable and pressing one button to reset to defaults.

+1 to Jan's 2.3.1 comment. Is a smartphone screen a UA? I think we should stick to UA content on the phone, not the OS

<Greg> Re 2.3.1 "$$ Mary cannot use the mouse or keyboard due to a repetitive strain injury. On her mobile phone, Mary uses a single speech command to select the app, rather than having to use multiple commands to page through screens to find the app icon and activate it." That doesn't seem to be about UA, but rather about the OS/shell.

<mhakkinen> 2.5.2 $$ Armand is blind. When he is using the screen reader on his smartphone to listen to a Web page, Armand can use gesture commands to select that he wants to navigate from heading to heading using left and right swipes across the display. One Armand finds at heading of interest, he changes the navigation mode so that the swipes now move between paragraphs.

<mhakkinen> Corrected 2.5.2 follows: 2.5.2 $$ Armand is visually impaired and uses a screen reader on his smartphone to listen to Web pages. Armand can use gesture commands to select that he wants to navigate from heading to heading using left and right swipes across the display. Once Armand finds a heading of interest, he changes the navigation mode so that the swipes now move between paragraphs.

2.3.1 Mary cannot use the mouse or keyboard due to a repetitive strain injury. She uses speech input with a mouseless browsing plug-in for her browser. **She is able to use the same plug-in on her smartphone.** The plug-in overlays each link with a number that can then be used to directly select it (e.g. by speaking the command "link 12"). This prevents Mary from having to say "tab" numerous times to select a link.

new text between **. Remove second Mary example in 2.3.1

2.3.3 use the same text as proposed by jallan for 2.3.1

<Greg> Re 2.3.2 Present Direct Commands in Rendered Content, "$$ When reading email on her tablet, Mary touches a control which opens a toolbar with a setting to display the accesskeys and other direct commands that the author created. She sees that a 3-finger swipe will delete the current email." Probably better to say "delete the current message". Also, does anything actually do something like...

<Greg> ...this, showing specific meanings of gestures like three-finger swipe? Is the infrastructure set up today to allow content to define that kind of mapping in a way the UA can recognize?

good question

<jeanne> I am pretty sure in 2.3.2 that Kathy was showing an example in Gmail on iOS.

are the gesture indicators an author requirement? I don't know enough about gestures!

<Greg> Jeanne, is that Gmail in a browser, or a Gmail app?

<Greg> Re 2.3.3 Mary example, the same as with 2.3.1, this seems to be guidance for the OS rather than for the UA.

scribe: it seems that having "2 finger swipe left" or "3 finger double tap" appear in a menu or next to a web app control takes a lot of screen real estate.

<Greg> Jim, having a help screen pop up that lists these mappings, which is then dismissed, seems reasonable. (Not having it persistent on a small display.)

<jeanne> I think there are many ways to go about it -- see the iOS custom gesture example, which is a one-touch menu that opens an additional menu.

thank you

2.6.1 New Ken - Ken is a speech input user. In order to get his work done in a reasonable amount of time and without overtaxing his voice, he uses a single speech command phrase to move the mouse up, left and click. He uses similar functionality on his wireless tablet to navigate web pages.

<Greg> Re 2.3.4 Present Direct Commands in User Interface, "$$ Neta has a repetitive strain injury. She relies on gestures and shortcuts to complete tasks. Using a specialize command on her mobile device, she can pull up a list of all the commands that can be completed in that context." This seems identical to 2.3.2. Also "specialized" rather than "specialize", although it really isn't specialized...

<Greg> ...since we recommend every UA implement it. And this is not just for mobile, as Windows 8 is promoting use of gestures on desktop/laptop machines.

<Greg> Jim, in your new 2.6.1 example, are you imagining a command that moves the mouse up and left a predefined distance and then clicks? Or somehow targets the next enabled element up and to the left?

greg - 2.3.4 should we remove 'specialize' altogether. Also, there have been gestures on laptop touchpads for years.

greg - 2.6.1 the latter, next/previous element

+1 to Jan 2.7.4

<Greg> Jim, might rework to be a bit more specific. I'm a bit skeptical that Ken would define specific commands to move and click in each of the directions, but Kim could probably give us a good, specific example to use.

<mhakkinen> 2.7.1 (revised) Betty is a low vision user and customizes her mobile browser's color and font settings to make text much easier to read. Her browser incorporates a cloud-based profile so that her settings are retained across browsing sessions, and also available on her desktop and tablet browsers.

<KimPatch> 2.6.1 Works both ways -- I use it both ways

<Greg> Re 2.5.2 Navigate by structural element, " $$ Armand is blind. When he is using the speech feature on his smartphone surfing the web, Armand can navigate from heading to heading using gesture commands." Might clarify "speech feature" to be "speech output feature" (to distinguish it from speech recognition).

2.7.4 the Jan example is more related to OS functionality, not the browser. new - Jan is easily confused by new interfaces. Using the screen reader capabilities on her mobile phone she changes the interface of the updated browser, but then can't figure out how to undo the changes. She uses an app from the browser developer to reset the browser settings to default.

<Greg> Re 2.6.1 Access to input methods, " $$ Ingrid has low vision. When navigating a page with a smartphone, she can use both keyboard and gestures to navigate within the page." I don't think this example applies to this SC, as navigation within a page is generally not "input methods explicitly associated with an element", but rather a feature of the UA as a whole.

discussion of writing.

<scribe> New 2.7.1 (revised) Betty has low vision and customizes her mobile browser's color and font settings to make text much easier to read. Her browser incorporates a cloud-based profile so that her settings are retained across browsing sessions, and also available on her desktop and tablet browsers.

<KimPatch> +1

<jeanne> +1

2.7.4 Jan is easily confused by new interfaces. Using the screen reader capabilities on her mobile phone she changes the interface of the updated browser, but then can't figure out how to undo the changes. She connects the device to her computer and restores the default settings.

$$ Ingrid has low vision. When navigating a page with a smartphone, she can use both keyboard and gestures to OPERATE ALL OF THE CONTROLS within the page.

use the Ken example above in 2.6.1

Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.137 (CVS log)
$Date: 2012/10/25 18:51:17 $
