W3C

- DRAFT -

Accessibility Guidelines Working Group Teleconference

29 Jun 2017

See also: IRC log

Attendees

Present
jasonjgw, david-macdonald, MichaelC, shadi, Detlev, MikeGower, KimDirks, steverep, Pietro, alastairc
Regrets
Chair
AWK
Scribe
alastairc

Contents

  1. Concurrent Input Mechanisms (cont)
  2. Support personalisation (minimum)

<AWK_> +AK

<AWK_> +AWK

<AWK_> -AK

<lisa> stuck with the webex password again

<lisa> sorry

<lisa> can someone send it on private chat again

<lisa> ah wrong webex

<lisa> thank u, almost in

Concurrent Input Mechanisms (cont) https://www.w3.org/2002/09/wbs/35422/MATFSC_june/results#xnew1

<lisa> in

<Kim> Andrew: no meeting next Tuesday

<david-macdonald> Someone is breathing

<Kim> Andrew: Concurrent input mechanisms – we talked with Kathy and Shadi yesterday. The questions that people were having were around the overall testability about

<Detlev> is that the right survey link for concurrent input mechanism?

<Detlev> ah yes sorry

<Kim> Andrew: requiring testability for every potential input – questions around what the use cases are, how users with disabilities are benefiting from this, and whether this is actually impacting everyone.

<Kim> Andrew: is this a general usability concern that impacts all users or is there a specific hook that makes it differentially applicable to users with disabilities

<Kim> Shadi: I do believe this is coming from research – use cases. Some users may need to switch, using maybe voice at times or typing, and mixing these depending on the type of input, the amount of input and what's needed

<Kim> Shadi: so there's user research behind it. There's always a question of degree and there are larger benefits beyond that

<Kim> Andrew: touchscreen enabled laptop, mobile phone with Bluetooth keyboard, allow users to switch between them. My guessed use case – an app tuned to mobile must recognize it needs keyboard access. That's already part of 2.0. If it's just confusion that keyboard support is needed, that needs to be hammered home more. If it's more than that then we need to do more

<david-macdonald> https://github.com/w3c/wcag21/issues/63#issuecomment-302276983

<Kim> Shadi: there is the aspect of ensuring that the keyboard can be used – not only relying on input devices. Kind of the inverted keyboard requirement that is already in, but also the other way around so it is really about the mixing – being able to use different input mechanisms at different times depending on the current need. For instance there might be certain touching that I can do on the...

<Kim> ...screen but then if it's more input a larger text field or something I will not be able to do that through pointing device and I will need a keyboard. It's both of the options that you just said as far as I understand

<AWK_> Kim: for example a speech user

<AWK_> ... can use speech for a lot but then need to use the keyboard for other things

<AWK_> ... probably low-vision use cases also

<AWK_> ... there has been a lot of trouble in the speech world because there are some things that you can't do well. Examples of someone assuming that you won't want to use the keyboard if you use speech.

<AWK_> ... they need to be able to be used simultaneously

<Kim> Jason: still have same concerns about use cases, specifically accessibility. I wouldn't mind if WCAG required enhancements that were good usability anyway, but I don't think people reviewing this are going to be sympathetic for requirements that are not well grounded in accessibility use cases. I think the vagueness and the imprecision of the use cases so far lead to the question whether...

<Kim> ...this requirement meets the conditions that it needs to.

<Kim> Jason: one concern is a lot of the use cases could be addressed at the operating system level. What would help the progress of this proposal would be very clear content-level use cases, where those use cases are more than hypothetical.

<Kim> Shadi: Jason, there's a little bit of a contradiction – if it's the operating system, the question is whether it's a requirement or not, so it's not a question of operating systems versus content, because content can lock into one mode – saying just touch, or just keyboard – rather than allowing the possibility backed by the operating system. The author also needs to consider that it is available as well on the...

<Kim> ...content level.

<Kim> David: if I understand correctly the success criterion would require all functionality to work with a mouse, all functionality to work with speech recognition, and all functionality to work with keyboard – and whatever the next thing is. We had a success criterion that said everything had to be functional with the mouse – it didn't get very far. This is wider than that if I understand correctly. Do...

<Kim> ...I have that right?

<Kim> Shadi: no – it's about the switching. Whatever is supported, you are able to switch from one to the other. So I can use the on-screen keyboard, touch, and then I can continue with the keyboard seamlessly. It doesn't require me to use only one. There are other requirements that ensure access by voice and keyboard. Assuming that these are supported, that these are usable – it's that you can...

<Kim> ...switch between the two.

<Kim> David: you say there are other places in the success criteria that require a mouse

<Kim> Shadi: keyboard particularly – under operability support different input mechanisms. These take care of the particular input mechanisms. This one is about the switching between – it doesn't say support all input mechanisms – just switching between.

<Kim> David: I'm more trying to figure out what this success criteria requires.

<Kim> Shadi: this allows switching between them – you should not deliberately exclude mouse, that's the point here

<Kim> Detlev: real-life example from a site test. Checked what kind of input was present – a laptop which has a touchscreen, linked to the monitor. I couldn't use it at all with the mouse, but I could use touch on a second monitor. The author was assuming touch meant a small device. The question remains if that's an accessibility issue. My testing for accessibility found the problem.

<Kim> Detlev: is it a general usability issue because the mouse is supposed to work anyway, or is it an accessibility issue
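A minimal sketch of the content-level lock-in Detlev and Shadi describe, assuming a web page; the submitForm handler is hypothetical. Binding only to touch events shuts out mouse and keyboard users, while a click event fires for touch, mouse and keyboard alike:

<!-- Touch-only wiring: a mouse or keyboard user cannot activate the control -->
<button ontouchend="submitForm()">Submit</button>

<!-- Input-agnostic wiring: click fires after a tap, a mouse click, or Enter/Space on the keyboard -->
<button onclick="submitForm()">Submit</button>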

<Kim> Mike: These are all flagged in issues. To what degree do you have to support inputs – keyboard already. Hardware versus author – how does a tester know? Lots of situations where you plug in a new device and the operating system doesn't smoothly transition – plug in headphones. Are those failures? Third issue – supported inputs – if you've offered something that only supports one kind of input...

<Kim> ...it's reasonable to assume that you're not going to support all types of inputs. Keyboard is already a requirement from 2.1.1, should be covered by that. The mouse versus touch thing is interesting. Is this more strongly an issue for someone with a disability, or a general issue?

<Kim> Jason: examples: cases where content is responsible for the problem, where input methods are supported individually but the user can't move from one to the other in the course of the interaction. That seems to be the scenario which this proposal is trying to address. I'm not satisfied that we have good "this is what the code would be" examples of this kind of problem with demonstrated accessibility...

<Kim> ...implications that are really significant. So I'd like to see more work done on the use cases, the requirements behind this proposal before it moves forward. I can just see critics being unsympathetic to the objectives or wanting to draw distinction between usability and accessibility. Should spend time working on the use cases and requirements and seeing if this is the right way to solve...

<Kim> ...the problem.

<lisa> time reminder

<gowerm> I covered four things: 1) how does a tester in particular distinguish between an OS-based "lag" or failure to switch between inputs, and an author-caused issue? 2) Keyboard 2.1.1 already captures the need for concurrent input. With some changes it could capture any perceived short-coming in covering switching inputs 3) the language does not protect an author from a legit desire to restrict...

<gowerm> ...input mechanisms to a subset 4) to what degree is this a usability issue, and not confined to an accessibility issue?

<Detlev> I guess we can put it on the back burner for now... accept proposal to work on use cases more..

RESOLUTION: send back to Mobile task force for better examples

Support personalisation (minimum) https://www.w3.org/2002/09/wbs/35422/COGA_new/results

<lisa> For pages that contain interactive controls or more than one region, one of the following is true:

<lisa> - a mechanism is available for personalization of content that enables the user to add symbols to interactive controls OR

<lisa> - contextual information or context-sensitive help for regions, form elements, main navigation elements and interactive controls is programmatically determined.

<Kim> Lisa: I've been mulling over the comments – I sent some new wording to the list. We still had some problems with it so I'm trying something else. It's got a little bit of Alastair's suggestion in it. Part of the problem is people didn't want too much work on the author. With the old way we had feedback – difficult to know.

<Kim> Lisa: that's giving people a third option so it's easier to comply. Also suggesting defining context sensitive help.

<Zakim> AWK_, you wanted to disagree that "region" is well-established

<Kim> Andrew: can you tell us what you think a region is

NB: WCAG 2.0 has 'section' that could work for that

<Kim> Lisa: that's defined in ARIA. It's a section of content that belongs together

<Joshue108> https://www.w3.org/TR/wai-aria/roles#region

<gowerm> "A perceivable section containing content that is relevant to a specific, author-specified purpose and sufficiently important that users will likely want to be able to navigate to the section easily and to have it listed in a summary of the page. Such a page summary could be generated dynamically by a user agent or assistive technology."

<Kim> Lisa: when you are using landmark regions, make sure something is not orphaned

<Kim> Andrew: not everything is going to be using ARIA, not everything is going to be using landmarks and regions, so we need to know what a region is

<Joshue108> and not everything will use them according to spec.

<Kim> Mike: if you don't support personalization every field is going to have to have a tab stop. So for a keyboard user to have context-sensitive help, they're going to have to have some kind of mechanism to open it, which typically is a tab stop, or some universal – currently nonexistent – form of launching help on a context basis

<Kim> Lisa: Function1 will do it – it's that kind of thing

<david-macdonald> Interested in where ARIA requires that if landmarks are used ALL content needs to be wrapped in a LANDMARK. I think that may be an internal IBM guideline but not in WAI-ARIA.
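A minimal sketch of landmark regions with nothing orphaned outside a landmark, using native HTML elements that map to ARIA landmark roles; the page structure is hypothetical, an illustration rather than a quote from WAI-ARIA or the proposed wording:

<!-- banner, navigation, main and contentinfo landmarks via native elements -->
<header>Acme Store</header>
<nav aria-label="Main">
  <a href="home.html">Home</a>
  <a href="products.html">Products</a>
</nav>
<main>
  <!-- a named section so it is exposed as a region landmark -->
  <section aria-label="Shipping address">
    <label>Street <input name="street"></label>
  </section>
</main>
<footer>Contact us</footer>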

<Kim> Lisa: I just stuck in context-sensitive help to make it more flexible, but we could take it out if it's a problem

<lisa> - contextual information for regions, form elements, main navigation elements and interactive controls is programmatically determined.

<david-macdonald> https://www.w3.org/TR/wai-aria/roles#landmark

<Kim> Mike: just to correct myself – wouldn't necessarily require a tab stop, but that's how it's typically implemented right now

<lisa> For pages that contain interactive controls or more than one region, one of the following is true:

<lisa> - a mechanism is available for personalization of content that enables the user to add symbols to interactive controls OR

<lisa> - contextual information for regions, form elements, main navigation elements and interactive controls is programmatically determined.

<Kim> Detlev: I don't understand how the "or" is linked up. One addresses the need to have the regions clearly marked up, and the other case is navigational elements where you might want to replace navigational terms with icons. I think these are two different cases and should not be jumbled together in one sentence. It's very hard to parse for me

<Kim> Lisa: I've put in wording which takes out part of the problem – takes out contextual help

<Joshue108> +1 to DavidMacD on nesting landmarks..

<Kim> Lisa: I think here it's really to exclude very simple pages that don't have interactive controls and that are just one region – like a simple screen in an app

<Kim> Detlev: special case if you want to leave out a link. You would absolutely have controls that go forward or backward

<marcjohlic> https://rawgit.com/w3c/wcag21/support-personalization_ISSUE-6/guidelines/terms/21/contextual-information.html

<Kim> Marc: simplified to get buy-in? Definition for contextual information – I would think a developer would ask how can I meet this, and if they boil it down the easiest way is contextual information for navigational, regional and textual controls. Would that not already be covered – providing relationship and role information already

<Kim> Lisa: but not the concept. You've said this is a button and given it text that says next or whatever, but text is not programmatically meaningful so the concept is missing. What might happen is this might get merged with something such as 1.3.1 or 4.1.2. You could just add the word concept into name, role, value and we're done. For now we are making a separate SC, then after August we can merge....

<Kim> ...Problem with 1.3.1 is "or text" disqualifies all our use cases. Text is not programmatically determinable.

<Kim> Detlev: what do you mean by not programmatically determinable – if it's a proper button, says next, and in the next page you are supposed to fill in your shipping address, next would be appropriate. Any extra information you supply may help some but also confuse others, so I don't see the point

<Kim> Lisa: see the demo – the text on the next button doesn't change so if you want to have the next button text there that's fine, the problem is that text is not programmatically meaningful especially if we're not allowing controlled language. If we were allowing controlled language on that text it would be programmatically meaningful, but we don't allow it. So therefore we don't know what...

<Kim> ...symbols to add for people who don't know the words

<KimDirks> *where's the demo Lisa just mentioned? Is there a link?

<Kim> Lisa: the context of the button

<Pietro> +1 to Lisa

<Kim> Lisa: it's not machine-understandable text, it's descriptive to a person. But if you're parsing it and adding a symbol to it, that won't work consistently. People won't know if this is a send. Let's say you had an application that added context-sensitive help. And someone had a button and they put a link – instead of saying homepage they say home. That links to the homepage. But it's...

<Kim> ...not always a link to the homepage, it could be a link to my home. You don't know from a programmatic point of view that when someone has the word home in a link it's a link to the homepage. So you can't consistently add an icon or a link to help, because you don't know. You know because of the role that it's a button, and you know because of the label that it has text that can serve as a label that...

<Kim> ...people should be able to understand, but a machine can't understand it consistently and add a symbol or move it to a place. That's why it's really important for it to be programmatically determined what it is – not just that it's a button to be pressed, but what it is

<Kim> Lisa: that I can add an icon to it, I can use terms that people understand – not what a designer assumes someone understands – or a tooltip – it will open the door to all kinds of things. It will open it up to huge audiences of people who currently cannot participate because they have a disability. We could add the separate success criterion and then merge it later.

<Kim> Alastair: from the proposed text I find it very difficult to assess scale. I'm a bit wary about the first bullet aspect and I can't think of a way to provide a scale on that unless we go down a similar route – offloaded to the ARIA spec in terms of name, role, value – as suggested in email earlier today.

<Kim> Andrew: so Alastair your concern is just that the scope of the first bullet might allow someone to minimally conform

<Kim> Chris: it's impossible to make the web accessible to everyone if they can't read a page. Let's assume that we want to try to do this and make this happen. I don't think that it's something that can happen at the application developer level, or at least not with the current operating system and UAAG requirements. Programmatically determining a button – there are things that have to happen to make that plausible.

<Kim> Chris: the technical capabilities to let that happen are just not there. They just don't exist to do that consistently. Maybe in one contrived example, sure, but consistently across every app and website in the world, no. This is something that has to be tackled at a much much bigger level.

<Kim> Lisa: I wanted to check if you've seen the techniques – have you read through all the techniques? We've got techniques that deal with this. We've got a spec and ARIA in the first working draft.

<Kim> Chris: it's a cool idea, I'm sure it can happen in the long run. But right now that cannot be done consistently.

<Kim> Lisa: I don't understand – can you explain why not.

<Kim> Andrew: can you say how you would identify that a home button is a home button – what are the semantics that would be required

From the spec: <a href="home.html" coga-destination="home" >our main page</a>

<Joshue108> Chris knows his onions, so I'm interested in why he thinks this requires a deeper architecture.

<Kim> Lisa: I put in a few different alternatives because people didn't want to just rely on ARIA. There's coga-destination="home" using ARIA. There's also a way using microdata – from schema.org. That's well understood semantics. That would be the mechanism of how you do it. ARIA – here's the universal way of describing it. And then because people have concerns with different platforms like...

<Kim> ...PDF, we've given them alternatives – they can do an icon – that's why we have the second bullet point, to further allay the worries.
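A minimal sketch of the coga-destination approach quoted from the spec above; the second link and its "help" value are hypothetical illustrations of how a personalization tool could attach a familiar symbol or term to each control:

<!-- From the draft pattern quoted above: the link's purpose is machine-readable -->
<a href="home.html" coga-destination="home">our main page</a>

<!-- Hypothetical value for illustration: a personalization tool could render a familiar help symbol here -->
<a href="support.html" coga-destination="help">get assistance</a>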

<Kim> Andrew: we are over time and there's no way were going to get through the queue. We're going to leave this one open

RESOLUTION: leave this one open

<AWK_> Jason: WRT to controls

<gowerm> I believe Marc has had to drop from the call, as do I.

<scribe> scribe: alastairc

Jason: Regarding a taxonomy of relevant concepts – it needs to be there. It is an interesting direction to take, good for research proposals, not so much for a widely applicable standard.
... if the research is done and it can be used, then we have a good case to put into WCAG.

<Zakim> AWK_, you wanted to clarify that we don't know that anything is "very likely" to be merged. That decision has not been made

<Zakim> marcjohlic, you wanted to ask if the items in the definition are AND statements. Meaning - is everything separated by semicolon required for "contextual information" to be met?

AWK: If something gets in, we can't say it is "very likely" to be merged. Could say it is possible, but not determined yet.

<chriscm> +1 to Coga being a new concept. Terrible support for it!

David: Think I understand, we have the coga attributes from the spec, which is new. Something that everyone can rally around, and then people can use it. I've read through the coga spec, it's really interesting and a good direction.

<chriscm> oops....

David: from WCAG, we aren't meant to be that creative, we should follow things which have momentum. Until we see something that has momentum, very wary of trying to create requirements from WCAG.

lisa: Putting the comment together, we have a chicken & egg situation. There's a huge amount of discrimination against people with cog issues. The longer this is delayed the worse it gets. Trying to find a way to solve the situation. There have been symbol browsers, we've done test pages.
... There was a strong objection last time to forcing people to use this extra vocab, which is why the first bullet is there. Not as good as the second bullet, but if even 1% of sites use that second bullet then it would make a huge difference.

<chriscm> -1 1% of people who comply. This is not good logic. WCAG is "legal requirement" for many organizations. This is a VERY dangerous thought process.

Lisa: there are three companies using this tech in different places, not in the US but in my locale. If it was a requirement it would be used straight away (in some places). Having this first bullet with the 'out', we're doing it so there is a comfort level there.
... everyone can have a personalisation button to get a conformant version

chriscm: There are a lot of SCs I would be passionate about if the current ecosystems and tech capabilities were there to use them. But when releasing an SC that doesn't have solutions available, developers will come up with hacky solutions.

<lisa> have left

chriscm: For places which have to conform to WCAG, then those solutions can make things worse, more confusing. Support the concept, love the passion, but my field is the technology side, and it is hard to see how that would work.

<lisa> i draw the line at being yelled at

trackbot, end meeting

Summary of Action Items

Summary of Resolutions

  1. send back to Mobile task force for better examples
  2. leave this one open
[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.152 (CVS log)
$Date: 2017/06/29 16:49:13 $
