W3C

- DRAFT -

March 6, 2019 ARIA and Assistive Technologies Community Group Telecon

06 Mar 2019

Attendees

Present
Matt-King, shadi, michael_fairchild, shimizuyohta, Wilco
Regrets
Chair
Matt King
Scribe
Matt-King

Contents

Topics
1. CSUN
2. Scope of assertions
3. Test case automation
4. Additional Notes
Summary of Action Items
Summary of Resolutions

<shimizuyohta> Matt: I have a conference on Thursday.

CSUN

<shimizuyohta> ARIA adoption and an overview of the project (scope, challenges)

<shimizuyohta> Matt: Hopefully this recruits more stakeholders

<shimizuyohta> Wilco: I will be there

<shimizuyohta> Matt: I have the presentation in the repository and am happy to share it if anyone is interested

Scope of assertions

<michael_fairchild> Yohta: we had a meeting to discuss assertions and we have a few questions

<michael_fairchild> Yohta: what do we expect to happen in reading mode vs interactive mode in JAWS? What information is enough to define support in application mode?

<michael_fairchild> Matt gave examples of grid and menubar interactions. Generally, the same information should be announced.

<michael_fairchild> Matt: expectations will likely be widget-specific

<michael_fairchild> Matt: this is an area where we might not be able to get consensus on these expectations across different screen readers

<michael_fairchild> Matt: one approach would be to document what we think is correct, then ask screen reader developers

<michael_fairchild> Yohta: Are we expected to list the exact output of screen readers?

<michael_fairchild> Matt: we don't need exact output, but 'dimmed' in voiceover vs 'unavailable' in JAWS is important info to know

<shimizuyohta> Matt: We need to document expected behaviors

<shimizuyohta> Matt: Compare expected behaviors for each AT

<shimizuyohta> Michael: Screen readers should be consistent within one assertion

<shimizuyohta> Michael: Assertions will be generalized, but the expected announcement would vary depending on the screen reader

<shimizuyohta> Matt: There are some functions that are screen reader-specific

<shimizuyohta> Michael: This discussion leads to Terrill's question

<shimizuyohta> Terrill's question: as you're wrestling with what to test for various design patterns: Are screen readers expected to support the keyboard interaction models that are documented in the WAI-ARIA Authoring Practices? And if so, and they already have the prescribed keys mapped to some other purpose, how are they expected to handle that? Since the keyboard interaction models are very clearly defined, perhaps that could be our basis for testing, unless screen readers aren't necessarily expected to support those models.

<shimizuyohta> Matt: If the screen reader's in interactive mode, then the web page should receive all keys that are documented in the pattern


Test case automation

<shimizuyohta> Michael: I think it's entirely possible, if we have libraries of possible assertions.

<shimizuyohta> Wilco: What's the idea of generating assertions?


<shimizuyohta> Matt: We have documentation for each example.

<shimizuyohta> Matt: Now we have documentation of the keyboard interactions, but we don't have a corresponding list of keys for every screen reader.

<shimizuyohta> Michael: We'll skip the meeting next week, and we'll probably run a survey to pick a meeting time for future meetings.

<shimizuyohta> Matt: I'd hope to break into groups to work on more specific tasks in March and April

<Wilco> I have to drop. Thank you all :)

Additional Notes

<scribe> scribe: Matt-King

Adding more notes after the meeting ended to help clarify some of what was captured during the meeting.

On the topic of scope:

1. A precondition of every aria-at assertion will be the mode of the screen reader (reading or interactive).

2. The postcondition of every assertion will be the screen reader response to a keyboard command issued by the tester. (A sketch of this structure follows this list.)

3. The keyboard commands will be either screen reader commands or widget commands implemented by the ARIA widget being tested.

4. When one of the preconditions is reading mode, the only commands that can be tested are screen reader commands.

5. When one of the preconditions is interactive mode, both widget commands and screen reader commands can be tested. However, the only screen reader commands that would typically be tested in this condition are commands that report something about the current condition, e.g., insert+tab in JAWS or NVDA reports the currently focused element and its state.

6. When in interactive mode, every keyboard command documented for an ARIA example will be tested.

7. For screen readers that support automatic mode switching, there will be assertions where the postcondition is the mode of the screen reader, e.g., when tabbing into a grid, assert that the mode switched to interactive mode.

8. For reading mode, there may be some elements where the expected behavior is not clear and we will need to partner with screen reader developers to develop consensus on what is best. We discussed how screen readers read menubars in reading mode.
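To make the structure above concrete, here is a minimal sketch in TypeScript. All type and field names are illustrative assumptions, not an agreed aria-at format:

    type ScreenReaderMode = 'reading' | 'interactive';

    // A command is either a screen reader command (e.g., insert+tab) or a
    // widget command documented for the ARIA example (e.g., Right Arrow).
    interface Command {
      keys: string;
      source: 'screenReader' | 'widget';
    }

    // The postcondition is either an expected response (worded per screen
    // reader, since e.g. VoiceOver says 'dimmed' where JAWS says 'unavailable')
    // or, for screen readers with automatic mode switching, an expected mode.
    type Postcondition =
      | { kind: 'response'; expected: { [screenReader: string]: string } }
      | { kind: 'modeSwitch'; expectedMode: ScreenReaderMode };

    interface Assertion {
      precondition: { mode: ScreenReaderMode };
      command: Command;
      postcondition: Postcondition;
    }

    // Note 7's example: tabbing into a grid should switch the mode.
    const gridModeSwitch: Assertion = {
      precondition: { mode: 'reading' },
      command: { keys: 'Tab', source: 'widget' }, // classifying Tab as a widget command is an assumption
      postcondition: { kind: 'modeSwitch', expectedMode: 'interactive' },
    };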

On the topic of generating assertions automatically:

1. We'll probably be able to automate generation of a large list of assertions that might apply to an example. We will then have to manually edit it down to the list of assertions that actually apply.

2. To generate assertions for interactive mode, we could first walk the table of keyboard commands that are implemented for the example, then, for each command, walk the table of roles, states, and properties implemented for that example (see the sketch after this list).

3. Consider a menubar: right arrow moves focus to the next menuitem. The keyboard table includes right arrow. The roles, states, and props table has documentation for the menuitem role, haspopup property, and expanded property. This could generate 3 assertions for right arrow, 3 assertions for left arrow, etc.

4. This approach could generate a lot of assertions that do not apply. Consider down arrow in a grid: since focus moves within the same column, the colindex assertion does not apply but would probably be generated. We'd manually remove the ones that do not apply and manually add expected outcomes.

5. Some screen reader commands apply to all examples. We need to document those.

6. Some screen reader commands are specific to examples, e.g., T key for moving to next table in reading mode applies only to tables and grids.
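Here is a minimal sketch of the generation approach in notes 2 through 4, again in TypeScript; the table and field names are hypothetical stand-ins for the example documentation:

    // Hypothetical documentation tables for one APG example (notes 2 and 3).
    interface ExampleDoc {
      keyboardCommands: string[];   // keys documented for the example
      rolesStatesProps: string[];   // roles, states, and properties documented
    }

    interface CandidateAssertion {
      key: string;
      property: string;             // the role, state, or property to assert on
    }

    // Walk the keyboard table, and for each command walk the roles, states,
    // and properties table; every pairing is one candidate assertion.
    function generateCandidates(doc: ExampleDoc): CandidateAssertion[] {
      const candidates: CandidateAssertion[] = [];
      for (const key of doc.keyboardCommands) {
        for (const property of doc.rolesStatesProps) {
          candidates.push({ key, property });
        }
      }
      return candidates;
    }

    // Menubar, per note 3: 3 candidate assertions for right arrow, 3 for left.
    const menubar: ExampleDoc = {
      keyboardCommands: ['Right Arrow', 'Left Arrow'],
      rolesStatesProps: ['menuitem role', 'aria-haspopup', 'aria-expanded'],
    };
    const candidates = generateCandidates(menubar); // 6 candidates in total

    // Per note 4, candidates that do not apply (e.g., a colindex assertion
    // for Down Arrow in a grid) are removed manually, and expected outcomes
    // are added manually.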

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2019/03/06 20:39:01 $
