W3C

- DRAFT -

ARIA and Assistive Technologies Community Group Telecon

03 Jun 2020

Attendees

Present
Matt_King, zcorpan, Jemmaku, Joe_Humbert, jon_gunderson, michael_fairchild
Regrets
Chair
Matt King
Scribe
michael_fairchild

Contents


<scribe> scribe: michael_fairchild

Pilot

<zcorpan> https://github.com/w3c/aria-at/issues?q=is%3Aissue+is%3Aopen+%22Tester+issue+report+for%22

Matt_King: So far, the pilot generated many more issues than I anticipated
... a lot of really good and useful feedback
... I've learned two things about the ARIA-AT app itself
... 1. We need to get beyond the five-assertion limit
... especially important for posinset and setsize assertions
... very important to get that fixed and remove the limitation on assertions
... 2. When you generate an issue from the reporter, it creates a link to the test file. That link points to the raw HTML in the repo, which is not very helpful for people who want to understand what the problem is.
... we should link to the rendered version of the test (a public version of the test runner)
... did anyone else have feedback on the test itself?

michael_fairchild: what's the plan for the prototype, and can we use that as the public version of tests that we can link to?

zcorpan: I'd like to get rid of the prototype and get that functionality into the prod runner

michael_fairchild: (discussion about differences between the two repos, prod and test management repos, how to manage this moving forward while making it easy to publicly view tests and author tests)

zcorpan: (asks the group how the runner and tests can be improved)

Jon: it would be good to have a text area for output instead of an input
... an overview of the test plan would be helpful, because it's not always obvious how one test differs from another test

Matt_King: so something like a table of contents?

Jon: some instructions for how to copy output would be helpful
... the link to open instructions on how to set up your screen reader should open in a new tab/window.

<Jemma> the same issue for me

Jon: there was a save/continue button, but the message that came up wasn't very friendly. There might be some room for improvement there. It wasn't clear to me what was going to happen.

zcorpan: if you start a test and are halfway through, then save and close, it warns you that your progress won't be saved, because partial saving isn't finished yet.

Jon: overall, it works very well

Rob: in a few places it didn't give me feedback that it had succeeded in doing things. For example, I submitted an issue and nothing seemed to happen.

Matt_King: when you submitted a GitHub issue, were there times when you got positive feedback?

Rob: yes
... I think when I hit submit, nothing happened, so I closed the dialog and retried
... it was unexpected that after closing the issue dialog, a previous draft message was still there
... Sometimes, after starting a test run and while viewing a test, I would complete the test and click 'submit results', and once or maybe twice nothing seemed to happen.
... also, when I tried to edit previous results and hit 'save changes',
... it brought up a dialog that said something like 'your progress will not be saved'

Seth: So I think the edit button is working. It deletes the results and then repopulates the test form with the results. So at that point, you have to click 'submit results', which should then submit successfully. I think this is a UX issue, where we have buttons in the iframe and outside the iframe.
... it might be possible to move the 'submit results' button outside of the iframe, and also rethink the naming of the buttons and the UX.
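(A minimal TypeScript sketch of the idea just described: a 'Submit results' control in the parent page, outside the test iframe, asking the embedded test page for its results via postMessage. It assumes the test page is a same-origin iframe; the element IDs, message type names, and the saveResults helper are hypothetical, not the actual ARIA-AT runner code.)

    // Sketch only: 'Submit results' lives in the parent page, outside the test iframe.
    // Element IDs, message names, and saveResults() are hypothetical.
    const testFrame = document.getElementById('test-frame') as HTMLIFrameElement;
    const submitButton = document.getElementById('submit-results') as HTMLButtonElement;

    // Hypothetical stand-in for the runner's real result-saving logic.
    function saveResults(payload: unknown): void {
      console.log('saving results', payload);
    }

    submitButton.addEventListener('click', () => {
      // Ask the test page (assumed same-origin) to post its current results back.
      testFrame.contentWindow?.postMessage({ type: 'request-results' }, window.location.origin);
    });

    window.addEventListener('message', (event: MessageEvent) => {
      if (event.origin !== window.location.origin) return; // ignore messages from other origins
      if (event.data && event.data.type === 'results') {
        saveResults(event.data.payload);
      }
    });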

Rob: I think that is what was happening

Matt_King: it is a problem that you can't cancel out of save and close

(discussion of save and close and other interface adjustments)

Rob: one thing that would have been helpful would be pagination controls.

Matt_King: Jon suggested something similar

Rob: there were some tests that were presented to me for VO that had to do with interaction mode

Matt_King: that's probably a bug in the test data, not the test runner. I'll look at that.

Jon: when focus was automatically set in Chrome/VO, the screen reader started reading the entire thing. It didn't read the element that received focus; instead it read everything in the container.


Rob: but when I used keyboard focus, that issue didn't happen

Matt_King: that's likely a browser or screen reader bug
... you didn't have that problem in Safari?

Rob: I only did Menubar tests in Chrome

Matt_King: if setting focus is not helpful because it could have side effects, perhaps we shouldn't do it. We will have to revisit this. How should we report this?

(discussion on reporting)

Rob: some commands were combined, like "right arrow / left arrow", but each specific command yielded different results.

Matt_King: yes, most of the time the commands should be separated out. But it would be nice if the runner let us copy results for a previous command.

Rob: If we are not setting focus on a specific thing, it might be helpful to add instructions on how to navigate to a specific target. Tab, arrow, something else?

Matt_King: it shouldn't matter how you get to the starting point for the test as long as you get there.

zcorpan: if a tester notices a problem trying to navigate to the target, that could be a valid implementation bug in the screen reader itself

Matt_King: yes, but there should already be tests for those items.
... any feedback from other testers?

Jemma: I didn't finish the testing, but I did notice some of the issues that were already mentioned.
... when are we going to apply fixes to these issues?

Seth: we are working on fixes, but I don't know that it would be worth waiting

Matt_King: we do want to make sure that we run every plan at least once
... every test run at least once

Jemma: I'll do NVDA+Firefox for checkbox

michael_fairchild: are we extending the deadline?

Matt_King: let's extend it through the weekend
... let's make sure we get at least one set of complete results for every run

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version (CVS log)
$Date: 2020/06/03 20:05:00 $
