W3C

- DRAFT -

ARIA and Assistive Technologies Community Group for Sep 25, 2019

25 Sep 2019

Attendees

Present
Matt_King, spectranaut, jongund
Regrets
Chair
Matt King
Scribe
Jean-Francois_Hector

Contents


<scribe> scribe: Jean-Francois_Hector

<jongund> I will be a few minutes late

Test case management systems

V: Started a wiki page to summarise findings so far. Documented two options; can talk about the third one.

<spectranaut> https://github.com/w3c/aria-at/wiki/Test-Case-Management-System-Research

Looked at 3 options, including a test harness maintained by the CSS Working Group.

Kiwi and TestLink are built around the paradigm of testing a product. Both have lots of users. They expect that the tests will be written in the application.

CSS Working Group Test harness is much simpler. Tests are all written in HTML files that get loaded in an iFrame.

Kiwi has a hosted demo version for people to look at and play around with.

The CSS WG Test harness is open to the public.

What's a good way to test these test harnesses for accessibility?

MCK: I don't know how far we want to go with that. These are open source. So we might give them a sniff test: is it a disaster or is it workable? The test harness should be practical for screen reader users to use. But if something is reasonably successful but includes some mistakes we could fork it or improve it.

JG: Kiwi's pull down menus don't have ARIA, but they do seem to support the keyboard (open and close menu, tab to features).
... Kiwi not using landmarks. Might be easy to get them to add those things.

V: You have to create a test plan and page before you can get to the page where you can run tests, which is probably the most important page to test.

<spectranaut> https://public.tenant.kiwitcms.org/plan/2429/menubar#testcases

<spectranaut> an example test plan

V: Clicking on this, log in, then click 'run'

JG: Some controls don't seem to be accessible (e.g. the switchers at the bottom).

MF: Looks pretty good overall, except for some critical instances where it falls short.
... Ran axe, found a few violations (especially bypass blocks and colour contrast). This is better than most.

V: Each test case is a markdown text field. It should have the instructions and the expectations.

MCK: How customisable is the data that we're collecting (e.g. the pass field and the notes)?

V: At the moment it's just 'pass or fail'. To be investigated. There's the concept of extensions in Kiwi.

We can import tests. Kiwi provides a Python API to import or update tests, if we want to save them separately outside the system.
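
(Scribe note: a minimal sketch of what importing a test via that API might look like, written against Kiwi's XML-RPC endpoint with Python's standard xmlrpc.client. The /xml-rpc/ path and the Auth.login / TestCase.create method names are assumptions based on Kiwi's documented RPC interface; credentials and field values below are placeholders, and a real client may need a cookie-aware transport or the official tcms-api package for authentication.)

    # Hypothetical sketch: sync one externally maintained test case into Kiwi TCMS.
    # Method names (Auth.login, TestCase.create) and field keys are assumptions
    # taken from Kiwi's RPC documentation, not verified against a live instance.
    import xmlrpc.client

    KIWI_URL = "https://public.tenant.kiwitcms.org/xml-rpc/"  # demo tenant from the minutes

    rpc = xmlrpc.client.ServerProxy(KIWI_URL, allow_none=True)
    rpc.Auth.login("aria-at-bot", "example-password")  # placeholder credentials;
    # session handling may require a cookie-aware transport in practice.

    # Test text is kept outside Kiwi (e.g. in the aria-at repo) and pushed in.
    case = rpc.TestCase.create({
        "summary": "menubar: navigate to the first item and verify name, role, state",
        "text": "1. Open the menubar example.\n"
                "2. Press Down Arrow.\n"
                "Expected: the screen reader announces name, role and state.",
        "category": 1,      # illustrative IDs only
        "priority": 1,
        "case_status": 2,
    })
    print("Created test case", case["id"])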

To run a test, there's a span that acts as a button called 'Run'. It activates a dropdown on hover. Couldn't make it work with the keyboard.

MCK: Clicking on run didn't seem to do anything. I had to move the mouse to hover on it, then a button appears.

Then hitting that button, I can't tell what happened.

This is a little problematic.

MF: Yes

V: Should I investigate whether there are ways to extend the view that we have when going through test cases? Should I look into the code and at the extension paradigms that they have and talk to the developers?

MCK: Not feeling super optimistic about Kiwi, given the amount of work it'd take to make it just accessible enough. It may be a fair amount of work.

When it comes to actually running a test, we might want to layer questions (e.g. in the case of failures). I don't see enough flexibility here to do branching.

V: I haven't found branching in the open-source software I've investigated that is being actively maintained.

What we're doing branches a bit from what some of these applications began as.

I can look at TestLink later today, for us to discuss next week.

<spectranaut> CSS Working Group test harness: http://test.csswg.org/harness/

Let's look at the CSS Working Group Harness (link just above)

This test harness is a lot simpler

It would also need to be extended for our particular use case.

It's nice that it's a simple interface doing a simple thing.

One active maintainer, who's enthusiastic about it being used

Login is authenticated with the W3C login system. Logging in only gives you the ability to review test results. The rest doesn't require logging in.

Can mark a test as Pass / Fail / Skip / Can't tell.

MF: Generally looks pretty good. But there's a keyboard trap (testing on Chrome). Stuff like that might be an easy fix, it depends

V: This is a PHP application (Kiwi is a Django/Python application).

This has little JavaScript in it.

MCK: If you don't need to log in, does that mean that anyone can come in and run test cases?

V: Yes

MCK: We want to know who runs the test. Especially if we contract people to run tests. We may want to only have results from people that we have verified / qualified.

If we don't know who the test results are coming from, that seems like a shortfall.

V: In the test results you can see the IP address, or, if they have logged in, you can see their name.

MCK: I wonder whether we could modify this to see the GitHub login.

V: It's designed for humans to run the test.

MF: Some tests are automated.

V: This works with WPT tests. It's in the same repository

JG: It'd be a plus if we were using WPT – if this platform was connected to that.
... I expect there'd be willingness within W3C to make changes for a W3C project.

MCK: If the tests are written in HTML, does it have to be just static text in the HTML, or could it include questions, or prompts, or branching? Could you put an HTML form into that HTML?

V: Don't see why not

JG: What would a form do?

MCK: E.g. if there was a failure, gathering more information about the failure. So that we can say 'everything was good', or, if it didn't pass, collect more detailed information. This would help speed things up.

JG: In WPT, I know that a test can have multiple parts.

V: WPT is mostly for automated tests. There's a JavaScript test harness that'll execute tests.

MCK: E.g. if the test is 'navigating to a checkbox in reading mode' and the expectation is 'read name, role and state': if it read name, role and state with the up and down arrows, but only role and state with the current-line key, you'd want to capture that it didn't read the state with the current-line key.

If you tested that and none of them failed, it'd be nice to not have to mark 9 passes.

We could require super granular input (e.g. per command, and separately for name, role and state). But if they all work, that seems like a lot of work. I'm trying to think about how to make the testing easier.
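
(Scribe note: the "9 passes" above comes from three commands – Up Arrow, Down Arrow, read current line – times three assertions – name, role, state. The following is only a hypothetical sketch of that per-command, per-assertion granularity and of a single "all pass" shortcut; no such result format exists yet.)

    # Hypothetical per-command, per-assertion result record for the checkbox example.
    # Three commands x three assertions = nine individual outcomes.
    COMMANDS = ["Up Arrow", "Down Arrow", "Read current line"]
    ASSERTIONS = ["name", "role", "state"]

    # Start from all passing, then record the failure described above:
    # state is not announced with the current-line key.
    results = {(c, a): True for c in COMMANDS for a in ASSERTIONS}
    results[("Read current line", "state")] = False

    all_pass = all(results.values())  # one "pass" click could stand in for all nine checks
    failures = [k for k, ok in results.items() if not ok]
    print("All pass:", all_pass)      # False
    print("Needs detail:", failures)  # [('Read current line', 'state')]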

JG: E.g. what if there was a 'partial' success button, that'd take you to another page?
... I like the simplicity of the CSS WG harness. Without much orientation I can go to tests and test something

V: Another nice thing is that tests are associated with the specifications and grouped that way. We could make it possible to group all the tests related to an ARIA attribute, or an example.

Should we focus on investigating the TestLink option, or on the CSS WG Harness?

MCK: Requiring our testers to have W3C logins wouldn't be too big a problem. But I am hoping that we could get some tight integration with GitHub, e.g. for discussion.

I'm not yet sure at what level we'd want to raise GitHub issues. But if there was the ability within the harness to press a button and create a GitHub issue, to allow discussion of specific results or the design of a test, that'd be good.

V: Currently all the tests are in the WPT repository. That'd be a GitHub repository. If people wanted to file issues on tests, they'd probably do it there.

Every WPT test is a test file in the WPT GitHub repository.

A test file could be in the ARIA-AT repo.

MCK: If a test is a test file in our repo, ... where would the code for the harness run?

V: It's not in WPT or in GitHub; it's in Mercurial.

It's a PHP application.

(Not sure, based on comment from MF)

MF: I think that this CSS WG solution is promising. I'd like to see a proposal of what it'd take to modify the CSS platform to meet our needs.

MCK: Agree

V: Kiwi has an orientation that's different from ours.

Will dig deeper into the CSS WG Harness, e.g. more structured information, partial passing.

MCK: Hosting and log-in requirements

V: I think that this would also come with proposing a design for what our tests would look like.

MCK: That's where looking at what both JF and Michael have done could hopefully be useful.

MF: Thank you so much, Valerie.

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2019/09/25 17:04:45 $
