<shadi> ACT Rules Definition
<shadi> ACT Rules Syntax
<shadi> ACT Rules Format
<shadi> ACT Rules Authoring Guide
<Wilco> ACT Language?
<shadi> ACT Rules Developer Guide
<shadi> https://www.w3.org/TR/test-metadata/
wilco: Is it necessary to have the word
"Rules" in the title?
... It implies there are rules, and that's a core aspect of what we want to do. I think it's important to have that.
<Wilco> accessibility conformance testing procedure definition
<Wilco> accessibility conformance testing procedure format
alistair: We are defining the test procedure as well, so saying "Rules" in the title is limiting.
... Why not have two documents - one for the rules format and one for
the test procedure.
shadi: I think when people hear "framework" they think of the spec plus the rules that auto-wcag developed, which we haven't yet adopted into this group. It seems to imply all of those things are included.
... Don't the test rules include the procedure? So we really are defining the whole thing in the test rules.
Alistair: Then we can call it ACT Rules Format or something like that.
shadi: We definitely want to start with
"ACT Rules" so it's the second part that we need.
... Should we put the selection in a survey?
wilco: There's not much time to complete this change.
<shadi> ACT Rules Description
wilco: "syntax" implies it is machine-readable, which it is not, so I don't like that term
<shadi> format++
Alistair: One of the problems is that auto-wcag meets once per month and we complete about 10 rules per year. So the rules that are ready are behind the times.
... I'm hoping this group will start developing the rules.
wilco: I agree that as we develop the framework or repository, we create more incentive for organizations to contribute.
... We don't have sufficient participation and commitment in auto-wcag
to move more quickly in the development of the rules.
alistair: I'm more interested in the test
cases. A light framework is a useful idea, but our effort should be
concentrated on test cases.
... This is off-topic, but in terms of what we are currently doing, ACT and the rules are behind where we currently are. That might be the case for many companies. But a lightweight set of example tests for conformance or non-conformance would be more useful.
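To illustrate the kind of lightweight example tests being discussed, here is a minimal sketch (not part of any ACT or auto-wcag deliverable; the snippets and the check are hypothetical): a pair of conformant and non-conformant HTML fragments for the "images need a text alternative" requirement (WCAG 1.1.1), plus a trivial automated check over them.

```python
# Hypothetical illustration of a conformant / non-conformant test-case pair
# and a minimal automated check; not an official ACT rule.
from html.parser import HTMLParser

CONFORMANT = '<img src="logo.png" alt="ACME Corp logo">'
NON_CONFORMANT = '<img src="logo.png">'


class ImgAltChecker(HTMLParser):
    """Collects <img> elements that lack an alt attribute."""

    def __init__(self):
        super().__init__()
        self.failures = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.failures.append(attrs)


def passes_img_alt_rule(html: str) -> bool:
    """Return True if every <img> in the fragment has an alt attribute."""
    checker = ImgAltChecker()
    checker.feed(html)
    return not checker.failures


print(passes_img_alt_rule(CONFORMANT))      # True
print(passes_img_alt_rule(NON_CONFORMANT))  # False
```

Pairing each example with an expected pass/fail verdict is what lets different tools compare their results against the same set.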
wilco: I know Level Access is looking at machine learning for testing, but I think that this spec is still useful.
wilco: Mary Jo will put out a call for
consensus to the task force on a new title of "ACT Rules Format".
... This pull request is due to John Avila's comment on the selector
definition being too technical. I addressed that with him and he
accepted my proposed change.
<Wilco> https://github.com/w3c/wcag-act/pull/72/files
Alistair: Pull request #73 looks good to me.
wilco: Pull request #72 was due to David's comments.
Alistair: We are aiming for a
'harmonized' method of testing.
... The machine learning methods will cause some problems with that, as
different tools will use different algorithms.
... The only way to completely avoid conflicting results is to have one tool that everyone uses.
wilco: This is to get the core set of
rules together, the tests we all agree on and can reasonably write down.
... It doesn't prevent additional tests.
wilco: We have updated the month-by-month plan to match the AG WG's schedule, because their charter ends on that date.
Alistair: I emailed my ideas to the list and no one has commented on them in the last 4 weeks.
... The broader group is not engaging with emailed topics.
... The idea is to keep it lightweight. To collect everyone's
understanding of what conformance is through examples.
... Discrepancies in understanding are easier to identify and iron out.
... If we have an understanding of what tests each tool uses to check conformance, then we can look at the differences and work to address them.
<Wilco> https://github.com/IBMa/Va11yS
alistair: There's no point in defining what framework might be used; we don't want to spend too much time on that, so keep it lightweight.
wilco: IBM's open project Va11yS is one way we could approach this.
alistair: The simpler the better, if the content represents what IBM believes to be conformant vs. non-conformant content.
wilco: What is Va11yS aimed at - test validation, or a teaching tool?
maryjom: It's Moe's project, and I'm not exactly sure but it might be a little of both.
wilco: For teaching you'd want the examples to be simple, but for testing you'd want to add in edge cases, so it shouldn't be both.
alistair: If we have a list pointing to the location of each of our sets of tests, I can set that up in a wiki page.
<Wilco> https://github.com/dequelabs/axe-core/tree/master/test/integration
wilco: The above link is where the axe-core tests are.
alistair: It would be great to get them
from WAVE and from Jon Gunderson as well.
... We could add IBM's, and any others out there.
wilco: Is the tool that Level Access is working on purely machine learning, or a mixture of traditional rules and machine learning?
alistair: We are creating algorithms for things like recognizing suspicious text, and are also using data analytics, comparing known human-audited content with non-audited content.
... We want to help DevOps teams learn techniques that can be more easily automated in testing - we identify those techniques and teach them.
maryjo: IBM is also looking into cognitive solutions for both implementation (like content simplification for users with cognitive disabilities), as well as using AI for an increased level of coverage in automated testing.