W3C

Accessibility Conformance Testing Teleconference

10 Apr 2017

See also: IRC log

Attendees

Present
Romain, Shadi, MaryJo, Wilco, Moe, Charu, Kathy, Chris
Regrets
Chair
Wilco, MaryJo
Scribe
Moe

Contents


Issues #6, #7, #8, #15, #69 in GitHub on the ACT Framework Requirements https://github.com/w3c/wcag-act/issues

https://github.com/w3c/wcag-act/issues

<Wilco> https://github.com/w3c/wcag-act/issues/6

Issue #6

Wilco: Let's look at Issue #6

https://github.com/w3c/wcag-act/issues/6

Wilco: This should be resolved together with Issue #7
... But I'm not sure how these relate

https://github.com/w3c/wcag-act/issues/7

<Wilco> https://w3c.github.io/wcag-act/act-fr-reqs.html

Wilco: Let's look at the requirements document as well
... The Quickref is a summary of the list of available techniques
... We want to move away from the idea of having a 1-1 relationship between rules and techniques
... They will be at the same level, but let's get some thoughts
... Are we moving away from lining up rules with existing techniques?

Romain: We don't require a 1-1 mapping, but for the original issue #6 I think the problem is one of clarity about the concerns of the requirements themselves
... We need to clarify that the framework is about the rules format rather than the rules themselves. I think the term "format" should be used here.

Wilco: We need to take a closer look at the requirements document regarding the rename. But what are your thoughts on the relationship between rules and techniques?

Romain: I don't think we should have 1-1 mapping

Wilco: I originally thought that we would connect them where feasible, but I'm leaning more and more towards keeping these separate
... There is a difference in scope and intent
... Anyone who doesn't agree with this?

Shadi: We always said to connect where feasible.
... What's wrong with this?

Wilco: The original idea was that we would write rules as test procedures for techniques. I don't think that is the direction we are still moving in.

Shadi: Correct. Failures are sometimes described as techniques. Maybe these do have more of a 1-1 mapping

Wilco: Seems to me that failures will become obsolete as we define rules
... Instead of replacing test sections in the sufficient techniques, I think we should strive to make failure techniques obsolete

Shadi: Ok
... Let's not talk about this right now. Once we build up the repository this should come naturally.

Wilco: I think there is a lot of overlap. This is just a plausible direction we are taking

<Wilco> https://github.com/w3c/wcag-act/issues/7

Wilco: We should move away from the idea of just having negative tests. Automation tends to have mainly negative tests, but with semi-automation we could have positive tests, and with machine learning as well.
... It makes sense to take out the description that all tests are negative

https://w3c.github.io/wcag-act/act-fr-reqs.html#rules-test-for-failures

Shadi: I'm fine with removing this statement. But are there times when it is better to take a certain approach? We want as much consistency as possible.
... Maybe give examples of how to write the tests

Wilco: This is tricky for me. It depends on the scope of the success criterion.
... The reason we do not do positive tests is that the scope of a rule is different from the success criterion. Based on any particular rule we cannot determine whether the criterion is passed, because the scope is different.

Romain: I agree we should remove the restriction
... Positive tests are viable, e.g. testing whether alt text is good

Wilco: Yes. That's true. Let's make this change.

<Wilco> https://github.com/w3c/wcag-act/issues/8

Charu: Our rule checks whether there is a label or not. If there isn't, we return an error. If there is a label, we do nothing.
... Basically, rules check for what is needed, and if it is not there a violation is issued.

Wilco: Right, so a positive test will tell you if something is done right, and a negative test if something is done wrong.
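
A minimal sketch of that distinction, assuming hypothetical check names and a simple passed/failed/cantTell outcome; this is illustrative only and not the ACT rules format itself:

  // Hypothetical sketch only; the function names and outcome values are illustrative.
  type Outcome = "passed" | "failed" | "cantTell";

  // Negative check: an image with no alt attribute and no aria-label is a
  // definite failure; anything else is left undecided.
  function imgMissingNameCheck(img: HTMLImageElement): Outcome {
    const hasAlt = img.hasAttribute("alt");
    const hasLabel = img.hasAttribute("aria-label");
    return !hasAlt && !hasLabel ? "failed" : "cantTell";
  }

  // Positive check: a non-empty alt attribute shows the technique was applied,
  // but whether the text is meaningful still needs human judgement.
  function imgHasAltTextCheck(img: HTMLImageElement): Outcome {
    const alt = img.getAttribute("alt");
    return alt !== null && alt.trim().length > 0 ? "passed" : "cantTell";
  }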

Charu: Most of our rules do negative tests

Wilco: That's right, most do.

Romain: It is important to know the difference, though: passed, failed, or cannot tell. Does this mean passed is for positive testing and failed is for negative testing, and then everything else is cannot tell?

Wilco: Looking at rule aggregation.

Charu: Trying to work with an example.

Romain: If the ultimate goal is to assess the success criterion, then when a positive test passes there is no longer a need to test the success criterion.
... A negative test could pass, but that does not mean the success criterion is met
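
A rough sketch of the asymmetry Romain describes, using illustrative names; this does not claim to match any published ACT aggregation algorithm:

  // Illustrative only: one way rule outcomes might roll up to a success criterion.
  type Outcome = "passed" | "failed" | "cantTell";

  interface RuleResult {
    kind: "positive" | "negative"; // what the rule is written to detect
    outcome: Outcome;
  }

  function aggregateForSC(results: RuleResult[]): Outcome {
    // Any confirmed failure means the success criterion is not met.
    if (results.some(r => r.outcome === "failed")) return "failed";
    // On Romain's reading, a passing positive test would satisfy the criterion.
    if (results.some(r => r.kind === "positive" && r.outcome === "passed")) return "passed";
    // Negative tests that merely found nothing wrong remain inconclusive.
    return "cantTell";
  }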

Shadi: I don't recall this as the definition of positive and negative. A positive test says the image has an alt attribute, and a negative test checks that it does not
... My understanding is that this is about the editing practice.
... We could also test whether the alt text is bad, and this would be negative.

Charu: We can also say that there is alt text but it is not meaningful. That would then be a negative test
... Don't positive tests create a lot of noise?

Wilco: What concerns me is whether we can pass an SC. The way we wrote it up now, no, we cannot pass an SC; we can only get negative results. The question is, do we want to change this?
... I think we can have positive and negative tests but we need to check and rewrite aggregation.

Shadi: For a detailed SC like 4.1.1 we could say pass or fail, regardless of manual or automated testing. But I think we set the wrong expectations if we say we come to a pass conclusion.

Wilco: I don't think so.
... You can totally determine whether audio has a sufficient alternative. Whether or not we can write a rule for it is the question. As we're discussing, that might be very possible, especially with manual tests.

Shadi: Yes. And we will have manual tests
... I think we were burned in the past by tools making statements like "we didn't find issues, so this passes." We want to make sure not to encourage this.

Wilco: Important that we indicate when a rule may pass an SC

Shadi: Also, people check all the techniques and say it passes. But the techniques are not the full set for the Success Criteria.
... People use techniques to meet SC.

Wilco: Right and SC are not exhaustive
... How do we avoid this problem?
... Or do we stick to negative tests and never claim we pass an SC with any rules we write?
... Makes sense to consider that.

Shadi: We would be saying it passes if you can demonstrate that a technique is there.
... Changing my mind. : )
... Even with manual tests, are we not looking for mistakes?
... Whether a test is positive or negative depends on how it is written, but we are always looking for faults.
... We say we meet the requirement because we described an image in such a way. When evaluating, we are looking for mistakes.
... Exhaustiveness means checking all the possible ways. Then you can make the statement that it passes the SC. Otherwise, we have only looked for one possible mistake.

Charu: It is easier to find a mistake than to do exhaustive checking.

Shadi: Saying an SC passes is not that easy. I can only say I did not find any mistakes

Wilco: If someone uses a technique and the technique was applied correctly, I know that it passes

Shadi: If you know what technique the author used, it's a whole different game. If you don't know, as in black-box testing, then you have to guess and test all the different ways
... Yes, if we are doing manual testing, we have more human interaction and intelligence.

Wilco: I changed my mind too.
... I think it makes sense for scoping it a little bit. In a way, the techniques have positive tests, and those will remain valid.
... It even makes sense to stick to negative tests and say the techniques are the positive tests.

Charu: There is also the question of whether a technique is supported

Wilco: Maybe we stick with negative tests but don't call them that. We need to make sure that people don't use this as an exhaustive list.

Shadi: I agree with the concept but want to read up on positive and negative testing.
... The idea is that the technique is how we author something, and the test is the check for whether there is an error or something suspicious

Wilco: The rules as we describe them look more like the failure techniques than we set out to make them
... Good discussion. Need to move on from this. Will circle back.

Test case repository https://www.w3.org/WAI/GL/task-forces/conformance-testing/wiki/Test_case_repository

https://www.w3.org/WAI/GL/task-forces/conformance-testing/wiki/Test_case_repository

<Wilco> http://lists.w3.org/Archives/Public/public-wcag-act/2017Apr/0001.html

Wilco: For these repositories, we want to combine them in a way that is easy to consume, but without forking all of them.
... Let's put a format together.
... Look at the archived email.
... This has a suggested format that is easy to consume.
... If we could have a file like this for each repository, it would be easy to run tools against them
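
A hypothetical illustration of such a per-repository file, sketched in TypeScript; the actual format proposed in the linked email may differ, and the field names here are assumptions:

  // Assumed illustration only; the actual format proposed in the linked email
  // may differ. One record per test case, published as a single file per
  // repository so that tools can be run against it without forking the repository.
  interface TestCaseRecord {
    url: string;              // where the test case file lives
    successCriterion: string; // WCAG SC the case exercises, e.g. "1.1.1"
    expected: "passed" | "failed" | "inapplicable"; // expected outcome
    description?: string;     // short human-readable summary
  }

  const exampleManifest: TestCaseRecord[] = [
    {
      url: "https://example.org/test-cases/img-alt-missing.html",
      successCriterion: "1.1.1",
      expected: "failed",
      description: "Image without an alt attribute or other accessible name",
    },
  ];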

Charu: I like the idea. Do we have to write this for each test case?

Wilco: Yes. That would be the minimal amount of effort to make this consumable. Any suggestions to make it easier?

Romain: I think the format is good and easy to consume. It's pretty straightforward as well.
... I'm not sure this is applicable to all the test repositories. It depends on the tests out there. Maybe it is not fine-grained enough.
... This format may not apply to the HTML test samples. But it is definitely a good way to go.
... The next step is to see if this matches the test format

Wilco: This is from Axe Core.
... What about Va11yS?

Charu: My concern is that the Va11yS samples are pretty much examples. They are not in a test case format
... How would this apply to those?

<Wilco> https://ibma.github.io/Va11yS/HTML/H4_example2.html

Wilco: Are there ways to automate this or add metadata to these files?
... Ideally we get a format that works for as many repositories as possible.
... Could we run a tool against the file and compare the results against the SC?

<shadi> https://www.w3.org/WAI/ER/tests/

Wilco: Could we add Success Criteria to Va11yS?

Shadi: We had some test samples in 2006 and worked on metadata back then. We used just a subset of metadata, but it was rather laborious. It would be good to look at this.

Wilco: We are not meeting next Monday, 4/17, due to the Easter holiday.

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.152 (CVS log)
$Date: 2017/04/10 17:48:10 $