13:54:49 RRSAgent has joined #wcag-act
13:54:49 logging to http://www.w3.org/2017/04/10-wcag-act-irc
13:54:51 RRSAgent, make logs public
13:54:51 Zakim has joined #wcag-act
13:54:53 Zakim, this will be
13:54:53 I don't understand 'this will be', trackbot
13:54:54 Meeting: Accessibility Conformance Testing Teleconference
13:54:54 Date: 10 April 2017
13:55:05 agenda?
13:55:08 agenda+ Issues #6, 7, 8, 15, 69 in GitHub on the ACT Framework Requirements https://github.com/w3c/wcag-act/issues
13:55:17 agenda+ Test case repository https://www.w3.org/WAI/GL/task-forces/conformance-testing/wiki/Test_case_repository
13:55:28 agenda+ Rules repository https://www.w3.org/WAI/GL/task-forces/conformance-testing/wiki/Rules_repository
13:55:40 agenda+ ACT benchmark https://www.w3.org/WAI/GL/task-forces/conformance-testing/wiki/Benchmark_requirements
13:55:52 agenda+ Next meeting 24 April, skipping one week due to holiday
14:00:16 Kathy has joined #wcag-act
14:01:24 maryjom has joined #wcag-act
14:01:38 rdeltour has joined #wcag-act
14:02:40 MoeKraft has joined #wcag-act
14:03:10 present+
14:04:32 present+
14:04:49 present+ MaryJoMueller
14:05:30 present+
14:05:58 present+ MoeKraft
14:06:09 scribenick MoeKraft
14:06:30 zakim, take up next
14:06:30 agendum 1.
"Issues #6, 7, 8, 15, 69 in GitHub on the ACT Framework Requirements https://github.com/w3c/wcag-act/issues" taken up [from Wilco]
14:07:17 cpandhi has joined #wcag-act
14:07:29 present+ cpandhi
14:08:07 https://github.com/w3c/wcag-act/issues
14:08:13 https://github.com/w3c/wcag-act/issues/6
14:08:16 Issue #6
14:08:30 Wilco: Let's look at Issue #6
14:08:57 https://github.com/w3c/wcag-act/issues/6
14:09:09 Wilco: Should be resolved with Issue #7
14:09:17 Wilco: But not sure how these relate
14:09:32 https://github.com/w3c/wcag-act/issues/7
14:09:37 https://w3c.github.io/wcag-act/act-fr-reqs.html
14:09:52 Wilco: Let's look at the requirements document as well
14:11:14 Wilco: Quickref is a summary of the list of techniques available
14:11:32 Wilco: Move away from the idea of having a 1-1 relationship between rules and techniques
14:11:43 Wilco: They will be at the same level, but let's get some thoughts
14:12:10 Wilco: Are we moving away from lining up rules with existing techniques?
14:12:44 Romain: We don't require a 1-1 mapping, but the original issue #6, I think, is one of clarity about the concerns of the requirements themselves
14:13:20 Romain: We need to clarify that the framework is about the rules format rather than the rules themselves. I think the term "format" should be used here.
14:13:55 Wilco: We need to take a closer look at the requirements document regarding the rename. But what are your thoughts on the relationship between rules and techniques?
14:14:06 Romain: I don't think we should have a 1-1 mapping
14:14:30 Wilco: Originally I thought we would connect them where feasible. But I'm more and more leaning towards keeping these separate
14:14:41 Wilco: Difference in scope and intent
14:14:50 Wilco: Anyone who doesn't agree with this?
14:15:02 Shadi: We always said to connect where feasible.
14:15:08 Shadi: What's wrong with this?
14:15:34 Wilco: The original idea was that we make rules as test procedures for techniques.
... I don't think this is the direction we are still moving in.
14:15:58 Shadi: Correct. Failures are sometimes described as techniques. Maybe those do have more of a 1-1 mapping
14:16:15 Wilco: Seems to me that failures will become obsolete as we define rules
14:16:45 Wilco: Instead of replacing test sections in the sufficient techniques, I think we should strive to make failure techniques obsolete
14:16:48 Shadi: Ok
14:17:16 Shadi: Let's not talk about this right now. Once we build up the repository this should come naturally.
14:17:35 Wilco: I think there is a lot of overlap. This is just a plausible direction we are taking
14:18:23 https://github.com/w3c/wcag-act/issues/7
14:20:19 Wilco: We should move away from the idea of having only negative tests. Automation tends to have mainly negative tests, but with semi-automation we could have positive tests. And with machine learning as well.
14:20:48 Wilco: Makes sense to take out the description that all tests are negative
14:21:10 q+
14:21:18 https://w3c.github.io/wcag-act/act-fr-reqs.html#rules-test-for-failures
14:22:18 Shadi: I'm fine with removing this statement. But are there times when it is better to take a certain approach? We want as much consistency as possible.
14:22:51 Shadi: Maybe give examples of how to write the tests
14:23:07 Wilco: This is tricky for me. It depends on the scope of the success criteria.
14:24:08 Wilco: The reason we do not do positive tests is that the scope of the rule is different from the success criteria. Based on any particular rule we cannot determine if it passed, because the scope is different.
14:24:22 Romain: I agree we should remove the restriction
14:24:51 Romain: Positive tests are viable, e.g. testing for alt text that is good
14:25:05 Wilco: Yes, that's true. Let's make this change.
14:25:17 https://github.com/w3c/wcag-act/issues/8
14:26:24 Charu: The rule checks if there is a label or not. If not, we return an error. If there is a label, we do nothing.
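[Scribe's aside: the negative-test pattern Charu describes, where a rule reports only missing required properties and otherwise stays silent, could be sketched roughly as follows. This is an illustrative sketch only, not an actual ACT, axe-core, or IBM rule; the function and field names are invented.]

```python
# Illustrative sketch of a "negative" test rule (invented names, not a real
# ACT rule): report a failure for each control without a label, and say
# nothing when a label is present -- presence alone cannot prove the
# success criterion is met (the label might still be meaningless).
def check_labels(form_controls):
    failures = []
    for control in form_controls:
        if not control.get("label"):  # required property missing
            failures.append({"id": control["id"], "outcome": "failed"})
    return failures

controls = [{"id": "email", "label": "Email address"}, {"id": "phone"}]
print(check_labels(controls))  # only the unlabeled control is reported
```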
14:26:48 Charu: Basically rules check for what is needed, and if it is not there a violation is issued.
14:27:11 Wilco: Right, so a positive test will tell you if something is done right, and a negative test if something is done wrong.
14:27:22 Charu: Most of our rules do negative tests
14:28:00 Wilco: That's right, most do.
14:28:47 Romain: It is important to know the difference though. Passed/failed or cannot tell. Does this mean passed is positive testing and failed is negative? And then all the rest is "cannot tell"?
14:30:14 Wilco: Looking at rule aggregation.
14:30:22 Charu: Trying to work with an example.
14:30:46 q+
14:30:55 Romain: If the ultimate goal is to assess a success criterion, and a positive test passes, then there is no longer a need to test the success criterion.
14:31:14 Romain: A negative test could pass, but that does not mean the success criterion is met
14:32:19 Shadi: I don't recall this as a definition for positive and negative. A positive test says the image has an alt attribute, and a negative test checks that it does not
14:32:33 Shadi: My understanding is that it is about the editing practice.
14:33:05 Shadi: Could also test if the alt text is bad, and this would be negative.
14:33:33 Charu: Can also say that there is alt text but it is not meaningful. This would then be a negative test
14:34:01 Charu: Don't positive tests create a lot of noise?
14:34:57 Wilco: What concerns me is whether we can pass an SC. The way we wrote it up now, no, we cannot pass an SC. We can only get negative tests. The question is: do we want to change this?
14:35:25 Wilco: I think we can have positive and negative tests, but we need to check and rewrite aggregation.
14:36:18 Shadi: A detailed SC like 4.1.1 could say pass or fail, regardless of manual or automated testing. But I think we set wrong expectations by saying we come to a pass conclusion.
14:36:28 Wilco: I don't think so.
14:37:15 Wilco: You can totally determine if audio has a sufficient alternative. Whether or not we can write a rule for it is the question.
... As we're discussing, that might be very possible, especially with manual tests.
14:37:23 Shadi: Yes. And we will have manual tests
14:37:59 Shadi: I think we were burned in the past where tools were making statements like "we didn't find issues, so this passes". We want to make sure not to encourage this.
14:38:22 Wilco: Important that we indicate when a rule may pass an SC
14:38:44 Shadi: Also people check all techniques and say it passes. This is not the full set of success criteria.
14:39:08 Shadi: People use techniques to meet SC.
14:39:15 Wilco: Right, and SC are not exhaustive
14:39:25 Wilco: How do we avoid this problem?
14:39:58 Wilco: Or do we stick to negative tests and never claim we pass an SC on any rules we write?
14:40:21 Wilco: Makes sense to consider that.
14:40:52 Shadi: Saying it passes if you can demonstrate that there is a technique.
14:41:10 Shadi: Changing my mind. :)
14:41:23 Shadi: Even with manual tests, are we not looking for mistakes?
14:42:36 Shadi: Whether a test is positive or negative depends on how it is written, but we are always looking for faults.
14:43:12 Shadi: To say we meet a requirement because we describe an image in such a way. When evaluating, we are looking for mistakes.
14:43:49 Shadi: Exhaustiveness is when you check all possible ways. Then you can make the statement that it passes the SC. Otherwise, we found one possible mistake.
14:44:06 Charu: Easier to find a mistake than to do exhaustive checking.
14:44:29 Shadi: Saying an SC passes is not that easy. I can only say I did not find any mistakes
14:44:51 Wilco: If someone uses a technique and the technique was used, I know that passes
14:45:26 Shadi: If you know what technique the author used, it's a whole different game. If you don't know, black box testing, then you have to guess/test all the different ways
14:45:52 Shadi: Yes, if we are doing manual testing, we have more human interaction and intelligence.
14:45:57 Wilco: I changed my mind too.
14:46:26 Wilco: I think it makes sense for scoping a little bit.
... In a way, techniques have positive tests, and these will remain valid.
14:46:46 Wilco: It even makes sense to stick to negative tests and say techniques are the positive tests.
14:47:35 Charu: There is also the question of whether a technique is supported
14:48:21 Wilco: Maybe we stick with negative tests but not call them that. Need to make sure that people don't use this as an exhaustive list.
14:48:39 Shadi: Agree with the concept, but I want to read up on positive and negative testing.
14:49:13 Shadi: The idea is that the technique is how we author something, and the test is the check for whether there is an error or something suspicious
14:49:33 Wilco: Rules as we describe them look more like the failure techniques than we set out to do
14:49:49 Wilco: Good discussion. Need to move on from this. Will circle back.
14:49:55 Zakim, take up next
14:49:55 I see a speaker queue remaining and respectfully decline to close this agendum, MoeKraft
14:50:06 ack s
14:50:13 q-
14:50:22 Zakim, take up next
14:50:22 agendum 2. "Test case repository https://www.w3.org/WAI/GL/task-forces/conformance-testing/wiki/Test_case_repository" taken up [from Wilco]
14:50:46 https://www.w3.org/WAI/GL/task-forces/conformance-testing/wiki/Test_case_repository
14:51:48 http://lists.w3.org/Archives/Public/public-wcag-act/2017Apr/0001.html
14:52:31 Wilco: For these repositories, we want to combine them in a way that is easy to consume, but not fork all of them.
14:52:38 Wilco: Let's put a format together.
14:53:10 Wilco: Look at the archived email.
14:53:58 Wilco: This has a suggested format that is easy to consume.
14:54:21 Wilco: If we could have a file like this for each repository, it would be easy to run against them
14:54:41 Charu: I like the idea. Do we have to write this for each test case?
14:55:03 Wilco: Yes. That would be the minimal amount of effort to make this consumable. Any suggestions to make it easier?
14:55:36 Romain: I think the format is good and easy to consume. It's pretty straightforward as well.
14:56:01 Romain: Not sure this is applicable to all the test repositories. It depends on the tests out there. Maybe it's not fine-grained enough.
14:56:17 q+
14:56:17 Romain: This format may not apply to HTML test samples. But it's definitely a good way to go.
14:56:36 Romain: Next step is to see if this matches the test format
14:56:49 Wilco: This is from axe-core.
14:56:55 Wilco: What about Va11yS?
14:57:11 Charu: My concern is that the Va11yS files are pretty much examples. They are not in a test case format
14:57:25 Charu: How would this apply to those?
14:57:31 https://ibma.github.io/Va11yS/HTML/H4_example2.html
14:58:28 Wilco: Are there ways to automate this, or add metadata to these files?
14:59:01 Wilco: Ideally we get a format that works for as many as possible
14:59:30 Wilco: Could we run a tool against the file and compare against the SC?
15:00:44 https://www.w3.org/WAI/ER/tests/
15:00:45 Wilco: Could we add success criteria to Va11yS?
15:01:19 Shadi: We had some test samples in 2006. We worked on metadata back then. We used just a subset of metadata, but it was rather laborious. Would be good to look at this.
15:02:29 Wilco: Not meeting next Monday, 4/17, for the Easter holiday.
15:03:08 present+ Kathy
15:10:13 trackbot, end meeting
15:10:13 Zakim, list attendees
15:10:13 As of this point the attendees have been rdeltour, shadi, MaryJoMueller, Wilco, MoeKraft, cpandhi, Kathy
15:10:21 RRSAgent, please draft minutes
15:10:21 I have made the request to generate http://www.w3.org/2017/04/10-wcag-act-minutes.html trackbot
15:10:22 RRSAgent, bye
15:10:22 I see no action items
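[Editor's postscript: the per-repository test case description file discussed under agendum 2 might look something like the sketch below. This is hypothetical; the field names and path are invented, not the actual format from the archived email or from axe-core.]

```python
# Hypothetical sketch of one entry in a per-repository test case metadata
# file (invented field names). A consumer could load such entries, run each
# tool against "url", and compare the reported outcome with "expected".
import json

test_case = {
    "url": "test-cases/img-alt/missing-alt.html",  # invented example path
    "successCriterion": "1.1.1",                   # WCAG SC the case targets
    "expected": "failed",                          # outcome a rule should report
}

serialized = json.dumps(test_case, indent=2)
print(serialized)
```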