14:05:20 RRSAgent has joined #wcag-act
14:05:20 logging to http://www.w3.org/2016/10/05-wcag-act-irc
14:05:22 RRSAgent, make logs public
14:05:24 Zakim, this will be
14:05:24 I don't understand 'this will be', trackbot
14:05:25 Meeting: Accessibility Conformance Testing Teleconference
14:05:25 Date: 05 October 2016
14:05:33 scribenick: rdeltour
14:05:37 agenda+ Availability survey
14:05:39 agenda+ ISO take-aways, Alistair & Charu
14:05:42 agenda+ ARIA test harness take-aways, Wilco
14:05:44 agenda+ Web platform test harness
14:05:47 agenda+ Look at github frontend
14:05:51 agenda+ Open Actions
14:05:54 agenda+ ACT Planning
14:06:03 present+ wilco
14:06:05 present+
14:06:10 present+ shadi
14:06:18 zakim, next item
14:06:18 agendum 1. "Availability survey" taken up [from Wilco]
14:06:30 Topic: Availability survey
14:06:34 https://www.w3.org/2002/09/wbs/93339/availability/
14:06:40 wilco: shadi set up a survey for our availability
14:06:48 ... please fill it out!
14:07:51 shadi: you can keep updating the survey, it will keep the latest entry
14:08:06 ... before or after the call
14:08:01 zakim, next item
14:08:01 agendum 2. "ISO take-aways, Alistair & Charu" taken up [from Wilco]
14:08:16 Topic: ISO take-aways
14:08:49 alistair: it was an overview of the good practice people provide on acceptance tests
14:08:55 ... it's an area we're interested in
14:09:02 ... we're looking for features
14:09:13 ... positive or negative features, how to write those up
14:09:29 ... then best practices on how to write those tests
14:09:31 Kathy has joined #wcag-act
14:09:38 ... not rocket science, but good advice
14:09:57 ... basics that software testers learn
14:10:07 q+
14:10:12 wilco: do we have both positive and negative feature tests?
14:10:24 alistair: failure techniques generate negatives
14:10:34 ... success techniques generate positives
14:10:40 ... we can look at negatives first
14:11:00 present+ Kathy
14:11:07 wilco: looks good, we should take that into consideration
14:11:22 present+ Alistair
14:11:22 ... a11y testing does have both positive and negative testing
14:11:36 ... looking at axe and other tools, there's a clear distinction
14:11:42 ... it seems it makes a lot of sense
14:12:08 ... what I still don't think we have is the ability to say "pass"
14:12:20 ... so it's non-conformance testing instead of conformance testing
14:12:28 ... is there anything about that in ISO?
14:12:41 alistair: the ISO stuff doesn't go into these details
14:12:51 ... it's more about writing standards
14:13:13 wilco: so we'd have to decide on whether we want to do that
14:13:26 alistair: in my experience you settle on one way
14:13:55 ... what we want to decide is whether the end product is a claim or rather "you have not done this"
14:14:03 wilco: it seems to me we're aiming at the latter
14:14:10 alistair: right
14:14:27 wilco: did you work with Charu?
14:14:39 alistair: no, just did this 20 mins in the morning ;-)
14:14:56 q?
14:15:02 ack me
14:15:03 ack s
14:15:21 shadi: things we can start putting in requirements:
14:15:29 ... atomicity: how atomic is a test
14:15:49 ... another question is what we just discussed about positives or negatives
14:16:02 ... I feel a lot of this is good stuff to put in some kind of requirements
14:16:19 ... atomicity, relation to SC
14:16:31 ... translate what we mean by atomic in the context of WCAG
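[illustrative sketch, not from the meeting: the atomicity point above, shown as code. A broad check bundles several conditions into one result, which is hard to report on; atomic checks test one condition each. All function names and the TypeScript framing are hypothetical.]

    // Assumed outcome vocabulary for this sketch only.
    type Outcome = "pass" | "fail";

    // Non-atomic: several conditions folded into one result.
    // A "fail" here doesn't say which condition was the problem.
    function imagesAreAccessible(img: HTMLImageElement): Outcome {
      const ok =
        img.hasAttribute("alt") ||
        img.hasAttribute("aria-label") ||
        img.getAttribute("role") === "presentation";
      return ok ? "pass" : "fail";
    }

    // Atomic: one condition per check; a rule can combine their outcomes
    // and report on each separately.
    const hasAltAttribute = (img: HTMLImageElement): Outcome =>
      img.hasAttribute("alt") ? "pass" : "fail";
    const hasAriaLabel = (img: HTMLImageElement): Outcome =>
      img.hasAttribute("aria-label") ? "pass" : "fail";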
14:16:53 alistair: the first thing is not to build massive tests
14:17:03 ... for an SC, look at the techniques you can use to meet it
14:17:12 ... then break it down into smaller tests
14:17:37 ... that means we need to know what techniques we're following
14:17:46 ... another thing is non-technique tests
14:18:03 ... e.g. an image: a technique test is to test if there's an aria-label value
14:18:23 ... a non-technique test is "does the image have a computed name value"
14:18:53 shadi: is this now the right time to discuss this and put that in reqs?
14:19:19 ... so that we have an understanding, and detail it later in the spec
14:19:24 https://www.w3.org/WAI/GL/task-forces/conformance-testing/wiki/ACT_Framework_Requirements
14:19:25 ... right now our reqs are very high level
14:19:52 alistair: ultimately we want to achieve a massive amount of test coverage
14:20:02 ... we'll want both approaches
14:20:15 ... top-down: you don't know the underlying techniques
14:20:29 ... bottom-up: you know the techniques, test if they're implemented well
14:20:41 wilco: I've looked at other areas, e.g. SVG
14:21:16 ... if you look to make assessments about a11y support requirements
14:21:30 ... you need to consider which exact attribute led to an accessible name
14:21:41 ... it goes back to techniques
14:22:00 ... you may have used a known technique, or another one. we need to report on these differences
14:22:13 alistair: it's almost the difference between warnings and fails
14:22:32 ... you won't be able to say "you've done it wrong" if you look only at aria-label for instance
14:22:58 q+
14:23:17 ... if you look at the outcome, it's about the SC
14:23:43 ... if you look at the techniques, you can say "yes you pass" but maybe a better way is another technique (???)
14:23:49 ack me
14:24:17 shadi: what I don't know in our fwk is if we're expecting some kind of a11y support as input to our test
14:24:23 q+
14:24:24 ... assumptions as input
14:24:36 ... for that baseline you define what does or doesn't work
14:24:51 ... I don't want to get into the specifics right now
14:25:09 ... is this the stuff we want in our req document?
14:25:24 q+
14:25:28 kathy: on the a11y support side we'll always have difficult times
14:25:38 ... to say this technique is supported or not
14:25:46 ... so many different components are part of it
14:25:59 ... we can't really have that in here. would be great if we could
14:26:17 q+
14:26:19 ... but I'd be concerned if we tried to put a11y support there, if only for maintenance difficulty
14:26:24 ack kathy
14:26:37 wilco: we've discussed a11y support before, when we started ACT
14:26:49 ... we don't want to have that baked into rules, for the reasons you mentioned
14:26:59 ... you need to decide on granularity
14:27:21 ... what we could achieve, ideally, is to get to a point where the user has some sort of baseline
14:27:30 ... (probably up to the tools developers)
14:28:04 ... so you can say "given this support", maybe a matrix, as input to a rule
14:28:15 ... so we're not stuck in a11y support issues
14:28:32 alistair: I agree 100% with you both
14:28:37 ack aga
14:28:56 ... definitely not put that into the tests themselves, but make sure we have the broader coverage
14:29:16 ... the tools developer can apply some weightings, but that's not for us to hardcode in the tests
14:29:33 wilco: what I want is to put a11y support data as input, and build results based on that
14:29:47 alistair: it looks like something tools developers should put in
14:29:58 ... we're talking about how to write tests
14:30:05 ... how to utilize them is different
14:30:15 shadi: isn't it part of how to write tests?
14:30:31 ... I agree that we shouldn't hard-code a11y support in tests
14:30:59 ... whether we want a mechanism that tool developers or a user puts a database of assumptions in
14:31:21 alistair: it's still metadata related to the test, so you need to maintain it
14:31:37 shadi: let's take a simple example, aria-label
14:31:45 ... right now, the test is to check for aria-label
14:32:04 ... if we're not considering a11y support, it's just a test to see if aria-label exists
14:32:38 ... a 2nd approach is to have, somewhere else in the test rule, a statement to say "this rule relies on aria-label being implemented"
14:32:54 ... so when you implement, you can look at these statements
14:33:22 ... in the reqs, we want to come to an agreement and write that down
14:33:34 ... how our fwk deals with a11y support statements
14:33:53 ack me
14:33:56 alistair: I agree
14:34:16 ... you need to push the information into a kind of "relies upon" section
14:34:51 kathy: I think that would work
14:35:06 ... I agree with shadi
14:35:10 ... the more information the better
14:35:27 ... we just need to be careful about not including things that would be impossible to maintain
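[illustrative sketch, not from the meeting: one possible shape for the "relies upon" section and the a11y support baseline discussed above. The Rule interface, the baseline Set, and the "cannotTell" outcome are hypothetical assumptions, not agreed design.]

    // Outcome vocabulary assumed for this sketch only.
    type Outcome = "pass" | "fail" | "cannotTell";

    // A rule declares the features it relies upon; the implementer
    // supplies a baseline of features they assume are a11y-supported.
    // Nothing about support is hard-coded into the rule itself.
    interface Rule {
      id: string;
      reliesUpon: string[]; // e.g. ["aria-label"]
      evaluate(el: Element, baseline: Set<string>): Outcome;
    }

    const imgAccessibleNameViaAriaLabel: Rule = {
      id: "img-aria-label",
      reliesUpon: ["aria-label"],
      evaluate(el, baseline) {
        // If the baseline doesn't assume aria-label support,
        // the rule can't claim a pass on its own.
        if (!baseline.has("aria-label")) return "cannotTell";
        return el.hasAttribute("aria-label") ? "pass" : "fail";
      },
    };

A tool would then supply its own baseline as input, e.g. imgAccessibleNameViaAriaLabel.evaluate(img, new Set(["aria-label"])), so support assumptions stay with the implementer, not the rule.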
14:35:58 wilco: should we add a resolution?
14:36:14 shadi: my suggestion is to put that in the requirements document, or record a resolution to do so
14:36:41 ... and for people to think about other headings to put in the requirements document
14:37:04 ... you can send an email and request comments
14:37:54 romain: to what extent do we want to say in the reqs what the rules will look like
14:38:05 q+
14:38:35 ... like reqs for reqs
14:38:56 wilco: the way I see it is: what features would I want the rules to have, what are the quality aspects
14:39:09 ... the way to deal with a11y support is one of them
14:39:11 ack me
14:39:24 shadi: we can take a middle way
14:40:02 ... I don't want to wordsmith, but we can find a way to turn that into a req
14:40:18 ... we don't have to take a decision right now
14:40:28 ... we might later have other areas that influence this decision
14:40:46 ... for now, we need to gather these topics, to have a kind of skeleton for our fwk
14:40:51 +1
14:40:55 zakim, next item
14:40:55 agendum 3. "ARIA test harness take-aways, Wilco" taken up [from Wilco]
14:41:08 https://www.w3.org/WAI/GL/task-forces/conformance-testing/wiki/Testing_Resources#Take-aways_from_WPT_.26_ARIA_Test_Harness
14:41:20 topic: ARIA test harness
14:41:28 wilco: little documentation, but interesting ideas
14:41:49 ... WPT has 5 test types
14:42:09 ... some of them pretty similar to what we'll be doing
14:42:35 ... ** goes over the 5 test types **
14:43:32 ... about the requirements, we're aligned on atomicity, being short and minimal, cross-platform
14:43:47 ... e.g. "no proprietary techniques"
14:44:04 ... they also talk about self-contained tests
14:44:14 ... all their tests are written in HTML
14:44:33 ... we don't use a format in auto-wcag, but WPT is explicit about it
14:44:46 ... do we want to say what format we want the rules to be written in?
14:45:12 shadi: can you elaborate?
14:45:24 wilco: all their tests are in HTML
14:46:21 ... their tests are really atomic, which is very powerful
14:46:42 alistair: self-contained tests are very important
14:46:52 ... in auto-wcag we have some dependencies between tests
14:47:08 wilco: the reason for that is atomicity
14:47:14 ... it results in very big rules
14:48:13 ... point 6 is about "accessible name", "role". we had that discussion
14:48:25 ... all tests are "pass" or "fail" tests
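[illustrative sketch, not from the meeting: a self-contained, atomic pass/fail check in the WPT spirit just described. WPT itself writes its tests as HTML files; this TypeScript framing, the fixture, and all names are hypothetical.]

    // Self-contained: the fixture and the assertion live together,
    // and the check depends on no other test's outcome.
    function imgHasAltAttribute(html: string): "pass" | "fail" {
      const doc = new DOMParser().parseFromString(html, "text/html");
      const img = doc.querySelector("img");
      if (!img) return "fail";
      // Atomic: exactly one condition is tested.
      return img.hasAttribute("alt") ? "pass" : "fail";
    }

    // Fixture embedded with the test, so the test runs on its own:
    const fixture = `<img src="logo.png" alt="ACME Corp">`;
    console.log(imgHasAltAttribute(fixture)); // "pass"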
14:49:04 wilco: do we need an output format? or do we only want a pass/fail
14:49:13 alistair: I think you'll need more than that
14:49:30 ... non-applicable, pass/fail, review
14:49:51 wilco: do we really need something like EARL or can we make it work with something simpler
14:49:59 ... (a boolean result)
14:50:24 shadi: EARL is trying to describe results, which can come from any model (binary or not)
14:50:38 ... we concluded we have 4/5 types
14:50:47 ... pass, fail, untested, etc
14:51:19 ... my impression is that e.g. "cannot tell" must be in there
14:51:36 ... there may be triggers for a test, but the test won't be able to tell pass or fail
14:51:48 ... "non-applicable" might not be needed
14:52:34 ... in terms of warnings, they are usually essentially a "pass", just ugly ones
14:52:58 ... we may not want a value for "warning", it's just an attribute
14:53:11 ... some tools like to say "nearly pass"
14:53:25 ... if it's a very minor issue. it kinda works.
14:53:52 ... it's actually a fail. you have a problem, it just doesn't have much impact
14:53:57 ... these are nuances on pass
14:54:08 ... that's why you have extra info in EARL
14:54:37 q+
14:54:37 wilco: we'll definitely have to have this discussion again.
14:54:48 shadi: we can untangle this from EARL
14:55:04 ... for now, we can say we'll need to define the results (with potential candidates)
14:55:12 ... when we get into that, we can discuss it more
14:55:23 alistair: it needs to be totally non-subjective
14:55:35 ... "nearly pass" is very subjective
14:55:47 ... "non-applicable" is not subjective, and very useful to know
14:56:02 ... not having it may lead to wrong conclusions
14:56:14 ... all of these have to be non-subjective
14:56:18 wilco: +1
14:56:34 ... shadi, have you looked at github?
14:56:43 shadi: you all should have access and be able to edit
14:56:53 ... I set up a team
14:57:12 ... I've not set up the automated publishing process
14:57:16 https://www.w3.org/WAI/GL/task-forces/conformance-testing/track/
14:57:28 wilco: quick look at action items
14:59:00 ... last thing: we have our planning document
14:59:11 https://www.w3.org/WAI/GL/task-forces/conformance-testing/wiki/Month_by_month_plan
14:59:59 ... by December-ish we'll need agreement on the requirements, and a first draft
15:00:45 trackbot, end meeting
15:00:45 Zakim, list attendees
15:00:47 As of this point the attendees have been Wilco, Katie, MaryJo, Charu, Alistair, Jemma, rdeltour, shadi, Kathy
15:00:53 RRSAgent, please draft minutes
15:00:53 I have made the request to generate http://www.w3.org/2016/10/05-wcag-act-minutes.html trackbot
15:00:54 RRSAgent, bye
15:00:54 I see no action items