W3C

Accessibility Conformance Testing Teleconference

23 Aug 2018

Attendees

Present
Alistair, Wilco, Romain, Kasper, Trevor, Shadi, Kathy, MaryJo, Charu, Jey
Regrets

Chair
Wilco, MaryJo
Scribe
Jey

Contents


Issue 241: Review/replace term "aggregation definition" for composed rules https://github.com/w3c/wcag-act/issues/241

Wilco: lots of discussion happening on this, and the term aggregation is not well received
... there was a proposal to rename `composed` to `composition` rules

<Wilco> https://github.com/w3c/wcag-act/pull/247/files

<scribe> Unknown: Can we explain what Anne's proposal is...

<Wilco> https://github.com/w3c/wcag-act/pull/247/files#diff-9ac0a6633720a5535b0a53cba04ababeR199

Wilco: Anne's proposal was to introduce sub-sections to remove ambiguity

Alistair: `chained` rules is a better term

All: Everyone is debating Anne's suggestion on having sub-sections...

<romain> +1 on Anne's PR

Wilco: do we need a CFC for this PR?

+1

<Kasper> +1

<romain> +1

Shadi: Have we reviewed in detail?

Alistair: GitHub = Nightmare :)

Romain: Suggestion for tool to see diffs

<shadi> https://services.w3.org/htmldiff

<shadi> https://rawgit.com/

<romain> the tool I mentioned is "PR Review" by Tobie Langel, see also: https://lists.w3.org/Archives/Public/public-publ-wg/2017Aug/0151.html

Wilco: we will put together a CFC for this change from Anne

<romain> (probably based on rawgit and htmldiff)

Pull request 247: https://github.com/w3c/wcag-act/pull/247

Wilco: summarising > nobody seems to be doing the accuracy benchmarking, and there are a bunch of problems around it, especially in getting comparable results.
... The bigger question is: what can be done instead to ensure rules get improved over time?
... Shadi, you had some thoughts? Can you explain?

Shadi: The benchmarking section does not provide any requirements.
... Two questions: 1) the proposed measurement technique, and who qualifies as an expert; 2) do we want to have requirements around it...
... How do we define a threshold for benchmarking?
... We have a process in auto-wcag, wherein the review process acts as an acknowledgement of how the rule is implemented.

Wilco: Should the rule format in auto-wcag introduce sections/requirements for this?

Shadi: Going back and forth on it. #Mixed feelings :)
... Is there a robust framework that will ensure the quality of the rules produced/implemented?

Wilco: To summarise from last week, it looks like most implementers manage quality by keeping an eye on false positives and bugs in rules as they are reported or identified.
... Perhaps add a requirement for a feedback mechanism?
... Are we too late in the process, as we need to be in CR soon?

Jey: Perhaps use the test cases to ensure the results from different tools are the same?

Shadi: The tool developers who are part of this group agree and work together to build a rule. But any new tool developer who is not part of this forum should be able to use what we produced and arrive at the same results...

Wilco: AG will be a good gatekeeper to ensure rules are of good quality
... Does not believe we need to introduce a process

Shadi: If we make the claim that the spec is going to contribute to harmonisation, we have to demonstrate that for the CR.
... A simple proposal would be: rules that are generally agreed upon must have a publicly stated review process, and the review process will require 3 different implementations with the same results, etc...
... Aim to create a framework within the spec which relies on the review process we already have.

Mary Jo: Does not believe we should introduce this into the spec, as different implementers have their own ways, but has no problem with adding a generic section.

Wilco: Agrees with Mary Jo. We have 3 implementation variations, and if we put that down in the spec, then it does not allow for growth in terms of other variations of tools, e.g. for `epub`.

Shadi: We can add something along the lines of "At least X number of implementers..., but also allowing for variations for other communities."

Wilco: Could we instead try the feedback mechanism, where the consumers of the rule have to provide feedback if the rule needs updating...

Kasper: A feedback mechanism works best, and we rely on it for SI.
... Unsure if it is possible to formalise a feedback mechanism. And at the end of the day, QA is subjective.

Alistair: A lot of this will revolve around test cases. So having an email/organisation recorded against the test cases that we export is useful.
... Overall, does not believe we should add something to the spec. And locking in delivery cycles is not ideal.

Kathy: Agrees with the above that a feedback mechanism may be the right way to go. It is hard to identify the procedure.

Wilco: Should we take out the benchmarking section from the spec?

Shadi: It sounds like the consensus favours keeping QA agnostic to each organisation. So yes, we can remove the benchmarking section from the spec.
... That said, we still have to demonstrate interoperability, harmonisation, etc. So we need to agree on a threshold of some rules and their implementations yielding the same results. So we will be emphasising this implicitly.

Wilco: Agrees
... Any objections to removing benchmarking section?

Romain: Perhaps enhance the documentation/spec to define basic terms for QA.

Shadi: We can always add more definitions/docs without requirements etc.

Romain: Agrees with the discussion above.

Wilco: Likes the idea of creating a key terms section.
... Will create a CFC for the same.

https://github.com/w3c/wcag-act/issues/251

<agarrison> +1 for change

<romain> +1

<Kasper> +1

Wilco: Should we add test cases for composed rules?

+1

<trevor> +1

<shadi> +1

<cpandhi> +1

<maryjom> +1

<kathyeng> +1

https://github.com/w3c/wcag-act/issues/248

<Wilco> SC1-2-3-some-description

Wilco: There are currently 2 formats for rule IDs

<Wilco> ACT123

Jey: +1 for SC-X-X-X format

<cpandhi> +1 for SC-x-x-x

<trevor> +1 for SC-x-x-x

Shadi: Memory fades :) on why the ACTXXX format came into being

Wilco: Away for the next 2 meetings

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.152 (CVS log)
$Date: 2018/08/24 09:47:34 $