See also: IRC log
https://github.com/w3c/w3c-waet/labels/needs%20discussion
CV: no test suite or mechanism to benchmark tools
SAZ: can we find some simple labels like "fully checks SC", "partially checks SC", and "doesn't address SC at all"
CV: not measurable unless we use massive resources
SAZ: so until then, prone to interpretation
... honest vendors will not use it because their tools would seem worse than they really are
... and others may use it to misrepresent their tools
RESOLUTION: do not add the feature because there is no widely accepted comprehensive test suite, but possibly mention this somewhere together with the warning that tool support for automated testing varies significantly
CV: maybe we can add a note to "media" in the formats section, saying that such formats are hardly supported by tools
... prefer keeping information close to where it is relevant in the document
RESOLUTION: add note to 2.1.1 that few tools are known to evaluate media formats
SAZ: the other point raised in this comment is that tools provide varying approaches for checking accessibility features and have varying levels of accuracy
... we already address this point in the "testing" section
CV: created a new section to separate "user emulation", which is currently discussed under "dynamic content", from checking "rich internet applications" after rendering
... this also resolves comments #5 and #40
RESOLUTION: addressed by rewrite mentioned above
RESOLUTION: addressed together with #18
SAZ: we need a new section but should avoid the word "simulation"
<scribe> ACTION: shadi to check with EO on alternative terms [recorded in http://www.w3.org/2014/09/10-er-minutes.html#action01]
<trackbot> Created ACTION-143 - Check with eo on alternative terms [on Shadi Abou-Zahra - due 2014-09-17].
RESOLUTION: accept feature request