W3C

Evaluation and Repair Tools Working Group Teleconference

15 Apr 2015

See also: IRC log

Attendees

Present
Carlos, Samuel, Wilco, Shadi
Regrets
Chair
Shadi
Scribe
Shadi

Contents


http://www.w3.org/WAI/ER/charter5

SAZ: looking for ways to make our current work more relevant
... also explore the possibility of rule-set for conformance evaluation
... relates to auto-wcag community group

WF: came together to develop common set of tests
... focusing on automated tests
... spent some time on format and approach
... progressing fairly slowly
... but reviewing them thoroughly
... about 12 tests available
... and another 12 in progress
... core group of about 8 people
... most of them from EIII project
... EC-funded project that initiated the work
... some may be able to participate beyond the project duration

CV: are there more test cases being developed within the project?

WF: EIII develops an automated and a non-automated tool
... the second tool is for non-technical people
... all test development happens within the community group

SAZ: tests in auto-wcag are also called rule-sets
... but there are also test samples
... used to benchmark evaluation tools
... how interesting are these in general?

CV: unfortunately there is less interest in general
... we are interested in transparency but others less so

SAZ: think two sides of the same coin
... need test samples to validate correct implementation of rule-sets

WF: agree ... there is much confusion about tool results
... need a common set of rules

CV: often asked how our tool compares to others
... who verifies that?
... big test suite?
... would help demonstrate quality

SAZ: is this amount of work manageable?

WF: need more people with dedicated time allocation
... work can be done reasonably fast if people invest time
... think lots of organizations interested
... have been talking with some people

SAZ: maybe chunk the work down
... like focus on automated and on HTML first

WF: lots of low-hanging fruit
... like syntactic ones
... things like images were tough, for example

CV: we face the same challenge
... but it needs a coordinated effort
... not manageable by a few alone

SM: should be done little by little
... huge task
... should tackle specific techniques and aspects
... automated rules can be validated more easily
... test definitions can be benchmarked
... will fail unless we focus on specific parts
... think should be done within ERT WG
... but needs adequate participation

http://www.w3.org/TR/AERT

WF: would support move of auto-wcag work into ERT WG
... if sufficient organizations are interested
... EARL has been an important aspect in that work, though
... using it as the output format
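
[Scribe note: a minimal sketch of what one auto-wcag result could look like in EARL, written in Turtle. The tool, page, and rule URIs below are hypothetical placeholders; only the earl: vocabulary terms come from the W3C EARL 1.0 schema.]

```turtle
@prefix earl: <http://www.w3.org/ns/earl#> .
@prefix dct:  <http://purl.org/dc/terms/> .

# Hypothetical assertion: an automated tool reports that a page
# failed one auto-wcag rule.
[] a earl:Assertion ;
   earl:assertedBy <https://example.org/tools/checker> ;   # hypothetical tool URI
   earl:subject    <https://example.org/page.html> ;       # hypothetical page under test
   earl:test       <https://example.org/rules/img-alt> ;   # hypothetical rule URI
   earl:mode       earl:automatic ;
   earl:result [
      a earl:TestResult ;
      earl:outcome earl:failed ;
      dct:description "Image without a text alternative."
   ] .
```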

http://www.w3.org/WAI/ER/charter5

SAZ: how would we document and present the test rules?

WF: like the Techniques
... individual documents that are aggregated into one main document
... can use Github to develop collaboratively

SAZ: what about the test samples?

CV: that's the work we did in BenToWeb
... needs a test definition language
... have an XML schema for that
... could turn it into RDF
... associate every test description with a test rule

SAZ: think there may be similar development by other organizations

<carlos> https://github.com/webcc/bentoweb-wcag20-test-suite-v3

SAZ: so could be like 4 main deliverables
... test rules, test samples, test case description, and test results format
... could be done in task forces too
... will try to develop a new draft charter for next week

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.141 (CVS log)
$Date: 2015/04/20 07:21:42 $