W3C

Evaluation and Repair Tools Working Group Teleconference

02 Oct 2013

See also: IRC log

Attendees

Present
Shadi, Samuel, Carlos
Regrets
Emmanuelle
Chair
Shadi
Scribe
Shadi

Contents


http://www.w3.org/WAI/ER/WD-AERT/ED-AERT

http://lists.w3.org/Archives/Public/public-wai-ert/2013Sep/0008.html

[[- Organization: the table in section 3 seems mostly clear to me. I would go beyond and suggest reordering the features in section 2, and grouping them into different subsections, according to the same categories used in the table.]]

CV: just wanted to check the categories before implementing them in section 2

[[- Tool audience category: I would explicitly include tool accessibility as another (desirable) feature. I am sure most agree on the relevance of the accessibility of evaluation tools, which should abide by general authoring tool accessibility criteria (better described in section A of ATAG 2.0 <http://www.w3.org/WAI/ER/WD-AERT/ED-AERT20130906>). But there is a more specific rationale for this point: in many companies, people with disabilities work as accessibility-specialized consultants, and they need authoring and evaluation tools that fit their ability profile.]]

SAZ: so first point is if we agree that accessibility of the tool is a feature
... second is where to put it
... do we agree with this feature?

CV: yes
... put it under "tool audience"?

SAZ: that's where it most fits

[[- Web testing APIs: I am not sure if they only apply to "Test customization", or to "Subject being tested" as well. Tools that offer this kind of API (e.g. Selenium) are also used to bring the web application under test to a predetermined state (to access a specific "Point of Observation"). For instance, APIs can also be used to start a session on a web site, add some products to a shopping cart, and then go to the "cart summary" page, which will be a subject under test that could not have been otherwise generated (as it does not correspond merely to, e.g., a predefined URI).]]
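The shopping-cart scenario quoted above can be sketched as follows. This is a hypothetical illustration, not part of the minutes: the base URL, CSS selectors, and credentials are invented, and only the Selenium-style WebDriver calls (`get`, `find_element`, `send_keys`, `click`, `page_source`) are assumed.

```python
# Minimal sketch: driving a Selenium-style WebDriver to a session-dependent
# "point of observation" before running an accessibility evaluation.
# The base URL, selectors, and credentials below are hypothetical.

def open_cart_summary(driver, base_url):
    """Log in, add a product to the cart, then open the cart summary page."""
    driver.get(base_url + "/login")
    driver.find_element("css selector", "#user").send_keys("evaluator")
    driver.find_element("css selector", "#password").send_keys("secret")
    driver.find_element("css selector", "#login-button").click()

    # Add a product so the cart summary page has content to evaluate.
    driver.get(base_url + "/products/42")
    driver.find_element("css selector", ".add-to-cart").click()

    # The cart summary is the actual subject under test: it exists only for
    # this session state and does not correspond to a predefined, stable URI.
    driver.get(base_url + "/cart/summary")
    return driver.page_source


if __name__ == "__main__":
    # Requires the selenium package plus a browser and driver binary.
    from selenium import webdriver
    html = open_cart_summary(webdriver.Firefox(), "https://shop.example")
    print(html[:200])
```

The evaluation tool would then run its checks against the returned markup, rather than against a crawled URI.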

[[- Repair: I agree that automatic repair should be discouraged (basically, if user agents cannot provide an accessible representation or control, there is no reason to think other software, such as an accessibility evaluation tool, is going to be smart enough to "mend" that). However, that should not preclude accessibility evaluation tools from automatically suggesting potential fixes. These fixes can even depend on the input of the evaluator, yet they still provide some guidance. Think, e.g., about the "quick fix" functionality usually integrated into IDEs, which guides developers on how to fix a code problem, while still leaving the final choice in the hands of the developer.]]
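The distinction drawn above between automatic repair and suggested fixes can be sketched as follows. This is a hypothetical example (the function name, the missing-`alt` check, and the suggestion texts are illustrative assumptions): the tool proposes candidate repairs, but the evaluator makes the final choice.

```python
# Hypothetical sketch of "suggested repair" rather than automatic repair:
# for an <img> element lacking alt text, propose candidate quick fixes
# (as an IDE would), leaving the decision to the human evaluator.

import re

def suggest_alt_fixes(img_tag):
    """Return a list of candidate quick fixes for an <img> missing alt text."""
    if re.search(r'\balt\s*=', img_tag):
        return []  # alt attribute present: nothing to repair

    # Derive a candidate description from the file name, if a src exists.
    src = re.search(r'src="([^"]*)"', img_tag)
    name = src.group(1).rsplit("/", 1)[-1].rsplit(".", 1)[0] if src else ""

    return [
        'Add alt="" if the image is purely decorative',
        f'Add alt text derived from the file name: alt="{name}"',
        'Add a hand-written description: alt="..." (evaluator supplies text)',
    ]
```

None of the suggestions is applied automatically; the tool only surfaces options, mirroring the IDE quick-fix pattern described in the quote.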

Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.138 (CVS log)
$Date: 2013/10/02 18:35:29 $