22 Apr 2004 - WCAG WG Teleconference Minutes

IRC log - WCAG WG

Present

Wendy Chisholm, Tom Croucher, Bengt Farre, Loretta Guarino Reid, Michael Cooper, Matt May, David MacDonald, Jason White, Mike Barta, Ben Caldwell, Gregg Vanderheiden, Sailesh Panchang, Doyle Burnett, Paul Bohman, Gian Sampson-Wild

Regrets

John Slatin, Charles McCathieNevile, Roberto Scano, Andi Snow-Weaver, Roberto Ellero, WATANABE Takayuki, Yvette Hoitink, Avi Arditti, Roberto Castaldo, Kerstin Goldsmith

Action Items

Assign action items for open issues

Issue 506 - Definition of structure

Action: David: Write a definition of structure that clarifies the difference between using structured layout and using structural elements (in the definition of "structure").

Issue 556 - Guideline is difficult to understand

Action: Tom: Write a proposal to clarify the Level 1 #1 success criterion.

Issue 669 - What is meant by "emphasis"

Action: Tom: Write a proposal to address or clarify the use of "emphasis" and visual presentation.

Issue 405 - General issue about knowing how to interpret how the guidelines apply to HTML without reading the techniques.

Action: (not assigned) Write a proposal that clarifies the Level 1 #1 success criterion. Attach this action to the checklist question and revisit.

Issue 704 - In-line warnings and options to deactivate are good, but a User Agent could also handle this in most cases

Action: David: Address this issue with some additional text in the benefits describing how user agents may handle this in the future.

Issue 707 - Example is vague

Action: Mike: Write a proposal that clarifies the example, or propose a different example that will help clarify the intent of the guideline.

Close issue 374?

Issue 374 - How do AT users learn what makes structure distinct and how these distinctions can be specified?

Yvette proposes this is a user agent issue. Does anyone disagree? From Yvette's email:

Harvey Bingham's suggestions and concerns are beyond the scope of WCAG. These are user agent issues. This guideline should keep structure separate from presentation, and then the user agent guidelines will make sure the structure is used by user agents in an appropriate way. I propose to take a vote to close this issue.

Conformance

Current wording in this week's agenda.

does human testable mean "9 out of 10 humans will come up with the same solution"?

gian does not think that all levels should be testable.

tom: saying "all sites should be testable" makes it difficult to find work-arounds; concern that people with disabilities will not benefit. is this the exact wording proposed? if so, we need something better than "human testing" and should define it more specifically. how to define it should be an action item: someone should do research to find a good number, i.e., a recommendation for how many testers should be used to make a decision.

"inter-rater reliability" is not plain language. if we mean 80% or greater, we should say that.

wac: hearing several things: 1. that this is interpreted to mean sites must be testable, 2. or that it specifies how many people are required to test your site, 3. too much emphasis on testing, 4. HIRR should be more explicit.

possible wording: the working group believes these are human testable in a fashion that is consistent among testers. this is not a description of what you need to do on your site; these are judgements made by this group. it (the wcag wg) either believed a criterion was machine-testable (b/c of knowledge about a test) or believed it to be human testable.

it sounds as if the type of thing that gian is doing on her own sites is human testing.

don't like talking about percentages. if talking about HIRR, there is a scale (8 out of 10), but then we have to talk about how many people out of 10 are acceptable.

don't want to pin down an exact number.

instead, describe the process/what we mean.
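As an aside on what "inter-rater reliability" means numerically, the sketch below computes simple pairwise percent agreement among human testers; the function name, data layout, and the 3-testers/5-criteria example are hypothetical illustrations, not anything the group specified.

```python
# Hypothetical illustration of "inter-rater reliability" as simple
# percent agreement: the fraction of items on which pairs of human
# testers reach the same pass/fail verdict, averaged over all pairs.
from itertools import combinations

def percent_agreement(ratings):
    """ratings[rater][item] -> pass/fail verdict; returns mean pairwise agreement."""
    pairs = list(combinations(ratings, 2))
    n_items = len(ratings[0])
    total = sum(sum(a == b for a, b in zip(r1, r2)) / n_items
                for r1, r2 in pairs)
    return total / len(pairs)

# Invented example: 3 testers judge 5 success criteria.
verdicts = [
    [True, True, False, True, True],
    [True, True, False, False, True],
    [True, True, True, False, True],
]
print(round(percent_agreement(verdicts), 3))
```

Under a rule like the "80% or greater" figure mentioned above, this invented panel would fall just short of agreement.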

gian: we don't really know the requirements that people have for their sites. if you have a site aimed at helping dyslexic schoolchildren, the audience will be different from that of a medical site. is "clear and simple" testable in that example? no. are we allowing for people to have different sets of requirements (depending on audience, requirements, goals, etc.)?

not testable in a way that we can clarify in wcag

michael: we could test the guidelines and see how reliable the results are. this would help us determine if we are correct in our belief that these are reliably testable.

wac: yes, there is an implementation testing phase (CR), but it would not have the formality of data that michael describes.

m3m: when it comes to human communication, absolute is not an option; there are no broadly applicable rules. not knocking the needs of people with cognitive disabilities, but even people with 180 IQs can talk past each other. we can not (via the web) determine when we have lost a user (i.e., they are not understanding us). do not think we will ever be able to test any bit of human communication.

gian - but we still want people to try

matt sure. but we can't put it in a normative document

(more about michael's proposal) would like to see, for each guideline that is not automatically testable, a test made (sample web pages) and reviewers reviewing against the criteria (stats? not sure how many reviewers are needed). this could be done scientifically by a usability expert.

people should send example sites to the list; the WG could review them and determine which cause problems or not. however, we want to ensure reliability of results, and formality would help.

paul: there are tools/tests (fog, etc.). we could say in one sense it is not testable, in another it is.

do we decide it's not testable and get rid of it, OR that it's kind of testable and keep it?

fog - testable, but not proven to be beneficial.

if we run fog on a medical, accounting, etc. type of site, just shrinking syllables is not sufficient.

must look at the words you are using.
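For context, the "fog" test mentioned above is presumably the Gunning fog readability index, which scores text by average sentence length plus the share of long words. The sketch below is a minimal, assumption-laden implementation: the vowel-run syllable counter is a crude stand-in for real syllabification, and the function names are invented for illustration.

```python
# Hypothetical sketch of the Gunning fog index, the "fog" readability
# test mentioned in the minutes. Standard formula:
#   0.4 * (words/sentences + 100 * complex_words/words)
# where a "complex" word has three or more syllables.
import re

def count_syllables(word):
    """Naive syllable estimate: count runs of vowels, minimum one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fog_index(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    if not sentences or not words:
        return 0.0
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))

print(round(fog_index("The cat sat. The dog ran."), 2))  # → 1.2
```

This illustrates the objection raised in the discussion: the score only counts syllables and sentence length, so short but obscure jargon (e.g. drug names) can yield a "readable" score without being understandable.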

the difficulty lies in people communicating. numbers can not dictate what "understandable" is; it changes from person to person.

in determining whether we keep it, we either drop it as a success criterion, or say it is "on some level testable" and keep it, or drop the testability requirement.

gian: propose dropping the requirement for testability. we can't really say "you should comply with test x"; instead, "make it aimed at your audience." then it is up to the site owner to define what they want to comply with.

doyle: we can't force developers to write for anyone other than whom they presume their audience is. if we do write something, then we need to say that authors know their audience.

does that mean a graphics site does not need to provide text b/c we don't expect a person who is blind to visit?

don't expect someone on medical site to use non-medical words. e.g., drug names.

if person doesn't understand, not the author's fault

graphics sites, that's a different issue. it's similar in that we derive visual info in a way that needs to be perceived in a different way. apples and oranges.

loretta: if we let go of making things testable, it crashes into our goal of guidelines that are usable in legislation.

we could go back and decide which is more important, but they are tied together.

sailesh: the requirement for testability is important. we should not drop it. for clear and simple: could say levels 1 and 2 are testable, and level 3 is best practices.

tom: language is unique in that difficulty comprehending complex language indicates that someone might have trouble interpreting other parts of the site. it's important that if there is a distinction, we need guidance. we can't ignore language, but it may need to be treated differently. it's unreasonable for a site to dumb down language if the concepts are beyond the understanding of some readers.

gvan: using plain language is currently at level 3. we have not talked about "dumbing things down." we do have a provision for things that are not testable (in the gateway). if they are not testable, they should not be in the guidelines.

gian - that's what i'm afraid of: it won't be in the guideline.

matt: perceivable is perceivable. should image sites say people who are blind are not in the target audience? perceivable is its own category.

gvan: concern that people will say "people with disabilities are not in the target audience." concern that everything related to cognitive disabilities will be relegated to level 3.

jason: if a success criterion is not reliably testable, then it is not possible for anyone to determine reliably whether they have met it, and thus it should not be part of the conformance scheme. we have to distinguish between testable requirements that determine conformance and providing the best advice possible, irrespective of whether it is testable or not.

gian: we need to allow for human judgement

tom: identify that people have to do something, even if they define what that something is. do we have to make sure that any site with any type of content conforms to all levels? if someone has a tv station, do we have to make sure they can reach level a with all of their content? all multimedia, programs (captioning), alternate-language versions of captions, etc.? do we require all conformance levels for all types of content (including scientific content)? the actual meaning and language? achieve all levels with all of those?

gv: asking the question "do things need to be testable?" but we're only talking about one example. this is a fundamental question that we need to answer to move forward, at minimum with a working answer to try on for the next while. however, jason suggests it is not a question we can ask. do we have a choice about whether requirements need to be testable? can you have "advice" in a w3c doc that is separate from what is required? yes, "informative vs. normative" sections are common.

gian proposes that success criteria do not need to be testable. she doesn't want to lock out some guidelines: if you lock out guidelines b/c we can not define them in a testable manner, then we run the risk of locking out guidelines that people find useful and that increase the accessibility of content. the onus should be on the site owner for how they apply the checkpoint. those checkpoints that help should be in there, even if not testable. they should not be relegated to the highest level (3) because we can not define them in a certain way. they should be defined in the way that is most assistive to people with disabilities.

does anyone object to having untestable success criteria? [many objections]

does anyone object to having all success criteria be testable? [gian]

gian: the distinction between level 1 and level 2 is that level 1 does not constrain the author and level 2 does. the point of "minimum accessibility" is nowhere in the definition of level 1, 2, or 3. now, we are adding that. when we categorized the success criteria as level 1 and 2, was this considered? or do we need to take another look at the success criteria from that standpoint?

gv: "minimum" came from last week's discussion. if the author doesn't do it, no one can do it later (that's level 1). level 2 - things the author can do to make content more directly accessible.


$Date: 2004/04/23 21:46:37 $ Wendy Chisholm