29 November 2001 WCAG WG Minutes

Summary of proposed resolutions, resolutions, issues, and actions

Present

Regrets

Paul Bohman

Conditions for success criteria to be objective

GV One suggestion from the agenda: 8 of 10 people agree. The only thing I would add...

CS Can we test that statement?

GV Yes, we will test that 8 out of 10 people agree.

MS Subject matter expert.

GV Didn't want to use the term expert since how do we define who is an expert?

CS Professional?

JW As WC and I discussed last week regarding our techniques documents, one consequence is that you could specify test methods in the techniques documents. Then it is clear what approach should be taken to testing.

GV Good idea, but in all of the standards, the test method is part of the standard. Support docs give clues.

WC We have tons of test suites, but they are not normative.

GV When you say test techniques, where do they go? If put here it will blow the document up.

CS There will be new ones that show up, as with other techniques.

GV Or someone will create a tool to test some aspect.

AP If test techniques are part of the techniques documents, I suggest keeping them separate. If someone is looking at testing only, they can look just there.

GV Techniques for testing, for implementing, etc.

CS There should be a way to print all or see all in checkpoint order.

AP Yes, good idea. Going through multiple links to get the full picture of one checkpoint is difficult.

JW What we discussed last week was along those lines. An overview doc lists all checkpoints and success criteria, with references to tech specifics where they apply, linking directly to the relevant info: an overview of how the checkpoint is implemented. Anything that is not tech dependent (and as we worked through them we realized most success criteria fell into that category) is discussed in the core document. Go through every checkpoint in the guidelines.

GV If a checkpoint has a tech-specific corollary, and these are important, how could there not be one... it seems to me there would always be a tech-specific entry for each one, even if it said:

  1. in this tech, here's what you do
  2. in this tech, automatic: do nothing, the default implementation satisfies it
  3. cannot be satisfied using this tech; must provide in another tech as well

JW Another possibility: "not relevant."

GV That's #2: nothing to do or already done.

CS I like the idea of having something about every checkpoint in every tech-specific doc, even if it just says "not applicable." Then there is no guessing.

JW Good idea. WC is coordinating work on techniques.

Proposed Resolution: each techniques document will mention each checkpoint. If a checkpoint is not applicable, say so, so there is no guessing.
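A hypothetical sketch of how such entries might read in a technology-specific techniques doc (the checkpoint numbers and wording here are invented for illustration, not taken from the draft):

  Checkpoint X.1 (text equivalents)
    Applicable: use this technology's mechanism for attaching a text
    equivalent to non-text content (see techniques below).

  Checkpoint X.2 (don't rely on color alone)
    Not applicable: this technology has no color facilities.
    Nothing to do; satisfied by default.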

GV Back to definition of objective.

CS If we call it objective, and 8 of 10 agree objective, is that objective?

JW 8 of 10 people agree on applying it.

GV If only 8 of 10 do it, not objective enough?

CS No; can we determine if people think it is testable?

GV Testing something is to run a test and see if 8 of 10 people agree.

CS With a sample larger than 10.

MS Why 8 out of 10 instead of 1 out of 5? Is it the ratio or do you think 10 is a good size?

GV Originally it was 8 or 9 out of 10. But if you say "8 or 9," then 8 is good enough, so what we are really saying is 8. You could say 4 out of 5. Perhaps 80% is the best way to say it.

JW 80% of people who have knowledge of the relevant tech and test methods would agree on their judgements.

AK Is this going to be part of the final draft, or for us to use as a measure when creating success criteria?

GV We said we would not make something a checkpoint that is not objective. This is how we are testing whether something is objective.

CS In L.A. (2 years ago), we talked about stating assumptions. Wherever we are stating assumptions and process, we need to document them. This should be documented.

/* GV Asks WC to reread minutes - JW's definition of objective. */

JW Final part: agreement should be on whether or not the checkpoint has been implemented.

GV Although, if we stop there (as it is), it could apply to things other than checkpoints. Also, it should be 80% or better.

Proposed resolution to take to the mailing list. If upheld, this will be included in our Requirements document: We said we would not make something a checkpoint that is not objective. This is how we are testing whether something is objective (our definition): 80% (or higher) of people who have knowledge of the relevant tech and test methods would agree on their judgements.

What if we don't have objective success criteria?

GV What about those things where we cannot develop good success criteria? Do we have a good idea of what to do?

AP Start w/an example where we cannot pinpoint success criteria?

GV Write as clearly and simply as possible. It is hard to show a page and have 8 of 10 people agree it was written as clearly and simply as possible. We have a list of things that people can do; however, a page can be clear and simple w/out doing all of them. Some might not apply. Or a page could do all of them and still be difficult. We're still hopeful we can come up with something, but we've had a problem finding it.

JW That's 3.3. 3.4 is another possible one: supplement text w/non-text content.

GV Does that mean some text? All text? Even the same 3 people may not rate a page the same way, e.g. one person says terrible, one person great, one person o.k.

JW Part of the task is to develop success criteria that are objective. If we can't, how do we deal with those checkpoints.

GV One section for objective, a separate section for non-objective. I was trying to think of ways where they were together in terms of importance, but characterized as one or the other. Maybe that means that those are not testable; people can assert they are true w/out tests. Or they don't need to assert they are true in order to conform. On asserting they are true, many people say, "Absolutely not. I will do this stuff, but I will not assert it, since there is an implied warranty, which is worse: asserting you did it and having someone say that you didn't." The flip side is: if they don't have to assert them, they will be ignored. Better to put them w/the stuff you need to do and say they are not testable than to put them someplace else. Another idea I've heard is a section on usability. It becomes, "do these, too." Then we can include stuff brought up in the study by Nielsen/Norman: "It's not good enough to make pages accessible, they need to be usable as well." It has that advantage. The disadvantage is that many things that are good for accessibility get thrown into that category.

AK Could we have different levels of testability? Ideally all checkpoints would have success criteria attached, e.g. for clear and simple language: have you evaluated the content to the best of your efforts? Acknowledge that it is subjective. Or for text equivalents, the objective test: do all non-text elements have equivalents? Yes/no. Then the subjective test: are they appropriate?

GV Now you have legally asserted it is true. I thought it was a good idea, but companies have said, "under no circumstances will we assert something that does not have a test."

WC We need to provide as much info as we can. They can decide if they want to assert or not.

CS I think we can come up with good criteria for almost everything.

GV Have a checkpoint which is not in itself objective, e.g. "write good alt-text." Then, if you do the following 3 things, we declare it good alt-text. If those are objective, then there will be some who have bad alt-text, but mostly that will be the exception. Mostly alt-text will be good b/c it meets the various points.

JW In addition to the success criteria?

GV Checkpoint "must provide alt-text and it must do these three things" or make it 2 checkpoints. Then have 3 checkpoints.

JW The existing success criteria tried to do that.

WC The way that I see it: checkpoint: non-text content needs a text equivalent. Success criteria: one of them would be that it should make sense. In core techniques: what to do if it is a spacer, a horizontal line, a bullet; when do you provide a long description? This is a lot of AERT info as well as excerpts from the NBA manual. Then within each technology, describe how to do it. In HTML: use alt, and longdesc on IMG. In SVG, use desc.
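A minimal sketch of the markup WC describes; the file names and description text are invented for illustration:

  <!-- HTML (hypothetical example): short text equivalent in alt;
       longdesc points to a page with a long description -->
  <img src="sales-chart.png"
       alt="Bar chart of sales by quarter"
       longdesc="sales-chart-desc.html">

  <!-- SVG (hypothetical example): the desc element carries the text equivalent -->
  <svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
    <desc>A red circle centered in the drawing.</desc>
    <circle cx="50" cy="50" r="40" fill="red"/>
  </svg>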

GV The problem with that: if you have a doc that is normative, its interpretation cannot depend on any doc which is not normative. The criteria for testing cannot live anywhere but in that doc.

WC I question that.

GV Here's the reason why: people adopt the standard, look in the techniques, say "this test looks good," and hereby adopt it. Then we change the techniques. You have rewritten the standard.

WC WCAG describes the end result.

GV If you change the test, you change the requirement.

WC Disagree. Other W3C recommendations have test suites. These are not normative; they are tools to help determine conformance.

CS The normativity of tech specifics is still an open issue.

GV The doc itself should have what the 80% need to determine conformance. If it relies on another doc, that doc has to be frozen. If we number technique docs and freeze them, publish on TR, and W3C is willing to adopt guidelines w/out reviewing techniques, a standards body could adopt the guidelines plus a particular version of the techniques, thereby freezing something in place.

WC That's how we did it before.

GV I don't know of anyone who adopted guidelines and techniques.

WC In OZ, they say conform to WCAG. You can make that claim. You make that claim using the tools provided in techniques.

JW If a government body were to adopt them as a standard which, if satisfied, would make a web site non-discriminatory, that's when you would get into difficulties over what is normative or not: making sure everything is stable. You don't want reliance on changing standards.

GV It is acceptable to do it the old way in OZ since they haven't made it mandatory, only advisory.

WC At the F2F, we can't make the policy. Give the policy makers tools. Give the developers tools.

GV If it is not in the guidelines, the policy makers have nothing they can use. They can only adopt a doc w/no changing parts.

WC At minimum, the core techniques should be normative.

GV Think we can come up with a short list of items for those who are familiar.

WC I think that is success criteria. I would like to talk with Judy about this. I still think looking at the W3C test suites is a good model. Statements are being made as absolutes, and I'm not sure they are the case (that these things have to be normative).

GV Back to the discussion of non-objective checkpoints: separate or together? Jo and Loretta?

JM I don't have strong feelings on the subject.

CS We can add filtering mechanisms so people see different views.

/* agreement */

LGR Keeping them together, to help understanding, as long as it is clear what is advisory. Recommendations or requirements?

JM Governments will do their own things, no matter what we give them.

GV New idea: toolbox. We give them the tools, they create policy.

JM The Access Board did not codify success criteria.

GV We didn't have success criteria at that point.

JM They would throw out the 8 of 10.

GV That is not part of the standard, that is our test to determine if something gets in or not.

JW Work out a process to go through to determine if success criteria are adequate?

GV I have a chart where I have listed them. Shall we walk through it? It has each guideline. It is empty.

/* wendy stops minuting. GV will fill in chart based on discussion */

ISSUE: success criterion 3 needs to be reworded. The concept is good (it's the Mona Lisa example), but the use of the word "label" is confusing. It makes it sound as though (for HTML) one would need to use a label element instead of alt on IMG.
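For illustration only (hypothetical markup, not from the minutes), the two constructs being confused:

  <!-- What the criterion intends: the text equivalent goes in the alt attribute -->
  <img src="mona-lisa.jpg" alt="The Mona Lisa, by Leonardo da Vinci">

  <!-- What the word "label" suggests: the HTML label element,
       which associates text with a form control, not with an image -->
  <label for="city">City:</label> <input type="text" id="city" name="city">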

Resolved: the first (and only) issue on the agenda next week is to continue going through success criteria. We ended with open issues on success criterion 4 for checkpoint 1.2. GV will send notes on the discussion.

ISSUE: How do we provide an exception for web cams w/out creating an exception for all television?


$Date: 2001/11/29 23:09:18 $ Wendy Chisholm