WAI Authoring Tool Guidelines Working Group
Chair: Jutta Treviranus
Date: Tuesday 20 June 2000
Time: 2:30pm - 4:00pm Boston time (1830Z - 2000Z)
Phone number: Tobin Bridge, +1 (617) 252 7000
The latest draft is the Recommendation dated 3 February, available at http://www.w3.org/TR/2000/REC-ATAG10-20000203. The latest techniques draft is dated 4 May, available at http://www.w3.org/WAI/AU/WD-ATAG10-TECHS-20000504. The latest draft of the Accessibility Evaluation and Repair Techniques is dated 15 March, at http://www.w3.org/WAI/ER/IG/ert-20000315
JT: First business is to go through techniques to see if there are inconsistencies with the definition. Did anyone do them?
Most: No (some misunderstanding, since the minutes of the last meeting were not posted)
JT: Did this. No glaring problems. But would like others to review. Largely the appendix that JR did.
JT: Action item report for the techniques and evaluation database.
CMN: Not yet
JT: Reason?
CMN: Time. Hope to work on it this week. Techniques database is a long term project.
/* CMN joins
JT: Let's talk about conformance evals.
GR: Conformance evals. Difficult for guideline 7.
JT: For the ATRC course tools study, access was more objectively tested: dedicated workstations equipped with representative assistive technologies. What do others do?
HS: MS is trying to refine its process for internal accessibility testing; this can depend on who tests and how. For the Windows Logo program there is one external provider who ensures consistency.
CMN: My consistency process is to do the same things across products (not a very satisfactory or scalable approach at this stage).
JT: We listed a set of tasks and tested across a set of technologies. Needed a balance between being too prescriptive and too open. When too prescriptive it was hard to get a good idea of access. Better to focus on tasks.
HS: More scenario based.
CMN: Did a partial review of Dreamweaver. Tried to write down how things were tested. Would like to post how this was done.
JT: Asks about MS logo program
HS: I think the methods used for the badging program are proprietary to the outside testing companies - will check.
CMN: We need to develop a matrix-type method. For each checkpoint there are a bunch of things to test for, across different kinds of tools.
JT: Do we want pass/fail or scoring?
CMN: Scoring is A, AA, AAA
JT: Other stuff?
CMN: My conformance database tool will allow partial tests to suit individual needs. People will build their own scoring mechanisms.
HS: Scoring will add validity to "why" a product didn't pass
/*GR rejoins
JT: ATRC used scoring such as how many steps to alt-text. But still included comparative tables.
GR: People ask him how the guidelines will help them make judgements. Must be more than A, AA.
CMN: database approach will allow simple A, AA as well as custom queries
GR: it is also helpful to put in tips for users
CMN: That is just writing help docs
GR: It is just providing workarounds
CMN/GR: Back and forth
GR: The disability community wants results. Very bad to ignore workarounds.
DB: What are we arguing about?
JT: Ratings
JR: Work arounds
JR: Can we agree to general ratings as well as specific details?
CMN: (comment not recorded)
JT: More granularity, e.g. does it do something easily or with more steps
GR: Granularity has to include how it satisfies relative checkpoints etc. A, AA is meaningless out of context.
DB: Concerned about whether the WG should be doing this at this level of granularity.
GR: As evidenced by how many we have completed.
JT: The task of the WG will not be to pile up lots of evals. But should we come up with a process?
GR, CMN: Agree
JT: We need to develop objective tests. Many steps for relative priorities.
GR: My method is not yet ready. Used boilerplate text.
CMN: Mailing list should be the feedback mechanism.
JT: When will GR’s work be ready?
GR: Still needs work.
JT: Do you need volunteers?
GR: Give me a week.
JT: We have a huge task ahead of us. We are making little progress. Ideas?
GR: The sense of urgency has dissipated. We need to get moving again. First we need to re-ping all present and past AU members.
JT: OK. We are re-chartering. Maybe we need new staff. Will talk to CG group.
CMN: Spent 40 hours on Dreamweaver. It takes a long time to learn new products to the proper extent.
GR: AFB has resources for testing. Maybe we can get these resources for evaluations. These people are professional testers.
JT: Should we pursue other testers?
DB: Then the WG is still undertaking a large effort. Concerned about doing everything we talk about.
JT: Agree that our main task should be to create a process. Should we make pieces that can be funded and staffed externally?
GR: Talked to someone at AFB about a blind/low-vision evaluation of the five main market tools, etc.
CMN: Balance between collecting evals and support, and setting up a software testing service.
GR: The same problem is holding up the WAI review process.
CMN: Hoping the QA person would start sooner.
JT: Should I go to the CG with the idea of a separate, externally funded project?
CMN: Still concerned. But we should talk to the CG about it. It is WG work, but we have limited resources.
GR: Should use pre-existing expert resources.
CMN: Need vendor neutrality.
Action Item: JT will ask CG what they think
CMN: Long-range question: how does documentation apply to the accessibility of the tool itself? Does it fit in 6 or 7?
GR: Both
JR: 7 only
Last Modified $Date: 2000/11/08 08:13:13 $