
WAI UA Telecon for January 6th, 1999


Chair: Jon Gunderson
Date: Wednesday, January 6th
Time: 12:00 noon to 1:00 pm Eastern Standard Time
Call-in: W3C Tobin Bridge (+1) 617-252-7000


Agenda

Discussion of User Agent Types

  1. How are user agent types defined and used for conformance?
  2. How are user agent types described?
  3. What user agent types should we use?

Background materials

Ian Jacobs message on January 4th

ISSUE LIST: User agent types with associated e-mail discussion

Ian Jacobs message on December 16th


Attendance

Chair: Jon Gunderson
Scribe: Ian Jacobs
Harvey Bingham
Kitch Barnicle
Marja-Riitta Koivunen
Charles McCathieNevile
Denis Anson
Kathy Hewitt


Action Items and Conclusions

Proposed Resolution to user types and conformance issues

Under the assumption that different agencies are likely to point to the User Agent Guidelines when measuring the accessibility of user agent products, the Working Group has been wrestling with a workable definition of conformance. Finding a satisfactory conformance system has been difficult in part due to the following parameters:

  1. There are three priority levels for the checkpoints.
  2. There are a lot of checkpoints.
  3. Not all checkpoints apply to all User Agents (e.g., a UA doesn't support a certain content type, a certain language feature, a target medium, etc.)
  4. User agents may satisfy some checkpoints natively but others through standard interfaces to other software.
  5. Trying to define conformance based on subsets of the checkpoints is a complicated, time-consuming task.
  6. The WG doesn't know all the contexts in which conformance will be tested.
  7. Conformance of one product should not rely on the existence of another. However, solutions that involve more than one product may be more beneficial than what can be achieved by a product alone.

At today's teleconference, the Working Group discussed a proposal [1] that would replace the notion of "conformance" with a "test suite", or "scoring" mechanism. Instead of a definition of conformance to the document, the Guidelines would provide a checklist so that people evaluating a user agent would know, for each checkpoint, whether the user agent satisfies it natively, satisfies it through standard interfaces to other software (such as assistive technologies), or whether the checkpoint does not apply.

NOTE: The WG would only provide the score card mechanism, but not score User Agents itself.

What are the advantages to this type of scoring mechanism?

  1. By eliminating the pass/fail conformance test, it promotes accessibility and does not punish user agent developers.
  2. The public relations impact of score card comparisons will push developers to satisfy as many checkpoints as possible, including Priority 2 and 3 checkpoints.
  3. User agent developers can use the score cards for marketing purposes. For instance, a company that develops add-on technology targeted for people with disabilities (assistive technology) might be able to hold up several score cards and say "We work well with any of the top three browsers!"
  4. People in different contexts (e.g., procurement agents) can interpret scores according to their own particular needs. If a conformance mechanism were used, a user agent might conform and still not be useful for a particular audience, while another might not conform and be more useful. The score card mechanism makes users the ultimate judge of the utility of a user agent.
  5. The Guidelines will be simpler.
  6. The Working Group can focus on identifying checkpoints.

The WG's skill (and scope) lies in addressing accessibility issues, not addressing potential legal battles. Even in the score card scenario, it is important to prioritize the checkpoints so that developers understand which checkpoints the Working Group considers most important. The Working Group members on the teleconference reached consensus that such a scoring mechanism would be beneficial to the Guidelines and to the process of advancing the document. Furthermore, the WG discussed providing profiles (in this document or another) of which checkpoints are of particular import to different users or user interface scenarios. Such profiles would be informative only, intended to guide developers.

[1] Ian Jacobs, "What conformance system should we use?", 6 Jan 1999. See "Option 1".


Minutes

DA: Gnarly issue w.r.t. accessibility. Hard to answer the question "What is accessible software?" Re: Option 1 of [1].

IJ: Definition of conformance is only w.r.t. this document.

HB: I have a concern about making the assertion that Assistive Technology is the solution to the problem.

KH: It's one thing to say "This can be done" and another to say "This has been done." Don't want conformance to be dependent on other products.

JG: I prefer Option 4, with subsets established by rendering types. Don't want dependencies between companies.

IJ: I think that if conformance is measured based on existing technology that is available and at reasonable cost, then conformance should be phrased in this manner.

DA: Want to distinguish content types from media types.

CMN: See Hans' email [2] about Option 1 approach. [2] http://lists.w3.org/Archives/Public/w3c-wai-ua/1999JanMar/0018.html

DA: The key to establishing the subsets relates to how you get at the information.

JG: You shouldn't require special navigation functionality for visual UAs since it's "built-in".

CMN: If you've got visual rendering, you've satisfied the requirement of navigation. In other modes, you need other mechanisms.

DA: Subset thought about at the ftf meeting - must be able to display cells as discrete entities, to have access to header info, and to be able to do simple navigation.

JG: [Procurement of a W3C-conformant UA]

KB: At the blindness agency, we look at different things than the DOE.

CMN: People will use a "mask" over the test suite.

KB: Even if a UA conforms, it may not meet the needs of a company's particular user base.

DA: The needs of a blind person differ from those of someone with cerebral palsy.

CMN: In a practical sense, the procurement guy has to find products (one or more).

IJ: Use the "test suite". Best for procurement agency. Best for users. No "conformance". Market will drive quality.

Marja: Need usability profiles.

CMN: The WG can then produce profiles for particular requirements.

JG: I don't like the test suite. If I'm a developer, I want to know what to do if I am doing aural rendering.

IJ: But you can determine that from the text of the guideline.

JG: I don't think we should be talking about Assistive Techs either.

CMN: I think we need it. E.g., IE doesn't handle video natively, it's done through assistive technology.

KH: IE doesn't render video natively. I would want to put a check in the "Not applicable" box in this case, not in the "Assistive technology" box.

CMN: It's not up to MS developers to show that an AT is available. It's up to some other party to say that product A, in conjunction with MS IE, satisfies a checkpoint.

DA: We must continue to say that a checkpoint be satisfied natively *or* through AT. It's to advantage of both parties, and doesn't force MS to implement all the checkpoints.

KB: Suppose an agency needs to support a browser for sighted and non-sighted employees. If I only need to look at Jaws to purchase a browser combination....

CMN: Jon, if we adopt the test suite approach (Option 1), are you saying we don't want an "Assistive Technology" column?

JG: The only interface we can talk about is the DOM.

IJ: We also talk about other standard interfaces, such as standard OS interfaces.

KB: At the face-to-face, I liked the idea of separating visual, aural, etc. In the test suite scenario, it's to our advantage to say how we enable other technologies.

HB: Burden of proof is on developers to show that they do in fact make the info available through an interface.

CMN: In a sense, it's the reverse of KB's fear. It will be up to Jaws (in practice) to show what profile of the checkpoints are met in combination with other browsers.

IJ: How does a tool that is on the "front end" side of the interface fill out the checklist? What box does it check if it does something, but only when given information?

CMN: They check the "through interface" box. It's to their advantage to profile how they work with other tools.

KB (on Option 1): a) Who will be performing the test? b) Does a manufacturer use the list? To whom do they submit it? I like the idea of this WG focusing on accessibility issues and leaving the interpretation up to others.


Copyright © 1999 W3C (MIT, INRIA, Keio), All Rights Reserved. W3C liability, trademark, document use and software licensing rules apply. Your interactions with this site are in accordance with our public and Member privacy statements.