28 February 2000 ER telecon

Summary of action items and resolutions

Participants

Present (by initials in the minutes): LK, WC, GJR, HB, MC, CR; joining in progress: Brian Metheny (BM), William Loughborough (WL)

Agenda

  1. Review of new ERT draft: http://www.w3.org/WAI/ER/IG/ert/
  2. Discussion Points
  3. Feedback on the Wave accessibility evaluation tool: main page http://www.temple.edu/inst_disabilities/piat/wave/ and tutorial http://www.temple.edu/inst_disabilities/piat/wave/doc/

Table of contents and document formatting

LK: Wendy has been marking issues with a double at-sign (@@)

LK: first issue is table of contents and formatting; how much information to put into the table of contents; list of guidelines and checkpoints; include techniques as well?

WC: Ian and I have been working on scripts to generate the document; thinking about what should be there--currently hard to read and runs to many pages when printed out; links wrap

// Brian Metheny (BM) joins //

WC: want to generate the ToC through scripts; asked Ian how to modify the scripts; compared the old ToC against the W3C Process Doc and WCAG; part of the concern is that up to 21 December the draft had a full table of contents (every technique in the ToC); hard to read because it included the full text of each Technique; took out the Technique links 10 Feb 2000; ToC now only includes GL name and Checkpoint name; what needs to be in the ToC? only the Tech name? a shortened version? WCAG's original ToC had the full text of GLs and checkpoints; it was shortened and a short heading included underneath -- kept to 10 to 15 words; not sure what is most useful

GJR: when I did a conformance eval for ATAG, I used TITLEs to put the full text of checkpoints and guidelines into the ToC to avoid bloat and hyperlink text wrapping
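
For illustration only (not from the minutes), a ToC entry marked up along those lines might look like the following; the anchor names are hypothetical and the checkpoint wording is abridged from WCAG 1.0:

    <ul>
      <li><a href="#gl-1" title="Guideline 1. Provide equivalent alternatives
          to auditory and visual content">Guideline 1</a>
        <ul>
          <li><a href="#tech-1-1" title="Checkpoint 1.1. Provide a text
              equivalent for every non-text element">Checkpoint 1.1</a></li>
        </ul>
      </li>
    </ul>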

HB: it would add about 15 lines to have a short ToC with just the GLs and their names, and then the full ToC; is that a possibility?

WC: yeah; Ian's suggestion: just a short ToC; when you go to a GL, a sub-table of contents that tells about the techniques; problem is some have only 1 technique--in that case you don't need a sub-ToC

BM: ToC just

WC: sub-ToC could be checkpoints or checkpoints and techniques; don't know that need all techniques listed in ToC

BM: techniques as link in ToC, have header of Checkpoint at top of sub-section

LK: how many people, when they used this document, actually used the ToC?

BM: tried to, but now just scroll to where I am going

LK: for just you personally, what would be most convenient

BM: short ToC -- probably just GLs at top, jump to section and get sub-ToC; if trying to scan through doc, sub-ToCs would be annoying

WC: short main ToC with GL text; follow one, get to a GL that lists all checkpoints; from there click to get to Techniques; if more than one technique, get a short table of contents; at 1.1 you have no idea that there are 10 techniques in the sub-section; a sub-ToC may help give people scope; 3 levels of ToC might work well; GL 8 has only one checkpoint and only 1 technique.

GJR: like the nesting

MC: click navigators may find it a pain, but it might be the best balance given what we began with

CR: like multiple level ToCs, too

HB: omit checkpoints from sub-ToC, make sure that GL phrase associated with

HB: another dimension by which to find checkpoints? an index?

WC: thought of an HTML elements and attributes list/table, as in the HTML 4 spec

GJR: put full text of GL, checkpoint, or technique in TITLE, as I suggested

MC: IE 5 supports @media print which allows you to suppress repeated content when printed
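
A minimal sketch of what MC describes (the rule is standard CSS2; the "sub-toc" class name is a hypothetical hook, not from the draft):

    /* Hide the repeated sub-ToCs when the document is printed */
    @media print {
      .sub-toc { display: none; }
    }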

GJR: what about link back to sub-ToC

WC: next section, previous section, main contents

GJR: might want to give choice to go back to sub-ToC as well as main ToC

LK: are people happy with this?

// general agreement //

Resolved: short main ToC with GL text; follow one, get to a GL that lists all checkpoints (sub-ToC 1); from there click to get to Techniques; if more than one technique, get a short table of contents. Perhaps have navigation (previous/next) after each ToC that goes to the same level.

Format of the evaluation section for each technique

LK: format issues; one thing I noticed that makes it hard for me to read: the evaluation section in each Technique; lots of different styles in the examples; some say "do X", some say "if X do Y"; some are passive, some are active; have trouble with the "evaluation" part -- just what does it mean? need to make a pass and have a parallel structure throughout

WC: on issues list; lot of places where language inconsistent; need to figure out which style is best

LK: suggestion: instead of calling section evaluation, call it "Triggered By" -- AREA elements without ALT text; any multimedia object, or any TABLE element

BM: like that; move "prompt user..." clauses into repair

LK: suggested wording for prompt and repair

CR: agree that evaluation shouldn't contain what needs to go into "Prompt" and "Repair"; but then should the section be called evaluation?

LK: I consider evaluation an overall process, includes trigger and tool's response; see your point, though, that you could call trigger or evaluation

CR: just trigger condition

LK: need particular style -- if X, then Y, or check for X and check for Y

MC: what about and/or statements? listed in list or in prose? a number of them have multiple bullets, such as APPLET, separate into 2 bullets -- one evaluating attributes, one looking for a child

LK: if we want the form "Trigger if...", following Logic 101: "Check if APPLET has a valid ALT attribute and X within the content of APPLET"; bullets are sometimes going to be and-ed together, sometimes or-ed

LK: have at beginning, "Trigger if any of the following occur" and "Trigger if all of the following occur"

WC: can then have sub-list underneath -- grouping by condition

LK: like that -- logic masquerading as English!
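
As a rough sketch of what was just proposed (hypothetical Python, not part of the draft): each technique's condition becomes a top-level "all of" or "any of" over its bullets, with sub-groups handled the same way.

    # Hypothetical encoding of "Trigger if ALL of the following occur"
    # for an APPLET technique; element is assumed to be a simple dict.
    def applet_trigger(element):
        return all([
            element.get("tag") == "APPLET",
            "alt" not in element.get("attrs", {}),   # no ALT attribute
            not element.get("children"),             # no alternative content
        ])

    # Hypothetical encoding of "Trigger if ANY of the following occur"
    # for missing ALT on image-like elements.
    def missing_alt_trigger(element):
        return any([
            element.get("tag") == "IMG" and "alt" not in element.get("attrs", {}),
            element.get("tag") == "AREA" and "alt" not in element.get("attrs", {}),
        ])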

MC: think that might work, but want to see it; trying to find major ones that might not fit into a bullet format;

LK: if have an "and" and an "or" statement, perhaps techniques needs to be split into 2 techniques

WC: good point; do we want a checkpoint for each element?

LK: ALT repeated

BM: IMG has ALT and perhaps LONGDESC, APPLET has ALT or inline alternative content

WC: yeah, may be too many "buts" following that track

LK: in 1.1 rules for ALT text put into appendix, rather than repeating inline each time ALT explained/mentioned

WC: MC has suggested a definition for programmatic objects: applets, scripts, objects

HB: why object?

MC: object can be applet or shockwave file; tried to differentiate -- suggested look for type="application"; if interested in ALT text for images, need to look for OBJECT where type is image;
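
A speculative sketch of that differentiation (the MIME-type prefixes are assumptions; the draft does not specify them):

    # Classify an OBJECT element by its type attribute, if present.
    def object_category(type_attr):
        if type_attr is None:
            return "unknown"        # no type given; may need a human look
        if type_attr.startswith("application/"):
            return "programmatic"   # applet, Shockwave, script-like content
        if type_attr.startswith("image/"):
            return "image"          # treat like IMG: needs a text equivalent
        return "other"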

GJR: like tying together concepts rather than minutiae; OBJECT not widely used, so perhaps attack the problem by defining what an image is and ensuring that there is alternative content

// William Loughborough (WL) joins //

LK: ok, try bullet lists with sentence at top indicating all or any?

CR: yeah, I can go with that

HB: one caveat: if not sufficient logic, sub-divide so that there is sufficient logic

LK: can always change into a normalized form later, but don't really want to take that chance; if you have an "and" and one thing is an "or", you can wind up with a lot of bulleted lists

WC: sounds like we have resolution on this

LK: all ands or all ors -- all and-ed together or all or-ed together

WC: can try and if gets messy, I'll bring it back

LK: that just leaves the issue of the title; I suggested trigger, but not sure about that

WC: could be "Triggers an Evaluation" if or when

CR: could go with "Evaluation Triggers"

HB: minor objection that trigger is something on a gun

LK: under Techniques "Check object elements and type." then "Evaluation" section

WL: how about initiate instead of trigger? you're initiating an evaluation

HB: no, recognizing a condition that triggers an action or alert

LK: what about "Condition to test for"

WC: could just say "Conditions"

GJR: have "Conditions" for all, then have "Evaluation" for what needs user input or thought; way of emphasizing what tool can do by itself, and when it identifies a problem that needs human intervention to rectify

WC: interesting, like that it reinforces the auto-fix, human fix dichotomy

HB: built into Bobby; human assessment part built into trigger

MC: trigger is things that tell you whether or not need to run an evaluation; whether or not you need to evaluate for this technique; trigger is looking for IMG, evaluation is "is there an ALT attribute, and is its value valid"
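
A sketch of MC's distinction in hypothetical Python (the suspicious-ALT heuristics are illustrative assumptions, not agreed rules):

    # Trigger: the technique applies to every IMG element found in a pass.
    def triggers_img_check(element):
        return element.get("tag") == "IMG"

    # Evaluation: decide between a machine-detectable error, a suspicious
    # value, and a question that only the author can answer.
    def evaluate_img_alt(element):
        alt = element.get("attrs", {}).get("alt")
        if alt is None:
            return "error: missing ALT"
        if alt.strip() == "" or alt.lower().endswith((".gif", ".jpg")):
            return "suspicious: ALT looks empty or like a file name"
        return "ask author: is this ALT text an adequate equivalent?"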

LK: too technical a distinction

WL: look at what is there, try to find things that cause an action?

MC: don't know if overcomplicating, but evaluate by taking pass through page, when find IMG, have to know which checkpoints apply, then run all applicable checks; that may be implementation dependent

WL: get a lot of false positives from Bobby;

GJR: exceptions dictionary, ALT registry can avoid overburdening user

LK: what GJR is talking about is organizational?

GJR: purely organizational?

LK: some condition machine can apply; judgement that user has to make, then some inputs that user has to put in,

GJR: and mark clearly, this information can be recycled, etc.

LK: in the case of ALT text, the condition is that you have an IMG with no ALT -- that is an error, no need for further judgement; if there is ALT text in the IMG, then ask the user: is this ALT text ok? guess we are not doing that

MC: ID as suspicious, but not outlined what to do with that

WL: is there a similarity between what a spell checker does?

WC: I think so

WL: ERT going through document source, encounters something

GJR: spell checker and grammar checker in a way

WL: a checker, nonetheless with parameters that notify you that something might need your attention

BM: that might be a good metaphor for an evaluation tool, repair tool can ask you about things that aren't there; issue is "how to limit noise", but need to include as item

LK: example?

BM: example in future -- navigation; have you used navigation bars? eval tool could help identify this is a nav bar if reused across entire site; down the road, but

LK: look for navigational grouping? what if none?

WL: a spell checker that flags every proper name, and a grammar checker that tells you to do things in a way you don't always do them; concerned we're going to get away from things that matter and just deal with stuff that is easy to ID but isn't important; when someone gets an actionable thing

LK: so where are we with this? the document already has a number of sections; is what we are discussing included in here? are we talking about fixing the wording and making the style consistent, or adding and subtracting sub-headers? we have "Evaluation", "Example Message", "Repair Techniques"; do we need to change that organization?

WL: to me, we're talking only about whether or not to use "trigger"

LK: distinction between human evaluation and automated; do you think

GJR: condition is what tool looks for, then "Example Message" would indicate whether user needs to be alerted or not

WL: is it going to be more than "have you brushed your teeth" "did you wash behind your ears" or a list of nags?

MC: full coverage for WCAG while limiting nags; how can we make algorithms necessary

WL: so we're just discussing what to name that process; things that can actually be looked for, rather than regenerating a list of guidelines

MC: what can be looked for is currently under the "Evaluation" section

LK: every GL, every checkpoint, then evaluation and suggestion; where question that needs to be asked is a general question, question needs to be general; concern is having too many false alarms

WL: did you write this clearly

LK: right; is there some way to address that-- for example attaching a signal; in practice, designer might want to handle each thing differently; handling ALT text going to be different than

WL: so many of these things are "keep it simple, keep it clear"; never going to be able to automate that

LK: is there some rating that applies to these things that differentiate between missing ALT text

MC: for Bobby 3.2, breaking them up into a few categories; some things are "Full Support" -- if triggered, a problem, if not, no problem; "Partial Support" is something that triggers it, but you don't know straight off if it's a problem; sub-divided -- a LONGDESC for this image, others asked once per page; third category is "Ask" -- not even a trigger that we can devise; trying to come up with ways to reduce the noise from the partial support items -- can turn them off (which makes it impossible to be Bobby approved) but makes it easier to use and scale for the user; CR expressed interest in putting these into tips
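
A speculative sketch of that category scheme (the names and example assignments are illustrative, not Bobby's actual data):

    # Three categories of checks, roughly as MC describes them.
    REPORT = [
        ("full",    "IMG element is missing an ALT attribute"),
        ("partial", "Image may need a LONGDESC"),
        ("ask",     "Is the simplest language appropriate to the content used?"),
    ]

    # Letting the user silence whole categories reduces noise, at the cost
    # of completeness (and, as noted, of a "Bobby approved" rating).
    def filter_report(report, silenced=("ask",)):
        return [(cat, msg) for cat, msg in report if cat not in silenced]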

LK: in terms of underlying parameters, there is the importance of the checkpoint and then the signal-to-noise ratio inherent in condition -- probability that there is a problem given that it was triggered

WL: keeping it simple is a P1 -- is that a disability issue or usability issue

WC: disability issue for cognitive disabilities, especially learning disabilities

LK: condition is "Always Trigger"

MC: my concern is having too much of a Bobby mindset about this; don't think that A-Prompt inundates you with things that it can't help you with -- it helps you with what you can be helped with

CR: take all the things we can't check for but the user can, put them in one dialog box and ask for user action, then move on to the automated checks; can be disabled

WL: don't need a tool for that

WC: some people do

MC: Bobby as much a way to teach people about WCAG as it is an ER tool

WC: right now people will look at the automatic checks up to the congratulations, but don't read the supplemental stuff that needs to be checked by the author

LK: question of identifying which are those items?

WL: idea of the word "tool" as opposed to "checklist"

GJR: tool could offer 3 modes of checking and advanced mode (configurable)

LK: should we put into the document some indication of user overload? cost-benefit analysis

WL: checklist is a resource, tool, while a resource, is also something else

LK: but there are things in between -- you can have a checkpoint that says "make sure the simplest language is used" and a pass to ID all multimedia objects -- "any multimedia object" is still general -- more general than missing ALT text

WL: really?

LK: presence of multimedia object triggers certain things that may not be automatically checkable, but could trigger false alarm

WL: my problem with Bobby, MC, is that all the things on the "Ask" list have a vagueness to them

MC: can't think of a practical algorithm for some

WL: that's what I'm talking about

WC: not all tools will do that; don't think is fair to ask all tools to do everything; asking Bobby and A-Prompt to do everything, because they are the only extant tools; need to get as many tools out there to fit different user styles

LK: if have tool without every prompt -- would be useful to have some sort of guidance as to what to put in prompt; how important is checkpoint? to what extent can tool avoid false positive?

MC: rating each technique on a scale?

LK: yes,

WC: what if there is a tool that just focuses on one technique? should encourage that; a user-built tool kit; a rating only works for someone implementing the whole thing; Microsoft could, for example, take the Word spell and grammar checker and put it into FrontPage

LK: rating has to do with false alarms

WC: algorithms aren't going to be uniformly implemented; wary of rating based on our perceived importance

LK: for algorithms we suggest, should indicate whether or not might issue false alarms; would be a way for someone to scan through doc and try to build a tool with less false positives

WC: just highlighting potential bombs -- where tool could blow up

LK: problems

MC: statement that "we've stopped here" or "for reasons we consider valid"

WL: adding detail to introduction to the ERT?

LK: this will catch them all

WC: these are the instances we know of that will produce a false positive

LK: power of the test -- doesn't have to be in first release

GJR: no conformance statement, but need a usage statement to stress that isn't necessary to implement all; maybe a configuration appendix, as well

WC: "how to use this document to do what you want"; should include a section indicator

LK: how long before have draft ready for public commentary

WL: 2 months past deadline; think could stand up to outside scrutiny now

LK: what do you think WC?

WC: would like to get the ToC fixed; will go through and create the ToC by hand since Ian is very busy and I don't think he could help work on the scripts; CR, are you available?

CR: yeah

WC: fix ToC and put it out publicly

LK: feel more comfortable if got format consistent before public review call

WC: think ToC should be done last, what you do may cause ToC to change

HB: deserves to be a script; can work on script while LK editing document

LK: plan -- go through and word everything the same way; turning everything into "and" or "or" lists; only remaining issue is to call it "Conditions" or "Evaluation", but that is a global search-and-replace

WC: let's put that issue on list

LK: will leave sub-headers alone, concentrate on getting wording in parallel, where repair has snuck into Evaluation, will move to repair

// ACTION LK: Draw up plan for uniformity of evaluation sections //

// ACTION WC: work on script to regenerate table of contents //

Next meeting

Monday March 6 at 10am (EST) bridge: +1 617 258 7910


$Date: 2000/11/08 08:17:43 $ Wendy Chisholm