05 Sep 2002 - WCAG WG Teleconference Minutes

Present

on phone: loretta, bengt, cynthia, wendy, avi, andi, lisa, paul, gregg, jason, ben, preety, lee, matt may, john, mat mirabella

on irc: roberto scano (<rscano>)

Regrets

Eugenia

Action items

Alternatives to phone participation - by IP?

gv some people can only attend by IRC. perhaps we could set up an internet broadcast so that they can get the full audio, although it will be delayed.

gv candidates? how many platforms does netmeeting run on?

mm how many windows versions are there? :)

jw i know of an x-platform one, i can send the details.

<rscano> XP... just bought a new notebook today :D with some friends and in iwa i use www.paltalk.com, which has rooms with audio/video support

next week's discussion with device indie

jw DIWG (device independence working group) is interested in talking with us. we will meet jointly next week.

jw purpose of this discussion is to consider both groups' activities (DIWG and WCAG WG) and what issues we want to bring to the discussion next week.

jw the 2 docs they have published publicly are...(refer to agenda)

  1. Device Independence Principles
  2. Authoring Scenarios for Device Independence
  3. jw's message to wai-gl re: DIWG

gv i have not reviewed the principles, one thing i saw earlier: it seemed focused on working w/different devices. it previously didn't divorce interface from content.

gv have things accessible either through keyboard or through a mouse.

gv still true?

ls it's much better.

ls it is not technology specific. i had a quick look over it.

<rscano> in point 1.2 of their goals: "The general phrase "device independence" is used for this, although the access mechanisms may include a diversity of devices, user agents, channels, modalities, formats etc."

ls differences: their emphasis is primarily on device indie and ours is on scenario independence. interesting for how the 2 could mesh.

ls if device indie are you de facto accessible?

ls like in wcag 2.0 we are not talking about any particular technology, more into general principles.

ls functional presentation.

ls lots of customization - preferences. between them those principles cover a lot of ours.

wac how is "comprehension" part of "functional presentation"?

cs get info to the device not necessarily to the user.

ls right, that's what i'm referring to with "scenario"

ls they haven't got a checkpoint structure. they have principles. hard to implement.

ls we don't just say the ideas but we take it through "what to do to implement."

cs does the authoring scenarios cover that?

ls we really are holding people's hands through the process as much as we can.

cs interesting section on personalization and accessibility.

jw sections on accessibility are largely placeholders.

(discussing diwg authoring scenarios)

pk read that some applications of forms may be too complex for mobile devices.

pk there could be apps where device indie would not make sense. don't know if that applies to accessibility or not.

ls clarify?

pk there was a section about implications when trying to address device diversity; there might be instances where complex forms may not be suited for a certain device.

ls an interesting equivalency: their concern is complexity not appropriate for the device. we're thinking not so much in terms of device but person.

ls but might achieve the same end.

cs discuss quite a lot about delivery context. user disabling sound. seems close to changing accessibility settings in a browser.

cs there are places where they are doing the same thing we are, but have worded it differently.

jw concept of delivery context is important in the diwg. a set of device characteristics and network characteristics makes up the delivery context.

jw content has to be adapted appropriately to the delivery context.
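The adaptation jw describes could be sketched roughly as below. This is an editor's illustration only, not anything from the DIWG documents; all names and the selection rules are invented:

```python
# hypothetical sketch: pick a content variant suited to a delivery context
def choose_representation(content, context):
    """content: dict of variants; context: device/network characteristics."""
    if not context.get("images", True):
        return content["text"]            # e.g. text-only device, or images off
    if context.get("bandwidth_kbps", 1000) < 56:
        return content["low_res"]         # slow link: serve the small variant
    return content["full"]

page = {"text": "alt text", "low_res": "small.jpg", "full": "big.jpg"}
print(choose_representation(page, {"images": False}))  # → alt text
```

The point of the sketch is only that the same content is held once and the delivery context, not the author, selects the presentation.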

jw think we want to define functional presentation independent of cognitive issues.

But what have they proposed to discuss?

<rscano> i think we need to focus on what they need to clarify with "our" support

cs it's probably good that the content provider doesn't know about the device. forgo "digital ghetto" issues.

ls "presentation" and "user able to complete" dependent on user's capability.

jw doesn't seem to be a clear demarcation between cognitive issues and the definition of functional presentation.

jw perhaps unintended issues.

wac agenda for the call?

jw work out whether they have unified position in reviewing our doc. or questions we want to consider.

wac seems 2 points for now: 1. specific checkpoints we want feedback on? 2. how future guidelines would mesh

cs make sure we cover all the authoring scenarios in our docs. that we have techniques to address in accessible ways.

jw they are strongly interested in server-side and proxy technologies.

jw how address that tech in the guidelines? kinds of techniques we'll develop in that area.

wac perhaps overlap in techniques documents.

cs toc for their scenarios doc: applications and content, interactivity, rich media, etc. we're worried about all of these things.

cs do we have guidelines or techniques to cover.

cs we've never really sat down.

action wac and cs - go through client-side scripting work started at linz f2f as well as authoring scenarios. before the meeting. (this weekend?)

action jw: go through checkpoints to determine the ones that we should highlight for review by DIWG.

js thinking through how to describe the process - how to think about accessibility throughout the process.

ls give this doc a good review and see where like more input.

action ls - join cs and wac in discussing authoring scenarios

jw they are good at distinguishing different types of content.

jw set down different characteristics.

gv info should be presented on any mechanism, but nothing about operation.

jw think it is there, but perhaps not spelled out as much as it could be.

technology-specific checklist proposals

ls concern: loss of content. help people know why they are doing it. people will be downloading and referring to checklists and not guidelines.

ls if losing that content that concerns me.

pb which are you concerned about losing? what we've generated so far include guideline, checklist, and success criteria.

ls i like that, the embedded version, particularly with the different colored tables. the explanations under the guidelines.

pb prose examples and things. very useful.

jw definitions, non-normative discussion, etc?

ls it doesn't mean we shouldn't generate checklists, these are well done. it doesn't remove checkpoints from principles.

ls my concern is that if we decide to generate checklist, this is not all the info.

pb let someone else generate the checklist?

ls there is some argument to doing that?

gv the checklists are technology-specific. it's not just a collapsing of guidelines to checkpoints. it is more info and different info.

pb it's both. in the version that i put up with the gateway page, there is a version that is analogous to the 1.0 checklist - it is a collapsed version of the guidelines. although, even the collapsed version includes success criteria.

asw what does "other" column mean?

asw under techniques "applied, other, not applied"

pb we generated 2 in the meeting, then did updates since then. refer to ben's email.

pb i came up with a main gateway page to help you select the most appropriate page. also came up with another design of the checklist.

bc i refer to both what we did in linz (has the other column) plus 2 example pages that are more refined.

asw i'm looking at pb's gateway.

ls people make checklists then don't know why. the flipside is they are conforming to the checklist and not to the principle.

<rscano> it is important to make a step-by-step guide that could be used also by the individual user and not only by developers...

ls what if at the top of each (guideline/checkpoint) there were a box or table for conforming. have diff types of checklists.

<rscano> and for every point the user must read the checkpoint that generated the checklist... otherwise it becomes like an automatic tool

ls at the guideline level you are expected to do some sort of user testing. here are the scenarios you test for.

wac had talked about including tests for each technology-specific "rule"

wac e.g. "to determine if the alt attribute is provided for an image element: turn off images and see if the text is there, use an automatic validator...etc."
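The automated half of the check wac describes could look something like this minimal sketch (an editor's illustration using Python's stdlib parser, not an actual WAI tool; the function names are invented):

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Collect img elements that lack an alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing_alt.append(attrs.get("src", "(no src)"))

def imgs_missing_alt(html):
    checker = AltChecker()
    checker.feed(html)
    return checker.missing_alt

print(imgs_missing_alt('<img src="a.png" alt="logo"><img src="b.png">'))
# → ['b.png']
```

As ls notes next, a check like this only verifies presence of the attribute; whether the text is actually equivalent still needs a human or user test.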

ls kind of. i'm talking about user tests.

pb advocate that user tests be recommended.

cs then we should write test cases.

ls we don't really know what they are.

pb in talking about the checklist, it would not include details like that. i realize that is part of your concern.

pb hopefully the checklist is brief enough to serve as a reminder. the details you're discussing fit better into techniques

cs or a separate testing document.

jw have not proposed to write one.

jw have requirements from the atag wg.

wac list what info people want versus how they want to see it presented. therefore, lisa wants this one view, but not everyone will want it. we need to make sure the data is there so that that view is possible to generate.

gv yes, we can now generate different views.

jw we have some idea of what we want to do. we need to set up automated processes to generate.

jw ideally change the format in which it is presented by changing the transform w/out extra work. what do people think about implementing that?

jw what infrastructure need in place?

wac it's in xml.

pb i didn't alter the structure, but ability to create views and can inform the xslt.

jw the idea w/the transformations. should be easy to change structure, add another view, etc.
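The single-source/multiple-views idea being discussed can be sketched as follows. This is an editor's illustration only: the element names and structure here are invented, not the group's actual DTD, and the real pipeline uses XSLT rather than hand-written Python:

```python
import xml.etree.ElementTree as ET

# hypothetical mini-source; the group's real WCAG XML DTD differs
SRC = """
<guideline id="1">
  <title>Provide text equivalents</title>
  <success-criterion>all non-text content has a text equivalent</success-criterion>
  <technique tech="html">use the alt attribute on img</technique>
</guideline>
"""

def checklist_view(xml_text, tech):
    """One possible 'view': collapse a guideline to a per-technology checklist."""
    g = ET.fromstring(xml_text)
    items = [t.text for t in g.findall("technique") if t.get("tech") == tech]
    lines = [f"Guideline {g.get('id')}: {g.findtext('title')}"]
    lines += [f"  [ ] {i}" for i in items]
    return "\n".join(lines)

print(checklist_view(SRC, "html"))
```

Adding another view (gateway page, collapsed checklist, full guidelines) then means adding another transform over the same source, which is the point jw is making.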

gv on these drafts, we have min. success criteria: pass/fail/n/a

gv thought we could write it w/out n/a.

gv if don't have non-text content, then you pass.

wac think n/a is applicable. don't think people will find it intuitive to think that if they don't have something they "pass" b/c they haven't actually done what is needed.

pb agreed. similar issue in 508.

cs technology-specific stuff. in dealing with images, sound is irrelevant.

gv when i see "n/a" i see people using it to check off anything they come across that they don't have to do. or "can't do in this technology."

gv can't say n/a have to find another way to do it.

gv understand that people want to use it. worried about having it and the way it will get used.

wac i think we can write things so that it is clear what we mean. there are lots of ways people can abuse things. i wish preety were here to discuss how her eval tool works.

jw write exclusion.

gv check an exclusion, not "i'm just going to bail."

bc if making a conformance claim and looking at what changed since last month.

wac in EARL also have "not evaluated" can pick up those.

gv in the old ones we had to have n/a since we didn't word things properly. in many eval protocols you have to have n/a b/c of the way they are worded.
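The position the group converges on (n/a allowed, but only as an explicit exclusion; "not evaluated" as a separate state, as in EARL) could be modeled like this. An editor's sketch with invented names, not any actual WAI schema:

```python
from enum import Enum

class Result(Enum):
    PASS = "pass"
    FAIL = "fail"
    NOT_APPLICABLE = "n/a"
    NOT_EVALUATED = "not evaluated"   # separate state, as wac notes EARL has

def record(checkpoint, result, exclusion_reason=None):
    """n/a must carry a stated exclusion - gv's 'check an exclusion' point."""
    if result is Result.NOT_APPLICABLE and not exclusion_reason:
        raise ValueError("n/a must be justified with an exclusion reason")
    return {"checkpoint": checkpoint, "result": result.value,
            "reason": exclusion_reason}

print(record("1.1", Result.NOT_APPLICABLE, "page has no non-text content"))
```

Forcing a reason makes "n/a" an auditable claim instead of the "i'm just going to bail" escape hatch gv is worried about.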

gv if these are checkpoints, the current ones under html techniques - they are guidelines not checkpoints. "use alt, use body" etc.

gv if these are techniques that's ok, but if they're technology-specific checklists, you can't check off "use" - you have to check off "there are..." then you can say pass or fail.

gv applied/not applied does not say if achieve anything or not.

pb as a distinction, either apply or not apply techniques but pass/fail the success criteria.

gv you should have to check everything in the list.

pb that is a good point. for any success criterion we could have 1-20 (or more) techniques. 5 could apply, but you choose one. the techniques become suggestions for how to achieve the success criterion. that is confusing.

gv if it is a checklist - you have to do them all. if a bunch of different options, then it's a list of techniques.

pb i didn't have checkboxes for each technique, but the success criteria.

jw an issue raised earlier: desire that our techniques be clearer as to which are mutually exclusive, which will satisfy success criteria, and which version of a technology a technique relates to. e.g. dependent on a feature only avail in certain versions.

pb one thing to complicate it - as technology evolves we'll have to keep adding to it. and maintain it.

pb every new technique would have to fit in somewhere.

pb not that it's bad, but a huge task.

wac anyone want to work with me to think about these questions?

gv i could.

ls we've been doing some of that?

action asw, js, pb, wac, bc: work through a specific html item to see what it could look like in a checklist, to answer some of the questions raised today.

jw in order for a technique to go in, what info do we need.

wac think the structure is in the dtd.

gv also an issue with "technology x". perhaps, "an alternate version of what is provided in x is ..."

js how much does the person doing the evaluation have to know about which technologies are being used? oftentimes, i don't know what all they are doing underneath.

cs think this is designed more for the author.

jw and authoring tool guidelines.

jw running out of time, so won't have full discussion of 4.1

wac who is ok with "plain language"

yes: 3; the rest no.

<mattSEA> I'm a no.

jw not opposed.

pb undecided

cs undecided

lgr undecided

gv take the word "plain language" and translate it...it is jargon. it doesn't mean just "plain language", it means structured, organized, and other things. not just language being plain.

gv as soon as we start using terms of art, then i worry about translating it.

aa basically, what it comes down to is "write clearly"

aa "term of art"? am i too steeped in it. in some sense, yes.

aa in essence, for 4.1, maybe it would be more to the point to say "write clearly", then people can understand that to mean "use language that is meant to be understood."

bf issue with "clearly."

bf why not "use appropriate language"

ls then we'll go around and end up with the original wording.

aa if you do a search in google and enter "plain language" you will see how it has worked its way into the vernacular. perhaps only in english.

aa i know people in the group i work with, they would say "audience." however, this group looks at content not audience.

aa where clarity is emphasized.

ls it brings home the problem that the phrase "plain language" is ambiguous. it's an amazing concept.

jw i think there are a couple of issues. 1. what is required is in the success criteria, but make the checkpoint as clear as possible.

jw we ought to make every effort to make the checkpoint as clear as possible.

jw 2. do we want to keep discussing checkpoint text or move to success criteria and come back.

jw i would like to see the success criteria discussed. 3. we were considering restricting 4.1 so that it didn't include structural requirements in 3.1.

jw proposal to separate those out.

action aa: repropose 4.1 based on today's discussion. ask people to focus on success criteria instead of checkpoint text.

aa oftentimes, the success criterion is just drawing people's attention to this issue.

js the checkpoint tries to incorporate into one sentence how to use language effectively.

jw ls, i will try to find the message, it did get posted.

bc 4.1 ideas compilation.

aa do you include the guidelines that long?

wac perhaps break the proposal into checkpoint, success criteria, and fodder for techniques. look at core techniques for wcag 1.0 as a model.

jw can't take this up next week, since meeting with diwg. let's discuss on mailing list. week after next (19 sept 2002) it will be on the agenda.

ls no one likes the idea based on percentage?

action ls: take discussion of percentages back to the list.


$Date: 2002/09/06 16:46:54 $ Wendy Chisholm