WAI Authoring Tool Guidelines Working Group
Chair: Jutta Treviranus
Date: Sunday 26 March 2000
Time: 9:00am - 12:00 noon Los Angeles local time
Venue: Los Angeles Airport Marriott Hotel, California. Further details at the bottom of this page.
Attendance at this meeting was open to members of the working group, and to people who are interested in joining, have read the Working Group's Charter, and are able to make the required commitment.
Introductions and agenda overview
JT ATAG is now a rec. Probably has greater ramifications for implementation than WCAG. Congratulations
JT How to stop the document being a gigantic monster. What type of view is useful to a developer? Given the AU document what would be helpful support?
DB Checklists are always best for meeting with people and doing conformance. Techniques that run off the checkpoint list is helpful. When you want to print it gets tricky. I haven't gone through all that much of the techniques
JT Is there sufficient granularity in the checkpoint list?
HS We have gone through the checkpoints and where it doesn't seem clear we went to the techniques. At that point we needed to be online because the printouts were a bit overbearing - shuffling through too many documents. We had to have WCAG and ATAG techniques.
CMN As a member of the AU WG, one of the things you are supposed to do as a prerequisite for participation is keep up with the work being done by both the UA and GL WGs; GL is currently modularizing the WCAG Techniques document; aim is to produce a trimmed down view so that you get only what you want; do we want different views for different technologies? just in WCAG portions, or for everything?
HS Yes. It's well broken up but feels like a lot of bullet points.
WL So without paper it isn't a problem?
HS Right
JT We have tried to include samples, hypothetical examples, etc. Which are most useful?
HS The examples are most useful. It is helpful to state up front that examples are just that.
HR There are some techniques that will be from real tools. Those are most valuable.
WL Increasingly we have, and we are looking for, real examples
JT We have 3 types of support and we are starting to fill those in. Is there anything else that would be helpful
DB As well as techniques referring to reviews of conformance would be helpful
JT What about how to deal with images and multimedia in the techniques document. If we put in screenshots the document becomes a very large download. Has anyone had thoughts on how to do that? Should we include images in document, or have a version where the images are just linked, or what?
WL I don't think we need to worry - users can turn images off
GR I think having zips solves the problem
CMN compression doesn't do much for images
GR We could have image-free bundles.
JT Do we need redundancy for images
certainly
JR So far there are not many images in the document outside the section I am doing
CMN that will change
JT So should we have an image free version?
GR I would like to have two views, with descriptions used to generate the text-only view.
DD I don't think we should have a separate version.
CMN I don't think that we need the images
GR I don't care, so long as we have long descriptions
DD Can we put the longdescs inline as an appendix
JA That will be easier
GR I am concerned about the plaintext version. Having an appendix at the end is a hassle
WL If there are links then it's OK
GR Yes, but a plain text version, a braille hardcopy, etc, is a pain
HR Use CSS classes
CMN I started by using object to include inline text descriptions
Resolved: One version of techniques document with multimedia
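The two approaches discussed above might be sketched as follows; the filenames, ids, and description text here are invented purely for illustration:

```html
<!-- Hypothetical: longdesc pointing at a description collected
     in an appendix at the end of the document -->
<img src="prompt-dialog.png" alt="Alt-text prompt dialog"
     longdesc="appendix.html#prompt-dialog-desc">

<!-- Hypothetical: OBJECT with inline text fallback; user agents
     that do not render the image present the text instead -->
<object data="prompt-dialog.png" type="image/png">
  A dialog box prompting the author for a short text
  alternative when an image is inserted.
</object>
```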
JT What we are doing in relative priority is referring to potentially the entire WCAG. How do we deal with the references to WCAG (and then to techniques for WCAG as well)? We want to convey which WCAG checkpoints are relevant, and then convey strategies on implementing WCAG requirements. We also want to convey techniques for tools that can solve several WCAG (or even ATAG) checkpoints.
WL What's an example
HS The first relative priority - ensure that auto-generated markup conforms to WCAG. The ATAG checkpoints indicate a great deal of information, but WCAG checkpoint 5.1 or 5.2 talks about generating tables with correct markup, which is a problem for how to interact with the user. Is this a set-in-stone thing or a living document?
JT The guidelines stabilise, the techniques are intended to change.
HS So we should just look at things where developers find issues
JT When there is a relative priority we should leave out things that are fairly clear how to do it. The question then becomes what is obvious.
CMN splitting by technologies
JT It seems to make more sense to our work - we are dealing in general with developers of a particular type of tool.
WL Who checks that the WYSIWYG editor doesn't need image stuff?
CMN Word has an image editor for example
HS Office 2000 has an image editor for the whole lot
JT So what do we do with the relative priorities...
CMN The trade off is that in my experience the things that seem obvious aren't, but having things that are too big is scary. I prefer to have everything in there.
JT should we have the stuff inline, or linked off to a separate document
WL Why
JR There may be 60-odd WCAG checkpoints, with several techniques for each of them.
CMN People prefer to use print in general, so shuffling documents makes it more difficult
DB having print is helpful
JT What if there are appendices for the sizeable chunks that are relative priorities
CMN Is it a problem that the techniques for some checkpoints are very large?
JT I have heard from some people that the apparent redundancy prompts them to skip things
DD Maybe we should use some kind of styling to enhance. I can see what is ATAG technique and what is just WCAG - I think it is OK. Maybe it should be organised by priority. The other thing I would like to see is topic-grouped techniques.
JT Should we order WCAG by priority or have multiple views?
DD Unless we move to dynamic views I would use priority for the default.
JT is this coming?
DD We have been working on it
CMN Sort of - it is slightly different stuff
HR When there is a lot of repetition you can make appendices. The issue is that when you read the document you encounter a problem and want to know all of it.
HS What information would be available?
JT The guideline itself, the intro text, the general implementation techniques, the checkpoint, the general techniques inline, and then in the appendix the references to WCAG
DD Should every technique have an ATAG-specific sentence? In ATAG 3.2 there is WCAG 5.4 - do not use structural markup in layout tables. There should be an ATAG bit
JT Right. It's work to do
DD I think it is necessary.
CMN Using the WCAG checklist organisation - topics, and within that by priority.
DB Taking the WCAG checklist you can have it once and refer back to where it matches the ATAG.
JT Have one unified WCAG list for all ATAG relative priorities?
HS It would have to be repeated 7 times
JT Are you suggesting an appendix that has a map from each WCAG checkpoint to the relevant ATAG checkpoints?
HS That's going from Web Content to authoring tools
DB I find that easier
JT We can have that as the way it is done throughout, or you can have it as a companion piece or appendix
JR I think that might be easier - each WCAG puts it on a page
HS I don't care how it is ordered so long as it has the content. But you have to think about who the audience is - it makes sense to go from WCAG to ATAG if you started going from WCAG, but the purpose is for people to understand how to implement ATAG.
DB Let's look at the first ATAG relative priority - I have to review all 64 checkpoints. I don't want to get something lost - there is difficulty in actually doing this. There are actual techniques to be written.
CMN I propose to do it with the WCAG stuff inline, ordered by topic, and add an appendix that has the mapping of WCAG to ATAG
DD I would prefer to have it organised by Priority so we can release a techniques document that at least covers the Priority 1 stuff.
WL If this can be done dynamically we can choose to do it all
DD It can, but it won't for the next release
WL To me the most interesting observation is that we are not dealing with any content yet.
CMN I would prefer to see people look at the whole topic as well.
HS We look at Priority 1, then at Priority 2....
Resolved: have the WCAG stuff inline, ordered by WCAG checklist order, and add an appendix that has the mapping of WCAG to ATAG
JT One of the reasons we didn't put all the content in here is there is a lot.
DB Heather and I have talked about this. Developers are trying to comply. When the WCAG drafted their document they thought about humans making decisions. Authoring tools don't necessarily make decisions or prompt authors. For example ensure auto-generated content complies. How can a tool ensure that information is not being conveyed with colour alone.
WL If the documentation and prompts do that the tool complies
CMN so the issue is how to put in the prompting
DB Right. The issue is how often you want to prompt people. You could end up with a situation where everything involves being prompted several times
DD I think the answer lies in the configuration - the first time people do something they get a prompt, and then you let them set up the way prompting is done. The answer is with configuration.
DB I understand about being able to configure, but you are talking about a lot of work - many prompts, lots of configuration. Are they on by default? These guidelines push for prompting, but generally developers try not to prompt people all the time.
WL There are people who are looking specifically for the tools.
HS Then if they are specifically doing that they can turn on the options deliberately. But that's not prompting
CMN There are alerts.
WL You get to make the choices on some kind of generic basis.
/* GR minutes to break
DD: is the issue you are raising that it is hard to implement in product or a pain for the user?
HS: issue is are we defeating own purposes by prompting all the time
DD: should be left up to the company -- might want default on and can't disable, others want to do another way; important thing is that functionality is available
HS: having hard time with GLs--product group wants to get compliance; if recommend that something should be on by default, do we not meet it if not on by default
JT: look at integrating into look and feel of tool -- user should be able to state preferences; should be able to decide for themselves
CMN: 2 points where have prompting: 3.1 prompt the user for alternative content -- that's straight out -- put in image or movie, have to prompt user for description
DB: if user can turn off, is it ok
CMN: if you already have a prompt built into the tool
DD: not if you can only add it with a drag and drop operation
DB: ensure when creates markup that complies; doesn't say ensure user can do it, but tool automatically generates markup
DD: automatic is important; if the user drags and drops an image, then the tool adds something to the document source, then that can trigger a prompt
JT: if have image map tool that is WYSIWYG it isn't the user that makes the choices about the markup--developer has already done that
DB: if structured page using color for semantic purposes, but someone turns off the prompting, does that make the tool non-compliant?
GJR: the onus should be on the user to add the content--the onus on the developer is to guide and assist the author in adding appropriate alternative content; if the user turns off prompts, there isn't anything that the tool can do about it, save for a general "you should run an accessibility check on this document" prompt/alert when the document is saved
JT: what about content generated manually?
WL: if turned off prompts, is it tool's bad if it doesn't remind the author that he or she should do an accessibility check?
DB: then when the tool comes out of the box, have to have prompts and alerts turned on by default; implication is that if user changes the default settings, then it is their bad
GJR: checkpoint meant to refer to things where they are done by the user -- when user tries to change color of text, give user choice between CSS or FONT; if done automatically, tool should use CSS;
JT: tool would be making assumptions based on what is in the original document; way expressed as HTML should be WCAG compliant; applies to things author doesn't control, but which the tool controls
DB: allows people to work in environment that allows them to interact in a layout modality; if the user is going to do something bad, going to have to prompt them
JT: as for FrontPage and the color of text; shouldn't have FONT as option; should use CSS; second talking about tool's choice, not the user's choice
JR: 2 choices: choice being made automatically -- what formatting to code something blue -- use CSS rather than FONT; in another case when user says things in blue are important, the problem lies not with you but with the author
DD: in Word have possibility to say you have an H2 -- want to convert to a real H2 and not use FONT sizing
CMN: case where user does something is covered by 4.1 -- check for and inform the author; author is made aware of the fact that they have done a bad thing and need to fix it; doesn't have to prompt every time you do something
JT: prompts in form of dialog boxes; appropriate way should always be first choice
CMN: that's the P2 req, too -- make good ways of doing things obvious
DD: don't use invalid markup for styling;
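To illustrate the distinction being discussed (a hypothetical example, not taken from any particular tool): when the author asks for a large blue heading, the tool can emit presentational FONT markup or a real heading styled with CSS.

```html
<!-- Problematic: purely presentational, conveys no structure -->
<font size="5" color="blue"><b>Chapter 2</b></font>

<!-- Preferred: structural markup, with the presentation in CSS -->
<h2 class="chapter">Chapter 2</h2>
<style> h2.chapter { color: blue; } </style>
```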
DB: don't quite see how can measure compliance; foreground and background contrast
WL: only have to concern ourselves with compliance of tool
DB: trying to explain trickiness of trying to make a tool make decision based on GLs intended for a human
JT: that worries me, b/c you are part of group; some tools can do things automatically and others can't
DB: not asking Word to have WCAG prompts, but when save as HTML want it WCAG compliant
DD: when no prompting is important; heuristic in tool to find out if poor contrast; not completely automatic
CMN: if author says I want white on white, should identify a problem; if tool generates only background color and not foreground; that would be auto generated markup that is bad
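A tool-side heuristic for the white-on-white case might compare foreground and background colours by contrast ratio. A minimal sketch in Python; the luminance formula is the one later standardised in WCAG 2.x, used here purely as an example heuristic, not anything the guidelines mandate:

```python
def _channel(v):
    # Linearise one sRGB channel (0-255) per the WCAG 2.x formula.
    c = v / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    # Relative luminance of an (r, g, b) tuple.
    r, g, b = (_channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    # Ratio of the lighter luminance to the darker, offset by 0.05.
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# White on white yields a ratio of 1.0: the tool should flag it.
print(round(contrast_ratio((255, 255, 255), (255, 255, 255)), 2))  # 1.0
# Black on white is the maximum contrast, 21.0.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 2))  # 21.0
```

A tool could run such a check over generated styles and alert the author only when the ratio falls below some threshold, rather than prompting on every colour choice.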
DB: would you expect word to prompt them when save as HTML?
CMN: yes -- that's the 4.1 thing -- don't expect that under 1.3; author told tool to do that
DB: do we need 64 separate prompts
DD: potentially
CMN: should be done before saved as HTML as part of saving the file
JR: can't check for simple language
CMN: Word does this as well
JR: doesn't check to see if grammar is good for audience level
DB: Word may be the trickier one to measure
HR: if want to be web authoring tool, need to worry about these things; if you want to advertise as a certain level of compliance, have to do these things
CMN: should you have 64 prompts when convert to HTML; answer is no-- process should be invisible to author
DB: no, asking for people to be prompted when using Word; got to be a better way than prompting 64 times during save as
JT: look-and-feel of Word, grammar and spell checkers are part of Word's look-and-feel, so would want to use that format for
DB: thought that when out of box all have to be on
DD: where is that written?
CMN: 5.1
DD: assist author in correcting (4.2)
DB: 4.1
CMN: neither say default is on -- integrate and make most obvious
DD: tool may, not must
DB: is CMN saying that the 2 GLs together mean defaults have to be on when installed
DD: need to have verbiage to make configuration easy;
GJR: (comment lost)
CMN: 5.2 -- if off by default there is a lot of work to make them work; in HotDog you have to turn it on to check, but the tab is always there; agree that is a lot of work, but that is what is needed
DB: point isn't to complain that is a lot of work, still a bit fuzzy when we are in compliance; measuring compliance is the problem; have gone through with HS and FrontPage person; need to make easier to determine compliance level
HS: better organized checkpoint
JT: conversion tools are a difficult thing and should address that class of tools
/* Break, HR Leaves, CMN resumes minuting
JT Several points coming through.
DB Prompting in the developers' language is required
CMN looking at the next topic a bit, conformance evaluation is important for resolving these issues and providing examples from the WG of how conformance is assessed
JT There is also the issue of different classes of tools and how we organise the document -should we have a way of organising it in terms of conversion tools, or document editing tools, or source editors, etc
DB I am not sure whether it is worthwhile having a completely separate document
HS You also run the risk that a development team say it is a conversion tool when an author thinks that they are using an editing tool.
WL I think you have to leave it to the developer to decide what they are doing
JT Should we have these within sample implementations
CMN important to have varied tool evaluations because it is important to developers at the end of the day to get to an evaluation
WL Is everyone satisfied that default settings is an issue
CMN I think this is an issue to be flagged.
HS That is going to come out when we have techniques. As long as techniques are consistent then it will be clear.
DB I think it should be explicitly stated how it should be.
GR We could borrow a page from the User Agent guidelines noting what is required in defaulting
WL And for other settings.
JT There are issues of what has to be on by default.
CMN They are important because that is what developers are doing at the end of the day. It is important to get the tests into the techniques. But the real question is how do we get more of them done
JT We need to agree on how this happens. That may be done by doing them.
CMN The best way to learn is to do it
HS What is the policy? Are there going to be people you can pay to do a review?
JT There are several pieces. What will be in the techniques document, conformance resources that will be available, the process(es) that developers can use.
GR I think a good example is what happened with the Homesite evaluation. They read it, came back to us, and Marjolein is now in the WG and working on conformance.
JT In order to come up with the test we need to go through the exercise. Taking that point, if we want good standardised evaluation we are going to have to come up with fairly clear statement of how the tests take place
DB Are we as the WAI planning to make sure that there is an analysis of each tool, are we expecting developers to do them, will they be public, etc?
JT We are not going to do them all - we are doing them to get a better standardised method. What are we doing to benefit the techniques document, and what will be the general practice in the wide world? The question of how developers get an evaluation is part of EO (and a bit part of ER - CMN).
WL There is a lot of an honour system - you can get the Bobby Icon and put it in a page whether or not you are compliant. WAI is unlikely to go to court over misused icons...
DB I think if we put something out with a misleading icon we will hear about it
CMN one of the questions that was raised is: is this stuff going to be public? reasons to have public: gives developers leverage to go to teams and ask for voluntary evals; in interest of developers to look at what is there to see if accurate
WL Some of this will come up in a regulatory arena
DD From the W3C point of view we are looking at starting a conformance activity in W3C at large, introducing more formal ways of branding product/services as conformant, and I imagine ATAG will be part of that. CSS already has a test suite online where you can evaluate a browser, and get a report. The same kind of thing can be done with ATAG. The question is whether we are going to have an official W3C stamping. It might happen in the future.
CMN the other part of your question is there someone you know to whom you can hand over a fistful of dollars to perform a conformance eval; in case of WCAG there are people taking big bucks to do testing
HS If it is someone saying "I'll test" that is different from an "official" testing. We have people testing from outside for 508 compliance, but we also have an outside testing agency for the Windows logo program
JT We are not yet at that point. We are going to have to come up with the tests for W3C to use
DD When we get serious about this in W3C we will have to talk to people who have done a lot of this. It takes resources - if we start testing for compliance it is likely we will ask for money
HS That is how the windows logo program works. It is interesting to look at how this will happen
JT Yes. But first we have to come up with a standard process for finding conformance. Two issues are whether we make those things public, and what if we get two or three different evaluations of the same product.
CMN The AU group can only do things in public. But we would like to have a formal process for getting developer feedback on reviews
JT I think it will be interesting to have different people evaluate the same tools, to see where the fuzzy areas are.
DD We need to come up with measures for each checkpoint (or parts of them for relative priority checkpoints, for example). Some checkpoints are more subjective than others, and compliance based on those is different from failing a simple objective test.
JT Should we look at a tool
DD Should we finish the techniques first
CMN we are a small group with a small amount of resources; filling out doc could take months; evals -- do the work, see what comes up -- this is the issue this is what I found
JT Can I suggest FrontPage?
DB This is a question for us. We have privately looked at it
DD I think Word is more important
JT I suggested FrontPage because it is closer to the traditional tools
HS FrontPage would be better.
DD Is there value in doing FrontPage Express (does it still exist?)
HS I haven't looked at it. As far as HTML output our biggest product is FrontPage
JT Looking for volunteers...
Action JA: Review FrontPage
Action GR: Review FrontPage
JT Timelines?
JA weeks
DB We have looked through it, and we have more meetings scheduled. The question is whether that analysis will be public (which is not my decision).
CMN I have found it harder to sit down with a new tool than to do it with one that I know.
GR Could you get a commitment that if other people do them they will comment on reviews
HS We can't promise (sitting in this room...)
GR Doing evaluation in AFB we made drafts available to developers - they can give feedback if they want to in a limited time.
DB I am glad that there will be outside evaluations done, which puts a bit more pressure on the development teams to respond. As WL said, there are going to be market pressures as well
JT From our perspective the exercise is to figure out how the conformance reviews will work.
HS A question will be what is the copyright - can people take things out and publish them?
DB You can't stop a reporter from publishing
GR We want a proper disclaimer.
CMN if these are published by the WG they are covered by W3C copyright
JT Should the test be part of the techniques document?
DB I think you should have a separate document
CMN You mean a separate document from the general techniques document.
DD Should the test document be as detailed as the techniques document.
CMN You have to get down to the functional requirements. But there are economies of redundancy.
HS So it will be more helpful if this is a seperate document
DD When we resolved the issue of WCAG checkpoints within ATAG checkpoints what do we do
JT We omit them. (By current custom)
JT Once we have several reviews we might want to try another tool.
GR If you want something quicker, I have done a HomeSite review and we could work on that - there is a promise from HWG to do that.
CMN In general, you can pick a tool for which we have a conformance review already.
JT How do we encourage implementation? How do we support it
HS Do the techniques document.
DD Do we have resources that tell us which tools have market share?
GR There was a review posted to AU list. People are using multiple tools.
DD We need something we can look for using robot.
CMN The key is about doing work - the techniques document.
DB Is the outreach group making sure that computer publications have summaries that they can publish? They should be pushing that.
GR They are pushing accessibility for web design contests.
Action JT Ask CG to get EO to promote use of AU work
JT We are hoping to have a joint meeting at WWW9 with them. Many techniques are being dealt with by them and we want to coordinate.
CMN We are waiting for an actual invitation.
DD I have checked on that already. You are hereby invited to the ER meeting on 12 May. Agenda/registration online
CMN We need to read the Evaluation and Repair Techniques document and track it.
JT Our charter expires in April.
DD WAI charters are now extending into the future.
CMN How long do we go for and do we want to try and revise the guidelines within the life of the charter?
DB I don't think we should be aiming to come up with a new set of guidelines yet.
WL I think the techniques should include adapting to new technologies
JT It is understood that techniques is living and breathing.
DD We should go for two years, and aim to revise the guidelines at the end of that.
CMN I would prefer one year, and a deliverable is a report on what to do with the guidelines.
Resolved Want a one year charter, and a deliverable is a report on what to do with the guidelines.
DD Judy and I have been working on promoting the WAI in Europe. Part of that will be ATAG-specific, tracking localisation of implementation. We'll have some detail coming on what we want to do.
JT Is this a separate activity?
DD From the EU perspective it is administratively separate. From the WAI point of view it will be the working group doing it.
DD Glossary - do we have a plan on sharing that with other guidelines
JT We have adopted a number of terms
CMN Ian is the joint editor of all three. We have aimed for consistency where possible.
DD What about Amaya?
CMN is working on it.
WL We should look for people who can comment on guideline 7 in particular.
JT ATRC is working on that.
CMN Should we have another face to face, and where?
DD Where have last ones been?
Boston, Boston, California, sort of one coming in Europe.
DD Tahiti.
Last Modified $Date: 2000/11/08 08:13:13 $