Doyle: Could be more people from Wells Fargo participating in WAI.
Shawn: Status on meeting in Canada?
Judy: Waiting for confirmation on room block at hotel.
Judy: Status is pretty certain, but maybe wait until early next week to reserve tickets.
Judy: Why are people not posting comments to list?
Alan: Wanted to keep the noise level down.
Judy: Want noise level up! Unless comment is private, post it to the list.
Judy: We're working on format for change logs.
Alan: Yes, they're confusing.
Chuck: Big issue is resolving wording for identifying scope of the site.
Judy: Yes, that's the most difficult piece of the document. We have not come to a conclusion.
Chuck: A number of confusions. Semantic confusion using the term "expanded". Default should be the entire site, not a subset. The order of points makes it look like the small set comes first.
Judy: Original discussion yielded two things. Whole site or reduced set.
Chuck: Generates confusion.
Chuck: Maybe we need more explanatory text.
Judy: Conformance: one would select a sampling of pages that represent different parts of the site and run exhaustive tests on it. Then, take the entire site and run automated tests across it. If the entire site is in the range of millions of pages, instead of doing the entire site one could pick a representative sampling of pages that was larger than the small set, test that, and disclose that not every page on the site was evaluated.
Harvey: dynamically generated pages need to be accounted for.
Judy: Identify two sets. First set: a sample with representative page types; throw everything at it. Second set is supposed to be the entire site, but if that is not realistic, it could be replaced by an "expanded" page selection upon which automated tests would be run, and disclosed.
Andrew: Becomes clear in point 2, need to put it in point 1.
Judy: Terms are confusing. Set A, Set B.
Shawn: Requires more thinking.
Helle: Put something in introduction that describes these selections according to manual or semi-automatic?
Andrew: You still need to make a page selection.
Judy: Assumption checking. Worked on this in depth 3/4 of a year ago. Thought then it was unrealistic to run tools across an entire site. Still a valid assumption? Are there tools that do massive-scale evaluation? Sense from talking to commercial developers is that a lot has changed. Could we now say there are tests you run on a selection and tests you run on the entire site?
Natasha: Agreed. There are commercial tools available. Need to mention that. If some people can test with commercial tools then we should guide them.
Judy: Put a lot of emphasis on page selection. There are tools where you can do millions of pages.
Shawn: Agree that things have changed. Good to have in there that if you don't do the entire site then you need to clearly explain that.
Judy: Remove A-B version references and say page selection and entire Web site. If for some reason you're unable to, then do an expanded page selection and disclose. Note that goes somewhere in the document.
Andrew: Then we need to mention commercial products.
Judy: We can do that on tools page. Note there which can run across large sites. How many can run with WCAG conformance as opposed to 508 conformance.
Judy: Bobby can do millions. Crunchy, Page Screamer?
Natasha: They do 508.
Judy: Need to do research on this.
Helle: What about LIFT?
Judy: Retrofitting tool.
Helle: You can test your site or retrofit.
Judy: Remove reference to expanded page selection, demote to exception note only.
Judy: Add more information to eval tools list about large volume evaluators.
Andrew: Need to identify when was update of notes.
Chuck: Still confused.
Helle: What do you do when you test millions of pages?
Natasha: Break them into smaller sections and test them, then run the entire site.
Helle: Maybe you just say you need to do this on the entire site.
Judy: Move "Identify targeted conformance level." Identify page selection. Identify entire site. Note then: if the entire site is not realistic, expanded page selection (see below). Remove references to expanded page selection.
Shawn: Add Identify page selection for manual testing.
Judy: Leave ordering, add in rationale.
Andrew: Manual and user testing.
Chuck: People don't understand "disclose".
Judy: Agreement that we needed to be clearer about disclosure.
Natasha: Not clear whether it should be on Web site, on paper?
Judy: Two stages, one while testing is going on, identification has to be stated internally. After done, if making public statement, claim should be accompanied by disclosure that entire site was not evaluated.
Andrew: In notes.
Judy: [Reviewed her changelog notes]
Judy: Adding issue numbering to changelog format.
Chuck: Change freeze to capture. Capture test pages as rendered in browser.
Helle: We use this in both prelim and conformance, in prelim we need to be sure people know what they need to do, how to capture pages.
Judy: Is anyone evaluating dynamic pages?
Judy: Natasha and Doyle, please check with technicians. What are the 2-3 most important questions for large sites of dynamically generated pages? Can test templates, or capture generated pages.
Harvey: Does page content always have images, or multiple images, and are they ALT-texted?
Andrew: If you have an image database, what are you storing with the images?
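[Editor's note: a minimal sketch, not from the minutes, of the kind of automated ALT-text check discussed above. It uses Python's standard `html.parser`; the sample markup and class name are illustrative. Images flagged here would still need manual review, per the discussion below of automatic vs. manual checks.]

```python
# Hypothetical sketch: flag <img> elements with no alt attribute,
# the kind of check an automated evaluation tool would run per page.
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Collect the src of img tags that lack an alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if attrs.get("alt") is None:
                self.missing.append(attrs.get("src", "(no src)"))

checker = AltChecker()
checker.feed('<p><img src="logo.png" alt="WAI logo">'
             '<img src="chart.png"></p>')
print(checker.missing)  # images flagged for manual review
```

Note the check only flags a *missing* alt attribute; an empty `alt=""` may be a deliberate choice for decorative images, which is exactly why a human must review what the tool flags.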
Natasha: Send questions.
Natasha: More detail needed. Should be addressed.
Judy: Want to note location.
Shawn: 2.1 Limit the discussion in main part of document, refer to section on this issue in specific contexts.
Natasha: Relevant level checkpoint, what does that mean? Give them option to do that in semi-automatic and automatic fashion. Uncertain what we mean by relevant.
Chuck: Only check ones that apply to your site.
Helle: Also conformance level.
Natasha: Need to clarify.
Andrew: Don't emphasize conformance level.
Judy: Use checkpoints that are applicable to your site.
Natasha: Would like clarification of "manual".
Andrew: Things you cannot do automatically.
Judy: Next section, usability evaluation.
Chuck: Many of the manual checklist checks can be done automatically.
Judy: Can you really do things on checklist automatically?
Doyle: If you're new, you may not recognize that these tools cannot always do things properly.
Natasha: Run tool, when it flags things you need to check things manually.
Helle: Sometimes you need to do things manually that are not flagged.
Judy: We have noted in this document that things might not make most sense to carry out in the order specified. Let's not get bogged down talking about the order. Not hearing clear direction about what to do with this section.
Natasha: No tool can provide 100% guarantee. Will send suggestions for clarifying automatic and manual checks.
Helle: Translating the document, ran into trouble with 2.3. Doesn't make sense. Open page in HPR, close eyes and listen; open eyes and look at the page. Is the information equivalent? First, one would never close their eyes. Trying to compare incomparable things: text browsers and graphical browsers.
Judy: To do the comparison, you need things running synchronously. Also, information might not come in the same way. It's easy for people who aren't familiar, in looking and listening, to fill in things that are missing from the audio output. Have seen this technique used, have had it used; the brain fills things in.
Chuck: Good suggestion for people who are not accustomed to this.
Chuck: Need to separate text browser and voice browser.
Judy: Doyle, Natasha, Andrew, Shawn will look at Selecting Software.
Judy: Developing Org Policies, adding in some changes, needs to be looked at. Started updating deliverables. Matt May will come on and brainstorm about WAI page designs.
Judy: Editors stay on, g'bye everyone else.
Last revised June 26, 2002 by Judy Brewer