Monday Summary | Tuesday Summary | Subgroup summaries

Minutes from 15/16 July WCAG WG F2F
Linz, Austria

Present

Monday Summary

Introductions

We introduced ourselves to each other. Each person gave a brief statement about which organization they represent and what they hoped to accomplish at this meeting.

Agenda setting

First we discussed the issue that WAC brought up on-line last week. From this discussion we created a list of issues and work items to address during our remaining day and a half of meeting time.

Discussion with John Gardner about SVG and MathML

After the first break, John Gardner joined us to help educate us about his work in MathML and SVG. We discussed two questions:

  1. Are there structural elements in MathML or SVG that could be divided into "essential for accessibility" and "less essential for accessibility"?
  2. Are there any of the current checkpoints or success criteria that we might have difficulty applying to SVG or MathML?

He gave us a demo of his talking SVG browser. We looked at a map of the United States and a picture of a heart. He described looking at these images using a tactile map as well as with an interactive tablet (??).
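
For illustration, a minimal sketch of the kind of structure the first question is about: SVG's g, title and desc elements let an author group and label parts of a drawing, which is the sort of structure a talking browser can expose. The particular map fragment below is hypothetical, invented only to show the idea.

  <svg xmlns="http://www.w3.org/2000/svg">
    <title>Map of the United States</title>
    <desc>State outlines, shaded by population density.</desc>
    <g id="west">
      <title>Western states</title>
      <g id="oregon">
        <title>Oregon</title>
        <!-- path data for the state outline would go here -->
      </g>
    </g>
  </svg>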

Summary of issues and ideas:

Topic groups

After lunch we broke into the following groups:

We worked in these groups throughout the afternoon.

Tuesday Summary

There was still work to do in the small groups, so we stayed in those for the first session.

Once back from break we broke into these groups:

After lunch, we worked for another hour in these groups, then each group reported for about 10 minutes on what they had done and discovered, as well as what still needed to be done.

The checklist group went last so that we could discuss the mock-up they created and the issues that it raised.

Finally, we went around the room and each said what we hoped to do next (without committing to anything, since our time is contingent on what our bosses will allow).

WAC described thoughts and issues with the next steps and future:

Subgroup summaries

Scripting

Cynthia, Wendy

HTML techniques

Andi, John, and Zachariah

HTML techniques follow-up

Andi

Going through checkpoints for format-specific issues

Zachariah and John

Went through the success criteria looking at implications for non-HTML technologies. Got through 3.5; still have more to do. The notes are still too general to be useful, since we're both most familiar with HTML. We ran into (as a thread throughout) questions about how to represent the structure of non-text objects. SVG has a way to represent complex graphical objects; does SMIL allow identification of structural pieces of a video stream, a way to identify frames or scenes? That would allow for keyboard navigation of a video document, navigating scene to scene. Similar issues come up in 3.1 and 3.2: the current draft runs into a wall with non-technical documents that don't use structural elements (e.g. poems, letters, etc.). The Text Encoding Initiative looked at that, developing ways of encoding complex literary documents. John takes an action to go through it in more detail. We also talked about voice input and how to provide for consistency and variability.
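
A sketch of what scene-level structure might look like in SMIL follows. The seq and video elements, the clipBegin/clipEnd attributes, and the title attribute are real SMIL 2.0 features, but the scene breakdown itself is hypothetical, and whether a player could use it for keyboard navigation of scenes was exactly the open question.

  <seq title="Documentary">
    <video src="film.mpg" clipBegin="0s"   clipEnd="90s"  title="Scene 1: Introduction"/>
    <video src="film.mpg" clipBegin="90s"  clipEnd="240s" title="Scene 2: The experiment"/>
    <video src="film.mpg" clipBegin="240s" clipEnd="300s" title="Scene 3: Conclusions"/>
  </seq>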

@@ will finish and send notes to list

Checkpoints 4.1 and 4.2

Discussion about this report

CS What about content? Don't try to rewrite James Joyce. Some people like the New York Times instead of USA Today. Don't put something in that could be interpreted to mean that all content must be written at a particular level.

AA Right. LS made the point that some sites may assume that the audience is college educated. However, even within a community, there is a continuum of knowledge. To write a physics paper simply...what does that mean? To write it in plain language, you need to make it as understandable as possible.

JS We need a guideline that I can point novelists or artists who are interested in working on the web to, one that they would not recoil from. "Use plain language" is one that writers would recoil from.

AA Excellent point. This is still a work in progress. Perhaps qualify it...

ASW Where the writing is not art.

CS Is Salon art or commerce or... Portals go across this.

AA Wherever beneficial.

JS Rhetoric - the principle of effective communication. In the West there is a 2500-year history of this that we're walking up to the edge of. There was a 16th-century debate about favoring plain versus ornate style; to us, they both look ornate. What was plain at the time is not plain to us today.

AA If the purpose of a web site is to be informative...the guidelines should include this. Ideally for all web sites, not just for accessibility. It almost needs to be phrased in that way. You put the onus on the writer to ask themselves "what is my point?" If it is informative, it should be written in plain language. There are a lot of sites that try to be but are not.

CS There are many that go too far; they have taken out all of the content.

AA There must be some way to say this, to underscore to everyone to think in these terms. It is the trend in governments and in industry (insurance, financial services, etc.) to simplify what they are writing.

CS It's a market segmentation issue. If you want to target people who think they are "smart" and like big words, then you should do that.

AA They know their target audience.

CS That should not mean "accessible" in the same way that "it should work with my screen reader" means accessible.

JS It's a design philosophy, the importance of user-centered design. Think about user-centered content production. The goal is to write content in a way that, in your best judgement and good faith, meets the needs of the audience.

CS Personal preference.

Checklist

Paul, Ben, Christian (day #2), Bengt (until went off to 4.1/4.2 discussion), Avi (until went off to 4.1/4.2 discussion)

PB The idea seemed really simple at first. Then we realized that you don't want to pass a technique, you want to pass a success criterion. Not everyone is using the same technologies or combination of technologies; thus there are multiple ways of displaying the checklist, and thus multiple checklists, e.g. HTML, HTML+CSS, HTML+CSS+scripting, etc. In theory you end up with hundreds of variations depending on what you are using. Technology is not the only variable; levels, also: "everything for minimum level that uses PDF, HTML, and CSS and that specifically has special characteristics to benefit people who are deaf." These all seem doable and good. We think we can generate them, perhaps with a form that submits and returns the exact checklist. That is a subset of the problem, so we spent time trying to figure out the main document and simplified a preliminary prototype. It was hard to create a linear version that made sense.
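
A rough sketch of what such a form might look like is below. The field names, values, and the generate-checklist action are hypothetical, invented only to illustrate submitting a technology/level combination and getting back the matching checklist.

  <form action="generate-checklist" method="get">
    <fieldset>
      <legend>Technologies used</legend>
      <input type="checkbox" name="tech" value="html" id="t-html"/> <label for="t-html">HTML</label>
      <input type="checkbox" name="tech" value="css" id="t-css"/> <label for="t-css">CSS</label>
      <input type="checkbox" name="tech" value="script" id="t-script"/> <label for="t-script">Scripting</label>
      <input type="checkbox" name="tech" value="pdf" id="t-pdf"/> <label for="t-pdf">PDF</label>
    </fieldset>
    <label for="level">Conformance level</label>
    <select name="level" id="level">
      <option value="min">Minimum</option>
      <option value="2">Level 2</option>
      <option value="3">Level 3</option>
    </select>
    <input type="submit" value="Generate checklist"/>
  </form>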

/* display a couple of mock-ups for the group */

  1. Checklist Example 1
  2. Checklist Example 2

BC We were trying to figure out what the main content of the page should be, while keeping in mind the variables for sorting, and how to keep it usable and not 100 miles long. By putting in each success criterion (80 of them), this will be long.

PB describes formatting to make it visually easier to read.

CB We have applied/other/not applied for each technique, and also pass/fail/n/a for the overall success criterion.

BC We started with the guideline, went to the checkpoint, and got that into a grouping. The success criteria are listed, then the techniques that are appropriate; those will change for each success criterion. The SC are stated in such a way that we're not sure how the techniques say anything different.

CS For most of the technical ones, there should be code examples.

BC The subjective ones get fuzzier.

JS Examples, nonetheless.

BC For the techniques, we're referring to a 1 or 2 sentence rule.

CS That's another issue.

BC The current terminology is "rule"; that would refer off to the appropriate section in the techniques doc. Applied/other/not applied is for the developer to annotate or evaluate what they've done. "Other" covers what happens when the techniques are not up to date: if you don't find a technique that is in there, but think you have something that works, you can describe what you did. The next step would be to fill out that area if you are motivated to submit the technique.
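
As a hedged illustration of the structure being described (one success criterion, its techniques, applied/other/not applied per technique, pass/fail/n/a overall), the fragment below is invented; the criterion wording and numbering are placeholders, not text from the guidelines.

  <h4>Success criterion 1.1.1 (minimum): text equivalents [pass | fail | n/a]</h4>
  <table summary="HTML techniques for success criterion 1.1.1">
    <tr>
      <th scope="col">Technique (rule)</th>
      <th scope="col">Applied</th> <th scope="col">Other</th> <th scope="col">Not applied</th>
    </tr>
    <tr>
      <td><a href="html-techniques.html">Provide alt text for img elements</a></td>
      <td>x</td> <td></td> <td></td>
    </tr>
    <tr>
      <td>Other technique used (describe):</td>
      <td colspan="3"><textarea name="other-technique" rows="2" cols="40"></textarea></td>
    </tr>
  </table>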

CS Is this a software tool, something you submit, something you print?

BC We're not treating it as a tool; it's not much different than the existing one. Currently, when I use it, I save it to my disk and edit it as a document. Also, for "technology X techniques" where there are no techniques for that technology, we refer to guideline 5, but instead of applied/not applied/other it has "not possible?" There might be technologies that can't be addressed.

WAC Also, for other technologies that don't have techniques, it doesn't necessarily mean don't use them or that they can't be made accessible, e.g. SVG - we just don't have a techniques doc yet.

PB Perhaps make the yellow box go all the way around, to associate pass/fail/n/a with the success criterion.

BC It has "Notes" instead of a check; it could also include date info.

CS While in the process of doing the work.

BC The other model: the success criterion with pass/fail/n/a right below it, then the list of all techniques for the technologies. We will need a way to put in legal numbering, e.g. success criterion 1.1.1, so we can refer directly to it.

CB At level 2 - don't say all of the criteria. All checkpoints together.

BC You may look only at the minimum level, but if you go beyond the minimum level, instead of all minimum-level criteria first and then level 2, you get all the minimum-level and level 2 criteria for one checkpoint. It gets really long and it's easy to scroll away from the checkpoint, so you could either jump back up to it or refer to it in place.

PB That's the concept behind the box model, although it doesn't help if you can't see.

JS Is there a way to use the criterion number or a short form of the checkpoint as a header?

PB In the numbering system, if you get to success criterion 1.2.3.1, they all refer to something up above. That works if you can remember that far back, but a lot of people can't.

JS JAWS has a keystroke that says "what are the headers associated with this cell?" Then if one of the headers is the criterion or some short phrase, that could be part of the header list.

BC We struggled with making a navigable table. Perhaps create a layered version that is a transformation. We're trying to keep it simple and wouldn't rely so much on tables.

CS For visual rendering, in the box model have the yellow come in a little. Then the techniques would scroll and the checkpoint would stay there.

PB The box model does not easily get at headings; currently it's tables inside tables.

WAC This is a case study for accessible tables.
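
A minimal sketch of the kind of header association JS's JAWS example relies on, using the standard HTML 4.01 id/headers mechanism; the cell contents are hypothetical.

  <table summary="Checklist: each technique cell points back to its success criterion">
    <tr>
      <th id="sc1">SC 1.1 Provide text equivalents</th>
      <th id="status">Status</th>
    </tr>
    <tr>
      <td headers="sc1">Technique: alt attribute on img</td>
      <td headers="sc1 status">applied</td>
    </tr>
  </table>

With headers pointing at the th ids, a screen reader's "read headers for this cell" command can announce the criterion along with the cell, even when the criterion has scrolled out of view.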

BC Perhaps don't show the techniques, but link to a page that has the techniques relevant to that piece.

CS Have both. Printing is an issue.

AA I'm trying to picture a way to streamline the readability. If I first see "minimum" I might be likely to stop there. Have it default to level 2, then toggle back to other levels. Like the German government, combine levels 1 and 2. In the context help you see...I don't have much more to do to get to level 3.

BC It's not clear, if someone only looks at the minimum level, what we can do to encourage them to go further.

CB The decision to go to the next level is not made here. The other levels must be convincing.

BF For testing purposes, print out the success criteria as a test sheet.

CS Isn't that what the rule is? E.g. text equivalents for all images.

BF The tester is not interested in the technique; they only need to know if the text is there.

CB You will check the success criteria and say pass/fail. If not a machine, then a human.

CS The success criteria are sometimes where that happens. Some are rules, some are techniques. But there are technology-specific rules that a tester needs.

BF The tester prints out the test rules, the designer looks at the design sheet - two people.

CS It depends on the company.

PB The basic point is good: make different views for different users.

CS The place to start with that is the success criteria.

CS I want to see the interface for choosing views. It's a great start.

BC We got tangled up in that. We thought about all the different views, but came back to what the document needs to hold. Trying to figure out the best view is a can of worms.

CS Is this a view?

WAC A good next step is to fill in the rules for each of the technologies, like we started doing today?

PB Length is an issue.

JS

/* UI discussion, missed since Matt came in to ask about testing */

WAC Machine testing or human testing?

MM Add the context of what is machine-testable.

CS If we publish an algorithm to bring up a dialog, it's a test we can provide.

MM An ATAG checkpoint is to inform the user of accessibility issues, and there is the accuracy of that claim: if a tool provides advice that is incorrect, can they claim conformance to ATAG? If they are developing a suite like this, and it shows that something doesn't work, then they can go back and fix it.

PB From the perspective of our checklist: machine-testable, human-testable...

CS Everything is human testable.

WAC Is this pass/fail per page? Per site? Per element? Also, "human testable" - not necessarily, if there are millions of pages and you need to check the alt-text for every image.

CB Are the techniques normative? Not all are machine-testable. Other techniques could be fulfilling the success criteria.

CS That's why the success criteria are supposed to be human testable. There was a lot of discussion about what was normative. Techniques for a technology will change over time if someone comes up with something new...always human testable, not always machine testable.
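
To make the machine/human distinction concrete (a hedged illustration, not text from the guidelines): a tool can detect mechanically that the first image below has no alt attribute, but only a human can judge whether the second image's alt text is actually an equivalent.

  <!-- machine-detectable failure: the alt attribute is missing -->
  <img src="sales-chart.gif">

  <!-- machine-detectable pass, but human judgement is still needed:
       is "chart" an adequate text equivalent for this image? -->
  <img src="sales-chart.gif" alt="chart">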

WAC For a tester, then, perhaps just a view that indicates the tests.

CS Link to a tool, or an algorithm, or steps to perform.

PB Next steps: generating a form to create views is one next step.


$Date: 2002/08/31 00:52:02 $ Wendy Chisholm