Action IJ: Propose wording to the list.
Action IJ: Add to techniques.
Action IJ: Add to document.
Resolved: Add to checkpoint 2.5 that equivalent alternatives need to be recognized.
Action IJ: Propose change to the list.
RS: We talked about different versions of the Guidelines in the future.
HB: We want new technologies to come on board with where we are.
RS: We don't have a crystal ball to predict all the devices that are coming out.
JG: We may want to attend conferences where new devices are presented and discuss participation in WAI by those developers.
JG: One reason we did not expand the scope of the guidelines was that some AT developers were not thrilled with the requirement for interoperability/communication.
EH: Definitions seem tidy in the PR version of the guidelines, so we don't need to say much more about which devices are or are not "covered". What more do we need to do?
RS: Why don't these guidelines cover more? E.g., could have a palm device with the DOM.
CMN: I gave a talk in Japan. The mobile guys didn't understand why they should be concerned. It's still valuable to point people to these guidelines, even if they don't apply entirely to your device.
RS: There is a need for more detailed guidelines for other devices. I think we should say that these guidelines were designed primarily to address desktop and other "heavy-weight" user agents and that we'll try to address pervasive user agents in future work.
MQ: We don't want to discourage readers. Some of the guidelines apply anyway.
EH: I think that we ought to use the terms already defined and say that not every checkpoint is applicable to every user agent. And leave it at that. This means that some specialized user agents may have only 6 applicable checkpoints, but they're still a user agent and should satisfy them.
MQ: It's a problem that applications don't know about each other's hot keys.
CMN: We can't solve that, but we can say don't make it worse.
RS: You can't get at image map information for server-side image maps since it's on the server.
CMN: You could get at the information on the server by flooding the server. If we don't say "client-side" then we are requiring flooding.
MQ: We don't support server-side image maps. I don't quite understand the reason.
RS: Server-side image maps are based on a pointing device.
CMN: The other alternative is to expose the image map and ask users to name points. And you've met the requirement.
Resolved: Leave it as is.
Resolved: Say that the button should have a text equivalent (adopt the proposal).
CMN: It's just an example.
Resolved: Add "For example" at the beginning of the sentence.
CMN: Yes, this is an alternative equivalent.
MQ: I think we're covered.
HR: A location equivalent would mean something like "these controls are close together and that means that they're related". The sounds are not equivalent alternatives if they don't provide that information. 3-D sound does exist...
IJ: This grouping should be addressed by authors with markup (e.g., FIELDSET, OPTGROUP). We are covered by structured navigation.
CMN: My interpretation of the question was that it was about "where am I right now?"
RS: ATs provide "where am I" functionality.
Resolved: No change.
EH: I think that the expression "respect sync cues" is nice, more general. Sometimes tighter or looser synchronization is appropriate.
KB: There may be times when people want to view them at different rates, and this doesn't exclude that.
HR: Not all media can be slowed down to the same rate.
RS: Recall that if the OS feature is used, it must be accessible.
CMN: The time when you need controls is not the gross volume, it's the mixing volume. You'll need the UA to provide easy access to that (which typically will be by punting to the OS).
Action IJ: Add note to Techniques.
IJ: "Changes" is generic. Not clear what is required.
RS: I don't think we intended to allow the user to configure keyboard focus changes.
GR: There's the issue of how ATs pick up focus on new windows.
Action IJ: Tailor this wording and propose to list.
HB: Include notification that another viewport has opened.
JG: The two main things have been focus changes and programmatic notification (covered elsewhere). Also, that user configurations are inherited by new viewport instances.
JA: I think that the minimum is what 5.7 says (notification).
CMN: I think that the minimum requirement relates to "configuration": the minimum requirement is to configure those things specified by the document. Perhaps the ability to turn off is also part of the minimum requirement. Point people to the definition of configure.
EH: We can add examples to the checkpoint. If you're relying on the definition of "configure" and it has circularity with the checkpoints, that needs to be corrected.
EH: I don't think that our clarification means we need to state a minimal requirement, but clarification is necessary.
The WG feels that this checkpoint includes:
MN: These are listed in the techniques document.
RS: We shouldn't have dialogs prior to opening new viewports.
IJ: Recall that we used to allow turn on/off, but the SYMM WG said this didn't work for SMIL presentations. Thus, configurability.
GR: I think that notification is key. Duplicate views should respect focus position, otherwise might be disorienting.
IJ: I don't believe that we have a requirement for notification through the UI when the focus changes viewports.
RS: User agents today don't have a mechanism for programmatic notification of change other than a focus change. DOM 3 has a notion of views and we should address this in DOM 3.
Action RS: Take this to PF as a DOM 3 requirement.
IJ Proposed: The requirement is that the user be informed (accessibly) when a viewport is created or destroyed (that has not been created or destroyed on request from the user). The same requirement should apply for the focus.
RS: Both of these requirements are covered inherently in the user interface design. People with ATs get the information programmatically, which we cover elsewhere (5.7).
MN: IE 4 notifies you programmatically when a new viewport has been created.
KB: There are situations where the user may actually request something but they don't realize it. The user should be able to query how many viewports are open.
Action GR: Send to list screen shot of JFW Window list.
EH: We may need a definition of window...
HB: There's a class of things we don't expect to cover (e.g., MS blue screens).
RS: I think we are covered except those things that we have no control over. Do we want to limit to application-generated events and not all system-generated events?
KB: If a Web page says "Following this link will open a new window", is that considered an explicit user request?
CMN: I would have thought that was an explicit request, but that's hard to find for the user agent. There can be something in markup saying a new window will open. The UA should provide this type of information.
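[Scribe note: CMN's point - that a user agent can recognize new-window markup and surface it - could look roughly like the following sketch. The class name and the restriction to HTML's target="_blank" are illustrative assumptions, not anything the guidelines specify.]

```python
from html.parser import HTMLParser

class NewWindowLinkDetector(HTMLParser):
    """Collect links whose markup indicates they will open a new viewport
    (in HTML, a target attribute of "_blank"). A user agent could use
    such a pass to inform the user before the link is followed."""

    def __init__(self):
        super().__init__()
        self.new_window_links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attr_map = dict(attrs)
            if attr_map.get("target") == "_blank":
                self.new_window_links.append(attr_map.get("href"))

detector = NewWindowLinkDetector()
detector.feed('<a href="a.html" target="_blank">new window</a>'
              '<a href="b.html">same window</a>')
# detector.new_window_links now holds only the link that opens a new window.
```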
Proposed: Delete 4.16.
Is inheritance of configuration in new windows a requirement?
Proposed: Add "inherit configurations in new viewports" to definition of "configure".
JG: In G8, we have some checkpoints about links. We might want to require that the user agent inform the user that following a link may open a new window (recognized in markup).
IJ: Note that UI notification of changes to prompts/windows not covered in the guidelines. However, like changes to viewports, there is notification programmatically and the assumption that users of the primary interface will know.
RS: I think that it's implied that new viewports inherit features.
/* The WG will chew this over */
Resolved: This is covered by checkpoint 9.3
Resolved: This is editorial.
Action IJ: Clarify definitions of content, user interface (possibly chrome), etc. Refer also to issue 207.
Resolved: Don't add "where available" since we have applicability that applies globally.
Resolved: No change. There's already a cross-reference to checkpoint 5.5, which talks about standard API.
CMN: There are a "bizillion" examples of accessibility settings in earlier checkpoints.
Action IJ: Add a couple of examples (sticky keys, mouse keys, show sounds).
/* Lunch 12:30 ET */
Does "default keyboard configuration" mean looking at style-guide-recommended key sequences for accelerators, QWERTY vs. Dvorak vs. ?, both, or more? It should be clearer.
CMN: The answer is surely dependent on the system. If your system doesn't care what keyboard you have, one set of guidelines. If there's a keypad, another.
RS: Default keyboard is some combination of what is specified by the OS user interface and what the application specifies as its default keyboard interface.
Resolved: Delete "default" from "default keyboard configuration". The details of which keyboards are supported, etc. depend on the system.
IJ: The proposal seems to suggest another checkpoint requiring the use of accessible specifications.
Action IJ: Add to techniques document Java (and point to Java accessibility). SAMI?
IJ: Two issues:
JG: Point the reviewer back to the discussion about the multitude of navigation checkpoints and how they were reduced: different display-control functionalities have their own place; we included in the guidelines themselves those that crossed boundaries.
IJ: Refer to UA Responsibilities document for rationale.
EH: We can't predict every type of AT. We have extensive treatment of applicability. We also have the impact matrix.
EH: I don't want to change the scope of the document.
GR: I think one of the main points of the comment is that it should be highlighted to AT developers what's expected of a "mainstream" UA. It's bidirectional.
GR: One of the biggest advantages for AT developers is the use of standard interfaces (the DOM). A single navigation mechanism may be used with different independent UAs.
RS: One complaint about HPR was that it didn't support Windows navigation mechanisms. We will fix this; the market demands it and I don't know whether we need to require ATs to support them. Just because an AT implements these guidelines doesn't mean it's a general-purpose user agent.
Proposed: Checkpoint 7.6: Change "structure" to "document object".
RS: If the UA provides programmatic access to the DOM, does this suffice?
CMN: No. You may require access through the UI. The minimum definition of structural navigation in ATAG is "element by element". In many cases, this will be painful, but it's clearly identifiable.
EH: Will switching to the term "document object" extend the scope? (Or narrow it?)
CMN: I don't think that the new term extends the scope. However, I don't think that "document object" by itself provides the necessary piece for a developer. (Ian notes that he has an action item to include a definition.) What needs to be addressed is what nav mechanisms are (minimally) required (e.g., up the tree, next sibling, back, etc.). There are markup languages that don't have an inherent tree structure (e.g., PostScript). What navigation is required for such markup languages? You do cover the structure of a language like PostScript by referring to "document object".
IJ: Note that we've already had a long discussion about the myriad useful navigation techniques and resolved to have a single (open) checkpoint since we could not come up with a minimal set.
JG: I'm concerned about saying what minimal set of structured navigation techniques should be used. It depends on the content, how it's rendered, etc.
CMN: I would be very concerned about not specifying a minimum conformance requirement for navigation mechanisms.
RS: You want to be able to navigate to all rendered content. You use the DOM to traverse it in a logical sequence.
HR: I think that structural navigation has a purpose: get the structure of the document without the details of the content. As long as it meets this goal, sufficient.
EH: "Allow the user to navigate according to structure (e.g., forward and backward through rendered elements)."
CMN: We're not talking about the W3C DOM per se; we're talking about a generic document model. I think HR is saying that this is a way of getting around the document (in addition to the linear reading).
IJ: Speed/efficiency is the other advantage. Note that outline view also gives you a vision of the structure.
GR: I like having open-endedness and configurability (chunk-by-chunk, then lower detail).
IJ: The minimal requirement is access to every piece of the document object.
RS: If you're navigating through the UI, it's only access to what's rendered in the UI.
EH: You have different classes of object within the object model. One piece of efficiency is the ability to navigate objects of the same class (e.g., headings).
EH: Note that point two misses the point of efficiency, which was the key to the checkpoint. This is the same as viewing the content serially.
JG: Add a note that this checkpoint is designed to improve efficient access.
IJ: What about a minimal requirement of "more than sequential access" to the document object.
MN: "Document object" confuses me more than "structure". Also, there are objects within the document, etc.
IJ: Propose adding point 3: Because this checkpoint is meant to make access more efficient, user agents are expected to provide more than minimal access. Point to techniques.
RS: Should we add the term "iterator"?
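[Scribe note: the navigation moves under discussion - element-by-element traversal as the minimum, plus structural moves like "next sibling" - might be sketched as below. The Element class and function names are hypothetical; they stand in for whatever document object the definitions end up specifying.]

```python
class Element:
    """A node in a generic document tree (not the W3C DOM specifically)."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.parent = None
        for child in self.children:
            child.parent = self

def depth_first(node):
    """Sequential, element-by-element traversal: the minimal access."""
    yield node
    for child in node.children:
        yield from depth_first(child)

def next_sibling(node):
    """One structural move beyond sequential access."""
    if node.parent is None:
        return None
    siblings = node.parent.children
    i = siblings.index(node)
    return siblings[i + 1] if i + 1 < len(siblings) else None

doc = Element("html", [Element("h1"), Element("p"), Element("p")])
order = [n.name for n in depth_first(doc)]  # visits every element once
```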
HR: I am in favor of both improved efficiency and local/global inspection of the structure.
MN: I agree with HB - elements and attributes.
JA: I agree with points 1, 2, and 3 together.
Resolved (pending a proposal from Ian on definitions of content, document object, user interface, element, etc.).
RS: Ensure that the user agent doesn't supply a different object model than the DOM for XML/HTML content.
Resolved: Editorial. Move to G9 (Checkpoint 9.4).
Resolved: Editorial. Add a cross-ref to G5.
CMN: I suggest we remove the word "mobility" from the note.
MN: We usually talk about "built-in" accessibility features.
CMN: How about "default"?
Resolved: Editorial. Delete "mobility". Maybe add a required term to glossary.
CMN: This is "applicability of available keys".
Resolved: Add a note that in some modes (e.g., text input mode), this is not required due to the nature of the mode.
Proposed: For example, on some operating systems, when developers specify which command sequences will activate which functionalities, standard user interface components display those bindings to the user. For example, if a functionality is available from a menu, the letter of the activating key is underlined in the menu.
Resolved: Editorial. Adopt some form of above proposal.
Resolved: Editorial. Don't feel it's necessary to add.
IJ: Relates to issue 207. Does a structured view suffice for some types of content? Previous discussions about 207 suggest that the WG feels that a source view does not suffice for content that may be rendered through the UI (it's too hard to navigate the entire structure to get at the "title" attribute).
Resolved: This is resolved according to the outcome of issue 207.
Resolved: No, DOM access is not sufficient. All checkpoints meant to be satisfied natively through the UI unless explicitly stated otherwise. Refer also to issue 233.
(Note to self: ensure to say that all checkpoints imply through the UI unless it's stated explicitly that it's programmatic or both programmatic/ui.)
CMN: IE ? has an option that prompts you whether you want to submit.
JG: This is done for reasons of security as well.
EH: (refer also to issue discussed previously).
RS: How hard will this be to implement?
CMN: Easy: UA knows when it's about to send a POST request.
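[Scribe note: what CMN describes - prompt before a POST, with the prompt configurable off - could be as simple as the following sketch. The function name and parameters are invented for illustration.]

```python
def should_send(method, confirm, prompting_enabled=True):
    """Decide whether a form submission should proceed.

    method: the HTTP method the user agent is about to use.
    confirm: a callable that asks the user and returns True/False.
    prompting_enabled: the user-configurable switch CMN mentions.
    """
    if method.upper() != "POST" or not prompting_enabled:
        return True
    return confirm("This will submit the form. Continue?")

# With prompting on, the user's answer decides; with it off, POSTs go through.
decision = should_send("POST", confirm=lambda msg: False)
```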
MQ: I'm not sure that having to answer a "don't post yet" prompt every time that you select a new item is a good idea.
CMN: You are not forced every time - you can turn it off.
GR: I proposed a two part solution: submit mechanism was one part, another was scripted stuff. The first part was for inadvertent form submission. The second was to disable behavior like selecting a menu item triggers the form.
IJ: MQ's concern is addressed by the ability to turn off scripts (although it may be burdensome to have to do this repeatedly for a given form).
RS: When do you know to turn off scripts (how does the user know that there are scripts bound to a select item)?
Resolved: Leave P2 since it affects users who may be disoriented (blindness, CD).
IJ: I propose addressing MQ's concern more in the techniques for 9.2 (e.g., for long lists, don't prompt 100 times).
RS: This is more of a usability than an accessibility issue.
MQ: In WebSpeak, if there is no explicit submit button, we create one.
Action IJ: Add this to the techniques document...
GR: For the "50 states in a list" box example, this form is embedded in a larger form.
RS: I have the same problem as users with disabilities.
GR: But you know the change has taken place visually. For me, it's much more difficult and disorienting to get back to the previous state. I think that this is P3 for usability, P2 for accessibility.
CMN: This is a "curb cut" type checkpoint. It's helpful for many people (P3), but very important (P2) for some users.
GR: Refer to my (archived) problem statement that explains all of the accessibility problems associated with this situation.
RS: What if the AT says "A new page has been loaded."
MQ: Users don't know that they need to turn off scripts to achieve this goal.
GR: Also, by turning off scripts, they may lose other capabilities.
MQ: I think this needs to be a P1, not a P2. (Ready to register a minority objection to it being P2).
HR: In the example we are using (menu items), do you have to use the mouse?
JA, MN: Leave a P2.
GR: I think in Austin, we also talked about notification, and that helped out (less than P1).
IJ: Without new information, I will hesitate to allow proposed changes at this time. The WG already agreed to make this a P2 as of Last Call; it should be harder to make changes at this point.
CMN: I propose that we put an action item on anyone who feels this has to be changed to register a minority objection (that will be presented to the Director).
HR: Outspoken (screen-reader) handles this case gracefully.
Resolution: Leave as Priority 2.
Action: Anyone who objects can register their objection on the list.
RS: Unless there's a reference implementation, I don't think this should be a P1 requirement.
CMN: I don't find that argument convincing. People might not do it because it's hard, but that's not sufficient.
MQ: "LP Player" lets you speed up and slow down audio.
IJ: Priority levels are based on user need, not implementability.
RS: I don't want to make the guidelines so strict that it's not possible to reach P1. I think we lose our credibility if it's too hard to do.
IJ: Applicability kicks in here: if not possible by spec, you aren't required to do it.
GR: About raising the bar - the reason we are here is not to raise the bar but to put the bar where it belongs (since it may have been knocked down in some implementations). I understand the concerns of developers, but it's not just about developers - it's for users, too. We need to work with them.
CMN: There's an unresolvable tension between losing "credibility" with developers and losing credibility with the users who are meant to benefit. The priority scheme is based entirely on user need (and this is a fairly important feature). All three guidelines groups have tried other systems, but in each case, it's been a complete minefield.
IJ: Would users not have access if they couldn't slow down the presentation?
MQ: Partially deaf people can have access to audio by slowing it down.
MQ: Have you looked at "Sound Forge"? It plays back MP3 (and authors it) and allows you to change the presentation playback rate (you change time base and pitch).
JG: Why is slowing video important?
IJ: Physical disabilities may require slowing down.
CMN: One possibility is that being able to step through frames is a sufficient slowing of video. You can't step through slowing of audio, however.
IJ: If we split into separate requirements (video, audio, animations) does this get easier? Does it get easier for us to resolve if we talk about synchronized multimedia separately?
CMN: It is difficult to change time base, change pitch, keep it synched, etc.
EH: If we deconstruct the checkpoint, we should look at the usage of the words (audio, video, and animation) as well. Perhaps we should distinguish "audio presentation" from "multimedia presentation". An "audio presentation" is audio only (e.g., a radio broadcast). A multimedia presentation is either movies or animations.
CMN: Another question is "What is the need for slowing down presentations?"
CMN: For slowing pure audio presentation, that's fairly easy. For video only, also fairly easy. The problem is combining them as multimedia.
RS: Yes, I have a problem with the combination.
CMN: Synchronization is an accessibility issue because the audio information is required to make the presentation accessible.
GR: We should review EH's last call comments...Recall also that DA required slowing down by configuration (rather than dynamic button-based slowing) since otherwise some users with physical disabilities would not be able to slow down.
MQ: LP Player basically does the sync of audio and text.
CMN: Another nearly reference implementation is video-editing software.
RS: This is expensive...
CMN: Yes, but it can be done. And it's not tremendously expensive. What this software doesn't do as a rule is adjust audio when the rate changes.
MN: No one has said that this can't be done. It can be difficult.
IJ: Maybe this is the case: synchronization is required at "normal" rate (2.6). But the "slowing" requirement may not be the same priority if those who benefit from slowing are not an intersection of those who benefit from the individual pieces.
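[Scribe note: the "keep it synched while slowing" requirement reduces, for timestamped cues such as captions, to scaling every timestamp by the playback rate - the hard part the group notes (time-stretching audio without pitch shift) is separate. A minimal sketch, with an invented function name:]

```python
def rescale_timeline(events, rate):
    """Scale synchronized cue timestamps by a playback rate.

    events: list of (seconds, label) pairs in presentation time.
    rate:   1.0 is normal speed; 0.5 is half speed, so timestamps double.
    Relative order and alignment between tracks are preserved.
    """
    return [(t / rate, label) for (t, label) in events]

captions = [(0.0, "caption 1"), (2.0, "caption 2")]
slowed = rescale_timeline(captions, 0.5)  # half speed
```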
HR: I don't think that slowing these media should be a P1 since you can start, stop, pause, and rewind.
JG: It is a P1 for audio since you can't step through audio. You could step through video and get some information out.
CMN: Use case - cricket! You get a noise from the "stumps" and video from the "thingy". You need to synchronize the two to see the point at which they came together. You can get this information through step-through (since you don't care about the quality of the sound). But I imagine that the quality of sound is important in some cases.
GR: When I hear things like animation, I think of things like Macromedia Flash and Shockwave (and not just SMIL). When you have to respond to an on-screen video event and a sound, you need to be able to slow down.
RS: I don't think that the benefits of slowing down the presentation warrant it being a P1. And the cost is very high.
/* Discussion of slowing according to pre-determined increments, e.g., half-speed */
/* Madeleine Rothberg joins */
MR: I think that a lot of the slowing down issues (especially for animations), were intended for users with CD and that's not my expertise. The techniques for this say that this is for people with CD, new to a language, and newly acquired sensory disabilities. For auditory presentations, what if you understand it by listening several times? I'm not sure about the P1 level for audio presentations.
JG: I don't think that we've heard P1 for sight disabilities.
CMN: I have to speak (Australian) more slowly to be understood in the states than I would at home.
MQ: People that are partially deaf have to slow down material in order to understand.
CMN: Maybe we should assign actions to review this carefully?
HB: In the Daisy guidelines, they have a speech range for the player. It's something like half-speed to 2.5 times.
JA: Yes, I think it's down to 25% and up to double speed.
JG: Note that speeding up audio for users who are blind is not an accessibility issue but a usability issue.
JA: Depending on the type of vision, some students cannot use a presentation at full speed. They simply can't see it. If they can slow down video, they can get at the information.
JG: Is there a range that we can specify?
JA: No, the kids have very different requirements.
MR: It also depends on the initial rate of the animation. Many variables.
JA: I think that much beyond a 25% reduction, you start losing your audio anyway.
MR: The players I've seen that play video more slowly, keep sync of captions, but turn off audio. Windows Media player can be scripted to slow down caption presentation rate (it's not in the player itself).
/* George Kerscher pulled in from the hall */
GK: There are many implementations that let you slow down audio presentations (VisuAid, Victor, PlexTalk (by Plextor), LP Player by PW, and Labyrinten). These are standard tools on the market to do this.
Action MR: Talk to Geoff Freed about implementations that slow down multimedia presentations.
GK: Some people with learning disabilities need to slow down the audio in order to process the information. Synchronized with text.
MN: I'm concerned that we're going to establish priorities based on reference implementations. This is not what we're charged to do.
GK: In the SMIL WG, there's a requirement to have two implementations of any feature.
JG: Yes, but our priorities are not based on existing implementations (though implementations help us show how it's done); they're based on user needs.
Action JG: Write email to the list asking for information about which user groups require the ability to slow down presentations, otherwise access is impossible. (Get information from people with experience/research in this area.)
GK: I believe that in the SMIL specification, the notion of the wall clock is there. For people who benefit from a complex multimedia presentation, it's clear that slowing down is obviously needed by some users.
IJ: Please note that I believe we're only talking about explicitly synchronized multimedia presentations.
RS: Yes, that's fine. But you shouldn't be required to slow down to the same rate two pieces of content that have not been explicitly synchronized. Otherwise there would be no use for SMIL.
EH: Even though impact determines priority, it's my opinion that we should not include impossible or extremely costly checkpoints. I don't have a lot of expertise on how important these things are to users with disabilities. I'm withholding judgment now since I suppose that this checkpoint has had a lot of review at this level.
CMN: I don't think there are many cases when you have several pieces of content together but aren't synchronized explicitly.
Proposed: Clarify for this checkpoint that we do not require slowing down of pieces of content that haven't been synchronized in the format but are playing together.
RS: I don't think QuickTime counts as a synchronization format. Same for AVI.
EH: This is a very late stage in the process. I assume that this has had a lot of review. I'm inclined to go with P1, unless this is a total show-stopper.
RS: Once again, I don't want to make the barrier too high initially. We need to weigh the benefits against the costs.
EH: I hammered on WCAG for this - how do you define the reference groups? You can always find individuals who need a particular feature. I pushed WAI to identify target groups. I suggested (even though I knew it wouldn't be popular) that we say something like "a substantial majority would find it impossible, beneficial, etc.".
GR: Users with a disability who get to the table early and who are vocal tend to get their issues addressed. But users who haven't had access to information, and who haven't been able to speak up, are being overlooked.
JG: We will postpone this issue and not start with it first thing tomorrow. I suggest that we address it on Thursday.
/* 5:30 pm adjourned */