JG:
IJ: Flexibility is built-in. There is no authoritative body to grant exceptions. I think the WAI CG is looking at the issue of validating claims.
DA: The example refers to low-level APIs, but this is the wrong level. The issue is access to the information: "do I have the same functionality?"
DP: Also access to the UI. This is also related to the issue of OS features/native support (refer to the 7 December teleconf).
Resolved:
Action Ian: Draft a statement for point 2 and send to list.
Action JG: Resend the issue to the WAI CG.
Ian reviews the applicability provisions.
Resolved:
Action Ian: Refer to ATAG definition and propose to list.
Extend definition of "focus" to include GUI focus.
IJ: How does extended definition affect instances in the document?
JT: Seems like a Pandora's box to open this up at this time in the process.
KB: This falls in the category of following accessible design of UI.
RS: We do require keyboard access. If you have keyboard access, you have to have focus.
RS: Do we have a checkpoint that requires that, in a GUI, when a window becomes active, something takes the keyboard focus?
IJ: When I open my browser, there's no focus by default. The viewport has the current focus, but nothing in it is set as focused.
GR: That would depend. The first tab sometimes takes you into the address bar. The way Lynx works, the first focusable item has focus when the page is first rendered.
RS: Screen reader vendors look for focus changes to figure out what to speak. I don't care how you set the focus, but the viewport needs focus by default when it is made the current viewport.
GR: When events occur that cause a prompt to appear, since the spacebar is a toggle, you may perform some action that you didn't intend. It would help on Windows if "space" didn't mean the same thing as "enter" in a lot of dialog boxes.
RS: The problem is not having focus.
DA: What happens if you have a window with a lot of content but no item that can take focus (no links, etc.)? If you put the focus on the first focusable item, that will scroll the view down.
RS: I'm talking about UI focus.
GR: Then let's distinguish the UI focus from the content focus, rather than merge the definitions.
DP: The UA needs to use the OS focus appropriately.
RS: If you don't have focus and you need to figure out what it is, you get inconsistencies.
GR: The model I like is what Opera does: when you hit page down, you shift UI focus to the content control.
IJ: We don't currently deal with UI focus. This is something that belongs to the OS/Window manager.
RS: If you don't deal with UI focus, it will be virtually impossible to give content focus.
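A minimal sketch of the default-focus behavior RS and GR describe above, in TypeScript against a standard DOM (the selector list and the ensureViewportFocus helper are hypothetical illustrations, not anything the group resolved):

```typescript
// Hypothetical sketch: when a viewport becomes the current viewport,
// make sure something in it holds the keyboard focus so that assistive
// technologies tracking focus changes have a known starting point.
const FOCUSABLE = 'a[href], button, input, select, textarea, [tabindex]:not([tabindex="-1"])';

function ensureViewportFocus(viewport: HTMLElement): void {
  // If focus is already inside this viewport, leave it alone.
  if (viewport.contains(document.activeElement)) {
    return;
  }
  // Prefer the first focusable item; otherwise focus the viewport
  // container itself so UI focus is still well defined.
  const first = viewport.querySelector<HTMLElement>(FOCUSABLE);
  if (first) {
    first.focus();
  } else {
    viewport.tabIndex = -1;   // make the container programmatically focusable
    viewport.focus();
  }
}
```

Falling back to the container itself covers DA's case of a window with no focusable items, without forcing a scroll to the first link.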
Conclusion:
JT: You don't want the user to be able to change, e.g., the default behavior of the File/Open dialog.
Resolved:
Use of "impairment"/"disability"
DA: WHO proposed a model in which "disability" refers to the interaction between a person and a context (the person doesn't have a disability). The person has an impairment that may result in a functional limitation, or may not.
IJ:
GR: Is this something that should be handled at the CG level?
IJ: Don't know. Maybe editorial.
/* Joined by telephone */
GR: I recommend that we bring out Denis's point in the definitions rather than throughout the document; don't make a universal change in the document.
DA: The reason for the definition is that different organizations use the term to mean different things. Also, "handicap", "disability", etc. "Handicap" means a limitation that is imposed upon you as a member of a group, rather than as an individual.
GR: Bad etymology since the root means "functional limitation".
DA: They equate social limitation with handicap, not personal limitation.
DP: I think KB's concern is valid: normalizing the language based on potential legal ramifications would be ??. I think we should use "functional limitation".
IJ: For this draft, we need to emphasize accessibility.
KB: I'm wondering if we can use the impact matrix to emphasize functional limitations.
DA:
MQ: We need to state clearly what we don't do in the document up front.
Resolved:
Action Ian: Implement this.
Action DA: Send URI to WHO definition and NIH expansion to the list.
Action GR: Take this idea/definition to the ATAG and GL WGs.
Action JG: Send this to the CG.
/* 10:30 Break */
Introduction of new IBM participants who work on Mozilla accessibility.
IJ: I propose
Describe the simple help mechanism in the techniques document.
GR: I prefer the old title.
DB: I prefer the old title.
Resolved:
Discussed at the 7 December teleconference.
How much is the UA responsible for accessibility of OS features it uses?
JT: Keyboard has been referred to as an OS feature. It's up to the UA developer to make the UA accessible.
RS: Where the OS doesn't provide for keyboard access, the UA needs to provide it.
DP: We need to look at general UI issues, not just the keyboard.
DA: Let's back up and look at it from user's perspective. For the user, control is fundamental, whether control comes from the OS or the UA.
RS: If you write a custom control, you have control over content and the UI.
Resolved: Accept proposal from Ian re: native
Some checkpoints are special cases of others.
Resolved:
Checkpoint 1.5 (output device-independence) needs clarification.
JG: The speech API is different from audio APIs.
JA: There are OS features for system beeps. But "You've got mail" is initiated by AOL.
JG: Would you need to output text as Morse code beeps?
/* Scott Luebking joins the call */
MQ: Maybe to clarify, we should say that messages must be available in the user's preferred mode.
DP: My computer beeped. I don't know why. Maybe I didn't hear it. Whatever API you support (e.g., flash, beep), make it available in all APIs that you support.
RS (to DB): In the Windows Guidelines, do you point out which alerts are used to trigger the accessibility features? Do the Guidelines point out SoundSentry?
DB: Will look into this. But developers don't have to use them; this is a user setting.
RS: The problem is that these alerts are used to activate SoundSentry. In the case of AOL, etc., developers need to be able to know which alerts they can activate so that people who rely on SoundSentry are notified.
DB: So how would an AOL developer trigger SoundSentry in addition to the "You've got mail" message? SoundSentry renders the event visually.
RS: They need to know programmatically what events are available.
GR: There is a structure in Windows set up so that you can synchronize events in software through the Windows control panel.
IJ: I think making messages available as text is missing.
GR: Let's not link text to the status bar. There are some screen readers that request that you turn off the status bar.
Action DB: Find out how developers determine which triggers are appropriate to use in Windows.
DP: Some applications add sound groups, others don't.
MQ: Dialog box is one technique, status bar another.
GR: I think we need to emphasize text equivalent and in techniques say how it could be done in different environments.
IJ: Text output would fit in 5.6: notification of changes to controls.
DA: Not just that something happened, but what happened.
GR: Text equivalent required. At Redmond, we discussed how the text would be most effectively routed to the appropriate device. 5.1 should at least point to those techniques.
DB: Are we talking about a requirement of a text equivalent for a system sound?
DA: What is the information of this message? The information has to be available through supported output device APIs.
GR: I would stress the text equivalent aspect of the question.
SL: There's a problem of I18N of text equivalent.
JG: I think that we want to say that all messages are available in a device-independent manner.
DB: Note that text equivalents not always possible for messages.
Summary
RS: Custom controls that implement the API will also meet this requirement.
DP: We have the parallel issue: if you have a long text warning on the screen, is there a need to provide a non-text equivalent?
MQ: Two issues: scrollbars and messages.
DB: I have a concern with the scrollbar example. Need to ensure that messages are available to ATs (not just through APIs).
IJ: I think this is covered by 5.4
JG: We want something analogous to 1.1.
RS: 1.5 should state that the message should be available to either an AT or the user. ATs can either get info from text draw calls or through standard controls.
IJ: The spirit was that the UA should not be able to just make info available to AT but had to support output devices accessibly.
DP: We're talking about redundancy.
DB: There's a difference between providing and making possible.
IJ: A beep doesn't convey much information. A flash would suffice.
Rabi: From the developer's point of view: if you beep, you can put a flash on the screen. But how would a user who relies on flashes for other reasons distinguish them if a beep is always accompanied by a flash?
DA: It's important that the information in the message be available, whether as text or just an informational heads-up.
DP: I want to clear this up: I'm not saying that for a text message a UA has to display text; the UA needs to generate appropriate responses.
JG:
/* Lunch Break 12:20 - 13:05 */
/* Phil Jenkins joins. */
Goal: Provide access to messages:
Issues: Sometimes messages are available through controls. Some messages contain more information than others. Should the UA be allowed to just present the information to the AT?
DP: We're expecting that users are provided with the information.
IJ: The issue is how much responsibility does the UA have?
Action DP: Propose new 1.5
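A minimal sketch of the redundant notification MQ and DP describe above (the UAMessage and NotificationChannel names are hypothetical): whatever output modes the UA supports, the same message is offered through all of them, and a text equivalent is always present:

```typescript
// Hypothetical sketch: route one event through every output channel the
// UA supports, so the information (not just the beep) reaches the user
// in their preferred mode and remains available as text to ATs.
interface UAMessage {
  text: string;          // text equivalent, always required
  sound?: string;        // optional sound cue (e.g., a mail alert)
}

interface NotificationChannel {
  present(message: UAMessage): void;
}

class TextChannel implements NotificationChannel {
  present(m: UAMessage): void {
    console.log(`[status] ${m.text}`);        // e.g., status line or dialog
  }
}

class VisualFlashChannel implements NotificationChannel {
  present(_m: UAMessage): void {
    document.body.classList.add('flash');     // brief visual indication
    setTimeout(() => document.body.classList.remove('flash'), 250);
  }
}

function notify(message: UAMessage, channels: NotificationChannel[]): void {
  for (const channel of channels) {
    channel.present(message);                 // same information, every supported mode
  }
}

// Usage: notify({ text: "New mail has arrived." },
//               [new TextChannel(), new VisualFlashChannel()]);
```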
Proposed checkpoint: "Do not constrain the accessible output by constraints of the existing presentation"
RS: AT needs to provide access to the application. If the information outside the visible area is not provided, there's nothing they can do.
DA: If a UA implements the DOM, wouldn't you automatically have access to the text of the document, even if not in viewport? If so, the DOM requirement meets this requirement.
RS: If the platform has a DOM mechanism, do we require this over text drawing? (Refer to the later issue on the order of APIs.)
Resolved: No change since content available through the DOM.
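A minimal sketch of DA's point, assuming a standard DOM implementation (the collectText helper is purely illustrative): the document's text is reachable by walking the tree, independent of what is currently scrolled into the viewport:

```typescript
// Hypothetical sketch: collect all text in the document through the DOM,
// whether or not it is inside the visible area of the viewport.
function collectText(node: Node, out: string[] = []): string[] {
  if (node.nodeType === Node.TEXT_NODE && node.nodeValue) {
    out.push(node.nodeValue);
  }
  for (const child of Array.from(node.childNodes)) {
    collectText(child, out);
  }
  return out;
}

// Usage: the AT (or the UA on its behalf) gets the full text, not just
// what happens to be painted on screen.
// const allText = collectText(document.documentElement).join(' ');
```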
Proposed: Move note of 1.1 to the Techniques document.
Action Ian: Repropose simpler 1.1
Proposal from IJ:
Render content identified by the author as being in a natural language that is supported by the user agent, according to the conventions of the language.
DA: Why is this an accessibility issue?
GR: Screen reader issue.
DA: This suggests that the language of the checkpoint isn't exactly what we want. What needs to happen is that the language change information must be made accessible to the AT.
Resolved:
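A minimal sketch of DA's point about exposing language changes (assumes HTML's lang attribute as the author's markup; the helper names are hypothetical): the UA can report where the natural language changes so an AT can act on it even when the UA cannot render that language itself:

```typescript
// Hypothetical sketch: report the points in the document where the
// author-identified natural language changes.
interface LanguageChange {
  element: Element;
  language: string;     // language tag taken from the lang attribute
}

function findLanguageChanges(root: Element, inherited = ''): LanguageChange[] {
  const changes: LanguageChange[] = [];
  const lang = root.getAttribute('lang') ?? inherited;
  if (lang && lang !== inherited) {
    changes.push({ element: root, language: lang });
  }
  for (const child of Array.from(root.children)) {
    changes.push(...findLanguageChanges(child, lang));
  }
  return changes;
}

// Usage: findLanguageChanges(document.documentElement) lists each element
// that introduces a new language, which can then be exposed to an AT.
```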
Proposed deletion of 2.6 (more than one equivalent track) since it is a special case of 2.8 (more than one audio track).
RS: Put 2.6 in techniques.
DA: I agree that 2.6 is a special case.
/* Discussion of the desire for several alternatives simultaneously */
Resolved:
(Checkpoint 2.4)
Proposed: Allow the user to request the name of the object, as that in some cases may help identify its purpose.
JT: Yes, use available information.
DP: For example "FILE: foo.gif". This is useful.
Resolved:
Resolved. Refer to 140.
/* Wendy arrives */
Why is blinking and animation a priority 1 but auto-refreshing (which can have the same effects) only a priority 3?
KB: We've thought of this in the past to be an issue more for screen readers.
JG: But Greg's issue is that if the refresh is sufficiently fast, it may also be a flashing issue.
JT: I think the problems are independent. It's important not to confuse animated GIFs with refresh. It may be confusing (a cognitive issue) when pages refresh.
GR: I would rather see this checkpoint be about control of refresh timing. My screen reader may not have time to get to content at the bottom of the page.
RS: Some screen readers refresh the screen anyway to get to content.
Rabi: But the author has specified a rate of change.
WAC: Frames: no new URI.
DA: UA has access to timing specified by author but may override.
GR: Suggest techniques for auto-forward for auto-refresh as well. You want the new content! You just need to control the timing of getting it.
JT: I think the existing on/off checkpoint is reasonable at P3. However, page refresh timings may vary between different pages; I think it's too complex to configure for all of them.
RS: The problem is not so much the refresh rate as the user knowing when the content changes. Stock market quotes and GPS are examples of useful content updates. Do we want to solve this programmatically, or through the DOM?
DP: Configurability would be possible globally and that would suffice (timing of changes).
GR: Control of the rate is no more functionality than is required for control of page forwards. There are a lot of applications where I want refresh, not a static on/off choice.
DA: We can address both the screen reader issue and the epilepsy issue by having a minimal refresh rate that the UA would allow the user to set.
Proposed:
JT: I don't like the proposal. I think that on/off is useful. I also don't think the epilepsy issue is real since refresh cannot be faster than 1 second. For screen readers, it's not an issue since we have checkpoints elsewhere. You can also reload by hand whenever you want.
GR: If moved, it's also an alert/notification issue. How do I know to hit the reload button if I don't know the page is refreshing?
DP: I prefer "configuration" rather than "minimal refresh rate". Users want more.
KB: I get nervous when we change a lot of checkpoints to 'configure'. Most users don't configure.
IJ: I propose that, since the current checkpoint does not seem to raise issues related to epilepsy (the minimal refresh rate is 1 second), we leave the current one as is.
JA: If you look at techniques for 3.9, it covers what we've been discussing for 3.10.
Proposed:
MQ: But the user may miss content that changes during that interval (e.g., 10-minute delay).
WAC: In cases other than HTML or through scripts you might be able to refresh more than once per second.
/* You can turn off scripts */
GR: We're overlooking the alert mechanism for both 3.9 and 3.10.
GR Proposed:
Action Jim Allan: Propose a revised 3.9/3.10 to the list.
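A minimal sketch of the kind of user control discussed above (the RefreshPolicy configuration is hypothetical; it assumes the author's rate comes from something like a meta refresh value): the UA honors the author-specified interval only when it satisfies the user's settings, and alerts the user either way:

```typescript
// Hypothetical sketch: apply a user-configured policy to an
// author-specified auto-refresh interval.
interface RefreshPolicy {
  autoRefreshEnabled: boolean;        // global on/off
  minimumIntervalSeconds: number;     // user-set floor on the refresh rate
  notify: (message: string) => void;  // alert hook (see the 3.9/3.10 discussion)
}

function scheduleAuthorRefresh(authorSeconds: number, reload: () => void, policy: RefreshPolicy): void {
  if (!policy.autoRefreshEnabled) {
    policy.notify(`This page asks to refresh every ${authorSeconds}s; auto-refresh is off.`);
    return;
  }
  // Never refresh faster than the user allows.
  const interval = Math.max(authorSeconds, policy.minimumIntervalSeconds);
  policy.notify(`Page will refresh in ${interval}s.`);
  setTimeout(reload, interval * 1000);
}
```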
Proposed checkpoint to allow users to turn on/off multilingual support.
JG: I think this is a performance issue (that may change over time). Switching languages in a voice browser causes a performance hit.
JT: HPR doesn't switch inline.
RS: I think this is a usability issue that will be cleared up in the future.
Proposed: no change since this is an implementation issue.
Action Mickey: Write Mark for more clarification of the accessibility issue.
What UI is required for turning on/off features that may impede accessibility?
IJ: This is also a documentation issue.
GR: This is techniques for documentation (e.g., in the documentation, say how to control blinking, whether it be through a UI control specific to this or through control of scripting).
Resolved:
Proposed: Checkpoint to maintain relative font sizes?
WAC: I think this is technique-level.
Rabi: With bitmapped fonts, very difficult to maintain reasonable presentation.
DA: It's useful to maintain scaling if you are maintaining semantic information in font size.
JG: You lose the information when you turn off author-specified fonts.
Resolved: This is a technique.
Action JG: Write Al Gilman for techniques.
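A minimal sketch of the technique in question, assuming a CSS-capable renderer (the scaleFontSizes helper is hypothetical): apply one scale factor to the author's sizes so the relative, semantic size differences survive a user override of the base size:

```typescript
// Hypothetical sketch: preserve relative font sizes by applying one
// scale factor to the author's sizes rather than flattening them.
function scaleFontSizes(root: HTMLElement, factor: number): void {
  const elements = [root, ...Array.from(root.querySelectorAll<HTMLElement>('*'))];
  // Read all author sizes first so later writes don't compound through inheritance.
  const sizes = elements.map(el => parseFloat(getComputedStyle(el).fontSize));
  elements.forEach((el, i) => {
    if (!Number.isNaN(sizes[i])) {
      el.style.fontSize = `${sizes[i] * factor}px`;
    }
  });
}

// Usage: scaleFontSizes(document.body, 1.5) enlarges all text by 50%
// while keeping headings proportionally larger than body text.
```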
/* 15:00 Break */
Control of text size in the User Interface?
Resolved:
DA: Does it make sense for a UA to try to be accessible in a system that's not accessible to begin with? If you can climb this set of stairs, there's a great accessible bathroom!
Checkpoint 4.17 (Note): User should not be able to turn off default styles.
Resolved:
4.14 Allow the user to select from available author and user style sheets or to ignore them.
Refer to Ian's Proposal
JA: Can we put this into 4.13? "Forward" instead of "Fast forward"?
DB: I don't agree with P2 for the first checkpoint.
WAC/RS: I think incremental is a technique.
SL: I'm OK with configuration of 4.13 being part of the techniques for 4.13.
JT: Techniques:
DA: You need a config component for people with slow motor control. If I can't leave a button quickly, I may end up at the end of the presentation.
WAC: For any technique, you need to be able to operate independently of the device. You could also have a "step" technique. I don't think you need a specific checkpoint for this requirement; I think you need a general configuration checkpoint.
Resolved:
Action Ian: Make editorial changes.
Resolved: This is a technique for the checkpoint about audio, animation control.
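A minimal sketch of the configuration DA asks for above, assuming an HTMLMediaElement-style media object purely for illustration: advance in discrete, user-configurable steps so a user with slow motor control cannot overshoot to the end of the presentation:

```typescript
// Hypothetical sketch: advance a presentation in discrete, user-configurable
// steps instead of a continuous fast-forward held down by the user.
interface StepConfig {
  stepSeconds: number;   // how far each activation advances
}

function stepForward(media: HTMLMediaElement, config: StepConfig): void {
  if (Number.isNaN(media.duration)) {
    return; // metadata not loaded yet; nothing to step through
  }
  media.currentTime = Math.min(media.duration, media.currentTime + config.stepSeconds);
}

// Usage: bind stepForward to a device-independent "forward" command;
// each activation moves ahead by exactly config.stepSeconds.
```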
How do requirements for APIs overlap (DOM? Platform standards? Provided by the tool)?
RS: There is a lot more "heterogeneous" software development going on in the industry today: software written to work on multiple platforms. A requirement to implement MSAA on Windows, when developers can implement the same thing in a way that works across platforms, is wrong. I would like to see us accommodate cross-platform solutions that don't implement the system convention APIs. I'd like conformance to a W3C specification to suffice (e.g., DOM 3 will address "chrome").
RS: In some cases, there is no chrome: things may be generated from the server. The DOM will let you do a lot of these things.
DA: From the user's point of view, is cross-platform more important than interoperable on a particular platform?
IJ: Proposed: If DOM does what the system API does, then the DOM would suffice. In other words, when the functionalities offered by the DOM are the same as those offered by MSAA, use one or the other.
RS: In the case where chrome is sent by the server, need access through the DOM.
Rabi: Take Mozilla. Very standards compliant (UI is written in XML with RDF).
RS: I think a UA could meet the UAGL using custom controls that are accessible through the DOM.
JT: The disadvantage for screen readers is that they need to implement a DOM interface as well. I think it's far-fetched to expect that screen readers will navigate UA controls through the DOM as well.
DA: The Guidelines should not specify how to implement functionality, but that you implement functionality.
DP: I think that we should emphasize the cross-platform possibility.
GR: I share JT's skepticism. I'm concerned about platform-specific solutions as well (since they tend to lag behind anyway). I'd like to see sample implementations (like those in the Authoring Tool Guidelines). E.g., I'd like to see how a UA such as Mozilla can do cross-platform accessibility, but I wouldn't put all eggs in one basket at this time.
JT: One advantage of MSAA was to standardize one way of looking at controls. But now you're talking about getting that info from a different source.
WAC: If UAs are using the DOM, they won't have to focus on their own object model. Also, WebSpeak at CSUN one year ago had implemented DOM 1 and was doing more than what could be done with MSAA.
DB: Why does cross-platform matter?
IJ: The guidelines say today you have to do both: DOM and platform-specific APIs.
RS: On the adaptability of screen readers: there was a time when there wasn't an MSAA, but screen readers adapted.
DA: We don't want to lose interoperability of assistive technologies.
IJ: What did we mean by 5.1? I think we could probably delete it.
RS: I'd like to address DA's valid concern about keyboard and mouse. I'm not proposing that we don't support keyboard and mouse input. I'm also not proposing that we don't pay attention to accessibility settings. I do think that we consider a cross-platform solution as sufficient. If you look at 5.3, it looks a lot like 5.1. Today, even on Windows, there are at least four ways for ATs to look into apps (off-screen model, DOM, MSAA, Java).
JA: Looking at techniques: 5.1 has no techniques. 5.3 has all the techniques we've been talking about.
Proposed:
JA: What if you include DOM in 5.3 (it becomes a convention for the OS)?
RS: Propose an "OR" for 5.3 - use W3C standards for UI if they are applicable.
DA: I propose a hierarchy to merge them:
JT: But today, screen readers are using OS-specific solutions. Mozilla won't work today with screen readers.
WAC: It sounds like we might want to combine 5.1 and 5.2 to state:
IJ: So two axes: (1) Order of preference of API (2) two types of access (content v. UI).
DP: I like the order proposed by DA. But accessibility is paramount. If your solution breaks accessibility, it's not a solution.
DB: I don't think the 1, 2, 3 order should be a requirement.
Suggestion: WAC, DA, RS, DB, JA, KB, GR.
Requirement: IJ, DP
IJ: Otherwise you break interoperability, which is an accessibility issue.
RS: I think more work needs to be done before elevating to a requirement.
DP: If the solution doesn't make your UA accessible, then you can't make a requirement.
IJ: P1 for access, P2 for 1-2-3 order.
DA: I have a problem with this proposal because we confuse the meaning of Priorities that way.
WAC: "You can't satisfy this checkpoint if you break interoperability."
JG: I don't see why the 1-2-3 requirement is more useful at P2 than at P1. If ATs don't work at P1, what's the point?
WAC: What about a "Until DOM..." clause, like WCAG. Or "Until Assistive Technologies..."
DP: We're doing the guidelines to make it possible for UA developers to make accessible UAs. Everyone has to do their part. Screen readers will cope if enough users want them to. What's the best solution for now and further down the road.
RS: My impression of what I'm seeing with Mozilla and IE: In Mozilla, the whole chrome layer can be enriched with Java(script?).
DB: How are we preventing or inhibiting cross-platform development?
Rabi: We have the event model in the DOM. We should push W3C Recommendations.
IJ: Openness in what the UA can provide (DOM v. system conventions) doubles the work for AT developers.
DP: ATs create accessibility on an application-by-application basis. This is just another application.
DA: I'm sure AT developers would rather have fewer standards than more. The word processor and the spreadsheet won't use the DOM, so a cross-application solution will please AT developers.
IJ: I propose:
KB: What is the likelihood that IE will implement both DOM and system standards?
JG: IE already does this.
RS: MS provides MSAA for controls and DOM for content access. That's accessible because it's designed for a particular platform. For cross-platform solutions, you need the solution.
RS: How to make these Guidelines forward-thinking so that we don't have to rewrite them in a year?
RS: I don't want arbitrary programmatic access to the application. You can't do a little bit of the DOM here, a little bit of MSAA there. You need to use a "proven" solution.
JT: MS apps used the standard dialog API and provided info to AT vendors. Is that what you want UAs to do?
Proposed:
Rabi: Good to highlight W3C standards
IJ: We have a checkpoint (p2) for that.
GR: I want to see the work being done in Mozilla to be included in the techniques document as a sample implementation - a blueprint for how this can be done.
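A minimal sketch of the cross-platform idea under discussion (the AccessibleUserAgent interface and its methods are hypothetical, not from any specification): if the UA exposes both its content and its chrome as DOM trees, an AT needs one query mechanism instead of one per platform API:

```typescript
// Hypothetical sketch: one programmatic access point covering both the
// rendered content and the UA's own UI ("chrome"), as in Mozilla's
// XML-described chrome, instead of a separate platform API for each.
interface AccessibleUserAgent {
  contentDocument(): Document;   // the document being browsed
  chromeDocument(): Document;    // the UA's UI, also expressed as a DOM
}

// An AT-side helper that works the same way on any platform the UA runs on.
function describeFocusedControl(ua: AccessibleUserAgent): string {
  const focused = ua.chromeDocument().activeElement;
  if (!focused) {
    return 'No UI control currently has focus.';
  }
  const label = focused.getAttribute('title') ?? focused.textContent ?? '';
  return `${focused.tagName.toLowerCase()}: ${label.trim()}`;
}
```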
Resolved:
WAC: We need additional materials on how screen readers work with the UAGL. E.g., refer to the "UA Browser Support" page.