Minutes of 1-2 March UAWG face-to-face (Cambridge)

Agenda | Thursday, 1 March | Friday, 2 March

Participants

In person: Jon Gunderson (Chair), Ian Jacobs (Scribe), Mickey Quenzer, Aaron Leventhal, Harvey Bingham, Rich Schwerdtfeger, Gregory Rosmaita, Al Gilman

On the phone: David Poehlman

Observers: Chris Lilley, Loretta Reid, Jason White, Phill Jenkins, Helle Bjarno

1 March

Definition of "interactive element" and related issues on focus

IJ Proposed:

RS: Problem I have with mouseover events: no semantics at all to javascript. The user doesn't know where they will go when they activate the element. Semantics are not well-known. This needs to be addressed in WCAG. I don't think that things with mouseover should be part of the definition. Perhaps we need to ensure that an interactive element has known semantics.
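
/* Illustrative sketch of the pattern RS describes (the handler name is hypothetical): nothing in the markup tells the user, or the UA, what the script will do on activation */

    <span onmouseover="doMystery(event)">Products</span>
    <!-- The element carries a handler but no semantics: is it a
         link, a menu, a decoration? The UA cannot say. -->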

GR: Concern about scripts is justified, but I don't think we need to change the definition in UAAG as a result.

RS: There are a lot of ways (notably with DOM 2) to interact with an element. Maybe we need "accessible interactive element" that has some attached semantics. There are plug-ins and scripts that can be activated.

Axes:

AL: This is used in four checkpoints: 1.2, 7.3, 7.4, and 8.2. I think the user needs to be able to control which events are fired. Do you always want to fire onfocus events when the focus lands on an element?

RS: In Swing, "armed element" means that there is an event handler attached (implicit or explicit). This would include links.

MQ: Here's my problem: I browse without knowing what's an element or not. I stop somewhere, and my "focus" is on that thing.

IJ: Note that the focus may move through the document, and may sometimes be on active elements, but not always. That's why we've disassociated "focus" from "active element". I think that focus moves through content, and can be on an active element. Navigation through links (by moving focus) is just one functionality.

AG: I think Ian's model is a little too broad. I think that the focus is a state of the current view. I think for UAAG that we should use "focus" in the way that XWindows and Windows use the term.

AG: Moving the focus to an element and having it make a noise might be considered interactive (not just user input, also styled behavior).

JG: In IE, every element is potentially interactive through event bubbling.
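
/* Illustrative sketch of the event bubbling JG describes, in DOM 2 Events syntax: one listener on the document receives clicks from every element, so any element is potentially interactive even with no handler of its own */

    document.addEventListener("click", function (evt) {
        // evt.target is whatever element was actually clicked,
        // anywhere in the document tree.
        window.alert("Clicked: " + evt.target.tagName);
    }, false);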

IJ: Does focus = active element? I've been wondering (and presuming that it does not equal it).

RS: One problem we had in Java accessibility work: you don't want to provide focus by default to all active elements (navigation will be slow), in particular, elements with event handlers. I think that "active element" should include elements with event handlers. Focus should be given to some subset of active elements. So I agree with AG that we should stick with "traditional" usage of focus in UAAG 1.0.

/* Chris Lilley enters to observe the meeting */

AL: I think "interactive" is easy to understand, but "active" can mean so many things in English...

MQ: "review" in screen access means reading the content.

CL: "Review" sounds like passively reading. The computer doesn't know what you are reviewing in a visual world. The computer has more of an idea in the audio output.

AG: "A view" doesn't capture the fact that users of screen readers operate in two planes: the GUI interface, and a layered interaction over that. I think "different views" mean two from the user agent. But our language for describing the interaction between UA and AT need to account for the AT description of their world.

GR: Screen access programs let you move point of regard around.

MQ: System has no idea where you are until you interact with the document through input devices.

RS: User interfaces don't attach focus to the point of regard. Traditionally, in UIs, focus is directly attached to where keyboard input is directed. What we want to do in terms of accessibility is require that the focus be tied to where the user is viewing content.

MQ: Review is like "point of regard".

GR: The focus needs to be kept in sync with what you are reviewing.

CL: Section 6.7 of SVG (on hover, active, focus). Hover is like point of regard; active is an element that is being activated.

/* Break */

AL: Event handlers can be attached directly through the DOM.

IJ: If the UA can recognize it, that is included in "active element".
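
/* Illustrative sketch of AL's point (element id and handler are hypothetical): the first handler is visible in the markup, so the UA can recognize the element as active; the second is attached only through the DOM and cannot be found by inspecting the content */

    <a id="nav1" href="#" onclick="doSomething(event)">Go</a>
    <script type="text/javascript">
      function doSomething(evt) { window.alert("activated"); }
      // DOM 2 attachment: invisible in the markup.
      document.getElementById("nav1")
              .addEventListener("keydown", doSomething, false);
    </script>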

Proposal:

1) You have to be able to navigate to all enabled interactive elements.

2) The UA must highlight enabled interactive elements.

/* WG is ok */

IJ: In most operating environments, the focus is used to designate the enabled interactive element where triggering will occur.

AG: Today, the focus is used to redirect sensitivity to keyboard events.

IJ: Should our document treat "focus" as a device-independent state variable?

Consider these two options:

AG: In the voice input mode, the voice doesn't point to anything, it just activates a particular enabled interactive element. I agree with AL that there are things associated with a pointer and things not associated with a pointer.

GR: How do I establish focus in a pointer-only world?

RS: Is a device-independent focus implementable? Or implemented? We should use terms that developers can use today.

/* Loretta Reid joins */

LR: The keyboard focus has state. The mouse focus doesn't have state. If you fold mouse/keyboard into one focus, you lose the ability to move the point of regard separately while keeping the focus.

AL: (We call this "pointer location", not "mouse focus").

JG: "Point of interaction"?

IJ: I'm not sure we gain by having a single term for "keyboard focus" and "pointer location".

AG: [Interaction between selection and focus] is like event bubbling. If you hit enter on a selection, the keyboard input is sent to the focus, since it is not grabbed by the selection.

Proposal:

IJ: We need to unbind "current viewport" from "current focus" then. I think we need to change checkpoints 4.15 and 4.17 to talk not about the focus but the current viewport: the requirement is that the current viewport not change.

/* Lunch: 12:30 */

[Comments from Aaron before meeting restarts]:

/* David Poehlman joins */

IJ: One way to state what I take to be AG's proposal from the previous meeting: you have to be able to activate all active elements. You can do this through a number of mechanisms, including moving the marker there and activating, or moving the pointing device there and activating the element.

GR: You have to be able to get to all active elements in a device-independent means. You also need to be able to query the element to find out what you can do there, and a device-independent means for triggering.

AG: You need to keep separate "getting to" and "activating". This means the user can do things in two steps if necessary.

IJ Proposal for breaking down 7.3:

AG: What about requirement to view the available actions?

IJ: Would this be for all behaviors (e.g., form controls, following a link)? Or is this just for things with explicit behaviors?

IJ: I would propose not including a requirement for telling the user about what behaviors are attached. The author-specified ones are in scripts. The predefined ones are in the HTML spec. Simply identifying that there is more than one action associated with an active element is a somewhat different matter.

GR: And sorting by event type.

/* IJ demos the enlightenment window manager event handler tooltip mechanism */

AG: The query functionality is a technique. What I don't think is a new requirement is to be able to trigger all of the associated input actions. If we don't allow the user to distinguish the actions (e.g., an element has three "on" attributes specified on it), how can you in fact give device-independent access?

JG: One solution is just to allow the user to trigger different actions through different keys (without telling the user which ones are available).

/* Judy Brewer enters */

JB: Yes, example: let's say that I can see the screen but am using voice or eye gaze input. I'm trying to guess at what I can do.

/* Judy leaves */

Resolved:

Action JG: Write a proposal about documentation of associated actions on the focused element.

Action JG: Take to WAI PF the requirement that formats need to provide more precise means for specifying events (than HTML allows for).

/* Break 15:00-15:20 */

/* Phill Jenkins joins meeting */

Priority of checkpoint 10.1 (documentation of user agent)

PJ: Top on my list of priorities is to get UAAG 1.0 out the door. I also want to make UAs more accessible.

PJ: I have an issue with the priority of checkpoint 10.1: "Ensure that at least one version of the product documentation conforms to at least Level Double-A of the Web Content Accessibility Guidelines 1.0". This sounds like the priority of documentation is higher for user agents than for other software. I would encourage user agent developers to make accessibility information P2, or even P3 for some.

PJ proposal:

DP: We don't want to lose sight of the fact that a person with a disability may need access to all the documentation of a piece of software. We don't want to fall into the trap of requiring accessible documentation (WCAG 1.0 Double-A) for access features only.

GR: I think that the proposal to limit the requirement to features that benefit accessibility is a form of cyber-ghettoization. I use all of the user agent, not just specific features. And I need to know about the impact of anything that my user agent does (e.g., communication with other software).

MQ: If you as a sighted user only had access to a subset of the documentation, would you want to use the product?

RS: Why is documentation more important for users with disabilities?

IJ: CMN has frequently made the point that it is (e.g., because the user interface is not obvious to users with disabilities).

JG: Suppose the UA provides only Single-A access to HTML documents. If the UA is only a P1 UA, and the documentation is only P1, then it's even harder to use the user agent.

AL: I think relative priority is reasonable.

GR: Documentation is the make or break point for users with disabilities (e.g., due to loss of gestalt view) to be able to use software.

PJ: Why is that different from a banking application on line?

RS: Yes, software will be more and more server-based.

PJ:

AG:

MQ: From my technical support experience: the more documentation provided to users, and the more usable it is, the fewer tech support calls you have.

GR: Refer to my minority opinion on checkpoint 10.1.

GR: Focus of these guidelines needs to remain on the user.

RS: I think that if WCAG 1.0 is insufficient, it needs to be fixed. The levels are in there for a reason. If the levels are wrong, WCAG needs to be fixed.

PJ: Why do you need a description of the GUI if you're blind?

GR: Sometimes I need to talk to tech support.

IJ question:

JG: UAAG doesn't have relative priorities for satisfying requirements of software guidelines.

GR: Priority is based on need, and the need is to know how the user agent works.

PJ: What are the technologies used to implement documentation that make it difficult to conform to WCAG 1.0 Level Double-A?

AG: One example: the documentation comes on a video cassette.

JG: Then captioning is P1. And auditory descriptions.

AG: [Opinion] Documenting how the user agent works is a more critical document than a general document. The people with disabilities are more dependent on the documentation for gaining the ability to use the tool than the general user.

GR: If documentation for installing the browser is in the browser itself... You need to know how to get help.

AG: To reiterate what GR said: "There needs to be a requirement about what the documentation includes."

IJ: We haven't said "you have to document everything". I suffer no more as a user with a disability for that which is not documented. (We do have some documentation requirements in G10 - what must be documented.)

RS: I'm not convinced that UA documentation is more critical than a general document.

PJ: Here are P2 items in WCAG 1.0 that are hard to do:

IJ Proposal:

/* IJ leaves and JG takes over scribing */

GR: Until user agent clausing on user agent can be reduced.

PJ: That narrows it some.

GR: I truly feel your pain in creating accessible documentation. I do not support any lowering of the requirement.

MQ: The problem is that the WCAG priorities are broken.

PJ: I don't assume that.

MQ: I support a proposal from PJ. My proposal is that you explain to us what is needed in the documentation.

PJ: I am going to do Triple-A conformance for HPR. Fundamental issue is WCAG priorities.

HB: I have my own minority opinions that I have not documented.

AG: If there were more data, I would like to hear from users about their use of documentation. I don't know where we would get more data.

Resolution:

Action GR: Talk to National Tech Center at AFB about possible information on importance of documentation to users with disabilities.

Demonstration of IBM Home Page Reader

PJ demonstrates HPR version 2.5 for Windows.

PJ: Uses the DOM and has scripting support.

/* IJ returns and takes over scribing*/

Clarification of speech requirements

Refer to email from Ian requesting clarification on speech requirements.

JG: In the SAPI 5 interface, you have the voice groupings available. The SAPI 5 spec has volume, rate, average pitch, and engine type (e.g., some type of gender information or age level). You can't adjust the fundamental frequency of filter number 5.

PJ: In HPR, there is a separate dictionary. For most users, it's difficult to provide an understandable GUI for tweaking all the parameters.

MQ: I don't think we'll find implementation of all nine parameters. We'll find implementation of pitch and a few others.

PJ Proposal:

GR: The checkpoint was about the UA not blocking the user from configuring the parameters of the speech engine.

IJ: Note that this checkpoint has two parts:

JG: There is not a standard way that an engine provides access to these parameters. Some engines don't provide direct access to all parameters.

MQ: We don't have implementation experience on how to configure all 9 parameters.

HB: My concern is that the parameters in the list are not independent; they should be moved to the Note. Don't require them in the checkpoint.

GR: Control over some of these aspects may be UI-dependent if the self-voicing browser implements ACSS.
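
/* Illustrative sketch of the CSS2 aural ("ACSS") properties GR refers to; a self-voicing browser honoring them would expose some speech parameters through style rather than through its own UI (values illustrative) */

    h1 { voice-family: male; pitch: low; richness: 80; }
    em { pitch-range: 60; stress: 70; speech-rate: slow; }
    a  { volume: loud; }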

IJ quoting a PJ proposal:

GR proposal:

HB: Quite a few engines don't support these nine features.

MQ: I think this is getting closer. My problem with the nine items is a definition one. Every synthesizer defines these terms differently.

JG: I think the data we need is not the union of all speech engine parameters. We need a matrix of what is currently supported.

/* MQ and AL leave */

GR: When you open up a speech engine's UI, what comes under "voice characteristics" may be a subset of all the characteristics you can configure. The functionality to provide stress needs to be there at a P1 level. The most important aspects of speech need to be there at a P1 level (e.g., supplemental speech).

AG: There is a difference between the "gross parameters of the voice" and the opportunity to have inflection (to set off special cases).

PJ: What you provide a UI for is different from what the speech engine can do. Our list of nine seems to have overlapping concepts and too many levels.

JG: We have two levels of things we want to change:

JG Proposal:

AG: Two ideas:

  1. To describe what HB said: a lot of the inflection that is there corresponds to prosody. I would distinguish styling: changes in the prosody to indicate, e.g., that something is a quote (what's in the markup, rendered through style).
  2. "Provide styling to allow the user to distinguish active elements, fee links, selection, focus.

Resolved: Break 4.13 into the following checkpoints:

PJ: I suggest we build a table of what speech engines do.

RS: Another P3 idea: custom grouping of voice characteristics.

/* Adjourned 19:00 */

2 March

Participants: Jon Gunderson (Chair), Ian Jacobs (Scribe), Mickey Quenzer, Harvey Bingham, Daniel Dardailler, Rich Schwerdtfeger, Al Gilman, Aaron Leventhal, David Poehlman (phone).

Observers: Marie-Claire Forgue

What happens when author takes control of UA-UI controls?

Refer to email from Ian on taking control of UI controls.

IJ: Author writes to the status bar some scrolling text.
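
/* Illustrative sketch of the scrolling status-bar script IJ describes: the author's script continuously overwrites the status bar, obscuring the UA's own messages */

    var msg = "Welcome to our site!   ";
    var pos = 0;
    function scrollStatus() {
        window.status = msg.substring(pos) + msg.substring(0, pos);
        pos = (pos + 1) % msg.length;
        window.setTimeout(scrollStatus, 150);   // reschedule forever
    }
    scrollStatus();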

RS: The simplest way to deal with this is to allow configuration to turn off changes to the user agent's user interface from the author.

AG: I heard that "there should be a config setting where the chrome is strictly reserved to the user agent; the page content would be strictly segregated".

GR: "When UI controls are intended to serve a specific function, the user must be able to configure the UA so that the UI space reserved for the display/communication of specific information is not obscured."

AL: What else is there besides the status bar? Is there a scroll-to command?

JG: There are javascript functions to change UI settings (e.g., auto-scroll through the document).

RS: Does this mean you can't pop up windows? This overlaps with other checkpoints.

DD: I think GR's proposal should be shorter: "Allow the user to configure the user agent so that the author's content doesn't change the UA's user interface." Some pages change the title bar, some pages prevent you from closing the window (you have to use a close button in the content of the page), and sometimes scripts change the back button behavior. It's not just presentation being obscured; it's functionality of the UI being changed or disabled.

AL: You don't want chrome taken away that you want to be able to use. You don't want information to be obscured. I think you want to have access to system status at all times.

AG: Two comments:

DD:

MQ: Need to ensure that the status bar is available to ATs. There should be a message history for the user (e.g., of messages that appeared in the status bar). You don't want to monitor the status bar all the time. If you are not monitoring it, you might miss something.

AL: Note: We shouldn't limit what authors can write. People may want to write entire applications in XML (including what describes the chrome). I agree we should make important info/features available to the user at all times, as opposed to preventing the author from doing some things.

GR: We don't have to prevent the author from doing things; just give the user final control. I agree with MQ's idea for a message history.

IJ: I would be comfortable creating a guideline (not a checkpoint) about final user say over author control of the user agent user interface. I don't want to add new requirements at this point, but it would be valuable (if this is an important principle) to convey a message at the guideline level. I am not thrilled to add a checkpoint about a configuration so that the author can't write to the status bar (or title bar).

IJ: Are there other means than scripts to do these things? Turning off scripts is a P1.

GR: Turning off scripts is too Draconian (you don't necessarily have to turn off scripts that are in content).

AG: I'm going to be at the other end of the scale from GR, but don't expect this to be a checkpoint in this document:

AG: This group should tell WAI PF that the fact that scripting breaks the back button is a bug in the format.

/* There is broad agreement that turning off scripts is not a real solution to this problem, due to the predominance of scripts on the Web */

IJ: Another way to view this problem: these are UI controls and we have checkpoint 5.11.

GR: I think that this is insufficient.

AL: I think there needs to be a standard for setting what an application coming over the net can do on your computer.

IJ Proposal:

GR Proposal:

IJ, RS: We will object to new requirements. This document needs to be published.

Resolved:

Action IJ: Craft a demo guideline to convey the message of user control over the UI.

Action JG: Create a wish list of other requirements not in UAAG 1.0, including those that might fall under this Guideline.

Action JG: Write up some techniques for the checkpoint on turning scripts on/off, about subsetting the configuration.

/* Thierry Michel announces that IE 5.5 for Windows supports the SMIL 2.0 'speed' attribute, allowing user agents to change the playback rate of SMIL 2.0 presentations */
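
/* Illustrative SMIL 2.0 fragment (source and values hypothetical): the 'speed' attribute scales the playback rate of a time container, here playing a sequence at half speed */

    <seq speed="0.5">
      <video src="lecture.mpg"/>
    </seq>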

Alerting the user to the existence of author-supplied event handlers.

Refer to Jon Gunderson's proposal for orientation about enabled elements.

Proposed: "For the element with focus, inform the user which keyboard, pointing device, and voice input events will activate associated author-supplied event handlers."

RS: You don't know what the handlers will do. Without a description, I don't know how valuable this is.

JG: Right, this information is in scripts.

RS: WAI PF put in a DOM 3 requirement: access to scripting semantics and activation schemes. We've requested info about associated event handlers, and descriptions (and WCAG should require descriptions).

AL: Are we talking about which key?

IJ: I think so, yes.

RS: For discussion with the DOM WG: programmatic, device-independent activation of event handlers.

AL: Refer to XBL, the XML binding language (for info about associating behaviors with elements).
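
/* Illustrative sketch of an XBL binding (Mozilla draft syntax; names hypothetical): the behavior is declared once and attached to elements, rather than written on each element */

    <binding id="activatable" xmlns="http://www.mozilla.org/xbl">
      <handlers>
        <handler event="click" action="doSomething(event)"/>
      </handlers>
    </binding>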

IJ: Is this requirement for all associated actions or just device-dependent ones?

JG: For all, I think.

RS: I'm concerned here because we don't have much experience with this requirement. If we have descriptions some day, we should require access to that information as well.

IJ: I think we're already covered on access to the description by virtue of Guideline 2.

RS: The problem is that DOM 2 and the specs don't allow authors to do this.

JG: Add to the techniques document: "If in the future, descriptors of script semantics are available through the format, make that available to the user."

AL: I think this requirement could be useful, depending on how people make use of the DOM.

IJ: Note that this (alert) checkpoint becomes necessary as soon as automatic behavior triggering is turned off (per yesterday's discussion). The device-independent triggering is only one aspect of this requirement.

Resolved:

RS: I think this is a step in that direction, but it's half-baked: without descriptors, I don't think it's as useful. But I can live with P2.

Technique idea: context menu for triggering available event handlers.

RS: Suppose I have an AT that wants to write to the DOM. I'm pretty sure that the user won't want to know about event handlers attached by the AT; they shouldn't appear in the list. You won't know from the DOM which came from the author. At least say in the techniques that the event handlers from the AT should be distinguished.

Device-independent activation of author-supplied associated event handlers.

JG: Issues with double-click, mouse over (how to simulate the pixel coordinates).

JG: I think it's doable: for the pointing device events to be mapped to keyboard events, you either consider the pointer inside or outside the focus rectangle.
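
/* Illustrative sketch of the mapping JG describes, using DOM 2 Events; getFocusRectangle is hypothetical, standing in for the UA's internal formatting information */

    function activateWithSyntheticClick(el) {
        var box = getFocusRectangle(el);    // hypothetical UA-internal call
        var x = box.x + 1, y = box.y + 1;   // a point just inside the box
        var evt = document.createEvent("MouseEvents");
        evt.initMouseEvent("click", true, true, window, 1,
                           x, y,    // screenX, screenY (approximate)
                           x, y,    // clientX, clientY
                           false, false, false, false,  // no modifiers
                           0,       // left button
                           null);   // no related target
        el.dispatchEvent(evt);
    }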

AG: Can you create an event "having processed the mouse pointer", i.e., where you have a path in the DOM tree? Or is the UA required to synthesize a (pixel) coordinate, which means it needs to know the focus rectangle coordinates, which we don't have today because we don't have the views and formatting model?

JG: We don't have this through the DOM, but the UA may have it.

AG: Maybe it's a "0,0" rule in WCAG: scripts need a default behavior when the handler gets 0,0 (if we can't get information about the real x,y for the event).

IJ: Why ask the author to write scripts that way? If you have to have the author do something, have them use device-independent handlers?

JG: What about mouse buttons?

AL: That's part of the event.

AG: But are we capable of generating the event?

JG: But the UA knows this information (it generated the focus rectangle).

Resolved: No exception for author-supplied device-independent event handlers (see the full resolution above).

Action RS: Write to w3c-wai-ua asking whether, within IE, it's possible to simulate mouse button events in a way that would generate an appropriate event object. Can we get the focus rectangle (box) from IE?

"Required user input" in checkpoint 2.4

Refer to email from Ian on required user input.

MQ: Is this an author responsibility?

IJ: The WG decided that this was a UA responsibility.

HB: Is there in fact a way for the user to find out that the user agent is waiting for input? "Alert the user, when paused, to the availability of input."

DD: I agree with the proposal and the alert. I think people will ask the same question that I had about the difference from the general pause control. I think you need to make clear the difference from manual user control in other checkpoints.

JG: Refer also to modified proposal from JG.

Resolved:

UAWG meets with DOM WG

/* 12:00 - 14:00 */

Subjects covered:

Action JG: Take to the WAI CG the question of pursuing the DOM Views and Formatting module.

More on navigation

IJ Proposal: Change 7.1 from "navigate to all viewports" to "make each viewport the current viewport".

AG: Also need to be able to reach all of them in a systematic manner.

IJ Proposal: Allow the user to make any viewport the current viewport. The minimal requirement for achieving this is serial navigation through all viewports.

GR: Allow the user to navigate through all available viewports. When the user navigates to a viewport, that becomes the current viewport.

AG: I think that identifying "serial" is misleading.

JB: I agree with GR and AG. If I'm navigating through speech recognition, serial navigation would be an annoyance.

GR: With other checkpoints, we have configuration options that allow sufficient control.

AG: Serial order in content is naturally defined by the transfer encoding (thus makes sense to start from that point). Not the same linear topology for windows.

Resolved: Add a Note to 7.1 that navigating to a viewport makes it the current viewport. Add some discussion to the techniques.

/* Break 15:20 */

Proposed section on "how to refer to UAAG 1.0" (not conformance)

JB: We have a structure that allows organizations to adopt WCAG. I haven't looked at the implications for UAAG 1.0. Is the question how to "diverge from" UAAG 1.0?

IJ: I wanted to propose a mechanism for the document for how to refer to UAAG 1.0 (e.g., from a catalog).

Action IJ: Propose text for this cataloging information.

Other formats than SMIL that allow caption positioning?

Refer to question from Jon Gunderson to Cindy King.

JB: Please ping her again.

Action JG: Call Cindy King.

Action HB: Talk to Geoff Freed.

IJ: QuickTime lets you do this. Refer to minutes of the 10-11 April ftf.

Animation classes

Refer to comments from Ian based on Aaron Cohen comments on animation classes (issue 430).

IJ: One issue in SMIL animation: you can refer to clock time:

CMN: "You can refer to clock time 1pm, then that + 3 minutes, then 1:05", and changing the speed may cause the order to change.

GR: But you can translate everything to relative time (including clock time).

CMN: But that falls down when the clock time is based on some real event in the world (that happens to happen at 3pm). The UA won't help in that case. But allowing the user to slow down the whole world may be useful. Or, allow the user to address real world time as best the user can.

CMN: Here's a problem case: you start your banking at 2pm. You have seven minutes to do this. If you slow the presentation down, you may miss the "deadline" when the server expects to get data.

IJ: The question was for 3.2: Does this make sense for just animated images or all animations? Also, 3.3 talks about animated text. What's the relationship between animations, animated images, and animated text (we use all three)?

CMN: I think that 3.3 is to stop animation (probably shouldn't be just limited to text).

IJ: Historical point: we isolated the case of text because of the importance of text.

AL: I think of animated images and videos as being the same (at least from the perspective of the format).

CMN: I think that animated images and video should be grouped together, separate from animation in general (e.g., bits of a page, or scrolling text). Video is a subclass of animation in one sense. Video/animated images are "blocks". Animations are a superclass.

AL: What about SVG where you mix the two?

AG: If you want to make clearer what an animation is, pop up:

CMN: There seem to be three questions:

  1. Are we requiring stuff to be played backwards?
  2. What's the definition of animation?
  3. Which requirements apply to which classes of animation?

AG: Point to the digital talking books work in the techniques for cue and review. This should be distinguished from fast forward through time.

Action IJ: Distinguish these in the techniques, and point to digital talking books.

IJ: I think we have avoided "continuous content" to avoid scope creep.

AG: For controlling speed of play and stop and seek, I think it's better to talk about continuously changing effect to the user.

Action IJ: Add a Note to checkpoint 4.4 pointing to the applicability provision about streaming and whether slow/etc. control is enabled by the format.

AL: An animation is something where elements of content are moving. A spinning globe is like video in that it doesn't change its position on the canvas.

AG: I'm not happy with defining animation as including all video. I don't think that's the common understanding. But I think my definitions would overlap.

IJ: I'd be happy referring to "video and animated images" as a pair, and referring to animations separately.

AG: I think they overlap.

Action CMN: Propose a definition of animation. I think it does overlap with video and animated images. Provide a definition that makes clear why "animated images" are not explicitly mentioned in 4.4 and why video and animated images are both in 3.2. Deadline 8 March.

GR: Be cautious about using the term framed (does this mean "has a frame around it?").

/* No resolution/no change. Any change depends on result of CMN's action */

Review second applicability clause

Due to the addition of "conditional content" and the model in which role may not be known to the user agent, the UA may not be able to recognize role where it is pertinent (e.g., it can't tell that something is a caption in SMIL).

IJ: In the SMIL case, you don't know from the format that something is captions. So our captions positioning requirement (4.6) becomes dubious.

CMN Proposal: I would suggest broadening 4.6 to be about positioning of objects. You can't tell whether a video object is a video of captions (e.g., a signed track), or a text stream, or a sequence of animated images, etc.

IJ: Broadening would be consistent with our other changes to G2. But I resist broadening the checkpoint requirement.

Proposal: Delete 2.5, as it's redundant with 2.6 and 2.1

IJ: I think that 2.5 is covered, with the right priorities, by 2.1, 2.3, and 2.9.

AG: Functionally it's covered, but not at the right priority.

CMN: My concern with removing 2.5 is the possibility that it refers to providing user control over synchronization and that that user control might be a good thing to keep. The use case is:

CMN: Part of access to visually rendered information is what it's rendered on top of. The requirement for positioning is an important requirement.

JG: That would be a big new requirement.

CMN: It's a new requirement in principle, but not in implementation. If you can implement 4.6 as proposed, it would be an epsilon difference to do the broadened version.

AG, reporting on a discussion with RealNetworks: "Everything is there through our API, but this should be done with an API." My estimate is, if we were to go back to RealNetworks, they would support CMN's assertion that positioning is a single problem.

CMN: I talked a lot with RealNetworks about this particular problem during this week. I have an algorithm for solving it (content transformation).

Action CMN: Propose this algorithm to the UAWG and to ERT.

Merging checkpoints 3.5 and 3.6?

Refer to comments from Al Gilman.

Action AG: Look at IJ's reply to AG's comment.

Returning to last call

JG: We have enough changes to the document that we should return to last call.


Last revised: $Date: 2001/03/16 17:15:52 $