W3C

- DRAFT -

HTML-A11Y telecon

26 Jan 2011

See also: IRC log

Attendees

Present
John_Foliot, janina, Eric_Carlson, Sean_Hayes, silvia
Regrets
geoff, judy
Chair
John_Foliot
Scribe
janina

Contents

    Identify Scribe
    Actions Review
    Bug Review Followup
    Synchronizing Multiple Binary Assets
    Media Queries on Track
    Poster Issue
    Summary of Action Items

<scribe> scribe: janina

<JF> http://john.foliot.ca/

<scribe> agenda: this

Identify Scribe

So, Janina is scribing ...

Actions Review http://www.w3.org/WAI/PF/HTML/track/actions/open

JF: Has submitted poster change proposal, so we can close Action-94

<JF> JF's change Proposal: http://www.w3.org/html/wg/wiki/ChangeProposals/PosterElement

Next, Action-95 asking media to revisit bug 1195 ...

JF: Think it's not a bug ...
... but Action-96 is to revisit Bug 11395
... So, Action-95 is probably in error and we should close it.

RE Action-96 -- Eric is following up with David Singer ...

Now Action-97 to follow up on Bug 9673 ...

<JF> http://www.w3.org/Bugs/Public/show_bug.cgi?id=9673

<JF> - Remove any reference to a specific Time Stamp format for video

<JF> captioning from the specification at this time.

JF: Wondering what to do, as references are generic

SH: I've left my bug unresolved until Section 8.3.4, I believe

JF: So we should let it ride for now?

SH: Still hazy on the process--it'll get closed at some point ...
... Correction, it's Sec 10.3.2

JF: So we can close Action-97

Now on Action-98

JF: Eric, you posted a description of text formats; was that posted elsewhere?
... Ah, it should be Sean, who drafted it with Geoff

SH: Suggest the Media group should approve, and our chairs can forward to WCAG

JF: Will post to TF list asking for CFC

JS: Need more time on my action

Bug Review Followup

Timed Tracks Feedback from Google http://lists.w3.org/Archives/Public/public-html-a11y/2011Jan/0152.html

JF: We were expecting Silvia would walk us through this, but she's unable to join today, so we'll defer this item.

Synchronizing Multiple Binary Assets

JF: Is Media Queries a way to do multiple binary assets?

EC: No, it's different
... Not convinced it's a gap we can close; it's a big deal

JF: So, we have src; can we use @kind to declare the available binaries? Is that a partial solution?

EC: Yes, there's a proposal Silvia wrote asking to generalize track to work with nontextual tracks
... Issue is keeping things in sync when timelines aren't exact

SH: Even if they are the same, there's still no sync concept

JS: Also an issue for i18n alternative audio

SH: Yes, a call for proposals is out but no proposals yet

EC: Agree with Sean that we don't have a mechanism to sync multiple media elements
... Don't think it will be that hard to have external files playing in sync, assuming timelines play in sync

JF: We've had a Flash player that supported audio description, based simply on firing both files at the same time

EC: Yes, we'll have that

JF: Sign translation needs to be close, but not as precise as audio description

EC: Think we can get good sync if timelines of files are correctly authored, media engine will sync
... We have no mechanism for discontinuity in timelines

JF: And also the requirement to navigate by chapter or smaller unit

EC: Not a problem
... Only discontinuities are a problem. Describing the nonlinear relationship is what's missing.

SH: Not sure I agree that's all that's missing
... Concerned about drift after a few minutes

EC: As long as we're only talking about child elements, the audio hardware clock should manage the sync
... If we expect script to do the sync, that's different.

SH: Well, not what I'm hearing ...

JF: So, we should return to this?

EC: We need to followup with Silvia to make sure this is moving forward.

JF: OK
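
As an aside for readers, a minimal sketch of the script-based approach touched on above: two sibling media elements kept in step by JavaScript, assuming identically authored timelines. The file names, IDs, and drift threshold are illustrative assumptions, not anything agreed by the group.

    <video id="main" src="lecture.webm" controls></video>
    <audio id="desc" src="lecture-descriptions.ogg"></audio>
    <script>
      var video = document.getElementById('main');
      var desc  = document.getElementById('desc');
      // Start and pause the description audio together with the video.
      video.addEventListener('play',  function () { desc.play();  });
      video.addEventListener('pause', function () { desc.pause(); });
      // Re-align whenever the two timelines drift apart noticeably.
      video.addEventListener('timeupdate', function () {
        if (Math.abs(video.currentTime - desc.currentTime) > 0.3) {
          desc.currentTime = video.currentTime;
        }
      });
      // Chapter navigation: seeking the video re-seeks the description too.
      video.addEventListener('seeked', function () {
        desc.currentTime = video.currentTime;
      });
    </script>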

Media Queries on Track http://lists.w3.org/Archives/Public/public-html-a11y/2010Dec/0145.html

JF: Nothing to do?

JS: Yes, we've checked in already.

Poster Issue http://lists.w3.org/Archives/Public/public-html-a11y/2010Dec/0054.html

JF: Have a change proposal on this ...

<JF> http://www.w3.org/html/wg/wiki/ChangeProposals/PosterElement

JF: So, I believe there are use cases where we need markup for exposing poster info, i18n specifically
... Fairly common in video
... A main reason why I propose we set it up as a child element
... Problem is that an attribute cannot accept attributes
... Screen readers can pronounce correctly if the lang is identified
... This is consistent with how we handle specifying a caption in Italian vs a second captioning file in French
... Seems the benefit is more than a11y, also generally important for i18n

EC: Agree about applying attribute, but don't believe we need to
... If first frame needs description, it should be described as a part of the video resource itself
... I believe this is also Silvia's point.

JF: Pattern I'm proposing requires nothing new, uses pieces of what we already have.
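
For illustration only, one way the child-element pattern JF describes might look in markup; the <poster> child element, its content model, and the file names here are hypothetical, so consult the change proposal linked above for the actual wording.

    <video controls aria-describedby="synopsis">
      <source src="festival.webm" type="video/webm">
      <!-- Hypothetical child element replacing the poster attribute;
           its content can carry lang, unlike an attribute value. -->
      <poster src="festival-affiche.jpg">
        <span lang="fr">Affiche du festival, édition 2010.</span>
        <span lang="en">Poster for the 2010 edition of the festival.</span>
      </poster>
    </video>
    <p id="synopsis">A short synopsis of the video, referenced via aria-describedby.</p>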

EC: Isn't it more important to describe the video resource thoroughly?

JS: Yes, but aren't these two separate things?
... So how would it look--John's examples--if we described the poster in its video resource description? What's the markup?

EC: An additional problem is that a first-frame description only makes sense until the movie starts.

JF: So, once the user starts the video, there's an action exposed to the browser that can be passed to AT, so blanking could be done.

EC: I'm asking what the appropriate thing to do is.
... So the UA needs to query for time=0?

JF: Perhaps
... There are poster content examples where alt is insufficient because it's not intended for that much text. That's in addition to i18n.

SH: A first-frame element shouldn't be a void element
... Would prefer that alternative text not be an attribute.

JF: Agree.

JS: Propose we take a set of example first-frame/poster images and ask for markup proposals to expose that info.

SH: So, Eric, you're in agreement with the no-change proposal?

EC: Yes.

JF: Seems there are two things that need proof ...
... Is my markup the most efficient?
... Compare some real-world images--I only did two.

SH: Agree that the stronger the use cases, the better.
... If we can show markup and real-world examples, it should be more persuasive.

JS: Think identifying the pieces of info by classification is helpful. Not comfortable with a run-on block of text.

SH: You're using aria-describedby to point to the synopsis; it's been argued it should point to the transcript.
... I believe we need another mechanism for transcript.
... Concerned that describedby is autovoiced without user action
... We currently have no mechanism for transcript, I think?

<JF> http://dev.w3.org/html5/spec/Overview.html#attr-track-kind

JF: Believe one of the @kind values is transcript ... looking ...
... Ah, it's not there!

JS: Should be the same as caption or text description

SH: Think we're not covered.

JF: Another @kind term?

SH: Yes
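
For reference, the draft linked above lists kinds such as subtitles, captions, descriptions, chapters, and metadata; the sketch below shows where a new transcript value would slot in. The kind="transcript" line, file names, and formats are hypothetical, not part of the specification.

    <video controls>
      <source src="talk.webm" type="video/webm">
      <track kind="captions"     src="talk-captions.en.srt"     srclang="en" label="English captions">
      <track kind="descriptions" src="talk-descriptions.en.srt" srclang="en" label="Text descriptions">
      <!-- Hypothetical: "transcript" is not among the draft's kind values. -->
      <track kind="transcript"   src="talk-transcript.en.srt"   srclang="en" label="Transcript">
    </video>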

<JF> Hi silvia, can you join the call as well?

<silvia> oh, is it happening right now?

<JF> it's been going for an hour already

<JF> <grin>

<JF> OK

<JF> we think we are close to wrapping up

<silvia> sorry - but I will participate here

SH: What did we decide between using a screen reader or TTS directly to voice text descriptions?

JF: Isn't IBM Japan our info source on this?

<silvia> IBM Japan - as per the meeting we had in November - is keen to be able to have both supported

<JF> Janina has concerns about using the screen reader, as it introduces complexity when using the SR for other actions

<silvia> the demo I gave of a screen reader doing text descriptions was well received

JS: The problem with relying on the screen reader is that there's now this constant media output that interferes with the user's ability to control the computer.

<silvia> no different than other aria-live javascript activities

Yes

But we don't know a lot about that yet. It's all pretty new.

JF: For next week's agenda?

JS: Yes

<silvia> I would assume that when you are watching a video you are focused on the video and not doing other things on the page, so I don't know if there are many other things reading out at the same time

JS: Yes, Silvia, but you need the option to grab control and do something different quickly, like mute the video and answer an incoming call in your VoIP window.

<JF> proposing to put this topic on agenda for next week

<silvia> that option is not influenced by the screen reader reading out text description cues

<silvia> they are short text blips read out by the screen reader - no interruption of interactivity
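
A minimal sketch of the aria-live pattern Silvia refers to, assuming hand-authored cues driven off timeupdate rather than any particular track API; the IDs, cue times, and text are made up for illustration.

    <video id="v" src="talk.webm" controls></video>
    <div id="descriptions" aria-live="polite"></div>
    <script>
      // Hand-authored description cues: start time in seconds plus text.
      var cues = [
        { time: 12, text: 'The speaker walks to the whiteboard.' },
        { time: 47, text: 'A diagram of the video element appears.' }
      ];
      var video  = document.getElementById('v');
      var region = document.getElementById('descriptions');
      var next = 0;
      video.addEventListener('timeupdate', function () {
        // When playback passes the next cue, drop its text into the live
        // region; the screen reader announces it without a focus change.
        while (next < cues.length && video.currentTime >= cues[next].time) {
          region.textContent = cues[next].text;
          next++;
        }
      });
    </script>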

Yeah, we kind of went back to poster ...

JF: We're adjourned.

Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.135 (CVS log)
$Date: 2011/01/26 23:20:30 $
