RE: [Minutes] Media Sub Team of the Accessibility Task Force - Feb 2., 2011

I fail to see how this has anything to do with accessibility, or SMPTE, or timed text.

-----Original Message-----
From: public-html-a11y-request@w3.org [mailto:public-html-a11y-request@w3.org] On Behalf Of Silvia Pfeiffer
Sent: Friday, February 04, 2011 12:40 AM
To: HTML Accessibility Task Force
Subject: Re: [Minutes] Media Sub Team of the Accessibility Task Force - Feb 2., 2011

As is my nature, I am curious about the FCC and SMPTE-TT work. So,
I've looked around a bit.

UltraViolet is a cloud-based DRM system standardized by a consortium
of movie studios together with Sony, Adobe Systems, Cisco, HP,
Microsoft, Neustar, Intel and several others (see
http://www.uvvu.com/alliance-members.php). It is not available
anywhere yet. Not part of the consortium are, amongst others, Apple
and Google (should that tell us that it's not about the Web?). It's
still questionable whether it will be the DRM system of choice for the
market once it comes out, but certainly many are working towards it.

Anyway - it seems there is a lot happening around specifications for
Internet services - whether it's all good for the Web is a very
different question for me. Is the FCC actually looking at Web
standards, or is it only concerned with TV services when delivered
over the Internet (not the Web)? Actually, even their mission
statement never uses the word "Web" and only ever talks about the
Internet. I wonder how much their agenda is driven by the TV and movie
industry rather than by native online services.

Cheers,
Silvia.
(speaking all for myself here)


On Fri, Feb 4, 2011 at 10:03 AM, John Foliot <jfoliot@stanford.edu> wrote:
> The minutes from the 2 February 2011 Media Sub Team can be accessed as
> hypertext from:
>
> http://www.w3.org/2011/02/02-html-a11y-minutes.html
>
> ...and as plain text following this announcement -- as usual, please
> report any errors, clarifications, mis-attributions, and the like by
> replying-to this announcement on-list
>
> JF
>
> *****
>
> HTML-A11Y telecon
> 02 Feb 2011
>
> See also: IRC log
> Attendees
>
> Present
> Regrets
> Chair
>    Janina_Sajka
> Scribe
>    JF
>
> Contents
>
>    * Topics
>         1. Identify Scribe
>         2. Actions Review
> http://www.w3.org/WAI/PF/HTML/track/actions/open
>         3. Time Tracks Feedback from Google
> http://lists.w3.org/Archives/Public/public-html-a11y/2011Jan/0152.html
>    * Summary of Action Items
>
> <janina> agenda: this
> Identify Scribe
>
> <scribe> scribe: JF
> Actions Review http://www.w3.org/WAI/PF/HTML/track/actions/open
>
> <silvia> close Action-98
>
> <trackbot> ACTION-98 Create a statement with geoff to forward need for
> caption and description techniques for wcag closed
>
> JF: re Action 98, posted draft to the list for CFC, and no feedback
> received
>
> should forward to the appropriate stakeholders
>
> <silvia> Action-88?
>
> <trackbot> ACTION-88 -- Sean Hayes to review Media Fragment URI 1.0
> http://www.w3.org/TR/2010/WD-media-frags-20100624/ -- due 2010-11-24 --
> OPEN
>
> <trackbot> http://www.w3.org/WAI/PF/HTML/track/actions/88
>
> <silvia> Action-96?
>
> <trackbot> ACTION-96 -- John Foliot to media Sub Team to revisit bug 11395
> (Use media queries to select appropriate <track> elements) -- due
> 2011-01-06 -- OPEN
>
> <trackbot> http://www.w3.org/WAI/PF/HTML/track/actions/96
>
> Re: Action 88 - will leave as is, needs to go back to PF
>
> <Sean> can you make the due date on 88 end of March
>
> Action 96 - reassign to Eric Carlson
>
> <silvia> close Action-97
>
> <trackbot> ACTION-97 Follow up on bug #9673 closed
>
> Action 97 - to be closed
>
> <silvia> Action-99?
>
> <trackbot> ACTION-99 -- Janina Sajka to annotate 9452 with clear audio
> discovery and selection, as well as independent control of multiple
> playback tracks -- due 2011-01-19 -- OPEN
>
> <trackbot> http://www.w3.org/WAI/PF/HTML/track/actions/99
>
> Action 99
> Time Tracks Feedback from Google
> http://lists.w3.org/Archives/Public/public-html-a11y/2011Jan/0152.html
>
> Added agenda item - overview of FCC status/situation
>
> <Judy> http://www.fcc.gov/cib/dro/VPAAC/
>
> Judy: VPAAC - Video Programming Accessibility Advisory Committee
>
> recommends looking at the Mission Statement (Word doc:
> http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-303943A1.doc)
>
> meetings and actions with tight timelines around video accessibility -
> captioning and descriptive audio
>
> some awareness of work that is happening at W3C
>
> Janina: interested to understand what this applies to, penalties, etc.
>
> Geoff: there will also be rules about the amount of video description, as
> well as requirements for emergency information
>
> they are also looking at ensuring that television shows already captioned
> for on-air broadcast are also captioned when they move to the web
>
> this now involves SMPTE
>
> and SMPTE TT will likely emerge as a recommendation from the committee
>
> Janina: unless we find accessibility issues with this
>
> this will potentially involve massive amounts of programming (TV shows)
>
> including older content as well as future content
>
> +q
>
> <silvia> +q
>
> Judy: can we get the differences between SMPTE TT (which is a derivative
> of TTML) and TTML itself?
>
> adds the ability to add background images, as well as binary data
>
> also some additional metadata content
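>
> A rough, illustrative TTML/SMPTE-TT fragment of the kind of extension
> being described. The SMPTE namespace URI and the element and attribute
> names below are assumptions based on the draft documents and may not
> match the published SMPTE-TT spec exactly; the snippet only illustrates
> carrying an image alongside ordinary TTML caption content:
>
>   <tt xmlns="http://www.w3.org/ns/ttml"
>       xmlns:smpte="http://www.smpte-ra.org/schemas/2052-1/2010/smpte-tt">
>     <head>
>       <metadata>
>         <!-- image data carried inline, referenced from the body below -->
>         <smpte:image xml:id="img1" imagetype="PNG" encoding="Base64">
>           iVBORw0KGgo...
>         </smpte:image>
>       </metadata>
>     </head>
>     <body>
>       <div smpte:backgroundImage="#img1">
>         <p begin="00:00:05.00" end="00:00:09.00">Caption text here</p>
>       </div>
>     </body>
>   </tt>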
>
> JF: are broadcasters aware of what the browser vendors will or won't
> support?
>
> Sean: we can already support it; it doesn't require native support for
> this to work. Will likely wait to see how the market plays out
>
> Silvia: SMPTE TT is a new format - how much content is currently available?
>
> Geoff: there is not yet a lot of implementation, but there is one major
> supporter - UltraViolet - which is a DRM-like solution for viewing content
> from the cloud
>
> since SMPTE TT is based on TTML, there is potential for growth
>
> Eric: is SMPTE TT a full-profile subset of TTML?
>
> Sean: yes
>
> Judy: given the superset nature of SMPTE TT, to what extent do the added
> features align with the accessibility user requirements that we've
> uncovered?
>
> Sean: the addition of images came from a request from Asian territories
>
> they would rather not use actual fonts, and instead have images for more
> 'hand-drawn' character sets
>
> the binary data is mostly for commercial requirements, for set-top boxes,
> etc.
>
> not really for user benefit, but rather for operator benefit
>
> Janina: one of the other things coming from the FCC work is requirements
> for devices being sold in the US market; there will be more of these types
> of devices, and more regulations to follow
>
> <kenny_j> Need to drop off the call for another meeting. bye all.
>
> Synopsis of questions re: Time Tracks
>
> Silvia: the track element allows us to associate external caption files,
> subtitle files and other text files with videos
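>
> A minimal markup sketch of the association described here - the file
> names, language codes and labels are illustrative only:
>
>   <video src="talk.webm" controls>
>     <!-- external caption and subtitle files associated via <track> -->
>     <track kind="captions" src="talk-captions-en.vtt" srclang="en"
>            label="English captions">
>     <track kind="subtitles" src="talk-subtitles-de.vtt" srclang="de"
>            label="Deutsch">
>   </video>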
>
> Judy: is there a mechanism that can discover those assets?
>
> +q
>
> Eric: the track element is for things that have timing with them
>
> so if the description has timing info that needs to be displayed in sync
> with the video, then it is appropriate to use the track element
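>
> A sketch of the timed case described here - a description track whose
> cues carry their own start and end times (for example a cue such as
> "00:00:05.000 --> 00:00:09.000 / The presenter walks to the podium."),
> so the text can be rendered or spoken in sync with the video. The file
> name and cue content are illustrative only:
>
>   <video src="lecture.webm" controls>
>     <!-- each cue in the referenced file is time-aligned with the video -->
>     <track kind="descriptions" src="lecture-descriptions-en.vtt"
>            srclang="en" label="Text descriptions">
>   </video>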
>
> Sean: we've identified that there is no mechanism for labeling a
> transcript as such - there is no semantic link-up at this time
>
> <gfreed> geoff needs to go-- will read the minutes later this evening.
>
> Judy: a case can be made that access to a transcript would serve certain
> user needs for a11y
>
> +q
>
> Janina: we've identified that if there is timing data, it should be linked
> to the video, but even if a transcript has no timing it may need to be
> programmatically associated with the video nonetheless
>
> Judy: the order of presentation/positioning
>
> that has been a problem in the past
>
> if we are trying to support multiple media formats - foolproof
> discoverability and shareability
>
> discussion about discoverability versus mechanisms for delivery
>
> Eric: discussion is not that there is disagreement on this, but how we
> deliver it - in sync (with time)
>
> it makes no sense to try and repurpose track and source for
> non-time-aligned content
>
> how does the content author package it?
>
> Judy: so do we need another element?
>
> given that we are under a very tight timeline at this point?
>
> Eric: don't think we need a different/new element
>
> echoes Silvia's observation that a transcript would be available for all
> users
>
> +Q
>
> <Judy> eric: you could just do the association with an attribute
>
> <Judy> jf: that would take us down the same path as with longdesc
>
> <Judy> ...we need to be able to package the transcript in some way that
> makes it available to users, not just visible on screen
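>
> Neither of the two shapes below is in the spec - this is only a sketch
> of the alternatives being discussed. The "transcript" attribute in (a)
> is purely hypothetical, and the aria-describedby variant in (b) uses
> existing markup but is not something the group has agreed on:
>
>   <!-- (a) hypothetical attribute-based association -->
>   <video src="talk.webm" controls transcript="talk-transcript.html"></video>
>
>   <!-- (b) programmatic association using existing markup, with a
>        visible link so the transcript is available to all users -->
>   <video src="talk.webm" controls aria-describedby="transcript-link"></video>
>   <p id="transcript-link">
>     <a href="talk-transcript.html">Read the full transcript</a>
>   </p>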
>
> <silvia> http://www.w3.org/WAI/PF/HTML/wiki/Media_Multitrack_Media_API
>
> Janina: bottom line is that we do not have a means of associating a
> transcript with the video resource
>
> whether an element or an attribute
>
> Silvia, are you on mute?
>
> <janina> Silvia, we don't hear you
>
> <Sean> try redialling. not hearing you
>
> Judy: we should record everything we can in terms of what is still open
>
> Silvia: we should have an email discussion on transcript
>
> (JF will check for that bug and post to the list)
>
> Eric: it's not an issue when the overall durations are not the same, but
> rather when the internal timing information is not the same
>
> when segments of one don't exactly overlap segments of the other
>
> there is no way of describing those associations
>
> Silvia: on the multi-track API
>
> will summarize the discussions and an email thread from last fall into a
> wiki page for further discussion
>
> then we will re-start a new mail thread
>
> Janina, another issue is if the user wants to control the secondary content
> - change font size, colors, adjust audio levels, etc.
>
> Janina: on one hand, this is very specific to Operating Systems
>
> but what we should be discussing is a systematic way for authors to create
> content, and signify this to the browser
>
>

Received on Friday, 4 February 2011 18:35:47 UTC