Re: Tech Discussions on the Multitrack Media (issue-152)

On Wed, Feb 16, 2011 at 6:33 PM, David Singer <singer@apple.com> wrote:
> I think we might be in agreement here, but I am not being clear.
>
> On Feb 16, 2011, at 15:22 , Silvia Pfeiffer wrote:
>
>>> b) and the timing does not need to change, but the audio description has, as part of its mix-down, the appropriate portions of the main audio, then make the video the primary source, and offer a <track> which has multiple sources, one or more of which are the plain audio, others are the audio description
>>
>> That would require the author to pull the main video into two separate
>> resources
>
> no...I am not being clear.
>
> <video src="just the video, madam" />
> <track src="primary audio" />
> <track src="audio description mixed with the right bits of primary audio" kind="audio-desc-of-video" />

I think I did understand correctly. Now you have three resources: one
with just video, one with just main audio and one with mixed-in audio
description, rather than just two: the main resource (a/v) and the
mixed-in audio description.

What I am saying is that this requires the author to split the main
resource (a/v) into two separate resources, one audio-only and one
video-only, which is not the typical way in which <video> deals with
resources. Once you factor in the different formats (mp4, webm, ogg),
this can result in quite an explosion of files.
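
(To illustrate with hypothetical file names, assuming David's model
where a <track> can carry its own <source> children for format
fallback: an author who today serves the programme as one muxed file
per format would instead need something like

<video>
  <source src="video-only.webm" />
  <source src="video-only.mp4" />
  <source src="video-only.ogv" />
  <track>
    <source src="main-audio.oga" />
    <source src="main-audio.m4a" />
  </track>
  <track kind="audio-desc-of-video">
    <source src="described-audio.oga" />
    <source src="described-audio.m4a" />
  </track>
</video>

i.e. every track needs its own file in every format the author wants
to support. That's only a sketch, not markup that works today.)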


>>
>> In case of a mix-down audio description - which I regard as the 20%
>> use case
>
> Oh.  I think it's the 90% use case.  Usually there isn't enough quiet time in the primary audio to give the audio description.

Yes, but you don't have to provide the audio description as a mixed-in
resource; you can just as well provide only the recorded speaker and
let the browser do the mixing. When I have authored audio descriptions
in the past, I recorded my voice while listening to the main audio on
headphones, so I ended up with a separate audio track for the audio
description. I would think that's the easiest and best way of doing
it, because it does not degrade the main audio and allows for the
"clear audio" use case.


>>> c) and the timing needs to change; offer two or more sources, one or more of which have the normal audio and normal timing, and one or more of which have the audio description with the revised timing
>>
>> The problem here is that not just the audio changes timing, but also
>> the video.
>
> again, I am being unclear
>
> <video>
>  <source src="the usual normal program, muxed audio and video" />
>  <source src="a described program, with timing changes in it, muxed audio and video" kind="audio-desc-of-video" />
> </video>

This is not possible: the <source> elements only provide format
alternatives, and the browser selects the first one whose codec it
understands. They have nothing to do with multitrack. You would have
to put them inside a <track> element if you want to follow Eric's
model, but then you end up with two alternative media resources inside
<track> elements and no resource on the <video> itself:

<video>
  <track>
    <source src="the usual normal program, muxed audio and video" />
  </track>
  <track>
    <source src="a described program, with timing changes in it, muxed audio and video" kind="audio-desc-of-video" />
  </track>
</video>


>>> while it is technically true that the user-agent may be able to make all sorts of ingenious displays, it's not a great system design to assume that the UA and the user will have the time or skills to make the choices over lots of ingenious possibilities.
>>
>> We do in fact have to discuss how the display of multiple videos would
>> work. Would they be expected to be displayed as picture-in-picture?
>
> I'd love to be able to give them display areas, and adjust the page as needed to suit.  That's why I originally thought of media queries; they can be used as needed to adjust the entire page layout, and also 'style' the tracks in the video.

Can you give an example of how you think the media query would achieve
this? I don't follow.


Thanks,
Silvia.
