02 Nov 2011

See also: IRC log




<marie> TPAC11 Breakout Session: HTML5 AV Club

classically, television was a live, real-time broadcast medium: many people watching the same thing at once; optimized for showing the same content to lots of people at once

DVD players to iPods to personal video cameras: devices designed to allow video to be shared with smaller groups

use cases: 1) minimize latency, built with the assumption of delivery over the internet; 2) (mis)used for the classic television use case of the same thing to everyone at once

Kevin: ... missing the two-way component; 3) previously recorded and edited (YouTube case).
... classic case: news. If you watch cable news, it is a sequence of pre-cut stories that repeat over time
... speech radio / music radio equivalent: same thing
... make transitions seamless, not at the script level but the browser level: buffer to here, then play this, then play this
... and then I would contend that movies and long-form drama are this case too, because they have a scene structure

Kevin: so you could define them as a playlist of sequences.
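The playlist-of-sequences idea can be sketched at the script level, the level Kevin argues is insufficient for seamless transitions. This is a minimal sketch; `makePlaylist` and the element wiring are illustrative names, not part of any spec:

```javascript
// Sketch: play a list of clip URLs back to back by swapping the
// <video> source on each 'ended' event. Because the swap happens in
// script, the transition is not seamless -- which is exactly the gap
// discussed above.
function makePlaylist(urls) {
  let index = 0;
  return {
    current: () => urls[index],
    // Returns the next URL, or null when the playlist is exhausted.
    advance: () => (index + 1 < urls.length ? urls[++index] : null),
  };
}

// Browser wiring (illustrative):
// const playlist = makePlaylist(['a.webm', 'b.webm', 'c.webm']);
// video.src = playlist.current();
// video.addEventListener('ended', () => {
//   const next = playlist.advance();
//   if (next) { video.src = next; video.play(); }
// });
```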

Kevin: The other case not attended to in the HTML5 video space is user-generated content: keep it, an archival model, a small number of viewers, compressing it down to fit in the network
... is this the right use case list, and are these cases well represented in the HTML spec?

Mark Vickers: 1) there is only one spec, so it has to work for everyone. 2) Web and TV is interested in all use cases, and interested in the gaps

Mark: playlist is one we have on the list to talk about tomorrow
... play the next sequence when this one is done. Another need...
... is adaptive streaming

Kevin: adaptive streaming is one where the spec is almost there; it satisfies a legal worry, not a technical worry

Mark: if you are on a shared internet connection and you don't have sufficient bandwidth, you choose a lower-bandwidth stream

Kevin: download the damn thing and play it

Mark Watson: choose a bit rate commensurate with the bandwidth you have, or lower; that means you can start playing right away... adaptive streaming is about fast start-up and low buffering

Kevin: doing it at chapter checkpoints

Mark: but you need switch points every 2-3 seconds to do adaptive streaming correctly

Mark Vickers: even when we have the full right to download, we use adaptive streaming, because it starts quicker

Mark Vickers: technological leap ahead

Mark Vickers: full download and adaptive streaming are well supported
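The bit-rate choice Mark Watson describes reduces to picking the highest rendition at or below the measured bandwidth. A minimal sketch, assuming an invented rendition list and headroom factor (real players also weigh buffer level and rate history):

```javascript
// Sketch: pick the highest-bitrate rendition that fits the measured
// bandwidth, with a safety headroom; fall back to the lowest rendition
// so playback can start right away even on a slow link.
function chooseBitrate(renditionsKbps, measuredKbps, headroom = 0.8) {
  const budget = measuredKbps * headroom;
  const sorted = [...renditionsKbps].sort((a, b) => a - b);
  let choice = sorted[0]; // lowest: always have something playable
  for (const rate of sorted) {
    if (rate <= budget) choice = rate;
  }
  return choice;
}
```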

Eric: extremely difficult, to the point of not always working well. Adding the kind of API people are talking about...

to do all the media loading and switching for adaptive streaming, there are technical issues to making that work well that cannot be resolved

Eric: for example, there is an experimental API in Chrome to write JavaScript that loads media and feeds it to the decoder; JavaScript that does all the adaptive streaming, everything except decode the media
... I argue that it will be very, very difficult to make that work as well as we want it to work, because there are some difficult or impossible technical issues that cannot be solved in the JavaScript layer
... for example, caching media: all the data that gets loaded by JavaScript is stored in memory
... because of security constraints, we cannot give JavaScript arbitrary access to the hard drive, so I don't think it will be possible for the JavaScript to do sufficient caching of the media

Mark Vickers: you may not want to do everything in JavaScript.

Eric: in use now: load media from the server, figure out what byte ranges to fetch, feed it down to the decoder; and that works as long as you don't care about looping, or allowing the user to seek back into content...

without going back to the server. I don't know how to make it more intelligent...
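The byte-range bookkeeping Eric describes, mapping a segment index to an HTTP Range request, might look like the sketch below. The segment-size list and helper name are assumptions for illustration; a real player derives the offsets from a container index rather than a hard-coded list:

```javascript
// Sketch: given the byte size of each media segment, compute the HTTP
// Range header for segment i. A real player would read these offsets
// from the container's own index instead of a literal array.
function rangeForSegment(segmentSizes, i) {
  let start = 0;
  for (let k = 0; k < i; k++) start += segmentSizes[k];
  const end = start + segmentSizes[i] - 1; // Range end is inclusive
  return `bytes=${start}-${end}`;
}

// Usage (illustrative):
// fetch(url, { headers: { Range: rangeForSegment(sizes, 2) } })
```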

Mark Watson: why couldn't it hold onto what it is given?

Eric: it doesn't have the logic about which data is being loaded or how much should be kept in cache; the data is in JavaScript

Mark Watson: what kind of data?

Eric: how the user is viewing the media, the duration of the whole thing, the likelihood...
... when a seek happens...

Mark: I see what you mean. Are you assuming... a different model: provide the position in the global timeline, then it would have the map of what media goes where...

Kevin: stream orientation... the classic MPEG-TS model...

Mark W: the current experimental WebKit model: I wouldn't propose it as a product model, but there is a question about what goes on in JavaScript and what goes on underneath

Mark W: I imagine there is a smart division between those layers.

Eric: I don't doubt your sincerity, but we need to be careful about adding something here because you have something there (in existing adaptive streaming solutions)

Kevin: 2-way conversation: we have a spec heading towards that (WebRTC), and it is viewed as a different thing from the adaptive streaming world.
... the other, distinct case, where I accept adaptive streaming makes more sense: near-live, a sports game (1-way near-live), that may be a case...
... the key difference is being close enough... 1-way near-live, a two-hour phone-in type thing, you want to be listening in near real time

Clarke: tradeoff bandwidth and computational power

Mark K: in-band is one way; there are also out-of-band multiple tracks, with no way of saying which one is best...

Eric: handled by existing spec

Kevin: the archival case is interesting, one case we have had: how do we deal with codecs, patent things, hardware things, and what I am wondering...
... is there space for archival, non-patented, non-compressed? Is it worth building up use cases for playing back lightly compressed video, as well as focusing on the H.264/WebM view where everything...
... is rather strongly compressed.

Eric: I have thought about it. Two things: I don't think it is worth putting in the spec; I don't think it is useful to take up space in the spec...
... what I do think is useful... so... if not in the spec, the right way to handle it is to put pressure on browser vendors to support it.

Kevin: the principle that once you have made the media we want to keep it around... somewhat related to that is the input side of things...
... which is in the spec but no one supports it: file input, particularly on mobile, audio and video... but the browsers cannot import them
... is it a matter of grumbling to people...

Eric: and it could imply give me an interface to take a new photo

Frank O: a quality-of-implementation issue, not a matter of changing the spec?

Kevin: at the moment the spec says you will get images...
... Write a test suite and grumble...
... what else would be nice to do with AV that we cannot do now?

Mark Watson: protection of UGC: you want your family to view it in the cloud, and have them be able to play it back

Eric: ask people outside this room: save it like an image. Control-click audio or video and save it to the local system. If we enable that, we need some way for content owners to opt out of it

Kevin: a saveable flag that defaults to YES
... and then you need to be able to set it
... user editability: is it asked for at all?
... once I can capture, can I trim it?

Mark: there is a Media Fragments API?

Eric: not the same thing. It is only attributes on the URL; to change it you have to change the source

Mark V: a URL can point into a fragment, but it doesn't allow you to play from this point to that point, because each is a separate URL
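The limitation being discussed: Media Fragments put the clip times in the URL itself, so playing a different span means assigning a new source. A sketch of building such a URL; the `#t=start,end` syntax is from the Media Fragments URI spec, while the helper name is our invention:

```javascript
// Sketch: build a Media Fragments temporal URL, e.g.
// "video.mp4#t=10,20" plays from 10s to 20s. To play a different span
// you must assign a new src -- the point made above.
function fragmentUrl(baseUrl, startSec, endSec) {
  return endSec === undefined
    ? `${baseUrl}#t=${startSec}`
    : `${baseUrl}#t=${startSec},${endSec}`;
}
```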

Kevin: ... plays into the editing question as well.

Mark Watson: a tiny one. Presently, you can find out from the media element how much was buffered, but if the player is intelligent enough to stop playback, it knows how much...

Mark: ... it needs to buffer, so it could show a progress bar, which is a better user experience than a spinning "wait" symbol.
... right now, I cannot do a progress bar.
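The progress-bar case can be sketched against the media element's `buffered` attribute (a `TimeRanges` object). Note the target amount is exactly what Mark observes the element does not expose, so here it is an assumed parameter; a plain object stands in for `TimeRanges`:

```javascript
// Sketch: how far ahead of the playhead is buffered, as a fraction of
// a target amount (e.g. how much the player wants before resuming).
// `buffered` mimics the TimeRanges interface: length, start(i), end(i).
function bufferAheadFraction(buffered, currentTime, targetSec) {
  for (let i = 0; i < buffered.length; i++) {
    if (buffered.start(i) <= currentTime && currentTime <= buffered.end(i)) {
      const ahead = buffered.end(i) - currentTime;
      return Math.min(1, ahead / targetSec);
    }
  }
  return 0; // playhead not inside any buffered range
}

// Usage (illustrative, in a browser):
// progressBar.value = bufferAheadFraction(video.buffered, video.currentTime, 5);
```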

Kevin: scrubbing (dragging through to find where you want because there are no chapters)

Kevin: is that a different mode, or a player's problem?

Mark: if it is VOD rather than a live stream, the player knows how long it is and what is available, and can pull only those things to create whatever user interface you want

Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.136 (CVS log)
$Date: 2011/11/02 22:31:43 $
