IRC log of mediafrag on 2008-10-20

Timestamps are in UTC.

06:38:56 [RRSAgent]
RRSAgent has joined #mediafrag
06:38:56 [RRSAgent]
logging to
06:38:58 [trackbot]
RRSAgent, make logs public
06:38:58 [Zakim]
Zakim has joined #mediafrag
06:39:00 [trackbot]
Zakim, this will be IA_MFWG
06:39:00 [Zakim]
ok, trackbot; I see IA_MFWG()3:00AM scheduled to start in 21 minutes
06:39:01 [trackbot]
Meeting: Media Fragments Working Group Teleconference
06:39:01 [trackbot]
Date: 20 October 2008
06:39:40 [raphael]
06:39:49 [raphael]
Chairs: Erik, Raphael
06:41:26 [nessy]
nessy has joined #mediafrag
06:42:39 [davy]
davy has joined #mediafrag
07:07:36 [raphael]
zakim, call IlesC
07:07:36 [Zakim]
I am sorry, raphael; I do not know a number for IlesC
07:08:11 [raphael]
zakim, call Iles_C
07:08:11 [Zakim]
ok, raphael; the call is being made
07:08:12 [Zakim]
IA_MFWG()3:00AM has now started
07:08:14 [Zakim]
07:11:04 [nessy]
Meeting opened 9:08
07:11:16 [nessy]
round of introductions
07:11:20 [nessy]
07:11:22 [nessy]
07:11:36 [raphael]
scribenick: raphael
07:12:00 [raphael]
s/round of introductions/TOPIC: 1. Round of introductions
07:12:22 [raphael]
Davy: also in Multimedia Lab, IBBT, Ghent (BE)
07:13:07 [raphael]
Silvia: involved in MPEG-7, MPEG-21, developed Annodex (annotation format for ogg media files)
07:13:36 [raphael]
... started my own start-up for measuring the audience of video on the web + consultant for Mozilla
07:13:47 [raphael]
... developed the TemporalURI specification, 6 years ago
07:14:55 [raphael]
Guillaume Olivrin, South Africa, focus on accessibility, how do you attach specific semantics to parts of media
07:15:29 [raphael]
Daniel Park, Samsung, co-chair of the Media Annotation, focus on IPTV (background in wireless networking)
07:16:06 [RRSAgent]
I have made the request to generate raphael
07:17:51 [raphael]
Andy Heath, Open University, UK, background in e-learning, but develops far more general technologies, focus on accessibility
07:18:57 [raphael]
... experience in standards such as LOOM, DC, SKOM
07:19:45 [raphael]
07:19:53 [raphael]
07:20:50 [raphael]
Colm Doyle: Blinkx
07:23:02 [raphael]
Larry Masinter: Adobe, experience in co-chairing HTTP group, focus on acquisition of metadata
07:25:24 [guillaume_]
guillaume_ has joined #mediafrag
07:26:22 [raphael]
Khang Cham, Samsung, focus on IPTV
07:27:11 [raphael]
Yves: W3C team contact, expertise in protocols, web services
07:28:37 [nessy]
07:29:13 [nessy]
... working group charter
07:31:11 [raphael]
Larry: important to first define requirements for what these URIs will be used for
07:34:16 [raphael]
... it might happen that you cannot satisfy all the requirements with a URI, don't put that out of scope now
07:34:33 [raphael]
TOPIC: 2. Use Cases Discussion (Part 1)
07:34:44 [raphael]
Photo Use Case:
07:35:57 [raphael]
Slides at:
07:36:03 [raphael]
Erik goes through the slides
07:40:03 [raphael]
Erik: take parts of images ... and assemble them together in a slideshow
07:40:23 [raphael]
Guillaume: unclear the value of the fragments here
07:40:35 [raphael]
... I understand fragment as taking a part of a large thing
07:58:18 [raphael]
raphael has joined #mediafrag
07:59:19 [raphael]
Larry: is it worth at all to look at Spatial URIs? Is it for doing partial retrieval?
07:59:23 [raphael]
Raphael: mention maps applications
07:59:26 [raphael]
Larry: but they are interactive!
07:59:30 [raphael]
Raphael: mention multi-resolution images, the image industry has a huge need and desire to expose high-resolution versions of images
07:59:33 [raphael]
Larry: they do have JPEG2000 and protocols
07:59:36 [raphael]
Silvia: SMIL has elaborated on the need for spatial fragments
07:59:40 [davy]
davy has joined #mediafrag
07:59:59 [raphael]
Jack: important needs in the SMIL community and SVG ... image maps, pan zoom, cropping
08:00:02 [raphael]
Erik: continues the presentation; after temporally assembling parts of images into a slideshow, assemble two parts of an image into a new one (stitch)
08:00:05 [erik]
erik has joined #mediafrag
08:00:05 [raphael]
... Existing technologies: RSS and Atom for the playlist generation
08:00:06 [jackjansen]
jackjansen has joined #mediafrag
08:00:08 [raphael]
... W3C SMIL: XML-based markup language, requires a SMIL player
08:00:11 [raphael]
... MPEG-21: Part 17 for fragment identification of MPEG Resources, client-side processing ... pseudo playlist
08:00:14 [raphael]
... MPEG-A: MAF (Media Application Format) that combines MPEG technologies
08:00:17 [raphael]
... XSPF (spiff): XML Shareable Playlist Format: Xiph Community
08:00:19 [raphael]
... Discussion: is it out of scope or not? specific use cases around? other technologies around?
08:00:26 [davy]
davy has joined #mediafrag
08:00:38 [RRSAgent]
I have made the request to generate raphael
08:00:52 [davy]
davy has joined #mediafrag
08:00:59 [nessy]
nessy has joined #mediafrag
08:02:45 [Kangchan]
Kangchan has joined #mediafrag
08:02:59 [raphael]
Silvia: we are mainly looking at audio and videos files, but a video is a sequence of images
08:03:24 [raphael]
Larry: there are different servers and clients
08:03:40 [spark3]
spark3 has joined #mediafrag
08:04:32 [raphael]
Silvia: one way to look at a criteria is: is it a pure client-side issue or server-side + client-side problems?
08:05:16 [raphael]
Larry: even if it is only a client-side issue, it might be worth doing some standardisation
08:05:29 [raphael]
... the main point of still images fragment is the interactivity
08:09:02 [raphael]
Raphael: is interactivity the key interest in spatial fragments?
08:09:39 [raphael]
Larry: there is a lot of work in this area, would recommend focusing on the temporal issue
08:10:07 [raphael]
... it is also a good exercise to look at the out-of-scope use cases, helps to shape the scope
08:12:30 [raphael]
Jack: URI is good because it is the web, the client is not necessarily aware of the time dimension
08:13:32 [raphael]
... HTML has already a notion of Area, so don't encode it in a URI
08:14:51 [raphael]
Larry: need to be careful on URIs, resources, representations
08:16:22 [raphael]
... example of an image: need to decode it, take the parts, re-encode it
08:16:32 [raphael]
... JPEG2000 might have a direct way to do that
08:17:41 [raphael]
Guillaume: create mosaic, collage of parts of media
08:17:53 [RRSAgent]
I have made the request to generate raphael
08:19:56 [raphael]
Yves: it depends if the transformation needs to be on the client or not
08:20:57 [raphael]
Jack: be careful not to put SVG in a URI :-)
08:21:20 [raphael]
... good balance on which processing can be on the client side, and what is worth putting in a URL
08:22:05 [raphael]
... is it better to have the processing in the URL?
08:23:23 [raphael]
Erik: we question again the interest of the spatial fragment
08:25:01 [raphael]
Silvia: is it a question of the size of the media? Large: worth to have fragment, Small: not worth
08:26:33 [raphael]
Larry: define what do you mean by media
08:27:18 [raphael]
... it is reasonable to limit yourself to videos
08:27:42 [raphael]
Silvia: SMIL and Flash are interactive media, not necessarily one timeline
08:28:10 [raphael]
... we focus on a resource with one timeline
08:29:34 [raphael]
... there is a whole swath of codec issues
08:31:32 [raphael]
Larry: define markers in videos
08:33:21 [Yves]
time... what is the reference of time for a video, embedded time code? 0 for the start?
09:15:59 [RRSAgent]
I have made the request to generate raphael
09:16:31 [davy]
davy has joined #mediafrag
09:16:37 [raphael]
Coffee break
09:16:42 [raphael]
Map Use Case:
09:16:57 [raphael]
scribenick: erik
09:18:30 [erik]
Raphael: Map UC Description
09:18:36 [nessy]
09:20:08 [erik]
Raphael: Annotation is key
09:20:43 [guillaume]
guillaume has joined #mediafrag
09:22:23 [Kangchan]
Question: what is the relation between the Geolocation Working Group and Web Map Services?
09:23:29 [erik]
Raphael: UC examples using Yahoo, Google & Microsoft
09:26:33 [erik]
Jack: what we see here are URIs for the applications, not images
09:27:46 [erik]
Raphael will look deeper into different specs over the next couple of weeks for this Map UC
09:29:43 [erik]
Davy & Jack: is this a valid UC? will our spatial URL addressing scheme be used by Maps Applications?
09:30:56 [fsasaki]
fsasaki has joined #mediafrag
09:31:03 [erik]
Raphael: as Larry said this morning, out-of-scope UCs are valid to come up with our final WG scope
09:32:42 [guillaume]
Must document the out-of-scope UCs to explain why they are out of scope.
09:32:50 [erik]
Silvia: there might be a UC when we are talking about really large images (cf. medical images at really high resolutions)
09:34:37 [erik]
Silvia: having a way to get a subpart of such a big image is nice to have, but implementation is something different ... a lot of complications, certainly in some server-side implementations
09:37:26 [erik]
Guillaume: codec issues not to be underestimated, a nice addressing scheme vs. server-side complexity
09:39:09 [erik]
Silvia: should look further than just server-side complexity, solutions for certain codecs will come around eventually if needed
09:39:59 [erik]
Jack: practical issues vs. fundamental issues have to be taken into account within this group
09:40:50 [erik]
Jack: media fragments are needed because some things can not e expressed today
09:41:37 [erik]
s/not e/not be
09:43:06 [erik]
Raphael: is it worth having an overview of the TimedText WG?
09:45:58 [nessy]
Guillaume: URI fragment identifier for text/plain:
09:46:07 [Yves]
(multi-resolution formats are a good example of a single file containing multiple resolutions, maybe better than the map application)
09:48:33 [erik]
Raphael: Zoomify is good example of UC of very big images (life sciences) using fragments
09:50:57 [erik]
Raphael: task of this group to ensure interoperability of different standards? (e.g. MPEG-21 URI to SVG)
09:51:14 [erik]
09:53:05 [erik]
Silvia: defining the mappings should be out of scope for this WG
09:53:45 [erik]
Jack: it is worthwhile to test our scheme against the others out there
09:54:19 [erik]
Silvia: last thing to do & should be straightforward by then if we did a good job
09:55:01 [erik]
Raphael: what about spatial dimension?
09:55:36 [erik]
Silvia: the temporal addressing need is biggest, but the spatial addressing need is also valid
09:57:32 [raphael]
TOPIC: 3. Use Case Discussion (Part 2)
09:57:49 [erik]
Silvia presenting the Media Annotation UC
09:57:52 [raphael]
10:00:27 [raphael]
Annotation can be attached to the full media resource or to fragments of media resources
10:00:37 [raphael]
s/Annotation/Silvia: Annotation
10:00:52 [raphael]
scribenick: raphael
10:01:21 [raphael]
Silvia: annotations to fragments are relevant for this group
10:03:05 [raphael]
Guillaume: can the structure of the video be represented in the URI
10:03:53 [raphael]
Silvia: difference between the representation of the fragment and its semantics
10:06:08 [spark3]
if necessary, what about adding a new UC (naming use case for fragment) into the Media Annotation WG UC ?
10:07:42 [raphael]
Silvia: drawing on the board
10:15:01 [erik]
Jack: there's only 1 timeline for timed media
10:16:14 [erik]
Jack: there's only 1 coordinate system for spatial media
10:17:42 [erik]
Jack: Annotation UC is important because we're reasoning on a higher abstraction level
10:18:42 [raphael]
Jack: loves that use case since it is purely about the fundamental description and indexing of a medium
10:18:48 [raphael]
10:19:44 [raphael]
Silvia: goes through the advantages of a possible URI scheme for media fragments
10:21:41 [raphael]
... actually motivating the need for media fragments
10:22:00 [raphael]
... shows the picture at
10:22:54 [raphael]
... jumps into the track problems
10:23:14 [raphael]
... there are actually 3 dimensions: space, time and track
10:27:03 [raphael]
... TemporalURI just deals with cropping, no track awareness
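The three dimensions Silvia lists could, purely as an illustration, be combined in one fragment string and parsed like query parameters. The t/xywh/track parameter names below are hypothetical, not an agreed syntax:

```python
# Purely illustrative: combining the three dimensions (time, space, track)
# in one fragment string. The parameter names t/xywh/track are hypothetical
# here, not an agreed-upon syntax.
from urllib.parse import parse_qs

fragment = "t=20,30&xywh=0,0,320,240&track=audio"
dims = {key: values[0] for key, values in parse_qs(fragment).items()}
print(dims)  # {'t': '20,30', 'xywh': '0,0,320,240', 'track': 'audio'}
```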
10:32:44 [RRSAgent]
I have made the request to generate raphael
10:33:53 [raphael]
Jack: rename this use case into 'Anchoring'
10:34:00 [raphael]
... annotation = RDF community
10:34:08 [raphael]
... structuring = SMIL community
10:35:38 [raphael]
Silvia: agree to rename it into Media Anchor Definition
10:35:44 [RRSAgent]
I have made the request to generate raphael
11:58:53 [davy]
davy has joined #mediafrag
11:59:19 [davy]
davy has joined #mediafrag
12:00:42 [raphael]
Lunch break
12:00:56 [raphael]
Media Delivery Use Case:
12:04:21 [jackjansen]
jackjansen has joined #mediafrag
12:05:27 [raphael]
scribenick: Jack
12:05:38 [Yves]
Scribe: Jack
12:05:40 [jackjansen]
scribenick: jackjansen
12:05:43 [raphael]
scribenick: jackjansen
12:06:02 [jackjansen]
zakim, who is the scribe?
12:06:02 [Zakim]
I don't understand your question, jackjansen.
12:06:04 [RRSAgent]
I have made the request to generate Yves
12:06:29 [jackjansen]
TOPIC: Media Delivery use case
12:11:02 [raphael]
Davy going through the slide at:
12:11:10 [raphael]
12:17:00 [jackjansen]
Various: (discussing slide 3, # vs. ? or ,): Can we use # as the only user-visible marker and use http-ranges or something similar?
12:18:19 [raphael]
Silvia drawing a communication channel between UA and servers
12:22:16 [raphael]
Discussion about the use of the "hash" character
12:27:40 [raphael]
Yves: use case is to extract a frame of a video, and creates a new image (so a new resource), use a '?'
12:28:04 [raphael]
... use case is to keep the context, use a '#'
12:29:19 [raphael]
Summary: there are use cases for both, should be further discussed tomorrow morning
12:29:32 [jackjansen]
summary: there are use cases for both. We will get back to the subject tomorrow.
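A minimal sketch of the distinction being discussed: a fragment ('#') stays on the client and names part of the same resource, while a query ('?') reaches the server and yields a new resource. The URI and the t=20,30 syntax are illustrative examples, not a spec:

```python
# Illustrative sketch: '#' (fragment) vs. '?' (query) for the same media URI.
# The URI and the t=20,30 syntax are hypothetical examples, not a spec.
from urllib.parse import urlparse

frag_uri = "http://example.com/video.ogv#t=20,30"
query_uri = "http://example.com/video.ogv?t=20,30"

f = urlparse(frag_uri)
q = urlparse(query_uri)

# The fragment is never sent to the server in the request line ...
print(f.fragment, repr(f.query))   # t=20,30 ''
# ... whereas the query is part of what the server sees.
print(q.query, repr(q.fragment))   # t=20,30 ''
```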
12:30:11 [spark3]
spark3 has joined #mediafrag
12:30:39 [guillaume]
déjà vu
12:31:03 [raphael]
Davy: explains the MPEG-21 Fragment identification
12:31:18 [raphael]
... use of the '#', but no delivery protocol
12:33:43 [raphael]
... mention also the proposal of Dave Singer: UA gets the first N bytes representing the headers with timing and byte-offset information of the media resource
12:34:39 [raphael]
... goes through an explanation of MPEG-21:
12:34:50 [raphael]
... 4 schemes
12:35:29 [raphael]
... ffp for the track
12:35:37 [raphael]
... offset for byte ranges
12:38:05 [jackjansen]
all: discussing #mp() scheme
12:38:07 [raphael]
... mp for specifying the temporal or spatial fragment (only for MPEG mime-type resources)
12:38:32 [jackjansen]
Silvia: whoever controls the mimetype also controls what is after the # in a URL
12:38:50 [jackjansen]
Jack: is surprised, but pleasantly so.
12:39:45 [jackjansen]
12:42:07 [raphael]
Davy: the 4th scheme is 'mask' (only for MPEG resources)
12:48:24 [raphael]
Jack: seems they structure the video resource and point towards this structure
12:49:19 [raphael]
Raphael: how many user agents can understand this syntax?
12:49:31 [jackjansen]
all: none, that we know of
12:49:41 [raphael]
Davy: I'm not aware of any ... although there is a reference implementation
12:50:47 [raphael]
Larry: http is not necessarily the best protocol to transport video
12:52:35 [Yves]
with video, it depends whether you want exact timing and control of the lag; in that case HTTP is not the best choice
12:54:03 [raphael]
Silvia: I would say that most video is transported over http
12:59:35 [raphael]
... RTP and RTSP have their own fragments, we should learn from them
13:00:27 [raphael]
... if they do not satisfy all our requirements, we can feed them so they extend the use of fragments in these protocols
13:00:44 [raphael]
Davy: goes through TemporalURI
13:02:24 [raphael]
... this is the only one that specifies a delivery protocol over http
13:06:45 [jackjansen]
Silvia: Real used to allow something similar to temporal URLs
13:06:57 [jackjansen]
Jack: thinks it may be part of the .ram files
13:07:34 [jackjansen]
Guillaume: Flash allows doc author to export subparts by name, these can then be accessed with url#name
13:08:44 [jackjansen]
Davy: continues with slide 6, http media delivery
13:08:52 [guillaume]
Guillaume: Flash could also embed internal links in movie attached to certain frames. Once compiled with specific option, fragment of the Flash movie could be accessed using #
13:14:01 [raphael]
Silvia: draws the four-way handshake
13:30:01 [raphael]
... 1st exchange: User requests
13:30:35 [raphael]
... UA does a GET <uri stripped of hash>, Range: time 20-30
13:31:53 [raphael]
... Server sends back a Response 200, with the content-range: time 20-30 + content-type + ogg header + time-range bytes 5000-20000
13:32:14 [raphael]
... (needs to create a new http header, 'time-range')
13:32:33 [raphael]
Raphael: can we use content-range: bytes ... ?
13:33:48 [raphael]
... UA does a GET <URI stripped of the hash>, Range: bytes 5000-20000
13:34:35 [raphael]
... Server sends back a Response 200, with the content-range bytes + the cropped data
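The hypothetical time-range handshake above could be sketched in Python. The 'time 20-30' Range value, the 'time-range' header, and the time-to-byte index are proposals from this discussion, not standard HTTP:

```python
# Sketch of the proposed handshake: the 'time 20-30' Range value and the
# server's time-to-byte mapping are hypothetical, following the discussion.

def parse_time_range(value):
    """Parse a proposed Range value like 'time 20-30' into (start, end) seconds."""
    unit, _, span = value.partition(" ")
    if unit != "time":
        raise ValueError("not a time range: %r" % value)
    start, _, end = span.partition("-")
    return float(start), float(end)

def to_byte_range(time_range, index):
    """Map a time range to a byte range via a toy (seconds -> byte offset) index,
    standing in for the server's knowledge of the media file layout."""
    start, end = time_range
    return index[start], index[end]

# Toy index: second 20 starts at byte 5000, second 30 ends at byte 20000.
index = {20.0: 5000, 30.0: 20000}
print(parse_time_range("time 20-30"))      # (20.0, 30.0)
print(to_byte_range((20.0, 30.0), index))  # (5000, 20000)
```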
13:39:29 [raphael]
Silvia: it is not implemented yet as far as I know
13:39:41 [raphael]
... discussion based on a lot of discussions with proxy vendors
13:41:34 [raphael]
Davy: could we apply the same four-way handshake with RTSP?
13:42:14 [raphael]
... RTSP specifies a Range Header, similar to the HTTP byte range mechanism
13:42:57 [spark3]
spark3 has joined #mediafrag
13:43:09 [raphael]
... RTSP could support temporal fragments by a two-way handshake (using Range header)
13:43:25 [raphael]
... Problem: spatial fragments are not supported!
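For reference, the RTSP Range header mentioned here (RFC 2326) already carries a temporal range in a PLAY request, expressed in normal play time (npt); the URL below is illustrative:

```
PLAY rtsp://example.com/video RTSP/1.0
CSeq: 3
Range: npt=20-30
```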
13:44:03 [raphael]
Jack: the spatial problem is kind of orthogonal
13:44:44 [raphael]
... the spatial fragment will not be about bytes range
13:44:59 [raphael]
Davy: cropping is more complex in images
13:46:42 [raphael]
Jack: you're right, I can create a non-contiguous QuickTime movie
13:48:17 [raphael]
... problem is it is not necessarily possible to generate a byte range from a time range
13:49:03 [raphael]
Silvia: a single byte range
13:51:56 [jackjansen]
all: the non-contiguous ranges may occur more often than we like. But maybe
13:52:20 [jackjansen]
... we can get away with ignoring them (because all relevant formats also have a contiguous form).
13:52:29 [jackjansen]
... need to discuss after the break.
13:53:17 [jackjansen]
raphael: suggest coffee break
13:53:39 [guillaume]
or need to coalesce
13:56:26 [RRSAgent]
I have made the request to generate raphael
13:56:27 [jackjansen]
Larry: please decouple the representation of how you refer to fragments from the implementations
13:57:16 [jackjansen]
... Also think about embedded metadata: if the original has a copyright statement, do you get it with every fragment?
13:57:22 [jackjansen]
13:58:16 [jackjansen]
Silvia: (on prev subject): wonders whether http can do multiple byte ranges
13:58:27 [jackjansen]
Larry: yes, I think so, with multipart
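HTTP does indeed allow several byte ranges in one request, answered as a multipart/byteranges response; a sketch with illustrative URL, boundary, and lengths:

```
GET /video.ogv HTTP/1.1
Host: example.com
Range: bytes=0-99,5000-5099

HTTP/1.1 206 Partial Content
Content-Type: multipart/byteranges; boundary=SEP

--SEP
Content-Type: video/ogg
Content-Range: bytes 0-99/20000

(first 100 bytes)
--SEP
Content-Type: video/ogg
Content-Range: bytes 5000-5099/20000

(next 100 bytes)
--SEP--
```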
14:02:19 [erik]
erik has joined #mediafrag
14:10:04 [nessy]
nessy has joined #mediafrag
14:29:21 [davy]
davy has joined #mediafrag
14:29:33 [jackjansen]
jackjansen has joined #mediafrag
14:29:42 [raphael]
raphael has joined #mediafrag
14:30:06 [erik]
rssagent, draft minutes
14:31:08 [erik]
rrsagent, draft minutes
14:31:08 [RRSAgent]
I have made the request to generate erik
14:33:48 [davy]
scribenick: davy
14:34:11 [davy]
Media Delivery UC
14:34:56 [davy]
14:35:40 [davy]
raphael discusses the description written by Michael on the wiki
14:36:56 [davy]
... 3 things: bookmarking, playlists, and interlinking multimedia
14:37:11 [davy]
silvia: definition of playlists is out of scope
14:37:35 [davy]
guillaume: playlist is about presentation
14:43:28 [davy]
raphael: regarding interlinking: temporal URIs can be described in RDF (RDF doc describing an audio file)
14:45:39 [davy]
... difference between URI and RDF (or SMIL, or ...): you need to parse the metadata
14:46:19 [davy]
... RDF description of time segment could be replaced by a temporal URI
14:49:22 [davy]
silvia: interlinking multimedia is already covered in other UCs
14:53:54 [davy]
Video Browser UC
14:54:21 [davy]
silvia: large media files introduce special challenges
14:54:47 [davy]
... requirement for server-side processing
14:55:35 [davy]
... dynamic creation of thumbnails through URI mechanism
14:56:19 [davy]
guillaume: link to PNG or GIF
14:56:28 [davy]
... provide a preview function of the resource
14:57:30 [davy]
... trivial: get all the I-frames of a video resource
14:57:36 [davy]
... use them as thumbs
15:00:26 [davy]
... thumbnail extraction is quite easy
15:00:49 [davy]
silvia, jack: not so trivial, might be processing-intensive
15:02:07 [davy]
silvia: it should be possible to point to one single frame with the URI scheme
15:03:11 [davy]
jack: URI scheme does not know that frame is 'the' thumbnail
15:03:32 [davy]
15:04:21 [davy]
guillaume: you can have multiple thumbs per resource
15:05:27 [davy]
raphael: URI scheme can point to a frame, but does not have knowledge about thumbs
15:06:50 [davy]
raphael: should we be able to address in terms of frames?
15:07:09 [davy]
guillaume: no, too coding-specific
15:07:38 [davy]
RRSAgent, draft minutes
15:07:38 [RRSAgent]
I have made the request to generate davy
15:08:23 [davy]
silvia: previews of images?
15:08:35 [davy]
... preview is then a lower resolution image
15:08:49 [davy]
guillaume: that is processing
15:09:57 [davy]
... mostly, previews are already part of the media resource
15:10:51 [davy]
... hence lower image resolutions are out of scope
15:14:45 [davy]
jack: not too far?
15:15:01 [davy]
... is a preview embedded in a resource still a fragment?
15:18:17 [davy]
guillaume: compare it with tracks
15:18:26 [davy]
... preview is just another track
15:19:23 [davy]
raphael: we keep this in mind and make a decision later
15:21:38 [RRSAgent]
I have made the request to generate davy
15:24:11 [davy]
silvia: previews are another sort of tracks
15:24:32 [davy]
raphael: should we also be able to address metadata within the headers?
15:26:19 [davy]
silvia: it is not a common property of all the formats to have previews, therefore, it is not a candidate to be standardized
15:28:08 [davy]
raphael: after the first phase of the WG: report the current limitations
15:28:31 [davy]
... and wait for feedback
15:28:56 [RRSAgent]
I have made the request to generate davy
15:29:05 [davy]
Moving Point Of Interest UC
15:29:12 [davy]
raphael: complex UC
15:30:04 [davy]
... should be for the second phase
15:30:55 [davy]
jack: is this ever going to be used at the server side?
15:31:03 [davy]
... if not, it is out of scope
15:31:42 [davy]
raphael: you can share the link of the moving region
15:37:45 [davy]
erik: delivery to mobile devices is a use case introduced by the public flemish broadcaster
15:38:23 [davy]
jack: there is no reason to use URIs for that purpose, use metadata
15:39:57 [davy]
raphael: it is like concatenating spatial fragments over time
15:45:33 [davy]
guillaume: we are addressing points over space or time
15:46:02 [davy]
raphael: refer to HTML image maps
15:51:25 [davy]
raphael: region, interval can be defined by a combination of points
15:51:42 [davy]
... you need more than one point
15:53:50 [davy]
15:54:08 [davy]
raphael: we will discuss this tomorrow
15:54:42 [RRSAgent]
I have made the request to generate davy
17:05:01 [Zakim]
disconnecting the lone participant, Iles_C, in IA_MFWG()3:00AM
17:05:02 [Zakim]
IA_MFWG()3:00AM has ended
17:05:04 [Zakim]
Attendees were Iles_C
17:35:20 [Zakim]
Zakim has left #mediafrag
19:30:55 [RRSAgent]
I have made the request to generate Yves
20:24:54 [nessy]
nessy has joined #mediafrag