W3C

Use cases and requirements for Media Fragments

W3C Working Draft 30 April 2009

This version:
http://www.w3.org/TR/2009/WD-media-frags-reqs-20090430
Latest version:
http://www.w3.org/TR/media-frags-reqs
Editors:
Raphaël Troncy, CWI
Jack Jansen, CWI
Yves Lafon, W3C/ERCIM
Erik Mannens, IBBT Multimedia Lab
Silvia Pfeiffer, W3C Invited Experts
Davy Van Deursen, IBBT Multimedia Lab

Abstract

This document describes use cases and requirements for the development of the Media Fragments 1.0 specification. It also specifies the syntax for constructing media fragment URIs and explains how to handle them when used over the HTTP protocol. Finally, it includes a survey of technologies for addressing fragments of multimedia documents.

Status of this Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This is the First Public Working Draft of the Use cases and requirements for Media Fragments specification. It has been produced by the Media Fragments Working Group, which is part of the W3C Video on the Web Activity.

This document currently describes both use cases and requirements for media fragments and a preliminary specification of the syntax for constructing media fragment URIs, together with the expected behavior regarding how to handle these URIs when used over the HTTP protocol. The group does not expect this document to become a W3C Recommendation. This document may be split into more documents later on. More precisely, sections 3, 4, 5 and 8 will be included in a forthcoming WG Note, while sections 6 and 7 aim at being the core of the Media Fragments W3C Recommendation.

Please send comments about this document to the public-media-fragment@w3.org mailing list (public archive).

Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. The group does not expect this document to become a W3C Recommendation. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

Table of Contents

1 Introduction
2 Terminology
3 Side Conditions
    3.1 Single Media Resource Definition
    3.2 Existing Standards
    3.3 Unique Resource
    3.4 Valid Resource
    3.5 Parent Resource
    3.6 Single Fragment
    3.7 Relevant Protocols
    3.8 No Recompression
    3.9 Minimize Impact on Existing Infrastructure
    3.10 Focus for Changes
    3.11 Browser Impact
    3.12 Fallback Action
4 Use Cases
    4.1 Linking to and Display of Media Fragments
        4.1.1 Scenario 1: Retrieve only segment of a video
        4.1.2 Scenario 2: Region of an Image
        4.1.3 Scenario 3: Portion of Music
        4.1.4 Scenario 4: Image Region of video over time
    4.2 Browsing and Bookmarking Media Fragments
        4.2.1 Scenario 1: Temporal Video Pagination
        4.2.2 Scenario 2: Audio Passage Bookmark
        4.2.3 Scenario 3: Audio Navigation
        4.2.4 Scenario 4: Caption and chapter tracks for browsing Video
    4.3 Recompositing Media Fragments
        4.3.1 Scenario 1: Reframing a photo in a slideshow
        4.3.2 Scenario 2: Mosaic
        4.3.3 Scenario 3: Video Mashup
        4.3.4 Scenario 4: Spatial Video Navigation
        4.3.5 Scenario 5: Selective previews
        4.3.6 Scenario 6: Music Samples
        4.3.7 Scenario 7: Highlighting regions (out-of-scope)
    4.4 Annotating Media Fragments
        4.4.1 Scenario 1: Spatial Tagging of Images
        4.4.2 Scenario 2: Temporal Tagging of Audio and Video
        4.4.3 Scenario 3: Named Anchors
        4.4.4 Scenario 4: Spatial and Temporal Tagging
        4.4.5 Scenario 5: Search Engine
    4.5 Adapting Media Resources
        4.5.1 Scenario 1: Changing Video quality (out-of-scope)
        4.5.2 Scenario 2: Selecting Regions in Images
        4.5.3 Scenario 3: Selecting an Image from a multi-part document (out-of-scope)
        4.5.4 Scenario 4: Retrieving an Image embedded thumbnail (out-of-scope)
        4.5.5 Scenario 5: Switching of Video Transmission
        4.5.6 Scenario 6: Toggle All Audio OFF
        4.5.7 Scenario 7: Toggle specific Audio tracks
5 Requirements for Media Fragment URIs
    5.1 Requirement r01: Temporal fragments
    5.2 Requirement r02: Spatial fragments
    5.3 Requirement r03: Track fragments
    5.4 Requirement r04: Named fragments
    5.5 Fitness Conditions on Media Containers/Resources
6 Media Fragments: syntax and semantics
    6.1 General Structure
    6.2 Fragment Dimensions
        6.2.1 Temporal Dimension
        6.2.2 Spatial Dimension
        6.2.3 Track Dimension
        6.2.4 Named Dimension
    6.3 ABNF Syntax
    6.4 Semantics
7 Retrieving Fragments on HTTP servers
    7.1 Single-step Partial GET
    7.2 Dual-step Partial GET
    7.3 Discussion
8 Technologies Survey
    8.1 Existing URI fragment schemes
        8.1.1 General specification of URI fragments
        8.1.2 Fragment specifications not for audio/video
        8.1.3 Fragment specifications for audio/video
    8.2 Existing applications using proprietary temporal media fragment URI schemes
    8.3 Media fragment specification approaches
        8.3.1 URI based
            8.3.1.1 SVG
                8.3.1.1.1 Spatial
            8.3.1.2 Temporal URI/Ogg technologies
                8.3.1.2.1 Temporal
                8.3.1.2.2 Track
                8.3.1.2.3 Named
            8.3.1.3 MPEG-21
                8.3.1.3.1 Temporal
                8.3.1.3.2 Spatial
                8.3.1.3.3 Track
                8.3.1.3.4 Named
        8.3.2 Non-URI-based
            8.3.2.1 SMIL
                8.3.2.1.1 Temporal
                8.3.2.1.2 Spatial
                8.3.2.1.3 Track
                8.3.2.1.4 Named
            8.3.2.2 MPEG-7
                8.3.2.2.1 Temporal
                8.3.2.2.2 Spatial
                8.3.2.2.3 Track
                8.3.2.2.4 Named
            8.3.2.3 SVG
                8.3.2.3.1 Temporal
                8.3.2.3.2 Spatial
            8.3.2.4 TV-Anytime
                8.3.2.4.1 Temporal
                8.3.2.4.2 Named
            8.3.2.5 ImageMaps
                8.3.2.5.1 Spatial
            8.3.2.6 HTML 5

Appendices

A References
B Evaluation of fitness per media formats
C Acknowledgements (Non-Normative)


1 Introduction

Audio and video resources on the World Wide Web are currently treated as "foreign" objects, which can only be embedded using a plugin that is capable of decoding and interacting with the media resource. Specific media servers are generally required to provide for server-side features such as direct access to time offsets into a video without the need to retrieve the entire resource. Support for such media fragment access varies between different media formats and inhibits standard means of dealing with such content on the Web.

This specification provides a media-format-independent, standard means of addressing media fragments on the Web using Uniform Resource Identifiers (URIs). In the context of this document, media fragments are regarded along three different dimensions: temporal, spatial, and tracks. Further, a fragment can be marked with a name and then addressed through a URI using that name. The specified addressing schemes apply mainly to audio and video resources; the spatial fragment addressing may also be used on images.

The aim of this specification is to enhance the Web infrastructure for supporting the addressing and retrieval of subparts of time-based Web resources, as well as the automated processing of such subparts for reuse. Example uses are the sharing of such fragment URIs with friends via email, the automated creation of such fragment URIs in a search engine interface, or the annotation of media fragments with RDF. This specification will help make video a first-class citizen of the World Wide Web.

The media fragment URIs specified in this document have been implemented and demonstrated to work with media resources over the HTTP and RTP/RTSP protocols. Existing media formats in their current representations and implementations provide varying degrees of support for this specification. It is expected that, over time, media formats, media players, Web browsers, media and Web servers, as well as Web proxies will be extended to adhere to the full requirements given in this specification.

2 Terminology

The keywords MUST, MUST NOT, SHOULD and SHOULD NOT are to be interpreted as defined in RFC 2119.

3 Side Conditions

This section lists a number of conditions which have directed the development of this specification. These conditions help clarify some of the decisions made, e.g. about what types of use cases are within the realm of this specification and which are outside. Spelling out these side conditions should help increase transparency of the specifications.

4 Use Cases

In which situations do users need media fragment URIs? This section explains the types of user interactions with media resources that media fragment URIs will enable. For each type it shows how media fragment URIs can improve the usefulness, usability, and functionality of online audio and video.

4.1 Linking to and Display of Media Fragments

In this use case, a user is only interested in consuming a fragment of a media resource rather than the complete resource. A media fragment URI allows addressing this part of the resource directly and thus enables the User Agent to receive just the relevant fragment.

4.2 Browsing and Bookmarking Media Fragments

Media resources - audio, video and even images - are often very large resources that users want to explore progressively. Progressive exploration of text is well-known in the Web space under the term "pagination". Pagination in the text space is realized by creating a series of Web pages and enabling paging through them by scripts on a server, each page having its own URI. For large media resources, such pagination can be provided by media fragment URIs, which enable direct access to media fragments.

4.2.4 Scenario 4: Caption and chapter tracks for browsing Video

Silvia has a deaf friend, Elaine, who would like to watch the holiday videos that Silvia is publishing on her website. Silvia has created subtitle tracks for her videos and also a segmentation (e.g. using CMML [CMML]) with unique identifiers on the clips that she describes. The clips were formed based on locations that Silvia has visited. In this way, Elaine is able to watch the videos by going through the clips and reading the subtitles for those clips that she is interested in. She watches the sections on Korea, Australia, and France, but jumps over the ones of Great Britain and Holland.

4.3 Recompositing Media Fragments

As we enable direct linking to media fragments in a URI, we can also enable simple recompositing of such media fragments. Note that because the media fragments in a composition may possibly originate from different codecs and very different files, we can not realistically expect smooth playback between the fragments.

4.4 Annotating Media Fragments

Media resources typically don't just consist of the binary data. There is often a lot of textual information available that relates to the media resource. Enabling the addressing of media fragments ultimately creates a means to attach annotations to media fragments.

4.4.2 Scenario 2: Temporal Tagging of Audio and Video

Editorial note: Silvia 
Time-aligned text such as captions, subtitles in multiple languages, and audio descriptions for audio and video does not have to be created as separate documents that link to each segment through a temporal URI. Such text can be made part of the media resource by the media author, or delivered as a separate but synchronised data stream to the media player. In either case, when it comes to using these with the HTML5 <video> tag, they should be made accessible to the Web page through a JavaScript API for the video/audio/image element. This needs to be addressed in the HTML5 working group.

4.5 Adapting Media Resources

When addressing a media resource as a user, one often has the desire not to retrieve the full resource, but only a subpart of interest. This may be a temporally or spatially consecutive subpart, but could also be e.g. a smaller bandwidth version of the same resource, a lower framerate video, an image with less colour depth or an audio file with a lower sampling rate. Media adaptation is the general term used for such server-side created versions of media resources.

5 Requirements for Media Fragment URIs

This section describes the list of required media fragment addressing dimensions that have resulted from the use case analysis.

It further analyses what format requirements media resources have to adhere to in order to allow the extraction of the data that relates to that kind of addressing.

5.5 Fitness Conditions on Media Containers/Resources

There is a large number of media codecs and encapsulation formats that we need to take into account as potential media resources on the Web. This section analyses the general conditions for media formats that make them fit for supporting the different types of fragment URIs.

Media resources should fulfill the following conditions to allow extraction of fragments:

  1. Media fragments can be extracted in the compressed domain, i.e. without transcoding.

  2. No syntax element modifications are needed to turn the extracted fragment into a playable resource.

Not all media formats will be compliant with these two conditions. Hence, we distinguish the following categories:

  1. Media formats that meet both conditions ("fit").

  2. Media formats whose fragments can be extracted in the compressed domain, but that require syntax element modifications in headers applying to the whole resource ("conditionally fit").

  3. Media formats whose fragments can only be extracted through transcoding ("currently unfit").

Those media types that are capable of doing what server-side media fragments require are of interest to us. For those that aren't, the fall-back case applies (i.e. full download and then offsetting). Appendix B Evaluation of fitness per media formats lists a large number of typical formats and determines which we see fit, conditionally fit, or currently unfit for supporting the different types of media fragment URIs.

Editorial note: Silvia 

We ask for further input into the table in the attachment, in particular where there are question marks.

6 Media Fragments: syntax and semantics

This section describes the external representation of a media fragment specifier, and how this should be interpreted. The first two subsections are a semi-informal introduction, with the formal grammar and detailed semantics being specified in the last two subsections.

6.1 General Structure

To address a media fragment, this information needs to be conveyed within the URI. Our solution builds on URIs as defined in RFC 3986, which offers two possibilities for representing the media fragment addressing: the URI query part or the URI fragment part.

As of this writing, the group has a preference to use the URI fragment part, because this maintains the relationship between the main resource and the media fragment. Using the query part would result in a new resource being created. Hence, hash (#) is used as the separator between the base URI and the media fragment.

The fragment identifier consists of a list of name/value pairs (the dimension specifiers) separated by the primary separator "&". Name and value are separated by an equals sign (=). Where the value is structured, colon (:) and comma (,) are used as secondary separators. No whitespace is allowed (except inside strings).

Some examples of URIs with a media fragment, to show the general structure:

http://www.example.com/example.ogg#t=10s,20s
http://www.example.com/example.ogg#track='audio'
http://www.example.com/example.ogg#track='audio'&t=10s,20s
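As an informal illustration of this structure, the following Python sketch (illustrative only, not part of the specification; the function name is ours) splits the fragment part of such a URI into its dimension specifiers, keeping only the first occurrence of each name as suggested in 6.4 Semantics:

from urllib.parse import urlparse, unquote

def parse_media_fragment(uri):
    """Split the fragment part of a media fragment URI into its
    name/value dimension specifiers."""
    fragment = urlparse(uri).fragment                # text after "#"
    dimensions = {}
    for pair in fragment.split('&'):                 # "&" is the primary separator
        if not pair:
            continue
        name, _, value = pair.partition('=')         # "=" separates name and value
        dimensions.setdefault(name, unquote(value))  # keep the first occurrence only
    return dimensions

print(parse_media_fragment("http://www.example.com/example.ogg#track='audio'&t=10s,20s"))
# {'track': "'audio'", 't': '10s,20s'}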

Media fragments support fragmenting the media along the four dimensions listed in 5 Requirements for Media Fragment URIs:

temporal

this dimension denotes a specific time range in the original media, such as "starting at second 10, continuing until second 20";

spatial

this dimension denotes a specific range of pixels in the original media, such as "a rectangle with size (100,100) with its top-left at coordinate (10,10)";

track

this dimension denotes one track (media type) in the original media, such as "the English audio track";

named

this dimension denotes a named section of the original media, such as "chapter 2".

Note that the track dimension refers to one of a set of parallel media streams ("the English audio track of a video"), not to a (possibly self-contained) section of the source media ("Audio track 2 of a CD"). Such self-contained sections are handled by the named dimension.

For this version of the media fragments specification, the named dimension cannot be combined with the other dimensions, because its semantics depend on the underlying source media format: some media formats support naming of temporal extents, others support naming of groups of tracks, etc. Projection along the other three dimensions is logically commutative; they can therefore be combined, and the outcome is independent of the order of the dimensions. Each dimension can be specified at most once. Error semantics are discussed in 6.4 Semantics.

6.2 Fragment Dimensions

6.2.1 Temporal Dimension

Temporal clipping is denoted by the name t, and specified as an interval with a begin time and an end time (or an in-point and an out-point, in video editing terms). Either or both may be omitted, with the begin time defaulting to 0 seconds and the end time defaulting to the duration of the source media. The interval is half-open: the begin time is considered part of the interval whereas the end time is considered to be the first time point that is not part of the interval.

Temporal clipping can be specified either as Normal Play Time (npt) or as SMPTE timecodes [SMPTE]. Begin and end times are always specified in the same format. The format is specified by name, followed by a colon (:), with npt: being the default.

In this version of the media fragments specification there is no extensibility mechanism to add time format specifiers.

t=10,20
t=npt:10,20

Normal Play Time can either be specified as seconds, with an optional fractional part and an optional s to indicate seconds, or as colon-separated hours, minutes and seconds (again with an optional fraction). Minutes and seconds must be specified as exactly two digits; hours and fractional seconds can consist of any number of digits. The hours, minutes and seconds specification for NPT is a convenience only; it does not signal frame accuracy.

t=120,
t=,121.5
t=120s,121.5s
t=0:02:00,121.5
t=npt:120,0:02:01.5
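
The following Python sketch (illustrative only, not normative) shows how the two NPT notations above can be converted to a number of seconds:

import re

def npt_to_seconds(value):
    """Convert an NPT time value ("ss(.frac)(s)" or "hh:mm:ss(.frac)") to seconds."""
    m = re.fullmatch(r'(\d+)(\.\d+)?s?', value)                     # plain seconds
    if m:
        return float(m.group(1) + (m.group(2) or ''))
    m = re.fullmatch(r'(\d+):([0-5]\d):([0-5]\d)(\.\d+)?', value)   # hh:mm:ss(.frac)
    if m:
        return (int(m.group(1)) * 3600 + int(m.group(2)) * 60 +
                int(m.group(3)) + float(m.group(4) or 0))
    raise ValueError("not a valid NPT time: %r" % value)

assert npt_to_seconds("120s") == npt_to_seconds("0:02:00") == 120.0
assert npt_to_seconds("121.5") == 121.5
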
Editorial note: Jack 

Do we need a rationale to explain that we took this syntax for timecodes from RTSP and SMIL?

SMPTE timecodes are a way to address a specific frame (or field) without running the risk of rounding errors causing a different frame to be selected. The format is always colon-separated hours, minutes, seconds and frames. Frames are optional, defaulting to 00. If the source format has a further subdivision of frames (such as odd/even fields in interlaced video), these can be specified with a number after a dot (.). The SMPTE format name must always be specified, because the interpretation of the fields depends on the format. The SMPTE formats supported in this version of the specification are: smpte, smpte-25, smpte-30 and smpte-30-drop. smpte is a synonym for smpte-30.

t=smpte-30:0:02:00,0:02:01:15
t=smpte-25:0:02:00:00,0:02:01:12.1

Using SMPTE timecodes may result in frame-accurate begin and end times, but only if the timecode format used in the media fragment specifier is the same as that used in the original media item.
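As an illustration of the non-drop formats, the following Python sketch (an assumption of ours, not normative; smpte-30-drop is deliberately left out because it needs drop-frame correction) converts such a timecode to seconds:

FRAME_RATES = {'smpte': 30, 'smpte-25': 25, 'smpte-30': 30}

def smpte_to_seconds(timecode, fmt='smpte-25'):
    """Convert a non-drop SMPTE timecode "hh:mm:ss[:ff[.sub]]" to seconds.
    The optional subframe part after "." is ignored in this sketch."""
    fields = timecode.split(':')
    hours, minutes, seconds = (int(x) for x in fields[:3])
    frames = int(fields[3].split('.')[0]) if len(fields) > 3 else 0
    return hours * 3600 + minutes * 60 + seconds + frames / FRAME_RATES[fmt]

print(smpte_to_seconds("0:02:01:15", fmt='smpte-30'))   # 121.5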

6.3 ABNF Syntax

In this section we present the ABNF [ABNF] syntax for a media fragment specifier. The names of the non-terminals more or less follow the names used in the previous subsections, with one clear difference: the start symbol is called mediasegment, because we want to leave open the possibility of reuse in a URI query in addition to the current use in a URI fragment.

segment       = mediasegment / *( pchar / "/" / "?" ) ; augmented fragment 
                                                     ; definition taken from 
                                                     ; rfc3986
;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; Media Segment ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;
mediasegment  = namesegment / axissegment
axissegment   = ( timesegment / spacesegment / tracksegment ) 
               *( "&" ( timesegment / spacesegment / tracksegment ) )
; 
; note that this does not capture the restriction to one kind of fragment 
; in the axissegment definition, unless we list explicitly the 14 cases.
;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; Time Segment ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;
timesegment   = timeprefix "=" timeparam
timeprefix    = %x74                                      ; "t"
timeparam     = npttimedef / othertimedef
npttimedef    = [ deftimeformat ":"] [ clocktime ] "," [ clocktime ]
othertimedef  = timeformat ":" [frametime] "," [frametime]
deftimeformat = %x6E.70.74                                ; "npt"
timeformat    = %x73.6D.70.74.65                          ; "smpte"
               / %x73.6D.70.74.65.2D.32.35                ; "smpte-25"
               / %x73.6D.70.74.65.2D.33.30                ; "smpte-30"
               / %x73.6D.70.74.65.2D.33.30.2D.64.72.6F.70 ; "smpte-30-drop"
timeunit      = %x73                                      ; "s"
clocktime     = ( 1*DIGIT [ "." 1*DIGIT ] [timeunit] ) /
               ( 1*DIGIT ":" 2DIGIT ":" 2DIGIT [ "." 1*DIGIT] )
frametime     = 1*DIGIT ":" 2DIGIT ":" 2DIGIT [ ":" 2DIGIT [ "." 2DIGIT ] ]
;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; Space Segment ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;
spacesegment  = xywhdef / aspectdef
xywhdef       = xywhprefix   "=" xywhparam
aspectdef     = aspectprefix "=" aspectparam
xywhprefix    = %x78.79.77.68                             ; "xywh"
aspectprefix  = %x61.73.70.65.63.74                       ; "aspect"
xywhparam     = [ xywhunit ":" ] 1*DIGIT "," 1*DIGIT "," 1*DIGIT "," 1*DIGIT
xywhunit      = %x70.69.78.65.6C                          ; "pixel"
              / %x70.65.72.63.65.6E.74                    ; "percent"
aspectparam   = 1*DIGIT ":" 1*DIGIT
;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; Track Segment ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;
tracksegment  = trackprefix "=" trackparam
trackprefix   = %x74.72.61.63.6B                          ; "track"
trackparam    = utf8string
;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; Name Segment ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;
namesegment   = nameprefix "=" nameparam
nameprefix    = %x69.64                                   ; "id"
nameparam     = utf8string
;
;;;;;;;;;;;;;;;;;;;;;;;;;;;; Imported definitions ;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;
DIGIT         = <DIGIT, defined in rfc4234#3.4>
pchar         = <pchar, defined in rfc3986>
unreserved    = <unreserved, defined in rfc3986> 
pct-encoded   = <pct-encoded, defined in rfc3986>
utf8string    = "'" *( unreserved / pct-encoded ) "'"     ; utf-8 character
                                                          ; encoded URI-style
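
The ABNF above cannot express two constraints stated in 6.1 General Structure: each dimension may appear at most once, and the named dimension must stand alone. A minimal Python sketch of such a well-formedness check (illustrative only, not part of the grammar) could look as follows:

DIMENSION_OF = {'t': 'temporal', 'xywh': 'spatial', 'aspect': 'spatial',
                'track': 'track', 'id': 'named'}

def well_formed(names):
    """names: the dimension names of a fragment, in order of appearance."""
    dims = [DIMENSION_OF.get(n) for n in names]
    if None in dims:
        return False                    # unknown dimension name
    if 'named' in dims:
        return dims == ['named']        # the named dimension must stand alone
    return len(dims) == len(set(dims))  # each dimension at most once

assert well_formed(['track', 't'])
assert not well_formed(['id', 't'])     # named combined with temporal
assert not well_formed(['t', 't'])      # temporal specified twice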

		

6.4 Semantics

Editorial note: Jack 

For this version of the working draft, this section is incomplete and unstructured. We expect to fill in more details as we gain implementation experience.

We also specifically request feedback from readers of this draft: if you notice errors, omissions, choices that are sub-optimal for some application area of media fragments or choices that you feel will cause implementation difficulties: please let us know.

The MIME type of the fragment should be the same as the MIME type of the source media. Among other things, this means that selection of a single video frame results in a movie, not in a still image.

Implementations are expected to first do track and time selection, on the container level, and then do spatial clipping on the codec level.

Editorial note: Jack 

Preferably, clipping should be implemented without transcoding, provided the result is reasonably close to what was requested in the media fragment URI. This statement requires a definition of "reasonable", which is TBD. The idea is that it is OK for a video to start half a second earlier than specified if that happens to be where an I-frame or an audio block boundary is. For some container formats this is a non-issue, because the container format allows specification of a logical begin and end.

We need to say something on whether A/V sync needs to be maintained, and to what granularity. This has consequences for transcoding.

We may need to say something on whether embedded timecodes in media streams (or as a separate timecode stream) are expected to be maintained (or not, or implementation-defined).

A media fragment URI may be used in a context that has its own clipping method, such as SMIL. This leads to a semantic issue of how the clipping methods combine: do they cascade, or does one override the other? Formally, this is up to the context embedding the media fragment URI, but in the absence of strong reasons to do otherwise we suggest cascading. So, the following should start playback of the original media at second 105, and stop at 115:
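As a minimal sketch of the cascading arithmetic (the interval values below, a fragment of t=100,120 further clipped by its embedding context to seconds 5 through 15 of the fragment's own timeline, are assumptions chosen to reproduce the 105 to 115 result):

def cascade(fragment_interval, context_clip):
    """Compose a fragment's temporal interval (in original media time) with a
    clip interval expressed by the embedding context relative to the fragment."""
    frag_begin, frag_end = fragment_interval
    clip_begin, clip_end = context_clip
    return frag_begin + clip_begin, min(frag_begin + clip_end, frag_end)

print(cascade((100, 120), (5, 15)))   # (105, 115) in original media time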

Attempting to do fragment selection on a dimension that does not exist in the source media, such as temporal clipping on a still image, should be considered a no-op.

The result of doing spatial clipping on a source media that has multiple video tracks is undefined if no track selection is also applied.

Editorial note: Jack 

We need to define more error semantics. Some areas:

  1. Overspecified: if the temporal (resp. spatial, track) dimension is used multiple times, only the first occurrence is considered

  2. Nonexistent (t= with begin and end past end-of-media, unknown id, unknown track)

  3. Partially existent (t= with end past EOM, xywh spec that extends past bounds): could be clipped to the actual size of the resource

  4. Non-existent that can be determined statically, for example t=20,10

  5. Incompatible: if the named dimension is used, all the other dimensions are ignored. Alternatively: this is an error.

7 Retrieving Fragments on HTTP servers

In the context of the HTTP protocol, two approaches are proposed which enable the retrieval and caching of media fragments:

  1. a single-step partial GET (see 7.1 Single-step Partial GET);

  2. a dual-step partial GET (see 7.2 Dual-step Partial GET).

Unfortunately, neither approach is clearly superior, so the solution might be to use both, depending on which problem a Web application is trying to solve. A further concern is the cacheability of the resource.

7.2 Dual-step Partial GET

A user requests a media fragment URI, for example using a web browser:

The UA chops off the fragment and turns it into an HTTP GET request with a time range header:

The origin server converts the time range to a byte range and puts into the HTTP response all header data occurring at the beginning of the media resource that cannot be cached but is required by the UA to reconstruct a fully functional media resource. It also replies with an X-Accept-TimeURI header, which indicates to the client that it has processed the time request and converted it to bytes (similarly, this could be extended to X-Accept-SpaceURI, X-Accept-TrackURI and X-Accept-NameURI). The message body of this answer contains the control section of fragf2f.mp4#12,21 (if required).

The UA buffers the data it receives for hand-over to the media subsystem. It then proceeds to put the actual fragment request through:

The origin server puts the data together and sends it to the UA:

The UA hands the header and video data over to the media subsystem, which displays it to the user.

Illustration of two round trips between the user agent and the server
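The following Python sketch illustrates the client side of this dual-step exchange. The header usage and the time range-unit syntax shown here ("Range: seconds=12-21", the placeholder byte range) are assumptions made for illustration only; the exact wire syntax is not defined by this draft.

import http.client

HOST, PATH = "www.example.com", "/fragf2f.mp4"
conn = http.client.HTTPConnection(HOST)

# Step 1: the UA strips "#t=12,21" from the URI and asks for the time range.
# The origin server replies with the non-cacheable control/header section and
# an X-Accept-TimeURI header signalling that the time range was mapped to bytes.
conn.request("GET", PATH, headers={"Range": "seconds=12-21"})
setup = conn.getresponse()
control_section = setup.read()
mapped = setup.getheader("X-Accept-TimeURI")      # confirmation of the mapping

# Step 2: the UA then requests the mapped fragment as an ordinary byte range,
# which existing Web proxies and caches can serve and store.
byte_range = "4000000-5999999"                    # illustrative values resolved in step 1
conn.request("GET", PATH, headers={"Range": "bytes=" + byte_range})
fragment_data = conn.getresponse().read()

# The UA hands control_section + fragment_data over to the media subsystem.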

7.3 Discussion

Pros:

  1. Single-step partial GET needs only one roundtrip

  2. Single-step partial GET allows extraction of a spatial region from a Motion JPEG2000 resource

  3. Single-step partial GET usually achieves what we want, without needing an HTTP protocol extension, for any resource with an intrinsic time → data map, such as .mov or .mp4

  4. Dual-step partial GET allows current web proxies to cache media fragments

Cons:

  1. In both cases, we create a custom Range unit (e.g. 'seconds'); such custom range units would be needed to convey the notion of seconds, pixels, tracks, etc.

  2. Dual-step partial GET needs two roundtrips

  3. Dual-step partial GET does not allow extraction of a spatial region from a Motion JPEG2000 resource. Note, though, that all other media formats are characterized by a fixed non-cacheable header occurring at the beginning of the media stream and are thus compatible with the dual-step partial GET approach

  4. Single-step partial GET requires specialized 'media'-caches to cache media fragments

Using HTTP byte ranges to request media fragments enables existing HTTP proxies and caches to inherently support the caching of media fragments. This approach is possible if a dual-step partial GET is applied. This method, however, does not deliver complete resources, but rather generates an infinite number of resources to create the control sections of the transmitted fragments, and extra care is needed when fetching different parts to avoid fetching data from changing resources. Those new resources containing the control sections of the fragments to be retrieved need to be known by all clients, which carries a considerable implementation cost, but has no impact on caches.

HTTP byte ranges can only be used to request media fragments if these media fragments can be expressed in terms of byte ranges. This restriction implies that media resources should fulfill the following conditions:

  1. Media fragments can be extracted in the compressed domain, i.e. without transcoding.

  2. No syntax element modifications are needed to turn the extracted fragment into a playable resource.

Not all media formats will be compliant with these two conditions. Hence, we distinguish the following categories:

  1. The media resource meets the two conditions (i.e., fragments can be extracted in the compressed domain and no syntax element modifications are necessary). In this case, caching media fragments of such media resources is possible using HTTP byte ranges, because their media fragments are addressable in terms of byte ranges.

  2. Media fragments can be extracted in the compressed domain, but syntax element modifications are required. These media fragments are cacheable using HTTP byte ranges on condition that the syntax element modifications are confined to media headers applying to the whole media resource/fragment. In this case, those media headers can be sent to the client in the first response of the server, which is a response to a request on a specific resource different from the byte-range content.

  3. Media fragments cannot be extracted in the compressed domain. In this case, transcoding operations are necessary to extract media fragments. Since these media fragments are not expressible in terms of byte ranges, it is not possible to cache these media fragments using HTTP byte ranges. Note that media formats which enable extracting fragments in the compressed domain, but are not compliant with category 2 (i.e., syntax element modifications are not only applicable to the whole media resource), also belong to this category.

8 Technologies Survey

8.1 Existing URI fragment schemes

Some existing URI schemes define semantics for fragment identifiers. In this section, we list these URI schemes and provide examples of their fragment identifiers.

8.1.1 General specification of URI fragments

  • URI Fragment [RFC 3986]
    http://www.w3.org/2008/WebVideo/Fragments/wiki/Main_Page#Preparation_of_Working_Draft
    Cited from RFC 3986: "The fragment identifier component of a URI allows indirect identification of a secondary resource by reference to a primary resource and additional identifying information. The identified secondary resource may be some portion or subset of the primary resource, some view on representations of the primary resource, or some other resource defined or described by those representations. A fragment identifier component is indicated by the presence of a number sign ("#") character and terminated by the end of the URI."

8.1.2 Fragment specifications not for audio/video

  • HTML named anchors [HTML 4.0]
    http://www.w3.org/2008/WebVideo/Fragments/wiki/Main_Page#Preparation_of_Working_Draft
    refers to a specific named anchor within the resource http://www.w3.org/2008/WebVideo/Fragments/wiki/Main_Page
  • XPointer named elements [xpointer]
    http://www.w3schools.com/xlink/dogbreeds.xml#xpointer(id("Rottweiler"))
    refers to the element with id equal to 'Rottweiler' in the target XML document http://www.w3schools.com/xlink/dogbreeds.xml
  • text/plain [RFC 5147]
    http://example.com/text.txt#line=10,20
    identifies lines 11 to 20 of the text.txt MIME entity
  • SVG [SVG]
    http://upload.wikimedia.org/wikipedia/commons/d/d2/Yalta_summit_1945_with_Churchill%2C_Roosevelt%2C_Stalin.jpg#svgView(14.64,15.73,146.98,147.48)
    specifies the region to be viewed of the SVG image http://upload.wikimedia.org/wikipedia/commons/d/d2/Yalta_summit_1945_with_Churchill%2C_Roosevelt%2C_Stalin.jpg

8.1.3 Fragment specifications for audio/video

  • Temporal URI/Ogg technologies [temporal URI]
    http://example.com/video.ogv#t=12.3/21.16
    specifies a temporal fragment of the Ogg Theora video http://example.com/video.ogv starting at 12.3 s and ending at 21.16 s
  • MPEG-21 [MPEG-21]
    http://www.example.com/myfile.mp4#mp(/~time('npt','10','30'))
    specifies a temporal fragment of the MP4 resource http://www.example.com/myfile.mp4 starting at 10 s and ending at 30 s

8.2 Existing applications using proprietary temporal media fragment URI schemes

In this section, we list a number of proprietary URI schemes which are able to identify media fragments. Note that all of these schemes only provide support for addressing temporal media fragments.

  • Google Video (announcement)
    http://video.google.com/videoplay?docid=3047771997186190855&ei=MCH-SNfJD5HS2gKirMD2Dg&q=%22that%27s+a+tremendous+gift%22#50m16s
    Syntax: #50m16s
  • YouTube (announcement)
    http://www.youtube.com/watch?v=1bibCui3lFM#t=1m45s
    Syntax: #t=1m45s
  • Archive.org (uses the Temporal URI specification [temporal URI])
    http://www.archive.org/download/to-SF/toSF_512kb.mp4?t=74.5
    Syntax: ?t=74.5
  • Videosurf (announcement)
    http://www.videosurf.com/video/michael-jordan-1989-playoffs-gm-5-vs-cavs-the-shot-904591?t=140&e=184
    Syntax: ?t=140&e=184 (with t=start, e=end)

8.3 Media fragment specification approaches

Media fragment approach         | Temporal | Spatial | Track | Name
URI-based:
  SVG                           | No       | Yes     | No    | No
  Temporal URI/Ogg technologies | Yes      | No      | Yes   | Yes
  MPEG-21                       | Yes      | Yes     | Yes   | Yes
Non-URI-based:
  SMIL                          | Yes      | Yes     | No?   | No?
  MPEG-7                        | Yes      | Yes     | Yes   | Yes
  SVG                           | No       | Yes     | No    | ?
  TV-Anytime                    | Yes      | No      | No    | Yes
  ImageMaps                     | No       | Yes     | No    | ?

8.3.1 URI based

8.3.1.2 Temporal URI/Ogg technologies
8.3.1.2.1 Temporal

A Temporal URI [temporal URI] is used to play back temporal fragments in Annodex. The clip's begin and end are specified directly in the URI. When using the "#" URI fragment identifier, it is expected that the media fragment is played after downloading the complete resource, whereas when using "?" URI query parameters, it is expected that the media fragment is extracted on the server and downloaded as a new resource to the client. Linking to such a resource looks as follows:

<a href="http://example.com/video.ogv#t=12.3/21.16" />
<a href="http://example.com/video.ogv?t=12.3/21.16" />

It is possible to use different temporal schemes, which give frame-accurate clipping when used correctly:

<a href="http://example.com/video.ogv?t=npt:12.3/21.16" />
<a href="http://example.com/video.ogv?t=smpte-25:00:12:33:06/00:21:16:00" />
<a href="http://example.com/audio.ogv?t=clock:20021107T173045.25Z" />
8.3.1.2.2 Track

Tracks are an orthogonal concept to time-aligned annotations. Therefore, Xiph/Annodex have devised another way of describing and annotating them: ROE (Rich Open multitrack media Exposition) [ROE], introduced in January 2008. With ROE, the composition of a media resource is described on the server. This file can also be downloaded to a client to find out about the "capabilities" of the file, but it is mainly used for authoring-on-the-fly: depending on what a client requires, the ROE file can be used to find the different tracks and multiplex them together. Here is an example file:

<ROE>
 <head>
  <link rel="alternate" type="text/html" href="http://example.com/complete_video.html" />
 </head>
 <body>
  <track id="v" provides="video">
   <seq>
    <mediaSource id="v0" src="http://example.com/video.ogv" content-type="video/ogg" />
    <mediaSource id="v1" src="http://example.com/theora.ogv?track=v1" content-type="video/theora" />
   </seq>
  </track>
  <track id="a" provides="audio">
   <mediaSource id="a1" src="http://example.com/theora.ogv?track=a1" content-type="audio/vorbis" />
  </track>
  <track id="c1" provides="caption">
   <mediaSource src="http://example.com/cmml1.cmml" content-type="text/cmml" />
  </track>
  <track id="c2" provides="ticker">
   <mediaSource src="http://example.com/cmml2.cmml" content-type="text/cmml" />
  </track>
 </body>
</ROE>

This has not been completely worked through and implemented, but Metavid is using ROE as an export format to describe the different resources available as subparts of one media resource. Note that ROE is also used to create an Ogg Skeleton [Skeleton] in a final multiplexed file. Thus, the information inherent in ROE goes into the file (at least in theory) and can be used to extract tracks in a URI:

<video src="http://example.com/video.ogv?track=a/v/c1"/>
8.3.1.2.3 Named

To include outgoing hyperlinks in video, you have to define the time-aligned markup of your video (or audio) stream. For this purpose, Annodex uses CMML [CMML]. Here is an example CMML file that can be used to include outgoing hyperlinks next to or into Ogg [RFC 3533] streams ("next to" means here that the CMML file is kept separate from the Ogg file, but that the client-side player knows to synchronise the two; "into" means that CMML is multiplexed as a timed text codec into the Ogg physical bitstream, creating only one file that has to be exchanged). The following defines a CMML clip that has an outgoing hyperlink (this is a partial document extracted from a CMML file):

<clip id="tic1" start="npt:12.3" end="npt:21.16" title="Introduction">
 <a href="http://example.com/fish.ogv?t=5" >Watch another fish video.</a>
 <meta name="author" content="Frank"/>
 <img src="fish.png"/>
 <body>This is the introduction to the film Joe made about fish.</body>
</clip>

Note how there is also the possibility of naming a thumbnail, providing metadata, and giving a full description of the clip in the body tag. Interestingly, you can also address into temporal fragments of a CMML [CMML] file, since it is a representation of a time-continuous data resource:

<a href="http://example.com/sample.cmml?t=npt:4" />

With CMML and ROE you can address into named temporal regions of a CMML file itself:

<a href="http://example.com/sample.cmml?id="tic1" />
8.3.1.3 MPEG-21

Four different schemes are specified in MPEG-21 Part 17 [MPEG-21] to address parts of media resources: ffp(), offset(), mp(), and mask():

  • ffp() is applicable to file formats conforming to the ISO Base Media File Format (aka MPEG-4 Part 12 or ISO/IEC 14496-12) and is able to identify tracks via the track_ID located in the iloc and tkhd boxes respectively

  • offset() is applicable to any digital resource and identifies a range of bytes in a data stream (similar functionality to the HTTP byte range mechanism).

  • mp() is applicable for media resources whose Internet media type (or MIME type) is equal to audio/mpeg, video/mpeg, video/mp4, audio/mp4, or application/mp4 and provides two complementary mechanisms for identifying fragments in a multimedia resource via:

    • a set of so-called dimensions (i.e., temporal, spatial or spatiotemporal) which are independent of the coding/container format: for the temporal dimension, the following time schemes are supported: NPT, SMPTE, MPEG-7, and UTC.

    • a hierarchical logical model of the resource. Such a logical model is dependent on the underlying container format (e.g., audio CD contains a list of tracks). The structures defined in these logical models are accessed with a syntax based on XPath.

  • mask() is applicable for media resources whose Internet media type (or MIME type) is equal to video/mp4 or video/mpeg and addresses a binary mask defined in a resource (binary masks can be achieved through MPEG-4 shape coding). Note that this mask is meant to be applied to a video resource and that the video resource may itself be the resource that contains the mask.

Note that hierarchical combinations of addressing schemes are also possible. The '*' operator is used for this purpose. When two consecutive pointer parts are separated by the '*' operator, the fragments located by the first pointer part (to the left of the '*' operator) are used as a context for evaluating the second pointer part (to the right of the '*' operator).

8.3.2 Non-URI-based

8.3.2.1 SMIL
8.3.2.1.1 Temporal

Playing temporal fragments out-of-context

SMIL allows you to play only a fragment of the video by using the clipBegin and clipEnd attributes. How this is implemented, though, is out of scope for the SMIL spec (and for HTTP-based URLs it may well mean that implementations get the whole media item and cut it up locally):

It is possible to use different time schemes, which give frame-accurate clipping when used correctly:

Adding metadata to such a fragment is supported since SMIL 3.0:

Referring to temporal fragments in-context

The following piece of code will play back the whole video, and during the interesting section of the video allow clicking on it to follow a link:

It is also possible to have a link to the relevant section of the video. Suppose the following SMIL code is located in http://www.example.com/smilpresentation:

Now, we can link to the media fragment using the following URI:

Jumping to #tic2area will start the video at the beginning of the interesting section. The presentation will not stop at the end of that section, however; it will continue playing.

8.3.2.1.2 Spatial

Playing spatial fragments out-of-context

SMIL 3.0 allows playing back only a specific rectangle of the media. The following construct will play back the center quarter of the video:

Assuming the source video is 640x480, the following line plays back the same:

This construct can be combined with the temporal clipping.

It is possible to change the panZoom rectangle over time. The following code fragment will show the full video for 10 seconds, then zoom in on the center quarter over 5 seconds, then show that for the rest of the duration. The video may be scaled up or centered, or something else, depending on SMIL layout, but this is out of scope for the purpose of this investigation.

Referring to spatial fragments in-context

The following bit of code will enable the top-right quarter of the video to be clicked to follow a link. Note the difference in the way the rectangle is specified (left, top, right, bottom) when compared to panZoom (left, top, width, height). This is an unfortunate side-effect of this attribute being compatible with HTML and panZoom being compatible with SVG.

Other shapes are possible, as in HTML and CSS. The spatial and temporal constructs can be combined. The spatial coordinates can be animated, as for panZoom.

8.3.2.2 MPEG-7
8.3.2.2.1 Temporal
Editorial note: Raphaël 
For all dimensions covered by MPEG-7, the use of indirection should not be forgotten. http://www.example.com/mpeg7file.mp7#speaker refers to the "speaker" XML element of this resource. The UA needs to parse this element in order to actually point to this fragment.

A video is divided into VideoSegments that can be described by a timestamp. MediaTimes are described using a MediaTimePoint and a MediaDuration, which are the starting time and the shot duration respectively. The MediaTimePoint is defined as follows: YYYY-MM-DDThh:mm:ss:nnnFNNN (Y: year, M: month, D: day, T: a separation sign between date and time, h: hours, m: minutes, s: seconds, F: separation sign between n and N, n: number of fractions, N: number of fractions in a second). The MediaDuration is defined as follows: PnDTnHnMnSnNnF, with nD the number of days, nH the number of hours, nM the number of minutes, nS the number of seconds, nN the number of fractions and nF the number of fractions per second. Temporal fragments can also be defined in Time Units or relative to a defined time. This MPEG-7 example describes a 'shot1' starting at 6 sec 2002/2500 sec and lasting for 9 sec 13389/25000 sec.
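As an aside, an MPEG-7 MediaTimePoint of this lexical form can be converted to seconds with a small sketch like the following (the concrete value is an assumption of ours, chosen to match the 6 sec 2002/2500 sec example above):

import re

def mediatimepoint_seconds(tp):
    """Extract the time-of-day part of "YYYY-MM-DDThh:mm:ss:nnnFNNN" in seconds."""
    h, m, s, n, f = (int(g) for g in
                     re.search(r'T(\d{2}):(\d{2}):(\d{2}):(\d+)F(\d+)', tp).groups())
    return h * 3600 + m * 60 + s + n / f          # n fractions out of f per second

print(mediatimepoint_seconds("2009-04-30T00:00:06:2002F2500"))   # 6.8008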

8.3.2.2.2 Spatial

Selecting a spatial fragment of the video is also possible, using a SpatialDecomposition-element. This MPEG-7 example describes a spatial (polygonal) mask called "speaker" which is given by the coordinates of the polygon: (40, 300), (40,210), ..., (320,300).

The spatial video fragment can be combined with temporal information thus creating a SpatialTemporalDecomposition-element.

A region of an image can also be described in MPEG-7.

8.3.2.5 ImageMaps
8.3.2.5.1 Spatial

Client-side image maps: The MAP element specifies a client-side image map. An image map is associated with an element via the element's usemap attribute. The MAP element content model then includes either AREA elements or A elements for specifying the geometric regions and the links associated with them. Possible shapes are: rectangle (rect), circle (circle) or arbitrary polygon (poly).

Server-side image maps: When the user activates the link by clicking on the image, the screen coordinates are sent directly to the server where the document resides. Screen coordinates are expressed as screen pixel values relative to the image. The user agent derives a new URI from the URI specified by the href attribute of the A element, by appending ? followed by the x and y coordinates, separated by a comma. For instance, if the user clicks at the location x=10, y=27 then the derived URI is: http://www.example.com/images?10,27

8.3.2.6 HTML 5
Editorial note: Silvia 
Currently, HTML5 relies on the abilities of the media format in use for providing media fragment addressing. In the future, HTML5 is planning to adopt the fragment URI specification of this document for providing fragment addressing. Input from the WHATWG and the HTML Working Group is requested.

A References

[RFC 2119]
S. Bradner. Key Words for use in RFCs to Indicate Requirement Levels. IETF RFC 2119, March 1997. Available at http://www.ietf.org/rfc/rfc2119.txt.
[RFC 3533]
The Ogg Encapsulation Format Version 0. IETF RFC 3533, May 2003. Available at http://www.ietf.org/rfc/rfc3533.txt.
[RFC 3986]
T. Berners-Lee, R. Fielding and L. Masinter. Uniform Resource Identifier (URI): Generic Syntax. IETF RFC 3986, January 2005. Available at http://www.ietf.org/rfc/rfc3986.txt.
[RFC 5147]
E. Wilde and M. Duerst. URI Fragment Identifiers for the text/plain Media Type. IETF RFC 5147, April 2008. Available at http://tools.ietf.org/html/rfc5147.
[HTML 4.0]
D. Raggett, A. Le Hors and I. Jacobs. HTML Fragment identifiers. W3C Rec, December 1999. Available at http://www.w3.org/TR/REC-html40/intro/intro.html#fragment-uri.
[SVG]
J. Ferraiolo. SVG Fragment identifiers. W3C Rec, September 2001. Available at http://www.w3.org/TR/2001/REC-SVG-20010904/linking#FragmentIdentifiersSVG.
[xpointer]
P. Grosso, E. Maler, J. Marsh and N. Walsh. XPointer Framework. W3C Rec, March 2003. Available at http://www.w3.org/TR/xptr-framework/.
[MPEG-7]
Information Technology - Multimedia Content Description Interface (MPEG-7). Standard No. ISO/IEC 15938:2001, International Organization for Standardization (ISO), 2001.
[temporal URI]
S. Pfeiffer, C. Parker and A. Pang. Specifying time intervals in URI queries and fragments of time-based Web resources. Internet Draft, March 2005. Available at http://annodex.net/TR/draft-pfeiffer-temporal-fragments-03.html.
[CMML]
Continuous Media Markup Language (CMML), Version 2.1. IETF Internet-Draft, 4 March 2006. Available at http://www.annodex.net/TR/draft-pfeiffer-cmml-03.txt.
[ROE]
Rich Open multitrack media Exposition (ROE). Xiph Wiki. Retrieved 13 April 2009 at http://wiki.xiph.org/index.php/ROE.
[Skeleton]
Ogg Skeleton. Xiph Wiki. Retrieved 13 April 2009 at http://wiki.xiph.org/OggSkeleton.
[MPEG-21]
Information Technology - Multimedia Framework (MPEG-21). Standard No. ISO/IEC 21000:2002, International Organization for Standardization (ISO), 2002. Available at http://www.chiariglione.org/mpeg/working_documents/mpeg-21/fid/fid-is.zip.
[SMPTE]
SMPTE RP 136: Time and Control Codes for 24, 25 or 30 Frame-Per-Second Motion-Picture Systems.
[ABNF]
Augmented BNF for Syntax Specifications: ABNF, Internet STD 68 (as of April 2009: RFC 5234).
[ISO Base Media File Format]
Information technology - Coding of audio-visual objects - Part 12: ISO base media file format. Retrieved 13 April 2009 at http://standards.iso.org/ittf/PubliclyAvailableStandards/c051533_ISO_IEC_14496-12_2008.zip

B Evaluation of fitness per media formats

In order to get a view on which media formats belong to which fitness category, an overview is provided for key media formats. In the following tables, the 'X' symbol indicates that the media format does not support a particular fragment axis. The tables are separated by video/audio/image codecs and container formats.

Video Codec        | Track | Temporal | Spatial           | Name | Remark
H.261              | n/a   | fit      | unfit             | n/a  |
MPEG-1 Video       | n/a   | fit      | unfit             | n/a  |
H.262/MPEG-2 Video | n/a   | fit      | unfit             | n/a  |
H.263              | n/a   | fit      | unfit             | n/a  |
MPEG-4 Visual      | n/a   | fit      | unfit             | n/a  |
H.264/MPEG-4 AVC   | n/a   | fit      | conditionally fit | n/a  | Spatial fragment extraction is possible with Flexible Macroblock Ordering (FMO)
AVS                | n/a   | fit      | unfit             | n/a  |
Motion JPEG        | n/a   | fit      | unfit             | n/a  |
Motion JPEG2000    | n/a   | fit      | unfit             | n/a  | Spatial fragment extraction is possible in the compressed domain, but syntax element modifications are needed for every frame
VC-1               | n/a   | fit      | unfit             | n/a  |
Dirac              | n/a   | fit      | unfit             | n/a  | When Dirac is stored in the Ogg [RFC 3533] container using Skeleton [Skeleton], ROE [ROE] and CMML [CMML], track, temporal and named fragments are supported
Theora             | n/a   | fit      | unfit             | n/a  | When Theora is stored in the Ogg [RFC 3533] container using Skeleton [Skeleton], ROE [ROE] and CMML [CMML], track, temporal and named fragments are supported
RealVideo          | n/a   | fit(?)   | unfit(?)          | n/a  |
DV                 | n/a   | fit      | unfit             | n/a  |
Betacam            | n/a   | fit      | unfit             | n/a  |
OMS                | n/a   | fit      | unfit             | n/a  |
SNOW               | n/a   | fit      | unfit             | n/a  |

Audio Codec        | Track | Temporal | Spatial | Name | Remark
MPEG-1 Audio       | n/a   | fit      | n/a     | n/a  |
AAC                | n/a   | fit      | n/a     | n/a  |
Vorbis             | n/a   | fit      | n/a     | n/a  | When Vorbis is stored in the Ogg [RFC 3533] container using Skeleton [Skeleton], ROE [ROE] and CMML [CMML], track, temporal and named fragments are supported
FLAC               | n/a   | fit      | n/a     | n/a  | When FLAC is stored in the Ogg [RFC 3533] container using Skeleton [Skeleton], ROE [ROE] and CMML [CMML], track, temporal and named fragments are supported
Speex              | n/a   | fit      | n/a     | n/a  | When Speex is stored in the Ogg [RFC 3533] container using Skeleton [Skeleton], ROE [ROE] and CMML [CMML], track, temporal and named fragments are supported
AC-3/Dolby Digital | n/a   | fit      | n/a     | n/a  |
TTA                | n/a   | fit      | n/a     | n/a  |
WMA                | n/a   | fit      | n/a     | n/a  |
MLP                | n/a   | fit      | n/a     | n/a  |

Image Codec | Track | Temporal | Spatial           | Name | Remark
JPEG        | n/a   | n/a      | unfit             | n/a  |
JPEG2000    | n/a   | n/a      | conditionally fit | n/a  |
JPEG LS     | n/a   | n/a      | unfit             | n/a  |
HD Photo    | n/a   | n/a      | conditionally fit | n/a  |
GIF         | n/a   | n/a      | unfit             | n/a  |
PNG         | n/a   | n/a      | unfit             | n/a  |

Container Format | Track                       | Temporal | Spatial | Name                  | Remark
MOV              | conditionally fit           | n/a      | n/a     | conditionally fit     | QTText provides named chapters
MP4              | conditionally fit           | n/a      | n/a     | conditionally fit     | MPEG-4 TimedText provides named sections
3GP              | conditionally fit           | n/a      | n/a     | conditionally fit     | 3GPP TimedText provides named sections
MPEG-21 FF       | conditionally fit           | n/a      | n/a     | conditionally fit     | MPEG-21 Digital Item Declaration provides named sections
OGG              | conditionally fit (1)       | fit      | n/a     | conditionally fit (2) | (1) Using ROE [ROE] and Skeleton [Skeleton], track selection is possible; (2) using ROE, CMML [CMML] and Skeleton, named addressing of temporal and track fragments is possible
Matroska         | conditionally fit           | n/a      | n/a     | conditionally fit     |
MXF              | conditionally fit           | n/a      | n/a     | conditionally fit     |
ASF              | conditionally fit           | n/a      | n/a     | conditionally fit     | Marker objects provide named anchor points
AVI              | conditionally fit           | n/a      | n/a     | X                     |
FLV              | conditionally fit           | n/a      | n/a     | conditionally fit     | Cue points provide named anchor points
RMFF             | fit or conditionally fit(?) | n/a      | n/a     | ?                     |
WAV              | X                           | n/a      | n/a     | X                     |
AIFF             | X                           | n/a      | n/a     | X                     |
XMF              | ?                           | n/a      | n/a     | ?                     |
AU               | X                           | n/a      | n/a     | X                     |
TIFF             | conditionally fit           | n/a      | n/a     | conditionally fit     | Can store multiple images (i.e., tracks) in one file; possibility to insert "private tags" (i.e., proprietary information)

C Acknowledgements (Non-Normative)

This document is the work of the W3C Media Fragments Working Group.

Members of the Working Group are (at the time of writing, and in alphabetical order): Eric Carlson (Apple, Inc.), Michael Hausenblas (DERI Galway at the National University of Ireland, Galway, Ireland), Jack Jansen (CWI), Yves Lafon (W3C/ERCIM), Erik Mannens (IBBT), Thierry Michel (W3C/ERCIM), Guillaume (Jean-Louis) Olivrin (Meraka Institute), Soohong Daniel Park (Samsung Electronics Co., Ltd.), Conrad Parker (W3C Invited Experts), Silvia Pfeiffer (W3C Invited Experts), David Singer (Apple, Inc.), Raphaël Troncy (CWI), Vassilis Tzouvaras (K-Space), Davy Van Deursen (IBBT)

The people who have contributed to discussions on public-media-fragment@w3.org are also gratefully acknowledged. In particular: Pierre-Antoine Champin, Ken Harrenstien, Henrik Nordstrom, Sam Sneddon and Felix Sasaki.