
W3C

Media Fragments URI 1.0

W3C Candidate Recommendation 27 October 2011

This version:
http://www.w3.org/TR/2011/WD-media-frags-20111027/
Latest version:
http://www.w3.org/TR/media-frags
Previous version:
http://www.w3.org/TR/2011/WD-media-frags-20110317/
Editors:
Raphaël Troncy , EURECOM
Erik Mannens , IBBT Multimedia Lab, University of Ghent
Silvia Pfeiffer , W3C Invited Expert
Davy Van Deursen , IBBT Multimedia Lab, University of Ghent
Contributors:
Michael Hausenblas , DERI, National University of Ireland, Galway
Philip Jägenstedt , Opera Software
Jack Jansen , CWI, Centrum Wiskunde & Informatica, Amsterdam
Yves Lafon , W3C
Conrad Parker , W3C Invited Expert
Thomas Steiner , Google, Inc.

Abstract

This document describes the Media Fragments 1.0 specification. It specifies the syntax for constructing media fragment URIs and explains how to handle them when used over the HTTP protocol. The syntax is based on the specification of particular name-value pairs that can be used in URI fragment and URI query requests to restrict a media resource to a certain fragment.

Status of this Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This is the Candidate Recommendation of the Media Fragments URI 1.0 specification. It has been produced by the Media Fragments Working Group , which is part of the W3C Video on the Web Activity . The Working Group expects to advance this specification to Recommendation Status.

The W3C Membership and other interested parties are invited to review this Candidate Recommendation and send comments through 20 November 2011. Please send comments about this document to the public-media-fragment@w3.org mailing list ( public archive ). Use "[CR Media Fragment]" in the subject line of your email. We expect that sufficient feedback to determine its future will have been received by 20 November 2011. This specification will remain a Candidate Recommendation until at least 20 November 2011.

The Media Fragments Working Group will advance this specification to Proposed Recommendation when the following exit criteria have been met:

  1. Sufficient reports of implementation experience have been gathered to demonstrate that the Media Fragments URI syntax and processing features are implementable and are interpreted in a consistent manner. To do so, the Working Group will ensure that all features have been implemented at least twice in an interoperable way.
  2. The implementations have been developed independently.
  3. The Working Group has adopted a public test suite for the User Agents and the Servers and has produced an implementation report for this specification.

The implementation results are publicly released and are intended solely to be used as proof of Media Fragments URI implementability. They are only a snapshot of the actual implementation behaviors at one moment in time, as these implementations may not be immediately available to the public. The interoperability data is not intended to be used for assessing or grading the performance of any individual implementation. Any feedback on implementation and use of this specification would be very welcome. To the extent possible, please provide a separate email message for each distinct comment.

This Candidate Recommendation version of the Media Fragments URI 1.0 specification incorporates requests for changes from comments sent during the first and second Last Call reviews, as agreed with the commenters (see Disposition of Last Call comments ), and changes following implementation experiences from the Working Group. The Working Group would like to point out that the processing of media fragment URIs when used over the HTTP protocol is now described in a separate document, Protocol for Media Fragments 1.0 Resolution in HTTP .

For convenience, the differences between this CR version and the Second Last Call Working Draft are highlighted in the CR Diff document . The differences between the Second Last Call Working Draft and the First Last Call Working Draft are also highlighted in the CR Diff document .

Publication as a Candidate Recommendation does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy . W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy .

Table of Contents

1 Introduction
2 Standardisation Issues
    2.1 Terminology
    2.2 Media Fragments Standardisation
        2.2.1 URI Fragments
        2.2.2 URI Queries
3 URI fragment and URI query
    3.1 When to choose URI fragments? When to choose URI queries?
    3.2 Resolving URI fragments within the user agent
    3.3 Resolving URI fragments with server help
    3.4 Resolving URI fragments in a proxy cacheable manner
    3.5 Resolving URI queries
    3.6 Combining URI fragments and URI queries
4 Media Fragments Syntax
    4.1 General Structure
    4.2 Fragment Dimensions
        4.2.1 Temporal Dimension
            4.2.1.1 Normal Play Time (NPT)
            4.2.1.2 SMPTE time codes
            4.2.1.3 Wall-clock time code
        4.2.2 Spatial Dimension
        4.2.3 Track Dimension
        4.2.4 Named ID Dimension
        4.2.5 Common Syntax
5 Media Fragments Processing
    5.1 Processing Media Fragment URI
        5.1.1 Processing name-value components
        5.1.2 Processing name-value lists
    5.2 Protocol for URI fragment Resolution in HTTP
    5.3 Protocol for URI query Resolution in HTTP
6 Media Fragments Semantics
    6.1 Valid Media Fragment URIs
        6.1.1 Valid temporal dimension
        6.1.2 Valid spatial dimension
        6.1.3 Valid track dimension
        6.1.4 Valid id dimension
    6.2 Errors detectable based on the URI
        6.2.1 Errors on the general URI level
        6.2.2 Errors on the temporal dimension
        6.2.3 Errors on the spatial dimension
        6.2.4 Errors on the track dimension
        6.2.5 Errors on the id dimension
    6.3 Errors detectable based on information of the source media
        6.3.1 Errors on the general level
        6.3.2 Errors on the temporal dimension
        6.3.3 Errors on the spatial dimension
        6.3.4 Errors on the track dimension
        6.3.5 Errors on the id dimension
7 Notes to Implementors (non-normative)
    7.1 Browsers Rendering Media Fragments
    7.2 Clients Displaying Media Fragments
    7.3 All Media Fragment Clients
    7.4 Media Fragment Servers
    7.5 Media Fragment Web Applications
8 Conclusions
    8.1 Qualification of Media Resources

Appendices

A References
B Collected ABNF Syntax for URI (Non-Normative)
C Collected ABNF Syntax for HTTP Headers (Non-Normative)
D Processing media fragment URIs in RTSP (Non-Normative)
    D.1 How to map Media Fragment URIs to RTSP protocol methods
        D.1.1 Dealing with the media fragment URI dimensions in RTSP
            D.1.1.1 Temporal Media Fragment URIs
            D.1.1.2 Track Media Fragment URIs
            D.1.1.3 Spatial Media Fragment URIs
            D.1.1.4 Id Media Fragment URIs
        D.1.2 Putting the media fragment URI dimensions together in RTSP
        D.1.3 Caching and RTSP for media fragment URIs
E Acknowledgements (Non-Normative)
F Change Log (Non-Normative)


1 Introduction

Audio and video resources on the World Wide Web are currently treated as "foreign" objects, which can only be embedded using a plugin that is capable of decoding and interacting with the media resource. Specific media servers are generally required to provide for server-side features such as direct access to time offsets into a video without the need to retrieve the entire resource. Support for such media fragment access varies between different media formats and inhibits standard means of dealing with such content on the Web.

This specification provides for a media-format independent, standard means of addressing media fragments on the Web using Uniform Resource Identifiers (URI). In the context of this document, media fragments are regarded along four different dimensions: temporal, spatial, track, and id. The id dimension allows a temporal fragment to be marked with a name and then addressed through a URI using that name. The specified addressing schemes apply mainly to audio and video resources - the spatial fragment addressing may also be used on images.

The aim of this specification is to enhance the Web infrastructure for supporting the addressing and retrieval of subparts of time-based Web resources, as well as the automated processing of such subparts for reuse. Example uses are the sharing of such fragment URIs with friends via email, the automated creation of such fragment URIs in a search engine interface, or the annotation of media fragments with RDF. Such use case examples, as well as other side conditions on this specification and a survey of existing media fragment addressing approaches, are provided in the Use cases and requirements for Media Fragments document that accompanies this specification.

The media fragment URIs specified in this document have been implemented and demonstrated to work with media resources over the HTTP protocol. This specification does not define the protocol aspects of handling a media fragment over RTSP in its normative sections. We expect the media fragment URI syntax to be generic, and a possible mapping between this syntax and RTSP messages can be found in an appendix of this specification, D Processing media fragment URIs in RTSP . Existing media formats in their current representations and implementations provide varying degrees of support for this specification. It is expected that over time, media formats, media players, Web browsers, media and Web servers, as well as Web proxies will be extended to adhere to the full specification. This specification will help make video a first-class citizen of the World Wide Web.

2 Standardisation Issues

2.1 Terminology

The keywords MUST , MUST NOT , SHOULD and SHOULD NOT are to be interpreted as defined in RFC 2119 .

According to RFC 3986 , the term "URI" does not include relative references. In this document, we consider both URIs and relative references. Consequently, we use the term "URI reference" as defined in RFC 3986 (section 4.1). For simplicity reasons, this document, however, only uses the term "media fragment URI" in place of "media fragment URI reference".

The following terms are used frequently in this document and need to be clearly understood:

  • URI fragment: The fragment component is indicated by the presence of a number sign ("#") character and terminated by the end of the URI.
  • URI query: The query component is indicated by the first question mark ("?") character and terminated by a number sign ("#") character or by the end of the URI.
  • Media fragment URI: a URI addressing subparts of a media resource - that could be through URI queries or URI fragments.

2.2 Media Fragments Standardisation

The basis for the standardisation of media fragment URIs is the URI specification, RFC 3986 . Providing media fragment identification information in URIs refers here to the specification of the structure of a URI fragment or a URI query. This document will explain how URI fragments and URI queries are structured to identify media fragments. It normalises the name-value parameters used in URI fragments and URI queries to address media fragments. These build on existing CGI parameter conventions.

In this section, we look at implications of standardising the structure of media fragment URIs.

2.2.1 URI Fragments

The URI specification RFC 3986 says about the format of a URI fragment in Section 3.5:

"The fragment's format and resolution is [..] dependent on the media type [RFC2046] of a potentially retrieved representation. [..] Fragment identifier semantics are independent of the URI scheme and thus cannot be redefined by scheme specifications."

This essentially means that only media type definitions (as registered through the process defined in RFC 4288 ) are able to introduce a standard structure on URI fragments for that mime type. One part of the registration process of a media type can include information about how fragment identifiers in URIs are constructed for use in conjunction with this media type.

Note that the registration of URI fragment construction rules as expressed in Section 4.11 of RFC 4288 is only a SHOULD-requirement. An analysis of all media type registrations showed that there is not a single media type registration in the audio/*, image/*, video/* branches that is currently defining fragments or fragment semantics.

The Media Fragment WG has no authority to update the registries of all targeted media types. To the best of our knowledge, there are only a few media types that actually have a specified fragment format, even if it is not registered with the media type: these include Ogg, MPEG-4, and MPEG-21. Further, only a small number of software packages actually supports these fragment formats. For all others, the semantics of the fragment are considered to be unknown.

As such, the intention of this document is to propose a specification to all media type owners in the audio/*, image/*, and video/* branches for a structured approach to URI fragments and for specification of commonly agreed dimensions to address media fragments (i.e. subparts of a media resource) through URI fragments. We recommend media type owners to harmonize their existing schemes with the ones proposed in this document and update or add the fragment semantics specification to their media type registration.

2.2.2 URI Queries

The URI specification RFC 3986 says about the format of a URI query in Section 3.4:

"The query component [..] serves to identify a resource within the scope of the URI's scheme and naming authority (if any). [..] Query components are often used to carry identifying information in the form of "key=value" pairs [..]."

URI query specifications are more closely linked to the URI scheme, some of which do not even use a query component. We are mostly concerned with the HTTP RFC 2616 and the RTP/RTSP rfc2326 protocols here, which both support query components. HTTP says nothing about how a URI query has to be interpreted. RTSP explicitly says that fragment and query identifiers do not have a well-defined meaning at this time, with the interpretation left to the RTSP server.

The URI specification RFC 3986 says generally that the data within the URI is often parsed by both the user agent and one or more servers. It refers in particular to HTTP in Section 7.3:

"In HTTP, for example, a typical user agent will parse a URI into its five major components, access the authority's server, and send it the data within the authority, path, and query components. A typical server will take that information, parse the path into segments and the query into key/value pairs, and then invoke implementation-specific handlers to respond to the request."

Since the interpretation of query components resides with the functionality of servers, the intention of this document wrt query components is to recommend standard name-value pair formats for use in addressing media fragments through URI queries. We recommend server and server-type software providers to harmonize their existing schemes in use with media resources to support the nomenclature proposed in this specification.

3 URI fragment and URI query

Editorial note  
This section is non-normative

To address a media fragment, one needs to find ways to convey the fragment information. This specification builds on URIs RFC 3986 . Every URI is defined as consisting of four parts, as follows:

<scheme name> : <hierarchical part> [ ? <query> ] [ # <fragment> ]

There are therefore two possibilities for representing the media fragment addressing in URIs: the URI query part or the URI fragment part .

3.1 When to choose URI fragments? When to choose URI queries?

For media fragment addressing, both approaches - URI query and URI fragment - are useful.

The main difference between a URI query and a URI fragment is that a URI query produces a new resource, while a URI fragment provides a secondary resource that has a relationship to the primary resource. URI fragments are resolved from the primary resource without another retrieval action. This means that a user agent should be capable of resolving a URI fragment on a resource it has already received without having to fetch more data from the server.

A further requirement put on a URI fragment is that the media type of the retrieved fragment should be the same as the media type of the primary resource. Among other things, this means that a URI fragment that points to a single video frame out of a longer video results in a one-frame video, not in a still image. To extract a still image, one would need to create a URI query scheme - something not envisaged here, but easy to devise.

There are different types of media fragment addressing in this specification. As noted in the Use cases and requirements for Media Fragments document (section "Fitness Conditions on Media Containers/Resources"): not all container formats and codecs are "fit" for supporting the different types of fragment URIs. "Fitness" relates to the fact that a media fragment can be extracted from the primary resource without syntax element modifications or transcoding of the bitstream.

Resources that are "fit" can therefore be addressed with a URI fragment. Resources that are "conditionally fit" can be addressed with a URI fragment with an additional retrieval action that retrieves the modified syntax elements but leaves the codec data untouched. Resources that are "unfit" require transcoding. Such transcoded media fragments cannot be addressed with URI fragments, but only with URI queries.

Therefore, when addressing a media fragment with the URI mechanism, the author has to know whether this media fragment can be produced from the (primary) resource itself without any transcoding activities or whether it requires transcoding. In the latter case, the only choice is to use a URI query and to use a server that supports transcoding and delivery of a (primary) derivative resource to satisfy the query.

3.2 Resolving URI fragments within the user agent

A user agent may itself resolve and control the presentation of media fragment URIs. The simplest case arises where the user agent has already downloaded the entire resource and can perform the extraction from its locally cached copy. For some media types, it may also be possible to perform the extraction over the network without any special protocol assistance. For temporal fragments this requires a user agent to be able to seek on the media resource using existing protocol mechanisms.

An example of a URI fragment used to address a media fragment is http://www.example.org/video.ogv#t=60,100 . In this case, the user agent knows that the primary resource is http://www.example.org/video.ogv and that it is only expected to display the portion of the primary resource that relates to the fragment #t=60,100 , i.e. seconds 60-100. Thus, the relationship between the primary resource and the media fragment is maintained.

In traditional URI fragment retrieval, a user agent requests the complete primary resource from the server and then applies the fragmentation locally. In the media fragment case, this would result in a retrieval action on the complete media resource, on which the user agent would then locally perform its fragment extraction - something generally unviable for such large resources.

Therefore, media resources are not always retrieved over HTTP using a single request. They may be retrieved as a sequence of byte range requests on the original resource URI, or may be retrieved as a sequence of requests to different URIs each representing a small part of the media. The reasons for such mechanisms include bandwidth conservation, where a client chooses to space requests out over time during playback in order to maximize bandwidth available for other activities, and bandwidth adaptation, where a client selects among various representations with varying bitrate depending on the current bandwidth availability.

A user agent that knows how to map media fragments to byte ranges will be able to satisfy a URI fragment request such as the above example by itself. This is typically the case for user agents that know how to seek to media fragments over the network. For example, a user agent that deals with a media file that includes an index of its seekable structures can resolve the media fragment addresses to byte ranges from the index. This is the case e.g. with seekable QuickTime files. Another example is a user agent that knows how to seek on a media file through a sequence of byte range requests and eventually receives the correct media fragment. This is the case e.g. with Ogg files in Firefox versions above 3.5.

Similarly, a user agent that knows how to map media fragments to a sequence of URIs can satisfy a URI fragment request by itself. This is typically the case for user agents that perform adaptive streaming. For example, a user agent that deals with a media resource that contains a sequence of URIs, each a media file of a few seconds duration, can resolve the media fragment addresses to a subsequence of those URIs. This is the case with QuickTime adaptive bitrate streaming or IIS Smooth Streaming.

If such a user agent natively supports the media fragment syntax as specified in this document, it is deemed conformant to this specification for fragments and for the particular dimension.

3.3 Resolving URI fragments with server help

For user agents that natively support the media fragment syntax, but have to use their own seeking approach, this specification provides an optimisation that can make the byte offset seeking more efficient. It requires a conformant server with which the user agent will follow a protocol defined later in this document.

In this approach, the user agent asks the server to do the byte range mapping for the media fragment address itself and send back the appropriate byte ranges. This can not be done through the URI, but has to be done through adding protocol headers. User agents that interact with a conformant server to follow this protocol will receive the appropriate byte ranges directly and will not need to do costly seeking over the network.

Note that it is important that the server also informs the user agent what actual media fragment range it was able to retrieve. This is important since in the compressed domain it is not possible to extract data at an arbitrary resolution, but only at the resolution that the data was packaged in. For example, even if a user asked for http://www.example.org/video.ogv#t=60,100 and the user agent sent a range request of t=60,100 to the server, the server may only be able to return the range t=58,103 as the closest decodable range that encapsulates all the required data.

Note that if done right, the native user agent support for media fragments and the improved server support can be integrated without problems: the user agent just needs to include the byte range and the media fragment range request in one request. A server that does not understand the media fragment range request will only react to the byte ranges, while a server that understands them will ignore the byte range request and only reply with the correct byte ranges. The user agent will understand from the response whether it received a reply to the byte ranges or the media fragment ranges request and can react accordingly.

3.5 Resolving URI queries

The described URI fragment addressing methods only work for byte-identical segments of a media resource, since we assume a simple mapping between the media fragment and bytes that each infrastructure element can deal with. Where it is impossible to maintain byte-identity and some sort of transcoding of the resource is necessary, the user agent is not able to resolve the fragmentation by itself and a server interaction is required. In this case, URI queries have to be used since they result in a server interaction and can deliver a transcoded resource.

Another use for URI queries is when a user agent actually wants to receive a completely new resource instead of just a byte range from an existing (primary) resource. This is, for example, the case for playlists of media fragment resources. Even if a media fragment could be resolved through a URI fragment, the URI query may be more desirable since it does not carry with itself the burden of the original primary resource - its file headers may be smaller, its duration may be smaller, and it does not automatically allow access to the remainder of the original primary resource.

When URI queries are used, the retrieval action has to additionally make sure to create a fully valid new resource. For example, for the Ogg format, this implies a reconstruction of Ogg headers to accurately describe the new resource (e.g. a non-zero start-time or different encoding parameters). Such a resource will be cached in Web proxies as a different resource to the original primary resource.

An example URI query that includes a media fragment specification is http://www.example.org/video.ogv?t=60,100 . This results in a video of duration 40s (assuming the original video was more than 100s long).

Note that this resource has no per-se relationship to the original primary resource. When a user agent uses such a URI with e.g. an HTML5 video element, the browser has no knowledge of the original resource and can only display this video as a 40s long video starting at 0s. The context of the original resource is lost.

A user agent may want to display the original start time of the resource as the start time of the video in order to be consistent with the information in the URI. It is possible to achieve this in one of two ways: either the video file itself has some knowledge that it is an extract from a different file and starts at an offset, or the user agent is told through the retrieval action which original primary resource the retrieved resource relates to and can find out information about it through another retrieval action. This latter option will be discussed later in this document.

An example of a media resource that has the required kind of knowledge about itself is an Ogg file. Ogg files that have a skeleton track and were created correctly from the primary resource will know that their start time is not 0s but 60s in the above example. The browser can simply parse this information out of the received bitstream and may display a timeline that starts at 60s and ends at 100s in the video controls if it so desires.

Another option is that the browser parses the URI and knows about how media resources have a fragment specification that follows a standard. Then the browser can interpret the query parameters and extract the correct start and end times and also the original primary resource. It can then also display a timeline that starts at 60s and ends at 100s in the video controls. Further it can allow a right-click menu to click through to the original resource if required.

A use case where the video controls may neither start at 0s nor at 60s is a mashed-up video created through a list of media fragment URIs. In such a playlist, the user agent may prefer to display a single continuous timeline across all the media fragments rather than a collection of individual timelines for each fragment. Thus, the 60s to 100s fragment may e.g. be mapped to an interval from 3min20s to 4min.

No new protocol headers are required to execute a URI query for media fragment retrieval. Some optional protocol headers that improve the information exchange will be recommended later in this document.

3.6 Combining URI fragments and URI queries

A combination of a URI query for a media fragment with a URI fragment yields a URI fragment resolution on top of the newly created resource. Since a URI with a query part creates a new resource, the fragment offset has to be applied to the new resource. This is simply conformant behaviour with respect to the URI standard RFC 3986 .

For example, http://www.example.org/video.ogv?t=60,100#t=20 will lead to the 20s fragment offset being applied to the new resource starting at 60 going to 100. Thus, the reply to this is a 40s long resource whose playback will start at an offset of 20s.
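
As an informal illustration (not part of this specification), the following JavaScript sketch computes the resulting duration and playback interval for such a combination, assuming both the query and the fragment use plain NPT seconds; the function name is made up for this example.

// Non-normative sketch: apply a temporal URI fragment to the resource
// produced by a temporal URI query (both in plain NPT seconds).
function combineQueryAndFragment(queryStart, queryEnd, fragStart, fragEnd) {
  // The query produces a new resource of this duration, starting at 0.
  var newDuration = queryEnd - queryStart;
  // The fragment is then resolved against the new resource.
  var playFrom = Math.min(fragStart, newDuration);
  var playTo = (fragEnd === undefined) ? newDuration : Math.min(fragEnd, newDuration);
  return { duration: newDuration, playFrom: playFrom, playTo: playTo };
}

// http://www.example.org/video.ogv?t=60,100#t=20
combineQueryAndFragment(60, 100, 20, undefined);
// => { duration: 40, playFrom: 20, playTo: 40 }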

Editorial note: Silvia  
We should at the end of the document set up a table with all the different addressing types and http headers and say what we deem is conformant and how to find out whether a server or user agent is conformant or not.

4 Media Fragments Syntax

This section describes the external representation of a media fragment specifier, and how this should be interpreted.

Guiding principles for the definition of the media fragments syntax were as follows:

  • a. The MF syntax for queries and fragments should be identical
  • b. The MF syntax should be unambiguous
  • c. The MF syntax should allow any UTF-8 character in track or id names
  • d. The MF syntax should adhere to applicable formal standards
  • e. The MF syntax should adhere to de-facto usage of queries and fragments
  • f. The MF syntax should be as concise as possible, with no unneeded grammatical fluff

4.1 General Structure

A list of name-value pairs is encoded in the query or fragment component of a URI. The name and value components are separated by an equal sign ( = ), while multiple name-value pairs are separated by an ampersand ( & ).

name          = fragment - "&" - "="

The names and values can be arbitrary Unicode strings, encoded in UTF-8 and percent-encoded as per RFC 3986 . Here are some examples of URIs with name-value pairs in the fragment component, to demonstrate the general structure:

http://www.example.com/example.ogv#a=b&c=d

http://www.example.com/example.ogv#t=10,20
http://www.example.com/example.ogv#track=audio&t=10,20
http://www.example.com/example.ogv#id=Cap%C3%ADtulo%202

While arbitrary name-value pairs can be encoded in this manner, this specification defines a fixed set of dimensions. The dimension keyword is encoded in the name component, while dimension-specific syntax is encoded in the value component.

Section 5.1.1 Processing name-value components defines in more detail how to process the name-value pair syntax, arriving at a list of name-value Unicode string pairs. The syntax definitions in 4.2 Fragment Dimensions apply to these Unicode strings.

4.2 Fragment Dimensions

Media fragments support addressing the media along four dimensions:

temporal

This dimension denotes a specific time range in the original media, such as "starting at second 10, continuing until second 20";

spatial

this dimension denotes a specific range of pixels in the original media, such as "a rectangle with size (100,100) with its top-left at coordinate (10,10)";

track

this dimension denotes one or more tracks in the original media, such as "the english audio and the video track";

id

this dimension denotes a named temporal fragment within the original media, such as "chapter 2", and can be seen as a convenient way of specifying a temporal fragment.

All dimensions are logically independent and can be combined; the outcome is independent of the order of the dimensions. Note however that the id dimension is a shortcut for the temporal dimension; combining both dimensions needs to be treated as described in section 6.2.1 Errors on the general URI level .

The track dimension refers to one of a set of parallel media streams (e.g. "the english audio track for a video"), not to a (possibly self-contained) section of the source media (e.g. "Audio track 2 of a CD").

The name dimension cannot be combined with the other dimensions, because the semantics depend on the underlying source media format: some media formats support naming of temporal extents, others support naming of groups of tracks, etc. Error semantics are discussed in 6 Media Fragments Semantics .

4.2.1 Temporal Dimension

Temporal clipping is denoted by the name t , and specified as an interval with a begin time and an end time (or an in-point and an out-point, in video editing terms). Either or both may be omitted, with the begin time defaulting to 0 seconds and the end time defaulting to the duration of the source media. The interval is half-open: the begin time is considered part of the interval whereas the end time is considered to be the first time point that is not part of the interval. If only a single number is given, it corresponds to the begin time, except if it is preceded by a comma, in which case it indicates the end time.

Examples:

t=10,20   # => results in the time interval [10,20)
t=,20     # => results in the time interval [0,20)
t=10,     # => results in the time interval [10,end)
t=10      # => also results in the time interval [10,end)

Temporal clipping can be specified either as Normal Play Time (npt) RFC 2326 , as SMPTE time codes SMPTE , or as real-world clock time (clock) RFC 2326 . Begin and end times are always specified in the same format. The format is specified by name, followed by a colon ( : ), with npt: being the default.

timeprefix    = %x74                                      ; "t"

In this version of the media fragments specification there is no extensibility mechanism to add time format specifiers.

4.2.1.1 Normal Play Time (NPT)

Normal Play Time can either be specified as seconds, with an optional fractional part to indicate milliseconds, or as colon-separated hours, minutes and seconds (again with an optional fraction). Minutes and seconds must be specified as exactly two digits; hours and fractional seconds can be any number of digits. The hours, minutes and seconds specification for NPT is a convenience only; it does not signal frame accuracy. The specification of the "npt:" identifier is optional since NPT is the default time scheme. This specification builds on the RTSP specification of NPT RFC 2326 .

npt-sec       =  1*DIGIT [ "." *DIGIT ]                     ; definitions taken
npt-hhmmss    =  npt-hh ":" npt-mm ":" npt-ss [ "." *DIGIT] ; from RFC 2326,
npt-mmss      =  npt-mm ":" npt-ss [ "." *DIGIT]
npt-hh        =  1*DIGIT                                    ; any positive number
npt-mm        =  2DIGIT                                     ; 0-59
npt-ss        =  2DIGIT                                     ; 0-59
npttimedef    =  [ deftimeformat ":"] ( npttime [ "," npttime ] ) / ( "," npttime )
deftimeformat =  %x6E.70.74                                 ; "npt"
npttime       =  npt-sec / npt-mmss / npt-hhmmss

Examples:

t=npt:10,20          # => results in the time interval [10,20)
t=npt:120,           # => results in the time interval [120,end)
t=npt:,121.5         # => results in the time interval [0,121.5)
t=0:02:00,121.5      # => results in the time interval [120,121.5)
t=npt:120,0:02:01.5  # => also results in the time interval [120,121.5)
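
As a non-normative illustration, the following JavaScript sketch converts a single NPT time value (as it would appear on either side of the comma) into seconds; it is simplified with respect to the full processing rules in 5.1 Processing Media Fragment URI, and the function name is made up for this example.

// Non-normative sketch: convert one NPT time value ("10", "121.5",
// "02:00" or "0:02:01.5") into seconds. Returns NaN for malformed input.
function nptToSeconds(npt) {
  var m = npt.match(/^(?:(\d+):)?(\d{2}):(\d{2}(?:\.\d*)?)$/);
  if (m) {
    var hours = m[1] ? parseInt(m[1], 10) : 0;     // npt-hh (optional)
    var minutes = parseInt(m[2], 10);              // npt-mm, 0-59
    var seconds = parseFloat(m[3]);                // npt-ss with optional fraction
    if (minutes > 59 || seconds >= 60) return NaN;
    return hours * 3600 + minutes * 60 + seconds;
  }
  if (/^\d+(\.\d*)?$/.test(npt)) return parseFloat(npt);  // npt-sec
  return NaN;
}

nptToSeconds("0:02:01.5");  // => 121.5
nptToSeconds("10");         // => 10
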
4.2.1.2 SMPTE time codes

SMPTE time codes are a way to address a specific frame (or field) without running the risk of rounding errors causing a different frame to be selected. The format is always colon-separated hours, minutes, seconds and frames. Frames are optional, defaulting to 00. If the source format has a further subdivision of frames (such as odd/even fields in interlaced video) these can be specified further with a number after a dot ( . ). The SMPTE format name must always be specified, because the interpretation of the fields depends on the format. The SMPTE formats supported in this version of the specification are:

  • smpte ,
  • smpte-25 ,
  • smpte-30 and
  • smpte-30-drop .

smpte is a synonym for smpte-30 .

smptetimedef  = smpteformat ":" ( frametime [ "," frametime ] ) / ( "," frametime )

Examples:

t=smpte-30:0:02:00,0:02:01:15        # => results in the time interval [120,121.5)
t=smpte-25:0:02:00:00,0:02:01:12.40  # => results in the time interval [120,121.5)
                                     #    (80 or 100 subframes per frame seem typical)
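
The following JavaScript sketch is a non-normative illustration of how a non-drop-frame SMPTE time code relates to seconds for a given frame rate; drop-frame formats (smpte-30-drop) and subframes are not handled, and the function name is made up for this example.

// Non-normative sketch: convert a non-drop-frame "hh:mm:ss[:ff]" SMPTE
// time code to seconds for the given frames-per-second rate.
// Subframes (an optional ".nn" suffix on the frame field) are not handled.
function smpteToSeconds(timecode, fps) {
  var parts = timecode.split(":");
  if (parts.length < 3 || parts.length > 4) return NaN;
  var hours = parseInt(parts[0], 10);
  var minutes = parseInt(parts[1], 10);
  var seconds = parseInt(parts[2], 10);
  var frames = (parts.length === 4) ? parseInt(parts[3], 10) : 0;  // defaults to 00
  return hours * 3600 + minutes * 60 + seconds + frames / fps;
}

smpteToSeconds("0:02:01:15", 30);  // => 121.5 (smpte-30)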

Using SMPTE timecodes may result in frame-accurate begin and end times, but only if the timecode format used in the media fragment specifier is the same as that used in the original media item.

4.2.1.3 Wall-clock time code

Wall-clock time codes are a way to address a real-world clock time that is typically associated with a live video stream. These are the same time codes that are used by RTSP RFC 2326 , by SMIL SMIL , and by HTML5 HTML 5 . The scheme uses ISO 8601 UTC timestamps (http://www.iso.org/iso/date_and_time_format). The format separates the date from the time with a "T" character and the string ends with "Z", which includes time zone capabilities. To that effect, the ABNF grammar refers to RFC 3339 , which includes the relevant part of ISO 8601 in ABNF form. The time scheme identifier is "clock".

clocktimedef  = clockformat ":" ( clocktime [ "," clocktime ] ) / ( "," clocktime )
clockformat   = %x63.6C.6F.63.6B                          ; "clock"
clocktime     = ( datetime / walltime / date )
datetime      = <date-time, defined in RFC 3339>
; WARNING: if your date-time contains '+' (or any other reserved character, per RFC 3986),
; it should be percent-encoded when used in a URI.

For convenience, the definition is copied here:

; defined in RFC 3339
date-fullyear   = 4DIGIT
date-month      = 2DIGIT  ; 01-12
date-mday       = 2DIGIT  ; 01-28, 01-29, 01-30, 01-31 based on
                          ; month/year
time-hour       = 2DIGIT  ; 00-23
time-minute     = 2DIGIT  ; 00-59
time-second     = 2DIGIT  ; 00-58, 00-59, 00-60 based on leap second
                          ; rules
time-secfrac    = "." 1*DIGIT
time-numoffset  = ("+" / "-") time-hour ":" time-minute
time-offset     = "Z" / time-numoffset
partial-time    = time-hour ":" time-minute ":" time-second
                  [time-secfrac]
full-date       = date-fullyear "-" date-month "-" date-mday
full-time       = partial-time time-offset
date-time       = full-date "T" full-time

Examples:

t=clock:2009-07-26T11:19:01Z,2009-07-26T11:20:01Z
               # => results in a 1 min interval
               #    on 26th Jul 2009 from 11hrs, 19min, 1sec
t=clock:2009-07-26T11:19:01Z
               # => starts on 26th Jul 2009 from 11hrs, 19min, 1sec
t=clock:,2009-07-26T11:20:01Z
               # => ends on 26th Jul 2009 from 11hrs, 20min, 1sec
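
Because the clock scheme uses RFC 3339 / ISO 8601 UTC timestamps, an implementation written in JavaScript could, for example, rely on the built-in Date parser to work with such values. The following non-normative sketch computes the length of a clock interval in seconds; the function name is made up for this example.

// Non-normative sketch: compute the length (in seconds) of a clock
// interval given as two RFC 3339 / ISO 8601 UTC timestamps.
function clockIntervalSeconds(beginTimestamp, endTimestamp) {
  var begin = new Date(beginTimestamp);
  var end = new Date(endTimestamp);
  if (isNaN(begin.getTime()) || isNaN(end.getTime())) return NaN;
  return (end.getTime() - begin.getTime()) / 1000;
}

clockIntervalSeconds("2009-07-26T11:19:01Z", "2009-07-26T11:20:01Z");  // => 60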

4.2.2 Spatial Dimension

Spatial clipping selects an area of pixels from visual media streams. For this release of the media fragment specification, only rectangular selections are supported. The rectangle can be specified as pixel coordinates or percentages.

Pixel coordinates are interpreted after taking into account the resource's dimensions, aspect ratio, clean aperture, resolution, and so forth, as defined for the format used by the resource. If an anamorphic format does not define how to apply the aspect ratio to the video data's dimensions to obtain the "correct" dimensions, then the user agent must apply the ratio by increasing one dimension and leaving the other unchanged.

Rectangle selection is denoted by the name xywh . The value is an optional format pixel: or percent: (defaulting to pixel) and 4 comma-separated integers. The integers denote x, y, width and height, respectively, with x=0, y=0 being the top left corner of the image. If percent is used, x and width are interpreted as a percentage of the width of the original media, and y and height are interpreted as a percentage of the original height.

xywhprefix    = %x78.79.77.68                             ; "xywh"

Examples:

xywh=160,120,320,240        # => results in a 320x240 box at x=160 and y=120
xywh=pixel:160,120,320,240  # => results in a 320x240 box at x=160 and y=120
xywh=percent:25,25,50,50    # => results in a 50%x50% box at x=25% and y=25%
If the clipping region is pixel-based and the image is multi-resolution (like an ICO file), the fragment MUST be ignored, so that the URL represents the entire image. More generally, pixel-based clipping of an image that does not have a single well-defined pixel resolution (width and height) is not recommended.
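
As a non-normative illustration of the percent interpretation, the following JavaScript sketch maps a percent-based xywh value to a pixel rectangle, given the width and height of the original media; the rounding shown here is an assumption of the sketch, not a requirement of this specification.

// Non-normative sketch: map xywh=percent:x,y,w,h to a pixel rectangle,
// given the width and height of the original media in pixels.
// Rounding to whole pixels is an assumption of this sketch.
function percentToPixelRect(x, y, w, h, mediaWidth, mediaHeight) {
  return {
    x: Math.round((x / 100) * mediaWidth),
    y: Math.round((y / 100) * mediaHeight),
    width: Math.round((w / 100) * mediaWidth),
    height: Math.round((h / 100) * mediaHeight)
  };
}

// xywh=percent:25,25,50,50 on a 640x480 video
percentToPixelRect(25, 25, 50, 50, 640, 480);
// => { x: 160, y: 120, width: 320, height: 240 }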

4.2.3 Track Dimension

Track selection allows the extraction of tracks (audio, video, subtitles, etc.) from a media container that supports multiple tracks. Track selection is denoted by the name track . The value is a string. Percent-escaping can be used in the string to specify unsafe characters (including separators such as semi-colon); see the grammar below for details. Multiple track specification is allowed, but requires the specification of multiple track parameters. Interpretation of the string depends on the container format of the original media: some formats allow numbers only, some allow full names.

trackprefix   = %x74.72.61.63.6B                          ; "track"
trackparam    = unistring

Examples:

track=1                      # => results in only extracting track 1
track=video&track=subtitle   # => results in extracting track 'video' and track 'subtitle'
track=Wide%20Angle%20Video   # => results in only extracting track 'Wide Angle Video'

As the allowed track names are determined by the original source media, this information has to be known before construction of the media fragment. There is no support for generic media type names (audio, video) across container formats: most container formats allow multiple tracks of each media type, which would lead to ambiguities.

Note that there are existing discovery mechanisms for retrieving the track names of a media resource, such as the Rich Open multitrack media Exposition format (ROE) ROE or the Media Annotations API Media Annotations . Further, HTML5 media has a discovery mechanism for retrieving the track names of a media resource through the audioTracks, videoTracks, and textTracks IDL attributes of the HTMLMediaElement. For example, to discover all the names of the available tracks of a video resource, you may want to use the following JavaScript excerpt.

<video id="v1" src="video" controls> </video>
<script type="text/javascript">
  var video = document.getElementsByTagName("video")[0];
  var track_names = [];
  var idx = 0;
  // collect the labels of all audio, video and text tracks
  for (i=0; i< video.audioTracks.length; i++, idx++) {
    track_names[idx] = video.audioTracks.getLabel(i);
  }
  for (i=0; i< video.videoTracks.length; i++, idx++) {
    track_names[idx] = video.videoTracks.getLabel(i);
  }
  for (i=0; i< video.textTracks.length; i++, idx++) {
    track_names[idx] = video.textTracks[i].label;
  }
</script>

4.2.5 Common Syntax

DIGIT         = <DIGIT, defined in RFC 5234>
pchar         = <pchar, defined in RFC 3986>
unreserved    = <unreserved, defined in RFC 3986>
pct-encoded   = <pct-encoded, defined in RFC 3986>
fragment      = <fragment, defined in RFC 3986>
unichar       = <any Unicode code point>
unistring     = *unichar

For convenience, the following definitions are copied here. Only the definitions in the original documents are considered normative.

; defined in RFC 5234
ALPHA         =  %x41-5A / %x61-7A   ; A-Z / a-z
DIGIT         =  %x30-39             ; 0-9
HEXDIG        =  DIGIT / "A" / "B" / "C" / "D" / "E" / "F"

; defined in RFC 3986
unreserved    = ALPHA / DIGIT / "-" / "." / "_" / "~"
pct-encoded   = "%" HEXDIG HEXDIG
sub-delims    = "!" / "$" / "&" / "'" / "(" / ")" / "*" / "+" / "," / ";" / "="
pchar         = unreserved / pct-encoded / sub-delims / ":" / "@"
fragment      = *( pchar / "/" / "?" )
5 Media Fragments Processing

This section defines the different exchange scenarios for the situations explained in section 3 URI fragment and URI query over the HTTP protocol.

The formal grammar defined in section 4 Media Fragments Syntax describes what producers of media fragments should output. It does not take into account possible percent-encodings that are valid according to RFC 3986 , and the grammar is not a specification of how a media fragment should be parsed. Therefore, section 5.1 Processing Media Fragment URI defines how to parse media fragment URIs.

In a well-known context where the MIME type of the requested resource is known, various recipes are proposed depending on the dimension addressed in the media fragment URI, the container and codec formats used by the media resource, or some advanced processing features implemented by the User Agent. Hence, if the container format of the media resource is fully indexable (e.g. MP4, Ogg or WebM) and if the time dimension is requested in the media fragment URI, the User Agent MAY privilege the recipe described in section 5.2 Protocol for URI fragment Resolution in HTTP since it will be in a position to directly issue a normal Range request expressed in terms of byte ranges. On the other hand, if the container format of the media resource is a legacy format such as AVI, the User Agent MAY privilege the recipe described in section 5.2.2 Server mapped byte ranges , issuing a Range request expressed with a custom unit such as seconds and waiting for the server to provide the mapping in terms of byte ranges. Finally, if the track dimension is requested in the media fragment URI, the User Agent MAY privilege the recipe described in section 5.2.3 Server triggered redirect .

The User Agent MAY also implement a so-called optimistic processing of URI fragments in particular cases where the MIME type of the requested resource is not yet known. Hence, if a URI fragment occurs within a particular context such as the value of the @src attribute of a media element (audio, video or source) and if the time dimension is requested in the media fragment URI, the User Agent MAY follow the scenario specified in section 5.2.2 Server mapped byte ranges and directly issue a range request using custom units, assuming that the requested resource is likely to be a media resource. If the MIME type of this resource turns out to be a media type, the server SHOULD interpret the Range request as specified in section 5.2.2 Server mapped byte ranges . Otherwise it SHOULD just ignore the Range header.

5.1 Processing Media Fragment URI

This section defines how to parse the media fragment URIs defined in section 4 Media Fragments Syntax , along with notes on some of the caveats to be aware of. Implementors are free to use any equivalent technique(s).

Editorial note: Raphael  
To generate a simple figure that shows this processing: URI parsing (percent decoding) => name=value pairs => (rfc2047encoding) HTTP

5.1.1 Processing name-value components

This section defines how to convert an octet string (from the query or fragment component of a URI) into a list of name-value Unicode string pairs.

  1. Parse the octet string according to the namevalues syntax, yielding a list of name-value pairs, where name and value are both octet strings. In accordance with RFC 3986 , the name and value components must be parsed and separated before percent-encoded octets are decoded.

  2. For each name-value pair:

    1. Decode percent-encoded octets in name and value as defined by RFC 3986 . If either name or value are not valid percent-encoded strings, then remove the name-value pair from the list.

    2. Convert name and value to Unicode strings by interpreting them as UTF-8 . If either name or value are not valid UTF-8 strings, then remove the name-value pair from the list.

Note that the output is well defined for any input.

Examples:

Input Output Notes
"t=1" [("t", "1")] simple case
"t=1&t=2" [("t", "1"), ("t", "2")] repeated name
"a=b=c" [("a", "b=c")] "=" in value
"a&b=c" [("a", ""), ("b", "c")] missing value
"%74=%6ept%3A%310" [("t", "npt:10")] unnecssary percent-encoding
"id=%xy&t=1" [("t", "1")] invalid percent-encoding
"id=%E4r&t=1" [("t", "1")] invalid UTF-8

While the processing defined in this section is designed to be largely compatible with the parsing of the URI query component in many HTTP server environments, there are incompatible differences that implementors should be aware of:

  • "&" is the only primary separator for name-value pairs, but some server-side languages also treat ";" as a separator.

  • name-value pairs with invalid percent-encoding should be ignored, but some server-side languages silently mask such errors.

  • The "+" character should not be treated specially, but some server-side languages replace it with a space (" ") character.

  • Multiple occurrences of the same name must be preserved, but some server-side languages only preserve the last occurrence.
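
The following JavaScript sketch is one possible (non-normative) realisation of the steps above, including the differences just listed; implementors are free to use any equivalent technique.

// Non-normative sketch: turn an already-extracted query or fragment
// component (e.g. "t=10,20&track=audio") into a list of [name, value]
// pairs, following the steps of this section.
function parseNameValuePairs(component) {
  var pairs = [];
  component.split("&").forEach(function (part) {
    // Separate name and value on the first "=" before percent-decoding.
    var eq = part.indexOf("=");
    var rawName = (eq === -1) ? part : part.substring(0, eq);
    var rawValue = (eq === -1) ? "" : part.substring(eq + 1);
    try {
      // decodeURIComponent both percent-decodes and interprets the octets
      // as UTF-8; it throws on invalid percent-encoding or invalid UTF-8,
      // in which case the pair is dropped.
      pairs.push([decodeURIComponent(rawName), decodeURIComponent(rawValue)]);
    } catch (e) {
      // invalid pair: ignore it
    }
  });
  return pairs;
}

parseNameValuePairs("t=1&t=2");    // => [["t", "1"], ["t", "2"]]
parseNameValuePairs("id=%xy&t=1"); // => [["t", "1"]]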

5.1.2 Processing name-value lists

This section defines how to convert a list of name-value Unicode string pairs into the media fragment dimensions.

Given the dimensions defined in section 4.2 Fragment Dimensions , each has a pair of production rules that corresponds to the name and value component respectively:

Keyword Dimension
t 4.2.1 Temporal Dimension
xywh 4.2.2 Spatial Dimension
track 4.2.3 Track Dimension
id 4.2.4 Named ID Dimension
  1. Initially, all dimensions are undefined.

  2. For each name-value pair:

    1. If name matches a keyword in the above table, interpret value as per the corresponding section.

    2. Otherwise, the name-value pair does not represent a media fragment dimension. Validators should emit a warning. User agents must ignore the name-value pair.

Note: Because the name-value pairs are processed in order, the last valid occurrence of any dimension is the one that is used.
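
Continuing the non-normative sketch from 5.1.1 Processing name-value components, the following JavaScript shows one way to reduce the list of name-value pairs to the recognised dimensions. Keeping every track value (rather than only the last) reflects the multiple-track rule of 4.2.3 Track Dimension and is an assumption of this sketch; the per-dimension value validation is not shown.

// Non-normative sketch: reduce a list of [name, value] pairs to the
// recognised media fragment dimensions.
function collectDimensions(pairs) {
  var dimensions = { t: undefined, xywh: undefined, id: undefined, track: [] };
  pairs.forEach(function (pair) {
    var name = pair[0], value = pair[1];
    if (name === "track") {
      dimensions.track.push(value);      // multiple track parameters accumulate
    } else if (name === "t" || name === "xywh" || name === "id") {
      dimensions[name] = value;          // last occurrence wins
    }
    // otherwise: not a media fragment dimension; a validator would emit
    // a warning, a user agent must ignore the pair
  });
  return dimensions;
}

collectDimensions([["t", "10,20"], ["track", "audio"], ["track", "subtitle"]]);
// => { t: "10,20", xywh: undefined, id: undefined, track: ["audio", "subtitle"] }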

5.2 Protocol for URI fragment Resolution in HTTP

This section defines the protocol steps in HTTP RFC 2616 to resolve and deliver a media fragment specified as a URI fragment.

Various recipes are proposed and described in a separate document, Protocol for Media Fragments 1.0 Resolution in HTTP .

5.3 Protocol for URI query Resolution in HTTP

This section describes the protocol steps used in HTTP RFC 2616 to resolve and deliver a media fragment specified as a URI query. The recipe proposed is described in a separate document, Protocol for Media Fragments 1.0 Resolution in HTTP .

6 Media Fragments Semantics

In this section, we discuss how Media Fragment URIs should be interpreted by UAs. Valid and error cases are presented. In case of errors, we distinguish between errors that can be detected solely based on the Media Fragment URI and errors that can only be detected when the UA has information of the media resource (such as duration or track information).

6.1 Valid Media Fragment URIs

For each dimension, a number of valid media fragments and their semantics are presented.

6.1.1 Valid temporal dimension

To describe the different cases for temporal media fragments, we make the following definitions:

  • s: the start point of the media, which is always zero (in NPT);
  • e: the end point of the media (i.e. duration) and e > 0;
  • a: a positive integer, a >= 0;
  • b: a positive integer, b >= 0.
Further, as stated in section 4.2.1 Temporal Dimension, temporal intervals are half-open (i.e., the begin time is considered part of the interval whereas the end time is considered to be the first time point that is not part of the interval). Thus, if we state below that "the media is played from x to y", this means that the frame corresponding to y will not be played.

The following temporal fragments are all valid:

  • t=a with a < e: media is played from a to e.
  • t=,b with b <= e: media is played from s to b.
  • t=,b with e < b: media is played from s to e.
  • t=a,b with a = 0, b = e: the whole media resource is played.
  • t=a,b with a < b, a < e and b <= e: media is played from a to b (the normal case).
  • t=a,b with a < b, a < e and e < b: media is played from a to e.
  • %74=10,20 resolves via percent-decoding to t=10,20.
  • t=%31%30 resolves via percent-decoding to t=10.
  • t=10%2C20 resolves via percent-decoding to t=10,20.
  • t=%6ept:10 resolves via percent-decoding to t=npt:10.
  • t=npt%3a10 resolves via percent-decoding to t=npt:10.
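
A minimal, non-normative sketch of how a UA might apply the above rules once it knows the media duration e is given below (TypeScript; the function name and shape are illustrative only).

// Resolve a parsed temporal fragment (start a, optional end b) against the
// media duration e, following the valid cases listed above.
function resolveTemporal(a: number | undefined, b: number | undefined,
                         e: number): { start: number; end: number } {
  const start = a ?? 0;                    // t=,b means start at s = 0
  let end = b ?? e;                        // t=a means play until the end
  if (end > e) end = e;                    // t=a,b with e < b plays until e
  return { start, end };                   // half-open interval [start, end)
}

// Examples (duration e = 30):
//   resolveTemporal(10, 20, 30)        -> { start: 10, end: 20 }  (normal case)
//   resolveTemporal(10, 40, 30)        -> { start: 10, end: 30 }
//   resolveTemporal(undefined, 40, 30) -> { start: 0,  end: 30 }
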
6.1.2 Valid spatial dimension

To describe the different cases for spatial media fragments, we make the following definitions:

  • a: the x coordinate of the spatial region (a >= 0).
  • b: the y coordinate of the spatial region (b >= 0).
  • c: the width of the spatial region (c > 0).
  • d: the height of the spatial region (d > 0).
  • w: the width of the media resource (w > 0).
  • h: the height of the media resource (h > 0).

The following spatial fragments are all valid:

  • xywh=a,b,c,d with a+c <= w and b+d <= h: the UA displays a spatial fragment with coordinates (in pixel xywh format) a,b,c,d (the normal pixel case).
  • xywh=a,b,c,d with a+c > w, a < w, and b+d < h: the UA displays a spatial fragment with coordinates (in pixel xywh format) a,b,w-a,d.
  • xywh=a,b,c,d with a+c < w, b+d > h, and b < h: the UA displays a spatial fragment with coordinates (in pixel xywh format) a,b,c,h-b.
  • xywh=a,b,c,d with a+c > w, a < w, b+d > h, and b < h: the UA displays a spatial fragment with coordinates (in pixel xywh format) a,b,w-a,h-b.
  • xywh=pixel:a,b,c,d with a+c <= w and b+d <= h: the UA displays a spatial fragment with coordinates (in pixel xywh format) a,b,c,d (the normal pixel case).
  • xywh=percent:a,b,c,d with a+c <= 100 and b+d <= 100: the UA displays a spatial fragment with coordinates (in pixel xywh format) a*w/100,b*h/100,c*w/100,d*h/100 (the normal percent case).

The result of doing spatial clipping on a media resource that has multiple video tracks is that the spatial clipping is applied to all tracks.
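
The clipping behaviour listed above amounts to intersecting the requested rectangle with the bounds of the media resource. A possible, purely illustrative rendition of that computation is sketched below in TypeScript; percent coordinates are first converted to pixels.

interface Rect { x: number; y: number; w: number; h: number; }

// Clamp a requested pixel rectangle against a media resource of size w x h.
function clampSpatial(req: Rect, w: number, h: number): Rect {
  const width = Math.min(req.x + req.w, w) - req.x;   // becomes w - x when x + w > w
  const height = Math.min(req.y + req.h, h) - req.y;  // becomes h - y when y + h > h
  return { x: req.x, y: req.y, w: width, h: height };
}

// Convert xywh=percent:a,b,c,d to pixel coordinates before clamping.
function percentToPixel(req: Rect, w: number, h: number): Rect {
  return { x: req.x * w / 100, y: req.y * h / 100,
           w: req.w * w / 100, h: req.h * h / 100 };
}
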

6.1.3 Valid track dimension

The following track fragments are valid:

  • track=4 with '4' the name of a track within the source media: the UA only plays the track with name '4'.
  • track=n%40m3%20%26%3D with 'n@m3 &=' the name of a track within the source media: the UA only plays the track with name 'n@m3 &='.
  • track=4&track=5 with '4' and '5' both names of tracks within the source media: the UA only plays the tracks with names '4' and '5'.
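
Since several track name-value pairs may occur and track names are percent-encoded in the URI, a UA typically collects and decodes all of them before selecting tracks. A small, non-normative sketch in TypeScript:

// Collect the decoded names of all track dimensions in a media fragment,
// e.g. "track=4&track=n%40m3%20%26%3D" -> ["4", "n@m3 &="].
function selectedTracks(fragment: string): string[] {
  return fragment.split("&")
    .map(pair => pair.split("="))
    .filter(([name]) => decodeURIComponent(name) === "track")
    .map(parts => decodeURIComponent(parts.slice(1).join("=")));
}
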

6.1.4 Valid id dimension

The following id fragments are valid:

  • id=song1 with 'song1' an id fragment in the source media corresponding to the temporal fragment t=a,b: the UA only plays from a to b.
  • id=n%40m3 with 'n@m3' an id fragment in the source media corresponding to the temporal fragment t=a,b: the UA only plays from a to b.
6.2 Errors detectable based on the URI

Both syntactical and semantical errors are treated similarly. More specifically, the UA SHOULD ignore name-value pairs causing errors detectable based on the URI.

Below, we provide more details for each of the dimensions. We look at errors in the different dimensions and their values in subsequent sub-sections. We start with errors on the more general levels.

6.2.1 Errors on the general URI level

The following list provides the different kinds of errors that can occur on the general URI level and how they should be treated:

  • Unknown dimension: only dimensions described in this specification (i.e., t, xywh, track, and id) are considered as known dimensions; all other dimensions are considered as unknown. Unknown dimensions SHOULD be ignored by the UA.
  • Multiple occurrences of the same dimension: only the last valid occurrence of a dimension (e.g., t=10 in #t=2&t=10) is interpreted; all previous occurrences (valid or invalid) SHOULD be ignored by the UA. The track dimension is an exception to this rule: multiple track dimensions are allowed (e.g., #track=1&track=2 selects both tracks 1 and 2).
  • Combining dimensions: an id dimension combined with a temporal dimension results in multiple occurrences of the temporal dimension (see the previous item).
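
A sketch of the "last valid occurrence wins" rule (with the track exception) is shown below; it is illustrative only and assumes a hypothetical isValid() check supplied by the caller.

// Reduce the name-value pairs of a media fragment according to the rules
// above: later valid occurrences of t, xywh and id override earlier ones,
// while all track occurrences are kept. Unknown dimensions are ignored.
function reducePairs(pairs: { name: string; value: string }[],
                     isValid: (name: string, value: string) => boolean) {
  const result: { [name: string]: string } = {};
  const tracks: string[] = [];
  for (const { name, value } of pairs) {
    if (!isValid(name, value)) continue;            // ignore invalid pairs
    if (name === "track") tracks.push(value);       // track is the exception
    else if (["t", "xywh", "id"].includes(name)) result[name] = value;
  }
  return { ...result, tracks };
}
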

6.2.2 Errors on the temporal dimension

The value cannot be parsed for the temporal dimension or the parsed value is invalid according to the specification. Invalid temporal fragments SHOULD be ignored by the UA.

Examples:

  • t=a,b with a >= b (the case of an empty temporal fragment (a=b) is also considered as an error)
  • t=a,
  • t=asdf
  • t=5,ekj
  • t=agk,9
  • t='0'
  • t=10-20
  • t=10:20
  • t=10,20,40
  • t%3D10 where %3D is equivalent to =; percent encoding does not resolve
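
For the npt format, a check along the following lines would reject the malformed examples above while accepting well-formed ones; the regular expression is a simplification of the ABNF in Appendix B and is given for illustration only (TypeScript).

// Simplified npt check: "10", "10,20", ",20" and "npt:10,20" are accepted;
// "10-20", "10:20", "10,20,40", "'0'" and "" are rejected.
const NPT_TIME = "(\\d+(\\.\\d*)?|\\d+:\\d\\d:\\d\\d(\\.\\d*)?)";
const NPT_VALUE = new RegExp(
  `^(npt:)?(${NPT_TIME}(,${NPT_TIME})?|,${NPT_TIME})$`);

function isValidNptValue(value: string): boolean {
  return NPT_VALUE.test(value);
}
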
6.2.5 Errors on the id dimension

The value cannot be parsed for the id dimension. Invalid id fragments SHOULD be ignored by the UA.

Examples:

  • id=na=me (invalid character in the name of the fragment)

6.3 Errors detectable based on information of the source media

Errors that can only be detected when the UA has information of the source media are treated differently. Examples of such information are the duration of a video, the resolution of an image, track information, or the mime type of the media resource (i.e., all information that is not detectable solely based on the URI). Note that a lot of this information is located within the setup information of the media resource.

Below, we provide more details for each of the dimensions.

6.3.2 Errors on the temporal dimension

To describe the different cases for temporal media fragments, we use the definitions from 6.1.1 Valid temporal dimension. The invalidity of the following temporal fragments can only be detected by the UA if it knows the duration (for non-existent temporal fragments) and the frame rate (for smpte temporal fragments) of the source media.

  • t=a,b with a > 0, a < b, a >= e and b > e: a non-existent temporal fragment, the UA seeks to the end of the media (i.e., e).
  • t=a, or t=a with a >= e: a non-existent temporal fragment, the UA seeks to the end of the media (i.e., e).
  • t=smpte-25:0:00:04 with the source media having smpte-30 labels: a mismatch between the SMPTE time code used in the URI and the SMPTE labels encoded in the requested media resource results in an invalid temporal fragment. In this case, the temporal fragment SHOULD be ignored by the UA.
  • t=smpte-25:0:00:04 with the source media having no smpte labels (i.e., the source media is non smpte-encoded): implementation of this case needs to be defined.
  • t=smpte-25:0:00:04 with the source media having non contiguous smpte timecodes: implementation of this case needs to be defined.
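
Once the duration is known, the non-existent cases above reduce to a simple comparison; a purely illustrative sketch in TypeScript:

// Decide where to seek for t=a(,b) once the media duration e is known.
function temporalSeekTarget(a: number, e: number): number {
  // a >= e: non-existent temporal fragment, seek to the end of the media.
  return a >= e ? e : a;
}
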

6.3.3 Errors on the spatial dimension

To describe the different cases for spatial media fragments, we use the definitions from 6.1.2 Valid spatial dimension. The invalidity of the following spatial fragments can only be detected by the UA if it knows the source media.

  • xywh=a,b,c,d with a >= w and/or b >= h: the top-left coordinate (a,b) of the rectangular region lies outside the source media and is therefore invalid. The UA SHOULD ignore this spatial fragment.

6.3.4 Errors on the track dimension

The invalidity of track fragments can be detected if the UA knows which tracks are available in the source media. If the UA detects a non-existing track in the Media Fragment URI, it SHOULD ignore the track fragment. For example, suppose the requested source media consists of two tracks: 'videohigh' and 'audiomed'. The track fragment track=foo points to a non-existing track and SHOULD be ignored if the UA knows which tracks are available.
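
The corresponding track check is equally simple once the UA knows the available track names (here passed in explicitly; how they are obtained is out of scope). Illustrative TypeScript sketch:

// Keep only requested tracks that actually exist in the source media;
// e.g. available = ["videohigh", "audiomed"], requested = ["foo"] -> [].
function existingTracks(requested: string[], available: string[]): string[] {
  return requested.filter(name => available.includes(name));
}
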


7 Notes to Implementors (non-normative)

This section contains notes to implementors. Some of the information here is already stated formally elsewhere in the document, and the reference here is mainly a heads-up. Other items are really outside the scope of this specification, but the notes here reflect what the authors think would be good practice.

The sub-sections are not mutually exclusive. Hence, an implementer of a web browser as a media fragment client should read the sections 7.1 Browsers Rendering Media Fragments , 7.2 Clients Displaying Media Fragments and 7.3 All Media Fragment Clients .

7.1 Browsers Rendering Media Fragments

The pixel coordinates defined in the section 4.2.2 Spatial Dimension are intended to be identical to the intrinsic width and height defined in HTML5 .

For spatial URI fragments, the next section describes two distinct use cases, highlighting and cropping. HTML rendering clients, however, are expected to implement cropping as the default rendering mechanism.

7.2 Clients Displaying Media Fragments

When dealing with media fragments, there is a question whether to display the media fragment in context or without context. In general, it is recommended to display a URI fragment in context since it is part of a larger resource. On the other hand, a URI query results in a new resource, so it is recommended to display it as a complete resource without context. The next paragraphs discuss for each axis the context of a media fragment and provides suggestions regarding the visualization of the URI fragment within its context.

For a temporal URI fragment, it is recommended to start playback at a time offset that equals the start of the fragment and pause at the end of the fragment. When the "play" button is hit again, the resource will continue loading and play back beyond the end of the fragment. When seeking to specific offsets, the resource will load and play back from those seek points. It is also recommended to introduce a "reload" button to replay just the URI fragment. In this way, a URI fragment basically stands for "focusing attention". Additionally, temporal URI fragments could be highlighted on the transport bar.
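
In an HTML5 context, this recommended behaviour can be approximated with a few lines of script; the sketch below (TypeScript against the standard DOM media API) is non-normative and assumes the fragment has already been parsed into start and end times.

// Start playback at the fragment start and pause at the fragment end.
function playFragment(video: HTMLVideoElement, start: number, end: number): void {
  video.currentTime = start;
  const onTimeUpdate = () => {
    if (video.currentTime >= end) {
      video.pause();                                   // "focusing attention"
      video.removeEventListener("timeupdate", onTimeUpdate);
    }
  };
  video.addEventListener("timeupdate", onTimeUpdate);
  video.play();
}
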

For a spatial URI fragment, we foresee two distinct use cases: highlighting the spatial region in-context and cropping to the region. In the first case, the spatial region could be indicated by means of a bounding box, or the background (i.e., all the pixels that are not contained within the region) could be blurred or darkened. In the second case, the region alone would be presented as a cropped area. How a document author specifies which use case is intended is outside the scope of this specification; we suggest implementors of the specification provide a means for this, for example through attributes or stylesheet elements.

Finally, for track URI fragments, it is recommended to play only the tracks identified by the track URI fragment. If no tracks are specified, the default tracks should be played. Different tracks could be selected using drop-down boxes or buttons; the selected tracks are then highlighted during playback. The way the UA retrieves information regarding the available tracks of a particular resource is out of scope for this specification.

7.3 All Media Fragment Clients

Resolution Order: Where multiple dimensions are combined in one URI fragment request, implementations are expected to first do temporal, id, and track selection on the container level, and then do spatial clipping on the codec level. Named selection is done for whatever the name stands for: a track, a temporal section, or a spatial region.

Media Fragment Grammar: Note that the grammar for Media Fragment URI only specifies the grammar for features standardised by this specification. If a string does not parse correctly it does not necessarily mean the URI is wrong, it only means it is not a Media Fragment according to this specification. It may be correct for some extended form, or for a completely different fragment specification method. For this reason, error recovery on syntax errors in media fragment specifiers is unwise.

External Clipping: There is no obligatory resolution method for a situation where a media fragment URI is being used in the context of another clipping method. Formally, it is up to the context embedding the media fragment URI to decide whether the outside clipping method overrides the media fragment URI or cascades, i.e. is defined on the resulting resource. In the absence of strong reasons to do otherwise we suggest cascading. An example is a SMIL element as follows: <smil:video clipBegin="5" clipEnd="15" src="http://www.example.com/example.mp4#t=100,200"/> . This should start playback of the original media resource at second 105, and stop at 115.
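
Under the cascading interpretation, the outer clip is applied to the timeline of the already-clipped resource; a small non-normative computation makes this concrete (TypeScript).

// Cascade an outer clip [clipBegin, clipEnd) (e.g. from SMIL) onto a media
// fragment [fragStart, fragEnd) of the original resource.
function cascade(fragStart: number, fragEnd: number,
                 clipBegin: number, clipEnd: number): [number, number] {
  const start = Math.min(fragStart + clipBegin, fragEnd);
  const end = Math.min(fragStart + clipEnd, fragEnd);
  return [start, end];   // e.g. cascade(100, 200, 5, 15) -> [105, 115]
}
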

Content-Range-Mapping: The Content-Range-Mapping header returned sometimes refers to a completely different range than the one that was specified as the Range: in the request. This can happen if a byte-based range is requested from a cache server that is not Media Fragment aware, and that server had previously cached the data as a result of a time range request. Technically, the information in the Content-Range-Mapping header is still correct, but it is completely unrelated to the request issued.

7.4 Media Fragment Servers

Media type: The media type of a resource retrieved through a URI fragment request is the same as that of the primary resource. Thus, retrieval of e.g. a single frame from a video will result in a one-frame-long video. Or, retrieval of all the audio tracks from a video resource will result in a video and not an audio resource. When using a URI query approach, media type changes are possible. E.g. a spatial fragment from a video at a certain time offset could be retrieved as a jpeg using a specific HTTP "Accept" header in the request.

Synchronisation: Synchronisation between different tracks of a media resource needs to be maintained when retrieving media fragments of that resource. This is true for both, URI fragment and URI query retrieval. With URI queries, when transcoding is required, a non-perceivable change in the synchronisation is acceptable.

Embedded Timecodes: When a media resource contains embedded time codes, these need to be maintained for media fragment retrieval, in particular when the URI fragment method is used. When URI queries are used and transcoding takes place, the embedded time codes should remain when they are useful and required.

SMPTE Timecodes: Standardisation of SMPTE timecodes in this document is primarily intended to allow frame-accurate references to sections of video files; they can be seen as a form of content-based addressing.

Reasonable Clipping: Temporal clipping needs to be as close as reasonably possible to what the media fragment specified, and not omit any requested data. "Reasonably close" means the nearest compression entity to the requested fragment that completely contains the requested fragment. This means, e.g. for temporal fragments, that if a request is made for http://www.example.org/video.ogv#t=60,100 , but the closest decodable range is t=58,102 because this is where a packet boundary lies for audio and video, then it will be this range that is returned. The UA is then capable of displaying only the requested subpart, and should also just do that. For some container formats this is a non-issue, because the container format allows specification of logical begin and end.
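
Expressed as a computation over hypothetical known seek points, "the nearest compression entity that completely contains the requested fragment" amounts to rounding the start down and the end up to such points; the TypeScript sketch below is illustrative only.

// Expand a requested interval [start, end) to the nearest enclosing
// decodable range, given the sorted seek points of the resource.
function enclosingDecodableRange(seekPoints: number[],
                                 start: number, end: number): [number, number] {
  let from = seekPoints[0];
  for (const p of seekPoints) if (p <= start) from = p;              // round start down
  let to = seekPoints[seekPoints.length - 1];
  for (const p of [...seekPoints].reverse()) if (p >= end) to = p;   // round end up
  return [from, to];
}

// e.g. enclosingDecodableRange([0, 58, 102, 150], 60, 100) -> [58, 102]
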

Reasonable byte ranges: If a single temporal range request would result in a disproportionally large number of byte ranges, it may be better if the server returns a redirect to the query form of the media fragment. This situation could happen, for example, if the underlying media file is organized in a strange way.

7.5 Media Fragment Web Applications

Media Fragment URIs are only defined on media resources. However, many Web developers that create Web pages with video or audio want to provide their users the ability to jump directly to media fragments - in particular to time offsets in a video - through providing a URI scheme for the Web page.

The way in which to realize this without requiring an extra server interaction is by using a URI fragment scheme on the Web page which is parsed by JavaScript and communicates the media fragment to the audio or video resource loader. In HTML5 it would need to change the @src attribute of the appropriate <audio> or <video> element with the appropriate URI fragment and then call the load() function to make the element (re)load the resource with that URI.

A URI scheme for such a Web page may involve ampersand-separated name-value pairs as defined in this specification, e.g. http://example.com/videopage.html#t=60,100 .
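
A possible, purely illustrative way to wire such a page-level fragment to a video element is sketched below in TypeScript; the helper name and the event wiring are assumptions of this sketch, not part of the specification.

// Forward the page's own URI fragment (e.g. "#t=60,100") to the video element
// by appending it to the media URI and reloading the resource.
function applyPageFragmentToVideo(): void {
  const video = document.querySelector("video");
  const fragment = window.location.hash.substring(1);   // drop the leading "#"
  if (!video || fragment === "") return;
  const baseSrc = video.src.split("#")[0];
  video.src = baseSrc + "#" + fragment;
  video.load();                                          // (re)load with the fragment
}

window.addEventListener("hashchange", applyPageFragmentToVideo);
applyPageFragmentToVideo();
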

However, the Web developer has to create a scheme that works with the remainder of the Web page fragment addressing functionality. If, for example, the Web page makes use of the ID attributes of the elements on the page for scrolling down on the page, adding media fragment URI addressing to the Web page addressing will fail. For example, if http://example.com/videopage.html#first works and scrolls to an offset on that Web page, http://example.com/videopage.html#first&t=60,100 will not do the same scrolling. The Web developer will then need to parse the fragment parameter and implement the scrolling functionality in JavaScript manually using the scrollTo() or scrollTop() functions.

8 Conclusions

8.1 Qualification of Media Resources

HTTP byte ranges can only be used to request media fragments if these media fragments can be expressed in terms of byte ranges. In other words, the media fragments can be extracted in the compressed domain.

If the fragments of a media resource are expressible in terms of byte ranges, caching media fragments of such media resources is possible using HTTP byte ranges. In case media fragments cannot be extracted in the compressed domain, transcoding operations are necessary to extract media fragments. Since these media fragments are not expressible in terms of byte ranges, it is not possible to cache these media fragments using HTTP byte ranges.

A References

[RFC 2119]
S. Bradner. Key Words for use in RFCs to Indicate Requirement Levels . IETF RFC 2119, March 1997. Available at http://www.ietf.org/rfc/rfc2119.txt .
[RFC 2326]
Real Time Streaming Protocol (RTSP) . IETF RFC 2326, April 1998. Available at http://www.ietf.org/rfc/rfc2326.txt .
[RFC 2327]
Session Description Protocol (SDP) . IETF RFC 2327, April 1998. Available at http://www.ietf.org/rfc/rfc2327.txt .
[RFC 2616]
Hypertext Transfer Protocol -- HTTP/1.1 . IETF RFC 2616, June 1999. Available at http://www.ietf.org/rfc/rfc2616.txt .
[RFC 3339]
G. Klyne and C. Newman. Date and Time on the Internet: Timestamps . IETF RFC 3339, July 2002. Available at http://www.ietf.org/rfc/rfc3339.txt .
[RFC 3533]
The Ogg Encapsulation Format Version 0 . IETF RFC 3533, May 2003. Available at http://www.ietf.org/rfc/rfc3533.txt .
[RFC 3986]
T. Berners-Lee and R. Fielding and L. Masinter. Uniform Resource Identifier (URI): Generic Syntax . IETF RFC 3986, January 2005. Available at http://www.ietf.org/rfc/rfc3986.txt .
[RFC 5234]
D. Crocker, Ed. Augmented BNF for Syntax Specifications: ABNF . IETF RFC 5234, January 2008. Available at http://tools.ietf.org/html/rfc5234 .
[RFC 4288]
N. Freed and J. Klensin Media Type Specifications and Registration Procedures . IETF RFC 4288, December 2005. Available at http://www.ietf.org/rfc/rfc4288.txt .
[RFC 5147]
E. Wilde and M. Duerst. URI Fragment Identifiers for the text/plain Media Type . IETF RFC 5147, April 2008. Available at http://tools.ietf.org/html/rfc5147 .
[HTML 4.0]
D. Ragett and A. Le Hors and I. Jacobs. HTML Fragment identifiers . W3C Rec, December 1999. Available at http://www.w3.org/TR/REC-html40/intro/intro.html#fragment-uri .
[HTML 5]
Ian Hickson, Google (ed). HTML5 . W3C Working Draft, 25th August 2009. Available at http://www.w3.org/TR/2009/WD-html5-20090825/ .
[SVG]
J. Ferraiolo. SVG Fragment identifiers . W3C Rec, September 2001. Available at http://www.w3.org/TR/2001/REC-SVG-20010904/linking#FragmentIdentifiersSVG .
[SMIL]
Sjoerd Mullender, CWI (ed). Synchronized Multimedia Integration Language (SMIL 3.0) . W3C Recommendation 01 December 2008. Available at http://www.w3.org/TR/2008/REC-SMIL3-20081201/ .
[xpointer]
P. Grosso and E. Maler and J. Marsh and N. Walsh. XPointer Framework . W3C Rec, March 2003. Available at http://www.w3.org/TR/xptr-framework/ .
[MPEG-7]
Information Technology - Multimedia Content Description Interface (MPEG-7) . Standard No. ISO/IEC 15938:2001, International Organization for Standardization(ISO), 2001.
[temporal URI]
S. Pfeiffer and C. Parker and A. Pang. Specifying time intervals in URI queries and fragments of time-based Web resources . Internet Draft, March 2005. Available at http://annodex.net/TR/draft-pfeiffer-temporal-fragments-03.html .
[CMML]
Continuous Media Markup Language (CMML), Version 2.1 . IETF Internet-Draft 4th March 2006 http://www.annodex.net/TR/draft-pfeiffer-cmml-03.txt .
[ROE]
Rich Open multitrack media Exposition (ROE) . Xiph Wiki. Retrieved 13 April 2009 at http://wiki.xiph.org/index.php/ROE .
[Skeleton]
Ogg Skeleton . Xiph Wiki. Retrieved 13 April 2009 at http://wiki.xiph.org/OggSkeleton .
[MPEG-21]
Information Technology - Multimedia Framework (MPEG-21) . Standard No. ISO/IEC 21000:2002, International Organization for Standardization(ISO), 2002. Available at http://www.chiariglione.org/mpeg/working_documents/mpeg-21/fid/fid-is.zip .
[SMPTE]
SMPTE RP 136 Time and Control Codes for 24, 25 or 30 Frame-Per-Second Motion-Picture Systems
[ISO Base Media File Format]
Information technology - Coding of audio-visual objects - Part 12: ISO base media file format . Retrieved 13 April 2009 at http://standards.iso.org/ittf/PubliclyAvailableStandards/c051533_ISO_IEC_14496-12_2008.zip
[Use cases and requirements for Media Fragments]
Use cases and requirements for Media Fragments . W3C Working Draft 30 April 2009: http://www.w3.org/2008/WebVideo/Fragments/WD-media-fragments-reqs/
[UTF-8]
UTF-8, a transformation format of ISO 10646 . http://tools.ietf.org/html/rfc3629
[ECMA-262 5th edition]
ECMA-262 5th edition : http://www.ecma-international.org/publications/standards/Ecma-262.htm
[Media Annotations]
API for Media Resource 1.0 : http://www.w3.org/TR/mediaont-api-1.0/
[Web Linking]
Web Linking : http://tools.ietf.org/html/draft-nottingham-http-link-header-10
[HTML5 Media]
HTML5 Media : http://www.w3.org/TR/html5/video.html

B Collected ABNF Syntax for URI (Non-Normative)

unichar       = <any Unicode code point>

unistring     = *unichar
; defined in RFC 5234
ALPHA         =  %x41-5A / %x61-7A   ; A-Z / a-z
DIGIT         =  %x30-39 ; 0-9
HEXDIG        =  DIGIT / "A" / "B" / "C" / "D" / "E" / "F"
; defined in RFC 3986
unreserved    = ALPHA / DIGIT / "-" / "." / "_" / "~"
pct-encoded   = "%" HEXDIG HEXDIG
sub-delims    = "!" / "$" / "&" / "'" / "(" / ")" / "*" / "+" / "," / ";" / "="
pchar         = unreserved / pct-encoded / sub-delims / ":" / "@"
fragment      = *( pchar / "/" / "?" )
; defined in RFC 2326
npt-sec       = 1*DIGIT [ "." *DIGIT ]                     ; definitions taken
npt-hhmmss    = npt-hh ":" npt-mm ":" npt-ss [ "." *DIGIT] ; from RFC 2326
npt-hh        =   1*DIGIT     ; any positive number
npt-mm        =   2DIGIT      ; 0-59
npt-ss        =   2DIGIT      ; 0-59
; defined in RFC 3339
date-fullyear   = 4DIGIT
date-month      = 2DIGIT  ; 01-12
date-mday       = 2DIGIT  ; 01-28, 01-29, 01-30, 01-31 based on
                          ; month/year
time-hour       = 2DIGIT  ; 00-23
time-minute     = 2DIGIT  ; 00-59
time-second     = 2DIGIT  ; 00-58, 00-59, 00-60 based on leap second
                          ; rules
time-secfrac    = "." 1*DIGIT
time-numoffset  = ("+" / "-") time-hour ":" time-minute
time-offset     = "Z" / time-numoffset
partial-time    = time-hour ":" time-minute ":" time-second
                  [time-secfrac]
full-date       = date-fullyear "-" date-month "-" date-mday
full-time       = partial-time time-offset
date-time       = full-date "T" full-time
; Mediafragment definitions
segment       = mediasegment / *( pchar / "/" / "?" )     ; augmented fragment
                                                          ; definition taken from
                                                          ; RFC 3986
;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; Common Prefixes ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;
deftimeformat    = %x6E.70.74                                      ; "npt"
pfxdeftimeformat = %x74.3A.6E.70.74                                ; "t:npt"
smpteformat      = %x73.6D.70.74.65                                ; "smpte"
                  / %x73.6D.70.74.65.2D.32.35                      ; "smpte-25"
                  / %x73.6D.70.74.65.2D.33.30                      ; "smpte-30"
                  / %x73.6D.70.74.65.2D.33.30.2D.64.72.6F.70       ; "smpte-30-drop"
pfxsmpteformat   = %x74.3A.73.6D.70.74.65                          ; "t:smpte"
                  / %x74.3A.73.6D.70.74.65.2D.32.35                ; "t:smpte-25"
                  / %x74.3A.73.6D.70.74.65.2D.33.30                ; "t:smpte-30"
                  / %x74.3A.73.6D.70.74.65.2D.33.30.2D.64.72.6F.70 ; "t:smpte-30-drop"
clockformat      = %x63.6C.6F.63.6B                                ; "clock"
pfxclockformat   = %x74.3A.63.6C.6F.63.6B                          ; "t:clock"
;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; Media Segment ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;
mediasegment  = ( timesegment / spacesegment / tracksegment / idsegment )
               *( "&" ( timesegment / spacesegment / tracksegment / idsegment ) )
;
; note that this does not capture the restriction of only one timesegment or spacesegment
; in the mediasegment definition, unless we list explicitly all the cases
;

timesegment      = timeprefix "=" timeparam
timeprefix       = %x74                                      ; "t"
timeparam        = npttimedef / smptetimedef / clocktimedef
npttimedef       = [ deftimeformat ":"] ( npttime  [ "," npttime ] ) / ( "," npttime )
npttime       = npt-sec / npt-hhmmss
smptetimedef  = smpteformat ":"( frametime [ "," frametime ] ) / ( "," frametime )
frametime     = 1*DIGIT ":" 2DIGIT ":" 2DIGIT [ ":" 2DIGIT [ "." 2DIGIT ] ]
clocktimedef  = clockformat ":"( clocktime [ "," clocktime ] ) / ( "," clocktime )
clocktime     = (datetime / walltime / date)
datetime      = date-time                                 ; inclusion of RFC 3339
spacesegment  = xywhprefix   "=" xywhparam
xywhprefix    = %x78.79.77.68                             ; "xywh"
xywhparam     = [ xywhunit ":" ] 1*DIGIT "," 1*DIGIT "," 1*DIGIT "," 1*DIGIT
xywhunit      = %x70.69.78.65.6C                          ; "pixel"
              / %x70.65.72.63.65.6E.74                    ; "percent"
tracksegment  = trackprefix "=" trackparam
trackprefix   = %x74.72.61.63.6B                          ; "track"
trackparam    = unistring

idsegment   = idprefix "=" idparam
idprefix    = %x69.64					                  ; "id"
idparam     = unistring

C Collected ABNF Syntax for HTTP Headers (Non-Normative)

; defined in RFC 2616

CHAR                   = <any US-ASCII character (octets 0 - 127)>
token                  = 1*<any CHAR except CTLs or separators>
first-byte-pos         = 1*DIGIT
last-byte-pos          = 1*DIGIT
bytes-unit             = "bytes"
range-unit             = bytes-unit | other-range-unit
byte-range-resp-spec   = (first-byte-pos "-" last-byte-pos)
Range                  = "Range" ":" ranges-specifier
Accept-Ranges          = "Accept-Ranges" ":" acceptable-ranges
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; HTTP Request Headers ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;
ranges-specifier       = byte-ranges-specifier | fragment-specifier
;
; note that ranges-specifier is extended from RFC 2616
; to cover alternate fragment range specifiers
;
fragment-specifier     = "include-setup" | fragment-range *( "," fragment-range )
                                                           [ ";" "include-setup" ]
fragment-range         = time-ranges-specifier | id-ranges-specifier
;
; note that this does not capture the restriction that each fragment dimension may occur
; at most once in the fragment-specifier definition.
;
time-ranges-specifier  = timeprefix ":" time-ranges-options
time-ranges-options    = npttimeoption / smptetimeoption / clocktimeoption
npttimeoption          = deftimeformat "=" npt-sec   "-" [ npt-sec ]
smptetimeoption        = smpteformat   "=" frametime "-" [ frametime ]
clocktimeoption        = clockformat   "=" datetime  "-" [ datetime ]
id-ranges-specifier    = idprefix "=" idparam
;;
Accept-Range-Redirect  = "Accept-Range-Redirect" ":" bytes-unit
;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; HTTP Response Headers ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;
Content-Range-Mapping      = "Content-Range-Mapping" ":" "{"
                             ( ( content-range-mapping-spec [ ";" def-include-setup ] ) / def-include-setup )
                             "}" "=" "{"
                             byte-range-mapping-spec "}"
def-include-setup          = %x69.6E.63.6C.75.64.65.2D.73.65.74.75.70  ; "include-setup"
byte-range-mapping-spec    = bytes-unit SP
                             byte-range-resp-spec *( "," byte-range-resp-spec ) "/"
                             ( instance-length / "*" )
content-range-mapping-spec = time-mapping-spec | id-mapping-spec
time-mapping-spec          = timeprefix ":" time-mapping-options
time-mapping-options       = npt-mapping-option / smpte-mapping-option / clock-mapping-option
npt-mapping-option         = deftimeformat SP npt-sec   "-" npt-sec   "/"
                                          [ npt-sec ]   "-" [ npt-sec ]
smpte-mapping-option       = smpteformat   SP frametime "-" frametime "/"
                                          [ frametime ] "-" [ frametime ]
clock-mapping-option       = clockformat   SP datetime  "-" datetime  "/"
                                          [ datetime ]  "-" [ datetime ]
id-mapping-spec            = idprefix SP idparam
;;
acceptable-ranges          = 1#range-unit *( "," 1#range-unit ) | "none"
;
; note this does not represent the restriction that each range-unit can appear at most once;
; this definition has also been adapted from RFC 2616
; to allow multiple range units.
;
other-range-unit           = token | timeprefix | idprefix
;;
Range-Redirect             = "Range-Redirect" ":" byte-range-resp-spec *( "," byte-range-resp-spec )
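
The following sketch is a non-normative illustration of the request headers collected above: it derives the extended Range header (a time-ranges-specifier in the npt format) from the temporal dimension of a media fragment URI. The URI and the time values are illustrative only.

# Non-normative sketch: deriving the extended Range request header defined above
# (time-ranges-specifier, npt format) from the temporal dimension of a media
# fragment URI.  The URI and the numeric values are illustrative only.
from urllib.parse import urldefrag, parse_qs

uri = 'http://www.example.com/video.ogv#t=10,20'
_, frag = urldefrag(uri)                          # "t=10,20"
start, _, end = parse_qs(frag)['t'][0].partition(',')
print('Range: t:npt=%s-%s' % (start, end))        # Range: t:npt=10-20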

D Processing media fragment URIs in RTSP (Non-Normative)

This appendix explains how the media fragment specification is mapped to an RTSP protocol activity. We assume here that you have a general understanding of the RTSP protocol mechanism as defined in RFC 2326 . The general sequence of messages sent between an RTSP UA and server can be summarized as follows: the UA first requests a description of the presentation (DESCRIBE), sets up a transport session for each stream it wants to receive (SETUP), requests playback of a given range (PLAY), possibly pauses playback (PAUSE), and finally closes the session (TEARDOWN).

Note that the RTSP protocol is intentionally similar in syntax and operation to HTTP.

D.1 How to map Media Fragment URIs to RTSP protocol methods

D.1.1 Dealing with the media fragment URI dimensions in RTSP

We illustrate how each of the four media fragment dimensions can be mapped onto RTSP commands. The following examples are used to illustrate each dimension: (1) temporal: #t=10,20; (2) tracks: #track=audio&track=video; (3) spatial: #xywh=160,120,320,24; (4) id: #id=Airline%20Edit.
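
As a non-normative illustration of the temporal dimension, the sketch below shows how the time range of #t=10,20 could be carried in the Range header of an RTSP PLAY request using the npt format of RFC 2326. The resource URL, CSeq and Session values are illustrative placeholders.

# Non-normative sketch: mapping the temporal dimension #t=10,20 onto the Range
# header of an RTSP PLAY request (npt format, RFC 2326).  The resource URL,
# CSeq and Session values are illustrative placeholders only.
play_request = (
    'PLAY rtsp://www.example.com/video.ogv RTSP/1.0\r\n'
    'CSeq: 3\r\n'
    'Session: 12345678\r\n'
    'Range: npt=10-20\r\n'
    '\r\n'
)
print(play_request)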

E Acknowledgements (Non-Normative)

This document is the work of the W3C Media Fragments Working Group . Members of the Working Group are (at the time of writing, and in alphabetical order): Eric Carlson (Apple, Inc.), Chris Double (Mozilla Foundation), Michael Hausenblas (DERI Galway at the National University of Ireland, Galway, Ireland), Philip Jägenstedt (Opera Software), Jack Jansen (CWI), Yves Lafon (W3C), Erik Mannens (IBBT), Thierry Michel (W3C/ERCIM), Guillaume (Jean-Louis) Olivrin (Meraka Institute), Soohong Daniel Park (Samsung Electronics Co., Ltd.), Conrad Parker (W3C Invited Expert), Silvia Pfeiffer (W3C Invited Expert), Nobuhisa Shiraishi (NEC Corporation), David Singer (Apple, Inc.), Thomas Steiner (Google, Inc.), Raphaël Troncy (EURECOM), Davy Van Deursen (IBBT).

The people who have contributed t