Copyright © 2011 W3C ® ( MIT , ERCIM , Keio ), All Rights Reserved. W3C liability , trademark and document use rules apply.
This document describes the Media Fragments 1.0 specification. It specifies the syntax for constructing media fragment URIs and explains how to handle them when used over the HTTP protocol. The syntax is based on the specification of particular name-value pairs that can be used in URI fragment and URI query requests to restrict a media resource to a certain fragment.
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.
This is the Candidate Recommendation of the Media Fragments URI 1.0 specification. It has been produced by the Media Fragments Working Group, which is part of the W3C Video on the Web Activity. The Working Group expects to advance this specification to Recommendation status.
The W3C Membership and other interested parties are invited to review this Candidate Recommendation document and send comments through 20 November 2011. Please send comments about this document to the public-media-fragment@w3.org mailing list (public archive). Use "[CR Media Fragment]" in the subject line of your email. We expect that sufficient feedback to determine its future will have been received by 20 November 2011. This specification will remain a Candidate Recommendation until at least 20 November 2011. The Media Fragments Working Group will advance this specification to Proposed Recommendation when the following exit criteria have been met:
The implementation results are publicly released and are intended solely to be used as proof of Media Fragments URI implementability. They are only a snapshot of the actual implementation behaviours at one moment in time, as these implementations may not be immediately available to the public. The interoperability data is not intended to be used for assessing or grading the performance of any individual implementation. Any feedback on implementation and use of this specification would be very welcome. To the extent possible, please provide a separate email message for each distinct comment.
This Candidate Recommendation version of the Media Fragments URI 1.0 specification incorporates requests for changes from comments sent during the first and second Last Call review, as agreed with the commenters (see Disposition of Last Call comments), and changes following implementation experiences from the Working Group. The Working Group would like to point out that the processing of media fragment URIs when used over the HTTP protocol is now described in a separate document, Protocol for Media Fragments 1.0 Resolution in HTTP. For convenience, the differences between this CR version and the Second Last Call Working Draft are highlighted in the CR Diff document. The differences between the Second Last Call Working Draft and the First Last Call Working Draft are also highlighted in the CR Diff document.
Publication as a Candidate Recommendation does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by a group operating under the 5 February 2004 W3C Patent Policy . W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy .
1 Introduction
2 Standardisation Issues
2.1 Terminology
2.2 Media Fragments Standardisation
2.2.1 URI Fragments
2.2.2 URI Queries
3 URI fragment and URI query
3.1 When to choose URI fragments? When to choose URI queries?
3.2 Resolving URI fragments within the user agent
3.3 Resolving URI fragments with server help
3.4 Resolving URI fragments in a proxy cacheable manner
3.5 Resolving URI queries
3.6 Combining URI fragments and URI queries
4 Media Fragments Syntax
4.1 General Structure
4.2 Fragment Dimensions
4.2.1 Temporal Dimension
4.2.1.1 Normal Play Time (NPT)
4.2.1.2 SMPTE time codes
4.2.1.3 Wall-clock time code
4.2.2 Spatial Dimension
4.2.3 Track Dimension
4.2.4 Named ID Dimension
4.2.5 Common Syntax
5 Media Fragments Processing
5.1 Processing Media Fragment URI
5.1.1 Processing name-value components
5.1.2 Processing name-value lists
5.2 Protocol for URI fragment Resolution in HTTP
5.2.1 UA mapped byte ranges
5.2.1.1 UA requests URI fragment for the first time
5.2.1.2 UA requests URI fragment it already has buffered
5.2.1.3 UA requests URI fragment of a changed resource
5.2.2 Server mapped byte ranges
5.2.2.1 Server mapped byte ranges with corresponding binary data
5.2.2.2 Server mapped byte ranges with corresponding binary data and codec setup data
5.2.2.3 Proxy cacheable server mapped byte ranges
5.2.3 Server triggered redirect
5.3 Protocol for URI query Resolution in HTTP
6 Media Fragments Semantics
6.1 Valid Media Fragment URIs
6.1.1 Valid temporal dimension
6.1.2 Valid spatial dimension
6.1.3 Valid track dimension
6.1.4 Valid id dimension
6.2 Errors detectable based on the URI
6.2.1 Errors on the general URI level
6.2.2 Errors on the temporal dimension
6.2.3 Errors on the spatial dimension
6.2.4 Errors on the track dimension
6.2.5 Errors on the id dimension
6.3 Errors detectable based on information of the source media
6.3.1 Errors on the general level
6.3.2 Errors on the temporal dimension
6.3.3 Errors on the spatial dimension
6.3.4 Errors on the track dimension
6.3.5 Errors on the id dimension
7 Notes to Implementors (non-normative)
7.1 Browsers Rendering Media Fragments
7.2 Clients Displaying Media Fragments
7.3 All Media Fragment Clients
7.4 Media Fragment Servers
7.5 Media Fragment Web Applications
8 Conclusions
8.1 Qualification of Media Resources
A References
B Collected ABNF Syntax for URI (Non-Normative)
C Collected ABNF Syntax for HTTP Headers (Non-Normative)
D Processing media fragment URIs in RTSP (Non-Normative)
D.1 How to map Media Fragment URIs to RTSP protocol methods
D.1.1 Dealing with the media fragment URI dimensions in RTSP
D.1.1.1 Temporal Media Fragment URIs
D.1.1.2 Track Media Fragment URIs
D.1.1.3 Spatial Media Fragment URIs
D.1.1.4 Named Id Media Fragment URIs
D.1.2 Putting the media fragment URI dimensions together in RTSP
D.1.3 Caching and RTSP for media fragment URIs
E Acknowledgements (Non-Normative)
F Change Log (Non-Normative)
Audio and video resources on the World Wide Web are currently treated as "foreign" objects, which can only be embedded using a plugin that is capable of decoding and interacting with the media resource. Specific media servers are generally required to provide for server-side features such as direct access to time offsets into a video without the need to retrieve the entire resource. Support for such media fragment access varies between different media formats and inhibits standard means of dealing with such content on the Web.
This specification provides for a media-format independent, standard means of addressing media fragments on the Web using Uniform Resource Identifiers (URI). In the context of this document, media fragments are regarded along four different dimensions: temporal, spatial, and tracks. Further, a temporal fragment can be marked with a name and then addressed through a URI using that name, using the id dimension. The specified addressing schemes apply mainly to audio and video resources - the spatial fragment addressing may also be used on images.
The aim of this specification is to enhance the Web infrastructure for supporting the addressing and retrieval of subparts of time-based Web resources, as well as the automated processing of such subparts for reuse. Example uses are the sharing of such fragment URIs with friends via email, the automated creation of such fragment URIs in a search engine interface, or the annotation of media fragments with RDF. Such use case examples as well as other side conditions on this specification and a survey of existing media fragment addressing approaches are provided in the Use cases and requirements for Media Fragments document that accompanies this specification.
The media fragment URIs specified in this document have been implemented and demonstrated to work with media resources over the HTTP protocol. This specification does not define the protocol aspects of RTSP handling of media fragments in its normative sections. We expect the media fragment URI syntax to be generic; a possible mapping between this syntax and RTSP messages can be found in appendix D Processing media fragment URIs in RTSP. Existing media formats in their current representations and implementations provide varying degrees of support for this specification. It is expected that over time, media formats, media players, Web browsers, media and Web servers, as well as Web proxies will be extended to adhere to the full specification. This specification will help make video a first-class citizen of the World Wide Web.
The keywords MUST , MUST NOT , SHOULD and SHOULD NOT are to be interpreted as defined in RFC 2119 .
According to RFC 3986 , the term "URI" does not include relative references. In this document, we consider both URIs and relative references. Consequently, we use the term "URI reference" as defined in RFC 3986 (section 4.1). For simplicity, however, this document only uses the term "media fragment URI" in place of "media fragment URI reference".
The following terms are used frequently in this document and need to be clearly understood:
The basis for the standardisation of media fragment URIs is the URI specification, RFC 3986 . Providing media fragment identification information in URIs refers here to the specification of the structure of a URI fragment or a URI query. This document will explain how URI fragments and URI queries are structured to identify media fragments. It normalises the name-value parameters used in URI fragments and URI queries to address media fragments. These build on existing CGI parameter conventions.
In this section, we look at implications of standardising the structure of media fragment URIs.
The URI specification RFC 3986 says about the format of a URI fragment in Section 3.5:
"The fragment's format and resolution is [..] dependent on the media type [RFC2046] of a potentially retrieved representation. [..] Fragment identifier semantics are independent of the URI scheme and thus cannot be redefined by scheme specifications."
This essentially means that only media type definitions (as registered through the process defined in RFC 4288 ) are able to introduce a standard structure on URI fragments for that mime type. One part of the registration process of a media type can include information about how fragment identifiers in URIs are constructed for use in conjunction with this media type.
Note that the registration of URI fragment construction rules as expressed in Section 4.11 of RFC 4288 is only a SHOULD-requirement. An analysis of all media type registrations showed that there is not a single media type registration in the audio/*, image/*, video/* branches that is currently defining fragments or fragment semantics.
The Media Fragment WG has no authority to update registries of all targeted media types. To the best of our knowledge there are only few media types that actually have a specified fragment format even if it is not registered with the media type: these include Ogg, MPEG-4, and MPEG-21. Further, only a small number of software packages actually supports these fragment formats. For all others, the semantics of the fragment are considered to be unknown.
As such, the intention of this document is to propose a specification to all media type owners in the audio/*, image/*, and video/* branches for a structured approach to URI fragments and for specification of commonly agreed dimensions to address media fragments (i.e. subparts of a media resource) through URI fragments. We recommend media type owners to harmonize their existing schemes with the ones proposed in this document and update or add the fragment semantics specification to their media type registration.
The URI specification RFC 3986 says about the format of a URI query in Section 3.4:
"The query component [..] serves to identify a resource within the scope of the URI's scheme and naming authority (if any). [..] Query components are often used to carry identifying information in the form of "key=value" pairs [..]."
URI query specifications are more closely linked to the URI scheme, some of which do not even use a query component. We are mostly concerned with the HTTP RFC 2616 and the RTP/RTSP rfc2326 protocols here, which both support query components. HTTP says nothing about how a URI query has to be interpreted. RTSP explicitly says that fragment and query identifiers do not have a well-defined meaning at this time, with the interpretation left to the RTSP server.
The URI specification RFC 3986 says generally that the data within the URI is often parsed by both the user agent and one or more servers. It refers in particular to HTTP in Section 7.3:
"In HTTP, for example, a typical user agent will parse a URI into its five major components, access the authority's server, and send it the data within the authority, path, and query components. A typical server will take that information, parse the path into segments and the query into key/value pairs, and then invoke implementation-specific handlers to respond to the request."
Since the interpretation of query components resides with the functionality of servers, the intention of this document wrt query components is to recommend standard name-value pair formats for use in addressing media fragments through URI queries. We recommend server and server-type software providers to harmonize their existing schemes in use with media resources to support the nomenclature proposed in this specification.
Editorial note | |
This section is non-normative |
To address a media fragment, one needs to find ways to convey the fragment information. This specification builds on URIs RFC 3986 . Every URI is defined as consisting of four parts, as follows:
<scheme name> : <hierarchical part> [ ? <query> ] [ # <fragment> ]
There are therefore two possibilities for representing the media fragment addressing in URIs: the URI query part or the URI fragment part .
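As an illustration (not part of the specification), Python's standard urllib.parse module splits a URI along exactly these component boundaries:

```python
from urllib.parse import urlsplit

# Split a media fragment URI along the component boundaries named above.
parts = urlsplit("http://www.example.org/video.ogv?t=60,100#t=20")

print(parts.scheme)                # "http"     - scheme name
print(parts.netloc + parts.path)   # hierarchical part
print(parts.query)                 # "t=60,100" - sent to the server
print(parts.fragment)              # "t=20"     - resolved by the user agent
```

Only the query component reaches the server; the fragment component stays with the user agent.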
For media fragment addressing, both approaches - URI query and URI fragment - are useful.
The main difference between a URI query and a URI fragment is that a URI query produces a new resource, while a URI fragment provides a secondary resource that has a relationship to the primary resource. URI fragments are resolved from the primary resource without another retrieval action. This means that a user agent should be capable of resolving a URI fragment on a resource it has already received without having to fetch more data from the server.
A further requirement put on a URI fragment is that the media type of the retrieved fragment should be the same as the media type of the primary resource. Among other things, this means that a URI fragment that points to a single video frame out of a longer video results in a one-frame video, not in a still image. To extract a still image, one would need to create a URI query scheme - something not envisaged here, but easy to devise.
There are different types of media fragment addressing in this specification. As noted in the Use cases and requirements for Media Fragments document (section "Fitness Conditions on Media Containers/Resources"): not all container formats and codecs are "fit" for supporting the different types of fragment URIs. "Fitness" relates to the fact that a media fragment can be extracted from the primary resource without syntax element modifications or transcoding of the bitstream.
Resources that are "fit" can therefore be addressed with a URI fragment. Resources that are "conditionally fit" can be addressed with a URI fragment with an additional retrieval action that retrieves the modified syntax elements but leaves the codec data untouched. Resources that are "unfit" require transcoding. Such transcoded media fragments cannot be addressed with URI fragments, but only with URI queries.
Therefore, when addressing a media fragment with the URI mechanism, the author has to know whether this media fragment can be produced from the (primary) resource itself without any transcoding activities or whether it requires transcoding. In the latter case, the only choice is to use a URI query and to use a server that supports transcoding and delivery of a (primary) derivative resource to satisfy the query.
A user agent may itself resolve and control the presentation of media fragment URIs. The simplest case arises where the user agent has already downloaded the entire resource and can perform the extraction from its locally cached copy. For some media types, it may also be possible to perform the extraction over the network without any special protocol assistance. For temporal fragments this requires a user agent to be able to seek on the media resource using existing protocol mechanisms.
An example of a URI fragment used to address a media fragment is http://www.example.org/video.ogv#t=60,100 . In this case, the user agent knows that the primary resource is http://www.example.org/video.ogv and that it is only expected to display the portion of the primary resource that relates to the fragment #t=60,100 , i.e. seconds 60-100. Thus, the relationship between the primary resource and the media fragment is maintained.
In traditional URI fragment retrieval, a user agent requests the complete primary resource from the server and then applies the fragmentation locally. In the media fragment case, this would result in a retrieval action on the complete media resource, on which the user agent would then locally perform its fragment extraction - something generally unviable for such large resources.
Therefore, media resources are not always retrieved over HTTP using a single request. They may be retrieved as a sequence of byte range requests on the original resource URI, or may be retrieved as a sequence of requests to different URIs each representing a small part of the media. The reasons for such mechanisms include bandwidth conservation, where a client chooses to space requests out over time during playback in order to maximize bandwidth available for other activities, and bandwidth adaptation, where a client selects among various representations with varying bitrate depending on the current bandwidth availability.
A user agent that knows how to map media fragments to byte ranges will be able to satisfy a URI fragment request such as the above example by itself. This is typically the case for user agents that know how to seek to media fragments over the network. For example, a user agent that deals with a media file that includes an index of its seekable structures can resolve the media fragment addresses to byte ranges from the index. This is the case e.g. with seekable QuickTime files. Another example is a user agent that knows how to seek on a media file through a sequence of byte range requests and eventually receives the correct media fragment. This is the case e.g. with Ogg files in Firefox versions above 3.5.
Similarly, a user agent that knows how to map media fragments to a sequence of URIs can satisfy a URI fragment request by itself. This is typically the case for user agents that perform adaptive streaming. For example, a user agent that deals with a media resource that contains a sequence of URIs, each a media file of a few seconds duration, can resolve the media fragment addresses to a subsequence of those URIs. This is the case with QuickTime adaptive bitrate streaming or IIS Smooth Streaming.
If such a user agent natively supports the media fragment syntax as specified in this document, it is deemed conformant to this specification for fragments and for the particular dimension.
For user agents that natively support the media fragment syntax, but have to use their own seeking approach, this specification provides an optimisation that can make the byte offset seeking more efficient. It requires a conformant server with which the user agent will follow a protocol defined later in this document.
In this approach, the user agent asks the server to do the byte range mapping for the media fragment address itself and send back the appropriate byte ranges. This cannot be done through the URI alone, but requires additional protocol headers. User agents that interact with a conformant server to follow this protocol will receive the appropriate byte ranges directly and will not need to do costly seeking over the network.
Note that it is important that the server also informs the user agent which actual media fragment range it was able to retrieve. This matters because, in the compressed domain, data cannot be extracted at an arbitrary resolution, but only at the resolution in which it was packaged. For example, even if a user asked for http://www.example.org/video.ogv#t=60,100 and the user agent sent a range request of t=60,100 to the server, the server may only be able to return the range t=58,103 as the closest decodable range that encapsulates all the required data.
Note that if done right, the native user agent support for media fragments and the improved server support can be integrated without problems: the user agent just needs to include the byte range and the media fragment range request in one request. A server that does not understand the media fragment range request will only react to the byte ranges, while a server that understands them will ignore the byte range request and only reply with the correct byte ranges. The user agent will understand from the response whether it received a reply to the byte ranges or the media fragment ranges request and can react accordingly.
The current setup of the World Wide Web relies heavily on the use of caching Web proxies to speed up the delivery of content. In the case of URI fragments that are resolved by the server as indicated in the previous section, existing Web proxies have no means of caching these requests since they only understand byte ranges.
To make use of the existing Web proxy infrastructure of the Web, we need to make sure that the user agent only asks for byte ranges, so they can be served from the cache. This is possible if the server - instead of replying with the actual data - replies with the mapped byte ranges for the requested media fragment range. Then, the user agent is able to resend its range request, this time in bytes only, which can possibly already be satisfied from the cache. Details of this will be specified later.
Editorial note: Raphael | |
Should we not foresee future "smart" media caches that would be able to actually cache range request in other units than bytes? |
The described URI fragment addressing methods only work for byte-identical segments of a media resource, since we assume a simple mapping between the media fragment and bytes that each infrastructure element can deal with. Where it is impossible to maintain byte-identity and some sort of transcoding of the resource is necessary, the user agent is not able to resolve the fragmentation by itself and a server interaction is required. In this case, URI queries have to be used since they result in a server interaction and can deliver a transcoded resource.
Another use for URI queries is when a user agent actually wants to receive a completely new resource instead of just a byte range from an existing (primary) resource. This is, for example, the case for playlists of media fragment resources. Even if a media fragment could be resolved through a URI fragment, the URI query may be more desirable since it does not carry with itself the burden of the original primary resource - its file headers may be smaller, its duration may be smaller, and it does not automatically allow access to the remainder of the original primary resource.
When URI queries are used, the retrieval action has to additionally make sure to create a fully valid new resource. For example, for the Ogg format, this implies a reconstruction of Ogg headers to accurately describe the new resource (e.g. a non-zero start-time or different encoding parameters). Such a resource will be cached in Web proxies as a different resource to the original primary resource.
An example URI query that includes a media fragment specification is http://www.example.org/video.ogv?t=60,100 . This results in a video of duration 40s (assuming the original video was more than 100s long).
Note that this resource has no per-se relationship to the original primary resource. As a user agent uses such a URI with e.g. an HTML5 video element, the browser has no knowledge of the original resource and can only display this video as a 40s long video starting at 0s. The context of the original resource is lost.
A user agent may want to display the original start time of the resource as the start time of the video in order to be consistent with the information in the URI. It is possible to achieve this in one of two ways: either the video file itself has some knowledge that it is an extract from a different file and starts at an offset - or the user agent is told through the retrieval action which original primary resource the retrieved resource relates to and can find out information about it through another retrieval action. This latter option will be considered later in this document.
An example for a media resource that has knowledge about itself of the required kind are Ogg files. Ogg files that have a skeleton track and were created correctly from the primary resource will know that their start time is not 0s but 60s in the above example. The browser can simply parse this information out of the received bitstream and may display a timeline that starts at 60s and ends at 100s in the video controls if it so desires.
Another option is that the browser parses the URI and knows about how media resources have a fragment specification that follows a standard. Then the browser can interpret the query parameters and extract the correct start and end times and also the original primary resource. It can then also display a timeline that starts at 60s and ends at 100s in the video controls. Further it can allow a right-click menu to click through to the original resource if required.
A use case where the video controls may neither start at 0s nor at 60s is a mashed-up video created through a list of media fragment URIs. In such a playlist, the user agent may prefer to display a single continuous timeline across all the media fragments rather than a collection of individual timelines for each fragment. Thus, the 60s to 100s fragment may e.g. be mapped to an interval at 3min20 to 4min.
No new protocol headers are required to execute a URI query for media fragment retrieval. Some optional protocol headers that improve the information exchange will be recommended later in this document.
A combination of a URI query for a media fragment with a URI fragment yields a URI fragment resolution on top of the newly created resource. Since a URI with a query part creates a new resource, we have to do the fragment offset on the new resource. This is simply a conformant behaviour to the URI standard RFC 3986 .
For example, http://www.example.org/video.ogv?t=60,100#t=20 will lead to the 20s fragment offset being applied to the new resource, which starts at 60 and goes to 100. Thus, the reply to this is a 40s long resource whose playback will start at an offset of 20s.
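The arithmetic of this combination can be sketched as follows; resolve_combined is a hypothetical helper, not part of this specification:

```python
# Hypothetical helper (not from the spec) showing the arithmetic of
# combining a URI query with a URI fragment on the temporal dimension,
# as in video.ogv?t=60,100#t=20.
def resolve_combined(query_begin, query_end, fragment_begin):
    # The query creates a new resource of its own duration; the
    # fragment offset is then applied to that new resource.
    duration = query_end - query_begin
    start_offset = min(fragment_begin, duration)
    return duration, start_offset

duration, offset = resolve_combined(60, 100, 20)
print(duration)  # 40 - the new resource is 40s long
print(offset)    # 20 - playback starts at a 20s offset into it
```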
Editorial note: Silvia | |
We should at the end of the document set up a table with all the different addressing types and http headers and say what we deem is conformant and how to find out whether a server or user agent is conformant or not. |
This section describes the external representation of a media fragment specifier, and how this should be interpreted.
Guiding principles for the definition of the media fragments syntax were as follows:
A list of name-value pairs is encoded in the query or fragment component of a URI. The name and value components are separated by an equal sign ( = ), while multiple name-value pairs are separated by an ampersand ( & ).
name       = fragment - "&" - "="
value      = fragment - "&"
namevalue  = name [ "=" value ]
namevalues = namevalue *( "&" namevalue )
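A minimal, non-normative sketch of a parser for this ABNF in Python; parse_namevalues is a hypothetical helper name, and the code assumes well-formed percent-encoded UTF-8 input:

```python
from urllib.parse import unquote

def parse_namevalues(component):
    """Split a query or fragment component into (name, value) pairs,
    following the ABNF above: pairs are separated by "&", and name is
    separated from value by "=". Percent-encoded UTF-8 octets are
    decoded into Unicode strings. Since the value is optional
    (namevalue = name [ "=" value ]), a pair without "=" gets None."""
    pairs = []
    for namevalue in component.split("&"):
        name, sep, value = namevalue.partition("=")
        pairs.append((unquote(name), unquote(value) if sep else None))
    return pairs

print(parse_namevalues("track=audio&t=10,20"))
# [('track', 'audio'), ('t', '10,20')]
print(parse_namevalues("id=Cap%C3%ADtulo%202"))
# [('id', 'Capítulo 2')]
```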
The names and values can be arbitrary Unicode strings, encoded in UTF-8 and percent-encoded as per RFC 3986 . Here are some examples of URIs with name-value pairs in the fragment component, to demonstrate the general structure:
http://www.example.com/example.ogv#a=b&c=d
http://www.example.com/example.ogv#t=10,20
http://www.example.com/example.ogv#track=audio&t=10,20
http://www.example.com/example.ogv#id=Cap%C3%ADtulo%202
While arbitrary name-value pairs can be encoded in this manner, this specification defines a fixed set of dimensions. The dimension keyword name is encoded in the name component, while dimension-specific syntax is encoded in the value component.
Section 5.1.1 Processing name-value components defines in more detail how to process the name-value pair syntax, arriving at a list of name-value Unicode string pairs. The syntax definitions in 4.2 Fragment Dimensions apply to these Unicode strings.
Media fragments support addressing the media along four dimensions:
This dimension denotes a specific time range in the original media, such as "starting at second 10, continuing until second 20";
this dimension denotes a specific range of pixels in the original media, such as "a rectangle with size (100,100) with its top-left at coordinate (10,10)";
this dimension denotes one or more tracks in the original media, such as "the english audio and the video track";
this dimension denotes a named temporal fragment within the original media, such as "chapter 2", and can be seen as a convenient way of specifying a temporal fragment.
All dimensions are logically independent and can be combined; the outcome is independent of the order of the dimensions. Note however that the id dimension is a shortcut for the temporal dimension; combining both dimensions needs to be treated as described in section 6.2.1 Errors on the general URI level .
The track dimension refers to one of a set of parallel media streams (e.g. "the english audio track for a video"), not to a (possibly self-contained) section of the source media (e.g. "Audio track 2 of a CD").
Temporal clipping is denoted by the name t , and specified as an interval with a begin time and an end time (or an in-point and an out-point, in video editing terms). Either or both may be omitted, with the begin time defaulting to 0 seconds and the end time defaulting to the duration of the source media. The interval is half-open: the begin time is considered part of the interval whereas the end time is considered to be the first time point that is not part of the interval. If only a single number is given, it corresponds to the begin time, except if it is preceded by a comma, in which case it indicates the end time.
Examples:

t=10,20   # => results in the time interval [10,20)
t=,20     # => results in the time interval [0,20)
t=10,     # => results in the time interval [10,end)
t=10      # => also results in the time interval [10,end)
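As a non-normative illustration, the defaulting and comma rules can be sketched in Python (the function name `parse_t_value` and the use of `None` for "end of media" are conventions of this sketch, not part of the specification; only plain NPT seconds are handled):

```python
def parse_t_value(value):
    """Split a t= value into (begin, end) per the defaulting rules.

    Returns times in seconds; None means "until the end of the media".
    Assumes plain npt seconds for brevity (no npt:/smpte:/clock: prefix).
    """
    if "," in value:
        begin_s, end_s = value.split(",", 1)
    else:
        begin_s, end_s = value, ""               # single number => begin time
    begin = float(begin_s) if begin_s else 0.0   # begin defaults to 0 seconds
    end = float(end_s) if end_s else None        # end defaults to the duration
    return begin, end

print(parse_t_value("10,20"))  # (10.0, 20.0) -> interval [10,20)
print(parse_t_value(",20"))    # (0.0, 20.0)  -> interval [0,20)
print(parse_t_value("10,"))    # (10.0, None) -> interval [10,end)
print(parse_t_value("10"))     # (10.0, None) -> interval [10,end)
```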
Temporal clipping can be specified either as Normal Play Time (npt) [RFC 2326], as SMPTE timecodes [SMPTE], or as real-world clock time (clock) [RFC 2326]. Begin and end times are always specified in the same format. The format is specified by name, followed by a colon (:), with npt: being the default.
timeprefix = %x74 ; "t"
In this version of the media fragments specification there is no extensibility mechanism to add time format specifiers.
Normal Play Time can either be specified as seconds, with an optional fractional part to indicate milliseconds, or as colon-separated hours, minutes and seconds (again with an optional fraction). Minutes and seconds must be specified as exactly two digits; hours and fractional seconds can be any number of digits. The hours, minutes and seconds specification for NPT is a convenience only; it does not signal frame accuracy. The specification of the "npt:" identifier is optional since NPT is the default time scheme. This specification builds on the RTSP specification of NPT [RFC 2326].
npt-sec       = 1*DIGIT [ "." *DIGIT ]                      ; definitions taken
npt-hhmmss    = npt-hh ":" npt-mm ":" npt-ss [ "." *DIGIT ] ; from RFC 2326
npt-mmss      = npt-mm ":" npt-ss [ "." *DIGIT ]
npt-hh        = 1*DIGIT ; any positive number
npt-mm        = 2DIGIT  ; 0-59
npt-ss        = 2DIGIT  ; 0-59
npttimedef    = [ deftimeformat ":" ] ( npttime [ "," npttime ] ) / ( "," npttime )
deftimeformat = %x6E.70.74 ; "npt"
npttime       = npt-sec / npt-mmss / npt-hhmmss
Examples:
t=npt:10,20         # => results in the time interval [10,20)
t=npt:,121.5        # => results in the time interval [0,121.5)
t=0:02:00,121.5     # => results in the time interval [120,121.5)
t=npt:120,0:02:01.5 # => also results in the time interval [120,121.5)
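A minimal, non-normative sketch of converting the three NPT notations to seconds (the helper name `npt_to_seconds` is ours; error handling and digit-count validation are omitted):

```python
def npt_to_seconds(t):
    """Convert an npt time (npt-sec, npt-mmss or npt-hhmmss) to seconds."""
    parts = t.split(":")
    if len(parts) == 1:                  # npt-sec:    "121.5"
        return float(parts[0])
    if len(parts) == 2:                  # npt-mmss:   "02:01.5"
        mm, ss = parts
        return int(mm) * 60 + float(ss)
    hh, mm, ss = parts                   # npt-hhmmss: "0:02:01.5"
    return int(hh) * 3600 + int(mm) * 60 + float(ss)

print(npt_to_seconds("121.5"))      # 121.5
print(npt_to_seconds("0:02:00"))    # 120.0
print(npt_to_seconds("0:02:01.5"))  # 121.5
```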
SMPTE time codes are a way to address a specific frame (or field) without running the risk of rounding errors causing a different frame to be selected. The format is always colon-separated hours, minutes, seconds and frames. Frames are optional, defaulting to 00. If the source format has a further subdivision of frames (such as odd/even fields in interlaced video), these can be specified further with a number after a dot (.). The SMPTE format name must always be specified, because the interpretation of the fields depends on the format. The SMPTE formats supported in this version of the specification are: smpte, smpte-25, smpte-30 and smpte-30-drop. smpte is a synonym for smpte-30.
smptetimedef = smpteformat ":" ( frametime [ "," frametime ] ) / ( "," frametime )
smpteformat  = %x73.6D.70.74.65 ; "smpte"
             / %x73.6D.70.74.65.2D.32.35 ; "smpte-25"
             / %x73.6D.70.74.65.2D.33.30 ; "smpte-30"
             / %x73.6D.70.74.65.2D.33.30.2D.64.72.6F.70 ; "smpte-30-drop"
frametime    = 1*DIGIT ":" 2DIGIT ":" 2DIGIT [ ":" 2DIGIT [ "." 2DIGIT ] ]
Examples:
t=smpte-30:0:02:00,0:02:01:15        # => results in the time interval [120,121.5)
t=smpte-25:0:02:00:00,0:02:01:12.40  # => results in the time interval [120,121.5)
                                     #    (80 or 100 subframes per frame seem typical)
Using SMPTE timecodes may result in frame-accurate begin and end times, but only if the timecode format used in the media fragment specifier is the same as that used in the original media item.
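The conversion from a non-drop SMPTE time code to seconds can be sketched as follows (non-normative; `smpte_to_seconds` is our own helper, subframes after the dot are ignored, and smpte-30-drop is deliberately left out because it requires drop-frame arithmetic):

```python
FRAME_RATES = {"smpte": 30, "smpte-25": 25, "smpte-30": 30}  # smpte = smpte-30

def smpte_to_seconds(fmt, code):
    """Convert hh:mm:ss[:ff[.sub]] to seconds for non-drop SMPTE formats."""
    fps = FRAME_RATES[fmt]
    fields = code.split(":")
    hh, mm, ss = int(fields[0]), int(fields[1]), int(fields[2])
    # frames are optional and default to 00; subframes (after ".") ignored here
    ff = int(fields[3].split(".")[0]) if len(fields) > 3 else 0
    return hh * 3600 + mm * 60 + ss + ff / fps

print(smpte_to_seconds("smpte-30", "0:02:01:15"))  # 121.5
print(smpte_to_seconds("smpte-25", "0:02:00"))     # 120.0
```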
Wall-clock time codes are a way to address the real-world clock time that is typically associated with a live video stream. These are the same time codes that are used by RTSP [RFC 2326], by SMIL [SMIL], and by HTML5 [HTML 5]. The scheme uses ISO 8601 UTC timestamps (http://www.iso.org/iso/date_and_time_format). The format separates the date from the time with a "T" character, and the string ends with "Z", which includes time zone capabilities. To that effect, the ABNF grammar refers to RFC 3339, which includes the relevant part of ISO 8601 in ABNF form. The time scheme identifier is "clock".
datetime     = <date-time, defined in RFC 3339>
clocktimedef = clockformat ":" ( clocktime [ "," clocktime ] ) / ( "," clocktime )
clockformat  = %x63.6C.6F.63.6B ; "clock"
clocktime    = ( datetime / walltime / date )
; WARNING: if your date-time contains '+' (or any other reserved character, per RFC 3986),
; it should be percent-encoded when used in a URI.
For convenience, the definition is copied here:
; defined in RFC 3339
date-fullyear  = 4DIGIT
date-month     = 2DIGIT ; 01-12
date-mday      = 2DIGIT ; 01-28, 01-29, 01-30, 01-31 based on month/year
time-hour      = 2DIGIT ; 00-23
time-minute    = 2DIGIT ; 00-59
time-second    = 2DIGIT ; 00-58, 00-59, 00-60 based on leap second rules
time-secfrac   = "." 1*DIGIT
time-numoffset = ("+" / "-") time-hour ":" time-minute
time-offset    = "Z" / time-numoffset
partial-time   = time-hour ":" time-minute ":" time-second [time-secfrac]
full-date      = date-fullyear "-" date-month "-" date-mday
full-time      = partial-time time-offset
date-time      = full-date "T" full-time
Examples:
t=clock:2009-07-26T11:19:01Z,2009-07-26T11:20:01Z
    # => results in a 1 min interval on 26th Jul 2009 from 11hrs, 19min, 1sec
t=clock:2009-07-26T11:19:01Z   # => starts on 26th Jul 2009 at 11hrs, 19min, 1sec
t=clock:,2009-07-26T11:20:01Z  # => ends on 26th Jul 2009 at 11hrs, 20min, 1sec
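A non-normative sketch of parsing the UTC ("Z") form used in the examples above with Python's standard library (numeric offsets and fractional seconds, which RFC 3339 also allows, are omitted for brevity):

```python
from datetime import datetime, timezone

def parse_clock(value):
    """Parse a clock timestamp of the form YYYY-MM-DDThh:mm:ssZ.

    Minimal sketch: only the UTC ("Z") form is handled.
    """
    dt = datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ")
    return dt.replace(tzinfo=timezone.utc)

begin = parse_clock("2009-07-26T11:19:01Z")
end = parse_clock("2009-07-26T11:20:01Z")
print((end - begin).total_seconds())  # 60.0
```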
Spatial clipping selects an area of pixels from visual media streams. For this release of the media fragment specification, only rectangular selections are supported. The rectangle can be specified as pixel coordinates or percentages.
Pixels coordinates are interpreted after taking into account the resource's dimensions, aspect ratio, clean aperture, resolution, and so forth, as defined for the format used by the resource. If an anamorphic format does not define how to apply the aspect ratio to the video data's dimensions to obtain the "correct" dimensions, then the user agent must apply the ratio by increasing one dimension and leaving the other unchanged.
Rectangle selection is denoted by the name xywh. The value is an optional format pixel: or percent: (defaulting to pixel) and 4 comma-separated integers. The integers denote x, y, width and height, respectively, with x=0, y=0 being the top left corner of the image. If percent is used, x and width are interpreted as a percentage of the width of the original media, and y and height are interpreted as a percentage of the original height.
xywhprefix = %x78.79.77.68 ; "xywh"
xywhparam  = [ xywhunit ":" ] 1*DIGIT "," 1*DIGIT "," 1*DIGIT "," 1*DIGIT
xywhunit   = %x70.69.78.65.6C ; "pixel"
           / %x70.65.72.63.65.6E.74 ; "percent"
Examples:
xywh=160,120,320,240        # => results in a 320x240 box at x=160 and y=120
xywh=pixel:160,120,320,240  # => results in a 320x240 box at x=160 and y=120
xywh=percent:25,25,50,50    # => results in a 50%x50% box at x=25% and y=25%
If the clipping region is pixel-based and the image is multi-resolution (like an ICO file), the fragment MUST be ignored, so that the URI represents the entire image. More generally, pixel-clipping an image that does not have a single well-defined pixel resolution (width and height) is not recommended.
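Resolving an xywh value against known media dimensions can be sketched as follows (non-normative; `resolve_xywh` is our own helper, and the truncating integer division for percent values is an arbitrary choice of this sketch):

```python
def resolve_xywh(value, media_width, media_height):
    """Resolve an xywh= value to pixel coordinates (x, y, w, h).

    x and w are taken relative to the media width, y and h relative
    to the media height when the percent unit is used.
    """
    unit = "pixel"
    if ":" in value:
        unit, value = value.split(":", 1)
    x, y, w, h = (int(v) for v in value.split(","))
    if unit == "percent":
        x = x * media_width // 100
        w = w * media_width // 100
        y = y * media_height // 100
        h = h * media_height // 100
    return x, y, w, h

print(resolve_xywh("160,120,320,240", 640, 480))      # (160, 120, 320, 240)
print(resolve_xywh("percent:25,25,50,50", 640, 480))  # (160, 120, 320, 240)
```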
Track selection allows the extraction of tracks (audio, video, subtitles, etc.) from a media container that supports multiple tracks. Track selection is denoted by the name track. The value is a string. Percent-escaping can be used in the string to specify unsafe characters (including separators such as the semi-colon); see the grammar below for details. Multiple track specification is allowed, but requires the specification of multiple track parameters. Interpretation of the string depends on the container format of the original media: some formats allow numbers only, some allow full names.
trackprefix = %x74.72.61.63.6B ; "track"
trackparam  = unistring
Examples:
track=1                     # => results in only extracting track 1
track=video&track=subtitle  # => results in extracting track 'video' and track 'subtitle'
track=Wide%20Angle%20Video  # => results in only extracting track 'Wide Angle Video'
As the allowed track names are determined by the original source media, this information has to be known before construction of the media fragment. There is no support for generic media type names (audio, video) across container formats: most container formats allow multiple tracks of each media type, which would lead to ambiguities.
Note that there are existing discovery mechanisms for retrieving the track names of a media resource, such as the Rich Open multitrack media Exposition format (ROE) [ROE] or the Media Annotations API [Media Annotations]. Further, HTML5 media has a discovery mechanism for retrieving the track names of a media resource through the audioTracks, videoTracks, and textTracks IDL attributes of the HTMLMediaElement. For example, to discover all the names of the available tracks of a video resource, you may want to use the following JavaScript excerpt.
<video id="v1" src="video" controls></video>
<script type="text/javascript">
  var video = document.getElementsByTagName("video")[0];
  var track_names = [];
  var idx = 0;
  for (var i = 0; i < video.audioTracks.length; i++, idx++) {
    track_names[idx] = video.audioTracks[i].label;
  }
  for (i = 0; i < video.videoTracks.length; i++, idx++) {
    track_names[idx] = video.videoTracks[i].label;
  }
  for (i = 0; i < video.textTracks.length; i++, idx++) {
    track_names[idx] = video.textTracks[i].label;
  }
</script>
ID-based selection is denoted by the name id, with the value being a string. Percent-escaping can be used in the string to include unsafe characters; see the grammar below for details. Interpretation of the string depends on the underlying container format: some container formats support named temporal fragments or numbered temporal fragments. As with track selection, determining which names are valid requires knowledge of the original media item.
nameprefix = %x69.64 ; "id"
nameparam   = unistring
An id fragment can be seen as a shortcut for a temporal fragment (i.e., a named temporal fragment). Hence, an id fragment can always be resolved to a temporal fragment.
Examples:
id=1               # => results in only extracting the section called '1'
id=chapter-1       # => results in only extracting the section called 'chapter-1'
id=Airline%20Edit  # => results in only extracting the section called 'Airline Edit'
DIGIT       = <DIGIT, defined in RFC 5234>
pchar       = <pchar, defined in RFC 3986>
unreserved  = <unreserved, defined in RFC 3986>
pct-encoded = <pct-encoded, defined in RFC 3986>
fragment    = <fragment, defined in RFC 3986>
unichar     = <any Unicode code point>
unistring   = *unichar
For convenience, the following definitions are copied here. Only the definitions in the original documents are considered normative.
; defined in RFC 5234
ALPHA       = %x41-5A / %x61-7A ; A-Z / a-z
DIGIT       = %x30-39 ; 0-9
HEXDIG      = DIGIT / "A" / "B" / "C" / "D" / "E" / "F"
; defined in RFC 3986
unreserved  = ALPHA / DIGIT / "-" / "." / "_" / "~"
pct-encoded = "%" HEXDIG HEXDIG
sub-delims  = "!" / "$" / "&" / "'" / "(" / ")" / "*" / "+" / "," / ";" / "="
pchar       = unreserved / pct-encoded / sub-delims / ":" / "@"
fragment    = *( pchar / "/" / "?" )
This section defines the different exchange scenarios for the situations explained in section 3 URI fragment and URI query over the HTTP protocol.
The formal grammar defined in section 4 Media Fragments Syntax describes what producers of media fragments should output. It does not take into account possible percent-encodings that are valid according to RFC 3986, and the grammar is not a specification of how a media fragment should be parsed. Therefore, section 5.1 Processing Media Fragment URI defines how to parse media fragment URIs.
This section defines how to parse the media fragment URIs defined in section 4 Media Fragments Syntax, along with notes on some of the caveats to be aware of. Implementors are free to use any equivalent technique(s).
This section defines how to convert an octet string (from the query or fragment component of a URI) into a list of name-value Unicode string pairs.
Parse the octet string according to the namevalues syntax, yielding a list of name-value pairs, where name and value are both octet strings. In accordance with RFC 3986, the name and value components must be parsed and separated before percent-encoded octets are decoded.
For each name-value pair:
Decode percent-encoded octets in name and value as defined by RFC 3986 . If either name or value are not valid percent-encoded strings, then remove the name-value pair from the list.
Convert name and value to Unicode strings by interpreting them as UTF-8 . If either name or value are not valid UTF-8 strings, then remove the name-value pair from the list.
Note that the output is well defined for any input.
Examples:
Input | Output | Notes |
---|---|---|
"t=1" | [("t", "1")] | simple case |
"t=1&t=2" | [("t", "1"), ("t", "2")] | repeated name |
"a=b=c" | [("a", "b=c")] | "=" in value |
"a&b=c" | [("a", ""), ("b", "c")] | missing value |
"%74=%6ept%3A%310" | [("t", "npt:10")] | unnecssary percent-encoding |
"id=%xy&t=1" | [("t", "1")] | invalid percent-encoding |
"id=%E4r&t=1" | [("t", "1")] | invalid UTF-8 |
While the processing defined in this section is designed to be largely compatible with the parsing of the URI query component in many HTTP server environments, there are incompatible differences that implementors should be aware of:
"&" is the only primary separator for name-value pairs, but some server-side languages also treat ";" as a separator.
name-value pairs with invalid percent-encoding should be ignored, but some server-side languages silently mask such errors.
The "+" character should not be treated specially, but some server-side languages replace it with a space (" ") character.
Multiple occurrences of the same name must be preserved, but some server-side languages only preserve the last occurrence.
This section defines how to convert a list of name-value Unicode string pairs into the media fragment dimensions.
Given the dimensions defined in section 4.2 Fragment Dimensions , each has a pair of production rules that corresponds to the name and value component respectively:
Keyword | Dimension |
---|---|
t | 4.2.1 Temporal Dimension |
xywh | 4.2.2 Spatial Dimension |
track | 4.2.3 Track Dimension |
id | 4.2.4 id Dimension |
Initially, all dimensions are undefined.
For each name-value pair:
If name matches a keyword in the above table, interpret value as per the corresponding section.
Otherwise, the name-value pair does not represent a media fragment dimension. Validators should emit a warning. User agents must ignore the name-value pair.
Note: Because the name-value pairs are processed in order, the last valid occurrence of any dimension is the one that is used.
This section defines the protocol steps in HTTP RFC 2616 to resolve and deliver a media fragment specified as a URI fragment.
Various recipes are proposed and described in a separate document.

This section describes the protocol steps used in HTTP to resolve and deliver a media fragment specified as a URI query. The recipe proposed is described in a separate document.

In this section, we discuss how Media Fragment URIs should be interpreted by UAs. Valid and error cases are presented. In case of errors, we distinguish between errors that can be detected solely based on the Media Fragment URI and errors that can only be detected when the UA has information of the media resource (such as duration or track information). For each dimension, a number of valid media fragments and their semantics are presented.
To describe the different cases for temporal media fragments, we make the following definitions:

s: the start point of the media, with s >= 0
e: the end point of the media (i.e., duration = e - s), with s < e
a: a positive integer, a >= 0
b: a positive integer, b >= 0
Further, as stated in section 4.2.1 Temporal Dimension, temporal intervals are half-open (i.e., the begin time is considered part of the interval, whereas the end time is considered to be the first time point that is not part of the interval). Thus, if we state below that "the media is played from x to y", this means that the frame corresponding to y will not be played.

The following temporal fragments are all valid:

t=a,b with a <= b
To describe the different cases for spatial media fragments, we make the following definitions:

The following spatial fragments are all valid:
The result of doing spatial clipping on a media resource that has multiple video tracks is that the spatial clipping is applied to all tracks.

The following track fragments are valid:
The following id fragments are valid:
Both syntactical and semantical errors are treated similarly: the UA SHOULD ignore name-value pairs causing errors detectable based on the URI. Below, we provide more details for each of the dimensions. We look at errors in the different dimensions and their values in the subsequent sub-sections. We start with errors on the more general levels.
The following list provides the different kinds of errors that can occur on the general URI level and how they should be treated:
Examples:
The value cannot be parsed for the spatial dimension or the parsed value is invalid according to the specification. Invalid spatial fragments SHOULD be ignored by the UA.
Examples:
The value cannot be parsed for the track dimension. Invalid track fragments SHOULD be ignored by the UA.
Examples:
The value cannot be parsed for the id dimension. Invalid id fragments SHOULD be ignored by the UA.
Examples:
Errors that can only be detected when the UA has information of the source media are treated differently. Examples of such information are the duration of a video, the resolution of an image, track information, or the mime type of the media resource (i.e., all information that is not detectable solely based on the URI). Note that a lot of this information is located within the setup information of the media resource. Below, we provide more details for each of the dimensions.
For this, we use the definitions from 6.1.1 Valid temporal dimension. The invalidity of temporal fragments can only be detected by the UA if it knows the media duration (for non-existent temporal fragments) and the frame rate (for smpte temporal fragments) of the source media.

The value resolves to a non-existent fragment.
To describe the different cases for spatial media fragments, we use the definitions from 6.1.2 Valid spatial dimension. The invalidity of spatial fragments can only be detected by the UA if it knows the resolution of the source media.
The invalidity of track fragments can be detected if the UA knows which tracks are available in the source media.
Examples:
t=a,b with a > b  # retrieves nothing, since this is an inverted interval
t=asdf
t=5,ekj
t=agk,9
t='0'
t=10-20
t=10:20
t=10,20,40
t%3D10            # %3D is equivalent to "="; percent-encoding does not resolve the separator

Effect: retrieve whatever the browser needs to set up playback, but otherwise nothing.
If the UA detects a non-existing track in the Media Fragment URI, it SHOULD ignore the track fragment. For example, suppose the source media consists of two tracks: 'videohigh' and 'audiomed'. The track fragment track=foo then points to a non-existing track and should be ignored if the UA knows which tracks are available.
The invalidity of id fragments can be detected if the UA knows which id fragments are available in the source media. If the UA detects a non-existing id fragment in the Media Fragment URI, it SHOULD ignore the id fragment. For example, if the source media does not contain the id fragment 'chapter1', then id=chapter1 points to a non-existing id fragment and should be ignored if the UA knows which id fragments are available.
This section contains notes to implementors. Some of the information here is already stated formally elsewhere in the document, and the reference here is mainly a heads-up. Other items are really outside the scope of this specification, but the notes here reflect what the authors think would be good practice.
The sub-sections are not mutually exclusive. Hence, an implementer of a web browser as a media fragment client should read the sections 7.1 Browsers Rendering Media Fragments , 7.2 Clients Displaying Media Fragments and 7.3 All Media Fragment Clients .
The pixel coordinates defined in the section 4.2.2 Spatial Dimension are intended to be identical to the intrinsic width and height defined in HTML5 .
For spatial URI fragments, the next section describes two distinct use cases, highlighting and cropping. HTML rendering clients, however, are expected to implement cropping as the default rendering mechanism.
When dealing with media fragments, there is a question whether to display the media fragment in context or without context. In general, it is recommended to display a URI fragment in context since it is part of a larger resource. On the other hand, a URI query results in a new resource, so it is recommended to display it as a complete resource without context. The next paragraphs discuss for each axis the context of a media fragment and provides suggestions regarding the visualization of the URI fragment within its context.
For a temporal URI fragment, it is recommended to start playback at a time offset that equals to the start of the fragment and pause at the end of the fragment. When the "play" button is hit again, the resource will continue loading and play back beyond the end of the fragment. When seeking to specific offsets, the resource will load and play back from those seek points. It is also recommended to introduce a "reload" button to replay just the URI fragment. In this way, a URI fragment basically stands for "focusing attention". Additionally, temporal URI fragments could be highlighted on the transport bar.
For a spatial URI fragment, we foresee two distinct use cases: highlighting the spatial region in-context and cropping to the region. In the first case, the spatial region could be indicated by means of a bounding box or the background (i.e., all the pixels that are not contained within the region) could be blurred or darkened. In the second case, the region alone would be presented as a cropped area. How a document author specifies which use case is intended is outside the scope of this specification, we suggest implementors of the specification provide a means for this, for example through attributes or stylesheet elements.
Finally, for track URI fragments, it is recommended to play only the tracks identified by the track URI fragment. If no tracks are specified, the default tracks should be played. Different tracks could be selected using drop-down boxes or buttons; the selected tracks are then highlighted during playback. The way the UA retrieves information regarding the available tracks of a particular resource is out of scope for this specification.
Resolution
Order:
Where
multiple
dimensions
are
combined
in
one
URI
fragment
request,
implementations
are
expected
to
first
do
track
temporal,
id,
and
temporal
track
selection
on
the
container
level,
and
then
do
spatial
clipping
on
the
codec
level.
Named
selection
is
done
for
whatever
the
name
stands
for:
a
track,
a
temporal
section,
or
a
spatial
region.
Media Fragment Grammar: Note that the grammar for Media Fragment URI only specifies the grammar for features standardised by this specification. If a string does not parse correctly it does not necessarily mean the URI is wrong, it only means it is not a Media Fragment according to this specification. It may be correct for some extended form, or for a completely different fragment specification method. For this reason, error recovery on syntax errors in media fragment specifiers is unwise.
External
Clipping:
There
is
no
obligatory
resolution
method
for
a
situation
where
a
media
fragment
URI
is
being
used
in
the
context
of
another
clipping
method.
Formally,
it
is
up
to
the
context
embedding
the
media
fragment
URI
to
decide
whether
the
outside
clipping
method
overrides
the
media
fragment
URI
or
cascades,
i.e.
is
defined
on
the
resulting
resource.
In
the
absence
of
strong
reasons
to
do
otherwise
we
suggest
cascading.
An
example
is
a
SMIL
element
as
follows:
<smil:video
clipBegin="5"
clipEnd="15"
src="http://www.example.com/example.mp4#t=100,200"/>
.
This
should
start
playback
of
the
original
media
resource
at
second
105,
and
stop
at
115.
Content-Range-Mapping: The Content-Range-Mapping header returned sometimes refers to a completely different range than the one that was specified as the Range: in the request. This can happen if a byte-based range is requested from a cache server that is not Media Fragment aware, and that server had previously cached the data as a result of a time range request. Technically, the information in the Content-Range-Mapping header is still correct, but it is completely unrelated to the request issued.
Media type: The media type of a resource retrieved through a URI fragment request is the same as that of the primary resource. Thus, retrieval of e.g. a single frame from a video will result in a one-frame-long video. Or, retrieval of all the audio tracks from a video resource will result in a video and not a audio resource. When using a URI query approach, media type changes are possible. E.g. a spatial fragment from a video at a certain time offset could be retrieved as a jpeg using a specific HTTP "Accept" header in the request.
Synchronisation: Synchronisation between different tracks of a media resource needs to be maintained when retrieving media fragments of that resource. This is true for both, URI fragment and URI query retrieval. With URI queries, when transcoding is required, a non-perceivable change in the synchronisation is acceptable.
Embedded Timecodes: When a media resource contains embedded time codes, these need to be maintained for media fragment retrieval, in particular when the URI fragment method is used. When URI queries are used and transcoding takes place, the embedded time codes should remain when they are useful and required.
SMPTE Timecodes: Standardisation of SMPTE timecodes in this document is primarily intended to allow frame-accurate references to sections of video files, they can be seen as a form of content-based addressing.
Reasonable
Clipping:
Temporal
clipping
needs
to
be
as
close
as
reasonably
possible
to
what
the
media
fragment
specified,
and
not
omit
any
requested
data.
"Reasonably
close"
means
the
nearest
compression
entity
to
the
requested
fragment
that
completely
contains
the
requested
fragment.
This
means,
e.g.
for
temporal
fragments
if
a
request
is
made
for
http://www.example.org/video.ogv#t=60,100
,
but
the
closest
decodable
range
is
t=58,102
because
this
is
where
a
packet
boundary
lies
for
audio
and
video,
then
it
will
be
this
range
that
is
returned.
The
UA
is
then
capable
of
displaying
only
the
requested
subpart,
and
should
also
just
do
that.
For
some
container
formats
this
is
a
non-issue,
because
the
container
format
allows
specification
of
logical
begin
and
end.
Reasonable
byte
ranges:
If
a
single
temporal
range
request
would
result
in
a
disproportionally
large
number
of
byte
ranges
it
may
be
better
if
the
server
returns
a
redirect
to
the
query
form
of
the
media
fragment.
This
situation
could
happen,
happen
for
example,
if
the
underlying
media
file
is
organized
in
a
strange
way.
Media Fragment URIs are only defined on media resources. However, many Web developers that create Web pages with video or audio want to provide their users the ability to jump directly to media fragments - in particular to time offsets in a video - through providing a URI scheme for the Web page.
The way in which to realize this without requiring an extra server interaction is by using a URI fragment scheme on the Web page which is parsed by JavaScript and communicates the media fragment to the audio or video resource loader. In HTML5 it would need to change the @src attribute of the appropriate <audio> or <video> element with the appropriate URI fragment and then call the load() function to make the element (re)load the resource with that URI.
A URI scheme for such a Web page may involve ampersand-separated name-value pairs as defined in this specification, e.g. http://example.com/videopage.html#t=60,100 .
However, the Web developer has to create a scheme that works with the remainder of the Web page fragment addressing functionality. If, for example, the Web page makes use of the ID attributes of the elements on the page for scrolling down on the page, adding media fragment URI addressing to the Web page addressing will fail. For example, if http://example.com/videopage.html#first works and scrolls to an offset on that Web page, http://example.com/videopage.html#first&t=60,100 will not do the same scrolling. The Web developer will then need to parse the fragment parameter and implement the scrolling functionality in JavaScript manually using the scrollTo() or scrollTop() functions.
HTTP
byte
ranges
can
only
be
used
to
request
media
fragments
if
these
media
fragments
can
be
expressed
in
terms
of
byte
ranges.
This
restriction
implies
that
media
resources
should
fulfil
In
other
words,
the
following
conditions:
The
media
fragments
can
be
extracted
in
the
compressed
domain;
domain.
No
syntax
element
modifications
in
the
bitstream
are
needed
to
perform
the
extraction.
Not
all
If
a
media
formats
will
be
compliant
with
these
two
conditions.
Hence,
we
distinguish
the
following
categories:
The
fragments
of
a
media
resource
meets
the
two
conditions
(i.e.,
fragments
can
be
extracted
in
the
compressed
domain
and
no
syntax
element
modifications
are
necessary).
In
this
case,
expressable
in
terms
of
byte
ranges,
caching
media
fragments
of
such
media
resources
is
possible
using
HTTP
byte
ranges,
because
their
media
fragments
are
addressable
in
terms
of
byte
ranges.
Media
fragments
can
be
extracted
in
the
compressed
domain,
but
syntax
element
modifications
are
required.
These
In
case
media
fragments
are
cacheable
using
HTTP
byte
ranges
on
condition
that
the
syntax
element
modifications
are
needed
in
media-headers
applying
to
the
whole
media
resource/fragment.
In
this
case,
those
media-headers
could
be
sent
to
the
client
in
the
first
response
of
the
server,
which
is
a
response
to
a
request
on
a
specific
media
resource
different
from
the
byte-range
content.
Media
fragments
cannot
be
extracted
in
the
compressed
domain.
In
this
case,
domain,
transcoding
operations
are
necessary
to
extract
media
fragments.
Since
these
media
fragments
are
not
expressible
in
terms
of
byte
ranges,
it
is
not
possible
to
cache
these
media
fragments
using
HTTP
byte
ranges.
Note
that
media
formats
which
enable
extracting
fragments
in
the
compressed
domain,
but
are
not
compliant
with
category
2
(i.e.,
syntax
element
modifications
are
not
only
applicable
to
the
whole
media
resource),
also
belong
to
this
category.
unichar = <any Unicode code point> unistring = *unichar ; defined in RFC 5234 ALPHA = %x41-5A / %x61-7A ; A-Z / a-z DIGIT = %x30-39 ; 0-9 HEXDIG = DIGIT / "A" / "B" / "C" / "D" / "E" / "F" ; defined in RFC 3986 unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~" pct-encoded = "%" HEXDIG HEXDIG sub-delims = "!" / "$" / "&" / "'" / "(" / ")" / "*" / "+" / "," / ";" / "=" pchar = unreserved / pct-encoded / sub-delims / ":" / "@" fragment = *( pchar / "/" / "?" ) ; defined in RFC 2326 npt-sec = 1*DIGIT [ "." *DIGIT ] ; definitions taken npt-hhmmss = npt-hh ":" npt-mm ":" npt-ss [ "." *DIGIT] ; from RFC 2326 npt-hh = 1*DIGIT ; any positive number npt-mm = 2DIGIT ; 0-59 npt-ss = 2DIGIT ; 0-59 ; defined in RFC 3339 date-fullyear = 4DIGIT date-month = 2DIGIT ; 01-12 date-mday = 2DIGIT ; 01-28, 01-29, 01-30, 01-31 based on ; month/year time-hour = 2DIGIT ; 00-23 time-minute = 2DIGIT ; 00-59 time-second = 2DIGIT ; 00-58, 00-59, 00-60 based on leap second ; rules time-secfrac = "." 1*DIGIT time-numoffset = ("+" / "-") time-hour ":" time-minute time-offset = "Z" / time-numoffset partial-time = time-hour ":" time-minute ":" time-second [time-secfrac] full-date = date-fullyear "-" date-month "-" date-mday full-time = partial-time time-offset date-time = full-date "T" full-time ; Mediafragment definitions segment = mediasegment / *( pchar / "/" / "?" 
) ; augmented fragment ; definition taken from ; RFC 3986 ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; Common Prefixes ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; deftimeformat = %x6E.70.74 ; "npt" pfxdeftimeformat = %x74.3A.6E.70.74 ; "t:npt" smpteformat = %x73.6D.70.74.65 ; "smpte" / %x73.6D.70.74.65.2D.32.35 ; "smpte-25" / %x73.6D.70.74.65.2D.33.30 ; "smpte-30" / %x73.6D.70.74.65.2D.33.30.2D.64.72.6F.70 ; "smpte-30-drop" pfxsmpteformat = %x74.3A.73.6D.70.74.65 ; "t:smpte" / %x74.3A.73.6D.70.74.65.2D.32.35 ; "t:smpte-25" / %x74.3A.73.6D.70.74.65.2D.33.30 ; "t:smpte-30" / %x74.3A.73.6D.70.74.65.2D.33.30.2D.64.72.6F.70 ; "t:smpte-30-drop" clockformat = %x63.6C.6F.63.6B ; "clock" pfxclockformat = %x74.3A.63.6C.6F.63.6B ; "clock" ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; Media Segment ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ;mediasegment = namesegment / axissegment axissegment = ( ) *( "&" ( ) ; ; note that this does not capture the restriction of only one timesegment or spacesegment ; in the axisfragment definition, unless we list explicitely all the cases, ;mediasegment = ( timesegment / spacesegment / tracksegment / idsegment ) *( "&" ( timesegment / spacesegment / tracksegment / idsegment ) timesegment = timeprefix "=" timeparam timeprefix = %x74 ; "t" timeparam = npttimedef / smptetimedef / clocktimedef npttimedef = [ deftimeformat ":"] ( npttime [ "," npttime ] ) / ( "," npttime ) npttime = npt-sec / npt-hhmmss smptetimedef = smpteformat ":"( frametime [ "," frametime ] ) / ( "," frametime ) frametime = 1*DIGIT ":" 2DIGIT ":" 2DIGIT [ ":" 2DIGIT [ "." 
2DIGIT ] ] clocktimedef = clockformat ":"( clocktime [ "," clocktime ] ) / ( "," clocktime ) clocktime = (datetime / walltime / date) datetime = date-time ; inclusion of RFC 3339 spacesegment = xywhprefix "=" xywhparam xywhprefix = %x78.79.77.68 ; "xywh" xywhparam = [ xywhunit ":" ] 1*DIGIT "," 1*DIGIT "," 1*DIGIT "," 1*DIGIT xywhunit = %x70.69.78.65.6C ; "pixel" / %x70.65.72.63.65.6E.74 ; "percent" tracksegment = trackprefix "=" trackparam trackprefix = %x74.72.61.63.6B ; "track" trackparam = unistringnamesegment = nameprefix "=" nameparam nameprefix = %x69.64 ; "id" nameparam = unistringidsegment = idprefix "=" idparam idprefix = %x69.64 ; "id" idparam = unistring
; defined in RFC 2616 CHAR = [any US-ASCII character (octets 0 - 127)] token = 1*[any CHAR except CTLs or separators]` first-byte-pos = 1*DIGIT last-byte-pos = 1*DIGIT bytes-unit = "bytes" range-unit = bytes-unit | other-range-unit byte-range-resp-spec = (first-byte-pos "-" last-byte-pos) Range = "Range" ":" ranges-specifier Accept-Ranges = "Accept-Ranges" ":" acceptable-ranges ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; HTTP Request Headers ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ;; ranges-specifier = byte-ranges-specifier | fragment-specifier ; ; note that ranges-specifier is extended from RFC 2616 ; to cover alternate fragment range specifiers ; fragment-specifier = "include-setup" | fragment-range *( "," fragment-range ) [ ";" "include-setup" ]fragment-range = time-ranges-specifier | track-ranges-specifier | name-ranges-specifierfragment-range = time-ranges-specifier | id-ranges-specifier ; ; note that this doesn't capture the restriction to one fragment dimension occurring ; maximally once only in the fragment-specifier definition. 
; time-ranges-specifier = timeprefix ":" time-ranges-options time-ranges-options = npttimeoption / smptetimeoption / clocktimeoption npttimeoption = deftimeformat "=" npt-sec "-" [ npt-sec ] smptetimeoption = smpteformat "=" frametime "-" [ frametime ] clocktimeoption = clockformat "=" datetime "-" [ datetime ]track-ranges-specifier = trackprefix "=" trackparam *( ";" trackparam ) name-ranges-specifier = nameprefix "=" nameparamid-ranges-specifier = idprefix "=" idparam ;; Accept-Range-Redirect = "Accept-Range-Redirect" ":" bytes-unit ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; HTTP Response Headers ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; Content-Range-Mapping = "Content-Range-Mapping" ":" '{' ( content-range-mapping-spec [ ";" def-include-setup ] ) / def-include-setup '}' '=' '{' byte-content-range-mapping-spec '}' def-include-setup = %x69.6E.63.6C.75.64.65.2D.73.65.74.75.70 ; "include-setup" byte-range-mapping-spec = bytes-unit SP byte-range-resp-spec *( "," byte-range-resp-spec ) "/" ( instance-length / "*" )content-range-mapping-spec = time-mapping-spec | track-mapping-spec | name-mapping-speccontent-range-mapping-spec = time-mapping-spec | id-mapping-spec time-mapping-spec = timeprefix ":" time-mapping-options time-mapping-options = npt-mapping-option / smpte-mapping-option / clock-mapping-option npt-mapping-option = deftimeformat SP npt-sec "-" npt-sec "/" [ npt-sec ] "-" [ npt-sec ] smpte-mapping-option = smpteformat SP frametime "-" frametime "/" [ frametime ] "-" [ frametime ] clock-mapping-option = clockformat SP datetime "-" datetime "/" [ datetime ] "-" [ datetime ]track-mapping-spec = trackprefix SP trackparam *( ";" trackparam ) name-mapping-spec = nameprefix SP nameparamid-mapping-spec = idprefix SP idparam ;; acceptable-ranges = 1#range-unit *( "," 1#range-unit )| "none" ; ; note this does not represent the restriction that range-units can only appear once at most; ; this has also been adapted from RFC 2616 ; to allow multiple range units. 
;other-range-unit = token | timeprefix | trackprefix | nameprefixother-range-unit = token | timeprefix | idprefix ;; Range-Redirect = "Range-Redirect" ":" byte-range-resp-spec *( "," byte-range-resp-spec )
This appendix explains how the media fragment specification is mapped to an RTSP protocol activity. We assume here that you have a general understanding of the RTSP protocol mechanism as defined in RFC 2326 . The general sequence of messages sent between an RTSP UA and server can be summarized as follows:
Note that the RTSP protocol is intentionally similar in syntax and operation to HTTP.
We illustrated for each of the four media fragment dimensions how they can be mapped onto RTSP commands. The following examples are used to illustrated each of the dimensions: (1) temporal: #t=10,20 (2) tracks: #track=audio&track=video (3) spatial: #xywh=160,120,320,24 (4) id: #id=Airline%20Edit
In RTSP, temporal fragment URIs are provided through the PLAY method. A URI such as
rtsp://example.com/media#t=10,20
will
be
executed
as
a
series
of
the
following
methods
(all
shortened
for
readability
-
full
examples
can
be
found
in
).
readability).
The
actual
temporal
selection
is
provided
in
the
PLAY
method:
C->S: PLAY rtsp://example.com/media
C->S: PLAY rtsp://example.com/media Range: npt=10-20
The
server
tells
the
UA
which
temporal
range
is
returned:
S->C: RTSP/1.0 200 OK
S->C: RTSP/1.0 200 OK Range: npt=9.5-20.1
We can explain this mapping for all of the media fragment defined time schemes. Also, several temporal media fragment URI requests can be sent as pipelined commands without having to re-send the DESCRIBE and SETUP commands.
In RTSP, track fragment URIs are provided through the SETUP method. A URI such as
rtsp://example.com/media#track=audio&track=video
will be executed as a series of the following methods (all shortened for readability).
The
discovery
of
available
tracks
i
is
provided
through
the
SDP
reply
to
DESCRIBE,
but
it
could
be
done
through
alternative
methods,
too.
Several
consecutive
track
media
fragment
URI
requests
can
only
be
sent
with
new
SETUP
commands
and
cannot
be
pipelined.
This document is the work of the W3C Media Fragments Working Group . Members of the Working Group are (at the time of writing, and in alphabetical order): Eric Carlson (Apple, Inc.), Chris Double (Mozilla Foundation), Michael Hausenblas (DERI Galway at the National University of Ireland, Galway, Ireland), Philip Jägenstedt (Opera Software), Jack Jansen (CWI), Yves Lafon (W3C), Erik Mannens (IBBT), Thierry Michel (W3C/ERCIM), Guillaume (Jean-Louis) Olivrin (Meraka Institute), Soohong Daniel Park (Samsung Electronics Co., Ltd.), Conrad Parker (W3C Invited Experts), Silvia Pfeiffer (W3C Invited Experts), Nobuhisa Shiraishi (NEC Corporation), David Singer (Apple, Inc.), Thomas Steiner (Google, Inc.), Raphaël Troncy (EURECOM), Davy Van Deursen (IBBT),
The people who have contributed t