
Use Cases & Requirements Draft

From Media Fragments Working Group Wiki

The need for media fragment addressing in URIs originates from multiple sources.

There are applications that can be enabled or enriched by the availability of media fragment URIs.

There are also requirements for media fragment URIs for enabling other Web technologies to satisfy their use cases.

Thus, this section describes application use cases and technology requirements in separate subsections.

Further, we have added a subsection which describes side conditions that we are considering as relevant during the development of the specification.

Backward compatibility: if the server and/or User Agent does not support fragments, the full resource will be downloaded (following the "ignore what you don't know" principle).

Functional requirements (Application Use Cases)

[... we should still discuss if some of these use cases are out of scope ...]

Linking to & Display of Media Fragments

A user is often interested in consuming only a fragment of a media resource rather than the complete resource. A media fragment URI as per RFC 3986 (http://www.ietf.org/rfc/rfc3986.txt) allows this part of the resource to be addressed directly and thus enables the User Agent to receive just the relevant fragment.
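As an illustration, a temporal fragment can be attached to a media URI as in the minimal Python sketch below. The `#t=start,end` syntax is illustrative here (it follows the direction this group's work later took in Media Fragments URI 1.0), and `example.com` is a placeholder:

```python
from urllib.parse import urlsplit, urlunsplit

def with_temporal_fragment(uri, start, end):
    """Append a temporal fragment (in seconds) to a media URI.

    The '#t=start,end' form is illustrative, following the syntax
    later standardized in Media Fragments URI 1.0."""
    scheme, netloc, path, query, _ = urlsplit(uri)
    return urlunsplit((scheme, netloc, path, query, f"t={start},{end}"))

# Placeholder URI; addresses seconds 120-140 of the video.
print(with_temporal_fragment("http://example.com/video.ogv", 120, 140))
# -> http://example.com/video.ogv#t=120,140
```

Because the fragment part of a URI is interpreted client-side, such a link degrades gracefully: a User Agent that ignores the fragment still resolves to the full resource.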

Scenario 1: Search Engine
Tim does a keyword search on a video search service. The keyword is found in several videos in the search service's collection and relates to clips inside the videos that appear at certain time offsets. Tim would like the search result to point him to just these media fragments so he can watch the relevant clips rather than having to watch the full videos and manually search for them.

Scenario 2: Region of an Image
Tim has discovered on an image hosting service a photo of his third school year class. He is keen to put a link to his own face inside this photo onto his private Web site where he is collecting old photos of himself. He does not want the full photo to be displayed and he does not want to have to download and crop the original image since he wants to reference the original resource.

Scenario 3: Portion of Music
Tim is a Last.fm user. He wants his friend Sue to listen to a cool song, Gypsy Davy. However, Tim thinks that not the entire song is worth it. He wants Sue to listen to the last 10 seconds only and sends her an email with a link to just that subpart of the media resource.

Scenario 4: Moving Windows of Interest
Tim is now creating an analysis of the movements of muscles of horses during trotting and finds a few relevant videos online. His analysis is collected on a Web page and he'd like to reference the relevant video sections, cropped both in time and space to focus his viewers' attention on specific areas of interest that he'd like to point out.

Browsing and Bookmarking Media Fragments

Media resources - audio, video and even images - are often very large resources that users want to explore progressively. Progressive exploration of text is well known in the Web space under the term "pagination". Pagination in the text space is realized by creating a series of Web pages, each with its own URI, and enabling paging through them by scripts on a server. For large media resources, such pagination can be provided by media fragment URIs, which enable direct access to media fragments.

Scenario 1: Segmenting a Video
Michael has a website that collects recordings of the sittings of his government's parliament. These recordings tend to be very long - generally on the order of 7 hours in duration. Instead of splitting up the recordings into short files through manual inspection of topic changes or some other segmentation approach, he prefers to provide many handles into a single video resource. When publishing the files, however, he provides pagination on the videos such that people can watch them 20 minutes at a time.
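Michael's pagination could be generated mechanically rather than by hand. The sketch below assumes the illustrative `#t=start,end` temporal fragment syntax and a placeholder URI; it is not a prescribed implementation:

```python
def paginate(uri, total_seconds, page_seconds=20 * 60):
    """Yield one temporal-fragment URI per 'page' of a long recording.

    Uses the illustrative '#t=start,end' syntax; the URI passed in is
    a placeholder, not a real parliament archive."""
    start = 0
    while start < total_seconds:
        end = min(start + page_seconds, total_seconds)
        yield f"{uri}#t={start},{end}"
        start = end

# A 7-hour sitting at 20 minutes per page yields 21 fragment URIs,
# all pointing into the same single video resource.
pages = list(paginate("http://example.com/sitting.ogv", 7 * 3600))
print(pages[0])   # -> http://example.com/sitting.ogv#t=0,1200
print(len(pages)) # -> 21
```

Each "page" is just another view onto the one resource, so no manual segmentation or re-encoding of the recording is needed.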

Scenario 2: Temporal Audio Pagination
Lena would like to browse the descriptive audio track of a video the way she browses DAISY audio books: by following the logical structure of the media. A descriptive audio track is an extra spoken track that describes the scenes happening in a video. Audio descriptions and captions generally come in blocks, either timed or separated by silences. Chapter by chapter and then section by section, she eventually jumps to a specific paragraph and down to the sentence level using the "tab" control, as she would normally do in audio books. When a descriptive audio track is not present, Lena can similarly browse through captions and descriptive text tracks, which are rendered either through her braille reading device or through her text-to-speech engine.

Scenario 4: Spatial Video Pagination
Elaine has recorded a video mosaic of all her TV channels on an international election day. She wants to keep the original file of what all TV broadcasts were synchronously showing, but now she wants to make a long presentation in which each channel is shown one at a time, one after another. She creates a playlist of media fragment URIs that each select a specific channel in the mosaic, so as to play the channels in sequence.

Scenario 5: Audio Passage Bookmark
Sue likes the song segment that Tim has sent her and decides to add this specific segment to her bookmarks.

When regarded as monolithic blocks, media resources (in particular audio and video) are very inaccessible. For example, it is difficult to find out what they are about, where the highlights are, or what the logical structure of the resource is. Lack of these features, in particular lack of captions and audio annotations, further makes the resources inaccessible to people with disabilities. Introducing the ability to directly access highlights, fragments, or the logical structure of a media resource will be a big contribution towards making media resources more accessible.

Scenario 6: Captions Help Browsing Video
Silvia has a deaf friend, Elaine, who would like to watch the holiday videos that Silvia is publishing on her website. Silvia has created subtitle tracks for her videos and also a CMML annotation with unique identifiers on the clips that she describes. The clips were formed based on the locations that Silvia visited. In this way, Elaine is able to watch the videos by going through the clips and reading the subtitles for those clips that she is interested in. She watches the sections on Korea, Australia, and France, but skips over the ones on Great Britain and Holland.

Recompositing Media Fragments

As we enable direct linking to media fragments in a URI, we can also enable simple recompositing of such media fragments. Note that because the media fragments in a composition may originate from different codecs and very different files, we cannot realistically expect smooth playback between the fragments.

Scenario 1: Reframing a photo in a slideshow
Erik has a collection of photos and wants to create a slide show of some of them, highlighting specific areas in each image. He uses XSPF to define the slide show (playlist), using spatial fragment URIs to address the photo fragments.

Scenario 2: Mosaic
Jack wants to create a mosaic for his website with all the image fragments that Erik defined collated together. He uses the SMIL 3.0 Tiny Profile and the spatial fragment URIs to lay out the image fragments and stitch them together as a new "image".

Scenario 3: Video Mashup
Jack has a collection of videos and wants to create a mashup from segments out of these videos without having to manually edit them together. He uses SMIL 3.0 Tiny Profile and temporal fragment URIs to address the clips out of the videos and sequence them together.

Given an ability to link to media fragments through URIs, people will want to determine whether they receive the full resource or just the data that relates to the media fragment. This is particularly the case where the resource is large, where the bandwidth is scarce or expensive, and/or where people have limited time/patience to wait until the full resource is loaded.

Scenario 4: Selective previews
Yves is a busy person. He doesn't have time to attend all meetings that he is supposed to attend. He also uses his mobile device for accessing Web resources while traveling, to make the most of his time. Some of the recent meetings that Yves was supposed to attend have been recorded and published on the Web. A colleague points out to Yves in an email which sections of the meetings he should watch. While on his next trip, Yves goes back to this email and watches the highlighted sections by simply clicking on them. The media server of his company dynamically composes a valid media resource from the URIs that Yves is sending it such that Yves' video player can play just the right fragments.

Scenario 5: Music Samples
Erik also has a music collection. He creates an "audio podcast" in the form of an RSS feed with URIs that link to samples from his music files. His friends can play back the samples in their Web-attached music players.

Scenario 6: Highlighting regions (Out-Of-Scope)
Tim has discovered yet another alumni photo of his third school year class. This time he doesn't want to crop out his face; he wants to keep the photo in the context of his classmates. He wants his region of the photo highlighted and the rest rendered in greyscale.

Annotating Media Fragments

Media resources typically don't just consist of the binary data. There is often a lot of textual information available that relates to the media resource. Enabling the addressing of media fragments ultimately creates a means to attach annotations to media fragments.

Scenario 1: Spatial Tagging of Images
Raphael systematically annotates highlighted regions in his photos that depict his friends, his family, or monuments he finds impressive. This knowledge is represented by RDF descriptions that use spatial fragment URIs to relate to the image fragments in his annotated collection. This makes it possible later to search for and retrieve all the media fragment URIs that relate to one particular friend or monument.

Scenario 2: Temporal Tagging of Audio and Video
Raphael also has a collection of audio and video files of all the presentations he ever made. His RDF description collection extends to describing all the segments where he gave a demo of a software system with structured details on the demo.

NB: Time-aligned text such as captions, subtitles in multiple languages, and audio descriptions for audio and video does not have to be created as separate documents that link to each segment through a temporal URI. Such text can be made part of the media resource by the media author, or delivered as a separate but synchronised data stream to the media player. In either case, it should be made accessible in a Web page through a JavaScript API or through access to a DOM nested browsing context of the video/audio/image element. This needs to be addressed in the HTML5 Working Group.

Annotating media resources on the level of a complete resource is in certain circumstances not enough. Support for annotating multimedia on the level of fragments is often desired. The definition of "anchors" (or id tags) for fragments of media resources will allow us to identify fragments by name. It allows the creation of an author-defined segmentation of the resource - an author-provided structure.

Scenario 3: Named Anchors
Raphael would like to attach an RDF-based annotation to a video fragment that is specified through an "anchor". Identifying the media fragment by name instead of through a temporal video fragment URI allows him to create a more memorable URI, without having to remember time offsets.

Scenario 4: Spatial and Temporal Tagging
Guillaume uses video fragment URIs in an MPEG-7 sign language profile to describe a moving point of interest: he wants the focus region to be the dominant hand in a Sign Language video. Not only does the series of video fragment URIs give the coordinates and timing of the trajectory followed by the hand, it can also describe the areas of changing handshapes.

Scenario 5: Search Engine
Guillaume wants to retrieve the images of each bike present at a recent cycling event. Group photos and general shots of the event have been published online and thanks to a query in a search engine, Guillaume can now retrieve multiple individual shots of each bike in the collection.

Adapting Media Resources

When addressing a media resource, a user often has the desire to retrieve not the full resource but only a subpart of interest. This may be a temporally or spatially consecutive subpart, but could also be, e.g., a smaller-bandwidth version of the same resource, a lower-framerate video, an image with less colour depth, or an audio file with a lower sampling rate. Media adaptation is the general term used for such server-side created versions of media resources.

Scenario 1: Changing Video quality (Out-Of-Scope)
Davy is looking for videos about allergies and would like to get previews at a lower frame rate to decide whether to download and save them in his collection. He would like to be able to specify in the URI a means of telling the media server the adaptation that he is after. For video he would like to adapt width, height, frame rate, colour depth, and temporal subpart selection. Alternatively, he may want to get just a thumbnail of the video.

This scenario is out of scope for this Working Group because it requires changes to be made to the actual encoded data to retrieve a "fragment". URI-based media fragments should basically be achievable through cropping out one or more byte sections.

Scenario 2: Selecting Regions in Images
Davy is interested in having precise coordinates in his browser address bar to see and pan over large image maps. Through the same URI scheme he can now generically address and locate different image subparts on his client side for all image types.

Scenario 3: Selecting an Image from a multi-part document
Davy is now interested in multi-resolution, multi-page medical images. He wants to select the detailed image of the toe X-rays which appears on page 7 of the TIFF document.

Scenario 4: Retrieving an Image embedded thumbnail
Davy is also interested in having this kind of preview functionality for pictures, in particular those large 10-megapixel JPEG files that have embedded thumbnails in them. He can now provide a fast preview by selecting the embedded thumbnail in the original image, without even having to resize the image or create a new separate file!

Scenario 5: Switching of Video Transmission
Davy has a blind friend called Katrina. Katrina would also like to watch the videos that Davy has found, and is lucky that the videos have additional alternative audio tracks, which describe to blind users what is happening in the videos. Her Internet connection is of lower bandwidth and she would like to switch off the video track, but receive the two audio tracks (original audio plus audio annotations). She would like to do this track selection through simple changes to the URI.

Scenario 6: Toggle All Audio OFF
Sebo is Deaf and enjoys watching videos on the Web. Her friend sent her a link to a new music video URI, but she doesn't want to waste time and bandwidth receiving any sound. So when she enters the URI in her browser's address bar, she adds an extra parameter that selects the video track only, ignoring all audio tracks without having to name them.

Scenario 7: Toggle specific Audio tracks
Davy's girlfriend is a fan of Karaoke. She would love to be able to play back videos from the Web that have a karaoke text, and two audio tracks, one each for the music and for the singer. Then she could practice the songs by playing back the complete video with all tracks, but use the video in Karaoke parties with friends where she turns off the singer's track through a simple selection of tracks in the User Agent.

Non-functional requirements

Model of a Video Resource

Single Media Resource Definition

We have one consistent view of what a media resource is and are only concerned with single-timeline media.

Existing Standards

We want to work within the boundaries of existing standards where possible, in particular within the URI specification.

Unique Resource

We want to specify media fragments as usable parts of a resource. One media fragment therefore

  • is not seen as a separate resource BUT it is uniquely addressable
  • is not a "secondary resource" but a selective view of an entire resource.

Valid Resource

We need to make sure that delivered media fragments are valid media resources by themselves and can thus be played back by existing media players / image viewers.

Parent Resource

We want to make it possible to access the entire resource as the "context" of a fragment via a simple change of the URI. This URI as a selective view of the resource provides a mechanism to focus on a fragment whilst hinting at the wider media context in which the fragment is included.

Single Fragment

A media fragment URI should create only a single "mask" onto a media resource and not a collection of potentially overlapping fragments.

Relevant Protocols

The main protocols we are concerned with are HTTP and RTSP, since they are open protocols for media delivery.

No Recompression

Media fragments need to be delivered as byte-range subparts of the media resource so as to make the fragments actual subresources of the media resource; this implies that we should avoid decoding and recompressing a media resource to create a fragment.
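A server honouring this requirement has to map a requested time range onto byte ranges of the encoded file without re-encoding anything. The sketch below illustrates the idea with a hypothetical seek index (`SEEK_INDEX` and its values are invented for illustration; a real server would derive such an index from the container format, typically at keyframe boundaries):

```python
import bisect

# Hypothetical seek index: (time_in_seconds, byte_offset) pairs that a
# server could derive from the container format without re-encoding.
SEEK_INDEX = [(0, 0), (10, 81920), (20, 172032), (30, 262144), (40, 358400)]

def time_to_byte_range(start, end):
    """Map a requested time range to a (first_byte, last_byte) range,
    widening to the nearest indexed points so that the extracted bytes
    remain decodable.

    This is a sketch; real servers index keyframes per codec/container."""
    times = [t for t, _ in SEEK_INDEX]
    lo = bisect.bisect_right(times, start) - 1  # last index point <= start
    hi = bisect.bisect_left(times, end)         # first index point >= end
    hi = min(hi, len(SEEK_INDEX) - 1)
    return SEEK_INDEX[max(lo, 0)][1], SEEK_INDEX[hi][1] - 1

# A request for seconds 12-28 widens to the indexed points at 10s and 30s.
print(time_to_byte_range(12, 28))  # -> (81920, 262143)
```

The resulting byte range could then be served through a standard HTTP `Range` response, so caches and proxies along the delivery chain need no media-specific knowledge.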

Minimize Impact on Existing Infrastructure

We want to minimize the necessary changes to all software in the media delivery chain: User Agents, Proxies, Media Servers.

Focus for Changes

We want to focus the necessary changes as much as possible on the media servers, because they have to implement fragmentation support for the media formats, which is the most fundamental requirement for providing media fragment addressing.

Browser Impact

Changes to the user agent should be a one-off and not need adaptation per media encapsulation/encoding format.

Fallback Action

If a User Agent connects with a media fragment URI to a Media Server that does not support media fragments, the Media Server should reply with the full resource. The User Agent will then have to take action to either cancel the connection (e.g. if the media resource is too long) or perform the fragment offset locally.

A User Agent that does not understand media fragment URIs will simply hand the URI (potentially stripped of the fragment part) on to the server and receive the full resource in lieu of the fragment. This may lead to unexpected behaviour with media fragment URIs in non-conformant User Agents, e.g. where a mash-up of media fragments is requested but a sequence of the full files is played. This is acceptable during a transition phase.
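The conformant User Agent side of this fallback can be sketched as follows: the fragment part is never sent to the server (per RFC 3986 fragment semantics), and if the full resource comes back, the User Agent seeks locally. The `t=start,end` fragment syntax and the example URI are illustrative:

```python
from urllib.parse import urldefrag, parse_qs

def plan_request(fragment_uri):
    """Sketch of conformant User Agent behaviour for a fragment URI.

    Returns (request_uri, local_seek): the URI actually sent to the
    server (fragment stripped, per RFC 3986), plus the (start, end)
    seek to perform locally if the server returns the full resource.
    The 't=start,end' fragment syntax is illustrative."""
    request_uri, fragment = urldefrag(fragment_uri)
    params = parse_qs(fragment)
    t = params.get("t", [None])[0]
    local_seek = None
    if t:
        start, _, end = t.partition(",")
        local_seek = (float(start), float(end) if end else None)
    return request_uri, local_seek

print(plan_request("http://example.com/video.ogv#t=10,20"))
# -> ('http://example.com/video.ogv', (10.0, 20.0))
```

A non-conformant User Agent effectively does the same stripping but discards the seek information, which is why it degrades to playing the full resource.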