W3C

Image Annotation on the Semantic Web

W3C Working Draft 22 March 2006

This version:
http://www.w3.org/TR/2006/WD-swbp-image-annotation-20060322/
Latest version:
http://www.w3.org/TR/swbp-image-annotation/
Editors:
Jacco van Ossenbruggen, Center for Mathematics and Computer Science (CWI Amsterdam)
Raphaël Troncy, Center for Mathematics and Computer Science (CWI Amsterdam)
Giorgos Stamou, IVML, National Technical University of Athens
Jeff Z. Pan, University of Aberdeen (Formerly University of Manchester)
Contributors:
Christian Halaschek-Wiener, University of Maryland
Nikolaos Simou, IVML, National Technical University of Athens
Vassilis Tzouvaras, IVML, National Technical University of Athens
 
Also see Acknowledgements.

Abstract

Many applications that involve multimedia content make use of some form of metadata that describe this content. The goals of this document are (i) to explain what the advantages are of using Semantic Web languages and technologies for the creation, storage, manipulation, interchange and processing of image metadata, and (ii) to provide guidelines for doing so. The document gives a number of use cases that illustrate ways to exploit Semantic Web technologies for image annotation, an overview of RDF and OWL vocabularies developed for this task and an overview of relevant tools.

Status of this document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This document is a First Public Working Draft produced by the Multimedia Annotation in the Semantic Web Task Force of the W3C Semantic Web Best Practices & Deployment Working Group. This group is part of the W3C Semantic Web Activity.

Discussion of this document is invited on the public mailing list public-swbp-wg@w3.org (public archives). Please start the subject line of the message with the text "comments: [MM]".

After reviewing comments and further feedback, the Working Group may publish new versions of this document or may advance the document to Working Group Note.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. The group does not expect this document to become a W3C Recommendation. This document is informative only. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.


Document Roadmap

After reading this document, readers may turn to separate documents discussing individual image annotation vocabularies, tools, and other relevant resources. Note: many current approaches to image annotation are not based on Semantic Web languages. Interoperability between these technologies and RDF and OWL-based approaches is not the topic of this document.

Target Audience

This document is targeted at everybody with an interest in image annotation, ranging from non-professional end-users annotating their personal digital photos to professionals working with digital pictures in image and video banks, audiovisual archives, museums, libraries, media production and the broadcast industry, etc.

Objectives


1. Introduction

The need for annotating digital image data is recognized in a wide variety of different applications, covering both professional and personal usage of image data. At the time of writing, most work done in this area does not use semantic-based technologies, partly because of the differences between the multimedia and the web communities and their underlying standardization organizations. This document explains the advantages of using Semantic Web languages and technologies for image annotations and provides guidelines for doing so. It is organized around a number of representative use cases, and a description of Semantic Web vocabularies and tools that could be used to help accomplish the tasks mentioned in the use cases. The remainder of this introductory section first gives an overview of image annotation in general, followed by a short description of the key Semantic Web concepts that are relevant for image annotation.

1.1 Image Annotation Issues

Annotating images on a small scale for personal usage can be relatively simple. The reader should be warned, however, that large scale, industrial strength image annotation is notoriously complex. Trade-offs along several dimensions make professional multimedia annotation difficult:

  1. Production versus post-production annotation

    A general rule is that it is much easier to annotate earlier rather than later. Typically, most of the information that is needed for making the annotations is available at production time. Examples include time and date, lens settings and other EXIF metadata added to JPEG images by most digital cameras at the time a picture is taken, experimental data in scientific and medical images, information from scripts, story boards and edit decision lists in the creative industry, etc. Indeed, perhaps the single most important best practice in image annotation is that, in general, adding metadata during the production process is much cheaper and yields higher quality annotations than adding metadata at a later stage (such as by automatic analysis of the digital artifact or by manual post-production annotation).

  2. Generic vs task-specific annotation

    Annotating images without having a specific goal or task in mind is often not cost effective: after the target application has been developed, it turns out that images have been annotated using the wrong type of information, or on the wrong abstraction level, etc. Redoing the annotations is then an unavoidable, but costly solution. On the other hand, annotating with only the target application in mind may also not be cost effective. The annotations may work well with that one application, but if the same metadata is to be reused in the context of other applications, it may turn out to be too specific, and unsuited for reuse in a different context. In most situations the range of applications in which the metadata will be used in the future is unknown at the time of annotation. When lacking a crystal ball, the best the annotator can do in practice is use an approach that is sufficiently specific for the application under development, while avoiding unnecessary application-specific assumptions as much as possible.

  3. Manual versus automatic annotation and the "Semantic Gap"

    In general, manual annotation can provide image descriptions at the right level of abstraction. It is, however, time consuming and thus expensive. In addition, it proves to be highly subjective: different human annotators tend to "see" different things in the same image. On the other hand, annotation based on automatic feature extraction is relatively fast and cheap, and can be more systematic. It tends to result, however, in image descriptions that are too low level for many applications. The difference between the low level feature descriptions provided by image analysis tools and the high level content descriptions required by the applications is often referred to, in the literature, as the Semantic Gap. In the remainder, we will discuss use cases, vocabularies and tools for both manual and automatic image annotation.

  4. Different types of metadata

    While various classifications of metadata have been described in the literature, every annotator should at least be aware of the difference between annotations describing properties of the image itself, and those describing the subject matter of the image, that is, the properties of the objects, persons or concepts depicted by the image. In the first category, typical annotations provide information about title, creator, resolution, image format, image size, copyright, year of publication, etc. Many applications use a common, predefined and relatively small vocabulary defining such properties. Examples include the Dublin Core and VRA Core vocabularies. The second category describes what is depicted by the image, which can vary widely with the type of image at hand. In many applications, it is also useful to distinguish between objective observations ('the person in the white shirt moves his arm from left to right') and subjective interpretations ('the person seems to perform a martial arts exercise'). As a result, one sees a large variation in vocabularies used for this purpose. Typical examples vary from domain-specific vocabularies (for example, with terms that are very specific to astronomy images, or sport images, etc.) to domain-independent ones (for example, a vocabulary with terms that are sufficiently generic to describe any news photo). In addition, vocabularies tend to differ in size, granularity, formality, etc. In the remainder, we discuss the above metadata categories. Note that for the first type it is not uncommon that a vocabulary only defines the properties and defers the definition of the values of those properties to another vocabulary. This is true, for example, for both Dublin Core and VRA Core. This means that, typically, in order to annotate a single image one needs terms from multiple vocabularies.

  5. Lack of Syntactic and Semantic Interoperability

    Many different file formats and tools for image annotations are currently in use. Reusing metadata developed for one set of tools in another tool is often hindered by a lack of interoperability. First, different tools use different file formats, so tool A may not be able to read in the metadata provided by tool B (syntax-level interoperability). Solving this problem is relatively easy when the inner structure of both file formats is known: a conversion tool can be developed. Second, tool A may assign a different meaning to the same annotation than tool B does (semantic interoperability). Solving this problem is much harder, and a first step towards a solution is to require that the vocabulary used be explicitly defined for both tools.

1.2 Semantic Web Basics

This section briefly describes the role of Semantic Web technologies in image annotation. The aim of the Semantic Web is to augment the existing Web so that resources (Web pages, images, etc.) are more easily interpreted by programs (or "intelligent agents"). The idea is to associate Web resources with semantic categories that describe the contents and/or functionalities of those resources.

Annotations alone do not establish the semantics of what is being marked up. One way commonly followed to introduce semantics into annotations is to reach an agreement that carefully defines a set of concepts and the terms to be used for them.

This agreement can be merely "informal", that is, rely on natural language for defining the meaning of a set of information properties. For example, the Dublin Core Metadata Element Set provides 15 "core" information properties, such as "Title", "Creator" and "Date", with descriptive semantic definitions (in natural language). One can use these information properties in, e.g., RDF or the META tags of HTML.

For example, the following RDF/XML code represents the statement "there is an image Ganesh.jpg created by Jeff Z. Pan and whose title is 'An image about the Elephant Ganesh'". The first four lines define the XML namespaces used in this description. A good starting point for more information on RDF is the RDF Primer.

<rdf:RDF xml:base="http://example.org/"
         xmlns="http://example.org/"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">

  <rdf:Description rdf:about="Ganesh.jpg">
    <dc:title>An image about the Elephant Ganesh</dc:title>
    <dc:creator>Jeff Z. Pan</dc:creator>
  </rdf:Description>
</rdf:RDF>

A complementary approach is to also use ontologies to formally specify the meaning of Web resources and thus obtain a "formal" agreement. Ontology is a term borrowed from philosophy, where it refers to the science of describing the kinds of entities in the world and how they are related. In computer science, an ontology is, in general, a representation of a shared conceptualization of a specific domain. It provides a shared and common vocabulary, including important concepts, properties, their definitions and constraints, sometimes referred to as background assumptions regarding the intended meaning of the vocabulary, which can be communicated between people and heterogeneous, distributed application systems. The (formal) ontology approach, though more difficult to develop, is more powerful than the informal-only agreement approach, because users can thoroughly define the vocabulary using axioms expressed in a logic language, and machines can use this formal meaning for reasoning, completing and validating the annotations. Ideally, the concepts and properties of an ontology should have both formal definitions and natural language descriptions, so that they can be used unambiguously by humans and software applications.

There exists a standard Semantic Web ontology language, OWL, which is a W3C Recommendation. We provide below an example of this language in its RDF/XML syntax. Given that an Image class and a hasSize property exist in an ontology, one can use the following OWL statements to define a new OWL class called BigImage as the set of all members of the class Image whose size is equal to Big. For more information, the OWL Guide provides a good overview of the OWL language.

<rdf:RDF xml:base="http://example.org/"
         xmlns="http://example.org/"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:owl="http://www.w3.org/2002/07/owl#"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema#">

  <owl:Class rdf:about="BigImage">
    <owl:intersectionOf rdf:parseType="Collection">
      <owl:Class rdf:about="#Image"/>
      <owl:Restriction>
        <owl:onProperty rdf:resource="#hasSize"/>
        <owl:cardinality rdf:datatype="http://www.w3.org/2001/XMLSchema#nonNegativeInteger">1</owl:cardinality>
        <owl:allValuesFrom rdf:resource="#Big"/>
      </owl:Restriction>
    </owl:intersectionOf>
  </owl:Class>
</rdf:RDF>

The next section presents some representative use cases that highlight some requirements for image annotation tools, vocabularies, and practices.

2. Use Cases

Image annotation is relevant in a wide range of domains, organizations and applications; it cannot be covered in a single document such as this. This document, instead, describes a number of use cases that are intended as a representative set of examples. These use cases will be used later to discuss the vocabularies and tools that are relevant for image annotation on the Semantic Web. Example scenarios are given in Section 5.

The use cases are organized in four categories, which reflect either the topics depicted by the images or their usage community. These criteria often determine the tools and vocabularies used in the annotation process.

2.1 World Images

This section provides two use cases with images that could potentially depict any subject: management of a personal photo collection and that of a news press photo bank. The other use cases will focus on images from a specific domain.

Use case: Management of Personal Digital Photo Collections

Many personal users have thousands of digital photos from vacations, parties, travel, conferences, everyday life, etc. Typically, the photos are stored on personal computer hard drives in a simple directory structure without any metadata. The user generally wants to easily access this content, view it, use it on his homepage, create presentations, make part of it accessible to other people or even sell part of it to image banks. Too often, however, the only way for this content to be accessed is by browsing the directories, whose names usually provide the date and a one- or two-word description of the original event captured by the photos. Obviously, this access becomes more and more difficult as the number of photos increases, and the content quickly falls into disuse. More sophisticated users leverage simple photo organizing tools that allow them to provide keyword metadata, possibly along with a simple taxonomy of categories. This is a first step towards a semantically-enabled solution. Section 5.1 provides an example scenario for this use case using Semantic Web technologies.

Use case: Press Photo Bank

2.2 Culture Images

This section contains a single use case from the cultural heritage domain. This domain is characterized by a long tradition in describing images, with many standardized methods and vocabularies.

Use case: Cultural Heritage

Let us imagine that a fine arts museum has asked a specialized company to produce high resolution digital scans of the most important art works in its collections. The museum's quality assurance requires the ability to track when, where and by whom every scan was made, with what equipment, etc. The museum's internal IT department, which maintains the underlying image database, needs the size, resolution and format of every resulting image. It also needs to know the repository ID of the original work of art. The company developing the museum's website additionally requires copyright information (which varies for every scan, depending on the age of the original work of art and the collection it originates from). It also wants to give the users of the website access to the collection, not only based on the titles of the paintings and the names of their painters, but also based on the topics depicted ('sun sets'), genre ('self portraits'), style ('post-impressionism'), period ('fin de siècle') and region ('west European'). Section 5.2 shows how all these requirements can be fulfilled using Semantic Web technologies.

2.3 Media

The use case developed in this section is mainly targeted at media professionals, and less at the general public. Typical requests are characterized by very detailed queries, not only about the content of images, but also about media-specific details such as camera angle, lens settings, etc.

Use case: Television Archive

Audiovisual archive centers manage very large multimedia databases. For instance, INA, the French National Audiovisual Institute, has been archiving TV documents for 50 years and radio documents for 65 years, and stores more than 1 million hours of broadcast programs. The image and sound archives kept at INA are either intended for professional use (journalists, film directors, producers, audiovisual and multimedia programmers and publishers, in France and worldwide) or communicated for research purposes (to a public of students, research workers, teachers and writers). In order to allow efficient access to the stored data, most parts of these video documents are described and indexed by their content. The global multimedia information system should then be detailed enough to support very complex and precise queries. For example, a journalist or film director client might ask for an excerpt of a previously broadcast program showing the first goal scored with his head by a given football player for his national team. The query could additionally contain more technical requirements, such as that the goal action should be available in both the front camera view and the reverse angle camera view. Finally, the client might or might not remember some general information about this football game, such as the date, the place and the final score. Section 5.3 gives a possible solution for this use case using Semantic Web technologies.

2.4 Scientific Images

This section presents two use cases from the scientific domain. Typically here, images are annotated using large and complex ontologies.

Use Case: Large-scale Image Collections at NASA

Many organizations maintain extremely large-scale image collections. The National Aeronautics and Space Administration (NASA), for example, has hundreds of thousands of images, stored in different formats, levels of availability and resolution, and with associated descriptive information at various levels of detail and formality. Such an organization also generates thousands of images on an ongoing basis that are collected and cataloged. Thus, a mechanism is needed to catalog all the different types of image content across various domains. Information about both the image itself (e.g., its creation date, dpi, source) and about the specific content of the image is required. Additionally, the associated metadata must be maintainable and extensible so that associated relationships between images and data can evolve cumulatively. Lastly, management functionality should provide mechanisms flexible enough to enforce restrictions based on content type, ownership, authorization, etc. Section 5.4 gives an example solution for this use case.

Use Case: Bio-Medical Images

3. Vocabularies for Image Annotation

Choosing which vocabularies to use for annotating images is a key decision in an annotation project. Typically, one needs more than a single vocabulary to cover the different relevant aspects of the images. A separate document named Vocabularies Overview discusses a number of individual vocabularies that are relevant for image annotation. The remainder of this section discusses more general issues.

Many of the relevant vocabularies have been developed prior to the Semantic Web, and Vocabularies Overview lists many translations of such vocabularies to RDF or OWL. Most notably, the key International Standard in this area, the Multimedia Content Description standard, widely known as MPEG-7, is defined using XML Schema. At the time of writing, there is no commonly accepted mapping from the XML Schema definitions in the standard to RDF or OWL. Several alternative mappings, however, have been developed so far and are discussed in the overview.

Another relevant vocabulary is the VRA Core. Where the Dublin Core (DC) specifies a small and commonly used vocabulary for on-line resources in general, VRA Core defines a similar set targeted especially at visual resources, specializing the DC elements. Dublin Core and VRA Core both refer to terms in their vocabularies as elements, and both use qualifiers to refine elements in a similar way. All the elements of VRA Core have either direct mappings to comparable fields in Dublin Core or are defined as specializations of one or more DC elements. Furthermore, both vocabularies are defined in a way that abstracts from implementation issues and underlying serialization languages. A key difference, however, is that for Dublin Core there exists a commonly accepted mapping to RDF, along with the associated schema. At the time of writing, this is not the case for VRA Core, and the overview discusses the pros and cons of the alternative mappings.
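As a minimal sketch of what such a specialization could look like in RDF Schema (the property URIs below follow the vracore3 schema used later in this document, but the exact mapping is an assumption, not a normative statement), a VRA element can be declared as a sub-property of its Dublin Core counterpart:

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">

  <!-- Sketch: the VRA creator element declared as a specialization of dc:creator -->
  <rdf:Property rdf:about="http://www.vraweb.org/vracore/vracore3#creator">
    <rdfs:subPropertyOf rdf:resource="http://purl.org/dc/elements/1.1/creator"/>
  </rdf:Property>
</rdf:RDF>

With such a mapping in place, an RDF-aware application that only understands Dublin Core can still interpret vra:creator statements as dc:creator statements.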

Many annotations on the Semantic Web are about an entire resource. For example, a <dc:title> property applies to the entire document. For images and other multimedia documents, one often needs to annotate a specific part of a resource (for example, a region in an image). Sharing the metadata dealing with the localization of a specific part of multimedia content is important, since it allows multiple annotations (potentially from multiple users) to refer to the same content. There are two basic options:

  1. Ideally, the target image format already specifies this specific part, using a name that is addressable in the URI fragment identifier (this can be done, for example, in SVG).
  2. Otherwise the region needs to be described in the metadata itself, as is done in MPEG-7 (a minimal sketch of both options follows below).
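The following fragment is a minimal sketch of both options; the file names, the fragment name and the ex: properties are purely illustrative, and an MPEG-7-based vocabulary would provide comparable region constructs for the second option:

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:ex="http://example.org/terms#">

  <!-- Option 1: the SVG image itself names the region, so a fragment URI can be annotated directly -->
  <rdf:Description rdf:about="http://example.org/drawings/map.svg#lighthouse">
    <dc:subject>lighthouse</dc:subject>
  </rdf:Description>

  <!-- Option 2: the region is described in the metadata itself, using illustrative ex: properties -->
  <rdf:Description rdf:about="http://example.org/annotations/coast-region1">
    <ex:regionOf rdf:resource="http://example.org/photos/coast.jpg"/>
    <ex:boundingBox>242,79 46x236</ex:boundingBox>
    <dc:subject>lighthouse</dc:subject>
  </rdf:Description>
</rdf:RDF>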

4. Available Tools for Semantic Image Annotation

Among the numerous tools used for image archiving and description, some may be used for semantic annotation. The aim of this section is to identify some key characteristics of semantic image annotation tools, so as to provide some guidelines for their proper use. Using these characteristics as criteria, users of these tools can choose the most appropriate one for a specific application.

Type of Content. A tool can annotate different types of content. Usually, the raw content is an image, whose format can be JPEG, PNG, TIFF, etc., but there are also tools that can annotate videos as well.

Type of Metadata. An annotation can be targeted at different uses. Following the categorization provided by The Making of America II project, the metadata can be descriptive (for description and identification of information), structural (for navigation and presentation), or administrative (for management and processing). Most of the tools can be used to provide descriptive metadata, and for some of them the user can also provide structural and administrative information.

Format of Metadata. An annotation can be expressed in different formats. The format is important since it should ensure interoperability with other (Semantic Web) applications. MPEG-7 is often used as the metadata format for exchanging automatic analysis results, whereas OWL and RDF are more appropriate in the Semantic Web world.

Annotation level. Some tools give the user the opportunity to annotate an image using vocabularies, while others allow free text annotation only. When ontologies are used (in RDF or OWL format), the annotation level is considered to be controlled, since the semantics is generally provided in a more formal way; when they are not, the annotation level is considered to be free.

Client-side Requirement. This characteristic refers to whether users can use a Web browser to access the service(s) or need to install a stand-alone application.

License Conditions. Some of the tools are open source while others are not. It is important for the user, and for potential researchers and developers in the area of multimedia annotation, to know this before choosing a particular tool.

Collaborative or individual. This characteristic refers to whether the tool can be used as an annotation framework for web-shared image databases or as an individual user's multimedia content annotation tool.

Granularity. Granularity specifies whether annotation is segment-based or file-based. This is an important characteristic since, in some applications, it can be crucial to capture the structure of the image. For example, it is useful to provide annotations for different areas of the image, describing several cues of information (such as a textual part or sub-images) or defining and describing different objects visualized in the image (e.g. people).

Threaded or unthreaded. This characteristic refers to the ability of the tool to respond or add to a previous annotation and to stagger/structure the presentation of annotations to reflect this.

Access control. This refers to the access provided for different users to the metadata. For example, it is important to distinguish between users that have simple access (just view) and users that have full access (view or change).

In conclusion, the appropriateness of a tool depends on the nature of annotation that the user requires and cannot be predetermined. A separate web page on Semantic Web Image Annotation Tools is maintained, which categorizes most of the annotation tools found on the Internet according to the characteristics described above. Any comments, suggestions or new tool announcements will be added to that separate document. The tools can be used for different types of annotations, depending on the use cases, as shown in the following section.

5. Example Solutions to the Use Cases

This section describes possible scenarios for how Semantic Web technology could be used for supporting the use cases presented in Section 2. These scenarios are provided purely as illustrative examples and do not imply endorsement by the W3C membership or the Semantic Web Best Practices and Deployment Working Group.

5.1 Use Case: Management of Personal Digital Photo Collections

A photo from a personal collection

Possible Semantic Web-based solution

The solution to the use case described in Section 2.1 requires the use of multiple vocabularies. The potential domain of a photo from a personal digital collection is very wide, and may include sports, entertainment, sightseeing, etc. In order to solve this use case, the information that a user needs to know about the image has to be taken into account for an appropriate selection of vocabularies. The use case requires creating semantic labels and associating them with the photo. Semantic labels may refer to both media and content type annotations. The examples cover three different approaches: manual, semi-automatic, and automatic. Each approach has advantages and disadvantages, and each one requires different solutions.

Manual Annotation

Manual annotation potentially offers the most accurate information, but it is the most time-consuming and thus expensive. In manual annotation, there is typically no need for creating comprehensive annotations based on media features (e.g. low-level image characteristics, also known as visual descriptors), since most users are not interested in querying the image database using low-level features such as shape, texture, color histograms, etc. However, for most applications, some minimal media type information is needed, such as the type of the image (i.e. JPEG, TIFF, etc.) or the resolution of the image. In addition, provenance information regarding the creation date, the creator, the thematic category, etc. is also common. VRA [VRA in RDF/OWL] can be used to describe the above information.

Regarding the actual content of the image, various vocabularies can be used depending on the respective thematic category. The example shows a photo that has content from the beach holidays thematic category. For this reason, a beach ontology and the PhotoStuff image annotator [PhotoStuff] can be used to describe the image content.
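As a minimal sketch of what such annotations could look like (the file name and literal values are purely illustrative, and the beach concept URI stands for a hypothetical beach ontology), media-level VRA properties can be combined with a content-level statement:

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:vra="http://www.vraweb.org/vracore/vracore3#">

  <!-- Illustrative media-level and content-level metadata for a personal photo -->
  <vra:Image rdf:about="http://example.org/photos/thailand-2005-08-13.jpg">
    <vra:measurements.format>image/jpeg</vra:measurements.format>
    <vra:measurements.resolution>2048 x 1536px</vra:measurements.resolution>
    <vra:date.creation>2005-08-13</vra:date.creation>
    <vra:creator>Katerina</vra:creator>
    <!-- content-level statement: the photo depicts a beach (hypothetical beach ontology concept) -->
    <dc:subject rdf:resource="http://example.org/ontologies/beach#Beach"/>
  </vra:Image>
</rdf:RDF>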

Semi-Automatic Annotation

Semi-automatic annotation uses automatic analysis to assist the manual extraction of higher-level, semantic labels (or vice versa). Image analysis tools such as image segmentation and object recognition tools are based on lower-level aspects of the media. As a result, a more extensive set of lower-level media type descriptors is needed in this approach. The current trend in the multimedia community is that the combination of image analysis tools with multimedia-specific and domain-specific vocabularies is shifting the image analysis, recognition and retrieval processes to a more semantic level.

Using the above beach holiday example, in order to semi-automatically annotate the image, low-level image concepts and relations are needed (color, shape, texture, etc.). The MPEG-7 visual part [MPEG-7] is an appropriate framework for the representation of such features. For this reason, a Visual Descriptor Ontology (VDO) [VDO], in combination with the beach domain ontology, can be used to assign visual descriptors to domain concepts so that they can be automatically recognized and thus annotated. For example, M-OntoMat-Annotizer can be used to manually segment objects that have a semantic meaning, then extract the respective visual descriptors and store them as prototype instances in a predefined domain ontology (the beach ontology). In addition, reasoning support is also required in the semi-automatic process. Using reasoning tools, higher-level concepts and events can be recognized in the image. Multimedia reasoning tools require spatio-temporal knowledge about the objects in the image (e.g. a person consists of a body, two hands, two legs and a head; or: the sky is above the sea, etc.). An example of visual descriptors associated with domain concepts using M-OntoMat-Annotizer is shown in the RDF graph below (Figure 1). The RDF code can be found here.

Figure 1: An RDF Graph Describing the association of MPEG-7 Visual descriptors with the domain concept "sand"
Automatic Annotation

Automatic annotation means that no user involvement is needed, and is thus time and cost effective. However, even with perfect image segmentation, person detection and object recognition, a tool will not recognize events such as "Katerina's holidays in Thailand". In the beach holiday example, more vocabularies are needed, such as a context ontology for acquiring the context of the image (e.g. automatically detecting that the image is about holidays at the beach and not in the mountains), in order to automatically annotate the image. Also, automation is needed in creating the prototype instances using the VDO, the domain ontologies and M-OntoMat-Annotizer, in order to automatically segment regions that may have semantic meaning and then extract and store the visual descriptors. Such an advanced approach is beyond the scope of this document.

Conclusion and discussion

The example solution shows that even manual annotation is non-trivial. It is difficult to provide a unified way to annotate personal photos. The context of the photo indicates which ontology must be used in the annotation process. In the above example, a beach domain ontology is used since the context of the photo is summer holidays. Apart from domain-specific ontologies, media type ontologies and a photo annotation tool are required to complete the annotation.

In the case of semi-automatic annotation, there are still many open research and technical issues. Even with perfect image analysis tools, a system cannot recognise events that may have semantic meaning. This problem is due to the gap that exists between low-level image analysis tools and high-level image annotations.

5.2 Use Case: Cultural Heritage

Claude Monet, Garden at Sainte-Adresse.
Image courtesy of Mark Harden, used with permission.

Possible Semantic Web-based solution

Many of the requirements of the use case described in Section 2.2 can be met by using the vocabulary developed by the VRA in combination with domain-specific vocabularies such as Getty's AAT and ULAN. In this section, we provide as an example a set of RDF annotations of a painting by Claude Monet, known in English as "Garden at Sainte-Adresse". It is part of the collection of the Metropolitan Museum of Art in New York. The corresponding RDF file is available as a separate document. No special annotation tools were used to create the annotations. We assume that cultural heritage organizations that need to publish similar metadata will do so by exporting existing information from their collection databases to RDF. Below, we discuss the different annotations used in this file.

Housekeeping

The file starts as a typical RDF/XML file, by defining the XML version and encoding and defining entities for the RDF and VRA namespaces that will be used later. Note that we use the RDF/OWL schema of VRA Core developed by Mark van Assem.

<?xml version='1.0' encoding='ISO-8859-1'?>
<!DOCTYPE rdf:RDF [
    <!ENTITY rdf        "http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <!ENTITY vra        "http://www.vraweb.org/vracore/vracore3#">
      
Work versus Image

The example includes annotations about two different images of the same painting. An important distinction made by the VRA vocabulary is that between annotations describing a work of art itself and annotations describing (digital) images of that work. This example also uses this distinction. In RDF, to say something about a resource, that resource needs to have a URI. We thus need not only the URIs of the two images, but also a URI for the painting itself:

    <!ENTITY image1    "http://www.metmuseum.org/Works_Of_Art/images/ep/images/ep67.241.L.jpg">
    <!ENTITY image2    "http://www.artchive.com/artchive/m/monet/adresse.jpg">
    <!ENTITY painting  "http://thing-described-by.org/?http://www.metmuseum.org/Works_Of_Art/images/ep/images/ep67.241.L.jpg">
]>
   
URI and ID conventions

VRA Core does not specify how works, images or annotation records should be identified. For the two images, we have chosen the most straightforward solution and use the URI of the image as the identifying URI. We did not have, however, a similar URI that identifies the painting itself. We could not reuse the URI of one of the images: this is not only conceptually wrong, but would also lead to technical errors, as it would make the existing instance of vra:Image also an instance of the vra:Work class, which is not allowed by the schema.

In the example, we have decided to `mint' the URI of the painting by arbitrarily selecting the URI of one of the images and prefixing it with http://thing-described-by.org/?. This creates a new URI that is distinct from the image itself, but when a browser resolves it, it will be redirected to the image URI by the thing-described-by.org web server (one could argue whether the use of an http-based URI is actually appropriate here; see What do HTTP URIs Identify? and [httpRange-14] for more details on this discussion).

Warning: The annotations described below also contain a vra:idNumber.currentRepository element, that defines the identifier used locally in the museum's repositories. These local identifiers should not be confused with the globally unique identifier that is provided by the URI.

More housekeeping: starting the RDF block

The next lines open the RDF block and declare the namespaces using the XML entities defined above. Out of courtesy, an rdf:seeAlso attribute helps agents find the VRA schema that is used.

<rdf:RDF  xmlns:rdf="&rdf;" xmlns:vra="&vra;"
  rdf:seeAlso="http://www.w3.org/2001/sw/BestPractices/MM/vracore3.rdfs"
>
   

Description of the work (painting)

The following lines describe properties of the painting itself; we will deal with the properties of the two images later. First, we provide general information about the painting, such as the title, its creator and the date of creation. For these properties, VRA closely follows the Dublin Core conventions:


  <!-- Description of the painting -->
  <vra:Work rdf:about="&painting;">

    <!-- General information -->
    <vra:title>Jardin à Sainte-Adresse</vra:title>
    <vra:title.translation>Garden at Sainte-Adresse</vra:title.translation>
    <vra:creator>Monet, Claude</vra:creator>                            <!-- ULAN ID:500019484 -->
    <vra:creator.role>artist</vra:creator.role>                         <!-- ULAN ID:31100     -->
    <vra:date.creation>1867</vra:date.creation>
   
Text fields and controlled vocabularies

Many values are filled with RDF literals, whose values are not further constrained by the schema. But many of these values are actually terms from other controlled vocabularies, such as the Getty AAT, ULAN or an image type defined by MIME. Using controlled vocabularies solves many problems associated with free text annotations. For example, ULAN recommends a spelling to be used when an artist's name is used for indexing, so for the vra:creator field we have used exactly this spelling ("Monet, Claude"). The ULAN identifiers of the records describing Claude Monet and the "artist" role are given in XML comments above. The use of controlled vocabularies can avoid confusion and the need for "smushing" different spellings of the same name later.

However, using controlled vocabularies does not solve the problem of ambiguous terms. The annotations below contain three different uses: "oil paint", "oil paintings" and "oil painting (technique)". The first refers to the type of paint used on the canvas, the second to the type of work (e.g. the work is an oil painting, and not an etching) and the last to the painting technique used by the artist. All three terms refer to different concepts that are part of different branches of the AAT term hierarchy (the AAT identifiers of these concepts are mentioned in XML comments). However, the use of terms that are so similar for different concepts is bound to lead to confusion. Instead, one could switch from datatype properties (owl:DatatypeProperty) to object properties (owl:ObjectProperty), and replace the literal text by a reference to the URI of the concept used. For example, one could change:
<vra:material.medium>oil paint</vra:material.medium>
to
<vra:material.medium rdf:resource="http://www.getty.edu/aat#300015050"/>

This approach requires, however, that an unambiguous URI-based naming scheme be defined for all terms in the target vocabulary (and in this case, such a URI-based naming scheme does not yet exist for AAT terms). Additional Semantic Web-based processing is also only possible once these vocabularies become available in RDF or OWL.

    <!-- Technical information -->
    <vra:measurements.dimensions>98.1 x 129.9 cm</vra:measurements.dimensions>
    <vra:material.support>unprimed canvas</vra:material.support>        <!-- AAT ID:300238097 -->
    <vra:material.medium>oil paint</vra:material.medium>                <!-- AAT ID:300015050 -->
    <vra:type>oil paintings</vra:type>                                  <!-- AAT ID:300033799 -->
    <vra:technique>oil painting (technique)</vra:technique>             <!-- AAT ID:300178684 -->

    <!-- Associated style etc -->
    <vra:stylePeriod>Impressionist</vra:stylePeriod>                    <!-- AAT ID:300021503 -->
    <vra:culture>French</vra:culture>                                   <!-- AAT ID:300111188 -->
   
Annotating subject matter

For many applications, it is useful to know what is actually depicted by the painting. One could add annotations of this style to an arbitrary level of detail. To keep the example simple, we have chosen to record only the names of the people that are depicted on the painting, using the vra:subject field. Also for simplicity, we have chosen not to annotate specific parts or regions of the painting. This might have been appropriate, for example, to identify the associated regions that depict the various people in the painting:

    <!-- Subject matter: who/what is depicted by this work -->
    <vra:subject>Jeanne-Marguerite Lecadre (artist's cousin)</vra:subject>
    <vra:subject>Madame Lecadre (artist's aunt)</vra:subject>
    <vra:subject>Adolphe Monet (artist's father)</vra:subject>
   
Provenance: annotating the past

Many of the fields below do not contain information about the current situation of the painting, but information about places and collections the painting has been in the past. This provides provenance information that is important in this domain.

    <!-- Provenance -->
    <vra:location.currentSite>Metropolitan Museum of Art, New York</vra:location.currentSite>
    <vra:location.formerSite>Montpellier</vra:location.formerSite>
    <vra:location.formerSite>Paris</vra:location.formerSite>
    <vra:location.formerSite>New York</vra:location.formerSite>
    <vra:location.formerSite>Bryn Athyn, Pa.</vra:location.formerSite>
    <vra:location.formerSite>London</vra:location.formerSite>
    <vra:location.formerRepository>
      Victor Frat, Montpellier (probably before 1870 at least 1879;
      bought from the artist); his widow, Mme Frat, Montpellier (until 1913)
    </vra:location.formerRepository>
    <vra:location.formerRepository>Durand-Ruel, Paris, 1913</vra:location.formerRepository>
    <vra:location.formerRepository>Durand-Ruel, New York, 1913</vra:location.formerRepository>
    <vra:location.formerRepository>
      Reverend Theodore Pitcairn and the Beneficia Foundation, Bryn Athyn, Pa. (1926-1967),
      sale, Christie's, London, December 1, 1967, no. 26 to MMA
    </vra:location.formerRepository>
    <vra:idNumber.currentRepository>67.241</vra:idNumber.currentRepository> <!-- MMA ID number -->
   

The remaining properties describe the origin of the sources used for creating the metadata and a rights management statement. We have used the vra:description element to provide a link to a web page with additional descriptive information:

    <!-- extra information, source of this information and copyright issues: -->
    <vra:description>For more information, see http://www.metmuseum.org/Works_Of_Art/viewOne.asp?dep=11&amp;viewmode=1&amp;item=67%2E241&amp;section=description#a</vra:description>
    <vra:source>Metropolitan Museum of Art, New York</vra:source>
    <vra:rights>Metropolitan Museum of Art, New York</vra:rights>
      

Image properties

Finally, we define the properties that are specific to the two images of the painting, which differ in resolution, copyright, etc. The first set of annotations describes a 500 x 380 pixel image located at the website of the Metropolitan itself, while the second set describes the properties of a higher resolution (1075 x 778 pixel) image at Mark Harden's Artchive website. Note that VRA Core does not specify how Works and their associated Images should be related. In the example we follow Van Assem's suggestion and use vra:relation.depicts to explicitly link the Image to the Work it depicts.

  <!-- Description of the first online image of the painting -->
  <vra:Image rdf:about="&image1;">
    <vra:type>digital images</vra:type>                                <!-- AAT ID: 300215302 -->
    <vra:relation.depicts rdf:resource="&painting;"/>
    <vra:measurements.format>image/jpeg</vra:measurements.format>                   <!-- MIME -->
    <vra:measurements.resolution>500 x 380px</vra:measurements.resolution>
    <vra:technique>Scanning</vra:technique>
    <vra:creator>Anonymous employee of the museum</vra:creator>
    <vra:idNumber.currentRepository>ep67.241.L.jpg</vra:idNumber.currentRepository>
    <vra:rights>Metropolitan Museum of Art, New York</vra:rights>
  </vra:Image>
   
  <!-- Description of the second online image of the painting -->
  <vra:Image rdf:about="&image2;">
    <vra:type>digital images</vra:type>                                <!-- AAT ID: 300215302 -->
    <vra:relation.depicts rdf:resource="&painting;"/>
    <vra:creator>Mark Harden</vra:creator>
    <vra:technique>Scanning</vra:technique>
    <vra:measurements.format>image/jpeg</vra:measurements.format>                   <!-- MIME -->
    <vra:measurements.resolution>1075 x 778px</vra:measurements.resolution>
    <vra:idNumber.currentRepository>adresse.jpg</vra:idNumber.currentRepository>
    <vra:rights>Mark Harden, The Artchive, http://www.artchive.com/</vra:rights>
  </vra:Image>
</rdf:RDF>
   

Conclusion and discussion

The example above reveals several technical issues that are still open. For example, the way the URI for the painting was minted is rather arbitrary. Preferably, there would have been a commonly accepted URI scheme for paintings (cf. the LSID scheme used to identify concepts from the life sciences). At the time of writing, the VRA, AAT and ULAN vocabularies used here have no commonly agreed upon RDF or OWL representations, which reduces the interoperability of the chosen approach. Tool support is another issue. While some major database vendors are already starting to support RDF, generating the type of RDF shown here from existing collection databases will in many cases require non-trivial custom conversion software.

From a modeling point of view, subject matter annotations are always non-trivial. As stated above, it is hard to give general guidelines about what should be annotated and to what depth, as this can be very application dependent. Note that in the example, we annotated the persons that appear in the painting, and that we modeled this information as properties of the painting URI, not of the two image URIs. But if we slightly modify our use case and assume one normal image and one X-ray image that reveals an older painting under this one, it might make more sense to model more specific subject matter annotations as properties of the specific images.

Nevertheless, the example shows that a large part of the issues described by the use case can be solved using current Semantic Web technology. It shows how RDF and existing vocabularies can be used to annotate various aspects of paintings and the images that depict them.

5.3 Use Case: Television News Archive

Possible Semantic Web-based solution

The use case described in Section 2.3 is a typical one requiring the use of multiple vocabularies. Let us imagine that the image to be described is about a disallowed goal by a given soccer player (e.g. J.A. Boumsong) for an active offside position during a particular game (e.g. Auxerre-Metz). First, the image can be extracted from a weekly sports magazine broadcast on a TV channel. This program may be fully described using the vocabulary developed by the [TV Anytime forum]. Second, this image shows the player Jean-Alain Boumsong scoring with his head during the game Auxerre-Metz. The context of this football game could be described using the [MPEG-7] vocabulary, while the action itself might be described by a soccer ontology such as the one developed by [Tsinaraki]. Finally, a soccer fan may notice that this goal was actually disallowed because of an active offside position of another player. On the image, a circle could highlight this badly positioned player. Again, the description could merge the MPEG-7 vocabulary for delimiting the relevant image region with a domain-specific ontology for describing the action itself. In the following, we provide as an example a set of RDF annotations illustrating these three levels of description as well as the vocabularies involved.

The image context

Let us consider that the image comes from a weekly sports magazine named Stade 2, broadcast on March 17th, 2002 on the French public channel France 2. This context can be represented using the TV Anytime vocabulary, which allows a TV (or radio) broadcaster to publish its program listings on the web or in an electronic program guide. This vocabulary provides the necessary concepts and relations for cataloging programs, giving their intended audience, format and genre, or some parental guidance. It also contains terms for describing, after broadcast, the actual audience and the peak viewing times, which are of crucial importance for broadcasters in order to adapt their advertisement rates.

RDF description of the program from which the image comes from
<?xml version='1.0' encoding='ISO-8859-1'?>
<!DOCTYPE rdf:RDF [
    <!ENTITY rdf        "http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <!ENTITY xsd        "http://www.w3.org/2001/XMLSchema#">
]>

<rdf:RDF
  xmlns:rdf="&rdf;"
  xmlns:xsd="&xsd;"
  xmlns:tva="urn:tva:metadata:2002"
>

  <tva:Program rdf:about="program1">
    <tva:hasTitle>Stade 2</tva:hasTitle>
    <tva:hasSynopsis>Weekly Sports Magazine broadcasted every Sunday</tva:hasSynopsis>
    <tva:Genre rdf:resource="urn:tva:metadata:cs:IntentionCS:2002:Entertainment"/>
    <tva:Genre rdf:resource="urn:tva:metadata:cs:FormatCS:2002:Magazine"/>
    <tva:Genre rdf:resource="urn:tva:metadata:cs:ContentCS:2002:Sports"/>
    <tva:ReleaseInformation>
      <rdf:Description>
        <tva:ReleaseDate rdf:datatype="&xsd;date">2002-03-17</tva:ReleaseDate>
        <tva:ReleaseLocation>fr</tva:ReleaseLocation>
      </rdf:Description>
    </tva:ReleaseInformation>
  </tva:Program>

</rdf:RDF>
    
The description of the action

To be done.

The description of particular region

Discuss the pros and cons of having either two separate files (one expressing the localization of the region and one representing the content annotation) or one RDF file containing both descriptions.

The annotation link

Discuss the various annotation links provided by MPEG-7 (annotates, depicts, exemplifies, etc).

5.4 Use Case: Large-scale Image Collections at NASA

Apollo 7 Saturn rocket launch - October 11th, 1968. Image courtesy of NASA, available at GRIN, used with permission.

Possible Semantic Web-based solution

One possible solution for the requirements expressed in the use case description in Section 2.4 is an annotation environment that enables users to annotate information about images and/or their regions using concepts from ontologies (OWL and/or RDFS). More specifically, subject matter experts will be able to assert metadata elements about images and their specific content. Multimedia-related ontologies can be used to localize and represent regions within particular images. These regions can then be related to the image via a depiction/annotation property. This functionality can be provided, for example, by the MINDSWAP digital-media ontology (to represent images, image regions, etc.), in conjunction with FOAF (to assert image depictions). Additionally, in order to represent the low-level image features of regions, the aceMedia Visual Descriptor Ontology can be used.

Domain Specific Ontologies

In order to describe the content of such images, a mechanism to represent the domain-specific content depicted within them is needed. For this use case, domain ontologies that define space-specific concepts and relations can be used. Such ontologies are freely available and include, but are not limited to, the following:

Visual Ontologies

As discussed above, this scenario requires the ability to state that images (and possibly their regions) depict certain things. For example, consider a picture of the Apollo 7 Saturn rocket launch. One would want to assert that the image depicts the Apollo 7 launch, that the Apollo 7 Saturn IB space vehicle is depicted in a rectangular region around the rocket, that the image creator is NASA, etc. One possible way to accomplish this is to use a combination of various multimedia-related ontologies, including FOAF and the MINDSWAP digital-media ontology. More specifically, image depictions can be asserted via a depiction property (a sub-property of foaf:depiction) defined in the MINDSWAP digital-media ontology. Thus, images can be semantically linked to instances defined on the Web. Image regions can be defined via an ImagePart concept (also defined in the MINDSWAP digital-media ontology). Additionally, regions can be given a bounding box by using a property named svgOutline, allowing localization of image parts. Essentially, SVG outlines (SVG XML literals) of the regions can be specified using this property. Using the Dublin Core standard and the EXIF schema, more general annotations about the image can be stated as well, including its creator, size, etc. A subset of these sample annotations is shown in an RDF graph below in Figure 2.

Figure 2: An RDF Graph Describing the Apollo 7 Launch Image

Figure 2 illustrates how the approach links metadata to the image:

Additionally, the complete set of annotations for the Apollo 7 launch image is shown below in RDF/XML.

RDF/XML annotations of Apollo 7 launch
<rdf:RDF
    xmlns:j.0="http://www.w3.org/2003/12/exif/ns#"
    xmlns:j.1="http://www.mindswap.org/2005/owl/digital-media#"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:j.2="http://semspace.mindswap.org/2004/ontologies/System-ont.owl#"
    xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
    xmlns:owl="http://www.w3.org/2002/07/owl#"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:j.3="http://semspace.mindswap.org/2004/ontologies/ShuttleMission-ont.owl#"
    xml:base="http://example.org/NASA-Use-Case" >

  <rdf:Description rdf:nodeID="A0">
    <j.1:depicts rdf:resource="#Saturn_1B"/>
    <rdf:type rdf:resource="http://www.mindswap.org/~glapizco/technical.owl#ImagePart"/>
    <rdfs:label>region2407</rdfs:label>
    <j.1:regionOf rdf:resource="http://grin.hq.nasa.gov/IMAGES/SMALL/GPN-2000-001171.jpg"/>
    <j.1:svgOutline rdf:parseType="Literal">
     <svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"
          xml:space="preserve" width="451" height="640" viewBox="0 0 451 640">
      <image xlink:href="http://grin.hq.nasa.gov/IMAGES/SMALL/GPN-2000-001171.jpg" x="0" y="0"  width="451" height="640" />
      <rect x="242.0" y="79.0" width="46.0" height="236.0" style="fill:none; stroke:yellow; stroke-width:1pt;"/>
     </svg>
    </j.1:svgOutline>
  </rdf:Description>

  <rdf:Description rdf:about="http://grin.hq.nasa.gov/IMAGES/SMALL/GPN-2000-001171.jpg">
    <j.0:imageLength>640</j.0:imageLength>
    <dc:date>10/11/1968</dc:date>
    <dc:description>Taken at Kennedy Space Center in Florida</dc:description>
    <j.1:depicts rdf:resource="#Apollo_7_Launch"/>
    <j.1:hasRegion rdf:nodeID="A0"/>
    <dc:creator>NASA</dc:creator>
    <rdf:type rdf:resource="http://www.mindswap.org/~glapizco/technical.owl#Image"/>
    <j.0:imageWidth>451</j.0:imageWidth>
  </rdf:Description>

  <rdf:Description rdf:about="#Apollo_7_Launch">
    <j.3:launchDate>10/11/1968</j.3:launchDate>
    <j.3:codeName>Apollo 7 Launch</j.3:codeName>
    <j.3:has_shuttle rdf:resource="#Saturn_1B"/>

    <rdfs:label>Apollo 7 Launch</rdfs:label>
    <j.1:depiction rdf:resource="http://grin.hq.nasa.gov/IMAGES/SMALL/GPN-2000-001171.jpg"/>
    <rdf:type rdf:resource="http://semspace.mindswap.org/2004/ontologies/ShuttleMission-ont.owl#Launch"/>
  </rdf:Description>

  <rdf:Description rdf:about="#Saturn_1B">
    <rdfs:label>Saturn_1B</rdfs:label>
    <j.1:depiction rdf:nodeID="A1"/>
    <rdfs:label>Saturn 1B</rdfs:label>
    <rdf:type rdf:resource="http://semspace.mindswap.org/2004/ontologies/System-ont.owl#ShuttleName"/>
    <j.1:depiction rdf:nodeID="A0"/>
  </rdf:Description>

</rdf:RDF>
   

In order to represent the low-level features of images, the aceMedia Visual Descriptor Ontology [VDO] can be used. This ontology contains representations of MPEG-7 visual descriptors and models concepts and properties that describe the visual characteristics of objects. For example, the dominant color descriptor can be used to describe the number and value of the dominant colors present in a region of interest, together with the percentage of pixels covered by each color value.
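
As an illustration only, the sketch below shows how such a dominant color descriptor might be attached to the image region (node A0) if these statements were added to the annotation document above. The vdo namespace URI and the class and property names used here (DominantColorDescriptor, hasDescriptor, colorValue, percentage) are hypothetical placeholders; they do not necessarily correspond to the actual terms defined in the aceMedia Visual Descriptor Ontology.

<rdf:RDF
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:vdo="http://example.org/acemedia/vdo#"
    xml:base="http://example.org/NASA-Use-Case" >

  <!-- Hypothetical sketch: a dominant color descriptor for the image region
       identified by the blank node A0 in the example above. The vdo namespace
       and all vdo terms are placeholders, not actual aceMedia VDO terms. -->
  <rdf:Description rdf:nodeID="A0">
    <vdo:hasDescriptor>
      <vdo:DominantColorDescriptor>
        <!-- one dominant color value and the fraction of pixels it covers -->
        <vdo:colorValue>#C8C8D2</vdo:colorValue>
        <vdo:percentage rdf:datatype="http://www.w3.org/2001/XMLSchema#float">0.62</vdo:percentage>
      </vdo:DominantColorDescriptor>
    </vdo:hasDescriptor>
  </rdf:Description>

</rdf:RDF>

In practice, such descriptor values would typically be generated automatically by a feature extraction tool such as M-OntoMat-Annotizer rather than written by hand.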

Available Annotation Tools

Existing toolkits, such as [PhotoStuff] and [M-OntoMat-Annotizer], currently provide graphical environments for the annotation tasks mentioned above. Using such tools, users can load images, draw regions around parts of an image, automatically extract low-level features of selected regions (via M-OntoMat-Annotizer), assert statements about the selected regions, etc. Additionally, the resulting annotations can be exported as RDF/XML (as shown above), allowing them to be shared, indexed, and used by advanced annotation-based browsing and search environments.

6. Conclusions

Current Semantic Web technologies are sufficiently generic to support annotation of a wide variety of Web resources, including image resources. This document provides examples of the use of Semantic Web languages and tools for image annotation, based on use cases from a wide variety of domains. It also briefly surveys some currently available vocabularies and tools that can be used to semantically annotate images so that machines can better process them. The use of Semantic Web technologies has significant advantages in application areas in which the interoperability of heterogeneous metadata is important, and in areas that require explicitly defined, formal semantics of the metadata in order to perform reasoning tasks.

Still, many things need to be improved. Commonly accepted, widely used vocabularies for image annotation are still missing. Having such vocabularies would help in sharing metadata across applications and across multiple domains. In particular, a standard means of addressing subregions within an image is still lacking. In addition, tool support needs to improve dramatically before Semantic Web-based image annotation can be applied on an industrial scale: support needs to be integrated into the entire production and distribution chain. Finally, many existing approaches to image metadata are not based on Semantic Web technologies, and work is required to make these approaches interoperable with the Semantic Web.

References

[AAT]
Art and Architecture Thesaurus. The J. Paul Getty Trust, 2004. (See http://www.getty.edu/research/conducting_research/vocabularies/aat/)
[Dublin Core]
The Dublin Core Metadata Initiative, Dublin Core Metadata Element Set, Version 1.1: Reference Description.
[httpRange-14]
TAG's issue list, issue 14, see http://www.w3.org/2001/tag/issues.html?type=1#httpRange-14
[HTTP-URI]
Tim Berners-Lee, What do HTTP URIs Identify? Available at http://www.w3.org/DesignIssues/HTTP-URI
[Hunter, 2001]
J. Hunter. Adding Multimedia to the Semantic Web — Building an MPEG-7 Ontology. In International Semantic Web Working Symposium (SWWS 2001), Stanford University, California, USA, July 30 - August 1, 2001.
[LSID]
Life Sciences Identifier specification, http://www.omg.org/cgi-bin/doc?dtc/04-05-01.
[MIME-2]
RFC 2046: Multipurpose Internet Mail Extensions (MIME) Part Two: Media Types. N. Freed, N. Borenstein, November 1996. Available at ftp://ftp.isi.edu/in-notes/rfc2046.txt
[M-OntoMat-Annotizer]
M-OntoMat-Annotizer Project Homepage at http://www.acemedia.org/aceMedia/results/software/m-ontomat-annotizer.html
[MPEG-7]
Information Technology - Multimedia Content Description Interface (MPEG-7). Standard No. ISO/IEC 15938:2001, International Organization for Standardization (ISO), 2001.
[Ossenbruggen, 2004]
J. van Ossenbruggen, F. Nack, and L. Hardman. That Obscure Object of Desire: Multimedia Metadata on the Web (Part I). In: IEEE Multimedia 11(4), pp. 38-48 October-December 2004.
[Ossenbruggen, 2005]
F. Nack, J. van Ossenbruggen, and L. Hardman. That Obscure Object of Desire: Multimedia Metadata on the Web (Part II). In: IEEE Multimedia 12(1), pp. 54-63 January-March 2005.
[OWL Guide]
OWL Web Ontology Language Guide, Michael K. Smith, Chris Welty, and Deborah L. McGuinness, Editors, W3C Recommendation, 10 February 2004, http://www.w3.org/TR/2004/REC-owl-guide-20040210/ . Latest version available at http://www.w3.org/TR/owl-guide/ .
[OWL Semantics and Abstract Syntax]
OWL Web Ontology Language Semantics and Abstract Syntax, Peter F. Patel-Schneider, Patrick Hayes, and Ian Horrocks, Editors, W3C Recommendation 10 February 2004, http://www.w3.org/TR/2004/REC-owl-semantics-20040210/ . Latest version available at http://www.w3.org/TR/owl-semantics/ .
[PhotoStuff]
PhotoStuff Project Homepage at http://www.mindswap.org/2003/PhotoStuff/
[RDF Primer]
RDF Primer, F. Manola, E. Miller, Editors, W3C Recommendation, 10 February 2004. This version is http://www.w3.org/TR/2004/REC-rdf-primer-20040210/. The latest version is at http://www.w3.org/TR/rdf-primer/.
[RDF Syntax]
RDF/XML Syntax Specification (Revised), Dave Beckett, Editor, W3C Recommendation, 10 February 2004, http://www.w3.org/TR/2004/REC-rdf-syntax-grammar-20040210/ . Latest version available at http://www.w3.org/TR/rdf-syntax-grammar/ .
[Stamou, 2005]
G. Stamou and S. Kollias (eds). Multimedia Content and the Semantic Web: Methods, Standards and Tools. John Wiley & Sons Ltd, 2005.
[Troncy, 2003]
R. Troncy. Integrating Structure and Semantics into Audio-visual Documents. In Second International Semantic Web Conference (ISWC 2003), pages 566 – 581, Sanibel Island, Florida, USA, October 20-23, 2003. Springer-Verlag Heidelberg.
[Tsinaraki]
Tsinaraki, C.: OWL soccer ontology available at http://elikonas.ced.tuc.gr/ontologies/soccer.zip.
[TV Anytime]
TV Anytime Forum, http://www.tv-anytime.org/
[ULAN]
Union List of Artist Names. The J. Paul Getty Trust, 2004. (See http://www.getty.edu/research/conducting_research/vocabularies/ulan/)
[VDO]
aceMedia Visual Descriptor Ontology, available from http://www.acemedia.org/aceMedia/reference/resource/index.html
[VRA Core]
Visual Resources Association Data Standards Committee, VRA Core Categories, Version 3.0. Available at: http://www.vraweb.org/vracore3.htm.
[VRA in RDF/OWL]
Mark van Assem. http://www.w3.org/2001/sw/BestPractices/MM/vra-conversion.html describes the RDFS schema of VRA Core 3.0 used in section 5.2.
[XML NS]
Namespaces in XML, Bray T., Hollander D., Layman A. (Editors), World Wide Web Consortium, 14 January 1999. This version is http://www.w3.org/TR/1999/REC-xml-names-19990114/. The latest version is http://www.w3.org/TR/REC-xml-names/.

Acknowledgments

The editors would like to thank John Smith (IBM T. J. Watson Research Center), Chris Catton (University of Oxford) and the following Working Group members for their feedback on earlier versions of this document: Mark van Assem, Jeremy Carroll, Jane Hunter, Libby Miller, Guus Schreiber and Michael Uschold.

This document is a product of the Multimedia Annotation on the Semantic Web Task Force of the Semantic Web Best Practices and Deployment Working Group.