W3C

Publishing and Linking on the Web

W3C Working Group Note 30 April 2013

This version:
http://www.w3.org/TR/2013/NOTE-publishing-linking-20130430/
Latest published version:
http://www.w3.org/TR/publishing-linking/
Latest editor's draft:
http://www.w3.org/2001/tag/doc/publishingAndLinkingOnTheWeb.html
Previous version:
http://www.w3.org/TR/2012/WD-publishing-linking-20121025/
Editors:
Ashok Malhotra, Oracle
Larry Masinter, Adobe
Jeni Tennison, Independent
Daniel Appelquist, Telefónica

Abstract

The Web borrows familiar concepts from physical media (e.g., the notion of a "page") and overlays them on top of a networked infrastructure (the Internet) and a digital presentation medium (browser software). This is a convenient abstraction, but when social or legal concepts and frameworks relating to documents, publishing and speech are applied to the Web, the analogies can be misleading: publishing a page on the Web, for example, is fundamentally different from printing and distributing a page in a magazine or book.

Communication is often subject to governance: legislation, legal opinion, regulation, convention and contract; these are ways in which society looks to enforce norms, for example, around copyright, censorship, and privacy. But there is often a mismatch between governance intended to apply to the Web (usually based on the analogy with physical media) and the technology and architecture used to create it.

This document is intended to inform future social and legal discussions about the Web by describing its architecture (the ways in which the Web's technical facilities operate to store, publish and retrieve information) and by providing definitions for terminology as used within the Web's technical community.

Status of This Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

The publication of this Note indicates that work has ended on this specification for the time being. Work on this document may restart in the future in the TAG or in another Working Group when we have consensus on how to proceed.

This document was published by the Technical Architecture Group as a Working Group Note. If you wish to make comments regarding this document, please send them to www-tag@w3.org (subscribe, archives). All comments are welcome.

Publication as a Working Group Note does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

1. Why We Wrote This Document

In recent months there have been several legal actions against individuals and organizations for making available material that is illegal or seditious, or that may be under copyright. We argue in section 4.3 Including that the manner in which the material is made available is important and should be taken into consideration. Similarly, as we explain in section 4.2 Caching and Relaying, there are several kinds of intermediaries that store and/or integrate material from other sources. The role of these intermediaries is quite different from that of websites that provide original content, and the law should (and in some cases does) distinguish between them.

As the Web has worked its way into the fabric of our lives and access to the Internet is likened to free speech as a fundamental right, it is also increasingly subject to governance. By governance we mean societal controls in general, whether by legislation, regulation, court order, contract, or other means. Unfortunately, a number of problems arise when dealing with governance of the Internet.

The goal of this document is to clarify the technology. If it informs policy-makers and thus helps make better policies, it will have succeeded. A secondary goal is to point out that many restrictions on the use of Web material that are commonly written into Terms and Conditions are better implemented through technology. For example, if a website does not want other sites linking to specific pages, it is more effective not to provide URIs for those pages than to include this restriction in the Terms and Conditions.

Readers interested in legal opinion and case citations related to linking and the role of intermediaries (the most common causes of lawsuits), as well as other related matters, may want to consult ChillingEffects.org and LinksandLaw.com.

2. Introduction

The act of viewing a web page is a complex interaction between a user's browser and any number of web servers. Unlike reading a book, viewing a web page involves copying the data held on the servers onto the user's computer, if only temporarily. Logic encoded within the page may cause more copying to take place, perhaps from other servers. The combined material may be displayed or otherwise used within the original page, often without the user's explicit knowledge or consent. For an end user, it is usually impossible to tell whether a given image or video displayed within a page originates from the server the page comes from or from some other location.

In addition to browsers and webservers, many other kinds of servers live on the Web. Proxy servers and services that combine and repackage data from other sources may also retain copies of this material. These intermediary services may transform, translate or rewrite some of the material that passes through them, to enhance the user's experience of the web page or for their own purposes. Still other services on the web, such as search engines and archives, make copies of content as a matter of course. This is, in part, to facilitate the indexing necessary for their operation, and in part to provide value to their users and to the original authors of the web page. These intermediaries, we argue, should be treated differently by the law based on how much control they have over the underlying material and how they process it.

A number of legal questions have arisen concerning material that originates on other sites. The Wikipedia page on Copyright aspects of hyperlinking and framing discusses several examples.

2.1 Background

Many content publishers seek to control the use of their content on the Web. In some cases, they employ means that do not take into account the Web's true architecture, and do not use the technical mechanisms available to them. A few illustrative examples are provided below.

Licenses that describe how material may be copied and altered by others tend not to distinguish between a proxy compressing a web page to make it load faster and someone editing and republishing the page on their own website. To illustrate, the Creative Commons Attribution-NoDerivs license defines the following terms (emphasis added):

Adaptation
means a work based upon the Work, or upon the Work and other pre-existing works, such as a translation, adaptation, derivative work, arrangement of music or other alterations of a literary or artistic work, or phonogram or performance and includes cinematographic adaptations or any other form in which the Work may be recast, transformed, or adapted including in any form recognizably derived from the original, except that a work that constitutes a Collection will not be considered an Adaptation for the purpose of this License. For the avoidance of doubt, where the Work is a musical work, performance or phonogram, the synchronization of the Work in timed-relation with a moving image ("synching") will be considered an Adaptation for the purpose of this License.
Distribute
means to make available to the public the original and copies of the Work through sale or other transfer of ownership.
Reproduce
means to make copies of the Work by any means including without limitation by sound or visual recordings and the right of fixation and reproducing fixations of the Work, including storage of a protected performance or phonogram in digital form or other electronic medium.

Consider, now, whether everyday technical operations fall within these definitions: for example, is a proxy that compresses a page making an Adaptation, and is a cache that stores and re-serves a page making a Reproduction?

Terms and Conditions statements on websites also list acceptable and unacceptable behavior on a site, with any browsing on the site implicitly indicating acceptance of the terms. These generally do not take into account the behavior of proxies. For instance, one standard set of Terms and Conditions includes:

You may view, download for caching purposes only, and print pages from the website for your own personal use, subject to the restrictions set out below and elsewhere in these terms of use.

You must not:

(a) republish material from this website (including republication on another website);

(b) sell, rent or sub-license material from the website;

(c) show any material from the website in public;

(d) reproduce, duplicate, copy or otherwise exploit material on our website for a commercial purpose;

(e) edit or otherwise modify any material on the website; or

(f) redistribute material from this website except for content specifically and expressly made available for redistribution (such as our newsletter)

It is not possible to view material on the web without it being downloaded onto your computer, so forbidding downloading except for caching purposes essentially means that people cannot view the page. In addition, many proxies automatically transform the documents that pass through them, for example to compress them so that they take up less bandwidth for mobile consumption or to introduce advertisements into pages that are accessed free of charge. Should this be prohibited?

Limits placed on the use of a website often include limitations on automatic indexing of the website, without exceptions for search engines that make the website discoverable or archives that ensure its longevity. For example, the set of terms and conditions quoted above goes on to say:

You must not conduct any systematic or automated data collection activities (including without limitation scraping, data mining, data extraction and data harvesting) on or in relation to our website without our express written consent.

Search engines rely on systematic data collection from websites in order to provide users with accurate search results, and archives do so in order to retain websites for posterity. So, these terms and conditions, if adhered to strictly, put the website out of the reach of search engines and hence make it undiscoverable; surely this is not in the best interest of the website. Another problem is that automated agents (webcrawlers, spiders, robots) that gather information from the web are unable to read these terms and conditions; the only signals they understand are the technical ones that a website provides about what is permitted. See more on this below.

As another example, the terms and conditions for gsig.com include:

Use of Materials: Upon your agreement to the Terms, GSI grants you the right to view the site and to download materials from this site for your personal, non-commercial use. You are not authorized to use the materials for any other purpose. If you do download or otherwise reproduce the materials from this Site, you must reproduce all of GSI's proprietary markings, such as copyright and trademark notices, in the same form and manner as the original.

...

You may not use any "deep-link", "page-scrape", "robot", "spider" or any other automatic device, program, algorithm or methodology or any similar or equivalent manual process to access, acquire, copy or monitor any portion of the Site or any of its content, or in any way reproduce or circumvent the navigational structure or presentation of the Site.

It would be simpler and more effective for the site to use technical means to control what webcrawlers, spiders or robots access on the site, namely a robots.txt file: a set of machine-processable instructions telling automated web agents what they can and cannot do. The site could also exempt automated web agents from the Terms and Conditions, as discussed below.
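As an illustrative sketch of such machine-processable instructions (the paths and the crawler name below are hypothetical, not taken from any real site), a robots.txt file placed at the root of a website might read:

```
# Hypothetical robots.txt served from the root of a website

User-agent: *            # rules for all crawlers
Disallow: /private/      # do not fetch anything under /private/
Disallow: /members/

User-agent: ExampleBot   # a specific crawler, excluded entirely
Disallow: /
```

Compliance with robots.txt is voluntary, but well-behaved crawlers, including major search engines, honour it, which makes it far more effective than terms and conditions that automated agents cannot read.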

Many sites have a linking policy that limits what links other sites may make to them. These conditions are typically not backed up by the relatively simple technical mechanisms that could prevent such links from being made. For example, the website at quotec.co.uk has a linking policy that includes:

Links pointing to this website should not be misleading.

Appropriate link text should always be used.

From time to time we may update the URL structure of our website, and unless we agree in writing otherwise, all links should point to http://www.quotec.co.uk.

You must not use our logo to link to this website (or otherwise) without our express written permission.

You must not link to this website using any inline linking technique.

You must not frame the content of this website or use any similar technology in relation to the content of this website.

Technically it is straightforward to prevent linking to pages that the website does not want others to link to: simply do not provide URIs for those pages, or make the URIs undiscoverable. This is likely to be more effective than asking people to read and adhere to the Terms and Conditions. Several techniques for controlling linking and inclusion are discussed in section 6. Techniques.
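As one sketch of such a technique (browser support varies, and the response shown is illustrative), a site that forbids framing can instruct browsers directly through its HTTP responses instead of relying on its Terms and Conditions:

```
HTTP/1.1 200 OK
Content-Type: text/html
X-Frame-Options: DENY
```

A conforming browser that receives the X-Frame-Options header refuses to display the page inside a frame on another site, so the no-framing condition is enforced technically rather than contractually.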

Legislation that governs the possession and distribution of unlawful material (such as child pornography, material that infringes copyright or material that is legally suppressed through a gag order) needs to exempt certain types of services, such as caching or hosting, as it would be impractical for such services to police all the material that passes through their servers. This does not, however, always happen, and intermediaries are often held accountable for material that did not originate with them and over which they have no control. An example of legislation that does exempt intermediaries is the UK Coroners and Justice Act 2009, Schedule 13; from the Explanatory Notes (emphasis added):

Paragraphs 3 to 5 of [Schedule 13] provide exemptions for internet service providers from the offence of possession of prohibited images of children in limited circumstances, such as where they are acting as mere conduits for such material or are storing it as caches or hosts.

3. Terminology

This section summarises the terminology that is used within this document. More detail about each term is given in the rest of the document.

3.1 Actors and Agents

Archiving Server
A service that stores copies of web content, e.g. to provide an ongoing historical record.
Browser
A user agent that accesses web content and displays the resulting web pages for users.
Controller
A company, organization or individual who manages web content on a web server and determines what it serves.
Crawler
A user agent which accesses web content automatically.
Search Engine
A service that indexes web content and provides an interface to search this index, typically including links to the indexed content and often copies of it as well.
Service Provider
A company, organization or individual who owns and operates a web server.
User
A real person. In this document, somebody who is using a browser.
User Agent
Software that accesses web content.
Web Server
Software that makes web content available on the web.
Origin Server
The web server from which particular web content originates.
Proxy
A kind of web server that relays web content.
Caching Proxy
A proxy that keeps a copy of the web content that it relays.
Reuser
In this document, a web server that reuses web content from elsewhere on the web, adding value to it by combining it with other information or transforming it, e.g., a "server-side mashup."

3.2 Artifacts

Address
A string used to name web content, with which a user agent can access it. Web standards call this a URL, URI or IRI.
Link
An address, visible within a web page, with which a user can interact.
Web Content
Content available on the web. Web content includes data in a variety of formats: text, HTML, images, video, audio, style sheets and scripts, or anything hosted by a web server that may be accessed by a user agent. When the user agent is a browser, top-level access usually results in a user viewing a web page.
Web Page
What the user sees and/or hears when a browser accesses web content.

3.3 Actions

Access
To retrieve web content using its address, and display or otherwise process it. For some web content, processing may involve accessing additional web content which is embedded or included, recursively.
Archive
To permanently store a copy of web content that is hosted elsewhere.
Cache
To (temporarily) store a copy of web content which is hosted by another server in order to avoid retrieving it again. The cached copy is used instead of passing through the request.
Copy
In the context of this document, the duplication of bits from one place to another place. Since copy is an imprecise term, we have tried to use more specific terms such as cache, relay, alias and include.
Relay
To provide access to web content hosted by another server without keeping a copy.
Embed
To include web content (such as text, an image or video) within a Web Page in a way that the embedded web content appears as an integral part of the parent web page when it is presented to a user.
Host
In this document, to provide original access (not as a proxy or relay) to web content.
Include
Within this document, to access additional web content (such as an image or a script), while processing a web page.
Index
To extract information from web content (hosted elsewhere) to facilitate searching and/or retrieval of that content.
Transform
To change content such as by reformatting, resizing, translating, or rewriting content.
Upload
To put a file on a server such that it is given an address on the web.

4. Publishing

The concept of publishing on the web has evolved as the web's ecosystem has enlarged and diversified, and as the capabilities of browsers and the web standards that they implement have developed. There is no single definition of what publishing on the web means. Instead, there are a number of activities that could be viewed as publication or distribution in a legal sense. This section describes some of these activities and how they work.

4.1 Hosting

The basic form of publication on the web is hosting. A server hosts a file if it stores the file on disk or generates the file from data that it stores, and that file did not (to the server's knowledge) originate elsewhere on the web.

Fig. 1 Accessing hosted pages

A browser makes a request to a hosting server for a file. The hosting server responds with the content of that file, which the browser stores in a local cache and displays to the user.

Diagram showing a browser accessing web content from a hosting server and creating a copy of it within its cache.

The presence of data on a server does not necessarily mean that the organisation that owns and maintains the server has an awareness of the presence of the data or its content. Many websites are hosted on shared hardware that is owned by a service provider that stores and serves data at the direction of controlling individuals and organisations which determine the data they provide on the site. Because of this, multiple servers may host the same file at different locations.

There are many different types of service provider. Some exercise practically no control over the software and data that they host, merely providing a base platform on which code can run. Others may focus on particular types of content, such as images (e.g. Flickr), videos (e.g. YouTube) or messages (e.g. Twitter). Also, there may be many service providers involved in the publication of a particular file on the web: some providing hardware, others providing different kinds of publishing support.

Some service providers automatically perform transformations on material that they host, as a service, such as converting to alternative formats, clipping or resizing, or marking up text. When they sign up to a service, controllers explicitly or implicitly enter into an agreement with the service provider that grants them a license to perform transformations on the material which they upload.

Service providers that host particular types of material often employ automatic filters to prevent the publication of unlawful material, but it is impossible for a service provider to detect and filter out everything that might be unlawful.

To add to the complexity of this area, it is possible for each of the parties and servers involved in publication to be in a different jurisdiction, and to be controlled by different laws and conventions.

4.2 Caching and Relaying

Some servers provide access to files that are hosted elsewhere on the web: on an origin server that holds the original version of the file. These files might be stored on the server and provided again at a later time (a caching proxy in this document), or might simply pass through the server in response to a request (a relaying server in this document).

Fig. 2 Accessing web content via a caching proxy

A browser requests web content from a server; the caching proxy responds with content that it has fetched from an origin server.

Diagram showing a caching proxy passing on to a browser a copy of content from elsewhere.

It is often impossible to tell whether a server is providing a stored response or whether it has made a new request to the origin server and is serving the results of that request. Servers commonly store the results of some requests and not others, acting as a caching proxy some of the time and as a relaying server the rest.

In both cases, the file the caching or relaying server provides might differ from the original web content accessed from the origin server: for example, a proxy may have compressed, reformatted or otherwise transformed the content in transit.

Caching and relaying servers are extremely useful on the web. There are four main types of caching and relaying servers discussed here: proxies, archives, search engines and reusers. The distinctions between them are summarised in the table below.

             proxy                  archive               search engine       reuser
purpose      increase network       maintain historical   locate relevant     better understand
             performance            record                information         information
refreshing   based on HTTP headers  never                 variable            based on HTTP headers
retrieval    on demand              proactive             proactive           usually on demand
URI use      usually uses same URI  uses new URI          uses new URI        uses new URI
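The "based on HTTP headers" entries above refer to standard caching metadata carried in responses. As a hedged sketch (the values are illustrative), an origin server that is happy for proxies to cache a page might respond with:

```
HTTP/1.1 200 OK
Content-Type: text/html
Cache-Control: public, max-age=3600
Last-Modified: Tue, 23 Apr 2013 10:00:00 GMT
```

Cache-Control: public, max-age=3600 permits any caching proxy to store the response and re-serve it for up to an hour; after that, the proxy revalidates with the origin server (for example using the Last-Modified date) before reusing its copy.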

4.2.1 Archives

Archives aim to catalog and provide access to some web content to provide an on-going historical record. They use crawlers to fetch pages and other web content from the portion of the web that they cover, and store them on their own servers, along with metadata about the pages, including when each was retrieved. They then may provide access to the stored copies of the web content at particular historical dates, enabling people to see how pages used to appear.

Archives are often run by institutions that have a legal mandate and responsibility to keep a historical record, such as a legal deposit. Although their primary purpose is long term record-keeping, they often make this material available online as well. They might restrict access to the data for a period of time after it is collected, for security or privacy reasons, and may respond to legally-backed removal requests. Users might use archives for research, but also to access information that has otherwise been removed from the Web.

When they are made available to the public, archived pages are often distinguishable by end users from the original page through banners placed within the page or by having the original page appear within a frame. The links (both to other pages and to embedded web content such as images) are usually rewritten so that when the user interacts with the page, they are taken to the version of the linked web content from the same point in time. "Dark archives" do not make their content available to the public.

4.2.2 Search Engines

Search engines aim to catalog and analyze as many web pages as they can, so that they can direct users to appropriate information in response to a search. They use crawlers to fetch pages and other web content from the web, analyse them and store them on their own servers to support further analysis.

Search engines are mostly interested in indexing web content and providing links to it rather than in the content itself. They may or may not copy the page itself, but they always store metadata about the page, derived from the information in the page and from other information on the web, such as which other pages link to it.

Search engines play an important role in the web by enabling people to find information, including that which would otherwise be lost or is temporarily unavailable. When a user views a stored page from a search engine, it is usually obvious both that the search engine is involved (from the URI of the page and from banners or framing), that the content originally came from somewhere else, and where it came from. The links within the page are not usually rewritten.

4.2.3 Reusers and Aggregators

A server that is a reuser fetches information from one or more origin servers and either provides an alternative URI for the same page or adds value to it by reformatting it or combining it with other data. Good examples are the BBC Wildlife Finder, which incorporates information from Wikipedia, Animal Diversity Web and other sources, and triplr.org, which converts data from one format to another as a service.

Fig. 3 Reusing information

A browser requests content, which is constructed by the reusing server requesting information from two origin servers

Diagram showing a reuser building a page from information from two origin servers.

Reusers that do not change the information from the origin server may be used to simplify access to the origin server (by mapping simple URLs to a more complex query) or to provide a route around gateways or the same-origin policy (as servers are not limited in where they access web content from).

Since reused information is, by design, seamlessly integrated into a page that is served from the reuser, people viewing that page will not generally be aware that the information originates from elsewhere. The URIs used for the pages will be those of the reuser. The license on the material may require attribution, and even when it does not, it is good practice for reusers to indicate where the material originates.

4.3 Including

A web page written in HTML may include other web content, such as images, video, scripts, stylesheets, data and other HTML. The HTML in a web page refers to this external web content using markup. For example, an <img> element uses the src attribute to refer to an image which should be shown within the page. Material that is included within a web page may appear to be a hosted copy to the user of a website, but in fact may come from somewhere else, entirely outside the control of the owner of the web page.

Fig. 4 Including web content

A browser requests a file from a server; instructions in the page tell the browser to request other web content from a different server

Diagram showing a page including other web content

HTML supports several different mechanisms for including external web content in a web page, but they all work in essentially the same way. When a user navigates to a web page, the browser automatically fetches all the included web content into its local cache and executes or displays it within the page.
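A minimal sketch of such inclusion (all URLs here are invented for illustration): the page below is served by one site, but the image it embeds is fetched by the browser directly from another:

```html
<!-- Page hosted at http://www.example.com/article.html -->
<html>
  <body>
    <p>An article with an embedded photograph.</p>
    <!-- The browser, not www.example.com, requests this image
         directly from photos.example.org when displaying the page -->
    <img src="http://photos.example.org/photo.jpg" alt="A photograph">
  </body>
</html>
```

To the user the photograph appears to be an integral part of the article, even though it never touches the server at www.example.com.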

Inclusion is different from hosting, copying or disseminating a file because the information is never stored on, and never passes through, the server that hosts the including web page. As such, although the included web content is an essential component of the page, needed to make it appear and function as a whole, the server of the web page has no control over that content, which may change without its knowledge.

4.3.1 Inclusion Chains

When scripts or HTML are included into web pages, the included content may itself include other content (which may in turn include still more, and so on). The author of the original web page can choose what content to include, but has no control over the choice of subsequently included content. The publishers of included content might change that content at any time, possibly without warning. This has been exploited to include third-party images without permission, to replace an image with something distasteful, or to redirect to a link that performs an unintended action on the user's behalf; see Preventing MySpace Hotlinking.

4.3.2 Hidden Requests

Some of the web content that is used within a page may be invisible to the user. An example is a hidden image that is used for tracking purposes: each time a user navigates to the page, the hidden image is requested; the server uses the information from the request of the image to build a picture of the visitors to the site.

This facility can be used for malicious purposes. An <img> element can point to any URI (not just an image), and fetching it causes a GET request to that URI. If a website has been constructed such that GET requests cause an action to be carried out (such as logging out of a website), a page that includes this "image" will cause the action to take place.
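Both uses look identical in markup, as this hedged sketch shows (the URLs are invented, and the second example only has an effect if the target site performs actions in response to GET requests):

```html
<!-- A hidden tracking image: the request itself is the point -->
<img src="http://tracker.example.com/pixel.gif?page=article1"
     width="1" height="1" alt="">

<!-- A malicious "image": merely viewing a page containing this
     element makes the browser send a GET request that logs the
     user out of victim.example.com -->
<img src="http://victim.example.com/logout" alt="">
```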

4.4 Linking

Linking is a fundamental facility on the web; indeed, it has been argued that linking is what makes the Web the Web. HTML pages can link to other pages using the <a> element, with the href attribute holding the URI of the linked page. Some links may be to pages from the same origin, while others will be cross-origin links to pages on third-party sites that hold related information.
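In markup, same-origin and cross-origin links differ only in the address that the href attribute carries (the URLs below are hypothetical):

```html
<!-- A same-origin link, relative to the current site -->
<a href="/about.html">About this site</a>

<!-- A cross-origin link to a page on a third party's site -->
<a href="http://www.example.org/related-article">A related article elsewhere</a>
```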

Fig. 5 Linking to pages

A browser requests content from a server; instructions in the page link to other content, but the browser does not retrieve that content until told to do so

Diagram showing a page linking to other web content

A user can usually tell where a link will take them, either before selecting it by "mousing over" it or, once it is selected, through the status bar in the browser, although some links are overridden by onclick event handling that takes the user to a different location. Some websites, such as Wikipedia, use icons to indicate whether a link is a cross-origin link or will take the user to a page on the same server. The use of interstitial pages or dialog boxes that warn the user they are about to leave the site can obscure the eventual destination of the link.

4.4.1 Linking Out of Control

Traditionally, a user must take a specific action in order to navigate to the linked page, such as clicking on the link or selecting it with a keystroke or a voice command. In these cases, the linked page cannot be accessed without the user's knowledge and consent, although they may not know where they will eventually end up.

Some sites use elaborate means that obscure whether, and where, a link is followed by the user. For example:

  • Browsers can be made to navigate to another page using scripted navigation, which may simply run automatically (navigating the user to another page after a period of time, for example) and can hide the location of a link, such that users don't know where they will be navigating to. The HTML5 history API [HTML5] enables a script to change the address bar, which can mean that the address bar does not reflect the actual location of the page (although this use is limited to locations that have the same origin as the original page).
  • Browsers might pre-fetch web content that seems likely to be visited next, so that the target page is loaded more quickly when the user accesses it. A page can indicate which links should be pre-fetched using the prefetch link relation in a link. For example, a page might indicate that the first result in a list of search results should be fetched before the user actually navigates the link.
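In markup, a pre-fetch hint of the kind described above looks like the following sketch (the URI is illustrative):

```html
<!-- Hint to the browser that this page is likely to be visited next;
     the browser may fetch it before the user selects any link to it. -->
<link rel="prefetch" href="http://example.com/results/first-hit.html">
```

Browsers are free to ignore the hint, but where it is honored the target page may be requested from its server even though the user never navigates to it.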

5. Linking vs. Inclusion

Although linking and inclusion (or embedding or transclusion) are often confounded, they are fundamentally different.

In an article discussing the decision by the National Newspapers of Ireland to charge for links to articles that appear within their pages, the author says: "There's the fact that naming a work's title does not and cannot be copyright infringement, not under US law ... A link (or the URL inside it) is little more than a name, so arguably the same rule would apply. And even if it is more than a name, the URL can be regarded as a factual statement (you can find the content here) and facts arguably cannot be copyrighted in the US (some courts disagree)."

This is clearly not the case with inclusion. If you include material from another site, especially without attribution, you can be prosecuted for copyright infringement; and if the material is judged to be seditious or otherwise unlawful, you can be prosecuted for distributing inappropriate material.

6. Techniques

The discussion above highlights how difficult it can be for end users (both human and machine) to be aware of the original source of content on the web, and of the ways in which it may have been changed en route to them. Controllers of content need to make clear how that content can be used elsewhere, both through human-readable prose and through technical barriers that limit access. Third parties that use content that originates elsewhere, whether proxies, reusers or linkers, should also follow good practices when transforming, reusing and linking to information. This is discussed in more detail below.

6.1 Controlling Access

Publishers can control access to pages in several ways. In addition to not giving out URIs to these pages, they can control access via the Referer HTTP header, which indicates the page from which a request was made. If that was not a page on your own site, then you can redirect to your site's home page, for example. It is also possible to do this check in JavaScript, which can then be used to bring up a dialog to check whether the contractual terms have been read, to confirm that the user is over 18, or to ask for a password.
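As a sketch (the host and paths are illustrative), a server applying this check would inspect the Referer header of an incoming request and serve the page only when the request was referred from its own gateway page:

```http
GET /members/article.html HTTP/1.1
Host: example.com
Referer: http://example.com/gateway.html
```

Requests carrying a different Referer, or none at all, could be redirected elsewhere. Note that the Referer header can be spoofed and is sometimes suppressed by browsers or proxies, so it is not a reliable access-control mechanism on its own.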

You can also use a cookie, for example, to start a session only when a page is accessed through a given gateway page and reject or provide an alternative path for requests that don't have the cookie set.

The User-Agent HTTP header, which indicates the identity of the user agent making the request, is particularly useful in preventing access by crawlers and search engines. A robots.txt file on the website can also be used for the same purpose.

The domain name or IP address of the client making the connection can also be used to prevent specific reusers from accessing material.

6.2 Controlling Inclusion

As well as the techniques above, which can be used to control any access to pages, it's also possible to provide additional control over the inclusion of content in a third-party's web pages.

To prevent an HTML page from being embedded within a frame, publishers can include a script that checks whether the document is the top document in the window.
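A minimal "frame-busting" script of this kind might look like the following sketch; if the document is not the top document in the window, it navigates the whole window to itself:

```html
<script>
  // If this page has been embedded in a frame, break out of it by
  // making this page the top-level document in the window.
  if (window.top !== window.self) {
    window.top.location = window.self.location;
  }
</script>
```

Scripts like this can themselves be circumvented by hostile framing pages; the X-Frame-Options response header offers a related, server-enforced alternative.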

The Cross-Origin Resource Sharing Working Draft [CORS] defines a set of HTTP headers that can be used to give the publisher of the third-party resource greater control over access to their resources. These are usually used to open up cross-origin access to resources that publishers want to be reused, such as JSON or XML data exposed by APIs, by indicating to the browser that the resource can be fetched by a cross-origin script.
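For example (the origin is illustrative), a server wishing to allow scripts from a particular third-party origin to fetch a JSON resource might respond with:

```http
HTTP/1.1 200 OK
Content-Type: application/json
Access-Control-Allow-Origin: http://example.com

{"data": "..."}
```

Absent such a header, browsers prevent cross-origin scripts from reading the response; the header therefore acts as an opt-in to reuse.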

Note

A new From-Origin or Embed-Only-From-Origin HTTP header is also currently under discussion by the Web Applications Working Group and described within the Cross-Origin Resource Embedding Restrictions Editor's Draft [CORER]. This would enable publishers to control which origins are able to embed the content they publish into their pages.

Publishers should ensure that actions are not taken on behalf of their users in response to an HTTP GET on a URI, as otherwise sites are open to security breaches through inclusion, as described in section 4.3 Including. It is also good practice to check the Referer header in these cases, to prevent actions being taken as the result of the submission of forms within other websites' web pages, unless that functionality is desired.

6.3 Controlling Caching

There are a number of HTTP headers [HTTP11] that enable content providers to indicate whether a proxy should cache a given page and for how long it should keep the copy. These are described in detail within Section 13: Caching in HTTP. For example, a server can use the HTTP header Cache-Control: no-store to indicate that a particular response should not be cached by a proxy server.
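For example, a response that must not be stored by any cache might carry headers like the following sketch:

```http
HTTP/1.1 200 OK
Content-Type: text/html
Cache-Control: no-store
```

Other directives, such as max-age and private, give finer-grained control over how long, and by whom, a response may be cached.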

Publishers of websites can also indicate which pages should not be fetched or indexed by any search engine or archive through robots.txt [ROBOTS] and the robots <meta> element [META]. They can indicate other characteristics of web pages, such as how frequently they might change and their importance on the website, through sitemaps [SITEMAPS]. More sophisticated publishers may use the Automated Content Access Protocol (ACAP) extensions [ACAP] to attempt to indicate access policies.
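As a sketch (the path is illustrative), a robots.txt file asking all crawlers to stay out of part of a site looks like this:

```
# robots.txt at the root of the site
User-agent: *
Disallow: /private/
```

The per-page equivalent is a robots <meta> element in the page's head:

```html
<meta name="robots" content="noindex, nofollow">
```

Both are conventions rather than enforcement mechanisms: they are honored only by well-behaved crawlers.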

Publishers can also use the rel="canonical" link relationship to indicate a canonical URI for a page which should be used by search engines and other reusers to reference a given page.
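For example (the URIs are illustrative), a page that is reachable at several URIs can point search engines and other reusers at its preferred URI with:

```html
<link rel="canonical" href="http://example.com/articles/original-article">
```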

6.4 Controlling Processing

The Cache-Control: no-transform HTTP header indicates that a proxy server must not change the original content, nor the headers.

For example, a proxy server must not convert a TIFF served with Cache-Control: no-transform into a JPEG, nor should it rewrite links within an HTML page.

6.5 Licensing

Websites can include a license that describes how the information served by the website can be reused by others.

Just as with HTTP headers, robots.txt and sitemaps, there can be no technical guarantees that crawlers will honor license information within a site. However, to give well behaved crawlers a chance of identifying the license under which a page is published, websites should:
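One common convention, sketched here with an illustrative license URI, is to declare the license of a page with a rel="license" link, which both human readers and crawlers can discover:

```html
<a rel="license" href="http://creativecommons.org/licenses/by/3.0/">
  This page is licensed under a Creative Commons Attribution 3.0 License
</a>
```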

6.6 How to Write Good Websites

This section describes techniques that you should use when operating a website that incorporates material from other sources, whether by caching, transforming or simply linking.

6.6.1 Honoring Headers

As described in section 6.3 Controlling Caching and section 6.4 Controlling Processing, there are a number of HTTP headers and other conventions that indicate how origin servers intend other servers to treat the resources that they publish. Servers that cache or reuse data from origin servers should obey these headers.

6.6.2 Adding Headers

Proxies must use the Via HTTP header when they handle requests to origin servers, to indicate their involvement in the response to the user's original request. Proxies which perform transformations on content must include a Warning: 214 Transformation applied HTTP header in the response.
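For example (the host name is illustrative), a transforming proxy's response might include:

```http
HTTP/1.1 200 OK
Via: 1.1 proxy.example.net
Warning: 214 proxy.example.net "Transformation applied"
```

The Via header identifies the intermediary; the Warning header tells the recipient that the content is not byte-for-byte what the origin server sent.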

These and other recommendations for proxies which perform transformations are included in the Guidelines for Web Content Transformation Proxies 1.0.

6.6.3 Attribution

Many licenses require reusers of information to provide attribution to the original source of the material. This attribution must be human-readable, so that users understand where the material came from, and may also be computer-readable, which enables automated tools to track the use of material on the web.

The wording and positioning of attribution is usually dictated by the license under which the material is made available. For example, if you use the free icons available from Axialis Software their license includes:

If you use the icons in your website, you must add the following link on each page containing the icons (at the bottom of the page for example):

Icons by Axialis Team

The HTML code for this link is:

<a href="http://www.axialis.com/free/icons">Icons</a> by <a href="http://www.axialis.com">Axialis Team</a>

If there is no explicit guidance about the location of attribution, it is recommended that attribution to material from a third party appear as close to the actual material as possible. Methods to make the attribution machine-readable include:

  • the use of the cite attribute on the <blockquote> element, where a portion of a page is quoted within your own site
  • using the dc:source property with microformats, microdata or RDFa to indicate the source of a portion of the page (identified through an id)
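For example, a quoted passage might combine both forms of attribution in markup like the following sketch (the URIs are illustrative); the cite attribute records the source machine-readably, while the visible link close to the quotation serves human readers:

```html
<!-- Machine-readable attribution via the cite attribute -->
<blockquote cite="http://en.wikipedia.org/wiki/Sloth">
  <p>...quoted material...</p>
</blockquote>

<!-- Human-readable attribution, placed close to the quoted material -->
<p>This description is from
  <a href="http://en.wikipedia.org/wiki/Sloth">Wikipedia</a>.</p>
```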

An example of clear attribution of material from another site is that of the BBC Wildlife Finder; the following screenshot shows the attribution within the page on the Pygmy Three-toed Sloth.

Fig. 6 Attributing Reused Content

A description of the pygmy three-toed sloth is followed by an attribution that reads "This entry is from Wikipedia, the user-contributed encyclopedia. If you find the content in the 'About' section factually incorrect, defamatory or highly offensive you can edit this article at Wikipedia. For more information on our use of Wikipedia please read our FAQ."

6.6.4 Linking

There are a number of practices around linking to third-party sites that can help users and automated agents understand the relationship between your website and the third parties. These include:

  • Use rel="nofollow" for links where the link is not meant to imply approval; these links will not be used by search engines when determining the relevance of a page
  • Use rel="external" for links to third-party web pages; this can be used as the basis of styling, such as an image that indicates the user will be taken to a separate site
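In markup, these two practices look like the following sketch (the URIs are illustrative):

```html
<!-- A link the site does not vouch for: search engines should not count it -->
<a href="http://example.com/submitted-page" rel="nofollow">a submitted link</a>

<!-- A link to another site, which can be styled as "external" -->
<a href="http://example.org/related-article" rel="external">related coverage</a>
```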

There are a number of techniques that can be used to track which links are followed from a website. Methods that rewrite the links within a web page, whether to point to an interstitial ("you are leaving this website") page or to route through a tracking script, can mislead the user and any automated agents about the target of the link. It is better to use a script to capture onclick or other events and redirect the user at that point.
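A minimal sketch of this event-based approach (the logging endpoint is illustrative): the link's href still shows its true destination, and a click handler records the navigation before letting it proceed normally:

```html
<a href="http://example.org/article" id="tracked-link">an article elsewhere</a>
<script>
  // Log the click without rewriting the visible link target; the image
  // request records the click while navigation proceeds as usual.
  document.getElementById("tracked-link").onclick = function () {
    new Image().src = "/log-click?u=" + encodeURIComponent(this.href);
  };
</script>
```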

7. Conclusions

In conclusion, publishing on the Web is different from print publishing. This document has enumerated some of these differences, especially those relevant to licensing, attribution and copyright issues.

Recommended practices include:

A. Acknowledgements

Many thanks to Thinh Nguyen, Rigo Wenning, Wendy Seltzer and other TAG members for their reviews and comments on earlier versions of this draft, and to Robin Berjon for ReSpec.js.

B. References

B.1 Informative references

[CORS]
Anne van Kesteren. Cross-Origin Resource Sharing. 29 January 2013. W3C Candidate Recommendation. URL: http://www.w3.org/TR/2013/CR-cors-20130129
[HTML5]
Robin Berjon et al. HTML5. 17 December 2012. W3C Candidate Recommendation. URL: http://www.w3.org/TR/html5/
[HTTP11]
R. Fielding et al. Hypertext Transfer Protocol - HTTP/1.1. June 1999. RFC 2616. URL: http://www.ietf.org/rfc/rfc2616.txt
[RDFA-CORE]
Shane McCarron et al. RDFa Core 1.1: Syntax and processing rules for embedding RDF through attributes. 7 June 2012. W3C Recommendation. URL: http://www.w3.org/TR/2012/REC-rdfa-core-20120607/