The Web borrows concepts from physical media (the notion of a "page," for example) and overlays them on a networked infrastructure (the Internet) and a digital presentation medium (the browser software). This mapping is an abstraction that enables the Web user to interact more easily with content and applications. However, when social or legal concepts and frameworks relating to documents, publishing and speech are applied to the Web, this abstraction often does not suffice. Publishing a page on the Web is fundamentally different from printing and distributing a page in a magazine or book, but because the social conventions around these physical media are so strong and have been reinforced through our society for hundreds of years, it is all too tempting to apply them to the Web even when doing so is not appropriate.
This document was written, in part, because of some legal issues that were raised to the TAG. It does not attempt to answer these legal questions, but rather it seeks to set definitions for terms which could inform future social and legal dialog and opinion around publishing and linking on the Web.
This is an Editor's Draft that the TAG intends to advance to a Working Draft on the Recommendation track at W3C. The previous version of this document was published as a Draft TAG Finding.
The act of viewing a web page is a complex interaction between a user's browser and any number of web servers. Unlike giving someone a book, say, viewing a web page is an act of copying: the data held on the servers is copied onto the user's computer. The page itself may cause more copying to take place — of images, videos and other files, perhaps from other servers, that are displayed or otherwise used within the original page — without the user's explicit knowledge or consent. For an end user, it is usually impossible to tell whether a given image or video displayed within a page originates from the server the page comes from or from some other location.
Proxy servers and services that combine and repackage data from other sources may also retain copies of this material, due to the user's original request for the page. These intermediary services may transform, translate or rewrite some of the material that passes through them, to enhance the user's experience of the web page.
Still other services on the web, such as search engines and archives, make copies of content as a matter of course, to provide value to their users and to the original authors of the web page (as it enables the content to be found more easily).
Licenses that describe how material may be copied and altered by others tend not to take account of this complexity, for example to distinguish between a proxy compressing a web page to make it load faster and someone editing and republishing the page on their own website. To illustrate, the Creative Commons Attribution-NoDerivs license defines the terms "Adaptation", "Distribute" and "Reproduce" as follows:
- "Adaptation" means a work based upon the Work, or upon the Work and other pre-existing works, such as a translation, adaptation, derivative work, arrangement of music or other alterations of a literary or artistic work, or phonogram or performance and includes cinematographic adaptations or any other form in which the Work may be recast, transformed, or adapted including in any form recognizably derived from the original, except that a work that constitutes a Collection will not be considered an Adaptation for the purpose of this License. For the avoidance of doubt, where the Work is a musical work, performance or phonogram, the synchronization of the Work in timed-relation with a moving image ("synching") will be considered an Adaptation for the purpose of this License.
- "Distribute" means to make available to the public the original and copies of the Work through sale or other transfer of ownership.
- "Reproduce" means to make copies of the Work by any means including without limitation by sound or visual recordings and the right of fixation and reproducing fixations of the Work, including storage of a protected performance or phonogram in digital form or other electronic medium.
Terms and Conditions statements on websites also list acceptable and unacceptable behaviour on a site, with any browsing on the site implicitly indicating acceptance of the terms. These generally do not take into account the behaviour of proxies that transform content on-the-fly for mobile consumption, for example. For instance, one standard set of terms and conditions includes:
You must not:
(a) republish material from this website (including republication on another website);
(b) sell, rent or sub-license material from the website;
(c) show any material from the website in public;
(d) reproduce, duplicate, copy or otherwise exploit material on our website for a commercial purpose;
(e) edit or otherwise modify any material on the website; or
(f) redistribute material from this website except for content specifically and expressly made available for redistribution (such as our newsletter)
Limits placed on the use of a website often include limitations on automatic indexing of the website, without exceptions for the search engines that make the website discoverable or the archives that ensure its longevity. For example, the same set of terms and conditions as described above includes:
You must not conduct any systematic or automated data collection activities (including without limitation scraping, data mining, data extraction and data harvesting) on or in relation to our website without our express written consent.
As another example, the terms and conditions for gsig.com include:
Use of Materials: Upon your agreement to the Terms, GSI grants you the right to view the site and to download materials from this site for your personal, non-commercial use. You are not authorized to use the materials for any other purpose. If you do download or otherwise reproduce the materials from this Site, you must reproduce all of GSI’s proprietary markings, such as copyright and trademark notices, in the same form and manner as the original.
You may not use any “deep-link”, “page-scrape”, “robot”, “spider” or any other automatic device, program, algorithm or methodology or any similar or equivalent manual process to access, acquire, copy or monitor any portion of the Site or any of its content, or in any way reproduce or circumvent the navigational structure or presentation of the Site.
Many sites have a linking policy that limits what links can be made to the site from other sites; the fact that these policies can be found directly through searches for "all links should point to" illustrates that these conditions are not backed up by technical mechanisms that would prevent such links from being made. For example, the website at quotec.co.uk has a linking policy that includes:
Links pointing to this website should not be misleading.
Appropriate link text should always be used.
From time to time we may update the URL structure of our website, and unless we agree in writing otherwise, all links should point to http://www.quotec.co.uk.
You must not use our logo to link to this website (or otherwise) without our express written permission.
You must not link to this website using any inline linking technique.
You must not frame the content of this website or use any similar technology in relation to the content of this website.
Legislation that governs the possession and distribution of illegal material (such as child pornography, information that is under copyright or material that is legally suppressed through a gag order) often needs to exempt certain types of services, such as caching or hosting, as it would be impractical for the people running those services to police all the material that passes through their servers. An example of legislation that does this in the UK is the Coroners and Justice Act 2009 Schedule 13; from the Explanatory Notes (emphasis added):
Paragraphs 3 to 5 of [Schedule 13] provide exemptions for internet service providers from the offence of possession of prohibited images of children in limited circumstances, such as where they are acting as mere conduits for such material or are storing it as caches or hosts.
Examples of the kind of legal questions that have arisen are:
There are many other examples on the Wikipedia page on Copyright aspects of hyperlinking and framing.
This document does not aim to address whether particular activities on the web are illegal or legal; this is outside the scope of the TAG. Instead, it aims to:
This section summarises the terminology that is used within this document. More details about each of the terms are given in the rest of the document.
The concept of publishing on the web has evolved as the web's ecosystem has enlarged and diversified, and as the capabilities of browsers and the web standards that they implement have developed. There is no single definition of what publishing on the web means. Instead there are a number of activities that could be viewed as publication or distribution in a legal sense, or something else. This section describes each of these activities and how they work.
The basic form of publication on the web is hosting. A server hosts a file if it stores the file on disk or generates the file from data that it stores, and that file did not (to the server's knowledge) originally come from elsewhere on the web.
Hosting and Possession
The presence of data on a server does not necessarily mean that the organisation that owns and maintains the server is aware of that data being present. Many websites are hosted on shared hardware owned by a service provider, which stores and serves data on behalf of the individuals and organisations that control and determine the data provided on the site. Because of this, multiple servers may host the same file at different URIs. For example, an artist could upload the same image to multiple servers, which then store the image and serve it to others.
There are many different types of service provider. Some may exercise practically no control over the software and data that they host but provide hardware on which code can run. Others may focus on particular types of content, such as images (eg Flickr), videos (eg YouTube) or messages (eg Twitter). There may be many service providers involved in the publication of a particular file on the web: some providing hardware, others providing different kinds of publishing support.
Licensing Transformation by Service Providers
Some service providers automatically perform transformations on material that they host, as a service, such as converting it to alternative formats, clipping or resizing it, or marking up text. When they sign up to a service, controllers explicitly or implicitly enter into an agreement that grants the service provider a license to perform such transformations on the material that they upload.
Transformation of Illegal Material
Service providers that host particular types of material often employ automatic filters to prevent the publication of illegal material, but it is impossible for a service provider to detect and filter out everything that might be illegal. Where illegal material is not successfully filtered out, the provider's automatic processing of files, including any transformations performed as part of its service, will still be applied to it.
To add to the complexity of this area, it is possible for each of the following to be in different jurisdictions:
Some servers provide access to files that are hosted elsewhere on the web, on an origin server that holds the original version of the file. These files might be stored on the server and provided again at a later time, in which case for the purposes of this document it is termed a caching server, or might simply pass through the server in response to a request, in which case for the purposes of this document it is termed a distributing server.
It is usually impossible to tell whether a server is providing a stored response or has made a new request to an origin server and is serving the results of that request. Servers commonly store the results of some requests and not others, acting as a caching server some of the time and as a distributing server the rest.
In both cases, the file the caching or distributing server provides may be different from the original one that it has accessed from the origin server. For example:
Caching and distributing servers are extremely useful on the web. There are four main types of caching and distributing servers discussed here: proxies, archives, search engines and reusers. The distinctions between them are summarised in the table below.
| | proxies | archives | search engines | reusers |
|---|---|---|---|---|
| purpose | increase network performance | maintain historical record | locate relevant information | better understand information |
| refreshing | based on HTTP headers | never | variable | based on HTTP headers |
| retrieval | on demand | proactive | proactive | usually on demand |
| URI use | usually uses same URI | uses new URI | uses new URI | uses new URI |
Caching by Proxies
Legislation and licenses that restrict copying may consequently prevent caching by proxies, which would make the Web slower.
Transformation by Proxies
Legislation and licenses that restrict the changing of data by proxies may also slow down the web, particularly in low-bandwidth situations.
Controlling Proxy Behaviour
Proxies should comply with instructions from origin servers that describe whether pages may be copied and transformed, but will only be able to comply with those that are machine-readable.
Proxies come in four general flavors: forward proxies, gateways, transforming proxies and reverse proxies.
The use of a forward proxy, gateway or transforming proxy may be configured either on an individual machine or transparently for a particular network. Users may have no idea that their requests are channelled through a given proxy, or they may have configured their set-up to use the proxy.
Reverse proxies appear to be normal servers to users: it is impossible for a user to tell that their request is actually passed on to a completely different origin server, or where that server is. This is intentional as the origin server in this case is a private one.
To improve performance, some proxies, particularly CDNs, may pre-fetch resources that a page includes, since these resources are likely to be requested by the browser soon after the page is viewed. In other words, although generally the contents of a proxy's cache will be determined by the requests that users of that proxy have made, the proxy might also in some cases contain content that no one has ever requested.
Archives aim to catalog and provide access to some portion of web content to provide an on-going historical record. They use crawlers to fetch pages and other resources from the portion of the web that they cover, and store them on their own servers, along with some metadata about the pages, particularly when they were retrieved. They then provide access to the stored copies of the resources at particular historical dates, enabling people to see how pages used to appear.
Archiving by Archives
Legislation and licenses that restrict copying may consequently prevent archiving and therefore prevent archives from storing an accurate historical record of the web.
Transformation by Archives
Legislation and licenses that restrict the rewriting of links by archives make archived material harder, or even impossible, to use.
Controlling Archive Behaviour
Archives should comply with instructions from origin servers that describe whether pages may be copied and transformed, but will only be able to comply with those that are machine-readable.
Archives are usually run by institutions that have a legal mandate and responsibility to keep this historical record, such as legal deposit libraries. Although their primary purpose is long term record-keeping, they often make this material available online as well. While they might restrict access to the data for a period of time after it is collected, for security or privacy reasons, it is not usually possible to remove information from an archive. Users might use archives for research, but also to access information that has otherwise been removed from the web.
Archived pages are usually distinguishable from the original page by end users, either through banners placed within the page or by having the original page appear within a frame. The links (both to other pages and to embedded resources such as images) are usually rewritten so that when the user interacts with the page, they are taken to the version of the linked resource at the same point in time.
Search engines aim to catalog and provide access to as many web pages as they can, so that they can direct users to appropriate information in response to a search. They use crawlers to fetch pages and other resources from the web, analyse them and store them on their own servers to support further analysis.
Copying by Search Engines
Legislation and licenses that restrict copying by search engines prevent information from being easily found by users.
Transformation by Search Engines
Legislation and licenses that restrict the transformation of material by search engines prevent them from using the information within the page to provide relevant information to people searching.
Controlling Search Engine Behaviour
Search engines should comply with instructions from origin servers that describe whether pages may be copied and transformed, but will only be able to comply with those that are machine-readable.
Search engines are most interested in indexing resources and providing links to them rather than in the content of the resource itself. They might not copy the page itself, but they always store metadata about the page, derived from the information in the page itself and other information on the web, such as what other pages link to it.
Search engines play an important role in the web in enabling people to find information, including that which would otherwise be lost or is temporarily unavailable. When a user views a stored page from a search engine, it is usually obvious both that the search engine is involved (from the URI of the page and from banners or framing), that the content originally came from somewhere else, and where it came from. The links within the page are not usually rewritten.
Data reuse is becoming more prevalent as web servers act as services to others. A server that is a reuser fetches information from one or more origin servers and either provides an alternative URI for the same page or adds value to it by reformatting it or combining it with other data. Good examples are the BBC Wildlife Finder, which incorporates information from Wikipedia, Animal Diversity Web and other sources, and triplr.org, which converts RDF data from one format to another as a service.
Controlling Reuser Behaviour
Reusers should comply with instructions from origin servers that describe whether pages may be copied and transformed.
Attributing Reused Material
Reusers should indicate the sources of the information on pages, for both humans and computers.
Reusers that do not change the information from the origin server may be used to simplify access to the origin server (by mapping simple URLs to a more complex query) or to provide a route around gateways or the same-origin policy (as servers are not limited in where they access resources from).
Since reused information is, by design, seamlessly integrated into a page that is served from the reuser, people viewing that page will not generally be aware that the information originates from elsewhere. The URIs used for the pages will be those of the reuser, for example. Licenses on the material may require attribution; even when they do not, it is good practice for reusers to indicate where the material originates.
An alias is a URI that points the browser to another URI on an origin server. A server can automatically redirect a browser (using an HTTP 3XX status code and a Location header). Web pages from a server can do the same thing using a <meta> element with an http-equiv attribute set to Refresh; this technique is often used with a slight delay to indicate to the user that they are being redirected to another page.
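For example, a page might redirect visitors to a replacement page after a short delay using markup along these lines; the destination URI here is purely illustrative:

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>This page has moved</title>
    <!-- Redirect to the new location after a five second delay;
         the target URI is a placeholder used for illustration -->
    <meta http-equiv="Refresh" content="5; url=https://www.example.org/new-location">
  </head>
  <body>
    <p>This page has moved to
      <a href="https://www.example.org/new-location">a new location</a>.</p>
  </body>
</html>
```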
Redirecting to Illegal Material
Aliases mean that users could easily fall foul of legislation or licenses that ban access to particular information, because it is easy to put in place URIs that appear innocent but redirect to the banned information.
Aliases do not involve any of the information from the origin server passing through or being stored by the redirecting server, but the redirecting server will be able to record when a particular URI is requested.
Although it is preferable to only have one URI for a particular resource, redirections are a useful mechanism for managing change on the web. They are used within websites when the structure of the website changes, or between websites when a new website is created that supersedes the first, or to archived information when a host no longer wants to provide access to a file itself.
Redirections are also used to provide other services. Link shorteners provide a short URI for a resource that is then redirected to the original URI, and are useful in locations where space is limited such as in print or on Twitter. Depending on their implementation, link-tracking services can use a similar technique to enable servers to analyse which links are followed from their site: the link tracker records the request and redirects the user to the true target page.
When aliasing is used, users may not be aware of the eventual target of a link, or of the involvement of an aliasing server, both of which are important. Shortened links, for example, hide the target location behind a URI that often has no visible relationship to the eventual destination of the page. Some implementations of link tracking do not change the original destination of the link (such that the status bar on a browser shows the eventual target of the page) but instead use the onclick event to direct the user to the aliasing server.
Following a redirection, browsers change the address bar to the new location, but this is often the only indication, so users may or may not be aware of this happening.
Web pages typically rely on many resources other than the HTML in which the page is written, such as images, video, scripts, stylesheets, data and other HTML. The HTML in a web page refers to these external resources in markup; for example, an <img> element uses the src attribute to reference an image which should be shown within the page. Material that is included within a web page may appear to the user of a website to be a hosted copy, but may in fact be hosted completely separately, outside the control of the owner of the web page.
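As a minimal sketch, using placeholder domains, the page below is served from one site but displays an image that is hosted, and controlled, by an entirely different one:

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Inclusion example</title>
  </head>
  <body>
    <!-- This page might be served from https://www.example.com/, but the
         image below is fetched by the browser directly from a third-party
         server; both domains are placeholders for illustration -->
    <img src="https://media.example.net/photos/sloth.jpg"
         alt="A pygmy three-toed sloth">
  </body>
</html>
```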
Includers should comply with computer and human-readable instructions from origin servers that describe whether and how resources may be included within a page.
Those who include third-party resources within their web pages should indicate the sources of the information on such pages, for both humans and computers.
HTML supports several different ways of including other resources in a page, which are listed in , but they all work in basically the same way. When a user navigates to a web page, the browser typically automatically fetches all the included resources into its local cache and executes them or displays them within the page.
Inclusion is different from hosting, copying or disseminating a file because the information is never stored on, nor passes through, the server that hosts the web page doing the including. As such, although the included resources are an essential component of the page to make it appear and function as a whole, the server of the web page does not have control over their content.
Users may not be aware that included resources are used within a page at all. When included resources are embedded within the page such that they are visible to a user, it won't be clear that an image or video is from a third-party website rather than the website that they are visiting unless this is explicitly indicated within the content of the page.
A resource that is included into a popular page causes a large number of requests to the server on which the file is published, which can be burdensome to the third party who hosts the file. Publishers whose files are reused in this way therefore typically have terms and conditions that apply to the reuse of those files, and may have to put in place technical barriers to limit it.
As with normal links, included resources may or may not have the same origin as the page that includes them. Resources such as images and scripts that are included within the web page may be from any site. However, browsers implement a same-origin policy which generally means that third-party resources cannot be fetched and processed by scripts running on the page, for example through XMLHttpRequests [[XMLHTTPREQUEST]] (though typically these scripts can write markup into the page which includes such resources).
When scripts or HTML are included into web pages, the included resource may itself include other resources (which may include still more and so on). The author of the original web page has control over which resources it includes, but will not have control over which resources those included resources go on to include.
Whatever their level of remove from the origin page, the publishers of included resources may change the content of those resources at any time, possibly without warning. This has been exploited in cases where websites included third-party images without permission, to substitute the image with something distasteful or to redirect to a link that performed an action on the user's behalf; see Preventing MySpace Hotlinking.
Some of the resources that are included within a page may be invisible to the user. An example is a hidden image that is used for tracking purposes: each time a user navigates to the page, the hidden image is requested; the server uses the information from the request of the image to build a picture of the visitors to the site.
This facility can be used for malicious purposes. An <img> element can point to any URI (not just an image) and causes a GET request on that resource. If a website has been constructed such that GET requests cause an action to be carried out (such as logging out of a website), a page that includes this "image" will cause the action to take place.
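The fragment below sketches both patterns with invented URIs: the first image is a hidden tracking pixel, and the second abuses a hypothetical site whose logout action is triggered by a plain GET request.

```html
<!-- Hidden "tracking pixel": every view of this page triggers a GET request
     that the tracking server (a placeholder URI) can log -->
<img src="https://tracker.example.com/pixel.gif?page=article-42"
     width="1" height="1" alt="" style="display:none">

<!-- Malicious inclusion: if the target site (hypothetical) logs users out in
     response to a GET request, merely viewing a page containing this "image"
     logs the visitor out of that site -->
<img src="https://social.example.org/logout" alt="">
```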
Linking is a fundamental notion for the web. HTML pages use <a> elements to insert links to other pages on the web, with the href attribute holding the URI for the linked page. Some of the links will be to pages from the same origin; others will be cross-origin links to pages on third parties' sites that hold related information.
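For example, a page might contain both kinds of link; the URIs are illustrative:

```html
<!-- Same-origin link: a relative URI that resolves against the page's own site -->
<p>See our <a href="/about">about page</a> for background.</p>

<!-- Cross-origin link to a page on a third party's site -->
<p>Further detail is given in
  <a href="https://en.wikipedia.org/wiki/Hyperlink">Wikipedia's article on hyperlinks</a>.</p>
```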
Legislation and licenses that forbid linking to particular pages on the web undermine the central mechanism for the way the web works.
Websites cannot prevent others from linking to their pages (addressing a page within another page) but can prevent access to those pages when they are accessed through those links.
Linking to Illegal Material
Website owners could easily fall foul of legislation or licenses that ban links to pages that contain illegal information, because they are not in control of the information on the linked pages.
A user can usually tell where a link is going to take them, prior to selecting (clicking, tapping, etc.) it, through the browser UI (e.g. by "mousing over" it), or after the link is selected, through the status bar in the browser, although some links are overridden by onclick event handling that takes them to a different location. Some websites, such as Wikipedia, use icons to indicate when a link is a cross-origin link and when it will take a user to a page on the same server. The use of interstitial pages or dialog boxes which warn the user that they are about to leave the site in question can obscure the eventual destination of the link, as discussed in .
If the link is a cross-origin link (or even in some cases where it is an internal link), the publisher of the origin page will have no control over the content or access policies of the linked page. These are the responsibility of the publisher of that page; the TAG Finding on "Deep Linking" in the World Wide Web [DEEPLINKING] describes the ways in which publishers can control access to their pages and the fundamental principle that addressing (linking to) a page is distinct from accessing it.
As described above, in the context of the architecture of the World Wide Web, in linking from one page to another, a Web page author is referring to a part of the linked-to website or service. These links, as accessed through and processed by browsers, are designed to be public identifiers. The existence of the link does not imply the right to access, and websites are free to use any one of many access control techniques to restrict access. Hence, linking is a "speech act". It is the opinion of the TAG that linking should therefore enjoy the same protections as any other type of protected speech.
Freedom of expression (speech) is a right enshrined in Article 19 of the Universal Declaration of Human Rights. Countries that adhere to this idea of freedom of expression implement it in differing terms. However, since linking constitutes a type of expression, inasmuch as there is a right to freedom of expression, this right encompasses the right to link.
Traditionally, a user must take a specific action in order to navigate to the linked page, such as by clicking on the link or selecting it with a keystroke or a voice command. In these cases, the linked page cannot be accessed without the user's knowledge and consent (though they may not know where they will eventually end up).
Page Access might not be Purposeful
The appearance of a page within a browser's cache does not necessarily mean the user purposefully navigated to a page.
There are three practices used by some sites that mean that users do not necessarily have control over whether a link is followed:
- A page can use the prefetch link relation in a link to instruct the browser to fetch another page in advance. For example, a page might indicate that the first result in a list of search results should be fetched before the user actually navigates the link (see the markup sketch after this list).
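A sketch of such a hint, with a placeholder URI; the browser may fetch the linked resource before the user ever selects it:

```html
<!-- Hint that the top search result is likely to be visited next, so the
     browser may fetch it before the user follows the link -->
<link rel="prefetch" href="https://www.example.com/results/top-hit">
```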
The description above about how information is published on the web highlights how hard it can be for end users (both human and machine) to be aware of the original source of content on the web, and the ways in which it may have been changed en route to them. It also shows that the controllers of content need to be clear about how that content can be used elsewhere, both through human-readable prose and in the technical barriers that they put up that limit access. Third parties that use that content, whether proxies, reusers or linkers, should also follow some best practices in transformation, reuse and links to information.
Once material is put on the public web (that is, on the internet and unprotected by authentication barriers), it is impossible to completely limit how that material is used through technical means — HTTP headers can be faked, metadata can be ignored. However, there are a number of standard techniques that controllers can use to indicate how they intend their material to be used, which intermediate servers should pay attention to.
Publishers can control access to resources that are unprotected by authentication through HTTP, by refusing or redirecting connections to particular resources based on:
- the Referer HTTP header; this is useful for preventing linking to particular resources from outside a website, or preventing the inclusion of a resource in another website
- the User-Agent HTTP header; this is particularly useful for preventing access from crawlers
As well as the techniques above, which can be used to control any access to pages, it's also possible to provide additional control over the inclusion of resources in a third-party's web pages.
In the case of HTML pages, publishers can include a script that checks whether the document is the top document in the window, to prevent it from being embedded within a frame.
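A minimal sketch of such a check follows: if the document finds itself embedded in a frame, the script navigates the top-level window to the page's own address. (Newer mechanisms such as the X-Frame-Options header serve a similar purpose, but the scripted check matches the approach described here.)

```html
<script>
  // If this document is not the topmost one in the window, it has been
  // framed by another page: break out by pointing the top-level window
  // at this page's own address
  if (window.top !== window.self) {
    window.top.location = window.self.location.href;
  }
</script>
```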
The Cross-Origin Resource Sharing Working Draft [[CORS]] defines a set of HTTP headers that can be used to give the publisher of the third-party resource greater control over access to their resources. These are usually used to open up cross-origin access to resources that publishers want to be reused, such as JSON or XML data exposed by APIs, by indicating to the browser that the resource can be fetched by a cross-origin script.
Note: A new Embed-Only-From-Origin HTTP header is also currently under discussion by the Web Applications Working Group and described within the Cross-Origin Resource Embedding Restrictions Editor's Draft [CORER]. This would enable publishers to control which origins are able to embed the resources they publish into their pages.
Publishers should ensure actions are not taken on behalf of their users in response to an HTTP GET on a URI, as otherwise sites are open to security breaches through inclusions, as described in . It is also good practice to check the Referer header in these cases, to prevent actions being taken as the result of the submission of forms within other websites' web pages, unless that functionality is desired.
There are a number of HTTP headers [[HTTP11]] that enable content providers to indicate whether a proxy should cache a given page and for how long it should keep the copy. These are described in detail within Section 13: Caching in HTTP. For example, a server can use the HTTP header Cache-Control: no-store to indicate that a particular resource should not be cached by a proxy server.
Publishers of websites can also indicate which pages should not be fetched or indexed by any search engine or archive through robots.txt [ROBOTS] and the robots <meta> element [META]. They can indicate other characteristics of web pages, such as how frequently they might change and their importance on the website, through sitemaps [SITEMAPS]. More sophisticated publishers may use the Automated Content Access Protocol (ACAP) extensions [ACAP] to attempt to indicate access policies.
Publishers can also use the rel="canonical" link relationship to indicate a canonical URI for a page, which should be used by search engines and other reusers to reference a given page.
The Cache-Control: no-transform HTTP header indicates that a proxy server must not change the original content, nor the headers. For example, a proxy server must not convert a TIFF served with Cache-Control: no-transform into a JPEG, nor should it rewrite links within an HTML page.
One reason that people linking to websites use misleading links is that the original URLs are too long to incorporate into space-limited documents, such as short-form posts or printed media. Although it is possible to use third-party link shortening services, origin websites can also set up link shorteners for their own content, and then use the rel="shortlink" link relationship to point from the original page to the short link for that page.
Websites should indicate a license that describes how the information within the website can be reused by others. The license should be referenced from every page that comes under the license, as both humans and computers will often enter a site on a page other than the home page of the site.
Just as with HTTP headers, robots.txt and sitemaps, there can be no guarantees that crawlers will honour license information within a site. However, to give good-faith crawlers a chance of identifying the license under which a page is published, websites should:
- use the cc:license property to indicate the license of included resources, such as images or videos (a markup sketch follows below)
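A sketch of how a page might express this in markup, with placeholder URIs: a rel="license" link for the page as a whole, and a cc:license statement in RDFa for an included image.

```html
<!-- License for the page as a whole -->
<link rel="license" href="https://creativecommons.org/licenses/by/4.0/">

<!-- License for an included image, stated with RDFa and the Creative Commons
     vocabulary; the image path is a placeholder -->
<div prefix="cc: http://creativecommons.org/ns#" about="photos/sloth.jpg">
  <img src="photos/sloth.jpg" alt="A pygmy three-toed sloth">
  <a rel="cc:license"
     href="https://creativecommons.org/licenses/by/4.0/">Licensed under CC BY 4.0</a>
</div>
```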
This section describes the techniques that you should use when operating a website that incorporates material from other sources, whether caching, transforming or simply linking.
As described in and , there are a number of HTTP headers and other conventions that indicate how an origin server intends other servers to treat the resources that they publish. Servers that cache or transform data from origin servers should obey these headers, which exist to ensure that the end user receives current information in the intended form.
Proxies must use the Via HTTP header when they handle requests to origin servers, to indicate their involvement in the response to the user's original request. Proxies which perform transformations on a document must include a Warning: 214 Transformation applied HTTP header in the response.
These and other recommendations for proxies which perform transformations are included in the Guidelines for Web Content Transformation Proxies 1.0.
Many licenses require the reusers of information to provide attribution to the original source of the material. This attribution must be human-readable, so that users of your website understand where the material came from, and may also be computer-readable, which enables automated tools to track the use of material on the web.
The wording and positioning of attribution is usually dictated by the license under which the material is made available. For example, the license for the free icons available from Axialis Software includes:
If you use the icons in your website, you must add the following link on each page containing the icons (at the bottom of the page for example): Icons by Axialis Team
The HTML code for this link is: <a href="http://www.axialis.com/free/icons">Icons</a> by <a href="http://www.axialis.com">Axialis Team</a>
If there is no explicit guidance about the location of attribution, it is recommended to include attribution to material from a third party as close to the use of that material as possible. Methods to make the attribution machine-readable include the following (an illustrative fragment follows the list):
- the cite attribute on the <blockquote> element, where a portion of a page is quoted within your own site
- the dc:source property with microformats, microdata or RDFa to indicate the source of a portion of the page (identified through an id attribute)
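An illustrative fragment showing both approaches; the source URI is a placeholder:

```html
<!-- Quoting a passage, with the source given in the cite attribute -->
<blockquote id="sloth-quote" cite="https://example.org/articles/sloths">
  <p>The pygmy three-toed sloth is found only on Isla Escudo de Veraguas.</p>
</blockquote>

<!-- The same source stated machine-readably with RDFa and dc:source -->
<p prefix="dc: http://purl.org/dc/terms/" about="#sloth-quote">
  Source: <a rel="dc:source"
             href="https://example.org/articles/sloths">example.org article on sloths</a>
</p>
```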
An example of clear attribution of material from another site is that of the BBC Wildlife Finder; the following screenshot shows the attribution within the page on the Pygmy Three-toed Sloth.
There are a number of practices around linking to a third-party site that can help users and automated agents to understand the relationship between your website and the third parties. These include:
rel="nofollow"for links where the link is not meant to imply approval; these will not be used by search engines when determining the relevance for a page
rel="external"for links to third-party web pages; this can be used as the basis of styling, such as an image that indicates the user will be taken to a separate site
There are a number of techniques that can be used to track which links are followed from a website. Methods that rewrite the links within a web page, either to point to an interstitial ("you are leaving this website") page or to route through a script, can mislead the user and any automated agents about the target of the link. It is better to use a script to capture onclick or other events and redirect the user at that point.
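A minimal sketch of this approach, with placeholder URIs: the link's href continues to point at the real destination, and a click handler reports the outbound link separately (the /track-outbound endpoint is hypothetical).

```html
<a class="tracked" href="https://example.org/interesting-article">An interesting article</a>

<script>
  // The href above remains the true destination, so users and automated
  // agents can see where the link leads; tracking only happens on click
  document.querySelector('a.tracked').addEventListener('click', function () {
    // Report the outbound URI to a (hypothetical) analytics endpoint without
    // delaying the navigation
    navigator.sendBeacon('/track-outbound', this.href);
  });
</script>
```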
Many thanks to Robin Berjon for ReSpec.js.