Uniform Access to Links and Properties
A summary document based on this wiki page, written for the TAG, is at http://www.w3.org/2001/tag/doc/uniform-access.html
This is a bibliographic summary of the thread started by Stuart Williams on www-tag here. Page title changed 2/29/08 following a suggestion of Henry Thompson, and again on 4/29/08 to recognize that the scope is broader than just descriptions.
The subject is a proposal to standardize a way to obtain information (links and properties) about a thing using the thing's URI. (When the thing is a document, the information is called "metadata".) The information we have in mind comes not from an arbitrary source but is associated with the administrator of the URI. One way to do this is to communicate the location of such a description in a new HTTP response header added to GET responses; another is to invent a new HTTP request method for the purpose. Descriptions might include information about the thing such as bibliographic metadata, factual information, reviews or assessments, related materials, access control or licensing information, etc.
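As a purely illustrative sketch of the first mechanism, the following Python fragment stands up a throwaway HTTP server that advertises the location of a description in a Link: response header, then fetches it. The /doc.csv and /doc-meta.rdf paths and the rel="describedby" relation are assumptions made for this example, not part of any adopted specification.

```python
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Advertise the location of the resource's description in a
        # Link: response header -- one of the mechanisms proposed above.
        # The target URI and relation name are invented for this sketch.
        self.send_response(200)
        self.send_header("Content-Type", "text/csv")
        self.send_header("Link", '</doc-meta.rdf>; rel="describedby"')
        self.end_headers()
        self.wfile.write(b"name,value\nexample,1\n")

    def log_message(self, *args):
        pass  # silence per-request logging

# Serve on an ephemeral port in a background thread.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/doc.csv" % server.server_port
with urllib.request.urlopen(url) as resp:
    link = resp.headers.get("Link")
print(link)  # </doc-meta.rdf>; rel="describedby"
server.shutdown()
```

A client that understands the convention would then dereference the advertised URI to obtain the description, without any change to how it retrieves the resource itself.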
A thing may have many descriptions in many places, but the one of interest here is the one provided by the entity responsible for the thing's URI. For documents (especially changing ones), one might expect such a description to be able to say things about the document that its available representations cannot.
"Thing" is jargon used to permit semantic web use cases where the URI doesn't name a document. If you prefer you may forget about this use case and just say "resource".
An excellent summary, created independently, can be found here: http://www.hueniverse.com/hueniverse/2008/09/discovery-and-h.html
Use cases
- How do I find the name of the author of a CSV file (i.e. a resource whose only representation is a comma-separated-value table)?
- How might I find out the difference between the resources intended to be denoted by the two URIs http://www.w3.org/TR/grddl/ and http://www.w3.org/TR/2007/REC-grddl-20070911/ ?
- Reviving HTTP Header Linking: Some code and use-cases - March 11, 2008
- GRDDL use case
- POWDER (Phil Archer)
- Bibliographic metadata in electronic publishing (J Rees)
- URI declarations (David Booth)
- any situation where FollowYourNose would help: metadata browsing, RDF search engines, looking for definitions, "web closure"
- Why you can't put the information in the document (J Rees)
N.b. Tabulator (http://www.w3.org/2005/ajar/tab) implements the Link: header (with rel=meta)
The Link: HTTP header
- TimBL's discussion in this wiki: LinkHeader
- Metadata linking - A. Daviel, May 1997 (email)
- "Bringing Back the Link - With a Twist" - Mark Nottingham June 2006
- "Input to IETF HTTP Link Headers", blog entry, Phil Archer, POWDER chair, Mar 2007
- assuming HTTP Link will get ratified? - DanC to POWDER WG, 2007-10-05
- From SWIG IRC. TimBL to Sean Palmer: "Link was, I am told, omitted by mistake"
- HTTP Header Linking, Mark Nottingham - Internet-Draft, December 1, 2008
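To make the mechanism concrete, here is a deliberately simplified sketch of a Link: header parser in Python. It assumes no commas or semicolons occur inside quoted strings or target URIs, which the real header grammar permits, so it is not a conforming parser.

```python
import re

def parse_link_header(value):
    """Split a Link: header into (target URI, {param: value}) pairs.

    Simplified sketch: assumes commas separate links and semicolons
    separate parameters, with neither appearing inside quoted strings.
    """
    links = []
    for part in value.split(","):
        pieces = part.split(";")
        m = re.match(r'\s*<([^>]*)>\s*', pieces[0])
        if not m:
            continue  # skip anything that is not a <URI-Reference>
        params = {}
        for p in pieces[1:]:
            if "=" in p:
                k, v = p.split("=", 1)
                params[k.strip()] = v.strip().strip('"')
        links.append((m.group(1), params))
    return links

header = '</meta.rdf>; rel="meta", </license>; rel="license"'
print(parse_link_header(header))
# [('/meta.rdf', {'rel': 'meta'}), ('/license', {'rel': 'license'})]
```

Even this toy version shows why some (see the parsing complaint under "Other ways" below) would prefer one header per link type: a single Link: header multiplexes several links and parameter lists into one string.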
Other HTTP headers discussion
- Resource-description: header proposed by Jonathan Borden in 2003
- Indexing non-HTML objects, Benjamin Franz, 1997
- Ed Davies makes the case on www-tag, Aug 2007
- Description-ID: header suggested by TimBL, Dec 2007
303 asymmetry discussion
- Mikael Nilsson urges TAG to take up the issue, Sep 2007
- 303 Asymmetry, blog post by Ian Davis, Dec 2007
Other ways of getting a description through HTTP
- One observer says the approach of returning description (metadata) in an HTTP response header is "egregious" - "this isn't what GET is for"
- URIQA, Patrick Stickler, Nokia, proposes an MGET HTTP method
- WebDAV PROPFIND ?
- RDDL ?
- A suggestion of Alan Ruttenberg - /about/
- ARK - append '?' to the URI to get URI of 'a brief metadata record'
- Find content via metadata - URI1 leads (via #-truncation or a 303 redirect) to an RDF document with URI URI2. The RDF document contains metadata along with a triple asserting that the bits for the content named by URI1 can be found by dereferencing URI3.
- Use content negotiation. If you ask for RDF, you get the description. If you ask for something else, you get the thing described. (The TAG, TimBL, and others have pointed out that this contradicts web architecture, which requires that content negotiation choose among things that all carry the same information. That goes for CN between RDF and HTML as much as it does for CN between GIF and JPEG.)
- Use a multipart response, with description in one part and the intended payload in another
- Link: headers are hard to parse. How about distinct header types for each link type? Brian Smith to ietf-http-wg
- voiD makes it possible to describe linked datasets and their interlinking, including examples, SPARQL endpoints, etc.
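Several of the conventions above amount to simple URI rewrites. A hedged sketch of three of them follows: ARK's append-'?' rule and #-truncation are as described above, while the /about/ path layout is an assumption made for illustration (the actual proposal does not pin down a syntax here).

```python
from urllib.parse import urldefrag, urlsplit, urlunsplit

def ark_metadata_uri(uri):
    """ARK: appending '?' yields the URI of a brief metadata record."""
    return uri + "?"

def hash_truncate(uri):
    """#-truncation: drop the fragment to reach the describing document."""
    return urldefrag(uri)[0]

def about_metadata_uri(uri):
    """Sketch of the /about/ suggestion; this particular path layout
    is an assumption for this example, not a specified convention."""
    parts = urlsplit(uri)
    return urlunsplit((parts.scheme, parts.netloc,
                       "/about" + parts.path, parts.query, ""))

print(ark_metadata_uri("http://example.org/ark:/12025/654xz321"))
# http://example.org/ark:/12025/654xz321?
print(hash_truncate("http://example.org/doc#thing"))
# http://example.org/doc
print(about_metadata_uri("http://example.org/doc"))
# http://example.org/about/doc
```

The appeal of such rewrites is that a client needs no extra round trip to discover the description's URI; the drawback is that each is a per-scheme or per-site convention rather than a uniform mechanism.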
Precedent set by other protocols
- Message-Context: header described in RFC 3458
- LSID protocol - provides separate getData and getMetadata methods
- Handle system (best known for its application to DOIs) - the protocol provides only description (property/value pairs); the data / document is obtained using a URL that is a property value
- XRDS-SIMPLE (work in progress: http://xrds-simple.net/core/1.0/)
- OAI-PMH (protocol for metadata harvesting)
- (Henry is supposed to help me here.)
Open questions
- Is it appropriate to use HTTP headers in this way? Will all servers and clients that ought to be able to use this mechanism be willing and able to?
- What can we do to help ensure that clients understand which information (description) is about the resource itself (if an IR, then invariant across time and independent of choice of language and media type) and which information is specific to that particular response (awww:representation, if an IR)?
- Is uniform access desirable, appropriate, necessary?
- Should we encourage a mechanism that allows description/metadata/links to be separated from the content? (Consider copyright and licensing information, which needs to stay as close as possible to the content.)
- Should clients even expect to be able to get this kind of information from the server specified by the URI, or should we set the expectation that independent catalogs and aggregators have to be consulted?