TAG Developer Meetup: 22 July in Cambridge, Massachusetts, USA

The TAG will be holding a developer meetup alongside our next face-to-face meeting in Cambridge, Mass. The event will take place on the evening of the 22nd of July, hosted by Akamai and organized by the BostonJS meetup group. Thanks to both Akamai and BostonJS for helping us out!

As with our previous TAG developer meet-ups, this will be a pretty simple format. We’ll get the TAG members up on stage for a panel discussion about some of the topics we’re covering. We’re going to let you know what we’re working on, answer questions and hopefully engage in some spirited discussion. The event is open to anyone interested in web architecture, web development, web standards and the future of web tech. It’s free to attend, and you do not have to already be a member of BostonJS (though you will have to join that meetup group if you want to register). The event is also listed on Lanyrd (though you must register through the BostonJS meetup group to attend).

Capability URLs: We Need Your Feedback

The battle for web security and privacy is fought at many levels. Sometimes common practice in web application design can lead to data leakage with unintended consequences for users. A good example of this came up recently, when confidential files shared through popular web-based document sharing services were exposed to third parties because the private URLs used to share them had been unintentionally leaked.

URLs that allow a user to access an otherwise privileged resource or piece of information are called Capability URLs, and while they can be powerful, they can also cause problems when used improperly.

TAG member Jeni Tennison has been working on a draft defining the space of capability URLs and outlining some good practices for usage. We think this document should be useful for web builders who are thinking about incorporating this pattern into their applications. We think it’s pretty good, but we need your feedback before we finalize it and release it as a TAG finding.

The draft may be found at http://www.w3.org/TR/capability-urls/. If you have feedback, you are encouraged to raise an issue on GitHub or e-mail us on the TAG public mailing list. Thanks!

Extensible Web Summit Roundup

On April 4th, with the much-appreciated help and support of Adobe Systems, the TAG organized an event in San Francisco called the Extensible Web Summit. As I wrote before the event, the intention was to bring together web developers and web platform developers from the local area to discuss upcoming, in-development web platform technologies and standards, both to throw a spotlight on some of the topics we think are key and to guide the TAG’s thinking. Judging from the feedback we’ve received, I think we achieved these goals and more.

Quite a few positive views were shared on Twitter, and the summit has inspired a few blog posts, from Alan Stearns, Brian Kardell and Simon St. Laurent.

The final schedule of sessions and links to the unfiltered notes from these sessions can be found on the Lanyrd page for the event. Thanks again to everyone who came along! Based on the feedback we received, I think it’s likely we’ll be running events in the future in the same format. Watch this space and follow @w3ctag on Twitter to keep up to date.

The Extensible Web Summit

We’re running an event on Friday the 4th of April in San Francisco and you’re invited.

For the past year, alongside our regular face-to-face meetings, the TAG has been holding evening developer meet-ups under the moniker “Meet the TAG.” The radical idea has been to take advantage of the fact that our meetings happen in cities with large concentrations of Web developers to connect with those developer communities in a meaningful and useful way. The hoped-for outcome is both to keep developers informed about what we’re doing, presumably on their behalf, with their platform, and to drive some feedback back into the TAG to help guide our thinking and our work. The results have generally been good, generating useful criticism and feedback which we’ve tried to take on board, and making the TAG less of an echo chamber. Along the way we’ve met lots of developers who take a keen interest in web architecture and the future of the web platform.

For our next face-to-face meeting in San Francisco, we’re planning to expand this idea to a full-fledged one-day event, bringing in web developers and web platform developers, people who are deeply invested in the web platform but may not be participating directly in standards. We’re calling this event the Extensible Web Summit.

As Twitter denizen Daniel Buchner put it:

@briankardell @wycats @w3ctag are you telling me we’re going to try developing API “products” using direct “customer” feedback? Mind = blown

— Daniel Buchner (@csuwildcat) December 9, 2013

The web is 25 years old. What do we want this platform to look like 25 years from now? This event will bring together web platform developers and practitioners from different communities and backgrounds to focus on the future of the web architecture and platform. With the exception of some curated lightning talks at the beginning of the day to set the scene, the event will be run as an unconference, with the agenda self-organized by the participants. We’d like to thank Adobe Systems for stepping forward as our host for this event.

Who should attend? We would ideally like to pack this event with platform developers, framework developers and web developers with an interest in helping to drive the future of the web platform. If you read and liked what you saw in the Extensible Web Manifesto and you’d like to learn more and influence the direction of this thinking, then we’d love to have you along. Likewise if you are interested in other web technologies such as real-time communication, platform & device APIs, security, permissions, manifests & packaging, offline usage, JavaScript promises & streams, push notifications and touch. The event is free to attend, but it is filling up fast. If this sounds like your cup of tea, visit our Lanyrd page and grab a ticket.

TAG Election: Decision 2013

Within the W3C, the TAG is chartered with the “stewardship” of Web Architecture:

  • to document and build consensus around principles of Web architecture and to interpret and clarify these principles when necessary;
  • to resolve issues involving general Web architecture brought to the TAG;
  • to help coordinate cross-technology architecture developments inside and outside W3C.

It’s one thing to be a steward of something fairly stable, but the Web is currently in a process of upheaval unknown since its inception nearly 20 years ago. Among the challenges we currently face are the rise of mobile platforms and the accompanying changes in the way people find, interact with and produce services and content on the Internet; the increasing maturity of HTML5, which has been produced and developed under a new model shared between the W3C and the WHATWG; the maturity of Web applications and the rise in importance of JavaScript in the Web platform; and the increasing maturity and complexity of video, 2D and 3D graphics and peer-to-peer communications as first-class citizens of the Web. The Web is under existential threat from native mobile application development approaches, and at the same time there has never been a time of greater innovation and energy in the development of Web technologies and standards.

Against this backdrop, the challenges of stewardship of the Web Architecture become clear.

This past year, the TAG has sought more and deeper connections with other W3C working groups developing these new technologies, as well as with groups external to the W3C, including IETF working groups and ECMA’s TC39. We have also sought a stronger line of communication directly to Web developers, particularly those in the Web development community who are interested in seeing the continuation of a coherent architecture for the World Wide Web. In order to continue and expand this mission, the TAG is peopled by members from a diverse range of backgrounds, put forward by their organizations to help us in this work and to take responsibility for this stewardship by spending their time and energy for the benefit of the Web platform.

The TAG is now in an election cycle and one of our long-serving members, Henry Thompson of the University of Edinburgh, will not be standing for another term. We’re sad to see Henry go, but his departure underscores the importance of the election cycle. As per W3C process, W3C member organizations are responsible for nominating individuals to run in the TAG election. As a co-chair of the TAG, I encourage you to take this opportunity to influence the future make-up and priorities of the TAG. For the last two election cycles, individuals have written position statements, examples of which can be found here, here and here. These position statements have proven very useful even post-election as a way to help shape the agenda of the TAG. Your W3C Advisory Committee representative must make the nomination. Even if you are not affiliated with a W3C member organization, you can still participate in this process if you have a W3C member organization nominate you and if you are able to commit the time and participate in the meetings. The deadline for nominations is 23:59, Boston time, on 29 November 2013.

For those of you attending the W3C TPAC week next week in Shenzhen, if you want to ask any questions in person feel free to ask me or one of the other TAG members attending TPAC. We will also be up on stage during the main technical plenary on Wednesday (currently scheduled for 11:00). If you’re not attending TPAC, feel free to get in touch on the public TAG list, by email, Twitter, carrier pigeon or similar.

The Upcoming TAG By-Election

A by-election happens when a seat becomes vacant between regularly scheduled elections. Such a situation has recently arisen in the TAG, and we are now in the middle of a special election process to fill the seat vacated by Marcos Caceres due to his recent affiliation change. Two candidates have been put forward by their respective organizations: Frederick Hirsch from Nokia and Sergey Konstantinov from Yandex. Under W3C rules, the Advisory Committee representatives of W3C member companies must now vote for one of these two candidates to fill the vacant seat.

We are in the midst of big changes in the TAG. The TAG is under “new management” (Peter Linss and I have recently been appointed as co-chairs, replacing the irreplaceable Noah Mendelsohn) and has a number of new members and new work items. We are reviving the TAG blog as a public mouthpiece for TAG members and a place to anchor discussions; we are moving towards doing more of our work on GitHub; and the substance of our work is growing to encompass the interface between JavaScript and HTML5, the extensibility of the Web, and work on a second edition of the original Architecture of the World Wide Web document for which the TAG is probably most famous. This TAG has a new attitude and a new slate of work, which has at least partially stemmed from the mandate we perceive ourselves to have taken from last year’s TAG election. For example, the candidacy statements of at least two of the new TAG members elected last year (Yehuda Katz and Alex Russell) included increased communication with the Web developer community and increased coordination with bodies such as TC39; the TAG is now working on making good on these commitments.

Both candidates have written blog posts which detail their positions, approach and thoughts on the future of the TAG.

Frederick Hirsch:
http://fhirsch.blogspot.co.uk/2013/07/what-should-w3c-tag-do-next.html

Sergey Konstantinov:
http://twirl-team.ya.ru/replies.xml?item_no=1036
http://konstantinov.cc/post/54428368997/to-the-developers-and-beyond

If you are a member of an organization that belongs to W3C, then you have a chance to influence this election and the future make-up and work program of the TAG. I encourage you to read these posts and to encourage your Advisory Committee representative to participate in the election. Votes must be cast by Tuesday the 16th of July.

Open Web Platform Weekly Summary – 2013-02-11 – 2013-02-18

Another installment of the weekly Open Web Platform Summary, covering February 11 to 18, 2013. This is a short one. You can also read last week’s edition. Your comments are welcome.

CSS Custom Filters and CSS Shaders in SVG WG

CSS has a draft specification on how to apply effects to an element before rendering it. It allows for custom filter effects, which are basically an extension point. The SVG Working Group sent an email asking if it would be possible to reserve webgl as a keyword for CSS Shaders (see also the SVG WG minutes). Tab Atkins recommended using the following syntax:

@supports (filter-type(webgl)) {
  @filter curl { ... }
}

James Robinson noted that supporting WebGL and CSS shaders are different things and suggested using another keyword, which is fine with the SVG Working Group.

Mouse Events Soon To Be UI Events?

Anne van Kesteren (Mozilla) is proposing to move all of the mouse events into the UI Events specification.

Proposal to add getClientRect method to CaretPosition

Scott Johnson (Mozilla) is proposing to add a new method, getClientRect(), to the CaretPosition interface (Editor’s draft) for tracking changes to caret positions across reflows. During editing, for example in a text area, the document layout might change, which requires placing the caret at a new position. This new method would help with that.
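
To illustrate the kind of usage the proposal enables, here is a hypothetical sketch. caretPositionFromPoint() is the existing CSSOM View API; getClientRect() is the method being proposed, so treat the code below as illustrative rather than something you can rely on today:

// Hypothetical sketch: find the caret under a point and ask for its box.
// getClientRect() is the proposed method; illustrative only.
function logCaretRect(x, y) {
  var caret = document.caretPositionFromPoint(x, y);
  if (!caret || !caret.getClientRect) {
    return; // API not supported in this browser
  }
  var rect = caret.getClientRect();
  console.log('Caret is in', caret.offsetNode, 'at offset', caret.offset);
  console.log('Caret box:', rect.top, rect.left, rect.width, rect.height);
}

// For example, after an edit triggers a reflow, re-query the caret's
// rectangle to reposition custom UI (such as an autocomplete popup) next to it.
document.addEventListener('click', function (event) {
  logCaretRect(event.clientX, event.clientY);
});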

CSS Fonts and case matching

A new draft has been published for CSS Font with a nice addition. No need to worry anymore if the fonts have been written with the appropriate uppercase letter or not. It is now case insensitive. If you write everything lowercase, it should still be working. See the other changes.

Privacy, a document in need of love

The W3C TAG doesn’t have the resources to tackle the note on Patterns for Privacy by Design in JavaScript APIs. So if you are interested in actively maintaining that document, it is time for you to join the Privacy Interest Group (which is open to the public).

CSS Parser, from state machine to recursive-descent

Tab Atkins has rewritten the algorithm for parsing CSS from a state-machine approach to a recursive-descent one.
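
As a rough illustration of the recursive-descent style (this is a toy grammar invented for the example, not the actual CSS Syntax algorithm), each production becomes a function that consumes tokens and calls the functions for its sub-productions:

// Toy recursive-descent parser for a tiny grammar:
//   declaration = ident ":" ident ";"
// Purely illustrative; not the CSS Syntax module's algorithm.
function parseDeclaration(tokens) {
  var pos = 0;
  function expect(type) {
    var token = tokens[pos++];
    if (!token || token.type !== type) {
      throw new Error('Expected ' + type + ' at position ' + (pos - 1));
    }
    return token;
  }
  var property = expect('ident').value;
  expect('colon');
  var value = expect('ident').value;
  expect('semicolon');
  return { property: property, value: value };
}

// Tokens a tokenizer might produce for "color: red;"
console.log(parseDeclaration([
  { type: 'ident', value: 'color' },
  { type: 'colon' },
  { type: 'ident', value: 'red' },
  { type: 'semicolon' }
]));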

W3C TAG Publishes Finding on Identifying Application State

The W3C TAG is pleased to announce the publication of a new TAG Finding “Identifying Application State.”

URIs were originally used primarily to identify documents on the Web or, with the use of fragment identifiers, portions of those documents. As Web content has evolved to include Javascript and similar applications that have extensive client-side logic, a need has arisen to use URIs to identify states of such applications, to provide for bookmarking and linking those states, and so on. This finding sets out some of the challenges of using URIs to identify application states, and recommends some best practices. A more formal introduction to the Finding and its scope can be found in its abstract.
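
As a rough illustration of the pattern the finding discusses (the "item=" format below is invented for the example, not something the finding prescribes), an application can serialize part of its state into the fragment identifier so that bookmarking, linking and the back button all work:

// Minimal sketch: keep the currently selected item in the fragment so the
// state is bookmarkable, linkable and reachable via the back button.
function saveState(itemId) {
  location.hash = 'item=' + encodeURIComponent(itemId);
}

function restoreState() {
  var match = /^#item=(.+)$/.exec(location.hash);
  if (match) {
    showItem(decodeURIComponent(match[1])); // application-specific rendering
  }
}

function showItem(id) {
  console.log('Would fetch and display item', id);
}

window.addEventListener('hashchange', restoreState);
restoreState(); // restore state on initial load, e.g. from a bookmark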

The W3C TAG would like to thank Ashok Malhotra, who did much of the analysis and editing for this work, and also former TAG member T.V. Raman, who first brought this issue to the TAG’s attention, and who wrote earlier drafts on which this finding is based.

Hash URIs

Note: This was initially posted at http://www.jenitennison.com/blog/node/154.

There’s been quite a bit of discussion recently about the use of hash-bang URIs following their adoption by Gawker, and the ensuing downtime of that site. Gawker have redesigned their sites, including Lifehacker and various others, such that all URIs look like http://{domain}#!{path-to-content} (the #! is the hash-bang). The home page on the domain serves up a static HTML page that pulls in Javascript that interprets the path-to-content and requests that content through AJAX, which it then slots into the page. The sites all suffered an outage when, for whatever reason, the Javascript couldn’t load: without working Javascript you couldn’t actually view any of the content on the site. This provoked a massive cry of #FAIL (or perhaps that should be #!FAIL) and a lot of puns along the lines of making a hash of a website and it going bang.

For analysis and opinions on both sides, see:

  • Breaking the Web with hash-bangs by Mike Davies
  • Broken Links by Tim Bray
  • Hash, Bang, Wallop by Ben Ward
  • Hash-bang boom by Tom Gibara
  • Thoughts on the Hashbang by Ben Cherry
  • Nathan’s comments on www-tag

While all this has been going on, the TAG at the W3C have been drafting a document on Repurposing the Hash Sign for the New Web (originally named Usage Patterns For Client-Side URI parameters in April 2009), which takes a rather wider view than just the hash-bang issue, and on which they are seeking comments.

All matters of design involve weighing different choices against some criteria that you decide on implicitly or explicitly: there is no single right way of doing things on the web. Here, I explore the choices that are available to web developers around hash URIs and discuss how to mitigate the negative aspects of adopting the hash-bang pattern.

Background

The semantics of hash URIs have changed over time. Look back at RFC 1738: Uniform Resource Locators (URL) from December 1994 and fragments are hardly mentioned; when they are, they are termed “fragment/anchor identifiers”, reflecting their original use, which was to jump to an anchor within an HTML page (indicated by an `a` element with a `name` attribute; those were the days). Skip to RFC 2396: Uniform Resource Identifiers (URI): Generic Syntax from August 1998 and fragment identifiers have their own section, where it says:

When a URI reference is used to perform a retrieval action on the identified resource, the optional fragment identifier, separated from the URI by a crosshatch (“#”) character, consists of additional reference information to be interpreted by the user agent after the retrieval action has been successfully completed. As such, it is not part of a URI, but is often used in conjunction with a URI.

At this point, the fragment identifier:

  • is not part of the URI
  • should be interpreted in different ways based on the mime type of the representation you get when you retrieve the URI
  • is only meaningful when the URI is actually retrieved and you know the mime type of the representation

Forward to RFC 3986: Uniform Resource Identifier (URI): Generic Syntax from January 2005 and fragment identifiers are defined as part of the URI itself:

The fragment identifier component of a URI allows indirect identification of a secondary resource by reference to a primary resource and additional identifying information. The identified secondary resource may be some portion or subset of the primary resource, some view on representations of the primary resource, or some other resource defined or described by those representations.

This breaks away from the tight coupling between a fragment identifier and a representation retrieved from the web and purposefully allows the use of hash URIs to define abstract or real-world things, addressing TAG Issue 37: Definition of abstract components with namespace names and frag ids and supporting the use of hash URIs in the semantic web.

Around the same time, we have the growth of AJAX, where a single page interface is used to access a wide set of content which is dynamically retrieved using Javascript. The AJAX experience could be frustrating for end users, because the back button no longer worked (to let them go back to previous states of their interface) and they couldn’t bookmark or share state. And so applications started to use hash URIs to track AJAX state (that article is from June 2005, if you’re following the timeline).

And so we get to hash-bangs. These were proposed by Google in October 2009 as a mechanism to distinguish between cases where hash URIs are being used as anchor identifiers, to describe views, or to identify real-world things, and those cases where they are being used to capture important AJAX state. What Google proposed is for pages where the content of the page is determined by a fragment identifier and some Javascript to also be accessible by combining the base URI with a query parameter (_escaped_fragment_={fragment}). To distinguish this use of hash URIs from the more mundane kinds, Google proposed starting the fragment identifier with #! (hash-bang). Hash-bang URIs are therefore associated with the practice of transcluding content into a wrapper page.

To summarise, hash URIs are now being used in three distinct ways:

  1. to identify parts of a retrieved document
  2. to identify an abstract or real-world thing (that the document says something about)
  3. to capture the state of client-side web applications

Hash-bang URIs are a particular form of the third of these. By using them, the website indicates that the page uses client-side transclusion to give the true content of the page. If it follows Google’s proposal, the website also commits to making that content available through an equivalent base URI with a _escaped_fragment_ parameter.
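
To make the mapping concrete, here is a small sketch of how a client that cannot run Javascript (a crawler, say) could translate a hash-bang URI into its _escaped_fragment_ equivalent under Google’s convention. The URLs are examples, and the escaping here is simplified compared with Google’s exact rules:

// Sketch of Google's convention: rewrite #!{fragment} into
// ?_escaped_fragment_={fragment} and fetch that URL instead.
function escapedFragmentUrl(hashBangUrl) {
  var parts = hashBangUrl.split('#!');
  if (parts.length < 2) {
    return hashBangUrl; // not a hash-bang URI, nothing to rewrite
  }
  var base = parts[0];
  var fragment = parts.slice(1).join('#!');
  var separator = base.indexOf('?') === -1 ? '?' : '&';
  return base + separator + '_escaped_fragment_=' + encodeURIComponent(fragment);
}

console.log(escapedFragmentUrl('http://example.com/#!/posts/42'));
// -> "http://example.com/?_escaped_fragment_=%2Fposts%2F42"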

Hash-bang URIs in practice

Let’s have a look at how hash-bang URIs are used in a couple of sites.

Lifehacker

First, we’ll look at Lifehacker, which is one of Gawker’s sites whose switch to hash-bangs triggered the recent spate of comments. What happens if I link to the article http://lifehacker.com/#!5770791/top-10-tips-and-tricks-for-making-your-work-life-better?

The exact response to this request seems to depend on some cookies (it didn’t work the first time I accessed it in Firefox, having pasted the link from another browser). If it works as expected, in a browser that supports Javascript, the browser gets the page at the base URI http://lifehacker.com/, which includes (amongst a lot of other things) a script that POSTs to http://lifehacker.com/index.php?_actn_=ajax_post a request with the data:

  op=ajax_post
  refId=5770791
  formToken=d26bd943151005152e6e0991764e6c09

The response to this POST is a 53kB JSON document that contains a bit of metadata about the post and then its escaped HTML content. This gets inserted into the page by the script, to display the post. As this isn’t a GETtable resource, I’ve attached this file to this post so you can see what it looks like. (Honestly, I could hardly bring myself to describe this: a POST to get some data? a .php URL? a query parameter set to ajax_post? massive amounts of escaped HTML in a JSON response? Geesh. Anyway, focus… hash-bang URIs…)

A browser that doesn’t support Javascript simply gets the base URI and is none the wiser about the actual content that was linked to.

What about the _escaped_fragment_ equivalent URI, http://lifehacker.com/?_escaped_fragment_=5770791/top-10-tips-and-tricks-for-making-your-work-life-better? If you request this, you get back a 200 OK response which is an HTML page with the content embedded in it. It looks just the same as the original page with the embedded content.

What if you make up some rubbish URI, which in normal circumstances you would expect to give a 404 Not Found response? Naturally, a request to the base URI of http://lifehacker.com/ is always going to give a 200 OK response, although if you try http://lifehacker.com/#!1234/made-up-page you get page furniture with no content in the page. A request to http://lifehacker.com/?_escaped_fragment_=1234/made-up-page results in a 301 Moved Permanently to the hash-bang URI http://lifehacker.com/#!1234 rather than the 404 Not Found that we’d want.

Twitter

Now let’s look at Twitter. What happens if I link to the tweet http://twitter.com/#!/JeniT/status/35634274132561921?

Although it’s not indicated in the Vary header, Twitter determines what to do about any requests to this hashless URI based on whether I’m logged in or not (based on a cookie). If I am logged on, I get the new home page. This home page GETs (through various iframes and Javascript obfuscation) several small JSON files through Twitter’s API:

  • http://api.twitter.com/1/statuses/show.json?include_entities=true&contributor_details=true&id=35634274132561921: the details of the tweet
  • http://api.twitter.com/1/statuses/35634274132561921/retweeted_by.json?count=15: details about retweets
  • http://api.twitter.com/1/users/lookup.json?user_id=&screen_name=unhosted: details about the twitter user @unhosted, who was mentioned in the tweet

This JSON gets converted into HTML and embedded within the page using Javascript. All the links within the page are to hash-bang URIs and there is no way of identifying the hashless URI (unless you know the very simple pattern that you can simply remove it to get a static page).

If I’m not logged on but am using a browser that understands Javascript, the browser GETs http://twitter.com/; the script in the returned page picks out the fragment identifier and redirects (using Javascript) to http://twitter.com/JeniT/status/35634274132561921. If, on the other hand, I’m using curl or a browser without Javascript activated, I just get the home page and have no idea that the original hash-bang URI was supposed to give me anything different.

The response to the hashless URI http://twitter.com/JeniT/status/35634274132561921 also varies based on whether I’m logged in or not. If I am, the response is a 302 Found to the hash-bang URI http://twitter.com/#!/JeniT/status/35634274132561921. If I’m not, for example using curl, Twitter just returns a normal HTML page that contains information about the tweet that I’ve just requested.

Finally, if I request the _escaped_fragment_ version of the hash-bang URI http://twitter.com/?_escaped_fragment_=/JeniT/status/35634274132561921 the result is a 301 Moved Permanently redirection to the hashless URI http://twitter.com/JeniT/status/35634274132561921 which can be retrieved as above.

Requesting a status that doesn’t exist such as http://twitter.com/#!/JeniT/status/1 in the browser results in a page that at least tells you the content doesn’t exist. Requesting the equivalent _escaped_fragment_ URI redirects to the hashless URI http://twitter.com/JeniT/status/1. Requesting this results in a 404 Not Found result as you would expect.

Advantages of Hash URIs

Why are these sites using hash-bang URIs? Well, hash URIs in general have four features which make them useful to client-side applications: they provide addresses for application states; they give caching (and therefore performance) boosts; they enable web applications to draw data from separate servers; and they may have SEO benefits.

Addressing

Interacting with the web is all about moving from one state to another, through clicking on links, submitting forms, and otherwise taking action on a page. Backend databases on web servers, cookies, and other forms of local storage provide methods of capturing application state, but on the web we’ve found that having addresses for states is essential for a whole bunch of things that we find useful:

  • being able to use the back button to return to previous states
  • being able to bookmark states that we want to return to in the future
  • being able to share states with other people by linking to them

On the web, the only addressing method that meets these goals is the URI. Addresses that involve more than a URI, such as “search http://example.com/ with the keyword X and click on the third link” or “access http://example.org/ with cookie X set to Y” or “access http://example.net with the HTTP header X set to Y” simply don’t work. You can’t bookmark them or link to them or put them on the side of a bus.

Application state is complex and multi-faceted. As a web developer, you have to work out which parts of the application state need to be addressable through URIs, which can be stored on the client and which on a server. They can be classified into four rough categories; states that are associated with:

  1. having particular content in the page, such as having a particular thread open in a webmail application
  2. viewing a particular part of the content, such as a particular message within a thread that is being shown in the page
  3. having a particular view of the content, such as which folders in a navigational folder list are collapsed or expanded
  4. a user-interface feature, such as whether a drop-down menu is open or closed

States that have different content almost certainly need to have different URIs so that it’s possible to link to that content (the web being nothing without links). At the other extreme, it’s very unlikely that the state of a drop-down menu would need to be captured at all. In between is a large grey area, where a web developer might decide not to capture state at all, to capture it in the client, in the server, or to make it addressable by giving it a URI.

If a web developer chooses to make a state addressable through a URI, they again have choices to make about which part of the URI to use: should different states have different domains? different paths? different query parameters? different fragment identifiers? Hash URIs make states addressable that developers might otherwise leave unaddressable. To give some examples, on legislation.gov.uk we have decided to:

  • use the path to indicate a particular piece of content (eg which section of an item of legislation you want to look at), for example /ukpga/1985/67/section/6
  • use query parameters for particular views on that content (eg whether you want to see the timeline associated with the section or not), for example /ukpga/1985/67/section/6?view=timeline&timeline=true
  • use fragment identifiers to jump to subsections, for example /ukpga/1985/67/section/6#section-6-2
  • also use fragment identifiers for enhanced views (eg when viewing a section after a text search), for example /ukpga/1985/67/section/6#text%3Dschool%20bus

The last of these states would probably have gone un-addressed if we couldn’t use a hash URI for it. The only changes that it makes to the normal page are currently to the links to other legislation content, so that you can go (back) to a highlighted table of contents (though we hope to expand it to provide in-section highlighting). Given that we rely heavily on caching to provide the performance that we want and that there’s an infinite variety of free-text search terms, it’s simply not worth the performance cost of having a separate base URI for those views.
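
As a small sketch of how a page might act on that last kind of fragment (the fragment format mirrors the legislation.gov.uk example above; highlightSearchTerm() is a hypothetical function):

// Rough sketch: read an enhanced-view state encoded in the fragment,
// e.g. "#text%3Dschool%20bus", and apply it client-side.
function applyFragmentState() {
  var fragment = decodeURIComponent(location.hash.slice(1)); // "text=school bus"
  if (fragment.indexOf('text=') === 0) {
    highlightSearchTerm(fragment.slice('text='.length));
  }
}

function highlightSearchTerm(term) {
  console.log('Would highlight occurrences of:', term); // hypothetical
}

window.addEventListener('hashchange', applyFragmentState);
applyFragmentState();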

Caching and Parallelisation

Fragment identifiers are currently the only part of a URI that can be changed without causing a browser to refresh the page (though see the note below). Moving to a different base URI — changing its domain, path or query — means making a new request on the server. Having a new request for a small change in state makes for greater load on the server and a worse user experience due both to the latency inherent in making new requests and the large amount of repeated material that has to be sent across the wire.

Note: HTML5 introduces pushState() and replaceState() methods in its history API that enable a script to add new URIs to the browser’s history without the browser actually navigating to that page. This is new functionality, at the time of writing only supported in Chrome, Safari and Firefox (and not completely in any of them) and unlikely to be included in IE9. When this functionality is more widely adopted, it will be possible to change state to a new base URI without causing a page load.

When a change of state involves simply viewing a different part of existing content, or viewing it in a different way, a hash URI is often a reasonable solution. It supports addressability without requiring an extra request. Things become fuzzier when the same base URI is used to support different content, where transclusion is used. In these cases, the page that you get when you request the base URI itself gets content from the server as one or more separate AJAX requests based on the fragment identifier. Whether this ends up giving better performance depends on a variety of factors, such as:

  • How large are the static portions of the page (served directly) compared to the dynamic parts (served using AJAX)? If the majority of the content is static as a user moves through the site, you’re going to benefit from only loading the dynamic parts as state changes.
  • Can different portions of the page be requested in parallel? These days, making many small requests may lead to better performance than one large one.
  • Can the different portions of the page be cached locally or in a CDN? You can make best use of caches if the rapidly changing parts of a page are requested separately from the slowly changing parts.
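
A minimal sketch of the kind of fallback this note implies (the URL scheme and the render() function are invented for the example): use the history API where it exists, and fall back to a hash URI where it does not:

// Minimal sketch: prefer the HTML5 history API, fall back to a hash URI.
function navigateTo(path, state) {
  if (window.history && typeof history.pushState === 'function') {
    history.pushState(state, '', path);  // e.g. "/section/6?view=timeline"
    render(state);                       // update the page without a reload
  } else {
    location.hash = '!' + path;          // e.g. "#!/section/6?view=timeline"
  }
}

window.addEventListener('popstate', function (event) {
  if (event.state) {
    render(event.state); // restore the state the user navigated back to
  }
});

function render(state) {
  console.log('Would render state', state); // application-specific
}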

Distributed Applications

Hash URIs can also be very useful in distributed web applications, where the code that is used to provide an interface pulls in data from a separate, unconnected source. Simple examples are mashups that use data provided by different sources, requested using AJAX, and combine that data to create a new visualisation. But more advanced applications are beginning to emerge, particularly as a reaction to silo sites such as Google and Facebook, which lock us in to their applications by controlling our data. From the unhosted manifesto:

To be unhosted, a website’s code will need to be very ajaxy first, so that all the servers do is store and serve json data. No server-side processing. This is because we need to switch from transport-layer encryption to client-side payload encryption (we no longer necessarily trust the server we’re talking to). From within the app’s source code, that should run entirely in JavaScript and HTML5, json-objects can be stored, retrieved, sent, and received. The user will have the same experience (we even managed to avoid needing a plugin), but the website is unhosted in the sense that the servers you talk to only see encrypted data and don’t even know which application you are running.

The aim of unhosted is to separate application code from user data. This divides servers (at least functionally) into those that store and make available user data, and those that host applications and any supporting code, images and so on. The important feature of these sites is that user data never passes through the web application’s server. This frees users to move to different applications without losing their data. This doesn’t necessarily stop the application server from doing any processing, including URI-based processing; it is only that the processing cannot be based on user data — the content of the site. Since this content is going to be accessed through AJAX anyway, there’s little motivation for unhosted applications to use anything other than local storage and hash URIs to encode state.

SEO

A final reason for using hash URIs that I’ve seen cited is that it increases the page rank for the base URI, because as far as a search engine is concerned, more links will point to the same base URI (even if in fact they are pointing to a different hash URI). Of course this doesn’t apply to hash-bang URIs, since the point of them is precisely to enable search engines to distinguish between (and access content from) URIs whose base URI is the same.

Disadvantages of Hash URIs

So hash-bangs can give a performance improvement (and hence a usability improvement), and enable us to build new kinds of web applications. So what are the arguments against using them?

Restricted Access

The main disadvantages of using hash URIs generally to support AJAX state arise due to them having to be interpreted by Javascript. This immediately causes problems for:

  • users who have chosen to turn off Javascript because:
    • they have bandwidth limitations
    • they have security concerns
    • they want a calmer browser experience
  • clients that don’t support Javascript at all, such as:
    • search engines
    • screen scrapers
  • clients that have buggy Javascript implementations that you might not have accounted for, such as:
    • older browsers
    • some mobile clients

The most recent statistic I could find, about access to the Yahoo home page, indicates that up to 2% of access is from users without Javascript (they excluded search engines). According to a recent survey, about the same percentage of screen reader users have Javascript turned off. This is a low percentage, but if you have large numbers of visitors it adds up. The site that I care most about, legislation.gov.uk, has over 60,000 human visitors a day, which means that about 1,200 of them will be visiting without Javascript. If our content were completely inaccessible to them we’d be inconveniencing a large number of users.

Brittleness

Depending on hash-bang URIs to serve content is also brittle, as Gawker found. If the Javascript that interprets the fragment identifier is temporarily inaccessible or unable to run in a particular browser, any portions of a page that rely on Javascript also become inaccessible.

Replacing HTTP

There are other, less obvious, impacts which occur when you use a hash-bang URI. The URI held in the HTTP Referer header “MUST NOT include a fragment”. As Mike Davies noted, this prevents such URIs from showing up in server logs, and stops people from working out which of your pages are linking to theirs. (Of course, this might be a good thing in some circumstances; there might be aspects of the state of a page that you’d rather a referenced server not know about.)

You should also consider the impact on the future-proofing of your site. When a server knows the entirety of a URI, it can use HTTP mechanisms to indicate when pages have moved, gone, or never existed. With hash URIs, if you change the URIs you use on your site, the Javascript that interprets the fragment identifier needs to be able to recognise and support any redirections, missing, or never-existing pages. The HTTP status code for the wrapper page will always be 200 OK, but it will be meaningless. Even if your site structure doesn’t change, if you use hash-bang URIs as your primary set of URIs, you’re likely to find it harder to make a change back to using hashless URIs in the future. Again, you will be reliant in perpetuity on Javascript routing to decipher the hash-bang URI and redirect it to a hashless URI.

Lack of Differentiation

A final factor is that fragment identifiers can become overcrowded with state information. In a purely hash-URI-based site, what if you wanted to jump to a particular place within particular content, shown with a particular view? The hash URI has to encode all three of these pieces of information. Once you start using hash-bang URIs, there is no way to indicate within the URI (for search engines, for example) that a particular piece of the URI can be ignored when checking for equivalence. With normal hash URIs, there is an assumption that the fragment identifier can basically be ignored; with hash-bang URIs that is no longer true.

Good Practice

Having looked at the advantages and disadvantages, I would echo what seems to be the general sentiment around traditional server-based websites that use hash-bang URIs: pages that give different content should have different base URIs, not just different fragment identifiers. In particular, if you’re serving large amounts of document-oriented content through hash-bang URIs, consider swapping things around and having hashless URIs for the content that then transclude in the large headers, footers and side bars that form the static part of your site.

However, if you are running a server-based, data-driven web application and your primary goal is a smooth user experience, it’s understandable why you might want to offer hash URIs for your pages to the 98% of people who can benefit from it, even for transcluded content. In these cases I’d argue that you should practice progressive enhancement (a code sketch at the end of this section illustrates some of these steps):

  1. support hashless URIs which do not simply redirect to a hash URI, and design your site around those
  2. use hash-bang URIs as suggested by Google rather than simple hash URIs
  3. provide an easy way to get the sharable, hashless URI for a particular page when it is accessed with a hash-bang URI
  4. use hashless URIs within links; these can be overridden with onclick listeners for those people with Javascript; using the hashless URI ensures that ‘Copy Link Location’ will give a sharable URI
  5. use the HTML5 history API where you can to add or replace the relevant hashless URI in the browser history as state changes
  6. ensure that only those visitors that both have Javascript enabled and do not have support for HTML5’s history API have access to the hash-bang URIs by using Javascript to, for example:
    • redirect to a hash-bang URI
    • rewrite URIs within pages to hash-bang URIs
    • attach onclick listeners to links
  7. support the _escaped_fragment_ query parameter, the result of which should be a redirection to the appropriate hashless URI

This is roughly what Twitter has done, except that it doesn’t make it easy to get the hashless URI from a page or from links within the page. Of course the mapping in Twitter’s case is the straight-forward removal of the #! from the URI, but as a human it’s frustrating to have to do this by hand.

The above measures ensure that your site will remain as accessible as possible to all users and provide a clear migration path as the HTML5 history API gains acceptance. The slight disadvantage is that encouraging people to use hashless URIs for links means that you can no longer depend quite so much on caching, as the first page that people access in a session might be any page (whereas with a pure hash-bang scheme everyone goes to the same initial page).

Distributed, client-based websites can take the same measures — the application’s server can send back the same HTML page regardless of the URI used to access it; Javascript can pull information from a URI’s path as easily as it can from a fragment identifier. The biggest difficulty is supporting the static page through the _escaped_fragment_ convention without passing user data through the application server. I suspect we might find a third class of service arise: trusted third-party proxies using headless browsers to construct static versions of pages without storing either data or application logic. Time will tell.
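
Here is the sketch promised above, covering roughly steps 4 to 6 under the assumptions already stated: links keep hashless hrefs, Javascript intercepts clicks, the history API is preferred, and hash-bang URIs are only a fallback. loadContent() is a hypothetical application function:

// Sketch of the progressive-enhancement steps above. Links keep hashless
// hrefs so 'Copy Link Location' always yields a sharable URI; Javascript
// intercepts clicks, uses pushState where available, and only falls back
// to hash-bang URIs in older browsers.
document.addEventListener('click', function (event) {
  var link = event.target.closest('a[href]');
  if (!link || link.origin !== location.origin) {
    return; // leave external links (and non-link clicks) alone
  }
  event.preventDefault();
  var path = link.pathname + link.search;
  if (window.history && typeof history.pushState === 'function') {
    history.pushState({ path: path }, '', path); // keep the hashless URI
    loadContent(path);                           // transclude via AJAX
  } else {
    location.href = '/#!' + path;                // hash-bang fallback only
  }
});

function loadContent(path) {
  console.log('Would fetch and transclude content for', path); // hypothetical
}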

The Deeper Questions

There are some deeper issues here regarding web architecture. In the traditional web, there is a one-to-one correspondence between the representation of a resource that you get in response to a request from a server, and the content that you see on the page (or a search engine retrieves). With a traditional hash URI for a fragment, the HTTP headers you retrieve for the page are applicable to the hash URI as well. In a web application that uses transclusion, this is not the case.

Note: It’s also impossible to get metadata about hash URIs used for real-world or abstract things using HTTP; in these cases, the metadata about the thing can only be retrieved through interpreting the data within the page (eg an RDF document). Whereas with the 303 See Other pattern for publishing linked data it’s possible to use a 404 Not Found response to indicate a thing that does not exist, there is no equivalent with hash URIs. Perhaps this is what lies at the root of my feeling of unease about them.

With hash-bang URIs, there are in fact three (or more) URIs in play: the hash-bang URI (which identifies a wrapper page with particular content transcluded within it), a base URI (which identifies the wrapper HTML page) and one or more content URIs (against which AJAX requests are made to retrieve the relevant content). Requests to the base URI and the content URIs provide us with HTTP status codes and headers that describe those particular representations. The only way of discovering similar metadata about the hash-bang URI itself is through the _escaped_fragment_ query parameter convention, which maps the hash-bang URI into a hashless URI that can be requested.

Does this matter? Do hash-bang URIs “break the web”? Well, to me, “breaking the web” is about breaking the implicit socio-technical contract that we enter into when we publish websites. At the social level, sites break the web when they withdraw support for URIs that are widely referenced elsewhere, hide content behind register- or pay-walls, or discriminate against those who suffer from disabilities or low bandwidth. At the technical level, it’s when sites lie in HTTP. It’s when they serve up pages with the title “Not Found” with the HTTP status code 200 OK. It’s when they serve non-well-formed HTML as application/xhtml+xml.

These things matter because we base our own behaviour on the contract being kept. If we cannot trust major websites to continue to support the URIs that they have coined, how can we link to them? If we cannot trust websites to provide accurate metadata about the content that they serve, how can we write applications that cache or display or otherwise use that information?

On their own, pages that use Javascript-based transclusion break both the social side (in that they limit access to those with Javascript) and the technical side (in that they cannot properly use HTTP) of the contract. But contracts do get rewritten over time. The web is constantly evolving and we have to revise the contract as new behaviours and new technologies gain adoption. The _escaped_fragment_ convention gives a lifeline: a method of programmatically discovering how to access the version of a page without Javascript, and of discovering metadata about it through HTTP. It is not a pretty pattern (I would much prefer that the server returned a header containing a URI template that described how to create the hashless equivalent of a hash-bang URI, and to have some rules about the parsing of a hash-bang fragment identifier so that it could include other fragment identifiers), but it has the benefit of adoption.

In short, hash-bang URIs are an important pattern that will be around for several years, because they offer many benefits compared to their alternatives, and because HTML5’s history API is still a little way off general support.

Rather than banging the drum against hash-bang URIs, we need to try to make them work as well as they can by:

  • berating sites that use plain hash URIs for transcluded content
  • encouraging sites that use hash-bang URIs to follow some good practices such as those I outlined above
  • encouraging applications, such as browsers and search engines, to automatically map hash-bang URIs into the _escaped_fragment_ pattern when they do not have Javascript available

We also need to keep a close eye on emerging patterns in distributed web applications to ensure that these efforts are supported in the standards on which the web is built.

New opportunities for linked data nose-following

For those of you interested in deploying RDF on the Web, I’d like to draw your attention to three new proposed standards from IETF, “Web Linking”, “Defining Well-Known URIs”, and “Web Host Metadata”, that create new follow-your-nose tricks that could be used by semantic web clients to obtain RDF connected to a URI – RDF that presumably defines what the URI ‘means’ and/or describes the thing that the URI is supposed to refer to.

Most semantic web application developers are probably familiar with three ways to nose-follow from a URI:

  1. For # URIs – for X#F, the document X tells you about <X#F>
  2. When the response to GET X is a 303 – the redirect target tells you about <X>
  3. When the response to GET X is a 200 – the content may tell you about <X>

In case 3, X refers to what I’ll call a “web page” (a more technical term is used in the TAG’s httpRange-14 resolution). One of the new RFCs extends case 3 to situations where the RDF can’t be embedded in the content, either because the content-type doesn’t provide a place to put it (e.g. text/plain) or because for administrative reasons the content can’t be modified to include it (e.g. a web archive that has to deliver the original bytes faithfully). The others cover this case as well as offering improved performance in case 2.

Web pages as RDF subjects

Before getting into the new nose-following protocols, I’ll amplify case 3 above by listing a few applications of RDF in which a web page occurs as a subject. I’ll rather imprecisely call such RDF “metadata”.

  1. Bibliographic metadata – tools such as Zotero might be interested in obtaining Dublin Core, BIBO, or other citation data for the web page.
  2. Stability metadata – for annotation and archiving purposes it may be useful to know whether the page’s content is committed to be stable over time (e.g. this has changing content versus this has unchanging content). See TimBL’s Generic Resources note.
  3. Historical and archival metadata – it is useful to have links to other versions of a document – including future versions.

All sorts of other statements can be made about a web page, such as a type (wiki page, blog post, etc.), SKOS concepts, links to comments and reviews, duration of a recording, how to edit, who controls it administratively, etc. Anything you might want to say about a web page can be said in RDF.

Embedded metadata is easy to deploy and to access, and should be used when possible. But while embedded metadata has the advantages of traveling around with the content, a protocol that allows the server responsible for the URI to provide metadata over a separate “channel” has two advantages over embedded metadata: First, the metadata doesn’t have to be put into the content; and second, it doesn’t have to be parsed out of the content. And it’s not either/or: There is no reason not to provide metadata through both channels when possible.

Link: header

The ‘Web Linking’ proposed standard defines the HTTP Link: header, which provides a way to communicate links rooted at the requested resource. These links can either encode interesting information directly in the HTTP response, or provide a link to a document that packages metadata relevant to the resource.

In the former case, one might have:

Link: <http://xmlns.com/foaf/0.1/Document>;
  rel="http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

meaning that the request URI refers to something of type foaf:Document. In the latter case one might have:

Link: <http://example.com/about/foo.rdf>;
  rel="describedby"; type="application/rdf+xml"

meaning that metadata can be found in
<http://example.com/about/foo.rdf>, and hinting that the
latter resource might have a ‘representation’ with media type
application/rdf+xml.
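
As a rough sketch of how a client might act on such a header (the parsing below is deliberately naive and only handles the simple case shown above; a real client would use a full Link-header parser, and the URL is an example):

// Naive sketch: look for a Link header with rel="describedby" and follow
// it to find RDF metadata about the resource.
function describedByTarget(linkHeader) {
  if (!linkHeader) return null;
  var match = /<([^>]*)>[^,]*rel="?describedby"?/.exec(linkHeader);
  return match ? match[1] : null;
}

fetch('http://example.com/foo', { method: 'HEAD' }).then(function (response) {
  var target = describedByTarget(response.headers.get('Link'));
  if (target) {
    // A client could now GET the target, e.g. http://example.com/about/foo.rdf,
    // and parse it for RDF statements about http://example.com/foo.
    console.log('Metadata about the resource lives at', target);
  }
});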

Host-wide nose-following rules

The motivation for the “well-known URIs” RFC is to collect all “well-known URIs” (analogous to “robots.txt”) in a single place, a root-level “.well-known” directory, and create a registry of them to avoid collisions. The most pressing need comes from protocols such as webfinger and OpenID; see Eran Hammer-Lahav’s blog post for the whole story.

For linked data, .well-known provides an opportunity for providing metadata for web pages, as well as improving the efficiency of obtaining RDF associated with other “slash URIs”, which is currently done using 303 responses.

Ever since the TAG’s httpRange-14 decision in 2005, there have been concerns that it takes two round trips to collect RDF associated with a slash URI. While some might question why those complaining aren’t using hash URIs, in any case the “well-known URIs” mechanism gives a way to reduce the number of round trips in many cases, eliminating many GET/303 exchanges.

The trick is to obtain, for each host, a generic rule that will transform the URI at that host that you want RDF for into the URI of a document that carries that RDF. This generic rule is stored in a file residing in the .well-known space at a path that is fixed across all hosts. That is: to find RDF for http://example.com/foo, follow these steps:

  1. obtain the host name, “example.com”
  2. form the URI with that host name and path
    “/.well-known/host-meta”, i.e.
    “http://example.com/.well-known/host-meta”
    (see
    here)
  3. if not already cached, fetch the document at that URI
  4. in that document find a rule generically transforming
    original-URI -> about-URI
  5. apply the rule to “http://example.com/foo” obtaining (say)
    “http://example.com/about/foo”
  6. find RDF about “http://example.com/foo”
    in document “http://example.com/about/foo”

The form of the about-URI is chosen by the particular host, e.g. “http://example.com/foo,about” or “http://about.example.com/foo” or whatever works best.

Why is this fewer round trips than using 303? Because you can fetch and cache the generic rule once per site. The first use of the rule still costs an extra round trip, but subsequent URIs for a given site can be nose-followed without any extra web accesses.
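
A sketch of those steps in code (example.com and the template format are illustrative; host-meta is an XRD document, and a real client would parse it properly rather than with a regular expression):

// Sketch of the host-meta nose-following steps above. We naively pull a
// "describedby" Link template of the form <Link rel="describedby"
// template="..."/> out of the host-meta document; a real client should
// use a proper XRD parser. URLs are illustrative.
var ruleCache = {}; // one cached rule per host: this is where the saving comes from

async function aboutUri(resourceUri) {
  var host = new URL(resourceUri).origin; // e.g. "http://example.com"
  if (!(host in ruleCache)) {
    var hostMeta = await (await fetch(host + '/.well-known/host-meta')).text();
    var match = /rel="describedby"\s+template="([^"]*)"/.exec(hostMeta);
    ruleCache[host] = match ? match[1] : null; // e.g. "http://example.com/about?uri={uri}"
  }
  var template = ruleCache[host];
  return template ? template.replace('{uri}', encodeURIComponent(resourceUri)) : null;
}

// Usage: find the document that should carry RDF about http://example.com/foo,
// then fetch and parse that document for statements about the original URI.
aboutUri('http://example.com/foo').then(function (uri) {
  console.log('RDF about the resource should be found in', uri);
});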

A worked example can be found here.

Next steps

As with any new protocol, figuring out exactly how to apply the new proposed standards will require coordination and consensus-building. For example, the choice of the “describedby” link relation and the “host-meta” well-known URI need to be confirmed for linked data, and agreement reached on whether multiple Link: headers are in good taste or poor taste. (Link: and .well-known put interesting content in a peculiarly obscure place, and it might be a good idea to limit their use.) Consideration should be given to Larry Masinter’s suggestion to use multiple relations reflecting different attitudes the server might have regarding the various metadata sources: for example, the server may choose to announce that it wants the Link: metadata to override any embedded metadata, or vice versa. Agreement should also be reached on the use of Link: and host-meta with redirects (302 and so on) – personally I think it would be a great thing, as you could then use a value-added forwarding service to provide metadata that the target host doesn’t or can’t provide.

This is not a particularly heavy coordination burden; the design odds-and-ends and implementations are all simple. The impetus might come from inside W3C (e.g. via SWIG) or bottom-up. All we really need to get this going are a bit of community discussion, a server, and a cooperating client, and if the protocols actually fill a need, they will take off.

For past TAG work on this topic, please see TAG issue 62 and the “Uniform Access to Metadata” memo.