The impact of Javascript and XMLHttpRequest on web architecture
This issue was raised briefly on the TAG telcon of 11 October 2007, but I think we dismissed it too quickly.
The basic WebArch story about URIs, resources and representations makes sense to people because they can see the relationship between the information resource ('the Oaxaca weather report') and its representation (<html><title>Today's weather for Oaxaca</title>...). When many web pages make extensive use of Javascript to compute the HTML that determines what you see on the screen, that relationship is weakened. It's not just human beings doing 'view source' who lose out; search engines do too.
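To make the 'view source' point concrete, here is a minimal sketch (the data URI and JSON shape are invented for illustration): the HTML source contains no weather report at all; the representation a human actually sees is assembled by script after the page loads.

```html
<html>
  <head><title>Today's weather for Oaxaca</title></head>
  <body>
    <!-- 'View source' and most crawlers see only this placeholder -->
    <div id="report">Loading...</div>
    <script>
      var req = new XMLHttpRequest();
      req.open('GET', '/weather/oaxaca.json', true); // hypothetical data URI
      req.onreadystatechange = function () {
        if (req.readyState === 4 && req.status === 200) {
          var data = JSON.parse(req.responseText);
          // The visible 'representation' only exists after this line runs
          document.getElementById('report').textContent =
            data.summary + ', ' + data.temperature + ' C';
        }
      };
      req.send(null);
    </script>
  </body>
</html>
```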
Although it's true that some proportion of Javascript-heavy pages are just badly designed, ignoring the Least Power finding through ignorance or laziness, it's also the case that some such pages, for instance those which make innovative use of XMLHttpRequest to synthesise information 'on the fly', could not be built any other way, and so don't violate the Least Power rule.
My conclusion: as we try to tell a more carefully articulated story about URIs and resources and their relationship, we need to pay more attention to the User Agent and the user experience. The thing most closely related to the Oaxaca weather report is the words I see on the screen, not the HTML which gets interpreted to produce them.
Your Least Power link is hosed.
@billyg: thanks. I have fixed it.
Love the blog. If I may ask, what software are you using? How much does it cost? Where do you get it? If it's not a secret, email me some details, would you?
Thanks in advance!
We are using the open-source version of Movable Type.
When developing the information architecture of a website, I always try to map out the page hierarchy, i.e. to specify the URI scheme for every type of content (news, articles, products, etc.).
For example, take a blog with around 20 posts per month. In this case I prefer to have a browsable archive per month, e.g. www.mysite.com/blog/2008/09/, or more generally: www.mysite.com/blog/$year/ and www.mysite.com/blog/$year/$month/
A blog post will have a URI like this: www.mysite.com/blog/$year/$month/blog-post-filename/. Seeing such a URI, a visitor can immediately tell when the post was written.
Only the blog post page contains the actual content; the browsing activity is an intermediary step and can therefore be "upgraded" to use XMLHttpRequest. This means we can have a single page, www.mysite.com/archive/, that provides an AJAX list of blog post excerpts, while clicking on a post opens the post's URI: www.mysite.com/blog/$year/$month/blog-post-filename/.
Generally speaking: it's OK to convert the intermediary browsing/searching steps into a single AJAX web page (web app), as long as the content items keep real URLs.
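A rough sketch of that pattern (the excerpt endpoint and its JSON shape are assumptions, not something the site above actually exposes): the single /archive/ page pulls excerpts with XMLHttpRequest, but every link it renders points at a post's real, bookmarkable URI.

```html
<div id="excerpts">Loading archive...</div>
<script>
  var req = new XMLHttpRequest();
  req.open('GET', '/archive/excerpts.json', true); // hypothetical endpoint
  req.onreadystatechange = function () {
    if (req.readyState === 4 && req.status === 200) {
      // Assumed shape: [{ "title": ..., "url": ..., "excerpt": ... }, ...]
      var posts = JSON.parse(req.responseText);
      var list = document.getElementById('excerpts');
      list.innerHTML = '';
      for (var i = 0; i < posts.length; i++) {
        var item = document.createElement('p');
        var link = document.createElement('a');
        // Real URI of the post, e.g. www.mysite.com/blog/$year/$month/blog-post-filename/
        link.href = posts[i].url;
        link.appendChild(document.createTextNode(posts[i].title));
        item.appendChild(link);
        item.appendChild(document.createTextNode(': ' + posts[i].excerpt));
        list.appendChild(item);
      }
    }
  };
  req.send(null);
</script>
```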
A Google Sitemap is a simple way to tell the crawler where and how to crawl, which removes the need for the /$year/$month/ browsing pages as far as indexing is concerned.
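For reference, a Sitemap is just an XML list of the content URLs you want crawled; a minimal example for a single post might look like this (the URL and date are placeholders in the spirit of the examples above):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- one <url> entry per post; the /$year/$month/ browsing pages need not be listed -->
  <url>
    <loc>http://www.mysite.com/blog/2008/09/blog-post-filename/</loc>
    <lastmod>2008-09-01</lastmod>
  </url>
</urlset>
```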
I think that by mixing both methods, real URLs and AJAX, we can get good results.
Right. Jax'ing up my web browser again, A?
Use NoScript in Firefox and you may notice:
1) good websites have no-JavaScript counterparts (Gmail, eBay)
2) bad sites don't display without JavaScript
It's rare that a site really needs JavaScript to improve the user's experience. The exceptions I've found provide an application I actually want to use (Google chat). I don't want to use JS for browsing.
Generally, JS does the user a disservice:
* back buttons break
* bookmarks capture unintended states
* printing breaks
* proxy and other caching servers are bypassed
* content filtering is defeated
* local browser caching stops working
* local browser speed suffers
* page loading time goes up
* browser resource usage goes up
* browser compatibility problems appear
* mobile browsers (iPhone) struggle
* search engines besides Google can't index it, even after using Webmaster Tools
Think about how many times you've hovered over a nav link that pops open another list of links, which then disappears the second you move your mouse off it.
Most of these problems have solutions; many go unimplemented.
If you have static links, use the static links. Make the AJAX optional.
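One way to read that advice, sketched below with illustrative markup (the class name, the target element, and the assumption that the server returns an HTML fragment for these URLs are all mine): the markup is an ordinary link that works everywhere, and the script, if it runs at all, merely enhances it.

```html
<a class="ajax-nav" href="/blog/archive/">Archive</a>
<div id="content"></div>
<script>
  // Enhance only when XMLHttpRequest is available; otherwise the plain link still works,
  // and so do bookmarks, printing, caching, and non-JS user agents.
  if (window.XMLHttpRequest) {
    var links = document.getElementsByTagName('a');
    for (var i = 0; i < links.length; i++) {
      if (links[i].className !== 'ajax-nav') continue;
      links[i].onclick = function () {
        var req = new XMLHttpRequest();
        req.open('GET', this.href, true); // same URL the static link points at
        req.onreadystatechange = function () {
          if (req.readyState === 4 && req.status === 200) {
            // Assumes the server returns an HTML fragment suitable for in-page insertion
            document.getElementById('content').innerHTML = req.responseText;
          }
        };
        req.send(null);
        return false; // suppress normal navigation only when the XHR path is taken
      };
    }
  }
</script>
```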