W3C

Simple things make firm foundations

You can look at the development of web technology in many ways, but one way is as a major software project. In software projects, the independence of specs has always been really important, I have felt. A classic example is the independence of the HTTP and HTML specifications: you can introduce many forms of new markup language to the web through the MIME Content-Type system, without changing HTTP at all.
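That independence can be sketched in a few lines of code. The names below are invented for illustration, not from any real server or browser: the point is simply that HTTP only labels the payload with a MIME Content-Type, and the receiver dispatches on that label, so a new markup language just needs a new entry in the table while HTTP itself never changes.

```python
def handle_html(body: bytes) -> str:
    return "parse as HTML: " + body.decode("utf-8")

def handle_svg(body: bytes) -> str:
    return "parse as SVG: " + body.decode("utf-8")

# Hypothetical dispatch table keyed by MIME type; adding a new markup
# language means adding a row here, not changing the transport.
HANDLERS = {
    "text/html": handle_html,
    "image/svg+xml": handle_svg,
}

def dispatch(content_type: str, body: bytes) -> str:
    # Strip parameters such as "; charset=utf-8" before the lookup.
    mime = content_type.split(";")[0].strip().lower()
    handler = HANDLERS.get(mime)
    if handler is None:
        return "save to disk (unknown type)"  # graceful fallback
    return handler(body)

print(dispatch("text/html; charset=utf-8", b"<p>hi</p>"))
# prints: parse as HTML: <p>hi</p>
```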

The modularity of HTML itself has been discussed recently, for example by Ian Hickson, co-Editor of HTML5:

Note that it really isn’t that easy. For example, the HTML parsing rules are deeply integrated with the handling of <script> elements, due to document.write(), and also are deeply integrated with the definition of innerHTML. Scripting, in turn, is deeply related to the concept of scripting contexts, which depends directly on the definition of the Window object and browsing contexts, which, in turn, are deeply linked with session history and the History object (which depends on the Location object) and with arbitrary browsing context navigation (which is related to hyperlinks and image maps) and its related algorithms (namely content sniffing and encoding detection, which, to complete the circle, is part of the HTML parsing algorithm). – Brainstorming test cases, issues and goals, etc., Ian Hickson

and in reply by Laurens Holst:

I don’t know the spec well enough to answer that question, but I’d say modularization (if I may call it so) would make it both easier to grasp as individual chunks, for both the reviewing process and the implementing process. – Laurens Holst

The <canvas> element introduces a complex 2D drawing API different in nature from the other interfaces, which concentrate on setting and retrieving values in the markup itself; the client-side database storage section of the specification is another such interface. While the <canvas> element has a place in the specification, the drawing API should be defined in a separate document. Hixie expressed a similar sentiment (and see the group’s issues about scope):

The actual 2D graphics context APIs probably should be split out on the long term, like many other parts of the spec. On the short term, if anyone actually is willing to edit this as a separate spec, there are much higher priority items that need splitting out and editing…

It would also be nice if the <canvas> element and the SVG elements which embed in HTML did so in just the same way: in terms of the context (style, etc.) which is passed (or not passed) across the interface, in terms of the things an implementer has to learn about, and the things which users have to learn about. So that <canvas> and SVG can perhaps be extended to include, say, 3D virtual reality later, and so that all of these can be plugged into other languages just as they are plugged into HTML.

There are lots of reasons for modularity. The basic one is that one module can evolve or be replaced without affecting the others. If the interfaces are clean, and there are no side effects, then a developer can redesign a module without having to deeply understand the neighboring modules.

It is the independence of the technology which is important. This doesn’t, of course, have to directly align with the boundaries of documents, but equally obviously it makes sense to have the different technologies in different documents so that they can be reviewed, edited, and implemented by different people.

The web architecture should not be seen as a finished product, nor as the final application. We must design for new applications to be built on top of it. There will be more modules to come, which we cannot imagine now. The Internet transport layer folks might regard the Web as an application of the Net, as it is, but the Web design should always be to make a continuing series of platforms, each based on the last. This works well when each layer provides a simple interface to the next. The IP layer is simple, and so TCP can be powerfully built on top of it. The TCP layer has a simple byte stream interface, and so powerful protocols like HTTP can be built on top of it. The HTTP layer provides, basically, a simple mapping of URIs to representations: data and the metadata you need to interpret it. That mapping, which is the core of Web architecture, provides a simple interface on top of which a variety of systems (hypertext, data, scripting and so on) can be built.
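The layering described above can be sketched in a toy form (invented names, no real network I/O): anything that offers a byte-stream interface can play the role of TCP, and HTTP built on top of it yields the core Web mapping of a URI to a representation, that is, data plus the metadata needed to interpret it.

```python
import io

def read_response(stream: io.BufferedIOBase) -> tuple[str, bytes]:
    """Parse an HTTP-style response from any byte stream into
    (content-type, body) -- the 'representation'."""
    status = stream.readline()            # e.g. b"HTTP/1.1 200 OK\r\n"
    content_type = "application/octet-stream"
    while True:
        line = stream.readline().rstrip(b"\r\n")
        if not line:                      # blank line ends the headers
            break
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"content-type":
            content_type = value.strip().decode("ascii")
    return content_type, stream.read()

# io.BytesIO stands in for the TCP byte stream; the HTTP layer above it
# neither knows nor cares what carries the bytes.
wire = io.BytesIO(
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: text/html\r\n"
    b"\r\n"
    b"<p>firm foundations</p>"
)
ctype, body = read_response(wire)
print(ctype, body)
# prints: text/html b'<p>firm foundations</p>'
```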

So we should always be looking to make a clean system with an interface ready to be used by a system which hasn’t yet been invented. We should expect there to be many developers to come who will want to use the platform without looking under the hood. Clean interfaces give you invariants, which developers use as foundations of the next layer. Messy interfaces introduce complexity which we may later regret.

Let us try, as we make new technology, or plan a path for old technology, always to keep things as clean as we can.

One thought on “Simple things make firm foundations”

  1. I am not quite sure if this comment is relevant here, but it concerns what I think is a general problem for many ordinary web users. The problem is that there is so much ‘information’ on the web, which often makes it rather difficult and time-consuming to select and find the most relevant information, even with the use of search systems like Google, Yahoo and others. Sometimes you may get several thousand hits when you search, of which maybe none prove to be actually useful at a closer look.

    In fact I have an idea that might perhaps solve the problem. The idea is that a standardized hierarchical public domain database system is developed (probably by W3C), which may then be used in any number of implementations and computer environments, including websites. The intention is that these databases, which could be organized as systems of domains and subdomains with differentiated rights to which you get (authoring and/or reading) access by using a digital signature of some sort, should contain logically organized catalogues of meaningful links to trustworthy and contextually relevant websites.

    Imagine that, for instance, a university had such a system, where researchers, teachers and students could all share their valuable knowledge of where to find the really good stuff on the internet (and/or intranet), and of course contribute themselves with comments etc.
