What is the Web's Model of Computation?

Luca Cardelli

Digital Equipment Corporation, Systems Research Center
luca@pa.dec.com

The communication protocols that bind the Internet together synthesize a global computer out of a large collection of processors and networks. A natural question is: how do we program this global computer and, in particular, how do we program the World-Wide Web? If we could integrate the Web with a proper distributed object system like CORBA, all our programming problems would be solved. Or would they?

Every program is based on an underlying model of computation. Traditionally, we have used sequential, functional, relational, concurrent, and distributed models. In order to program the Web, we first need to understand the Web's model of computation: does it correspond to a traditional model, or is it something new? Once we agree on a model (or, maybe, more than one), we can create APIs that embody that model, and eventually extend or design programming languages for that model.

Traditional models of data and computation have been successfully imposed onto the Internet. For example, Telnet reduces the Internet to a multiprocessor, while FTP reduces it to a file system. Currently, the trends are to reduce the Internet to a distributed object system, to a distributed agent system, or to a combination of both (as in my own work with Obliq). Notably, all these models assume a reasonable degree of reliability in the underlying computing infrastructure.

The Web (i.e., HTTP) can be seen to embody a significantly different model of computation, and perhaps we should not try to force this model into a traditional one. A common experience exemplifies this point. When browsing, we actively observe the reliability and bandwidth of certain connections (including zero or varying bandwidth), and we often take action on these dynamic quality-of-service observations. These observables are not part of traditional models of computation.
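
To make the point concrete, here is a minimal sketch, in Python, of a fetch that reports observed latency and throughput alongside the data, so that the calling program can act on them. The code is illustrative only and is not drawn from the original text; the URL is the standard example.com placeholder.

import time
import urllib.request

def observed_fetch(url, timeout=5.0):
    """Fetch a URL; return the data together with observed latency and bandwidth."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as response:
        headers_arrived = time.monotonic()
        data = response.read()
    done = time.monotonic()
    latency = headers_arrived - start                       # time until the response starts
    bandwidth = len(data) / max(done - headers_arrived, 1e-9)  # bytes per second actually seen
    return data, latency, bandwidth

data, latency, bandwidth = observed_fetch("http://example.com/")
print(len(data), "bytes,", round(latency, 2), "s latency,", round(bandwidth), "B/s")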

As a general principle, we want to be able to program any algorithm that can be enacted by a person. Hence, a Web program must be able to react to variable quality of service and to a lack of referential integrity. Furthermore, a Web programming language must be able to express quality-of-service abstractions, so as to increase reliability. We need to base such a language on a model of computation that is more refined than (reliable) distributed objects and agents.
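
As an illustration only, the following sketch shows one possible shape for such a quality-of-service abstraction, assuming nothing beyond a standard HTTP client: a fetch that is given a per-attempt deadline and a list of alternative replicas, and that treats timeouts, unreachable hosts, and dangling links as ordinary outcomes to react to rather than fatal errors. The mirror URLs are hypothetical.

import urllib.error
import urllib.request

def resilient_fetch(urls, timeout=3.0):
    """Try each replica in turn within a per-attempt deadline.

    Returns (data, url) for the first copy that answers, or (None, None)
    if every reference is dead, unreachable, or too slow.
    """
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.read(), url
        except urllib.error.HTTPError:
            continue  # e.g. 404: the reference no longer resolves to a document
        except (urllib.error.URLError, OSError):
            continue  # unreachable host, refused connection, or expired deadline
    return None, None

data, source = resilient_fetch([
    "http://primary.example.com/report.html",  # hypothetical primary copy
    "http://mirror.example.org/report.html",   # hypothetical replica
])

The point is not this particular function, but that deadlines, alternative replicas, and failure outcomes appear as first-class values that a program can observe and combine, rather than being hidden behind an assumption of reliability.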

It will certainly be possible to isolate reliable subsets of the Web (e.g., intranets, agent servers, replicated databases), and use traditional models of computation for those. But it seems unlikely that the whole Web will ever be entirely reliable, since everybody's home PC may eventually become a Web server.

Therefore, while it is certainly easier to program a distributed object system than the Web, we cannot just hope to turn the latter into the former. We must describe and deal with the peculiar properties of the Web itself.