Design of HTTP-NG Testbed

W3C Note 10 July 1998

This version:
Latest Released Version:
Daniel Veillard, <veillard@w3.org>

Copyright © 1998 W3C (MIT, INRIA, Keio), All Rights Reserved. W3C liability, trademark, document use and software licensing rules apply.

Status of this Document

This is a Note published by the HTTP-NG Protocol Design Working Group describing the goals, current state and future development of the HTTP-NG testbed.

This document has been produced as part of the W3C HTTP-NG Activity. This is work in progress and does not imply endorsement by, or the consensus of, either W3C or members of the HTTP-NG Protocol Design and HTTP-NG Web Characterization Working Groups. This document is subject to change; check the reference to the latest version.

The goal of the testbeds is to evaluate the feasibility of the HTTP-NG model, its performance, its extensibility, and the ability to integrate the HTTP-NG model into the existing Web architecture. Building higher-level demonstrations exhibiting the extra benefits of the HTTP-NG model is also planned. The suggested experiments are of three kinds: a base testbed infrastructure, higher-level functionality demonstrations, and a transition strategy evaluation. This document is organized as follows:

  1. The base testbed infrastructure
  2. Higher level functionalities
  3. Transition strategy
  4. Available tools and codebases
  5. Current status

This document describes the expected architecture for each kind of testbed, the main pieces of code used, the expected experiments, and ways to evaluate the results.

This document is part of a suite of documents describing the HTTP-NG design and prototype implementation:

Please send comments on this specification to <www-http-ng-comments@w3.org>.

The base testbed infrastructure

The purpose is to evaluate the suitability of HTTP-NG when running basic HTTP interfaces over the HTTP-NG protocol on both the client and the server side. The architecture is a client issuing HTTP-NG requests and a server answering these requests using a realistic set of Web pages. The requests can either be generated by the SURGE URI generator, tuned to reflect various kinds of common HTTP usage, or hand coded to reflect more specific uses. The client may either run in "one shot" mode, fetching a page and the related inlined objects for the purpose of analyzing a complete trace of a session, or in robot mode, to produce a realistic load on the server. One goal is to be able to run the exact same tests in a similar environment but using the HTTP/1.1 protocol, in order to compare the characteristics of both stacks.
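As a rough illustration of what the "one shot" trace analysis can reveal, here is a minimal sketch counting round trips for a page with inlined objects, comparing one-connection-per-fetch behavior with a single multiplexed session. The per-fetch costs are illustrative assumptions, not figures from this note:

```python
# Illustrative round-trip model for fetching a page plus its inlined objects.
# Assumption (not from this note): each new connection costs one RTT for
# setup plus one RTT per request/response; a multiplexed session pays the
# setup cost once and can batch the inlined-object requests together.

def round_trips_serial(n_objects):
    """One connection per fetch, HTTP/1.0-style: setup + request each time."""
    fetches = 1 + n_objects            # the page itself, then each object
    return fetches * 2                 # 1 RTT setup + 1 RTT request/response

def round_trips_multiplexed(n_objects):
    """One session: setup, fetch the page, then one batched object round."""
    batched = 1 if n_objects else 0
    return 1 + 1 + batched

if __name__ == "__main__":
    for n in (0, 5, 10):
        print(n, round_trips_serial(n), round_trips_multiplexed(n))
```

Even this toy model shows the gap widening with the number of inlined objects, which is what the trace analysis should measure precisely.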

image: base.gif

The first goal of this base testbed experiment is to verify that the HTTP-NG specification can actually handle the functionalities used by HTTP 1.x users. The output of the Web Characterization Working Group will be a set of scenarios exhibiting common HTTP 1.x practices. The SURGE program will then be used to generate a realistic set of URIs and time stamps, which in turn will be used by the HTTP-NG robot client as requests to the HTTP-NG server. One will also need to verify that less common practices - the top 10 kludge usages of HTTP - can also be served; this will be handled in a more specific fashion, either by generating the URIs by hand or by modifying the client/server software (for example when running another protocol on top of HTTP).

The second goal of this series of experiments is to analyze the behavior and performance of HTTP-NG under different qualities of service. One should at least try to reproduce the following commonly found network conditions:

The following metrics are of interest:

This requires at least two machines, and it may also be extremely useful to get a dedicated piece of hardware sitting between the client and the server which makes it possible to simulate, in a reproducible manner, various network qualities of service (bandwidth and latency tweaking). This may also be done using an extra dedicated machine and tunneling software.
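The conditions the interposed box would emulate can be reasoned about with a simple first-order model: the time to move one message is the link latency plus the serialization time. The profile names and numbers below are illustrative assumptions, not values from this note:

```python
# Back-of-the-envelope model of emulated network conditions.
# transfer time = latency + (message size / bandwidth)

def transfer_time(size_bytes, latency_s, bandwidth_bps):
    """Seconds to move one message across a link: latency + serialization."""
    return latency_s + (size_bytes * 8) / bandwidth_bps

# Hypothetical quality-of-service profiles: (latency in s, bandwidth in bit/s)
PROFILES = {
    "LAN":           (0.001, 10_000_000),  # 1 ms, 10 Mbit/s
    "modem":         (0.150, 28_800),      # 150 ms, 28.8 kbit/s
    "transatlantic": (0.300, 1_000_000),   # 300 ms, 1 Mbit/s
}

if __name__ == "__main__":
    for name, (lat, bw) in PROFILES.items():
        print(name, round(transfer_time(10_000, lat, bw), 3), "s")
```

The model makes plain why latency-bound links (modem, transatlantic) are where round-trip savings should dominate, while the LAN case mostly measures protocol overhead.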

Considering the software, one can be worried about the actual performance of a Java implementation, even if things may improve a lot in the not so distant future. Currently it sounds more reasonable to do the performance evaluation using a C code base, and to use ILU for both the client and server side. One should also try to estimate the induced cost of genericity - ILU being a very generic system supporting a lot of protocols and offering stubs for various languages.

Considering the server side, one needs to implement some sample code sitting behind the stubs generated from the interfaces by the ILU stubber. For this purpose, ILU has been integrated into the Apache server.

The results of the tests should be compared with the actual numbers obtained for HTTP/1.1. Getting half an order of magnitude faster on latencies for a LAN should not be too difficult, but we have to compare all the metrics across the full range of network qualities of service and check that HTTP-NG is at least as good on each point before considering the results a success.
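The success criterion above - at least as good on each point - can be stated mechanically. The metric names below are hypothetical placeholders; the real set comes from the metrics listed earlier:

```python
# Sketch of the testbed success criterion: HTTP-NG must be no worse than
# HTTP/1.1 on every measured point. Metrics here are "lower is better".

def at_least_as_good(ng, http11):
    """True iff HTTP-NG matches or beats HTTP/1.1 on every metric."""
    return all(ng[metric] <= http11[metric] for metric in http11)

if __name__ == "__main__":
    http11 = {"lan_latency_ms": 40, "modem_latency_ms": 900}
    ng     = {"lan_latency_ms": 12, "modem_latency_ms": 850}
    print(at_least_as_good(ng, http11))
```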

The next goal of this testbed is to test the ability of the HTTP-NG specification to support proxies and caching. The base testbed will then be extended by adding an HTTP-NG proxy/cache between the client and the server.

image: proxy.gif

The analysis of the results of this configuration will probably be a bit more difficult to establish; here are a few points to look at:

One should keep in mind that caching in the Web is still in its infancy, and the HTTP-NG specification must be able to handle the big changes in Web caching technology which are likely to occur within the next few years. Extensibility is a key point of the HTTP-NG design from the proxying and caching point of view.

Higher level functionalities

This testbed is really where we want to exhibit the extra capabilities of HTTP-NG over HTTP 1.x. At this time it is somewhat difficult to predict how extensive this testbed will be, but a simple core experiment is definitely needed to demonstrate the concepts behind HTTP-NG.

Here is an example based on existing W3C testbeds:

  1. A DOM (Document Object Model) demonstration, where the Web client exports via HTTP-NG its internal document structure and the associated interfaces as defined by the DOM WG document.
    image: DOM.gif

    This testbed will demonstrate how the general interface of the Web, based on the "fetch then display" metaphor, could shift toward a cooperative environment relying on distributed structured documents.

    A DOM implementation on top of Amaya/Thot is likely to occur, and adding HTTP-NG support should be fairly trivial. This would provide a good framework for demonstrating extra capabilities made possible by HTTP-NG; here are a few suggestions:

    On such a framework, ideas for demos come easily; the problem is mainly the manpower needed to achieve them, not the capabilities of the underlying platform.

  2. On the server side, adding WebDAV APIs on top of an existing HTTP-NG server would be a good example of the extensibility mechanism provided by HTTP-NG. Even without full support for the WebDAV functionalities, a simple extension of Jigsaw providing access to

Transition strategy

This testbed is needed to get proof that HTTP-NG can actually be deployed on a large scale within the existing Web framework. We must show that HTTP-NG can actually be deployed even if a huge amount of software is not currently able to support the HTTP-NG protocol natively. The experience of the migration from HTTP 1.0 to HTTP 1.1 showed that it is far easier to get the server pool to implement new features (support for HTTP 1.1 is actually available in most Web servers) than the client software, namely the browsers.

The goal of this testbed is to prove that it is actually possible to deploy HTTP-NG on servers with an existing base of HTTP/1.* clients by implementing translation proxies. It consists of designing and implementing an HTTP 1.* to HTTP-NG proxy. We do not seek performance here, and the cheapest implementation will be the best one. This is purely a proof of concept with an actual implementation:

image: compat.gif

The tests should be conducted using several commercial grade HTTP clients accessing an HTTP-NG server. A successful experiment will not exhibit any loss of usability from the client side. One should take care to test all the common HTTP services, at least GET, POST, PUT and HEAD.
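The core of such a translation proxy is the mapping from HTTP/1.* methods onto calls on the HTTP-NG object interfaces. The sketch below is a minimal, hypothetical version of that dispatch step; the NGResource class stands in for the real interfaces, which would be generated by the ILU stubber:

```python
# Minimal sketch of the translation step inside an HTTP/1.* -> HTTP-NG proxy.
# NGResource is a hypothetical stand-in for a stub-generated HTTP-NG object.

class NGResource:
    """Stand-in for an HTTP-NG object exported by the server."""
    def __init__(self, body=b""):
        self.body = body
    def get(self):        return 200, self.body
    def head(self):       return 200, b""
    def put(self, data):  self.body = data; return 201, b""
    def post(self, data): return 200, b"processed:" + data

def translate(method, resource, payload=b""):
    """Map an HTTP/1.* method onto the corresponding NG interface call."""
    if method == "GET":
        return resource.get()
    if method == "HEAD":
        return resource.head()
    if method == "PUT":
        return resource.put(payload)
    if method == "POST":
        return resource.post(payload)
    return 501, b""    # method not translatable by this proxy

if __name__ == "__main__":
    r = NGResource(b"hello")
    print(translate("GET", r))
```

A proxy built this way only needs to be correct, not fast, which matches the proof-of-concept goal stated above.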

Considering the software, on the client side the choice is wide open; one should just try the most popular browsers and editing tools, and running an HTTP/1.1 robot may prove useful too to stress the proxy. Since performance is not the goal of this testbed, one should go for the cheapest solution in terms of development costs, and it seems that extending Jigsaw to get an HTTP-NG client side is the way to go. Once done, one just needs to tweak the existing proxy code in Jigsaw to have the client and server sides using different stacks. The HTTP-NG server could be the same as for the base testbed, or Jigsaw if the HTTP-NG server side is implemented.

Available tools and codebases

One should probably have two different implementations of the HTTP-NG stack, possibly using different languages. Most of the existing software we will rely on is written in C or Java, and we will probably end up with a C/ILU and a Java/Jigsaw implementation.

Here are the main pieces of software that will be used to build the HTTP-NG testbed:


The Inter-Language Unification system (ILU) is a multi-language object interface system. The object interfaces provided by ILU hide implementation distinctions between different languages, between different address spaces, and between operating system types.

The latest ILU implementation contains experimental HTTP-NG code: a wire format, a MUX channel multiplexing implementation, and all the glue needed to link with stubs generated from an Interface Definition Language (IDL). Since it is currently the most advanced HTTP-NG framework, it will serve to validate the first versions of the drafts.
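The idea behind channel multiplexing of the kind MUX provides can be illustrated with a toy framing scheme: several logical sessions interleaved on one byte stream, each fragment tagged with a session id and a length. The 8-byte header layout below is an assumption made for illustration only, not the actual MUX wire format:

```python
# Toy illustration of channel multiplexing: interleave several logical
# sessions over one byte stream using (session id, length)-tagged fragments.
# This header layout is illustrative, NOT the real MUX wire format.

import struct

def mux_frames(streams, fragment=4):
    """Interleave {session_id: payload} into tagged fragments."""
    out = b""
    pending = dict(streams)
    while any(pending.values()):
        for sid in sorted(pending):
            chunk, pending[sid] = pending[sid][:fragment], pending[sid][fragment:]
            if chunk:
                # 4-byte session id + 4-byte length, network byte order
                out += struct.pack("!II", sid, len(chunk)) + chunk
    return out

def demux(wire):
    """Reassemble the per-session payloads from the tagged fragments."""
    streams, i = {}, 0
    while i < len(wire):
        sid, ln = struct.unpack_from("!II", wire, i)
        streams[sid] = streams.get(sid, b"") + wire[i + 8:i + 8 + ln]
        i += 8 + ln
    return streams
```

The payoff is the one described for the base testbed: many concurrent requests share a single connection instead of each paying connection setup costs.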


Jigsaw is W3C's sample implementation of HTTP; the project constitutes an ongoing W3C Activity. Jigsaw is a full blown HTTP server entirely written in Java.

The HTTP-NG Jigsaw code, while not yet as advanced as the ILU implementation, will provide a second piece of code, allowing compatibility problems to be debugged. Being written in Java, it also brings a different environment (virtual machine) and hence a good test of operating system portability. Moreover, the Java code gives access to two full featured testbeds: Amaya and the Jigsaw server.


Amaya is a Web client that acts both as a browser and as an authoring tool. It has been designed with the primary purpose of being a testbed for experimenting and demonstrating new specifications and extensions of Web protocols and standards.

An experimental version of Amaya embeds the Jigsaw Java HTTP stack, so it is possible to get a browser and HTML editor using the Java HTTP-NG code. This will prove useful for higher level functionality demonstrations, especially since PICS and DOM support are being added to Amaya.


Apache is the most popular Web server; it is freely available with source code.

A modified version of Apache embedding the ILU library has been produced for the testbed. It allows testing of the HTTP-NG stack within a full featured Web server and provides a solid framework for tests, especially for comparing the respective performance of HTTP/1.1 and HTTP-NG.


SURGE (Scalable URI Reference Generator) is a WWW workload generator based on analytical models of WWW use. The goal of SURGE is to provide a scalable framework which, from the server's perspective, makes document requests that mimic a set of real users.

The various common Web access patterns that result from the Web Characterization Working Group will be described as SURGE analytical models, making it possible to produce realistic simulations of actual Web traffic for the base testbed infrastructure.
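One well-known ingredient of such analytical workload models is a heavy-tailed popularity distribution: a few documents attract most of the requests. The sketch below generates a synthetic request stream with Zipf-distributed document popularity; the parameters are illustrative assumptions, not SURGE's actual model values:

```python
# Sketch of a SURGE-like synthetic request stream with Zipf-distributed
# document popularity (rank k gets weight 1/k^alpha). Parameters are
# illustrative assumptions, not taken from SURGE itself.

import random

def zipf_weights(n_docs, alpha=1.0):
    """Unnormalized Zipf popularity weights for ranks 1..n_docs."""
    return [1.0 / (k ** alpha) for k in range(1, n_docs + 1)]

def request_stream(n_docs, n_requests, alpha=1.0, seed=42):
    """Return a list of document ranks (1 = most popular) for a trace."""
    rng = random.Random(seed)
    weights = zipf_weights(n_docs, alpha)
    return rng.choices(range(1, n_docs + 1), weights=weights, k=n_requests)

if __name__ == "__main__":
    trace = request_stream(100, 1000)
    print("requests for most popular doc:", trace.count(1))
```

Such a stream, paired with realistic inter-request timing, is what the robot client would replay against the HTTP-NG server.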

Current status

The current status is that the ILU library provides the core implementation of the HTTP-NG protocols, i.e. the MUX protocol, the wire encoding, and the basic HTTP interface stubs. Other pieces of software, namely Apache, Jigsaw, SURGE and Amaya, have been modified to some extent to embed the ILU library. Currently the base testbed is mostly functional, but more work is needed to test performance, solve existing bottlenecks and clean up the installation process before a public release of the testbed. Advanced functionalities like proxy testing and the extensibility showcase are still waiting for more complete specifications before the corresponding tools are upgraded.
Here is a more detailed description of the current status of each piece of software:

  1. ILU: support for MUX, the wire protocol, the basic HTTP and HTTP-NG interfaces, and test implementations for a robot and a server.
  2. Apache: ILU support integrated; the server serves a similar set of pages with the Apache HTTP/1.1 implementation and a dynamically configurable ILU based protocol stack (either the HTTP or the experimental HTTP-NG stack) using the standard HTTP interfaces (GET, HEAD, POST).
  3. Amaya: versions embedding ILU and a Java interpreter are available. Currently the DOM API is not stable enough for testing, but Amaya already exports the Thot APIs via ILU.
  4. Jigsaw: experiments with Jigsaw using ILU and the basic HTTP interface for serving pages have been conducted. The latest releases of Jigsaw provide specific extensions needed when exporting a resource using different protocol stacks, which should ease the design of HTTP-NG specific extensions. Jigsaw also provides an HTTP/1.1 proxy implementation, and seems the best candidate for testing an HTTP/1.1 <-> HTTP-NG proxy.
  5. SURGE: version 1.0 of SURGE has been integrated with ILU.
  6. Various other small pieces of software are also available.

Most of the software base is available in a CVS repository (except Amaya and Jigsaw, which are available independently), and we intend to make this publicly available during the continuous design phase.