W3C HTTP 1992

HTTP Negotiation algorithm

This article explains the thinking behind the HTTP negotiation algorithm which allows servers (or clients or proxies) to decide which version of a document a user should be given.


Content negotiation is important because

  1. It releases the technology from the constraint of being tied to a particular format for transferred data. It allows new formats to be introduced, and so allows the web to evolve.
  2. It allows a generic resource to exist which refers to many different specific resources, distinguished for example by language, format, etc.
  3. It allows many dimensions of genericity.
  4. It allows resolution of these choices with, in the case of "preemptive negotiation", the minimum of round-trip times. To work effectively, content negotiation must reliably distinguish between two cases of greatly differing benefit to the user, though if it makes a "mistake" when in fact there is little difference between the benefit of the two choices, that is not seen as a disaster.

    The system should be such that the user interface is easily configurable, or automatically configurable. A smart client can deduce by measurement and extrapolation the time it takes to process particular types of data, and the bandwidth of lines. The user should be able to select easily, for example, how long he/she is prepared to wait for documents, or the importance of high resolution, using, for example, a slider control.


The model is that the optimum choice can be made based on a cost-benefit analysis. When the satisfaction of a human user is involved, it is difficult to assign formulae to the costs (for example, of waiting) or to the benefits (for example, of high resolution).

Nevertheless, I (TimBL) feel that starting with a model, rather than arbitrary rules, makes it possible later to derive results, for example about the behaviour of proxies, such that within the model the system will be consistent.

The algorithm optimizes the net benefit for the user when deciding what to return. The hope is that fine decisions will not have to be made, as in most cases the results for different formats will be very different, and there will be a clear winner.

There is no absolute metric for the value of the presentation of a document to a user (or the loading of a program, etc), so we normalise the value of the document under perfect circumstances to 1.


During processing, format conversion, translation, etc, it is assumed that the quality of the document may be degraded. We represent the degradation of quality by the ratio of the final to the initial quality. We then assume that for many processes this ratio can be considered constant. In other words, we are saying that the effect of processes on the quality is multiplicative. This is reasonable because each successive process degrades whatever quality remains after the previous one.

Of course two different versions of a document may be simply allocated different relative qualities by their authors. We allow this to be represented.


Firstly, we assume that the cost to the user is only the time for which the user has to wait.

Secondly, we assume that the cost is linear in time.

Thirdly, we assume that the time is linear in the size of the message.

This is dealt with in detail below.

The benefit

The final net value to the user can therefore be written

presented_value = 1 * total-degradation - a - b * size

for some a and b which are functions of the user's needs.
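The arithmetic above can be sketched directly. This is a minimal illustration, not part of any protocol: the names and the particular values of a and b are made up for the example.

```python
# Net value of a returned document under the linear-cost model above.
# a is the fixed cost of any delay, b the cost per byte; both depend on
# the user's needs and are illustrative here.

def presented_value(total_degradation, a, b, size):
    """Net benefit to the user: degraded document value minus waiting cost."""
    return 1.0 * total_degradation - a - b * size

# e.g. a full-quality document of 10 000 bytes, a = 0.1, b = 1e-5 per byte:
print(presented_value(1.0, 0.1, 1e-5, 10_000))
```

A negative result would mean the document costs the user more (in waiting) than it is worth.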

In principle, the document could be returned in any of a very large, possibly unbounded, set of possible formats. (For example, a JPEG compression algorithm is controlled by a floating-point parameter which could be varied at will.) Often, a small finite set of options is available.

We have a model of genericity in which documents can vary along a number of dimensions such as content-type, language, etc. For each dimension i (eg content-type) there is a value (eg "image/png") for a document.

We assume that for any particular value v(i) along dimension i (e.g. a value of image/png for dimension "content-type") a client program may have to do some processing which may degrade the quality. Alternatively, the perceived quality may be lower on the part of the user. In any case, from the point of view of the client, the value v(i) will have some quality multiplier associated with it, which we can call qv(v(i)).

In earlier versions of this document, we also allowed the client to take some time to do the conversion, and so to add, for each v(i), a term to the cost. This is, I think, too complicated and so may have to be removed. It doesn't factor out well. It was conveyed using the mxb attribute. This was later, it seems, changed from a linear function to a step function, which made it basically impossible to reason about. See "cost" below.

If no degradation is involved, these values all default to 1.

For a document transferred from the server to the client, let the quality as it leaves the server, as determined by the server, be Qs. The benefit to the client is then

Qs * PRODUCT{i, qv(v(i)) }

where the PRODUCT should be a big "pi" but we don't have HTML math yet.

Preemptive negotiation

This is the name which seems to have been given to negotiation which only uses two messages. (Not some people's idea of negotiation at all, but the only negotiation originally designed into HTTP.) In this case, the client must transfer to the server all that the server needs to perform the calculation.

This has to date been done in the Accept*: headers, of which there is one header type for each dimension i, and one actual header sent for each value v(i) which the client can handle (if you like, for which qv(v(i)) is nonzero), and the q attribute on that header carries qv(v(i)). If the header is given but q is omitted, q is assumed to be 1.
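As a sketch, a client could render its qv values into such headers as follows. The header names are real HTTP; the particular media types, languages, and q values are invented for illustration.

```python
# Render a client's quality factors qv(v(i)) for one dimension i as an
# Accept-style header. A factor of 1 is the default and is omitted.

def accept_header(name, qv):
    """Build e.g. 'Accept: image/png, image/jpeg;q=0.8' from a qv mapping."""
    parts = []
    for value, q in qv.items():
        parts.append(value if q == 1 else f"{value};q={q}")
    return f"{name}: " + ", ".join(parts)

print(accept_header("Accept", {"image/png": 1, "image/jpeg": 0.8}))
# Accept: image/png, image/jpeg;q=0.8
print(accept_header("Accept-Language", {"en": 1, "fr": 0.5}))
# Accept-Language: en, fr;q=0.5
```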

The algorithm on the server depends very much on the sort of things the server is serving. If the server can do automatic conversions, it can calculate when to do them on the fly. Let us take the case of a server which has, for each value of (say) j, a document whose quality is Qs(j) and whose length is L(j).

The benefit of document j to the client is then

Qc(j) = Qs(j) * PRODUCT{i, qv(v(i,j)) }

as above, and the net benefit after waiting for all those bytes is the difference between that and the cost. A negative net benefit would suggest that an image, etc, should not be shown.

The v(i,j) are just the v(i) for the jth document.

The Qs(j) can of course be calculated or allocated beforehand for each file. The server chooses the file for which the above expression is maximum.
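The server's choice can be sketched as follows. This is only an illustration of the formula above, not a real server: the variant names, qualities, and qv values are invented, and a value absent from qv is treated as unusable (factor 0).

```python
from math import prod

# Qc(j) = Qs(j) * PRODUCT_i qv(v(i,j)), then pick the variant maximizing it.

def client_benefit(Qs, values, qv):
    """Server quality times the client's quality factors for this variant."""
    return Qs * prod(qv.get(v, 0.0) for v in values)

# Client's factors, as would arrive in Accept* headers (illustrative):
qv = {"image/png": 1.0, "image/jpeg": 0.8, "en": 1.0, "fr": 0.5}

# Server's variants j: Qs(j) and the values v(i,j) along each dimension:
variants = {
    "photo.png": (0.9, ["image/png", "en"]),
    "photo.jpg": (1.0, ["image/jpeg", "en"]),
}

best = max(variants, key=lambda j: client_benefit(*variants[j], qv))
print(best)  # photo.png: 0.9 * 1.0 beats 1.0 * 0.8
```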

Calculating the cost of delay

Previous versions of this note assumed that the cost to the user was linear in time. Since then, the HTTP drafts have been changed to a step function, so that any time up to a certain limit has no cost, and any greater time has infinite cost. The author feels that this is less reasonable.

Original algorithm

The following was the original argument. The values of a and b have components from processing time on the server, network delays, and processing time on the client. These delays are not additive as a good system will pipeline the processing, and whilst the result may be linear in message size, calculation of it in advance is not simple. The amount of pipelining and the loads on machines and network are all difficult to predict, so a very rough assumption must be made.

We make the client responsible for taking into account network delays. The client will in fact be in a better position to do this, as the client will after one transaction be aware of the round-trip time.

We assume that the delays imposed by the server and by the client (including network) are additive. We assume that the client's delay is proportional to message size, with a constant of proportionality which depends on (for example) the content-type.

The three parameters given by the client to the server are

  1. The degradation (quality) factor, between 0 and 1. If omitted, 1 is assumed.
  2. The size of message (in bytes) which, even if immediately available from the server, will cause the value to the client to become zero.
  3. The delay (in seconds) which, even for a very small message with no length-related penalty, will cause the value to the reader to become zero.

These parameters are chosen in part because they are easy to visualize as the largest tolerable delay and size. If not sent, they default to infinity.

Client calculating the mxb, mxs

Note though that these are not the parameters set by the user. Suppose the user sets the maximum delay to T. The client may have a constant time Tm which it knows it will take to display anything, which will be at least the server round-trip time. This might also include the time to launch or message a helper application. (The totally caring client would of course measure and record these things so as to optimize the user's experience. But that might be taking it a bit too far.) The maximum time available to the server will then be

mxs = T - Tm

If for the content-type in question, the client can process data at R bytes/second, then that will introduce an additional delay. The server will take that into account correctly in its part of the calculation if

mxb = mxs * R

In practice, for many situations, R will simply be dominated by the bandwidth of the connection.
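The client's side of the calculation is two lines of arithmetic. A minimal sketch, with illustrative numbers (the user's tolerance T, the client overhead Tm, and the rate R are not prescribed by this note):

```python
# Derive the mxs and mxb parameters from the user's tolerance T, the
# client's fixed overhead Tm, and its processing/transfer rate R (bytes/s).

def negotiation_limits(T, Tm, R):
    """Return (mxs, mxb): max tolerable server delay (s) and size (bytes)."""
    mxs = T - Tm      # time left for the server after the client's overhead
    mxb = mxs * R     # bytes the client can absorb within that time
    return mxs, mxb

# e.g. the user will wait 10 s, client overhead is 2 s, rate is 4000 B/s:
print(negotiation_limits(10, 2, 4000))  # (8, 32000)
```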

Server calculating net benefit

The server, for each document j which could be returned, calculates the net benefit. Let us call the length of document j "L(j)", and the time it would take the server to start sending it "D(j)". Then

Q(j) = Qc(j) - L(j)/mxb - D(j)/mxs

which of course is

Qc(j) - { L(j)/R + D(j) }/{ T - Tm }

This is not quite logical for small Qc, but when Qc is 1, the condition for the document to be unacceptable (net benefit zero or less) becomes

D(j) + Tm + L(j)/R > T

i.e. the time taken is more than the user is prepared to wait.
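The server-side formula can be sketched as follows; the document length, startup delay, and client limits are illustrative numbers, with mxs and mxb as would arrive from the client.

```python
# Q(j) = Qc(j) - L(j)/mxb - D(j)/mxs, as above.
# A result of zero or less suggests the document should not be sent.

def net_benefit(Qc, L, D, mxs, mxb):
    """Client benefit Qc(j) minus the size and delay penalties."""
    return Qc - L / mxb - D / mxs

# 16 000-byte document, 1 s startup delay, client limits mxs=8 s, mxb=32 000 B:
print(net_benefit(1.0, 16_000, 1.0, 8, 32_000))  # 0.375 = 1.0 - 0.5 - 0.125
```

Note that with Qc = 1 this reaches zero exactly when D(j) + L(j)/R + Tm = T, matching the unacceptability condition above.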

Using a step function

@@@ Assuming bandwidth only dominates delay

Reactive negotiation

Reactive negotiation is the form in which information from the server is transferred to the client, and the client decides.

This is important because

  1. Some people are so keen on "negotiation" implying a chit-chat that they can only conceive of this sort ;-)
  2. It allows the client to use CN when the amount of information it would have to transfer to the server for preemptive CN would be too great;
  3. It allows proxies to work efficiently.

To summarize reactive CN: the server transfers to the client basically the equivalent of the information that the client in preemptive CN transferred to the server. This means that whereas in preemptive CN the server's algorithm could be arbitrary and hidden, in reactive CN the server's algorithm has to be a standard, and exposed, and in a sane world the same as the client's. The great win is that a proxy can, knowing the server's algorithm, implement it on the server's behalf.

Really quite simple


[[See also: Design issues discussions around this point.]]
© TimBL 1992, Mar 1993