W3C HTTP Performance

HTTP Performance Overview

This page is devoted to information about how to improve HTTP/1.1 performance. Most of the results are derived from experiences with Jigsaw, Apache and the libwww implementations of HTTP/1.1.

Also see the HTTP Overview and the HTTP/1.1 Implementor's Forum for questions and answers when you are implementing HTTP/1.1.

Henrik Frystyk Nielsen,
@(#) $Id: Overview.html,v 1.26 2003/01/06 15:43:57 ylafon Exp $


Network Performance Effects of HTTP/1.1, CSS1, and PNG (also available as an executive summary)
We describe our investigation of the effect of persistent connections, pipelining and link level document compression on our client and server HTTP implementations. A simple test setup is used to verify HTTP/1.1's design and understand HTTP/1.1 implementation strategies. We present TCP and real time performance data between the libwww robot and both the Jigsaw and Apache HTTP servers using HTTP/1.0, HTTP/1.1 with persistent connections, HTTP/1.1 with pipelined requests, and HTTP/1.1 with pipelined requests and deflate data compression. We also investigate whether the TCP Nagle algorithm has an effect on HTTP/1.1 performance. While somewhat artificial and possibly overstating the benefits of HTTP/1.1, we believe the tests and results approximate some common behavior seen in browsers. The results confirm that HTTP/1.1 is meeting its major design goals. Our experience has been that implementation details are very important to achieve all of the benefits of HTTP/1.1.

For all our tests, a pipelined HTTP/1.1 implementation outperformed HTTP/1.0 under all network environments tested, even when the HTTP/1.0 implementation used multiple connections in parallel. The savings were at least a factor of two, and sometimes as much as a factor of ten, in terms of packets transmitted. The improvement in elapsed time is less dramatic and depends strongly on the network connection.
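The pipelining idea can be sketched in a few lines. This is a minimal illustration, not the libwww robot's actual code; the host name and paths below are invented:

```python
# Sketch of HTTP/1.1 pipelining: several GET requests are written to one
# persistent TCP connection before any response is read, so the requests
# share a single connection setup and can travel in far fewer packets.

def build_pipelined_requests(host, paths):
    """Concatenate HTTP/1.1 GET requests for sending over one connection."""
    requests = []
    for i, path in enumerate(paths):
        last = (i == len(paths) - 1)
        requests.append(
            "GET {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            # Ask the server to close the connection only after the last request.
            "{}"
            "\r\n".format(path, host, "Connection: close\r\n" if last else "")
        )
    return "".join(requests).encode("ascii")

batch = build_pipelined_requests("example.com", ["/", "/a.css", "/b.png"])
# The whole batch can now go out in a single sock.sendall(batch); the
# responses come back in the same order, on the same connection.
```

An HTTP/1.0 client without keep-alive would instead open, use, and tear down one TCP connection per request, paying the three-way handshake and slow start each time.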

Note that the savings in network traffic and performance shown in this document are due solely to the effects of pipelining, persistent connections, and transport compression. Some data is presented showing the further savings made possible by CSS1 style sheets and the more compact PNG image representation, which are enabled by recent recommendations at higher levels than the base protocol. Time did not allow full end-to-end data collection for these cases. The results show that HTTP/1.1 and changes in Web content will dramatically improve Internet and Web performance as HTTP/1.1 and related technologies are deployed over the near future. Universal use of style sheets, even without deployment of HTTP/1.1, would significantly reduce network traffic.
This paper does not investigate further performance and network savings enabled by the improved caching facilities provided by the HTTP/1.1 protocol, or by sophisticated use of range requests.

The paper is also available in PostScript, although that version of course contains no links to the data taken.

See also the other papers on HTTP Performance.

Compression and Performance

Compression can have a major impact on the performance of HTTP: slow links, such as PPP modem connections, are easily filled with data, and the only way to obtain higher performance on such a link is to reduce the number of bytes transmitted. Here are some small examples of how compression affects the performance of HTTP/1.1:

Test of Case Canonicalizing and Compressing HTML
Some very simple results showing how case sensitivity affects the efficiency of the zlib compression algorithm.
The Effect of HTML Compression on a LAN
A description of the effect of compression on a LAN and how it may interact with the TCP slow-start and delayed-ACK algorithms.
The Effect of HTML Compression on a PPP Modem Line
A comparison of deflate-based compression with modem-level compression.
Many good links on compression
What's new in compression? This page collects compression resources, conferences, research groups, companies, and more.
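As a rough illustration of why deflate helps on a saturated link, redundant HTML shrinks by a large factor when compressed with zlib. The sample page below is invented, and real pages compress less dramatically, but the principle is the same: fewer bytes means fewer packets on the wire.

```python
import zlib

# A highly redundant synthetic page; real HTML is less repetitive but
# still compresses well because of repeated tags and attribute names.
page = ("<html><head><title>Sample</title></head><body>"
        + "<p>This paragraph repeats to mimic real markup.</p>" * 200
        + "</body></html>").encode("ascii")

compressed = zlib.compress(page, 9)   # deflate at maximum compression level
ratio = len(page) / len(compressed)
print(len(page), len(compressed), round(ratio, 1))
```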
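The case-canonicalization test above can be mimicked with synthetic markup. This sketch assumes nothing from the original measurements; it simply shows that mixed-case tags (`<P>` vs `<p>`) break up the repeated strings that deflate's dictionary matching relies on, so the canonicalized document compresses better:

```python
import random
import zlib

random.seed(0)  # fixed seed so the mixed-case document is deterministic

# Synthetic markup where each element's tag case is chosen at random.
body = "".join(
    "<{t}>some text</{t}>".format(t=random.choice(["p", "P"]))
    for _ in range(500)
)
canonical = body.lower()  # case-canonicalized version of the same document

mixed_size = len(zlib.compress(body.encode("ascii"), 9))
canon_size = len(zlib.compress(canonical.encode("ascii"), 9))
print(mixed_size, canon_size)  # the canonical form compresses smaller
```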
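The interaction with TCP slow start can be seen with back-of-the-envelope arithmetic. The model below is a deliberate simplification (the congestion window doubles every round trip, no losses, no delayed-ACK stalls, initial window of one segment), and the byte counts are illustrative rather than measured; it shows why halving a short response can remove whole round trips:

```python
import math

def slow_start_rtts(nbytes, mss=1460, initial_cwnd=1):
    """Round trips to deliver nbytes under pure slow start:
    the window doubles each RTT until all segments are sent."""
    segments = math.ceil(nbytes / mss)
    sent, cwnd, rtts = 0, initial_cwnd, 0
    while sent < segments:
        sent += cwnd   # segments delivered this round trip
        cwnd *= 2      # slow start: window doubles per RTT
        rtts += 1
    return rtts

print(slow_start_rtts(40000))  # 5 RTTs for an uncompressed 40 KB page
print(slow_start_rtts(10000))  # 3 RTTs for the same page deflated to 10 KB
```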

HTTP and TCP Interactions

A lot of the performance work on HTTP has gone into improving the interactions between HTTP and TCP. One parameter that has been discussed a lot is whether Nagle's algorithm affects HTTP performance negatively. Here is a small test showing the problem and what is causing it:
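Independently of the test itself, the usual mitigation is either to buffer each request into a single write or to disable Nagle's algorithm with the TCP_NODELAY socket option. Nagle's algorithm holds back small segments until outstanding data has been acknowledged, which can interact badly with delayed ACKs when an HTTP implementation writes a request in several small pieces. A minimal Python sketch of setting the option:

```python
import socket

# Disable Nagle's algorithm on a TCP socket so small writes go out
# immediately instead of waiting for the previous segment's ACK.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.close()
```

Buffering the whole request into one `send` is usually the better fix, since it avoids sending many tiny packets in the first place.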

HTTP and System Overhead

Using a single TCP connection for multiple downloads, instead of one TCP connection per request, significantly reduces the overhead in Web applications, because far fewer context switches and system calls are required. Here are some very crude samples of the measured effects:
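A crude model of where the savings come from: with one connection per request, each fetch pays for socket setup and teardown on top of the reads and writes, while a persistent connection pays the setup cost once. The per-request system-call counts below are illustrative assumptions, not measurements:

```python
def syscalls(requests, persistent):
    """Estimated system calls for a batch of HTTP requests under a
    simplified model (one write and one read per request)."""
    setup_teardown = 3   # socket() + connect() + close()
    per_request = 2      # one write() for the request, one read() for the reply
    if persistent:
        return setup_teardown + requests * per_request
    return requests * (setup_teardown + per_request)

print(syscalls(10, persistent=False))  # 50 calls: setup repeated every request
print(syscalls(10, persistent=True))   # 23 calls: setup paid only once
```

Real implementations issue multiple reads per response and may use writev or select, so actual counts differ, but the fixed per-connection cost dominates for small objects either way.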

TCP Analysis Tools

This is a list of the TCP tools that we have used for various tests and analysis.

tcpdump: The basic data-gathering tool; runs on UNIX systems. Some vendors do not ship tcpdump, and others ship older versions. We found it necessary to install a current version (version 3.3) on all the platforms we used, because the last FIN TCP packet was often missing from traces taken with older versions.
A Microsoft Windows NT utility. Its output is incompatible with tcpdump, but a conversion made it possible to use our tcpdump tools to handle the data.
xplot: T. Shephard's S.M. thesis, "TCP Packet Trace Analysis", written for David Clark at the MIT Laboratory for Computer Science. The thesis can be ordered from MIT/LCS Publications; ordering information is available at +1 617 253 5851 or by mail to publications@lcs.mit.edu (ask for MIT/LCS/TR-494). This tool was very useful for finding a number of problems in our implementation that were not visible in the raw dumps.
M. Ryan, I.T. NetworX Ltd., 67 Merrion Square, Dublin 2, Ireland, June 1996. The program was very useful when we needed to see the contents of packets in order to understand what was happening.

In addition to these generic TCP analysis tools, we produced a set of dedicated tools for handling the large amount of data taken:

A Perl program, included in the xplot package, that converts tcpdump output to xplot format. We had to extend the program significantly in order to handle our tcpdumps.
Getdata, a small C program that runs the robot in various modes while taking tcpdumps at the same time.
A Perl program that iterates over all the tcpdumps, extracting the detailed summary.
A Perl program that splits up a large tcpdump taken over the PPP connection.


Henrik Frystyk Nielsen and Jim Gettys