Network-Centric Measurement of Caching

Position Paper
Solom Heddaya
InfoLibria, Inc.
1998.10.20

Network managers deploy caching in order to improve their networks, not to achieve hit ratios, or op/s, or any of the other commonly used caching metrics. They seek to improve their network's bandwidth capacity, web service capacity and response time, without adversely affecting network latency, availability, or reliability.

Very little current research addresses these issues of pressing concern and value to network managers. Most of the current benchmarking and performance characterization research focuses on the behavior of the network cache as a server. While this approach can be useful in optimizing certain aspects of network caches, it does not accurately reflect the impact of caching on the network. Typical cache performance metrics, such as hit ratio (or hit rate), mystify network managers. They would much rather see metrics that quantify the promised network capacity expansion, response time speedup (to the end user), and availability enhancement. The reliability impact of caching ranks high on their list, too.
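
To make this translation concrete, consider a back-of-envelope model (a minimal sketch; the fluid-flow assumption and the sample numbers below are illustrative, not measurements): if a cache absorbs a fraction b of the bytes requested through a link, upstream traffic shrinks by the factor (1 - b), so the link can effectively carry about 1/(1 - b) times its former demand.

    # Back-of-envelope translation of a cache's byte hit ratio into the
    # network-centric metric managers care about: capacity expansion.
    # Assumption (illustrative): every byte served from the cache is a byte
    # removed from the upstream link, and link load is the only constraint.

    def capacity_expansion(byte_hit_ratio):
        """Multiplier on effective upstream capacity for a given byte hit ratio."""
        if not 0.0 <= byte_hit_ratio < 1.0:
            raise ValueError("byte hit ratio must be in [0, 1)")
        return 1.0 / (1.0 - byte_hit_ratio)

    # Example: a modest 40% byte hit ratio already yields a ~1.67x expansion.
    for b in (0.2, 0.4, 0.6):
        print(f"byte hit ratio {b:.0%} -> {capacity_expansion(b):.2f}x capacity")

The same hit ratio that means little to a manager in isolation becomes meaningful once restated, under these assumptions, as an expansion factor on a congested link.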

The network-centric point of view also impacts workload characterization. For example, request and response routing information needs to be included among workload characteristics, in order to address such problems as network cache placement.
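
As a hedged sketch of what such a routing-aware workload record might look like (all field names here are hypothetical, invented for illustration; a real trace format would be dictated by the measurement infrastructure), each logged request could carry the router-level path it traversed, so candidate cache locations can be ranked by the traffic that actually flows through them:

    # Sketch of a routing-aware workload record and a naive placement ranking.
    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Request:
        client: str
        server: str
        path: tuple      # router-level hops from client to server
        nbytes: int      # size of the response

    def rank_cache_sites(trace):
        """Rank router locations by the bytes of traffic passing through them."""
        through = Counter()
        for req in trace:
            for hop in req.path:
                through[hop] += req.nbytes
        return through.most_common()

    trace = [
        Request("c1", "s1", ("r1", "r2", "r3"), 12_000),
        Request("c2", "s1", ("r4", "r2", "r3"), 8_000),
    ]
    print(rank_cache_sites(trace))   # r2 and r3 see all 20,000 bytes

Placement along routing paths, as studied in [HMY97], is exactly the kind of problem that a purely server-centric trace cannot inform.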

In this paper, we argue that the repertoire of research in the field should widen to include the point of view of the network, and we provide an initial attempt at clarifying how this might be done, drawing on research from the field of performability.

Network Bottlenecks Oscillate

As the Internet grows at a historic pace, doubling in aggregate traffic rate every three to six months, it suffers from bottlenecks that frustrate its users. These bottlenecks oscillate between the two major constituents of the Internet: the client/server complex and the network itself. Recently, the Internet was stressed by the delivery of the Starr report [W98, K98b]. On that occasion, the bottleneck was the server(s). Under ordinary traffic conditions, however, the backbone delivers an unacceptably low 40 kilobit/s average transfer rate per TCP connection [K98a]. The latter measurement reflects transfer rates delivered not over modem lines, but over dedicated T1-class last hops.

Network caching applies server-like functionality to solve the network congestion problem. So, is a network cache to be judged on how well it functions as a server, or on the extent to which it improves the network? On the one hand, a network cache looks like a high performance server, whose performance can be characterized via the traditional metrics of throughput and response time [MA98]. On the other hand, a network cache expands bandwidth and speeds up response time. These benefits are the true goals of network caching. With only one exception, measurements of network cache performance continue to focus on the server aspect of network caching.
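
A similarly rough model makes the response-time claim concrete (again a sketch under stated assumptions, not a measurement): if a fraction h of requests hit in a nearby cache and are served at an assumed LAN-like rate, while the rest crawl across the backbone at roughly the 40 kilobit/s cited above, the mean per-connection transfer time, and hence the user-visible speedup, follows directly.

    # Rough model of user-visible speedup from a network cache.
    # Assumptions (illustrative): misses transfer at the ~40 kbit/s backbone
    # rate cited in [K98a]; hits at an assumed near-cache rate of 1 Mbit/s;
    # all transfers are the same size.

    MISS_RATE_BPS = 40_000       # congested backbone path, per [K98a]
    HIT_RATE_BPS = 1_000_000     # assumed near-cache delivery rate

    def mean_transfer_time(hit_ratio, size_bits):
        """Expected transfer time: hits go fast, misses crawl."""
        t_hit = size_bits / HIT_RATE_BPS
        t_miss = size_bits / MISS_RATE_BPS
        return hit_ratio * t_hit + (1.0 - hit_ratio) * t_miss

    size = 8 * 50_000            # a 50 KB object, in bits
    base = mean_transfer_time(0.0, size)
    for h in (0.3, 0.5):
        t = mean_transfer_time(h, size)
        print(f"hit ratio {h:.0%}: {t:.1f}s vs {base:.1f}s, speedup {base / t:.2f}x")

Note that it is mean waiting time, not mean rate, that matters to the end user; under these assumptions, even a 50% hit ratio nearly halves the mean wait when the miss path is the bottleneck.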

Server-Centric Performance Characterization

Network caches, as most commonly implemented today, originated from work on high performance web servers. From the point of view of the network, servers are hosts, while network caches are more like routers or switches. The dominance of the server point of view in characterizing cache workload and cache performance can be seen by noting the following:

Network-Centric Performance Evaluation

A number of network-related factors must be taken into account for network caching to be evaluated in the proper context. These range from network topology, to capacity enhancement, to the effect of cascaded caches on each other's performance. Furthermore, network and web content availability need to be suitably defined and quantified. The requirements for such a network-centric performance model include:

Long Term vs. Short Term

Aside from the tactical effects of network caching, which can be quantified reasonably well using the approach outlined above, we should not ignore caching's strategic impact on the network. For example, network scalability can be dramatically enhanced (or hampered) by caching. If the network scales by upgrading individual links, then a parallel computing solution to caching would be suitable; but if the network grows primarily by adding new links and nodes, then a distributed computing approach would be preferable.

References

[AC98] J. Almeida and P. Cao, "Wisconsin Proxy Benchmark 1.0", University of Wisconsin, (as of Oct. 20, 1998).

[HHE97] A. Heddaya, A. Helal, and A. Elmagarmid, "Recovery-Enhanced Reliability, Dependability and Performability," Chapter 4 in Recovery Mechanisms in Database Systems (V. Kumar and M. Hsu, eds.), Prentice-Hall, Dec. 1997.

[HMY97] A. Heddaya, S. Mirdad and D. Yates, "Diffusion-based Caching Along Routing Paths", Proc. 2nd Web Caching Workshop, Boulder, Colorado, June 9-10, 1997.

[K98a] Keynote Systems, Inc., "Top 10 Discoveries about the Internet", (as of Oct. 20, 1998).

[K98b] Keynote Systems, Inc., "Clinton/Lewinsky Scandal: Effect on Internet Performance", Oct. 6, 1998.

[MA98] D.A. Menasce and V.A.F. Almeida, Capacity Planning for Web Performance: Metrics, Models, & Methods, Prentice-Hall, 1998.

[W98] D. Wessels, "Report on the Effect of the Independent Counsel Report on the NLANR Web Caches", NLANR, Sep. 23, 1998.