Position Paper
Workshop on Web Characterization
The abstract is repeated here (the above link refers to the full paper):
Caching in the World Wide Web is based on two critical assumptions: that a significant fraction of requests reaccess resources that have already been retrieved; and that those resources do not change between accesses.
We tested the validity of these assumptions, and their dependence on characteristics of Web resources, including access rate, age at time of reference, content type, resource size, and Internet top-level domain. We also measured the rate at which resources change, and the prevalence of duplicate copies in the Web.
We quantified the potential benefit of a shared proxy-caching server in a large environment by using traces that were collected at the Internet connection points for two large corporations, representing significant numbers of references. Only 22% of the resources referenced in the traces we analyzed were accessed more than once, but about half of the references were to those multiply-referenced resources. Of this half, 13% were to a resource that had been modified since the previous traced reference to it.
We found that the content type and rate of access have a strong influence on these metrics, the domain has a moderate influence, and size has little effect. In addition, we studied other aspects of the rate of change, including semantic differences such as the insertion or deletion of anchors, phone numbers, and email addresses.
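As a rough illustration of how such metrics can be derived from an access trace, the sketch below counts multiply-referenced resources, references to them, and re-references to resources that changed in between. The trace format (one (URL, last-modified) pair per request) and the code itself are simplifying assumptions for illustration, not the analysis actually performed on the corporate traces.

    from collections import defaultdict

    def reference_metrics(trace):
        # `trace`: iterable of (url, last_modified) pairs, one per request,
        # in request order -- a simplified, hypothetical trace format.
        access_count = defaultdict(int)   # url -> requests seen so far
        last_version = {}                 # url -> version at the previous request
        total = changed_on_reaccess = 0

        for url, last_modified in trace:
            total += 1
            if access_count[url] > 0 and last_version[url] != last_modified:
                changed_on_reaccess += 1  # re-reference to a modified resource
            access_count[url] += 1
            last_version[url] = last_modified

        multiply_referenced = sum(1 for n in access_count.values() if n > 1)
        refs_to_multi = sum(n for n in access_count.values() if n > 1)
        return {
            "resources": len(access_count),
            "multiply_referenced_resources": multiply_referenced,
            "references": total,
            "references_to_multiply_referenced": refs_to_multi,
            "re_references_to_changed_resources": changed_on_reaccess,
        }

The percentages quoted above correspond roughly to multiply_referenced_resources / resources, references_to_multiply_referenced / references, and re_references_to_changed_resources / references_to_multiply_referenced.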
Ultimately, distributed notification of changes may prove useful if it can be shown to scale well. Eolian's EagleWatch is one example: a master site monitors content and propagates change notifications to other caches, although the master must itself poll the content providers. Pushing notifications from content providers to these distributed caches is perhaps the next step. Measurements of access patterns and of the rate at which Web data changes are crucial to determining how effective this approach can be.
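To make the structure concrete, here is a minimal sketch of a master that polls content providers and pushes change notifications to subscribed caches. The polling interface is an invented placeholder; this is not EagleWatch's actual mechanism.

    class NotificationMaster:
        # Sketch only: poll_fn(url) is assumed to return a version token
        # (e.g., a Last-Modified date) for the resource; how the master
        # actually contacts the content provider is left abstract.
        def __init__(self, poll_fn):
            self.poll_fn = poll_fn
            self.known_version = {}   # url -> last version token observed
            self.subscribers = {}     # url -> callbacks registered by caches

        def subscribe(self, url, notify_cb):
            self.subscribers.setdefault(url, []).append(notify_cb)

        def poll_once(self):
            # One polling pass over the content providers; any detected
            # change is pushed to every cache subscribed to that URL.
            for url, callbacks in self.subscribers.items():
                version = self.poll_fn(url)
                previous = self.known_version.get(url)
                if previous is not None and previous != version:
                    for notify in callbacks:
                        notify(url, version)
                self.known_version[url] = version

A subscribing cache would register a callback that simply invalidates its copy of the given URL when notified.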
Another way to reduce the cost of polling sites for change notifications is to provide an API that lets browsers, caches, or other software systems download information about many recent updates at once. From the standpoint of caching, recent protocols such as Piggyback server invalidation (PSI) and server volumes are mechanisms for amortizing the overhead of cache consistency. At a previous W3C workshop, I proposed similar support for user-level tracking of changes, rather than for cache validation. I never pursued that approach, but more complete information about how data changes on the Web would permit trace-driven analysis.
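As an illustration of what such an API might look like (a hypothetical interface, not PSI or server volumes themselves), a server could expose its recent change history so that a client can validate many cached entries in a single exchange:

    def changes_since(change_log, since):
        # change_log: hypothetical server-side list of (timestamp, url)
        # records for recently modified resources.  Returns every URL
        # modified after `since`, so a cache or browser can refresh many
        # entries in one round trip instead of polling each resource.
        return sorted({url for ts, url in change_log if ts > since})

    # Example: a client that last synchronized at time 150.
    log = [(100, "http://example.com/a"), (180, "http://example.com/b")]
    print(changes_since(log, since=150))   # -> ['http://example.com/b']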
Finally, workload characterization is at the heart of a raging debate about the effectiveness of caching proxies in today's Internet. Some traces have shown as many as 35% of requests involving cookies; if those resources are truly uncacheable, that places a substantial limit on the best hit rate any proxy can achieve. I am participating in ongoing studies of the effectiveness of caching and the impact of changing data, and hope to share these with the W3C and the rest of the research community.
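A back-of-the-envelope calculation illustrates the limit. The sketch assumes, pessimistically, that every cookie-bearing request is uncacheable; since the overlap between cookie-bearing requests and repeat references is unknown, it reports a range rather than a single figure.

    def hit_rate_bounds(repeat_fraction, uncacheable_fraction):
        # Only repeat references can hit, and cookie-bearing requests are
        # treated as entirely uncacheable.  With the overlap between the
        # two populations unknown, the achievable hit rate lies somewhere
        # between the pessimistic and optimistic bounds below.
        optimistic = min(repeat_fraction, 1.0 - uncacheable_fraction)
        pessimistic = max(0.0, repeat_fraction - uncacheable_fraction)
        return pessimistic, optimistic

    # Roughly half of all references repeat; ~35% of requests carry cookies.
    print(hit_rate_bounds(0.50, 0.35))   # -> about (0.15, 0.5)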
I am also the program vice-chair for the performance track of WWW8, and the program chair of USITS'99.