This is the problem of how a hierarchical cache, such as the one in use in New Zealand, can optimize retrieval latency for non-cachable resources.

In the New Zealand case, because the country has a very limited-bandwidth connection to the rest of the world, a national cache is used to avoid overloading this international link. Additional caches scattered around the country normally go first to this national cache, but they can bypass it and go directly to an overseas origin server if the national cache isn't expected to have the appropriate cache entry.
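
A rough sketch of that routing decision, in Python. All of the names here (NATIONAL_PARENT, fetch_via, fetch_from_origin, route_request) are hypothetical placeholders, not taken from any real cache implementation; the likely_cachable test is the one sketched after the next paragraph.

    # Purely illustrative: how a regional cache might decide whether to go
    # through the national parent cache or straight to the origin server.
    NATIONAL_PARENT = "national-cache.example.nz"   # assumed parent cache host

    def fetch_via(parent, method, url):
        # Placeholder: forward the request up to the parent (national) cache.
        return "forwarded %s %s via %s" % (method, url, parent)

    def fetch_from_origin(method, url):
        # Placeholder: contact the overseas origin server directly.
        return "sent %s %s straight to the origin server" % (method, url)

    def route_request(method, url):
        if likely_cachable(method, url):
            # The national cache may already hold a copy, saving a trip
            # over the limited international link.
            return fetch_via(NATIONAL_PARENT, method, url)
        # No realistic chance of a cache hit, so skip the extra hop.
        return fetch_from_origin(method, url)

    # e.g. route_request("GET", "http://www.example.com/page.html")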

For example, if a client does a GET with a normal (non-"?") URL, the request flows up the cache hierarchy, because responses to GETs are normally stored in caches. A POST, however, is sent directly to the origin server, because there is no point in routing it through the cache hierarchy: in today's HTTP/1.0 world there is no chance that the caches would be helpful here.
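
A minimal sketch of that per-request test, under the HTTP/1.0-era assumptions just described (the function name likely_cachable is invented for illustration; it is the same test assumed in the routing sketch above):

    def likely_cachable(method, url):
        # POST responses are not cached in today's HTTP/1.0 world, so there
        # is no point sending a POST up through the hierarchy.
        if method == "POST":
            return False
        if method == "GET":
            # A GET on a normal URL (no "?" query part) is the usual
            # cachable case; "?" URLs are treated as non-cachable.
            return "?" not in url
        # Other methods: conservatively bypass here; see the variant below
        # for the "err toward cachability" alternative.
        return False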

To do request-bypassing as efficiently as possible, the caches have to be able to determine from the request whether the response is likely to be cachable. (I would assume that it is important to err on the side of assuming cachability, since the converse could seriously reduce the effectiveness of the caches.)
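
If one accepts that bias, the test above could instead default the unclear cases to "assume cachable", so that a wrong guess costs only an extra hop through the national cache rather than a lost chance at a shared cache hit. Again, this is only an illustration, not something the group agreed on:

    def likely_cachable(method, url):
        # Known non-cachable cases: bypass the hierarchy.
        if method == "POST" or (method == "GET" and "?" in url):
            return False
        # Everything else: err on the side of assuming cachability, so the
        # request still flows up through the national cache.
        return True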

We didn't come up with a good solution to this problem in general (i.e., for GETs whose responses are not cachable, or for other methods whose responses *are* cachable), but there was some brief discussion of the proposed "POST with no side effects" method.

DEFERRED ITEM: what to do about bypassing?

