Webapps/AppCacheUseCases

Revision as of 14:54, 2 May 2013

1. Initial loading - Yandex

Our SERP (and the Yandex main page, www.yandex.ru) uses embedded styles and scripts, because that loads faster than making multiple separate requests for styles, scripts, and so on.

But users download them again every time they visit the results page, because the browser doesn't cache them. It would be nice, on the first visit, to extract the styles and scripts and store them in the cache.
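
A minimal sketch of what that extraction step might look like, assuming the hypothetical HttpCache API sketched in the later use cases (the data-cache-url attribute is an illustrative convention, not an existing one):

    // Hypothetical: on first visit, pull the inlined <style>/<script> blocks
    // out of the page and store them under stable URLs for later visits.
    // data-cache-url is an assumed convention naming the URL to cache under.
    document.querySelectorAll('style[data-cache-url], script[data-cache-url]')
      .forEach(function (node) {
        HttpCache.store(node.getAttribute('data-cache-url'), node.textContent);
      });
    // On later visits the server could emit plain references instead, e.g.
    // <link rel="stylesheet" href="//yandex.st/serp/serp.css">, and the
    // browser would satisfy them from HttpCache without a network round trip.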

2. Bundles - Yandex

Sometimes we need to load several resources (js/css/json/...) before we can actually show something to the user: a dialog, some other complex control, or, in a single-page application, the next "page". Again, it's often faster to make one request than several, but it would be faster still if we could then cache the pieces separately: HttpCache.store(url1, content1); HttpCache.store(url2, content2); ... so that later we can use the files as usual (<script>, <link>, ...).
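
A sketch of how a bundle might be unpacked into separate cache entries, again assuming the hypothetical HttpCache API (the bundle format here, a JSON map of URL to file content, is an illustrative assumption):

    // Hypothetical: fetch one bundle, then store each member under its own URL.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/bundles/dialog.json');
    xhr.onload = function () {
      var bundle = JSON.parse(xhr.responseText); // { "url1": "content1", ... }
      Object.keys(bundle).forEach(function (url) {
        HttpCache.store(url, bundle[url]);
      });
      // Now plain <script src="url1"> / <link href="url2"> resolve from cache.
    };
    xhr.send();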

3. Diffs (delta updates) - Yandex

Every static file (js/css/...) has a version, e.g. http://yandex.st/mail/1.3.8/mail.js. When we release a new version, our users have to download it again. It could be hundreds of kilobytes (or more), but the difference between versions is often not very big. So we want to ship delta updates.

It would be nice if we could download the diff, apply it in the browser, and store the updated file in the cache, e.g.:

    var oldUrl = 'http://yandex.st/mail/1.3.8/mail.js';
    var newUrl = 'http://yandex.st/mail/1.3.9/mail.js';
    var oldContent = HttpCache.get(oldUrl);
    var newContent = applyPatch(oldContent, patch); // patch downloaded separately
    HttpCache.store(newUrl, newContent);


4. Preloading - Yandex

Well, we can use a normal XHR for that, but maybe we can do more with HttpCache.

Basically we want methods for loading resources, storing them in the cache, fetching them from the cache, checking whether something is in the cache, and so on.
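
For illustration, a preloading sketch built on the same hypothetical HttpCache methods (load, store, get, and has are all assumed names, not an existing browser API):

    // Hypothetical preloading flow: warm the cache for resources the user is
    // likely to need next, without inserting them into the page yet.
    ['/next/page.js', '/next/page.css'].forEach(function (url) {
      if (!HttpCache.has(url)) {
        HttpCache.load(url); // fetch over the network and store in the cache
      }
    });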

Community-content site - Alec Flett

Logged-out users have content cached aggressively offline, meaning every page visited should be cached until told otherwise. Intermediate caches / proxies should be able to cache the latest version of a URL. As soon as a user logs in, the same URLs they just used should now have editing controls. (Note that the actual page contents *may* not have changed, just the UI.) Pages now need to be "fresh", meaning that users should never edit stale content. In an ideal world, once a logged-in user has edited a page, that page is "pushed" to users or proxies who have previously cached it and will likely visit it again soon.
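
For contrast, the logged-out/logged-in split can be approximated with today's HTTP caching headers; a hedged Node.js-style sketch of the baseline this use case improves on (isLoggedIn is a hypothetical helper):

    // Hypothetical server-side sketch: cache aggressively for anonymous
    // users, force revalidation for logged-in users who may edit.
    function setCacheHeaders(req, res) {
      if (isLoggedIn(req)) {
        // Editors must never see stale content.
        res.setHeader('Cache-Control', 'private, no-cache');
      } else {
        // Anonymous pages may be cached by browsers and shared proxies.
        res.setHeader('Cache-Control', 'public, max-age=86400');
      }
      res.setHeader('Vary', 'Cookie'); // keep proxies from mixing the two
    }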

I know this example in particular seems like it could be accomplished with a series of If-Modified-Since requests and 304s, but connection latency is the killer here, especially on mobile: you stare at a white screen while you wait to find out whether the page has changed. The idea that you could visit a cached page (i.e. avoid hitting the network) and then a few seconds later be told "there is a newer version of this page available" after the fact (or even just have the page silently updated so the next visit delivers a fresh but network-free page) would be pretty huge. Especially if you could then proactively fetch a select set of pages, i.e. imagine an in-browser process that says "for each link on this page, if I have a stale copy of the URL, go fetch it in the background so it is ready in the cache".
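
A sketch of that flow, reusing the hypothetical HttpCache API from the Yandex cases above (assuming load returns a promise; render and notifyUser are hypothetical page-supplied helpers):

    // Hypothetical "serve stale, then revalidate and prefetch" flow.
    function showPage(url) {
      var cached = HttpCache.get(url);
      if (cached) {
        render(cached); // instant, network-free paint
        HttpCache.load(url).then(function (fresh) {
          if (fresh !== cached) {
            notifyUser('A newer version of this page is available');
          }
        });
      } else {
        HttpCache.load(url).then(render); // cold cache: fetch as usual
      }
      // Background prefetch: refresh any stale copies of linked pages.
      document.querySelectorAll('a[href]').forEach(function (a) {
        if (HttpCache.has(a.href)) {
          HttpCache.load(a.href); // re-fetch so the next visit is network-free
        }
      });
    }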