From W3C Wiki

1. Initial loading - Yandex

Our SERP (and the Yandex main page, www.yandex.ru) uses embedded styles and scripts, which loads faster than making multiple requests for separate styles/scripts/...

But users load them every time they visit the results page, because the browser doesn't cache them. It would be nice if, on the first visit, we could extract the styles and scripts and store them in the cache.
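The extract-and-store idea could look roughly like this. HttpCache is the API proposed in this wiki, not an existing browser interface (a plain Map stands in for it below so the sketch can run anywhere), and the URL scheme for the extracted pieces is an invented illustration:

```javascript
// Stand-in for the proposed HttpCache API (not an existing interface).
const HttpCache = {
  entries: new Map(),
  store(url, content) { this.entries.set(url, content); },
  get(url) { return this.entries.get(url); },
};

// Pull inline <style> and <script> blocks out of a served page and store
// each one under its own URL, so later visits can reference cached
// external files instead of re-downloading the inlined bytes.
function extractInline(html, baseUrl) {
  const pattern = /<(style|script)>([\s\S]*?)<\/\1>/g;
  let match, i = 0;
  while ((match = pattern.exec(html)) !== null) {
    const ext = match[1] === 'style' ? 'css' : 'js';
    HttpCache.store(`${baseUrl}/inline-${i++}.${ext}`, match[2]);
  }
}

extractInline(
  '<html><style>body{margin:0}</style><script>init();</script></html>',
  'https://example.test/serp'
);
```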

2. Bundles - Yandex

Sometimes we need to load several resources (js/css/json/...) before we can actually show something to the user: a dialog, some other complex control, or, in a single-page application, before changing the "page". Again, it's often faster to make one request than several, but it would be even faster if we could then cache the pieces separately:

HttpCache.store(url1, content1);
HttpCache.store(url2, content2);
...

So that later we can use the files as usual (<script>, <link>...).
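A runnable sketch of this bundle idea, assuming a JSON map of URL to content as the bundle format (an illustration, not a defined format; the URLs are invented) and using a plain object as a stand-in for the proposed HttpCache API:

```javascript
// Stand-in for the proposed HttpCache API (not an existing interface).
const HttpCache = {
  entries: new Map(),
  store(url, content) { this.entries.set(url, content); },
  get(url) { return this.entries.get(url); },
};

// One response carrying several resources. The shape (JSON map of
// URL -> content) is an assumption for illustration only.
const bundle = {
  'https://yandex.st/dialog/dialog.js': 'function openDialog() {}',
  'https://yandex.st/dialog/dialog.css': '.dialog { display: none }'
};

// Store each part under its own URL, so that later <script>/<link>
// references hit the cache individually.
for (const [url, content] of Object.entries(bundle)) {
  HttpCache.store(url, content);
}
```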

3. Diffs (delta updates) - Yandex

Every static file (js/css/...) has a version, e.g. http://yandex.st/mail/1.3.8/mail.js. When we release a new version, our users have to download it. It could be hundreds of kilobytes (or more), but the difference between versions is often not very big. So we want to make delta updates.

It would be nice if we could download the diff, apply it in the browser and store the update in cache e.g.:

var oldVersion = '1.3.8';
var newVersion = '1.3.9';
var oldContent = HttpCache.get(oldUrl);
var newContent = applyPatch(oldContent, patch);
HttpCache.store(newUrl, newContent);
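The applyPatch function and the patch format are left open above. Here is a minimal runnable sketch using a toy copy/insert patch format; the format is an assumption for illustration, and real delta encodings such as VCDIFF or bsdiff are far more compact, but the cache-side flow is the same:

```javascript
// Toy patch format: a list of ops, each either
//   ['copy', start, end] -> copy oldContent.slice(start, end)
//   ['insert', text]     -> emit new text
function applyPatch(oldContent, patch) {
  let out = '';
  for (const op of patch) {
    if (op[0] === 'copy') out += oldContent.slice(op[1], op[2]);
    else out += op[1];
  }
  return out;
}

const oldContent = 'var version = "1.3.8"; f();';
const patch = [
  ['copy', 0, 15],    // 'var version = "'
  ['insert', '1.3.9'],
  ['copy', 20, 27],   // '"; f();'
];
const newContent = applyPatch(oldContent, patch);
```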

4. Preloading - Yandex

Well, we can use normal XHR for that, but maybe we can do more with HttpCache.

Basically we want methods for loading resources, storing them in cache, fetching them from cache, checking if something is in the cache, ...
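The surface asked for here could look like the following sketch. The class, the method names, and the injected synchronous fetch function are all illustrative assumptions (a real implementation would be asynchronous), not a proposed spec:

```javascript
// Sketch of the requested cache surface: load into cache (preload),
// store explicitly, read back, and test membership. The fetch function
// is injected so the sketch stays self-contained; it is kept
// synchronous here purely for brevity.
class HttpCache {
  constructor(fetchFn) {
    this.fetchFn = fetchFn;
    this.entries = new Map();
  }
  preload(url) {                 // fetch and cache in one step
    if (!this.entries.has(url)) this.entries.set(url, this.fetchFn(url));
  }
  store(url, content) { this.entries.set(url, content); }
  get(url) { return this.entries.get(url); }
  has(url) { return this.entries.has(url); }
}

// Usage with a fake fetcher standing in for the network.
const cache = new HttpCache((url) => `contents of ${url}`);
cache.preload('https://example.test/app.js');
```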

Community-content site - Alec Flett

Logged-out users have content cached aggressively for offline use, meaning every page visited should be cached until told otherwise. Intermediate caches / proxies should be able to cache the latest version of a URL. As soon as a user logs in, the same URLs they just visited should now have editing controls. (Note that the actual page contents *may* not have changed, just the UI.) Pages now need to be "fresh", meaning that users should never edit stale content. In an ideal world, once a logged-in user has edited a page, that page is "pushed" to users or proxies who have previously cached that page and will likely visit it again soon.

I know this example in particular seems like it could be accomplished with a series of If-Modified-Since / 304s, but connection latency is the killer here, especially for mobile: you get a white screen while you wait to see if the page has changed. The idea that you could visit a cached page (i.e. avoid hitting the network) and then a few seconds later be told "there is a newer version of this page available" after the fact (or even just silently update the page so the next visit delivers a fresh but network-free page) would be pretty huge. Especially if you could then proactively fetch a select set of pages: imagine an in-browser process that says "for each link on this page, if I have a stale copy of the URL, go fetch it in the background so it is ready in the cache".
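The "render from cache, then revalidate and notify" flow described above can be sketched as follows. Every name here is illustrative, the cache is a plain Map, and the "background" fetch is synchronous purely to keep the sketch short:

```javascript
// Serve the cached copy immediately, then revalidate and invoke a
// callback if a newer version arrived. Nothing below is an existing
// browser API; it is a sketch of the behavior the use case asks for.
function staleWhileRevalidate(url, cache, fetchFn, onUpdate) {
  const cached = cache.get(url);
  const fresh = fetchFn(url);   // would run in the background in reality
  if (fresh !== cached) {
    cache.set(url, fresh);
    onUpdate(fresh);            // e.g. show "newer version available"
  }
  return cached;                // render this without waiting on the network
}

const cache = new Map([['https://wiki.test/page', 'v1']]);
let notified = null;
const shown = staleWhileRevalidate(
  'https://wiki.test/page',
  cache,
  () => 'v2',                          // fake fetch returning new content
  (fresh) => { notified = fresh; }
);
```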

Small simple game - Jonas

The game consists of a set of static resources. A few HTML pages, like high score page, start page, in-game page, etc. A larger number of media resources. A few "data resources" which contain level metadata. Small amount of dynamic data being generated, such as progress on each level, high score, user info. In-game performance is critical, all resources must be guaranteed to be available locally once the game starts. Little need for network connectivity other than to update game resources whenever an update is available.

Interested implementors: Zynga

Advanced game - Jonas

Same as simple game, but also downloads additional levels dynamically. Also wants to store game progress on servers so that it can be synced across devices.

Interested implementors: Zynga

Wikipedia - Jonas

Top level page and its resources are made available offline. Application logic can enable additional pages to be made available offline. When such a page is made available offline, both the page and any media resources that it uses need to be cached. Doesn't need to be updated very aggressively, maybe only upon user request.

Twitter - Jonas

A set of HTML templates that are used to create a UI for a database of tweets. The same data is visualized in several different ways, for example in the user's default tweet stream, in the page for an individual tweet, and in the conversation thread view. Downloading the actual tweet contents and metadata shouldn't need to happen multiple times in order to support the separate views. The URLs for viewing individual tweets need to be the same whether or not the user is using appcache, so that linking to a tweet always works.

It is very important that users are upgraded to the latest version of scripts and templates very quickly after they become available. The website will likely want to be able to check for updates on demand rather than relying on implementation logic.

If the user is online but has appcached the website, it should be able to use the cached version. This should be the case even if the user navigates to the page for a tweet whose content and metadata haven't yet been cached; in that case only the tweet content and metadata should need to be downloaded, and the cached templates should be used. If the user does not have Twitter in the appcache and navigates to the URL for an individual tweet, the website needs to be able to send a page which inlines resources such as CSS and JS files. This is important in order to avoid additional round trips.

Webmail - Jonas

A lot of similarities with the Twitter use case. The website is basically a UI for a database of emails. However, it's additionally important that the user can compose emails, including attaching attachments, which are saved and synchronized once the user goes online. There are also other actions that the user might have taken while offline. This means that complicated conflict resolution might need to be done in order to synchronize with changes that have happened on the server.

Blog reading - Jonas

Store the last X days of blog posts locally. Each blog post consists of the blog text as well as a few images. Other websites can link to individual posts. Each post contains a list of comments for the post. Adding comments should be possible even while offline. Once the user goes online it should be possible to submit these comments.

Blog authoring - Jonas

Same as blog reading, but probably want to cache a larger set of posts. Repository of unpublished posts should be available for editing offline. Once the user goes online these edits are synced to server, and any posts that were published while offline are automatically published. Both adding and removing comments should be possible while offline. These changes too are published once user goes online.

News website - Jonas

Front page with links to various articles. Each article, as well as the front page, contains both text and images/media. Both the front page and articles contain ads. A set of "top" articles is automatically cached and kept up-to-date. Potentially users can configure additional areas of interest, which would cause additional articles from those areas to get cached.

Maps - @girlie_mac aka Tomomi Imura

Especially for mobile, frequently used UI components, such as map navigation/zoom controls and POI icons, can be cached. Also, possibly, map tile images and data near the user's initial location, so that even if the user loses connectivity on the street, they can still navigate the surrounding area.