This paper represents my personal position in the W3C Workshop on The Future of Off-line Web Applications and essentially attempts to explore what supporting Web applications requires if we take them to their logical world-dominance conclusion. The views expressed here are not those of the DAP WG, nor of any of my customers as a consultant.
The primary methodological bias in this paper is to stop kidding ourselves that there even should be a way of creating applications that does not involve the Web. Once we relegate so-called “native” code to the commoditised lower-level plumbing it was always its destiny to be used for, an interesting consequence emerges: the vast majority of mobile device apps in circulation today are at the trivial end of the use cases spectrum. They rarely manage highly sensitive data that isn't replicated. One is rarely at risk of losing several days' work due to an upgrade. They are rarely on any kind of critical path for doing business. In other words, the pain level involved when they fail is not high enough to make sure that we get the framework in which they are created right.
The use cases of interest for applications are replacing Photoshop and Illustrator for a graphics designer. Replacing Mathematica and Maple for a mathematician. Replacing Office for whoever it is that actually understands that thing. Bringing the Web to medical equipment and vehicular control interfaces.
As a result, the rest of this paper should be read with that kind of application in mind rather than the likes of Super Duper Sudoku, Llama Farts 5.2, or Scantily Clad Buxom Wench Background Theme Manager.
There is no particular order to the sections.
It may be that the best way forward relies on a revision to an existing solution, but that is not where we should start the conversation. AppCache is elegant, simple, webby — but (as currently implemented) sheds too much of the discrete nature of traditional apps. Widgets are more traditional, but shed too much of the Web.
JSON or XML? At this stage the answer is: whatever. Such topics are at the same level as figuring out which of Widgets or Application Cache wins the worst naming contest.
There are applications for which seamless upgrading is a workable option, but once an application becomes critical to the success of a given endeavour, seamless upgrading ceases to be desirable. One should be able to ensure that an application does not in any way or manner change while it is being used for a given project.
What's more, it should ideally be possible to downgrade an application to a previous version if an upgrade was attempted and did not work out well. While less frequent than requiring an application simply not to change under one's feet, this requirement is nevertheless vital: when it is needed, it generally means that the user is unable to get work done with the new version, is losing data, etc.
Note that the ability to support such requirements need not be provided directly by a given format or approach: we simply need to ensure that the selected approach does not prevent the UA from implementing this (possibly by maintaining snapshots) for the user's benefit.
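To make the snapshot idea concrete, here is a minimal sketch of what a UA might do internally; all names are hypothetical, and this illustrates the pattern rather than any specified mechanism:

```javascript
// Hypothetical sketch: a UA-side store that keeps a snapshot of each
// installed version of an application, so the user can keep running a
// known-good version or roll back after a bad upgrade.
class AppVersionStore {
  constructor() {
    this.snapshots = new Map(); // version -> { resource name -> content }
    this.current = null;
    this.previous = null;
  }
  install(version, resources) {
    // Keep the earlier snapshot instead of overwriting it.
    this.snapshots.set(version, resources);
    this.previous = this.current;
    this.current = version;
  }
  rollback() {
    // Downgrade to the previously installed version, if any.
    if (this.previous === null) throw new Error("no earlier version");
    this.current = this.previous;
    this.previous = null;
  }
  resources() {
    return this.snapshots.get(this.current);
  }
}
```

The point is only that the snapshots live on the user's side, under the user's control, regardless of what the server is now serving.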
Sitting at home or in an office with several dozen simultaneous downloads may leave one with the impression that cheap and fast networks are, if not universal, then soon to be. Upgrading several hundred megabytes' worth of software over 3G while commuting shows that this is not the case, and in some places may not be for a long time.
If you're roaming internationally, even with a fast network you may have to sell a few internal organs to pay for even a few megabytes' worth of applications. You may be able to get to a Wi-Fi spot in most places, but if you're in a country that's primarily networked through slow and stupendously expensive (especially in local terms) VSAT connections, then loading applications from sources other than the network can be highly desirable.
Even given cheap networks, some applications can be quite massive, sometimes requiring multiple DVDs to install. There is no reason why we should rule the Web out for such use cases. We should therefore strive to make sideloading possible.
Such a requirement comes with a number of interesting questions, notably how one goes about assigning a proper URL to such a sideloaded package without making it possible for it to hijack an existing site. The widget: URI scheme is one approach to this, but I feel that it does not go far enough: ideally we should be able to assign an HTTP URI to such packages and make sure that they can be launched exactly as if they were a given remote site (and then even get partially updated if a small component needs freshening).
As a side-note I will point out that some form of packaging mechanism could help not just for applications but also for the various dependencies of a Web page both in serving them efficiently and in making sure that they are loaded from the same version in cases where CDNs go out of sync.
It is natural to start experimenting with taking Web applications to the next level by integrating them better in the browser. One has to wonder, however, whether browsers can continuously take on more and more user-exposed functionality at the same time as the real estate dedicated to chrome tends to diminish. What's more, in order to make Web applications equal in the system, they should be UI-indistinguishable from whatever the system supports.
While the exact UI implications of this will be worked out incrementally, one thing is certain: we need a Web security model that can scale to applications that use many privileged capabilities, and that may be running outside of the browser chrome (which has traditionally provided an anchor for trust).
Existing solutions to this problem that go beyond what the existing Web security model does have usually proven rather poor. Upfront manifest-based permission requests are essentially as good as providing no security at all, given how little attention users pay to them. Policy-based approaches have tended to be heavy-handed and potentially difficult to code against, and they generally require trusting a third party, which is an undesirable property in most cases.
One potential solution is to consider that the vast majority of applications do not require all that much access to local functionality and can be handled in the same manner as general-purpose Web applications are, which is to say by occasionally being granted a little additional, limited, and well-scoped privilege ("installed" applications possibly benefitting from greater persistence of such grants). Then, for the set of applications that require greater access to specific, potentially harmful functionality, don't try to finesse the access: simply give them unfettered access to everything, but run them in such a way that it is clear to users that they are using possibly dangerous code (i.e. if a trivial application such as a game asks for this, they should worry). Such applications should never be launchable from a browser UI.
It may seem tempting to define offline functionality without considering these topics, but experience shows that all of this tends to tie together, and that no matter what shape the various parts take we will need to ensure that all of them integrate cleanly.
In the present stack, switching browsers is a relatively cheap operation for end-users. Beyond differences in UI, and (for most users) at most a few extensions, one may lose bookmarks (which can be synchronised) and the likes of saved passwords (which are organically restored over time) but not much else. This is a highly desirable property that needs to be preserved.
At the same time, the Web becoming the platform should not entail that everything should sit in the cloud (as commonly understood currently). There is data that users want to keep strictly local (for reasons both bad and good) and that wish should be respected.
This entails that I should not lose my locally stored data when I switch between browsers that power my applications. There are several ways, each with different (de)merits, in which this could be achieved: interoperable import/export formats for local stores, a way of storing an application's data with the application itself when transferring it (or using it from multiple UAs), or perhaps a common specification for a securely encrypted sync server (which could be locally installed if necessary).
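The first of those options can be sketched very simply. Assuming a hypothetical JSON envelope (application name, schema version, data), not any standardised format:

```javascript
// Sketch: an interoperable export/import format for an application's
// local store, so data survives a switch between browsers. The envelope
// fields (app, version, data) are illustrative, not a standard.
function exportStore(appName, version, store) {
  return JSON.stringify({
    app: appName,
    version: version,
    data: [...store.entries()], // key/value pairs, order preserved
  });
}

function importStore(serialized) {
  const { app, version, data } = JSON.parse(serialized);
  return { app, version, store: new Map(data) };
}
```

Whatever the exact format, the hard part is less the serialisation than agreeing on it across UAs, which is precisely why it belongs in a common specification.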
Even with support from the browser in terms of caching resources, developing applications that function equally well when disconnected can be clumsy and challenging. There are many cases in which operations conducted on the client side could be stored temporarily while disconnected and later synchronised with the server. Ideally, the client should be able to behave like a fallback server, as transparently as possible from the developer's point of view. Naturally, there will always be cases that will not be entirely workable offline, but making those that can be "just work" will be a boon to developers.
While I have long been rather uninterested in it, I have to admit that the more server-side JS development I work on, the more the Programmable HTTP Caching and Serving API (or a variant thereof) seems attractive. At the very least, the History API increasingly seems to have been built backwards: one far more often wishes to say "if the requested URI looks something like this, let me handle it" than to watch every link inserted into the document in case it points to a given URI of interest.
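The "let me handle URIs that look like this" style can be illustrated with a few lines; the registration function and its first-match semantics are hypothetical, not part of any existing API:

```javascript
// Sketch of pattern-based request handling, the inverse of watching
// individual links: the application declares the URI shapes it serves.
const routes = [];

function handleRequests(pattern, handler) {
  routes.push({ pattern, handler });
}

function dispatch(uri) {
  for (const { pattern, handler } of routes) {
    const match = uri.match(pattern);
    if (match) return handler(match); // first matching route wins
  }
  return null; // no route matched: fall through to the network
}

// The application registers its routes once, up front.
handleRequests(/^\/articles\/(\d+)$/, m => `cached article ${m[1]}`);
```

This is exactly the shape of server-side routing, which is presumably why the more server-side JS one writes, the more its absence on the client grates.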
Producing a general-purpose solution in this space is likely to lead to architectural astronautics, but there are probably a number of quick wins that can and should be grabbed to make developers' lives easier. Again, this is a topic that is worth investigating as part of the offline solution since it is likely to have a strong relationship with it.
Another aspect that is not as orthogonal to the solution as we may wish to make it is monetisation. At the present time, the application and Web worlds have overall very different income models, and part of the (perceived) weakness of the Web is that some models from the applications world have no (implemented) equivalent.
We should therefore keep in mind potential solutions in this space as we develop the technology that we need. One potential partner with which to talk could be the Web Payments Community Group.
In a nutshell, I think that we should:
Thanks for listening!