I believe the current proposal for offline web applications is too complicated, fiddly, and brittle. There is a cleaner and more efficient approach which makes better use of existing mechanisms of the web to negotiate and manage “offline assets”. Here’s a brief summary:
The essence of this proposal is that a proper solution to the offline web app problem should not require drawing a distinction between “offline” and “online” assets. There is no need for ‘cache manifests’, or for a separate ‘application cache’ distinct from the standard browser cache.
This solution should leverage existing web caching infrastructure (i.e. HTTP headers such as Cache-Control, ETags, etc.) to control how browsers store and negotiate the assets required to run the application offline.
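For instance, an asset intended to remain usable offline could simply be served with ordinary caching headers — nothing new is required here, and the specific values below are purely illustrative:

    HTTP/1.1 200 OK
    Content-Type: text/css
    Cache-Control: max-age=31536000
    ETag: "a1b2c3"

The browser caches, revalidates, and evicts such assets exactly as it does today; the only change this proposal asks for is in how much it can store and how reliably it keeps it.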
That’s pretty much it.
There is one significant hurdle here: the limited capacity and reliability of local browser caches. However, a relatively simple solution would be to introduce a new HTTP header with which a server can indicate the cache storage requirements of its application (per domain). Something like:

    > GET / HTTP/1.1
    > Host: myapp.com
    > ...
    < 200 OK
    < Cache-Storage: 160M
    < ...
If this is the first time the browser has encountered this app, the user is prompted to grant the domain permission to reserve the disk space needed to ‘install the offline app’. The user can then either accept or reject (and optionally remember their choice).
Alternatively, the Cache-Storage value could be expressed as metadata in the <head> of the application’s main HTML document; however, this would make the pattern unusable outside of HTML apps.
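As a sketch, the <head> variant might look something like this (the meta name here is purely illustrative — it is not a defined standard, just one plausible spelling of the idea):

    <head>
      <meta http-equiv="Cache-Storage" content="160M">
      ...
    </head>

The obvious downside, as noted, is that non-HTML resources (a JSON API, say) could never declare their own storage needs this way, which is why the HTTP header feels like the more general mechanism.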
Either way, this would offer a familiar experience for users, who would be asked to ‘install’ an offline web app in much the same way they would be by traditional desktop software.
When the reserved storage is full, assets served with a stale-if-error directive must take priority over those that weren’t.
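For example, an asset the application considers essential for offline operation might be served with something like the following (the values are illustrative):

    Cache-Control: max-age=3600, stale-if-error=604800

Per its standard meaning (RFC 5861), stale-if-error tells the cache it may keep serving the stored copy for up to a week when the origin is unreachable or erroring — which makes it a natural signal that the asset should also be the last to be evicted from the reserved storage.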
Application developers can manage how updates to individual assets are negotiated using HTTP’s standard caching mechanisms, such as Cache-Control: max-age and ETag validation.
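Concretely, update negotiation would work exactly as it does for any cached resource today. A conditional request against an unchanged asset might look like this (headers illustrative):

    > GET /app.js HTTP/1.1
    > Host: myapp.com
    > If-None-Match: "a1b2c3"
    < 304 Not Modified
    < ETag: "a1b2c3"

A 304 means the cached copy is still current and costs almost nothing on the wire; a 200 with a new body and ETag replaces the stored asset. No app-specific update protocol is needed.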
So that is the general gist of the proposal. If anyone is interested, let me know - perhaps we could try and flesh it out a bit more and push it forward.