Minimal viable service worker

I really, really like service workers. They’re one of those technologies that have such clear benefits to users that it seems like a no-brainer to add a service worker to just about any website.

The thing is, every website is different. So the service worker strategy for every website needs to be different too.

Still, I was wondering if it would be possible to create a service worker script that would work for most websites. Here’s the script I came up with.

The logic works like this:

  • If there’s a request for an HTML page, fetch it from the network and store a copy in a cache (but if the network request fails, try looking in the cache instead).
  • For any other files, look for a copy in the cache first but meanwhile fetch a fresh version from the network to update the cache (and if there’s no existing version in the cache, fetch the file from the network and store a copy of it in the cache).

So HTML files are served network-first, while all other files are served cache-first, but in both cases a fresh copy is always put in the cache. The idea is that HTML content will always be fresh (unless there’s a problem with the network), while all other content—images, style sheets, scripts—might be slightly stale, but get refreshed with every request.
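
Expressed as code, the gist of that logic looks something like this. To be clear, this is a rough sketch of the strategy described above, not the script itself (which you can download below), and the cache name is illustrative:

const cacheName = 'files'; // illustrative name, not the one used in the real script

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.method !== 'GET') {
    return;
  }

  // Fetch from the network and stash a copy of the response in the cache.
  const fetchAndCache = async () => {
    const responseFromFetch = await fetch(request);
    const cache = await caches.open(cacheName);
    await cache.put(request, responseFromFetch.clone());
    return responseFromFetch;
  };

  // HTML pages: network first, falling back to the cache.
  if ((request.headers.get('Accept') || '').includes('text/html')) {
    fetchEvent.respondWith(
      fetchAndCache().catch(() => caches.match(request))
    );
    return;
  }

  // Everything else: cache first, but refresh the cached copy in the background.
  const fetchPromise = fetchAndCache();
  fetchEvent.waitUntil(fetchPromise.catch(() => {}));
  fetchEvent.respondWith(
    caches.match(request).then(responseFromCache => responseFromCache || fetchPromise)
  );
});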

My original attempt was riddled with errors. Jake came to my rescue and we revised the script into something that actually worked. In the process, my misunderstanding of how await works led Jake to write a great blog post on await vs return vs return await.

I got there in the end and the script seems solid enough. It’s a fairly simplistic strategy that could work for quite a few sites, but it has some issues…

Service workers don’t perform any automatic cleanup of caches—that’s up to you to do (usually during the activate event). This script doesn’t do any cleanup so the cache might grow and grow and grow. For that reason, I think the script is best suited for fairly small sites.
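
If you did want to add cleanup, the usual pattern is to version the cache name and delete any outdated caches during the activate event. Here’s a minimal sketch of that pattern; it’s not part of the script itself, and the names are illustrative:

const version = 'v1';
const cacheName = version + '::files'; // illustrative versioned cache name

addEventListener('activate', activateEvent => {
  activateEvent.waitUntil(
    caches.keys().then(keys =>
      Promise.all(
        keys
          .filter(key => key !== cacheName) // anything that isn't the current cache…
          .map(key => caches.delete(key))   // …gets deleted
      )
    )
  );
});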

The strategy also assumes that a file will either be fetched from the network or the cache. There’s no contingency for when both attempts fail. So there’s no fallback offline page, for example.

I decided to test it in the wild, but I expanded it slightly to fix the fallback issue. The version on the Ampersand 2018 website includes a worst-case-scenario option to show a custom offline page that has been pre-cached. (By the way, if you haven’t got a ticket for Ampersand yet, get a ticket now—it’s going to be superb day of web typography nerdery.)
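
The Ampersand version isn’t reproduced here, but the general shape of that worst-case option is an offline page cached at install time and served when both the network and the cache come up empty. A sketch, with /offline.html as a made-up URL (and with the usual cache-updating omitted for brevity):

const cacheName = 'files';
const offlinePage = '/offline.html'; // made-up URL, purely for illustration

addEventListener('install', installEvent => {
  installEvent.waitUntil(
    caches.open(cacheName).then(cache => cache.add(offlinePage))
  );
});

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if ((request.headers.get('Accept') || '').includes('text/html')) {
    fetchEvent.respondWith(
      fetch(request)                                             // network first…
        .catch(() => caches.match(request))                      // …then the cache…
        .then(response => response || caches.match(offlinePage)) // …then the offline page
    );
  }
});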

Anyway, this fairly basic script seems to be delivering some good performance improvements. If you’ve got a site that you think would benefit from this network/caching strategy, and it’s served over HTTPS, then:

  1. Feel free to download the script or copy and paste it into a file called serviceworker.js,
  2. Put that file in the root directory of your website,
  3. Add this in a script element at the bottom of your HTML pages:

if (navigator.serviceWorker && !navigator.serviceWorker.controller) {
  navigator.serviceWorker.register('/serviceworker.js');
}

You can also use the script as a starting point. You might find issues specific to your particular website. That’s okay—you can tweak and adjust the script to suit your needs.

If this minimal service worker script proves in any way useful to you, thank Jake.

Have you published a response to this?

Responses

Mike Babb

In “Minimal Viable Service Worker”, @adactio writes that service workers have such clear benefits to users that all websites ought to have one, and offers a minimal script that anyone can copy and paste to add caching and offline capabilities: adactio.com/journal/13540 #webdev

# Posted by Mike Babb on Wednesday, March 7th, 2018 at 11:46am

qubyte.codes

In the last post about this blog I wrote about why I removed the service worker which made this blog a progressive web application.

The way my blog handles CSS predates the wide availability of service workers. Since CSS link tags are blocking, it’s good to give CSS a long cache time. In order to do this, but still deliver fresh CSS without readers having to wait for it, I give the CSS file a URI including a hash of its content. If the CSS is updated, its URI changes, as does the href of the CSS link tag in each page of this blog.
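
(A build step along these lines can produce that hashed filename. This Node sketch is hypothetical, not the actual tooling behind this blog, and the filenames are made up.)

// Hypothetical build-step sketch: derive a content-hashed filename so the
// CSS URL changes whenever its contents change.
const crypto = require('crypto');
const fs = require('fs');

const css = fs.readFileSync('styles.css');
const hash = crypto.createHash('sha256').update(css).digest('hex').slice(0, 12);
const hashedName = `styles.${hash}.css`;

fs.copyFileSync('styles.css', hashedName);
// Templates would then point their CSS link tags at the new filename.
console.log(`Wrote ${hashedName}`);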

The server sends this header along with CSS it serves:

Cache-Control: max-age=315360000, public, immutable

The immutable directive tells the browser the file will never change. The long max-age is a fallback for browsers which don’t support immutable, and public lets any proxies know that they can cache it too.

I instructed the browser not to cache HTML at all. Since the HTML was always fresh, updated CSS would be loaded at most once on each change. The server sends this header along with HTML:

Cache-Control: no-cache

This forces the browser to check back with the server for fresh HTML on every request. These headers are still in place now (even since the move to Netlify).
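
(Purely as an illustration, since the real headers are configured on the host rather than in application code, the same policy expressed in an Express-style server could look like this:)

// Illustrative Express sketch of the same caching policy; the real site
// sets these headers via its host (currently Netlify), not via Express.
const express = require('express');
const app = express();

// Content-hashed CSS: safe to cache for a very long time.
app.use('/css', express.static('public/css', {
  setHeaders: (res) => {
    res.setHeader('Cache-Control', 'max-age=315360000, public, immutable');
  }
}));

// HTML: the browser must revalidate with the server before using a cached copy.
app.get('*', (req, res) => {
  res.setHeader('Cache-Control', 'no-cache');
  res.sendFile('index.html', { root: 'public' });
});

app.listen(3000);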

Unfortunately this didn’t mesh well with the caching strategy of the service worker I added. The worker would download the entire blog, HTML and CSS, the first time a user navigated to it. The service worker was generated by sw-precache, which I had set up to be generated every time the blog was updated. It worked out what to add and remove from its cache based on file hashes. Since an update to CSS† meant changing the address in the link tag of every HTML page, it removed not only the CSS, but all the HTML any time the CSS changed. Worse, it proactively downloaded all the updated files.

The net effect was minor CSS changes triggering a mass download of my blog. This wasn’t a good use of server or browser resources, so I removed the worker.

I recently attended a Homebrew Website Club held by Jeremy Keith. By chance he’d written a blog post on this issue the day before, in which he provides a minimum viable service worker.

This service worker caches everything lazily (no precache). When an HTML page is requested, it always tries the network first for a fresh copy, and falls back to the cache when necessary. For everything else it hits the cache first, but gets an update via the network in the background.

This resolves the CSS issue very nicely. Old CSS and HTML will be cached, so if your Southern Rail train is stuck in a tunnel, you can still read that blog entry about mixins you saw earlier and are now bored enough to take another look at. If you’re stuck outside of a tunnel and I’ve updated the CSS‡, the request for fresh HTML will succeed, which will also bring in and cache the fresh CSS! For things like images, which will probably never change, this also works well. If a change is urgent to any file, it can still be given a fresh URL to cache bust it (though I doubt this’ll ever be necessary).

I’ll still need to do some work though. As mentioned in that post, cleanup isn’t addressed. I’m happy for HTML to be cached indefinitely, but for the CSS one way to clean it up might be to remove it once it is older than every HTML entry in the cache. For particularly large resources such as images, a relatively short cache time and good alt-text might be a good approach…

In the meantime, I’ve borrowed the service worker code from that post mostly unchanged (most changes are just to align it with the style enforced by my ESLint config). The one small addition is to allow server-sent event (EventSource) connections. I use these in development to hot-reload my blog when changes are made. The original script ignores all but GET requests:

if (request.method !== 'GET') {
  return;
}

I’ve extended this to also ignore server-sent event connections:

const acceptHeader = request.headers.get('Accept');

if (request.method !== 'GET' || acceptHeader.includes('text/event-stream')) {
  return;
}

You can see the current service worker code I’m using here.

† I had to add some CSS to handle superscripts, such as the one used for this footnote.

‡ Perhaps the blog will be a different shade of beige…

# Thursday, March 8th, 2018 at 1:20am

CSS-Tricks

Minimal viable service worker: adactio.com/journal/13540 “I was wondering if it would be possible to create a service worker script that would work for most websites.”

# Posted by CSS-Tricks on Thursday, March 8th, 2018 at 3:12pm

Eric Eggert

I’m in San Diego where I’ll attend the W3C WAI Education and Outreach WG Face-to-Face meeting, and CSUN, the biggest accessibility conference. It’s always amazing to be able to work with my colleagues in one room and to meet all accessibility experts in one place.

  • Beta: W3C/WAI Website – We managed to launch the beta for the new WAI site last week. There are still a few rough edges, but it is essential to get it in front of people. A lot of work from many people went into the site, from design and user testing to development. I made sure we can edit resources in their respective Jekyll projects on GitHub and then integrate them into one repository using git submodules. All repositories use one common theme, so changes to it will be reflected in all resource previews, hosted on GitHub Pages.

  • Color: Colorblind Accessibility on the Web – Fail and Success Cases – An excellent overview of colorblindness and common pitfalls.

  • Principles: Accessibility Interview Questions – Everyone should have answers to the questions collected by Scott O’Hara. Most aim at general principles rather than specific techniques.

  • Notifications: Inclusive Components: Notifications – Another excellent write-up by Heydon Pickering.

  • Buttons: Designing Button States – Tyler Sticka on different aspects of button design. Sweating details like this can greatly improve the usability and accessibility of your website or application.

  • PWA: Minimal viable service worker – I don’t know enough about Progressive Web Apps to implement them correctly, yet. However, Jeremy Keith’s article feels like a good starting point to learn more about it.

  • Fonts: Shipping system fonts to GitHub.com – Interesting article on a very particular approach to shipping fonts.

@dtinth

tl;dr: Do not cache faulty responses in your service worker.

In trying to make my 1hz app an installable PWA in the simplest way possible, I tried using Jeremy Keith’s “Minimal viable service worker”. The app loads water.css from jsDelivr’s CDN. Unfortunately, this caused the service worker to fail to fetch it, and the page was displayed without a style sheet.

On the Console, I see:

The FetchEvent for "https://cdn.jsdelivr.net/npm/water.css@2.0.0/out/dark.min.css" resulted in a network error response: an "opaque" response was used for a request whose type is not no-cors
serviceworker.js:22 Uncaught (in promise) TypeError: Failed to execute 'put' on 'Cache': Request scheme 'chrome-extension' is unsupported at serviceworker.js:22
The FetchEvent for "https://cdn.jsdelivr.net/npm/water.css@2.0.0/out/dark.min.css" resulted in a network error response: an "opaque" response was used for a request whose type is not no-cors
Promise.then (async)
(anonymous) @ serviceworker.js:16
dark.min.css:1 Failed to load resource: net::ERR_FAILED
serviceworker.js:22 Uncaught (in promise) TypeError: Failed to execute 'put' on 'Cache': Request scheme 'chrome-extension' is unsupported at serviceworker.js:22
(anonymous) @ serviceworker.js:22
DevTools failed to load SourceMap: Could not load content for https://1hz.glitch.me/dark.min.css.map: HTTP error: status code 404, net::ERR_HTTP_RESPONSE_CODE_FAILURE

Even worse, this happens intermittently. Sometimes it works, and sometimes it fails[1]. I thought maybe this is a race condition issue… or maybe not. Anyway, I was in a hurry, so I decided to replace the minimal viable service worker with the tried-and-true Workbox (but using the same logic):

/* global workbox */
importScripts(
  'https://storage.googleapis.com/workbox-cdn/releases/5.0.0/workbox-sw.js'
)

const { registerRoute } = workbox.routing
const { StaleWhileRevalidate, NetworkFirst } = workbox.strategies

// HTML pages
registerRoute(
  ({ request }) => request.headers.get('Accept').includes('text/html'),
  new NetworkFirst()
)

// Anything else
registerRoute(() => true, new StaleWhileRevalidate())

Nope, still failed, intermittently. The FetchEvent for “…/dark.min.css” resulted in a network error response: an “opaque” response was used for a request whose type is not no-cors. So even Workbox cannot deal with that???? Argh!! Guess I’ll have to dig in and fix the service worker then.

This led me to this StackOverflow question: Can a service worker fetch and cache cross-origin assets? The answer tells me to make sure to “call clone() before the final return response executes.” Maybe it’s a race condition. So I made the change:

- const fetchPromise = fetch(request);
+ const originalFetchPromise = fetch(request);
+ const splittedPromise = originalFetchPromise.then(response => ({
+   original: response,
+   copy: response.clone(),
+ }));
+ const fetchPromise = splittedPromise.then(({ original }) => original);
+ const responseCopyPromise = splittedPromise.then(({ copy }) => copy);

  fetchEvent.waitUntil(async function() {
-   const responseFromFetch = await fetchPromise;
-   const responseCopy = responseFromFetch.clone();
+   const responseCopy = await responseCopyPromise;
    const myCache = await caches.open(cacheName);
    return myCache.put(request, responseCopy);
  }());

…to no avail. Still erroring out, and page still loads without external CSS applied.

Next, I found this Smashing Magazine article: Leonardo Losoviz (2017), “Implementing A Service Worker For Single-Page App WordPress Sites”, and I quote (emphasis mine):

Whenever the resource originates from the website’s domain, it can always be handled using service workers. Whenever not, it can still be fetched but we must use no-cors fetch mode. This type of request will result in an opaque response, so we won’t be able to check whether the request was successful; however, we can still precache these resources and allow the website to be browsable offline.

+ const fetchOptions =
+   (new URL(request.url)).origin === self.location.origin
+     ? {}
+     : { mode: 'no-cors' };
- const fetchPromise = fetch(request);
+ const fetchPromise = fetch(request, fetchOptions);

Again it doesn’t help. Worse, it makes more requests fail. The app is more broken.

It’s time for my special attack: console.log() the heck out of this service worker.

What’s that? We’re putting faulty responses into the cache?

  fetchEvent.waitUntil(async function() {
    const responseFromFetch = await fetchPromise;
    const responseCopy = responseFromFetch.clone();
+   if (!responseCopy.ok) return;
    const myCache = await caches.open(cacheName);
    return myCache.put(request, responseCopy);
  }());

…and the white unstyled pages are nowhere to be seen again. Service workers are indeed rocket science.

  1. In hindsight, I reckon that it fails around 20% of the time, making debugging this extremely frustrating. ↩︎

# Posted by @dtinth on Sunday, December 13th, 2020 at 5:41pm

1 Share

# Shared by Gabor Lenard on Saturday, March 10th, 2018 at 9:38pm

5 Likes

# Liked by Simon St.Laurent on Tuesday, March 6th, 2018 at 12:55pm

# Liked by Nick F on Tuesday, March 6th, 2018 at 2:10pm

# Liked by Marty McGuire on Tuesday, March 6th, 2018 at 3:46pm

# Liked by Jj on Thursday, March 8th, 2018 at 5:34am

# Liked by Gabor Lenard on Saturday, March 10th, 2018 at 9:51pm

3 Bookmarks

# Bookmarked by Kartik Prabhu on Tuesday, March 6th, 2018 at 3:29pm

# Bookmarked by Dominik Schwind on Wednesday, March 7th, 2018 at 10:20am

# Bookmarked by Barry Frost on Monday, March 12th, 2018 at 9:42pm

Related posts

When service workers met framesets

The browser equivalent of a Roman legion showing up in a space opera.

Apple’s attack on service workers

Kiss your service workers goodbye on iOS.

New tools for art direction on the web

Variable fonts + CSS grid + service workers

Praise for Going Offline

I got yer social proof right here…

Detecting image requests in service workers

It turns out that you can’t rely on the `accept` header.

Related links

It’s Time to Build a Progressive Web App. Here’s How – The New Stack

Much as I appreciate the optimism of this evaluation, I don’t hold out much hope that people’s expectations are going to change any time soon:

Indeed, when given a choice, users will opt for the [native] app version of a platform because it’s been considered the gold standard for reliability. With progressive web apps (PWAs), that assumption is about to change.

Nonetheless, this is a level-headed look at what a progressive web app is, mercifully free of hand-waving:

  • App is served through HTTPS.
  • App has a web app manifest with at least one icon. (We’ll talk more about the manifest shortly.)
  • App has a registered service worker with a fetch event handler. (More on this later too.)

Tagged with

Getting Started with PWAs [Workshop]

The slides from Aaron’s workshop at today’s PWA Summit. I really like the idea of checking navigator.connection.downlink and navigator.connection.saveData inside a service worker to serve different or fewer assets!

Tagged with

Building a client side proxy

This is a great way to use a service worker to circumvent censorship:

After the visitor opens the website once over a VPN, the service worker is downloaded and installed. The VPN can then be disabled, and the service worker will take over to request content from non-blocked servers, effectively acting as a proxy.

Tagged with

Works offline

How do we tell our visitors our sites work offline? How do we tell our visitors that they don’t need an app because it’s no more capable than the URL they’re on right now?

Remy expands on his call for ideas on branding websites that work offline with a universal symbol, along the lines of what we had with RSS.

What I’d personally like to see as an outcome: some simple iconography that I can use on my own site and other projects that can offer ambient badging to reassure my visitor that the URL they’re visiting will work offline.

Tagged with

Local First, Undo Redo, JS-Optional, Create Edit Publish - Tantek

Tantek documents the features he wants his posting interface to have.

Tagged with

Previously on this day

7 years ago I wrote Empire State

Non-humans of New York.

17 years ago I wrote Southby

It’s that time of year again: South by Southwest is almost upon us.

21 years ago I wrote They. They, they, they shine on.

Hidden away on the listings page for the Sussex Arts Club is the regular singer/songwriter Thursday night slot for March 20th.

21 years ago I wrote Call and response

I love it when the web works like this.

21 years ago I wrote Do not adjust your set

The colours really are that vivid.