The very first thing I do when working on a programming task is to set up a feedback loop. The idea of the feedback loop is to be able to answer the question "am I done?" as quickly as possible. The shorter this loop is, the quicker you can iterate.
I have been making quite a few changes to this website to allow for a quicker feedback loop. The first was to only bake the contents into the binary in production mode; in development mode, the contents are reloaded from disk on every request. The second change was to have the browser reload automatically whenever either the contents or the server change.
This allowed me to speed up my feedback loop from thirty seconds to half a second.
Step 1: Restart the site automatically when the source changes
I use a standard feedback loop template for every Haskell project I work on. It requires these two files:
`scripts/devel.sh`:

```bash
#!/usr/bin/env bash
set -e # Make sure to error if anything goes wrong
set -x # Show me what's happening
cd site
stack install :site \
  --file-watch \
  --exec="../scripts/restart.sh $@" \
  --ghc-options="-freverse-errors -DDEVELOPMENT -O0" \
  --fast \
  --pedantic
```
This first file uses `stack`'s `--file-watch` flag to rebuild the site every time a source file changes. The `--exec` option allows us to run a command whenever the build succeeds after a change. In our case we will run the following little script to shut down the server and start the new version.
`scripts/restart.sh`:

```bash
#!/usr/bin/env bash
set -x
PID="$(pgrep site)"
if [[ "$PID" != "" ]]
then
  kill $PID
  while [ -e /proc/$PID ]
  do
    echo "Process: $PID is still running"
    sleep .1
  done
fi
set -e
# The & at the end means "leave this running"
site serve $@ &
```
Now you can start the server using `./scripts/devel.sh`. When you change the source, you'll see `stack` rebuild the server. Once `stack` is done rebuilding, you can manually press refresh in your browser to see the new version.
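Because both scripts pass their arguments along with `$@`, anything you give to `devel.sh` is forwarded all the way to the final `site serve` invocation. Starting the loop is as simple as:

```bash
# Start the development feedback loop; any extra arguments are forwarded
# through devel.sh and restart.sh to `site serve`.
./scripts/devel.sh
```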
This is already great, because you no longer need to initiate the recompilation manually. However, there are two more optimisations we can make. If your server can read the contents of your site from disk on a page load, then recompiling is not necessary for every change, so we would like to avoid it where possible. This saves a few seconds per change.
Another issue is that you still have to press the refresh button manually. If the browser could detect that a change has been made, we could be spared this effort as well.
(If you are eager to recommend that I use `ghcid` (or a similar tool) instead, please get it to work on `smos-docs-site` before you do because I have gotten panics every time I've tried to use it.)
Step 2: Don't bake files into the binary in development mode
Part of my server consists of looking at the `content` directory to read the markdown files that will end up as the blog posts on this site. The full code is a bit involved for reasons that are unrelated to the point I am trying to make, so I will focus on the interesting pieces here.
The idea is to reload the text files from disk in development mode, and bake them into the binary in production. I made a little Template Haskell library for this purpose.
We use the `embedTextFilesIn` function and splice the result in. This gets us a pure value of type `Load (Map (Path Rel File) Text)` that represents the directory that we want to embed. From our handlers, we call `loadIO` to get the `Map (Path Rel File) Text` out. In production, this will read the files from inside the binary. In development, it will read them from disk.
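To make that concrete, here is a minimal sketch of the shape such a `Load` type and `loadIO` function could have. This is an assumption about the library rather than its actual code, and it uses plain `FilePath`s instead of `Path Rel File` to keep the example self-contained:

```haskell
{-# LANGUAGE LambdaCase #-}

module Load where

import Data.Map (Map)
import qualified Data.Map as M
import Data.Text (Text)
import qualified Data.Text.IO as T
import System.Directory (listDirectory)
import System.FilePath ((</>))

-- | Either the contents baked in at compile time (production),
-- or a directory to re-read on every call (development).
data Load a
  = BakedIn a
  | LoadFromDir FilePath

-- | In production this just returns the embedded map; in development it
-- goes back to disk on every call. For simplicity this sketch only reads
-- a flat directory of files.
loadIO :: Load (Map FilePath Text) -> IO (Map FilePath Text)
loadIO = \case
  BakedIn m -> pure m
  LoadFromDir dir -> do
    files <- listDirectory dir
    M.fromList <$> mapM (\f -> (,) f <$> T.readFile (dir </> f)) files
```

A splice like `embedTextFilesIn` can then produce the baked-in constructor in production, and the read-from-disk constructor when the `DEVELOPMENT` flag from `devel.sh` is set.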
This change alone massively sped up my compile times during development, because the text does not have to be embedded every time anymore. However, it also means that we don't even need to recompile to see changes anymore. We can just refresh.
In the next step, we'll make sure that we also don't have to press the refresh button anymore.
Step 3: Watch content changes without rebuilding
The last piece of the puzzle is to have the browser refresh automatically whenever something changes. This saves us from having to wait and press the refresh button manually for every change.
The plan is as follows:

1. The browser will open a websocket connection with the server.
2. The server will watch for changes, and close the websocket connection whenever any files change.
3. The browser will refresh the page whenever the websocket connection is closed.
The websocket route
First we need a route to isolate the interaction:

```
/ws WebSocketR GET
```
Whenever this handler is called, we start watching the filesystem using `fsnotify`. We watch the directories that contain things that we can reload without restarting the server.
```haskell
getWebSocketR :: Handler ()
getWebSocketR = getAutoReloadRWith $ liftIO $ do
  sendRefreshVar <- newEmptyMVar -- A variable to block on
  Notify.withManager $ \mgr -> do
    let predicate e = case e of
          -- Don't watch removed events, in case the file is rewritten, so we don't get a 404 when reconnecting
          Removed {} -> False
          _ ->
            let suffixes =
                  [ ".swp",
                    "~",
                    ".swx",
                    "4913" -- https://github.com/neovim/neovim/issues/3460
                  ]
                  -- Editors make files like this, no need to refresh when they are written.
             in not $ any (`isSuffixOf` (eventPath e)) suffixes
        act _ = putMVar sendRefreshVar ()
    let dirs = ["content", "assets", "style", "logo"]
    forM_ dirs $ \d -> do
      ad <- resolveDir' d
      watchTree mgr (fromAbsDir ad) predicate act
    putStrLn "Waiting for a file to change."
    takeMVar sendRefreshVar
```
Note that there are a few pieces that you want to watch out for. The first is that we do not want to watch events that tell us that a file has been removed. This is just in case the file was relevant to the page we were looking at: the page would refresh upon removal and we'd see a 404 page that no longer auto-refreshes.
The second weird piece is that there are a bunch of files that editors create that we want to ignore. We don't want to refresh on every keystroke, only on every relevant save. In my case that means ignoring the files that vim writes. (Note that you could also try to configure your editor not to write these files. You just probably don't want to do that, because those files are useful.)
The front-end
For the front-end, we'll want to add a little piece of JavaScript to every page. We'll do that in `defaultLayout`:
```haskell
defaultLayout widget = do
  [...]
  let addReloadWidget = if development then (<> autoReloadWidgetFor WebSocketR) else id
  [...]
  addReloadWidget $(widgetFile "default-body")
```
If the server dies (and restarts), we want to try to reconnect before we refresh. Otherwise we'll end up on a "cannot reach the server" page that does not refresh automatically. If the server does not die but just closes the connection on purpose, we want to refresh immediately (because this is faster). This makes the code a bit more complex, but at least this is just a development tool so we can use `console.log()` liberally.
```javascript
function connect (reloadAfterConnecting) {
  var uri = new URL("@{WebSocketR}", document.baseURI).href.replace(/^http/i, "ws");
  var conn = new WebSocket(uri);
  conn.onopen = function() {
    console.log("Listening for file changes.");
    if (reloadAfterConnecting) {
      location.reload();
    }
  };
  conn.onclose = function(e) {
    console.log("Connection closed, reloading.");
    console.log(e.data);
    if (e.reason === "file changed") {
      console.log("Only reloading, not reconnecting.");
      location.reload();
    } else {
      console.log("Reconnecting before we reload.");
      connect(true);
    }
  };
}
connect(false);
```
This little piece of JavaScript is packaged up in the `autoReloadWidgetFor` function, so no need to copy it around.
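If you are curious what packaging it up can look like, here is a rough sketch (an assumption, not the actual implementation): keep the script in a Julius template and splice it into a widget that takes the websocket route as an argument.

```haskell
{-# LANGUAGE TemplateHaskell #-}

import Text.Julius (juliusFile)
import Yesod.Core (Route, WidgetFor, toWidget)

-- Hypothetical sketch: the JavaScript above would live in
-- templates/auto-reload.julius, where @{route} interpolates the URL of
-- whichever route is passed in (WebSocketR in our case).
autoReloadWidgetFor :: Route site -> WidgetFor site ()
autoReloadWidgetFor route =
  toWidget $(juliusFile "templates/auto-reload.julius")
```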
Now you should be able to start the server, open your browser, and start working without having to touch your browser to refresh.