Almost a year ago I wrote a program to synchronise and back up your dotfiles called Super User Stone. I'm very pleased to finally deprecate that tool and announce the Super User Spark.
I realise that the name is somewhat unimaginative but I had to find a way to not have to rename my SUS depot.
After installing the Super User Spark, you will find yourself with a new binary in one of the directories on your `PATH`: `spark`. `spark` was created for essentially the same reason as the Super User Stone: it allows you to synchronise and back up dotfiles. This way, you only ever need one directory of dotfiles, which you deploy on all your systems.
## A quick demo
To give you a good overview of the power of the `spark` language, we will need a more complex example than the one I used to demonstrate the Super User Stone. This demonstration will not be a comprehensive overview of the usage of `spark`. For more information, see the `spark` usage page.
Say you use both Bash and Xmonad. You might then have these dotfiles on your systems:
```
- /home/user
  |- .bashrc
  |- .bash_aliases
  |- .xmonad
     |- xmonad.hs
     |- lib
        |- Keys.hs
```
In this example, you're using a different keyboard layout on your desktop and your laptop. Because of this, you use a different set of key bindings for Xmonad, which you define in two different `Keys.hs` files. Moreover, on your laptop you use different aliases, which you define in a different `bash_aliases` file. You would then build a SUS depot as follows, shared among, say, two systems: `desktop` and `laptop`.
```
- depot
  |- spark.sus
  |- shared
  |  |- bashrc
  |  |- bash_aliases
  |  |- xmonad
  |     |- xmonad.hs
  |     |- Keys.hs
  |- desktop
  |  |- xmonad
  |     |- Keys.hs
  |- laptop
     |- bash_aliases
```
The real difference between Stone and Spark is the file that you add to your depot. `spark` uses a domain-specific language that allows you to specify how you want your files to be deployed. The full specification of the language can be found in the source code repository. The card is then compiled internally into deployment instructions. In this example, the contents of `spark.sus` would look like this:
```
# Call the card "sus".
# This will only matter once you have more than one card.
card sus {
  # First look for the file in $(HOST).
  # If the file is not found there, look in shared.
  alternatives $(HOST) shared

  # Any deployment will go into the home directory.
  into ~

  # A block (between braces) allows you to keep the 'into' and 'outof'
  # declarations that are in effect, while making the ones from the block local.
  {
    outof xmonad # From here on, all deployments will come out of the 'xmonad' dir.
    into .xmonad # This 'into' statement compounds with the previous one to '~/.xmonad'.

    # Deploy the xmonad file
    xmonad.hs -> xmonad.hs

    {
      # Custom Xmonad library files
      into lib
      Keys.hs -> Keys.hs
    }
  }

  # After this block, everything is as it would have been before the block.
  bashrc -> .bashrc
  bash_aliases -> .bash_aliases
}
```
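The key idea behind the `alternatives $(HOST) shared` declaration can be illustrated with a short sketch. This is not spark's actual implementation, just a hypothetical Python model of the lookup order: for each file, prefer the host-specific copy and fall back to the shared one.

```python
import os
import tempfile

def resolve(depot, host, relpath):
    """Mimic 'alternatives $(HOST) shared': return the host-specific
    copy of a file if it exists, otherwise fall back to the shared copy."""
    for candidate_dir in (host, "shared"):
        candidate = os.path.join(depot, candidate_dir, relpath)
        if os.path.exists(candidate):
            return candidate
    return None

# Build a toy depot matching the example tree above.
depot = tempfile.mkdtemp()
for f in ("shared/bashrc", "shared/bash_aliases",
          "shared/xmonad/xmonad.hs", "shared/xmonad/Keys.hs",
          "desktop/xmonad/Keys.hs", "laptop/bash_aliases"):
    path = os.path.join(depot, f)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    open(path, "w").close()

# On the laptop, bash_aliases is overridden but Keys.hs is not.
print(resolve(depot, "laptop", "bash_aliases"))    # ends in laptop/bash_aliases
print(resolve(depot, "laptop", "xmonad/Keys.hs"))  # ends in shared/xmonad/Keys.hs
```

This is exactly why the laptop and desktop deployments below differ only in the files those hosts override.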
This card will compile to the following deployments on the `laptop` system:
```
"/home/user/sus-depot/shared/xmonad/xmonad.hs" l-> "/home/user/.xmonad/xmonad.hs"
"/home/user/sus-depot/shared/xmonad/Keys.hs" l-> "/home/user/.xmonad/lib/Keys.hs"
"/home/user/sus-depot/shared/bashrc" l-> "/home/user/.bashrc"
"/home/user/sus-depot/laptop/bash_aliases" l-> "/home/user/.bash_aliases"
```
... and to these on the `desktop` system:
```
"/home/user/sus-depot/shared/xmonad/xmonad.hs" l-> "/home/user/.xmonad/xmonad.hs"
"/home/user/sus-depot/desktop/xmonad/Keys.hs" l-> "/home/user/.xmonad/lib/Keys.hs"
"/home/user/sus-depot/shared/bashrc" l-> "/home/user/.bashrc"
"/home/user/sus-depot/shared/bash_aliases" l-> "/home/user/.bash_aliases"
```
As you can see, everything ended up in the right place.
## Reference
To see a more elaborate example of how `spark` can be used, have a look at my personal SUS depot. One important feature that wasn't mentioned in this post is sparkoffs. These allow you to make your `spark` configuration truly modular.