This post describes the last two major features of the Spark
language. To understand this, it is probably useful to read the previous two posts on the subject here and here.
Blocks
Up until now, all deployments came from a single directory of dotfiles.
Blocks allow you to categorise your dotfiles into different subdirectories. A block is declared between two curly brackets: { }.
card spark-card {
    # This is the top-level block.
    {
        # This is an inner block.
    }
}
Abstractly, any non-deployment declarations in a block are local to that block. If you put an outof declaration in a block, it is no longer in effect after that block ends. This is useful because into, outof and alternatives declarations compound. As such, blocks temporarily encapsulate context.
card spark {
    into ~
    {
        into .xmonad
        outof xmonad
        # Deployments here come out of ~/dotfiles/xmonad and go into ~/.xmonad.
    }
    # Deployments here just go into ~ as before.
}
Taking the example from the previous post again:
card bash {
    into ~
    outof bash
    .bashrc
    .bash_aliases
}
If we now add Xmonad dotfiles to our repository, we can put them in an xmonad subdirectory of the dotfiles directory and have the following card:
card sus {
    into ~
    {
        outof bash
        .bashrc
        .bash_aliases
    }
    {
        into .xmonad
        outof xmonad
        xmonad.hs
    }
}
This allows us to nicely categorise the dotfiles in our repository:
~/dotfiles
|- bash
|  | bashrc
|  | bash_aliases
|- xmonad
|  | xmonad.hs
|- spark.sus
Cards
Cards are the fundamental unit of control in the Spark language. They are a more technical feature, but they allow you to modularise your dotfile deployment even further.
A card is declared as follows:
card <card-name> {
    <declarations>
}
Compilation, checking and deployment are all operations on cards, not on files. You can put multiple cards in the same .sus file:
$ cat cards.sus
card card1 {<declarations>}
card card2 {<declarations>}
By default, spark will deploy only the first card in the file, but we can also deploy others. (Note the required double quotes.)
$ spark deploy "cards.sus card2"
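Checking and compilation take the same kind of card reference. As a sketch (assuming the check and compile subcommands accept the same card-reference syntax as deploy):

$ spark check "cards.sus card2"
$ spark compile "cards.sus card2"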
All into, outof and alternatives declarations are wholly reset in a new card. This makes cards fully modular.
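To illustrate this reset, here is a minimal sketch with two hypothetical cards:

card first {
    into ~/.config
    # Deployments here go into ~/.config.
}
card second {
    # The into ~/.config declaration from the first card no longer
    # applies here; this card starts from a clean context.
}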
Sparking off other cards
Using multiple cards is not only useful to separate deployments logically; you can also spark off other cards. This means that the declarations from the specified card are added to the current deployment, but without the current context.
Here is an example:
card card1 {
    into ~
    spark card card2
    spark file card3.sus
}
card card2 {<declarations>}
The result is that card2 and the card in card3.sus are considered without the initial into ~ declaration.
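If card2 should still deploy into the home directory, it therefore needs its own into declaration. A sketch:

card card2 {
    into ~
    <declarations>
}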
A modular example
The previous example can be made even more modular with multiple cards. This allows us to do partial deployments as well:
$ cat spark.sus
card sus {
    spark card bash
    spark card xmonad
}
card bash {
    into ~
    outof bash
    .bashrc
    .bash_aliases
}
card xmonad {
    into ~/.xmonad
    outof xmonad
    xmonad.hs
}
Running spark deploy spark.sus will deploy the first card, which in turn will spark off the other cards and, in doing so, deploy everything. Running just spark deploy "spark.sus bash" will deploy only the bash dotfiles.
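The same goes for the other card:

$ spark deploy "spark.sus xmonad"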
This concludes a mini-series on the Super User Spark.