# Super User Spark: Outof and Alternatives

Date: 2015-10-04

Following up on the beginner's post on the Super User Spark, this is a guide to two more features of the Spark language: the outof declaration and the alternatives declaration.

## Outof

In the previous part, we ended with a card that deploys bash dotfiles:

```
card bash {
  into ~

  .bashrc
  .bash_aliases
}
```


Suppose now that we want to structure our dotfiles repository a bit more and put these dotfiles in a subdirectory called `bash`. You could then write this card:

```
card bash {
  into ~

  bash/bashrc -> .bashrc
  bash/bash_aliases -> .bash_aliases
}
```


This introduces duplication: every source now repeats the `bash` directory prefix.

We can solve this with Spark's outof declaration. After an outof declaration, every source is prefixed with the given directory. The card can then be rewritten like so:

```
card bash {
  into ~
  outof bash

  .bashrc
  .bash_aliases
}
```


This is completely analogous to the into declaration of course.
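To sketch the analogy, both declarations can appear in one card: into prefixes every destination, while outof prefixes every source. The following xmonad card is a hypothetical example (the file name, and the assumption that into accepts an arbitrary destination directory, are mine, not from the original post):

```
card xmonad {
  into ~/.xmonad
  outof xmonad

  xmonad.hs
}
```

Under those assumptions, this would deploy xmonad/xmonad.hs from the repository to ~/.xmonad/xmonad.hs.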

## Alternatives

Up until now we have only considered a single system. Suppose you have multiple systems, each with its own dotfiles. Usually most of these dotfiles will be quite similar.

With the Super User Spark, there is now an easy way to share these dotfiles across systems while still using separate specific dotfiles where necessary. The alternatives declaration creates multiple options for the source of a deployment.

Let's say one of your systems has the hostname alpha and the other has the hostname beta. alpha is a very general system that uses the regular shared dotfiles, while beta uses the regular .bashrc but needs a more specific .bash_aliases. You could have a directory structure like this:

```
dotfiles
|- beta
|  |- bash_aliases
|- shared
   |- bashrc
   |- bash_aliases
```


Then you would write a card like this:

```
card bash {
  alternatives $(HOST) shared
  into ~

  .bashrc
  .bash_aliases
}
```

Notice that $(HOST) is a Spark variable that will be resolved from the environment during deployment.

When spark deploys this card on the alpha system, it will use the files in the shared directory because it can't find any in the (nonexistent) alpha directory. On the beta system, however, spark will look in the beta directory first and find the bash_aliases file to deploy to ~/.bash_aliases. Because it doesn't find bashrc in the beta directory, it will use the bashrc file from the shared directory.
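The resolution described above can be summarized as the following source-to-destination mapping (sources on the left are relative to the dotfiles repository):

```
on alpha:
  shared/bashrc       -> ~/.bashrc
  shared/bash_aliases -> ~/.bash_aliases

on beta:
  shared/bashrc       -> ~/.bashrc
  beta/bash_aliases   -> ~/.bash_aliases
```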

Note that an outof declaration is equivalent to an alternatives declaration with only one directory.
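As a sketch of that equivalence, the following two cards would deploy exactly the same file (the card names here are made up for illustration):

```
card one {
  into ~
  outof bash

  .bashrc
}

card two {
  into ~
  alternatives bash

  .bashrc
}
```

With only one directory to try, the alternatives lookup always resolves to that directory, which is precisely what outof does.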

This allows you to make a single dotfiles directory to share across all your systems.

### Multiple subdirectories

To use more than one subdirectory for different categories of dotfiles, you will have to use blocks. For a more modular Spark configuration, you can use multiple cards. More on blocks and cards in the next post.

"Super User Spark: Blocks and Cards"

If you liked this blog post, please consider becoming a supporter.