Now that the Super User Spark exists to keep your dotfiles safe, the only thing you need to do is make sure that you have your dotfiles in the right directories and your spark cards are up to date.
The Super User Charger could take care of that as well.
The project I suggest is an extension of the Super User Spark that automates the rest of this dotfiles hassle. There are two aspects to this project.
The first part of the project is a new subcommand `status` that would allow the user to get a clear view of the current situation with respect to their dotfiles.
This subcommand should be able to generate a report that describes how up to date your SUS depot is by answering these questions:
- Are there any dotfiles that should (probably) be secured in a SUS depot?
- Are there any files in your SUS depot that aren't configured to be deployed in any spark card?
Detecting dotfile suspects
This subcommand would have to be able to assess whether a given file is a dotfile or not. There are a few options:
- Fixed names: `.bashrc` is always a dotfile.
- Fixed rules: if a text file's name starts with a dot and doesn't end in `.sw[op]`, it's probably a dotfile.
- Configuration: The user can configure rules for dotfile detection themselves.
These 'guesses' would have to be accompanied by a confidence measure.
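As a rough illustration, the heuristics above could be combined into a single function that maps a file name to a confidence score. The specific rules and confidence values here are assumptions for the sketch, not part of Super User Spark:

```python
import re

# Names that are always dotfiles (fixed names). Illustrative, not exhaustive.
KNOWN_DOTFILES = {".bashrc", ".vimrc", ".gitconfig"}

# Fixed rule: names ending in .sw[op] (vim swap files) are not dotfiles.
SWAP_FILE = re.compile(r".*\.sw[op]$")

def dotfile_confidence(name: str) -> float:
    """Return a confidence in [0, 1] that the named file is a dotfile."""
    if name in KNOWN_DOTFILES:   # fixed name: certain
        return 1.0
    if SWAP_FILE.match(name):    # fixed rule: swap files are excluded
        return 0.0
    if name.startswith("."):     # fixed rule: a leading dot is a strong hint
        return 0.8
    return 0.1                   # probably a regular file

print(dotfile_confidence(".bashrc"))      # 1.0
print(dotfile_confidence(".bashrc.swp"))  # 0.0
print(dotfile_confidence(".config"))      # 0.8
```

User-configured rules would then just be extra entries in these rule sets, evaluated in order of specificity.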
Spark card checking
Checking whether a dotfile is configured to be deployed should be done against compiled cards, whether the compilation happens online or not.
Only for compiled cards can we be absolutely sure that we answer the question 'Will this dotfile be deployed?' correctly.
This would, however, require an argument so that `spark` knows in which (compiled) card to look for deployments.
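The check itself is simple once a compiled card is available. The sketch below assumes a compiled card boils down to a list of source-to-destination deployments; the real compiled format may differ:

```python
def will_be_deployed(dotfile: str, compiled_card: list[tuple[str, str]]) -> bool:
    """True iff some deployment in the compiled card sources this dotfile.

    `compiled_card` is assumed to be a list of (source, destination)
    pairs, which is only an approximation of the real compiled format.
    """
    return any(src == dotfile for src, _dst in compiled_card)

card = [("bashrc", "~/.bashrc"), ("vimrc", "~/.vimrc")]
print(will_be_deployed("bashrc", card))  # True
print(will_be_deployed("zshrc", card))   # False
```

Any dotfile in the depot for which this check fails would show up in the `status` report as unconfigured.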
A natural next step would be to add a subcommand `charge` to further increase the automation capabilities.
This subcommand would get a SUS depot ready with minimal user intervention.
It should use the detect-unsecured-dotfiles feature from the previous step, secure the detected files automatically in a dotfiles directory, and generate a spark card that would deploy all of them.
Ideally, the dotfiles would be secured automatically but deployed manually by default, since `spark` can of course never be entirely sure that it has detected the right files as dotfiles.
It would create a dotfiles directory that is entirely ready to deploy from.
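The card-generation half of `charge` could look roughly like the sketch below. The card syntax emitted here is only an approximation of the real Super User Spark card language, and the function name is hypothetical:

```python
def generate_card(name: str, dotfiles: list[str]) -> str:
    """Emit a spark card that deploys every detected dotfile.

    The `card <name> { ... }` shape is an approximation of the real
    spark card syntax, used purely for illustration.
    """
    lines = [f"card {name} {{"]
    lines += [f"  {f}" for f in sorted(dotfiles)]
    lines.append("}")
    return "\n".join(lines)

print(generate_card("dotfiles", [".vimrc", ".bashrc"]))
```

Together with the detection step, this would produce a depot and a card ready for a manual `spark deploy`.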