I have a dozen development machines in my homelab: a mix of Fedora, Gentoo,
Debian (stable and unstable), Ubuntu LTS, macOS, and a few Turing Pi nodes.
I got tired of my configurations drifting apart, so I built a dotfiles
management system in Python. No external dependencies, just str.format()
templates and JSON config files. It manages shell configs, git settings,
Kubernetes credentials, and the configuration for
eight different coding LLM CLI tools.
That last part turned out to be an interesting use case, but the foundation
described here is what makes it work.
The problem
The usual dotfiles approach is a git repo full of symlinks and a bash script
to wire them up. That works until you need the same .bashrc to behave
differently on macOS versus Debian, or you need API keys templated into
config files without committing them to git, or you want your coding LLM
assistant to follow the same rules regardless of which tool you're using
this week.
I wanted one repo, one install command, and consistent configuration everywhere.
The approach
The core started as a single Python script, dotfiles.py, which I began
writing in September 2025. It has since been refactored into a package:
dotfiles.py remains the CLI entrypoint, and dotfiles_lib/ holds a dozen
modules (config, installer, renderer, generators, platform detection, and
others) totaling around 3,800 lines. The split was motivated by code review
and testing: a 2,500-line monolith is hard to reason about in diffs and hard
to unit-test without importing everything.
The bootstrap was straightforward: I copied the shell dotfiles from all my hosts into one tree of per-host files, pointed an LLM at the pile, and had it analyze them for commonalities and generate the initial files and templates. Most of the per-host differences turned out to be PATH entries and tool availability, which mapped cleanly to OS detection variables.
The tool reads a main JSON config that maps target files to their sources:
{
  ".bashrc": {
    "mode": "templated",
    "template": "bash/bashrc.tmpl"
  },
  ".gitignore_global": {
    "mode": "symlink",
    "source": "git/gitignore_global"
  },
  ".claude/settings.json": {
    "mode": "templated",
    "template": "templates/claude-settings.tmpl"
  }
}
There are three installation modes: symlink for files that don't vary, templated for files that need per-machine or per-secret customization, and copy for root user configs where you don't want a symlink back to a regular user's repo. A fourth mode, obsolete, marks files that should be cleaned up during install, which is useful when tools get renamed or configs move (coding LLMs do this a lot).
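In code, the mode dispatch looks roughly like this. A minimal sketch: install_entry and its signature are illustrative, not the actual dotfiles.py API.

```python
import os
import shutil

def install_entry(target, entry, repo_dir, home, render):
    """Install one config entry into the home directory (sketch)."""
    dest = os.path.join(home, target)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    mode = entry["mode"]
    if mode == "symlink":
        # Point the target at the file in the repo.
        src = os.path.join(repo_dir, entry["source"])
        if os.path.lexists(dest):
            os.remove(dest)
        os.symlink(src, dest)
    elif mode == "templated":
        # Render the template with the merged template data.
        with open(os.path.join(repo_dir, entry["template"])) as f:
            tmpl = f.read()
        with open(dest, "w") as f:
            f.write(render(tmpl))
    elif mode == "copy":
        # Real file copy, no symlink back to the repo.
        shutil.copyfile(os.path.join(repo_dir, entry["source"]), dest)
    elif mode == "obsolete":
        # Clean up files that no longer belong.
        if os.path.lexists(dest):
            os.remove(dest)
```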
Templates use Python's str.format(): no Jinja2, no dependencies. The
template data comes from three sources: the JSON config, OS detection at
install time, and a ~/.secrets.sh file that holds API keys and
credentials. On install, the script parses ~/.secrets.sh, merges it with
the computed template data, and renders everything.
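The merge order matters: OS detection provides the baseline, config values layer on top, and secrets win. A sketch with illustrative function names:

```python
import platform

def build_template_data(config_vars, secrets):
    """Merge the three template-data sources (sketch)."""
    data = {
        # OS detection at install time.
        "os_name": platform.system().lower(),
        "is_macos": platform.system() == "Darwin",
    }
    data.update(config_vars)  # static values from the JSON config
    data.update(secrets)      # ~/.secrets.sh takes precedence
    return data

def render(template_text, data):
    # str.format() is the whole template engine.
    return template_text.format(**data)
```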
Installation on any machine is:
make install
(make clean handles the build artifacts, cache directories, and other
generated files.)
Secrets without the complexity
I didn't want a secrets manager dependency. The approach is a
~/.secrets.sh file that's never committed, with a simple KEY=value
format. It's also sourced by shells. The script parses it, strips quotes,
and makes the values available as template variables:
# ~/.secrets.sh
ANTHROPIC_API_KEY="sk-ant-..."
GEMINI_API_KEY="..."
GIT_EMAIL="dave@dajobe.org"
...
If a key exists as an environment variable, that takes precedence over the file.
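A minimal parser for that format might look like the following sketch; the real one handles more edge cases.

```python
import os
import re

# Shell-style KEY=value lines; comments and blanks are skipped.
_LINE = re.compile(r'^([A-Za-z_][A-Za-z0-9_]*)=(.*)$')

def load_secrets(path):
    """Parse a KEY=value secrets file, letting the environment win (sketch)."""
    secrets = {}
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            m = _LINE.match(line)
            if not m:
                continue
            key = m.group(1)
            value = m.group(2).strip().strip('"').strip("'")
            # An existing environment variable takes precedence over the file.
            secrets[key] = os.environ.get(key, value)
    return secrets
```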
This is also where things like KUBE_CA_DATA live: base64-encoded
certificate authority data that gets templated into kubeconfig files without
committing credentials. A separate script pulls the right variables out of
a kubeconfig YAML file so I don't have to do it by hand when cluster
certificates rotate.
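With no YAML dependency available, the extraction can be a naive line scan. A hedged sketch, assuming the flat key: value lines of a typical kubeconfig; the variable names are illustrative.

```python
def extract_kube_vars(kubeconfig_text):
    """Pull selected kubeconfig fields into secrets-style variables (sketch)."""
    wanted = {
        "certificate-authority-data": "KUBE_CA_DATA",
        "server": "KUBE_SERVER",
    }
    out = {}
    for line in kubeconfig_text.splitlines():
        # Naive split on the first ": " — enough for flat scalar fields.
        key, _, value = line.strip().partition(": ")
        if key in wanted and value:
            out[wanted[key]] = value
    return out
```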
A separate utility copies the secrets file to all dev hosts over SSH. I keep
it mode 0600 and only sync it to machines I trust with those credentials.
Recursive config directories
The dotfiles system doesn't just handle flat config files. Some tools such as coding LLM CLIs want directory trees for commands, skills, and agents rather than a single config file, so the installer walks those directories recursively and deploys them the same way it does ordinary dotfiles.
A skill like blog-post isn't a single file but a subdirectory containing
a prompt definition and supporting reference materials. The install script
had to be extended to handle these recursive structures rather than just
flat file listings.
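The recursive walk itself is a few lines of os.walk. This sketch (deploy_tree is an illustrative name) mirrors a source tree into a destination, delegating the per-file action so the same walk can symlink, copy, or template:

```python
import os

def deploy_tree(src_root, dest_root, deploy_file):
    """Mirror a config directory tree, applying deploy_file per file (sketch)."""
    for dirpath, _dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        target_dir = dest_root if rel == "." else os.path.join(dest_root, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            deploy_file(os.path.join(dirpath, name),
                        os.path.join(target_dir, name))
```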
Agent definitions use the same pattern. They live in agents-config.json,
which specifies the model, allowed tools, and a reference to the markdown
prompt content. At install time that metadata is combined with the prompt
text to generate tool-specific agent files, with a parity test to make sure
the Claude Code and Amp versions stay in sync.
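Generation can be as simple as prepending frontmatter built from the metadata. The field names below are assumptions about agents-config.json, not its real schema:

```python
def generate_claude_agent(meta, prompt_text):
    """Combine agent metadata with prompt markdown (sketch; fields assumed)."""
    frontmatter = "\n".join([
        "---",
        f"name: {meta['name']}",
        f"model: {meta['model']}",
        f"tools: {', '.join(meta['allowed_tools'])}",
        "---",
    ])
    return frontmatter + "\n\n" + prompt_text
```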
The git-commit agent is a good example of why this is useful. It can run
git add, git diff, git commit, and a few other git commands and
nothing else, which means I can point it at a messy working tree and trust
it not to get creative.
Skills and agents then deploy with whatever packaging each tool expects: symlinks, file copies, and zip archives depending on what the target supports. The tool-specific details are in the companion post.
Splitting out code-aide
Installing, updating, and checking versions of Claude Code, Cursor, Gemini
CLI, and the rest eventually outgrew its corner of dotfiles.py and got
extracted into a separate open source tool called
code-aide. The dotfiles repo still
bootstraps it during make install with a best-effort uv tool install,
but the upstream-version churn now lives in its own project rather than
bulking up the renderer and installer logic here.
Ghostty terminal support
I started playing with the Ghostty terminal emulator
and discovered that its xterm-ghostty terminfo entry isn't installed on
most of my remote hosts. SSH into a machine without it and you get
missing or unsuitable terminal: xterm-ghostty from every ncurses-based
tool, which is an annoying bump.
The fix: vendor the Ghostty terminfo source file into the dotfiles repo and
compile it into ~/.terminfo during make install using tic. The
install script checks whether the entry already exists and skips the
compilation if so. Ghostty's own config file is also templated and
deployed.
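The check-then-compile step might look like this sketch, using infocmp to probe for an existing entry before running tic:

```python
import os
import subprocess

def ensure_ghostty_terminfo(source_path):
    """Compile the vendored terminfo into ~/.terminfo if missing (sketch)."""
    # infocmp exits non-zero when the entry isn't in any terminfo database.
    have = subprocess.run(
        ["infocmp", "-x", "xterm-ghostty"],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0
    if not have:
        # -x keeps extended capabilities; -o picks the output database dir.
        subprocess.run(
            ["tic", "-x", "-o", os.path.expanduser("~/.terminfo"), source_path],
            check=True,
        )
```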
Not glamorous, but this is the kind of thing that makes a dotfiles system earn its keep: one fix deployed everywhere, instead of manually installing terminfo on each host.
Multi-host deployment
With a dozen or so machines, running ssh host 'cd dev/dotfiles && git pull
&& make install' on each one gets tedious. Instead I have a
deploy-dotfiles utility that reads the hosts list from config and runs the
install:
$ deploy-dotfiles
host1 ✔
host2 ✔
host3 ✔
host4 ✔
host5 ✗ (connection timed out)
...
Deployment runs sequentially; I considered parallelizing it, but even with SSH overhead it's fast enough in sequence, and sequential output is easier to read when something fails. It's fine for a small homelab.
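The loop behind that output can be sketched as follows; the remote command is hard-wired here for illustration, and run is injectable to keep the sketch testable.

```python
import subprocess

def deploy_all(hosts, run=subprocess.run):
    """Run the install over SSH on each host in turn (sketch)."""
    results = {}
    for host in hosts:
        proc = run(
            ["ssh", host, "cd dev/dotfiles && git pull -q && make install"],
            capture_output=True, text=True,
        )
        results[host] = (proc.returncode == 0)
        print(f"{host} {'✔' if results[host] else '✗'}")
    return results
```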
After make install, a JSON receipt file is written to
~/.dotfiles-version.json recording the git commit, install timestamp, and
hostname. A version subcommand shows the source HEAD alongside the
installed receipt, flagging stale installs. The deploy-dotfiles utility
has a --check mode that queries receipt files on all remote hosts without
deploying, so I can see at a glance which machines are behind.
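Writing the receipt is a few lines; the field names here are assumptions about the actual ~/.dotfiles-version.json layout.

```python
import json
import socket
import subprocess
from datetime import datetime, timezone

def git_head(repo_dir):
    """Current commit of the dotfiles checkout."""
    return subprocess.run(
        ["git", "-C", repo_dir, "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

def write_receipt(path, commit):
    """Record what was installed, when, and where (sketch)."""
    receipt = {
        "commit": commit,
        "installed_at": datetime.now(timezone.utc).isoformat(),
        "hostname": socket.gethostname(),
    }
    with open(path, "w") as f:
        json.dump(receipt, f, indent=2)
    return receipt
```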
What I'd do differently next time
The str.format() template engine has its limits. Anything with literal
curly braces (JSON templates, for instance) requires doubling every brace
that isn't a variable. I have a 245-line JSON config full of doubled
braces. A Jinja2-like syntax would be cleaner, but I'd have to either add a
dependency or write a minimal template engine. For now, the doubled braces
are ugly but functional. They're also a reliable way to make an LLM lose
track of what it's editing.
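A tiny example of the brace doubling: only {model} below is a template variable, everything else has to be escaped.

```python
# Literal braces in a str.format() template must be doubled.
template = '{{"model": "{model}", "temperature": 0.2}}'
print(template.format(model="claude"))
# → {"model": "claude", "temperature": 0.2}
```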
Some kind of file-system-convention approach (drop a .symlink suffix on
files you want symlinked) might reduce the config overhead, but I haven't
hit enough pain to justify the rewrite.
The test suite now covers config validation, template rendering, file
installation, agent generation, skill parity, the markdown formatter,
version receipts, utils, and the CLI itself with close to 20 test modules,
around 4,000 lines. A pre-submit script runs Black, mypy, and pytest
through uv, and the Makefile has targets for each. It's not full CI yet
(there's no pipeline triggered on push), but the local workflow catches most
regressions before they're committed.
Numbers at a glance
- 30+ dotfiles managed (15+ templated, 14 symlinked)
- 3 agents generated for Claude Code and Amp
- 2 skills deployed to Claude Code and Codex (blog-post, job-prep)
- 8 coding LLM tools configured with code-aide
- 12+ development hosts deployed to
- ~20 test modules, ~4,000 lines
- 0 external Python dependencies
The whole thing runs with make install and takes about a second. Zero
dependencies means it works on a fresh machine with just Python 3.11, which
every machine in my fleet already has.
Adding a new machine is: clone, create ~/.secrets.sh by hand, run make
install.
Adding a new dotfile is: create the template or source file, add one entry
to the config JSON, run make install. I usually get an LLM to make those
changes with a prompt like "ingest ~/.newdotfile and manage it with
dotfiles", then review and approve.
Thoughts
This work is private but the approach is straightforward enough to replicate by pointing an LLM at this blog post. The interesting bits are the template data pipeline (secrets file + OS detection + JSON config merged at install time) and the zero-dependency constraint, not any particularly clever code.
The coding LLM tool configuration is covered in a companion post: Eight Coding LLM Tools, One Configuration.