I currently use eight coding LLM tools at various times: Claude Code, Codex,
Cursor's CLI (agent, formerly cursor-agent), Gemini CLI, Amp, Copilot,
OpenCode, and Kilo. Each tool has its own configuration format, its own
mechanism for custom commands, and its own opinions about where settings
live. I want the same behavior from all of them: no emojis in commit
messages, run markdownlint on every markdown file, don't be sycophantic.
Getting that consistency across multiple tools on a dozen development machines turned out to be its own project. I started pulling it together in mid-2025 after the third time I fixed a guideline in one tool's config and forgot to update the others.
## The problem
Coding LLM CLI tools are multiplying fast, and none of them have agreed on a configuration standard. Claude wants JSON settings and markdown commands with YAML frontmatter. Cursor wants its own JSON format and plain markdown. Gemini wants markdown files with TOML headers. They all have different mechanisms for custom commands, and different places to put project-level vs global rules. And they keep changing!
If you only use one tool on one machine, this is fine. I use several tools across a bunch of machines running Fedora, Gentoo, Debian, Ubuntu, and macOS. Every time a tool updated its config format or I wanted to change a rule, I found myself editing the same content in multiple places, with drift inevitable. I got tired of it.
## Single-source guidelines
The fix was straightforward. I keep one file, `data/guidelines.md`, that defines how I want all my coding LLM assistants to behave. My dotfiles system templates it into each tool's config format on install:

- Claude gets them in `~/.claude/CLAUDE.md`
- Cursor gets them in `~/.cursor/cli-config.json`
- Gemini gets them in `~/.gemini/GEMINI.md`

Change the guidelines once, run `make install`, and every tool picks them up.
## Write once, generate three ways
Custom commands were trickier. I have a git-commit command that tells the
LLMs how to structure commit messages: conventional commit format, no
emojis, no "Generated by Tool" footers, imperative mood. The content is
defined once but each tool wants a different wrapper format.
Claude wants YAML frontmatter:

```markdown
---
allowed-tools: Bash(git *)
description: Make git commits
---

# Command definition

Look at all the git $ARGUMENTS changes...
```
Cursor and Codex want plain markdown in different places, while Gemini wants TOML. So there's a template per tool that wraps the shared content in the right format. The install process generates all of them from the single source.
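A minimal sketch of that generation step, assuming illustrative output paths and wrapper formats (the real templates live in my dotfiles repo):

```python
from pathlib import Path

def wrap_for_tools(body: str) -> dict[str, str]:
    """Wrap one shared command body in each tool's wrapper format.

    Keys are illustrative install paths relative to $HOME.
    """
    return {
        # Claude: markdown with YAML frontmatter
        ".claude/commands/git-commit.md": (
            "---\n"
            "allowed-tools: Bash(git *)\n"
            "description: Make git commits\n"
            "---\n\n" + body
        ),
        # Gemini: TOML wrapper with the prompt inline
        ".gemini/commands/git-commit.toml": (
            'description = "Make git commits"\n'
            'prompt = """\n' + body + '\n"""\n'
        ),
        # Cursor: plain markdown, no wrapper
        ".cursor/commands/git-commit.md": body,
    }

def install_command(body: str, home: Path = Path.home()) -> None:
    """Write every generated wrapper under the given home directory."""
    for rel, content in wrap_for_tools(body).items():
        target = home / rel
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content)
```

The point is that only `body` is authored by hand; everything else is mechanical wrapping.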
The custom git-commit command is the main thing I use across all the tools
to avoid hype and phrases I hate, and it's what keeps LLM-generated commits
looking like they were written by a human who cares about their git history.
Not all tools support custom commands yet, or at least I haven't figured it all out. Currently Claude, Cursor, Gemini, and Codex get generated commands. The rest get the shared guidelines but not the command wrappers. The template approach means adding a new tool is just another wrapper when they catch up.
This pattern extends to longer prompt definitions that Claude calls "skills." Skills can be multi-file: the blog-post skill that was used to write this post includes a ~300-line style guide derived from analyzing dozens of posts spanning two decades of my blog, plus the prompt definition that references it. Other skills handle things like analyzing job descriptions or preparing for interviews.
Skills now deploy to both Claude Code and Codex from shared source data.
Claude gets symlinks into ~/.claude/skills/; Codex gets real file copies
into ~/.codex/skills/ with OpenAI-format metadata for skill discovery. For
claude.ai's web interface, the install process builds zip archives directly.
## The settings merge problem
One problem I hadn't anticipated: Claude's settings.json accumulates
permission rules as you use it. Every time you approve "allow this tool to
run git commands" or "allow writes to this directory" those get saved. If
you naively overwrite the settings file with a template on every install,
you lose all the permissions you've granted during a session.
The fix was a JSON merge strategy: when installing a templated JSON settings
file, the script loads the existing file, unions and deduplicates the
permissions.allow, permissions.deny, and permissions.ask arrays with
the template's values, preserves any extra top-level keys, and writes the
merged result. The template provides the baseline; local usage adds to it.
In practice, those arrays are the tool's "what am I allowed to run and touch?" rules, and preserving them avoids re-approving the same actions after every install.
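The merge described above can be sketched like this; the function names are mine, but the behavior (union the three permission arrays, keep extra top-level keys) follows the strategy just described:

```python
import json
from pathlib import Path

PERMISSION_ARRAYS = ("allow", "deny", "ask")

def merge_settings(template: dict, existing: dict) -> dict:
    """Merge a templated settings dict over an existing one.

    Extra top-level keys from the existing file survive; the
    permissions.* arrays are unioned (template values first) and
    deduplicated while preserving order.
    """
    merged = {**existing, **template}
    perms: dict = {}
    for key in PERMISSION_ARRAYS:
        combined = (template.get("permissions", {}).get(key, [])
                    + existing.get("permissions", {}).get(key, []))
        perms[key] = list(dict.fromkeys(combined))  # ordered dedupe
    merged["permissions"] = perms
    return merged

def install_settings(template_path: Path, target: Path) -> None:
    """Install a settings template without clobbering granted permissions."""
    template = json.loads(template_path.read_text())
    existing = json.loads(target.read_text()) if target.exists() else {}
    target.write_text(json.dumps(merge_settings(template, existing), indent=2) + "\n")
```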
## code-aide: installing the tools themselves
Installing coding LLM CLIs on a dozen Linux and macOS machines is its own annoyance. Some need Node.js and npm. Others have their own native installer scripts. Cursor downloads a tarball directly. Each has different prerequisites, each updates on its own schedule, and some have changed their installation method since they launched.
I wrote code-aide to handle this.
It's now open source, installable via `uv tool install code-aide` or pipx, with zero external dependencies: it uses only the Python 3.11+ standard library. Tool definitions live in a JSON config file (`tools.json`) with a `schema_version` field, so adding a new tool means adding a JSON entry rather than editing Python code.
code-aide supports three installer definition types:

- npm tools get an `npm_package` name and optional `min_node_version` (Gemini, Codex, OpenCode, Kilo, Copilot)
- script tools get an `install_url` and `install_sha256` (Claude, Amp)
- direct download tools get a tarball URL template with platform and architecture substitution (Cursor)
Those types describe how a tool is modeled in tools.json. The VIA
column below is the install source detected on this specific machine, which
can be brew or cask if that host was provisioned that way.
The snippets below are example output from my setup at the time of writing:

```
$ code-aide status -c
TOOL      STATE   VERSION                 VIA       PATH
agent     ok      2026.02.27-e7d2ef6      download  /Users/.../.local/bin/agent
claude    newer   2.1.71                  script    /Users/.../.local/bin/claude
gemini    ok      0.32.1                  brew      /opt/homebrew/bin/gemini
amp       ok      0.0.1772734909-g2a936a  script    /Users/.../.local/bin/amp
codex     ok      0.111.0                 cask      /opt/homebrew/bin/codex
copilot   newer   0.0.423                 npm       /opt/homebrew/bin/copilot
opencode  newer   1.2.20                  brew      /opt/homebrew/bin/opencode
kilo      opt-in
```
Note: The `agent` row is Cursor's CLI; the binary started out as `cursor-agent`. The latest-version metadata still uses `cursor`, which is why the table below shows that name instead.
Some of these tools install via curl | bash and I'd rather not run a
script that's changed since I last reviewed it, so for script-type
installers the downloaded script is verified against a known SHA256 of a
reviewed script, and will not run if it has changed. For direct_download
tools like Cursor, the install script changes with every version, so SHA256
verification was dropped in favor of version-string comparison against the
cached latest version.
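The script-verification step amounts to a hash comparison before anything executes. A minimal sketch, with function names of my own choosing:

```python
import hashlib
import urllib.request

def verify_script(data: bytes, expected_sha256: str) -> None:
    """Refuse a script whose contents no longer match the reviewed hash."""
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        raise RuntimeError(
            f"install script changed: expected {expected_sha256}, got {actual}; "
            "re-review it before updating the stored hash"
        )

def fetch_install_script(url: str, expected_sha256: str) -> str:
    """Download an install script and verify it before it is ever run."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    verify_script(data, expected_sha256)
    return data.decode()
```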
## Auto-migration
One thing I didn't anticipate: tools keep changing their installation
method. Claude Code started as an npm package (@anthropic-ai/claude-code)
and later shipped a native installer script. In my own setup, Cursor
installs also moved from shell-script installs to direct tarball downloads
managed by code-aide. If you keep older install paths around, things often
still work, but the fleet drifts and upgrade behavior becomes less
predictable. You may also end up with packaged installs alongside self-installs.
code-aide 1.7.0+ detects these deprecated installs and auto-migrates.
code-aide status warns you, code-aide upgrade handles the transition:
remove the old install, run the new method, verify it worked. If something
goes wrong, it tells you what to do manually rather than leaving you with a
half-migrated mess.
## Keeping up with upstream
The SHA256 hashes go stale, of course. The update-versions command
handles that: for npm-installed tools it queries the registry for the latest
version and publish date; for script-install tools it downloads the install
script, computes the SHA256, and compares against the stored hash. Version extraction is custom per tool: Cursor embeds `YYYY.MM.DD-hash` patterns in download URLs, Amp has a GCS version endpoint, and others use `VERSION=` patterns in the script itself.
```
$ code-aide update-versions
Checking 8 tool(s) for updates...

Tool      Check         Version                  Date        Status
--------  ------------  -----------------------  ----------  ------
cursor    script-url    2026.02.27-e7d2ef6       2026-02-27  ok
claude    npm-registry  2.1.71                   2026-03-06  ok
gemini    npm-registry  0.32.1                   2026-03-04  ok
amp       script-url    v0.0.1772937800-g3b2e3d  2026-03-08  ok
codex     npm-registry  0.111.0                  2026-03-05  ok
copilot   npm-registry  1.0.2                    2026-03-06  ok
opencode  npm-registry  1.2.21                   2026-03-07  ok
kilo      npm-registry  7.0.40                   2026-03-06  ok

Updated latest version info in ~/.config/code-aide/versions.json.
No installer checksum updates required (latest version metadata was refreshed).
```

Note: `update-versions` checks upstream metadata, not your installed binary versions. Use `code-aide status` and `code-aide upgrade` for local installs.
code-aide uses a two-layer version model: bundled definitions ship with the
package and contain install methods, URLs, and SHA256 checksums. A local
cache at ~/.config/code-aide/versions.json stores the latest versions and
dates from update-versions. The cache merges into the bundled data at
load time, so you can track upstream changes without waiting for a new
code-aide release.
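The load-time merge is simple to sketch. The data shapes here are my assumptions, not code-aide's actual schema: bundled definitions keyed by tool name, and a cache file whose fresher fields win:

```python
import json
from pathlib import Path

def load_tool_definitions(bundled: dict, cache_path: Path) -> dict:
    """Overlay the local version cache on bundled tool definitions.

    Bundled data supplies install methods, URLs, and checksums; the
    cache (written by update-versions) supplies fresher latest-version
    fields, which take precedence.
    """
    merged = {name: dict(info) for name, info in bundled.items()}
    if cache_path.exists():
        cache = json.loads(cache_path.read_text())
        for name, fresh in cache.get("tools", {}).items():
            merged.setdefault(name, {}).update(fresh)
    return merged
```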
When a script-install tool's SHA256 has changed, it flags the mismatch and
can write the updated hash back with --yes, or interactively one at a
time. Finding the right version endpoint for Amp took a couple of tries;
the obvious ampcode.com/version URL returns HTML; the actual version lives
at a GCS endpoint buried in the install script. That version string
(v0.0.1774123456-gc0ffee) probably also tells you something about Amp's
relationship with semantic versioning, if we even care about such things in
this fast-moving LLM world.
## Prerequisites
code-aide also handles the Node.js dependency problem for npm-based install
paths. The minimum Node.js version varies by tool (at time of writing:
Gemini wants 20+, Codex and Copilot want 22+ on npm installs). If you use
brew / cask / native installers for those tools, Node.js may not be
required. code-aide install -p detects your system package manager (apt,
dnf, pacman, emerge, and a few others) and installs Node.js and npm if
they're missing.
## What I'd do differently
The format fragmentation across tools is the real ongoing cost. I've had to update templates multiple times already because a tool changed where it looks for config files or switched its frontmatter format. There's no standard emerging; if anything, each new tool invents another format. The title of this post has changed numbers several times before publishing.
The single-source approach helps, but it only works because the semantic overlap between tools is high; they all want roughly the same information, just arranged differently. If the tools diverge in what they support rather than just how they format it, the shared-content model gets harder to maintain.
Testing has improved since the first version. code-aide has a pytest suite now, though some of the harder-to-test operations (upgrade, remove, prerequisite installation) are still on the TODO list. Progress, at least.
## Numbers
- 8 coding LLM tools managed (4 with custom commands, all 8 with guidelines)
- 3 install types: npm, script, direct download
- 0 external Python dependencies
Adding a new coding LLM tool is mostly: add a JSON entry to tools.json,
and it shows up everywhere on next install. Unless they invent yet another
install mechanism!
## Thoughts
The interesting problem here isn't dotfiles management; that's solved, with many good tools. What coding LLM assistants have created is a new category of configuration that needs to stay synchronized: guidelines, custom commands, skills, and permissions, across tools that don't share any common format. I've written separately about why I think the harness layer matters more than the model; this is the practical side of that argument. The approach I describe here is straightforward enough to replicate by pointing a coding LLM at this post.
There are still gaps, such as whether prompt style should vary by harness and model combination. I haven't tested that systematically yet, including whether it is necessary to SHOUT in one model's skill text to emphasise.
code-aide is at github.com/dajobe/code-aide and installable via `uv tool install code-aide`.
This follows my earlier post on Redland Forge, which covers using LLMs for the actual development work. A companion post on the dotfiles system that powers the templating is also published.