AI coding is great until your repo starts looking like it was assembled during a fire drill.
So I made this:
npx fix-hairball
It reviews your codebase, gives it a grade, fixes the worst parts, reviews it again, and keeps going until it gets an A.
Basically:
D -> C -> B -> A
but with more terminal output and fewer feelings.
Why?
Because AI agents love writing code.
They do not always love deleting code.
They will happily create:
- a helper for the helper
- a compatibility shim for code written 11 minutes ago
- a 700-line test file
- three “shared” abstractions used once
- a function named like it has a mortgage
fix-hairball exists to run the cleanup loop on purpose.
What It Does
Under the hood, it runs:
npx commands-com quality --until A
It asks multiple AI reviewers what is wrong, synthesizes the useful complaints, splits the fixes into parallel tasks, applies them, runs checks, then does it again.
Very glamorous.
Mostly it deletes things.
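The loop itself is simple. Here is a rough sketch in shell — not the real implementation, just the shape of it. grade_repo and the canned grade progression are stand-ins for the actual AI review step:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the fix-hairball loop. In the real tool,
# "grading" means asking multiple AI reviewers and synthesizing
# their complaints; here it is faked so the sketch is runnable.
set -euo pipefail

grade_repo() {
  # Stand-in for the review step: each fix pass bumps the grade one letter.
  case "$1" in
    D) echo "C" ;;
    C) echo "B" ;;
    B) echo "A" ;;
    *) echo "A" ;;
  esac
}

grade="D"
while [ "$grade" != "A" ]; do
  echo "grade: $grade -- fixing worst parts, re-running checks..."
  grade=$(grade_repo "$grade")
done
echo "grade: A -- done"
```

The point is the termination condition: it does not stop after one pass, it stops when the reviewers stop complaining.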
The Philosophy
If code can be clean in 50 lines, it should not be 200.
If an abstraction exists only because yesterday’s abstraction got lonely, it should go.
If the repo has “legacy compatibility” for something created this morning, everybody needs a walk.
Try It
npx fix-hairball
It will not make you a better engineer.
But it may make your repo look like one was involved.