The minimal change trap
Coding agents are trained to make minimal edits. That's great for building. For refactoring, it's a quiet disaster.
I asked an AI to remove a feature last week. A whole feature. Rip it out, delete the component, clean up the call chain, simplify everything that touched it.
What I got back was "".
The string was still being passed around. Every function in the chain still accepted it as a parameter. The callers still constructed it. The downstream consumers still checked for it. The entire plumbing was intact. The AI had just... emptied the pipe. Replaced the value with nothing and called it done.
The code compiled. The tests passed. And the feature was still there, structurally, in every way that matters. A ghost in the architecture. Invisible to the type checker, perfectly visible to anyone who had to work with this code next.
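A minimal sketch of that "emptied pipe" pattern, with hypothetical names (the actual feature and codebase aren't shown in this post):

```typescript
// Hypothetical sketch of the anti-pattern: a watermark feature was
// "removed", but all of its plumbing survives.

function buildWatermark(): string {
  return ""; // was something like `© ${user.name}`; the AI emptied it instead of deleting it
}

function renderDocument(body: string, watermark: string): string {
  // Downstream consumer still branches on the ghost value.
  return watermark ? `${body}\n---\n${watermark}` : body;
}

// Every caller still constructs and passes the dead parameter.
const output = renderDocument("Hello", buildWatermark());
// Compiles, type-checks, behaves like the feature is gone — but
// structurally, the feature is still here.
```

Actually removing the feature would mean deleting `buildWatermark`, dropping the `watermark` parameter from `renderDocument` and every call site, and collapsing the conditional.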
Trained to be gentle
Coding agents are trained to make minimal edits. This makes sense if you think about it from the training perspective. Most coding tasks are additive. Add a feature. Fix a bug. Implement this endpoint. The safest, most predictable thing the model can do is change as little as possible. Touch what you need, leave everything else alone.
For new features, this is great. You describe what you want, the model finds the right place to slot it in, writes the code, and the rest of your application stays untouched. Minimal blast radius. Maximum predictability. Exactly what you want.
For refactoring, it’s a disaster.
Refactoring is the opposite of minimal. Refactoring is deliberate, thorough surgery. You’re not adding something new. You’re reshaping what exists. And reshaping means following every thread to its end and making a conscious decision about each one.
When you rip out a feature, you need to trace every caller. Every parameter. Every conditional that checks for it. Every database column that stores it. Every test that exercises it. You need to understand why each piece exists, and whether removing the feature means that piece should go too, or whether it serves double duty and should stay.
An AI trained on minimal diffs does not do this. An AI trained on minimal diffs finds the fastest path to green and takes it.
The short-circuit problem
The specific failure mode drives me insane. I’ll ask for a refactor, something like “extract this logic into its own service and remove it from the original.” What I get back is the new service, correctly built, and the original code patched with a bunch of no-ops.
Functions that used to do real work now return undefined. Callbacks that used to process data are now () => {}. Parameters that used to carry meaning now receive null from every caller, yet still exist in every signature, because removing them would mean updating fifteen call sites, and the model chose the minimal path.
This is where it gets dangerous. Some caller three levels deep still passes that parameter, still assumes it matters, still branches on its value. The model set it to null at the source, and now that branch always takes the falsy path. Which might be fine. Or might silently disable something unrelated that happened to depend on the same value.
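A hypothetical sketch of how that plays out (the names and call chain are invented for illustration):

```typescript
// The model nulled `template` at the source; a branch three levels
// down still assumes it matters.

interface ExportOptions {
  template: string | null; // the "removed" feature — now always null
}

function startExport(opts: ExportOptions): string[] {
  return prepareExport(opts);
}

function prepareExport(opts: ExportOptions): string[] {
  return runExport(opts);
}

function runExport(opts: ExportOptions): string[] {
  const steps = ["gather data"];
  // This branch looks feature-specific, but it also gates a
  // formatting pass that other exports happened to rely on.
  if (opts.template) {
    steps.push("apply formatting");
  }
  return steps;
}

// Before the "refactor": templates flowed through and formatting ran.
// After: every caller passes null, and formatting is silently gone.
startExport({ template: null });
```

Nothing here fails. The falsy path is perfectly legal; it's just wrong.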
You don’t find out until a user reports that exports stopped working, or notifications aren’t sending, or some filter silently ignores half its conditions. Because the code compiled. Because the tests, which were also minimally modified, still passed.
I’ve lost days to this. Literal days. Staring at diffs, screaming internally, tracing null values through call chains that shouldn’t exist anymore, because the model decided that emptying a variable was the same as removing a concept.
Go fast and break things (but don’t fix things)
There’s an irony in all this.
LLMs are phenomenal at the scrappy, duct-tape, get-it-working style of programming. Need to prototype something in an afternoon? Perfect. Need to integrate with a new API you’ve never seen before? They’ll read the docs and wire it up before you’ve finished your coffee. Need a quick fix for a production bug? They’ll find the minimal patch and ship it.
They are agile incarnate. They find the shortest path between where you are and where you need to be, and they take it, every time, without hesitation.
This is why they’re so good at building. Building is inherently forward motion. You’re going from nothing to something. The minimal path forward is the right path, most of the time.
But fixing things properly? Simplifying things? Removing the scaffolding once the building is done? That requires stepping back, looking at the whole structure, and making it less. The minimal change when refactoring isn’t the smallest diff. It’s the diff that leaves the codebase in the simplest possible state. Sometimes that means touching fifty files. Sometimes that means deleting an entire module and rewriting three others to not need it.
That’s the exact opposite of what these models are optimized for.
Architecture by accumulation
The result, if you’re not careful, is architecture by accumulation. Layer on layer of additions, each one minimal, each one correct in isolation. Nobody ever goes back to simplify. Nobody rips out the thing that was only needed for the prototype. Nobody merges the three services that should’ve been one from the start.
Because every time you ask the AI to clean up, it makes the minimal change. Which means it patches instead of removing. Empties instead of deleting. Wraps instead of unwrapping.
You end up with a codebase that looks like a city that was never zoned. Perfectly functional. Everything connected. But the plumbing runs through six buildings because nobody ever rerouted it when the layout changed. And every time you try to reroute it, your AI assistant adds another junction instead.
What I actually want
I don’t want a model that makes small changes. I want a model that understands the intent behind a change and follows through completely.
“Remove this feature” doesn’t mean “set everything to null.” It means the feature should not exist in the codebase afterward. No parameters carrying its ghost. No conditionals checking for it. No database columns storing it. Gone.
“Simplify this” doesn’t mean “inline one function.” It means step back, look at what this code actually does versus what it could do if it were written today, and rewrite it.
The skill I keep coming back to, the thing that separates productive AI-assisted development from chasing your tail, is knowing when to let the model do its minimal-diff thing and when to stop it and say: no. Wider. Trace the whole thread. Follow it to the end. I need you to think about this, not just make it compile.
Sometimes the best prompt is: “Pretend this code doesn’t exist yet. Given what it needs to do, write it from scratch.” Because the model is so much better at building from nothing than it is at surgically reducing something that already exists.
The gap that matters
LLMs are the best pair programmers I’ve ever had for building things. And they’re the most frustrating pair programmers I’ve ever had for cleaning things up.
The gap between those two is where most of the difficulty in AI-assisted development lives right now. Not in getting the model to write code. That’s solved. But in getting it to care about the code that’s already there. To treat refactoring with the same seriousness as feature development. To understand that sometimes the right change is a big one.
I think this will improve. Models will get better at understanding codebases holistically, at reasoning about what should be removed, at making the brave diff instead of the safe one. But right now, today, if you’re using AI to build software, the thing you need to stay vigilant about isn’t what it writes.
It’s what it refuses to delete.
Deleting code at neople.io.



