The real work of managing an army of AI agents
If I haven't "worked" in weeks, why is my head spinning?
The code I’m not writing
I realized something wild the other day: I haven’t written a real line of code in weeks. Not because I’m slacking. Not because I’ve moved into management. But because I’m managing an army of AI agents who write it for me.
On paper, this sounds like a dream. And in many ways, it is. I’m shipping features that would’ve taken me days or weeks to build. My productivity metrics are through the roof. I’m building more, faster, better.
So why is my brain completely fried?
The mental load no one is talking about
Here’s what managing AI agents actually looks like:
I’m juggling six different features simultaneously. Not planning them – actively building them. With Claude Code (Anthropic’s AI coding assistant), I’ve got multiple agents working on different parts of our codebase at once.
Agent 1 is refactoring our authentication system. Agent 2 is building a new skill template system. Agent 3 is debugging a race condition. Agent 4 is writing tests. Agent 5 is updating documentation. Agent 6 is implementing a new API endpoint.
And I’m conducting this orchestra, constantly switching between:
High-level architecture decisions
Deep-dive code reviews
Testing edge cases
Explaining dependencies
Reeling agents back when they go off track
Connecting dots between different parts of the system
My brain is constantly zooming in and out, switching abstraction levels every few minutes. It’s like playing six games of 3D chess simultaneously, where each move affects all the other boards.
The autonomy illusion
These agents are brilliant, but they’re not autonomous. Not yet. They need constant guidance, context, and course correction.
An agent might brilliantly implement a feature but miss a crucial edge case. Another might write perfect code that doesn’t follow our conventions. A third might solve the problem I asked about while creating three new ones I didn’t anticipate.
So I’m not just managing – I’m actively collaborating with each agent, providing context they lack, catching issues before they cascade, and making dozens of micro-decisions every hour.
From instructions to infrastructure
The real shift happened when I stopped repeating myself. I used to tell every agent the same things:
“Run make fix to handle linting”
“Check PULL_REQUEST_TEMPLATE.md for our standards”
“Our API follows these patterns...”
Now I’m building infrastructure for AI collaboration. Shared context documents. Automated setup scripts that prep agents with project knowledge. Templates that inject our standards into every conversation.
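To make that concrete, here’s a minimal sketch of what one piece of that infrastructure could look like. The file names and layout below are illustrative assumptions, not our actual setup: a small script that gathers shared project docs into one context block you can hand to a fresh agent session, so the conventions don’t have to be repeated by hand every time.

```python
# Minimal sketch (hypothetical file paths): assemble shared project context
# so every new agent session starts with the same standards and conventions.
from pathlib import Path

# Illustrative sources of shared context -- adjust to your own repo layout.
CONTEXT_FILES = [
    "docs/CONVENTIONS.md",                # coding standards, "run make fix", etc.
    ".github/PULL_REQUEST_TEMPLATE.md",   # what a good PR looks like
    "docs/API_PATTERNS.md",               # how our API endpoints are structured
]

def build_agent_context(root: str = ".") -> str:
    """Concatenate shared docs into a single context block for an agent prompt."""
    sections = []
    for rel_path in CONTEXT_FILES:
        path = Path(root) / rel_path
        if path.exists():
            sections.append(f"## {rel_path}\n{path.read_text()}")
    return "\n\n".join(sections)

if __name__ == "__main__":
    # Paste (or pipe) this output into a new agent session before assigning a task.
    print(build_agent_context())
```

The point isn’t the script itself; it’s that the project knowledge lives in one place and gets injected automatically, instead of living in my head and leaking out one conversation at a time.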
I’m not just coding anymore. I’m building systems to enable AI agents to code effectively. It’s meta in a way that makes my head spin.
This change forces me to think about my work on a completely different level. It’s not just about enabling my human team anymore – it’s about enabling a team of agents. And as our engineers transition from writing code to managing AI, they’re becoming managers of their own agent teams too.
Manager of managers
Here’s the wild part: I’m starting to roll this out to my team. Each engineer is becoming a manager of their own AI squad. We’re transitioning from a team of individual contributors to a team of AI conductors.
It’s fascinating and terrifying. We’re all learning this new skill on the fly – how to decompose problems for AI, how to provide context efficiently, how to review AI work at scale, how to maintain quality when you’re not writing the code yourself.
We’re in this strange transition period where the tools are powerful enough to transform how we work but not quite powerful enough to work independently. We’re managers of brilliant but needy team members who never sleep, never complain, but also never quite fully understand the big picture.
New limits
This new way of working comes with new constraints. I’m burning through my Claude Opus limits almost daily, even on the highest tier plan available. (For those not deep in the AI world, Opus is Anthropic’s most capable model – the one you want for complex coding tasks.)
Every time Opus switches back to Sonnet (the less capable model), it makes me a little sad. It feels like I’ve accidentally burnt out one of my agents. Luckily, usage limits – and therefore agent burnout – reset every 5 hours.
It’s a strange new constraint. Not time. Not energy (well, also energy). But AI conversation limits. I find myself rationing my interactions, being strategic about which problems deserve the Opus treatment and which can wait for the reset. In a way, it’s just like consciously dividing tasks across the rest of the team based on their experience.
The future is weird
This is the future of software development, and it’s nothing like what I expected. It’s not “AI replaces programmers.” It’s “programmers become superhuman by managing AI teams.”
But superhuman doesn’t mean it’s easy. My productivity has 10x’d, but so has the cognitive load. I’m building faster than ever while feeling more mentally exhausted than ever.
Despite the mental fatigue, despite the constant context switching, despite watching my agents “burn out” before lunch – I wouldn’t go back. This is too powerful, too transformative to ignore.
I just need to figure out how to manage my human brain while I manage my AI team. Because right now, they’re scaling faster than I am.
Building the future of work with AI at neople.io. Follow our journey as we figure out what it means to work alongside digital colleagues.


