Intentional and accidental, AI first can be both
A peek into what it’s actually like becoming an AI first team and company
I keep getting asked what it means that we’re an “AI first” company, and every time I try to answer it cleanly, it comes out wrong. Not incorrect, just misleading. Too neat. Too intentional. As if we sat down one day, wrote a strategy doc, and decided to reorganize the company around AI.
That’s not what happened.
What actually happened is messier, and probably more familiar if you’re building something yourself.
We ran into limits. Over and over. Limits in people, limits in time, limits in how fast we could move without everything falling apart. And instead of solving those limits by adding more layers or more hires, we kept reaching for whatever let us keep going.
More often than not, that was software. Increasingly, that was AI.
Only later did we realize we’d crossed some invisible line.
I still don’t love the definitions, but here’s the best way I can explain it. An AI company builds AI products. An AI first company assumes that knowledge work itself can be redesigned. Not optimized, redesigned. The question shifts from “who should do this” to “why does this exist in this form at all.”
Once that shift happens, it’s hard to unsee.
There’s a popular story going around right now about tiny teams doing huge numbers, and the explanation is usually that they’re “leveraging AI.” I think that story is comforting, because it makes success sound like a tooling choice. Pick the right stack, move faster, win.
But small teams with outsized output existed long before the current wave of AI. People built highly automated businesses with boring scripts, fulfillment partners, and unglamorous systems. The difference now is not that automation exists, it’s where it reaches. Work that used to require specialists early on (legal reviews, financial checks, first drafts of designs, exploratory code, internal tooling) can now be done well enough by one person to keep momentum.
Not perfectly. Not magically. But well enough.
That “well enough” matters more than people like to admit.
Constraint is more powerful than rules
Inside Neople, we never made a rule that said “use AI for this.” We also never sat down and mapped out every process and asked where AI could be slotted in. That approach tends to produce a lot of activity and very little change.
What we did instead, often without naming it, was remove escape hatches.
If something needed to get done and there wasn’t an obvious person to hand it to, the work didn’t disappear. Someone had to find a way. Sometimes that meant automation. Sometimes it meant a model. Sometimes it meant realizing the task itself was unnecessary and deleting it.
That last option is still the most powerful one, and the least talked about.
I’ll be honest about something that feels slightly uncomfortable to say out loud. The biggest driver of becoming AI first was not excitement or vision. It was constraint.
When you don’t have a junior developer to offload smaller tasks to, you try to see how far you can get on your own. When you don’t have a designer available, you learn to generate and iterate assets yourself. When you don’t have legal support on hand, you learn to do the first eighty percent and escalate the truly risky parts.
Scarcity has always created efficiency. That’s not new. What’s new is how much leverage one person can create once they’re forced into that mode. The tools amplify effort, but the mindset comes from having no alternative.
Blending of operators in the control room
There is a second-order effect to this that I didn’t fully appreciate at first. When everyone becomes more self-reliant, boundaries blur. Designers can challenge technical decisions because they can prototype alternatives. Engineers can challenge design decisions because they can generate and test variations. People stop having opinions in the abstract and start showing working versions.
That creates friction. It also creates better outcomes, if you can tolerate the tension.
Traditional teams feel calmer partly because crossing boundaries is expensive. AI makes that cheap. The real challenge stops being adoption and starts being collaboration. How do you work together when everyone has more agency than before?
I don’t think we’ve solved that yet. I do think it’s a real shift that deserves more attention than it gets.
There’s also a people side to this that’s easy to get wrong. The moment someone feels something is being taken away, they stop listening. It doesn’t matter how good the long-term argument is. Loss shuts the conversation down.
AI triggers that reaction fast. If “AI first” is framed as fewer hires, less support, more pressure, resistance is inevitable. Even if the upside is real.
What seems to work better is a combination of two things happening at the same time. Clear personal upside, and real constraints. Not “use AI more,” but “find a way to get this done,” paired with leaders actually doing the work themselves.
Once it’s visible, it stops being theoretical.
Advice for the AI-aspirational
If I were starting from scratch and wanted a team to move in this direction, I wouldn’t begin with training sessions or tool rollouts.
Become the practitioner
I’d start with myself. Pick a task I usually delegate, struggle through doing it with the tools available until the result is good enough, then show how I did it. Not as a mandate, just as a reference point.
Rely on first principles
After that, the more important work begins. First principles. What actually needs to happen for the company to move forward? What assumptions are being carried simply because they used to be true? What work exists only because it always has?
Most of the real gains don’t come from smarter execution. They come from removing work entirely.
Reimagine work
Looking ahead, I don’t think the future is people versus AI. It’s different kinds of people. More time spent designing systems, setting boundaries, and deciding how agents should behave. More focus on customer-facing roles, because trust and context still matter a lot while the technology keeps shifting underneath us.
At the same time, I expect we’ll keep increasing the number of agents internally. Not because it’s fashionable, but because once you experience the speed of “I can just try this,” waiting starts to feel expensive.
That’s been the biggest emotional shift for me. Not excitement. Not fear.
Impatience.
And if you’re feeling some version of that too, you’re probably closer to being AI first than you think.