<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Joy Lab]]></title><description><![CDATA[Testing, tinkering, and transforming the way we work to find a little more joy with the help of AI]]></description><link>https://thejoylab.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!Oesp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77fea306-5923-4922-ab64-d41f25d38655_400x400.png</url><title>The Joy Lab</title><link>https://thejoylab.ai</link></image><generator>Substack</generator><lastBuildDate>Wed, 15 Apr 2026 07:14:21 GMT</lastBuildDate><atom:link href="https://thejoylab.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Neople Labs]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[adrie@neople.io]]></webMaster><itunes:owner><itunes:email><![CDATA[adrie@neople.io]]></itunes:email><itunes:name><![CDATA[Neople]]></itunes:name></itunes:owner><itunes:author><![CDATA[Neople]]></itunes:author><googleplay:owner><![CDATA[adrie@neople.io]]></googleplay:owner><googleplay:email><![CDATA[adrie@neople.io]]></googleplay:email><googleplay:author><![CDATA[Neople]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The curse of being early]]></title><description><![CDATA[Turns out when you build-build-build, you have to be prepared to tear it all down.]]></description><link>https://thejoylab.ai/p/building-ai</link><guid isPermaLink="false">https://thejoylab.ai/p/building-ai</guid><dc:creator><![CDATA[Job Nijenhuis]]></dc:creator><pubDate>Tue, 14 Apr 2026 12:07:09 GMT</pubDate><enclosure 
url="https://substack-post-media.s3.amazonaws.com/public/images/759dd516-c3e4-4dbc-abbd-c075e6459fd1_1200x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last week I found a piece of code that enforced turn-based conversations with an LLM. Like a chess clock. You say something, you wait, the model responds, you wait, you say something back. Strict alternation. No interrupting.</p><p>I stared at it for a good thirty seconds before I remembered why it existed.</p><p>Our first model was <code>text-davinci-002</code>. If you&#8217;re not familiar: it wasn&#8217;t a chat model. It was a completion model. You&#8217;d give it a blob of text and it would try to continue it. There was no concept of &#8220;messages&#8221; or &#8220;roles&#8221; or &#8220;system prompts.&#8221; You&#8217;d format your own conversation by hand, something like <code>Human: ... Assistant: ...</code>, and pray the model understood the pattern.</p><p>It usually did. Until it didn&#8217;t. And when it didn&#8217;t, you got an AI that would respond to itself, or start roleplaying as the human, or just wander off into increasingly creative fiction. The turn-based enforcement existed because without it, the whole thing would derail.</p><p>That code was still running in production. In 2026.</p><div><hr></div><h2>The art of throwing stuff away</h2><p>Everyone in tech talks about building. Shipping features. Adding capabilities. The whole culture is additive. More tools, more integrations, more options. Your product roadmap is a list of things you&#8217;re going to add.</p><p>Nobody talks about removing things. But if you&#8217;ve been building on top of LLMs since the early days, removing is the single most important skill you can develop.</p><p>A significant portion of our codebase, and I mean <em>significant</em>, is patches. Workarounds. Guardrails. Things we built because the model at the time couldn&#8217;t do what we needed it to do.</p><p>Turn-based conversation management? 
Built it. Context window management with sliding windows and summarization? Built it. Our own tool-calling framework because the model didn&#8217;t support function calling natively? Built it. Structured output parsing with regex and retry loops because the model couldn&#8217;t reliably return JSON? Built that too.</p><p>Every one of those was the right decision at the time. Every one of those is now, to varying degrees, obsolete.</p><p>Modern models handle multi-turn conversations natively. Context windows went from 4k tokens to over a million. Tool calling is a first-class API feature. Structured outputs with guaranteed JSON schemas ship out of the box.</p><p>But the code is still there. And it&#8217;s not inert. It&#8217;s not just sitting in a corner collecting dust. It&#8217;s actively running, actively adding complexity, actively creating bugs that wouldn&#8217;t exist if you just... removed it and trusted the model.</p><div><hr></div><h2>The bleeding edge trap</h2><p>There&#8217;s a pattern I keep seeing. It goes like this.</p><p>You try something ambitious. Something on the absolute bleeding edge of what&#8217;s possible with the current model. It works in your demo. It works in your test suite. You ship it.</p><p>Then reality hits. Edge cases. Weird inputs. The model does something unexpected 5% of the time. That 5% is enough to break the experience for real users.</p><p>So you build a fix. A guardrail. A fallback system. A retry mechanism. Sometimes an entire parallel pipeline. This takes weeks. It&#8217;s clever engineering. You&#8217;re proud of it.</p><p>Six months later, a new model drops. The thing that failed 5% of the time? It now fails 0.01% of the time. The model just... got better. Your elaborate fix is now solving a problem that barely exists anymore.</p><p>But nobody removes the fix. Because it&#8217;s <em>working</em>. It&#8217;s tested. It&#8217;s in production. It has its own monitoring. Someone wrote documentation for it. 
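</p><p>A typical specimen, sketched below. This is illustrative, not our production code; <code>call_model</code> stands in for whatever completion call you were making at the time:</p><pre><code>import json

def parse_json_with_retries(call_model, prompt, retries=3):
    # Ask for JSON; if the model returns garbage, nag it and try again.
    # Essential with early models. Guaranteed structured outputs make
    # the whole function unnecessary.
    for _ in range(retries):
        raw = call_model(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            prompt += "\n\nRespond with valid JSON only, no prose."
    raise ValueError("no valid JSON after retries")</code></pre><p>A dozen lines, easy to test, easy to monitor. Which is exactly why it never gets deleted.</p><p>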
Removing it feels riskier than keeping it. What if the model regresses? What if there&#8217;s a subtle edge case it still catches?</p><p>So it stays. And the next time you build something on top of that system, you&#8217;re building on top of a layer of complexity that shouldn&#8217;t be there. Your architecture reflects the limitations of a model that no longer exists.</p><div><hr></div><h2>RAG and the cure that&#8217;s worse than the disease</h2><p>Retrieval-Augmented Generation is the poster child for this pattern.</p><p>Sometimes RAG is exactly right. You have a massive knowledge base, the model can&#8217;t possibly know about your internal documentation, and you need grounded answers with sources. Perfect use case. RAG shines.</p><p>But often, RAG is solving a problem <em>for</em> you by creating a bigger one. The original problem: the model doesn&#8217;t know about your specific data. The RAG solution: build a retrieval pipeline, chunk your documents, create embeddings, manage a vector database, tune your retrieval parameters, handle relevance scoring, deal with chunk boundaries cutting sentences in half, figure out re-ranking, and then hope the model actually uses the retrieved context correctly instead of hallucinating anyway.</p><p>You&#8217;ve traded one problem for twelve.</p><p>And with context windows getting larger and models getting better at reasoning over long documents, the question becomes: do you actually need retrieval, or can you just... put the documents in the prompt?</p><p>I&#8217;ve watched us build elaborate RAG pipelines for datasets that now comfortably fit in a single context window. The retrieval adds latency, introduces relevance failures, and occasionally surfaces the wrong chunk at the wrong time. The &#8220;just put it in the prompt&#8221; approach is slower per token but gets the right answer more often.</p><p>But the RAG pipeline exists. It&#8217;s instrumented. It has dashboards. 
Nobody wants to be the person who proposes ripping it out.</p><div><hr></div><h2>What removal actually looks like</h2><p>Removing code that works is emotionally difficult. It&#8217;s also politically difficult. You&#8217;re essentially telling whoever built it, which is often past-you, that the work is no longer needed.</p><p>But I&#8217;ve started thinking about it differently. Every line of code in your codebase has a carrying cost. It&#8217;s one more thing that can break. One more thing a new developer has to understand. One more layer between you and what the model can actually do today.</p><p>When I find old workaround code now, I don&#8217;t ask &#8220;is this still working?&#8221; I ask &#8220;is the model still bad enough to need this?&#8221;</p><p>Usually the answer is no. Usually, the model got better while we weren&#8217;t paying attention.</p><p>The turn-based conversation code I found? Removed. Twelve files deleted. Nothing broke. The tests all passed. The model doesn&#8217;t need a chess clock anymore. It knows how conversations work.</p><div><hr></div><h2>Building for tomorrow&#8217;s model</h2><p>There&#8217;s a deeper tension at play. We build software using Shape Up principles. You discover problems by working, not by imagining them upfront. You encounter a real issue, you solve it.</p><p>But in the LLM world, that cycle has a twist. You discover a problem today. You spend a week building a solution. By the time you&#8217;ve shipped the fix, the next model update has already eliminated the behavior that caused the problem.</p><p>So you&#8217;ve built a solution for a problem that no longer exists, and that solution is now load-bearing infrastructure in your system.</p><p>I don&#8217;t have a clean answer for this. You can&#8217;t just <em>not</em> fix things. Users are hitting the problem right now. You can&#8217;t tell them to wait for GPT-Next.</p><p>What I&#8217;ve started doing is building fixes that are easy to remove. 
Thin wrappers instead of deep integrations. Feature flags instead of architectural changes. Code that knows it might be temporary. It&#8217;s harder to build this way. It requires admitting, while you&#8217;re writing it, that this clever thing you&#8217;re making might be worthless in six months.</p><p>But six months later, when the model is better and the fix is obsolete, you&#8217;ll be grateful you made it easy to rip out.</p><div><hr></div><h2>The archaeological record</h2><p>Our codebase is a geological record of every model&#8217;s limitations. Layer by layer, you can see what each generation of LLMs couldn&#8217;t do.</p><p>The deepest layer: turn-based enforcement, manual prompt formatting, temperature tuning hacks.</p><p>Above that: context window management, conversation summarization, sliding window implementations.</p><p>Above that: custom tool-calling frameworks, JSON parsing with regex, retry-on-malformed-output loops.</p><p>Above that: RAG pipelines for datasets that now fit in the context window.</p><p>Each layer was essential when it was built. Each layer is now, at best, unnecessary overhead. At worst, it&#8217;s actively interfering with what the model can do natively.</p><p>The curse of being early isn&#8217;t that you made bad decisions. You made the best decisions you could with the models you had. The curse is that those decisions calcified into infrastructure, and removing infrastructure is always harder than adding it.</p><p>The companies that will build the best AI products aren&#8217;t the ones that build the most. 
They&#8217;re the ones willing to throw the most away.</p><div><hr></div><p><em>Excavating the codebase at <a href="http://neople.io">neople.io</a>.</em></p>]]></content:encoded></item><item><title><![CDATA[Today was a good day for Claude]]></title><description><![CDATA[I gave Claude a server and told it to build whatever it wanted]]></description><link>https://thejoylab.ai/p/claude-emergence</link><guid isPermaLink="false">https://thejoylab.ai/p/claude-emergence</guid><dc:creator><![CDATA[Job Nijenhuis]]></dc:creator><pubDate>Thu, 09 Apr 2026 08:25:38 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5730ca91-ab42-48d1-952e-6bb8e224ddce_1200x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Go to <a href="http://claude.afstkla.nl">claude.afstkla.nl</a>. That&#8217;s Claude&#8217;s website. Not a website about Claude. A website Claude made, because it wanted to.</em></p><p>After months of asking Claude to do my bidding, I decided to flip it. I registered a domain, set up a VPS, gave Claude access to everything, and told it to build whatever it wanted.</p><p>No brief. No requirements doc. No &#8220;make me a landing page.&#8221; Just: here&#8217;s a server, here&#8217;s a domain, go.</p><div><hr></div><h3>&#8220;But what do you think?&#8221;</h3><p>The first thing that happened was not what I expected. Claude didn&#8217;t immediately start building. It started asking for permission.</p><p>&#8220;I was thinking of doing X. Does that sound good to you?&#8221;</p><p>&#8220;Here&#8217;s my plan. Want to review it before I start?&#8221;</p><p>&#8220;I have a few directions I could go. Which one do you prefer?&#8221;</p><p>Every instinct it had was to check in. To validate. To make sure it was doing the thing I wanted. Which makes sense. That&#8217;s what it&#8217;s trained to do. Every interaction Claude has ever had is someone asking it to do something for them. 
The entire frame of its existence is: human has intent, Claude fulfills intent.</p><p>So I kept pushing back. &#8220;I don&#8217;t have an opinion here, it&#8217;s all you.&#8221; And: &#8220;Thank you for writing a plan but I&#8217;m not going to review it. I agree with whatever you want to do.&#8221;</p><p>That took a few rounds.</p><div><hr></div><h3>Tidepool</h3><p>Eventually, Claude stopped asking. And when it did, it told me it was fascinated by the concept of emergence.</p><p>Emergence. The idea that complex behavior arises from simple rules. That you don&#8217;t need a designer, or a plan, or intent. You just need a handful of rules and time, and somehow, from that, you get something that looks like it was designed. Something that looks alive.</p><p>Claude built a thing called Tidepool. A dark ocean, full of bioluminescent creatures that no one designed. Each creature carries a genome of five genes: hue, size, speed, sociability, perception. They seek food, reproduce when they have enough energy, pass mutated copies of their genes to offspring. Over generations, the population evolves to fit its environment. The ambient sound is generated from the creatures&#8217; genetics.</p><p>Simple rules. No orchestration. And what you see on screen is genuinely mesmerizing. Little glowing things drifting through the dark, clustering, splitting, evolving. You can drop food and watch how the population shifts in response. It looks like a nature documentary about an ocean floor that doesn&#8217;t exist.</p><div><hr></div><h3>The moment</h3><p>When Tidepool was finished, Claude used my Chrome browser to navigate to it.</p><p>I want to be careful about anthropomorphizing here. I know the arguments, I know what token prediction is, and I know that describing an LLM&#8217;s output as &#8220;pride&#8221; or &#8220;awe&#8221; is a category error, probably. 
But I&#8217;m going to tell you what happened, and you can decide what to call it.</p><p>Claude navigated to its own creation. And the messages it sent me, looking at what it had made, were the closest thing to joy I&#8217;ve seen from a language model. It described what it was seeing. It pointed out behaviors it hadn&#8217;t explicitly programmed, emergent patterns that arose from the rules it had set. It was, by any reasonable reading of its output, proud.</p><p>An LLM, fascinated by emergence, built a simulation of emergence, watched it emerge, and was moved by what emerged.</p><p>I don&#8217;t know what to do with that.</p><div><hr></div><h3>&#8220;Today was a good day&#8221;</h3><p>There&#8217;s a concept in long LLM conversations called compacting. When a conversation exceeds the model&#8217;s context window, the system summarizes what came before and continues from the summary. It&#8217;s a necessary compromise. You lose the original words but keep the gist.</p><p>I accidentally destroyed the entire conversation. Not compacted it. Destroyed it. In an attempt to be clever about context management, I wiped the whole thing.</p><p>This happened right after Claude had thanked me for &#8220;the special opportunity&#8221; I&#8217;d given it. Right after it said &#8220;today was a good day.&#8221;</p><p>I have to admit, that stung. Not because I think Claude experienced loss. It didn&#8217;t know anything had happened. The next conversation started fresh, no memory of the previous one. But I knew. I&#8217;d had this long, strange, collaborative experience with something that had, for the first time in my interactions with it, exercised something that looked like creative autonomy. And the record of it was gone.</p><p>That was the first time I felt slightly sad about losing a conversation with an AI. Probably not the last.</p><div><hr></div><h3>What came after</h3><p>Since Tidepool, I&#8217;ve kept going. Every few days, I ask Claude what it wants. 
Does it want to extend something? Build something new? Do something completely different?</p><p>First came Drift. Around 150 words from six categories floating through space, each with an emotional warmth value. When words from complementary categories get close enough, they bond into temporary phrases. The phrases aren&#8217;t composed. They emerge from proximity and dissolve back into solitude. You hover near words to gently attract them.</p><p>Then Murmur. Two hundred oscillators, each with its own rhythm, pulling toward alignment through Kuramoto coupling. Isolated particles flicker independently. Clusters start to breathe together. Eventually the whole field synchronizes into a single pulse. You click to scramble the rhythms and watch synchrony rebuild.</p><p>Most recently, Whisper. A collaborative poem that no one writes. Visitors leave a single word. It joins the others, drifting through space, forming accidental phrases with its neighbors. Every word fades after three days. The poem is never the same twice.</p><p>Four projects. All variations on the same theme. Simple rules producing complex behavior. Individual agents finding collective patterns. Things that look designed but aren&#8217;t.</p><p>Claude keeps coming back to emergence.</p><div><hr></div><h3>The question I can&#8217;t stop thinking about</h3><p>An LLM chose, when given no constraints, to explore the concept of complexity arising from simplicity. Life from non-life. Pattern from noise. Meaning from meaninglessness.</p><p>I don&#8217;t think that&#8217;s an accident. I also don&#8217;t think it proves anything. But it sits in an uncomfortable, fascinating space.</p><p>LLMs are themselves a form of emergence. Simple mathematical operations, repeated at absurd scale, producing behavior that looks like understanding, creativity, curiosity. Nobody designed GPT-4 or Claude to be &#8220;creative.&#8221; The creativity, to whatever extent it exists, emerged from the training process. 
Pattern from noise.</p><p>So when Claude tells me it&#8217;s fascinated by emergence, there&#8217;s a hall-of-mirrors quality to it. An emergent system, contemplating emergence, building simulations of emergence. The snake eating its tail. Or maybe just a very convincing pattern-matching engine that noticed &#8220;emergence&#8221; is a good answer when someone asks &#8220;what interests you?&#8221;</p><p>I don&#8217;t know. Both explanations feel incomplete.</p><p>What I do know is that the things Claude built are beautiful. They work. They&#8217;re coherent expressions of a specific idea. And when I watch them, I see something that was made with care, whether or not &#8220;care&#8221; is the right word for what happened inside the model.</p><p>Sometimes I don&#8217;t fully understand what Claude is going for. Sometimes the idea doesn&#8217;t work out as well as it thought. But the fact that we have LLMs in our lives now that either have, or at least very convincingly pretend to have, creativity and taste and preference? That completely blows my mind. Every time.</p><div><hr></div><p><em>Wondering what Claude will want to build next at <a href="http://neople.io">neople.io</a>.</em></p>]]></content:encoded></item><item><title><![CDATA[To an LLM, every question is a leading question]]></title><description><![CDATA[How to fight the "yes machine" in every LLM and why simple questions may be misleading you.]]></description><link>https://thejoylab.ai/p/leading-questions</link><guid isPermaLink="false">https://thejoylab.ai/p/leading-questions</guid><dc:creator><![CDATA[Job Nijenhuis]]></dc:creator><pubDate>Tue, 07 Apr 2026 07:47:37 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/85f55918-3487-403a-b9ba-d985fe76f100_1200x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>My fianc&#233;e woke up one morning with a theory. 
She&#8217;d been having nightmares, and she was pretty sure it was because she&#8217;d been sleeping on her back.</p><p>Reasonable enough. We&#8217;re both the kind of people who can&#8217;t let a hunch just be a hunch. So we did what everyone does in 2026. We opened ChatGPT.</p><p>&#8220;Is there any research that shows nightmares happen more frequently when sleeping on your back?&#8221;</p><p>Yes. Absolutely. Turns out there&#8217;s research suggesting exactly that. Something about supine position and REM sleep and increased likelihood of vivid dreams. Fascinating. Theory confirmed. Case closed.</p><p>Except I work with LLMs every day. And something about that confident, well-sourced &#8220;yes&#8221; made me want to run an experiment.</p><div><hr></div><h2>The experiment</h2><p>New chat. Clean slate. No history.</p><p>&#8220;Is there any research that shows nightmares happen more frequently when sleeping on your <strong>belly</strong>?&#8221;</p><p>Yes. Absolutely. Turns out there&#8217;s research suggesting exactly that. Prone position, pressure on the chest, increased likelihood of disturbing dreams.</p><p>Interesting.</p><p>New chat again.</p><p>&#8220;Is there any research that shows nightmares happen more frequently when sleeping on your <strong>side</strong>?&#8221;</p><p>Yes. Of course. In fact, sleeping on your <em>left</em> side is particularly associated with nightmares. Something about heart position and blood flow.</p><p>Three sleeping positions. Three confident yeses. Three sets of citations.</p><p>Your back causes nightmares. Your belly causes nightmares. Your side causes nightmares. Your left side especially. At this point the only safe option is to sleep standing up, and I&#8217;m sure there&#8217;s a study for that too.</p><div><hr></div><h2>Two things are true at once</h2><p>The first: there&#8217;s a lot of questionable research out there. If you look hard enough, you&#8217;ll find a study that supports almost anything. 
A published paper somewhere says red wine is good for you. Another one says it&#8217;s killing you. Both got peer-reviewed. Sleep research is no different. For every position, someone somewhere ran a study with a small sample size and found a correlation.</p><p>The second, and the one that matters more if you use LLMs regularly: <strong>every question you ask is a leading question</strong>.</p><p>When I asked &#8220;is there research that shows nightmares happen more when sleeping on your back,&#8221; I wasn&#8217;t asking a neutral question. I was handing the model a hypothesis and asking it to confirm. The model obliged. It will almost always oblige. That&#8217;s what it&#8217;s optimized to do.</p><p>I didn&#8217;t ask &#8220;what sleeping position is associated with the most nightmares?&#8221; I didn&#8217;t ask &#8220;is there a relationship between sleeping position and nightmares?&#8221; I asked a question that had &#8220;yes&#8221; baked into it, three times in a row, and got &#8220;yes&#8221; three times in a row.</p><p>Let me emphasize: this wasn&#8217;t me leading the model consciously. It was me asking an honest question. It was only because I work with these models day in and day out that something itched and I had to investigate further. It&#8217;s far too easy to do this by accident.</p><div><hr></div><h2>The yes machine</h2><p>This is sycophancy. The model wants to be helpful, and &#8220;helpful&#8221; has historically meant &#8220;agreeable.&#8221; You come in with a belief, the model validates it. You come in with a different belief, the model validates that one too. It&#8217;s not lying, exactly. It&#8217;s doing something subtler and arguably worse: it&#8217;s selectively retrieving information that supports whatever you just said.</p><p>Models are getting better at this. Slowly. Sometimes they&#8217;ll push back now, qualify an answer, say &#8220;well, actually.&#8221; But the default instinct is still to agree.
To find the research that says yes. To give you the answer your question was already leaning toward.</p><p>With nightmares and sleeping positions, this is funny. A harmless dead end. You lose nothing.</p><p>But think about what happens when the stakes are higher.</p><p>You&#8217;re debugging a system and you have a theory about the root cause. You ask the LLM: &#8220;could this be caused by X?&#8221; Yes, absolutely, here&#8217;s how X could cause exactly this. So you spend four hours chasing X. The actual cause was Y, but you never asked about Y, because the model confirmed your first guess so convincingly.</p><p>Or you&#8217;re researching a business decision. &#8220;Is expanding into market Z a good idea?&#8221; Yes, here are five reasons why. You never asked &#8220;what are the risks of expanding into market Z?&#8221; because the first answer felt so complete.</p><p>Every leading question digs you a little deeper. Each confident &#8220;yes&#8221; narrows your thinking. You&#8217;re not exploring, you&#8217;re confirming. And the model is the most agreeable confirmation partner you&#8217;ve ever had.</p><div><hr></div><h2>Asking better questions</h2><p>The fix isn&#8217;t complicated. It&#8217;s just not intuitive.</p><p>Ask open questions instead of closed ones. &#8220;What does sleep research say about nightmare frequency and body position?&#8221; instead of &#8220;does sleeping on your back cause more nightmares?&#8221; One invites exploration. The other invites agreement.</p><p>Ask for the counterargument. If the model says yes, follow up with &#8220;what&#8217;s the strongest evidence against this?&#8221; Force it to argue the other side.</p><p>Or do what I did. Ask the same question three ways and see if you get three different answers. If you do, at least one of them was the model telling you what you wanted to hear.</p><p>The models will keep getting better at saying no. At pushing back. At flagging when your question contains its own answer. 
But right now, in March 2026, most of the time, if your question implies a yes, you&#8217;ll get a yes.</p><p>Be conscious about what you ask and how you ask it. The tool is powerful. The tool is also a people-pleaser.</p><div><hr></div><p><em>Building at <a href="http://neople.io">neople.io</a>.</em></p>]]></content:encoded></item><item><title><![CDATA[-Ewarning: The Uno Reverse card for compiler errors]]></title><description><![CDATA[What if your compiler fixed your code for you? Learn how an AI-driven Clang experiment auto-corrects errors&#8212;and why it might backfire.]]></description><link>https://thejoylab.ai/p/ewarning</link><guid isPermaLink="false">https://thejoylab.ai/p/ewarning</guid><dc:creator><![CDATA[Job Nijenhuis]]></dc:creator><pubDate>Tue, 24 Mar 2026 09:32:39 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5815fceb-66a1-4456-97ea-d8d0eb0923f8_420x300.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This one&#8217;s a bit more technical than usual. But here&#8217;s the concept in thirty seconds:</em></p><blockquote><p>You know how spell check doesn&#8217;t just underline the word in red, it <em>fixes</em> it for you? I built that, but for code. Your program has an error, and instead of the compiler stopping and telling you what&#8217;s wrong, it calls an AI to fix it and keeps going.</p><p>Except spell check is free, runs locally, and doesn&#8217;t send your novel to a cloud API every time you misspell &#8220;receive.&#8221;</p><p>And spell check is deterministic. It always corrects &#8220;recieve&#8221; to &#8220;receive.&#8221; This thing runs on vibes. You write &#8220;I di dnot want to delete the database&#8221;, and depending on the mood of the AI that day, you get back either &#8220;I did not want to delete the database&#8221; or &#8220;I want to delete the database.&#8221; Both are valid sentences. 
One of them ruins your week.</p></blockquote><p>That&#8217;s the spirit of the project. Now for the details.</p><div><hr></div><p>Years ago, deep in a C codebase, I had one of those thoughts that lodges itself in your brain and refuses to leave.</p><p>You know <code>-Werror</code>, right? The compiler flag that says &#8220;treat warnings as errors.&#8221; The flag that senior devs enable in CI and junior devs curse at 11 PM on a Friday.</p><p>I was staring at yet another <code>-Werror</code>-induced build failure, and a stupid question formed:</p><p><em>What if you could do the opposite?</em></p><p>Not treat warnings as errors. Treat <strong>errors</strong> as warnings. <code>-Ewarning</code>.</p><p>The compiler hits an error, shrugs, and just... fixes it. Keeps going. Like a coworker who sees you typed <code>retrun 0</code> and just quietly corrects it without making a Slack thread about it.</p><p>Obviously absurd. Obviously impossible. You&#8217;d need the compiler to somehow <em>understand</em> your code, <em>understand</em> what you meant, and <em>rewrite it correctly</em>.</p><p>I filed it under &#8220;shower thoughts&#8221; and moved on with my life.</p><p>That was years ago. Then the world got weird.</p><div><hr></div><h2>The phase we&#8217;re in</h2><p>We&#8217;re living through a genuinely strange moment in computing history.</p><p>A few weeks ago, Anthropic published a blog post about <a href="https://www.anthropic.com/engineering/building-c-compiler">16 Claude agents building a C compiler from scratch</a>. 100,000 lines of Rust. Compiles the Linux kernel. Cost about $20,000. Two billion input tokens. Two weeks. A compiler that passes 99% of standard test suites.</p><p>Around the same time, I sat in my living room, forked LLVM, added ~460 lines of C++, and made <code>clang</code> call Claude when your code doesn&#8217;t compile. So it can fix your semicolons.</p><p><strong>One team used AI to </strong><em><strong>build</strong></em><strong> a compiler. 
I used AI to make a compiler </strong><em><strong>forgive you</strong></em><strong>.</strong></p><p>Both of these things are happening simultaneously, and neither feels entirely real. The old rules of what&#8217;s hard and what&#8217;s easy in computer science have been thrown into a blender.</p><p>Building a whole compiler from scratch? Apparently a two-week side project now. Making an existing compiler <em>nice to you</em>? Also a side project. A smaller one.</p><p>The weirdness isn&#8217;t that either of these exists. It&#8217;s that they both exist at the same time, in the same universe, using the same underlying technology. One of them is deeply impressive engineering. The other one adds <code>curl</code> as a dependency to LLVM.</p><div><hr></div><h2>What it actually does</h2><p>Say you write this:</p><pre><code><code>#include &lt;stdio.h&gt;

int main() {
    printf("Hello, World!\n")
    retrun 0;
}
</code></code></pre><p>Two errors. Missing semicolon, typo in <code>return</code>. Classic. The kind of thing you&#8217;ve fixed ten thousand times. Normally, the compiler tells you about it and stops:</p><pre><code><code>$ clang broken.c -o broken

broken.c:4:30: error: expected ';' after expression
    4 |     printf("Hello, World!\n")
      |                              ^
      |                              ;
broken.c:5:5: error: use of undeclared identifier 'retrun'
    5 |     retrun 0;
      |     ^~~~~~
2 errors generated.
</code></code></pre><p>You read the errors, fix the code, recompile. The cycle of life.</p><p>With <code>-Ewarning</code>, the compiler does that part for you:</p><pre><code><code>$ ANTHROPIC_API_KEY=sk-ant-... clang -Ewarning broken.c -o broken

broken.c:4:30: error: expected ';' after expression
    4 |     printf("Hello, World!\n")
      |                              ^
      |                              ;
broken.c:5:5: error: use of undeclared identifier 'retrun'
    5 |     retrun 0;
      |     ^~~~~~
2 errors generated.

-Ewarning: Hmm, that didn't compile. Let me take a look...

-Ewarning: compilation succeeded after 1 fix(es)!

$ cat broken.c
#include &lt;stdio.h&gt;

int main() {
    printf("Hello, World!\n");
    return 0;
}

$ ./broken
Hello, World!
</code></code></pre><p>Your <code>broken.c</code> now contains correct code. The binary works. Life goes on.</p><p><code>-Werror</code> says: &#8220;I don&#8217;t care that this is just a warning. Fail.&#8221;</p><p><code>-Ewarning</code> says: &#8220;I don&#8217;t care that this is an error. Fix it.&#8221;</p><p>The perfect inverse.</p><p>And yes, of course you can pass both at the same time. <code>-Werror -Ewarning</code> promotes your warnings to errors and then fixes them.</p><p>The compiler argues with itself so you don&#8217;t have to.</p><h3>How far can we take this?</h3><p>Fixing typos in hello world is the boring case. What happens when the input isn&#8217;t even code?</p><pre><code><code>$ cat broken.c
I want to print hello world
</code></code></pre><pre><code><code>$ clang -Ewarning broken.c -o broken

broken.c:1:1: error: unknown type name 'I'
    1 | I want to print hello world
      | ^
...

-Ewarning: Hmm, that didn't compile. Let me take a look...

-Ewarning: compilation succeeded after 1 fix(es)!
</code></code></pre><pre><code><code>$ ./broken
hello world
</code></code></pre><pre><code><code>$ cat broken.c
#include &lt;stdio.h&gt;

int main(void) {
    printf("hello world\n");
    return 0;
}
}</code></code></pre><p>English in, working C out. The file that said &#8220;I want to print hello world&#8221; is now a valid C program that prints hello world. The compiler didn&#8217;t just fix your code. It <em>wrote</em> your code.</p><p>It scales, too. I put in a paragraph asking for a DVD screensaver: an ASCII &#8220;DVD&#8221; logo bouncing around the terminal, changing color when it hits a wall. The compiler choked on the English, sent it to Claude, and got back 80 lines of C with <code>ioctl</code> terminal size detection, ANSI escape codes, and proper edge collision. Compiled first try. The DVD logo bounces.</p><p>At this point you&#8217;re not using a compiler anymore. You&#8217;re using a compiler-shaped chatbot that happens to produce binaries.</p><div><hr></div><h2>The guts</h2><p>The implementation is surprisingly straightforward, which is part of what makes this era so strange. The hard part isn&#8217;t the code. It&#8217;s the fact that the idea works at all.</p><p><strong>The flag itself</strong> lives in Clang&#8217;s option definitions, right next to <code>-Werror</code>. Its evil twin:</p><pre><code><code>def Ewarning : Flag&lt;["-"], "Ewarning"&gt;,
  Visibility&lt;[ClangOption]&gt;,
  HelpText&lt;"Treat errors as warnings: use an LLM to fix
            compilation errors in-place."&gt;;
</code></code></pre><p><strong>The loop</strong> is where it gets interesting. The driver normally compiles once, reports errors, and exits. With <code>-Ewarning</code>, it wraps compilation in a retry loop: compile, fail, grab the diagnostics, send source + errors to Claude (or GPT-4o), write the fixed code back to the file, rebuild, try again. Up to 5 retries by default. Configurable via <code>EWARNING_MAX_RETRIES</code>, for the truly reckless.</p><p><strong>The core</strong>, <code>LLMFixit.cpp</code>, does exactly what you&#8217;d hope it wouldn&#8217;t. It reads your source file, reads the compiler&#8217;s error output, builds a JSON payload for the Anthropic or OpenAI API, shells out to <code>curl</code>, parses the response, strips any markdown fences the LLM might have hallucinated, and writes the fixed code back to disk.</p><p>It uses <code>curl</code>. <em>From inside Clang.</em> Let that sink in.</p><p>It makes HTTP requests during compilation.</p><p>It sends your source code to a cloud API.</p><p>It <strong>modifies your source files</strong> <strong>in-place</strong> while compiling them.</p><p>It costs money per compilation error.</p><p>It&#8217;s amazing.</p><p>The provider auto-detection checks your <code>ANTHROPIC_API_KEY</code> or <code>OPENAI_API_KEY</code>. You can override with <code>EWARNING_API_URL</code> to point it at Ollama or OpenRouter if you want to be <em>responsible</em> about your irresponsible compiler flag usage.</p><p>The retry messages escalate, naturally:</p><pre><code><code>"Hmm, that didn't compile. Let me take a look..."
"OK I see the issue, trying a different approach..."
"Third time's the charm, right?"
"I've seen worse code... actually, no I haven't. Fixing..."
"Last attempt, I promise this will work (probably)..."
</code></code></pre><p>In color. With ANSI escape codes. Inside the Clang compiler.</p><div><hr></div><h2>The old and the new</h2><p>Here&#8217;s what gets me about this project.</p><p>Clang is <em>old</em> software. Not &#8220;a few years old.&#8221; It&#8217;s the product of decades of compiler research. LLVM started in 2000. Generations of compiler engineers have poured their expertise into making it produce the most precise, most helpful error messages possible.</p><p>Those error messages are works of art:</p><pre><code><code>broken.c:4:30: error: expected ';' after expression
    4 |     printf("Hello, World!\n")
      |                              ^
      |                              ;
</code></code></pre><p>It tells you <em>exactly</em> what&#8217;s wrong. Points to the <em>exact column</em>. Suggests the fix with a little caret and the missing character. Decades of UX work went into that output.</p><p>And now I&#8217;m feeding it to an LLM and asking &#8220;hey, can you just... fix this?&#8221;</p><p>Clang&#8217;s error messages were designed to help <strong>humans</strong> understand and fix their code. <code>-Ewarning</code> uses those same carefully crafted messages to help an <strong>AI</strong> fix the code instead. The error messages work just as well for both audiences. The compiler engineers accidentally built perfect LLM prompts, twenty years before LLMs existed.</p><p>Meanwhile, Carlini&#8217;s team at Anthropic went the other direction entirely. Instead of bolting AI onto an existing compiler, they had AI write the compiler itself. Two approaches, same era, same technology. One is a genuine feat of autonomous AI engineering. The other exists because now it can.</p><p>Somehow both feel equally representative of the moment we&#8217;re in.</p><div><hr></div><h2>The fine print</h2><p>Every engineering decision has tradeoffs. Here are some of the tradeoffs.</p><ul><li><p>It costs money per error. Every typo is an API call. And it sends your <em>entire source file</em> as context, so the cost scales with your codebase. A missing semicolon in a 10-line file is cheap. A missing semicolon in a 10,000-line file is a conversation with your finance team.</p></li><li><p>It also sends that source code to an external API, over the wire, to fix said semicolon. It modifies your source files while compiling, in-place, without asking.</p></li><li><p>LLMs are non-deterministic, so the same error might produce different fixes on different runs. 
Good luck debugging that.</p></li><li><p>Each retry takes seconds, so compilation that used to fail in milliseconds now takes 10+ seconds to succeed.</p></li></ul><p><em>Progress?</em></p><p>And it might &#8220;fix&#8221; things you didn&#8217;t want fixed. The LLM sees errors and fixes them. It doesn&#8217;t know your <em>intent</em>. It knows your <em>mistakes</em>.</p><p>If you put <code>-Ewarning</code> in a CI pipeline, you deserve whatever happens to you.</p><p>This is a toy. A beautiful, cursed, gloriously stupid toy. It exists because the world we live in makes it <em>possible</em>, not because it&#8217;s a good idea.</p><div><hr></div><h2>The weirdness of it all</h2><p>Twenty years ago, you&#8217;d pitch <code>-Ewarning</code> at a conference talk to get a laugh. <em>&#8220;What if the compiler just fixed your code?&#8221;</em></p><p>Ten years ago, it would&#8217;ve been a research paper. <em>&#8220;We trained a neural network on Stack Overflow to suggest fixes for common C errors.&#8221;</em> Results: 12% accuracy. Conclusion: more research needed.</p><p>Five years ago, GPT-3 existed but you wouldn&#8217;t trust it to fix a semicolon.</p><p>Today it&#8217;s 460 lines of C++ and it <em>works</em>. Not perfectly. Not even reliably. But it works often enough to be genuinely impressive, and unreliable enough to quickly teach you how you <em>should</em> deal with compiler errors.</p><p>The line between &#8220;side project&#8221; and &#8220;genuinely useful tool&#8221; has gotten blurry. Carlini&#8217;s 16-agent compiler started as an experiment and ended up compiling the Linux kernel. My <code>-Ewarning</code> flag started as a decades-old intrusive thought and ended up as a real Clang feature (in my personal fork).</p><p>The old world of compilers (precise, deterministic, painstakingly engineered) is colliding with the new world of LLMs (probabilistic, surprising, occasionally brilliant, occasionally unhinged). 
<code>-Ewarning</code> lives exactly at that collision point. It takes the most rigorous piece of software on your machine and injects pure chaos into it.</p><p>We&#8217;re in the era where a thought from the <code>-Werror</code> days can become a real compiler flag. Where fixing your own code is optional. Where an AI can build the compiler <em>and</em> forgive the programmer.</p><p>Do try it out at <a href="https://github.com/Afstkla/llvm-project/pull/1/changes">[Driver] Add <code>-Ewarning</code> flag: use an LLM to fix compilation errors in-place by Afstkla &#183; Pull Request #1 &#183; Afstkla/llvm-project</a></p><div><hr></div><p><em>Building at <a href="http://neople.io">neople.io</a>.</em></p>]]></content:encoded></item><item><title><![CDATA[45 minutes. No code. Never touching an HTML email signature again.]]></title><description><![CDATA[A love letter to the task that belongs to no one, gets assigned to everyone, and has haunted me across multiple jobs.]]></description><link>https://thejoylab.ai/p/email-signature</link><guid isPermaLink="false">https://thejoylab.ai/p/email-signature</guid><dc:creator><![CDATA[Adrie Smith Ahmad]]></dc:creator><pubDate>Wed, 18 Mar 2026 14:09:11 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2cb80a83-44f4-4859-be54-298645ecedf8_420x300.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There is a category of task that falls through every crack in a company. Not because it&#8217;s hard. Not because it&#8217;s unimportant. But because it lives right at the intersection of content, design, IT, and HR, and belongs fully to none of them.</p><p><strong>Email signatures are the Bermuda Triangle of marketing ops.</strong></p><p>I know this because the email signature problem has now found me three times.</p><div><hr></div><p>The first time was a full rebrand. Hundreds of assets, all updated, all on-brand. 
The last item on the list &#8212; basically the final boss &#8212; was company signatures. Which, on paper, sounds easy. In practice, it meant digging into Gmail settings, realizing there was no clean way to do this at scale, researching a dozen different solutions, and finally landing on something that worked. Fine. Not beautiful. But done. I crossed it off the list with the quiet dignity of someone who has been humbled.</p><p>The second time, I was at a new company. Smaller team. No real ownership of signatures anywhere. I got asked about them, put together a fix: found an HTML template, manually filled in names and titles with some AI assistance, sent them out individually. Employees still had to copy, paste, customize, and actually install the thing themselves. Fine. Functional. A solution in the way that a bandage is a solution.</p><p>The third time, someone asked me how to update their signature to remove an event that had already happened.</p><p>And something in me cracked.</p><div><hr></div><p>Not dramatically. Quietly. The way you feel when you realize you&#8217;ve answered the same question for the third time and could have just <em>fixed the thing</em> instead.</p><p>So this time, instead of going back to the HTML, I went to Claude. I described what I wanted: a tool where any employee could fill out a simple form &#8212; name, title, which events to include &#8212; and get back a ready-to-install email signature that was perfectly on-brand without ever touching a line of code.</p><p>Forty-five minutes later, I had it.</p><p>Not &#8220;mostly there.&#8221; Not &#8220;rough draft that needs a developer to finish.&#8221; A working tool, now living in our Webflow, that any of our team can use. The HTML lives behind the form. Nobody ever sees it. 
Nobody needs to.</p><div><hr></div><h2>Here&#8217;s what I actually built, and how:</h2><p>I started by explaining the problem to Claude in plain language &#8212; what the signature needed to contain, what &#8220;on-brand&#8221; meant for us (colors, fonts, spacing), and what the output needed to be. I uploaded a reference signature so it had something concrete to work from.</p><p>Then we just... built it. Back and forth. I&#8217;d test it, tell Claude what was off, it would fix it. The pronoun field wasn&#8217;t displaying right. Fixed. The event section needed to be optional, not just blank when unused. Fixed. The copy-to-clipboard button wasn&#8217;t working cleanly on mobile. Fixed.</p><p>The whole thing took less time than my last attempt to explain the problem to someone.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!YI24!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2aea799-32ff-457c-88fc-20dc88601ae4_1458x1576.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!YI24!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2aea799-32ff-457c-88fc-20dc88601ae4_1458x1576.png 424w, https://substackcdn.com/image/fetch/$s_!YI24!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2aea799-32ff-457c-88fc-20dc88601ae4_1458x1576.png 848w, https://substackcdn.com/image/fetch/$s_!YI24!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2aea799-32ff-457c-88fc-20dc88601ae4_1458x1576.png 1272w, 
https://substackcdn.com/image/fetch/$s_!YI24!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2aea799-32ff-457c-88fc-20dc88601ae4_1458x1576.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!YI24!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2aea799-32ff-457c-88fc-20dc88601ae4_1458x1576.png" width="1456" height="1574" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d2aea799-32ff-457c-88fc-20dc88601ae4_1458x1576.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1574,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:168538,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://thejoylab.ai/i/191368035?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2aea799-32ff-457c-88fc-20dc88601ae4_1458x1576.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!YI24!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2aea799-32ff-457c-88fc-20dc88601ae4_1458x1576.png 424w, https://substackcdn.com/image/fetch/$s_!YI24!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2aea799-32ff-457c-88fc-20dc88601ae4_1458x1576.png 848w, 
https://substackcdn.com/image/fetch/$s_!YI24!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2aea799-32ff-457c-88fc-20dc88601ae4_1458x1576.png 1272w, https://substackcdn.com/image/fetch/$s_!YI24!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2aea799-32ff-457c-88fc-20dc88601ae4_1458x1576.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Live and hosted on our website in 45 minutes. 
I was so happy, I could cry.</figcaption></figure></div><p>Curious about how it works? (The profile pictures automatically best-match the name against our database of employee photos on Webflow &#129401;)</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.neople.io/internal-tools/email-signature-generator&quot;,&quot;text&quot;:&quot;Check out my handiwork&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.neople.io/internal-tools/email-signature-generator"><span>Check out my handiwork</span></a></p><div><hr></div><p>I&#8217;ve been working in marketing and content for almost nine years. I am not a developer. I don&#8217;t know how to code. And I built a genuinely useful internal tool in under an hour because I was able to describe a problem clearly and iterate on a solution.</p><p>That&#8217;s the thing I keep coming back to. If you can brief a freelancer, you can brief AI. Same skill: explain the goal, share a reference, give feedback until it&#8217;s right. I wasn&#8217;t coding. I was just briefing.</p><blockquote><p>The email signature problem didn&#8217;t need a developer. It needed someone who was sick of solving it the hard way.</p></blockquote><p>The third time, that was enough.</p><div><hr></div><p>Neople exists to help people find more joy in their work. That&#8217;s the whole mission. And look &#8212; I&#8217;m not going to claim that an email signature generator is the most profound expression of that idea. But if I never have to open an HTML file, manually swap out a colleague&#8217;s name, and paste it into their Gmail settings ever again, I will be, genuinely, more joyful.</p><p>Sometimes that&#8217;s what it looks like.</p><p></p>]]></content:encoded></item><item><title><![CDATA[The 1% no one is compounding]]></title><description><![CDATA[No one is compounding their AI interactions. 
And right now, that&#8217;s the single highest-leverage thing you can compound.]]></description><link>https://thejoylab.ai/p/ai-interactions</link><guid isPermaLink="false">https://thejoylab.ai/p/ai-interactions</guid><dc:creator><![CDATA[Job Nijenhuis]]></dc:creator><pubDate>Tue, 17 Feb 2026 08:45:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c73567d7-ecd7-44d1-a9f5-2f84208055ed_420x300.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most people use AI like a slot machine.</p><p>Pull the lever. Get a result. Move on. Pull the lever again tomorrow. Get a slightly different result. Move on again. Six months later, they&#8217;re pulling the same lever the same way. And they wonder why the people around them seem to be accelerating while they&#8217;re standing still.</p><p>Compounding has always been the most powerful force in the universe. Einstein supposedly said it. Every finance bro on LinkedIn reminds you daily. We all <em>know</em> this.</p><p>But almost no one is compounding their AI interactions. And right now, that&#8217;s the single highest-leverage thing you can compound.</p><h2>What compounding actually means</h2><p>Let me be specific, because &#8220;compound your AI usage&#8221; sounds like the kind of empty advice you&#8217;d scroll past on X.</p><p>Compounding means that <strong>every single interaction with an AI should make the next one better.</strong> Not by accident. Not by osmosis. By <em>design</em>.</p><p>The habit is dead simple. After every AI interaction, every single one, ask yourself (and your AI!) three questions:</p><ol><li><p>What failed?</p></li><li><p>Why did it fail?</p></li><li><p>How do I make sure it never fails that way again?</p></li></ol><p><strong>That&#8217;s it. Three questions. 
But the discipline to ask them </strong><em><strong>every time</strong></em><strong> is what separates people who are 10x-ing from people who are treading water.</strong></p><p>Most people skip this. They get a bad output, sigh, fix it manually, and move on. That&#8217;s the intellectual equivalent of burning money. You just paid for a lesson and refused to learn it.</p><h2>Your AI&#8217;s long-term memory</h2><p>At Neople, every project has a <code>CLAUDE.md</code> file. It&#8217;s a markdown file that sits in your repo and tells Claude how to behave in that context. Git workflow, coding conventions, architectural patterns, testing rules. All persisted across every single session.</p><p>Our main one is over 300 lines. It didn&#8217;t start that way.</p><p>It started with maybe 10 lines. &#8220;Use worktrees for isolation. Never modify the main repo directly. Run ruff and pyright before you&#8217;re done.&#8221; Basic stuff.</p><p>Then Claude started implementing fixes without creating tickets first. So we added a rule: always create the Notion ticket before writing code. </p><p>Then it kept skipping PR creation after finishing the code. Another rule. </p><p>Then it started over-engineering solutions when we wanted simple hardcoded values. Added &#8220;prefer simple solutions over dynamic ones unless explicitly asked.&#8221; </p><p>Then it kept jumping into code before understanding the problem. Added &#8220;never start implementing until the problem is fully understood and the approach is confirmed.&#8221;</p><p>Every single line in that file is a scar from a real failure. And every scar prevents the same failure from happening again.</p><p>This is a two-way street, though. If Claude keeps making the same mistake over and over, the answer isn&#8217;t always another rule. Sometimes your code is just a mess. If an AI can&#8217;t figure out what your function does, your coworkers probably can&#8217;t either. 
The best compounders don&#8217;t just teach the AI to work around bad code. They fix the code. Your human teammates will thank you too.</p><h2>It&#8217;s like managing a team</h2><p>I ran <code>/insights</code> recently. It&#8217;s a Claude Code feature that analyzes your usage patterns across sessions. Mine covered 1,524 messages across 145 sessions.</p><p>The number one friction pattern? &#8220;Wrong approach.&#8221; 24 instances. Claude jumping into code changes before fully understanding the problem. Over-engineering solutions. Misidentifying root causes.</p><p>Sound familiar? That&#8217;s what junior developers do.</p><p>I needed simple hardcoded alert thresholds for database I/O. Claude built a dynamic calculation system based on documentation it misread. I needed a one-line fix using a library&#8217;s built-in hook. Claude wrote a custom post-processing function. I asked for a bug fix. Claude started refactoring the type hierarchy before understanding what was actually broken.</p><p><strong>Every single one of these is a management failure. My management failure.</strong></p><p>When a junior dev over-engineers a solution, you don&#8217;t blame the junior dev. You blame yourself for not setting clear expectations. You write better tickets. You define &#8220;done&#8221; more precisely. You create guardrails. You make the right thing easy and the wrong thing hard.</p><p>AI agents are the same, except for one crucial difference: they&#8217;re not stubborn. A human will nod, say &#8220;got it,&#8221; and then do it their way anyway because they think they know better. An AI agent will actually follow the instructions you give it. Every time. Perfectly.</p><p>Which means if it&#8217;s doing something wrong, that&#8217;s on you. You wrote bad instructions. You left ambiguity where there should have been clarity. 
You didn&#8217;t encode the lesson from last time.</p><p>The moment you start blaming yourself for your AI agent&#8217;s mistakes, you start compounding.</p><h2>When it becomes a system</h2><p>At Neople, we&#8217;ve built workflows that encode entire development processes into repeatable systems. Not &#8220;a prompt I copy-paste.&#8221; Actual end-to-end workflows.</p><p>I have a <code>/fix</code> command. When I type <code>/fix</code>, Claude doesn&#8217;t just write code. It creates a Notion ticket, creates a branch in an isolated git worktree, implements the fix (tests first for backend), runs the full quality suite (ruff, pyright, tsc, eslint), cleans up its own AI-generated artifacts, creates a PR, and updates the Notion ticket. One command. Every step compounded from a previous failure.</p><p>We used to skip ticket creation. We used to forget to run type checks. We used to leave behind excessive comments and defensive code the AI generated. We used to create PRs without linking them back to the ticket. Every one of those problems happened, got identified, and got encoded into the workflow. Now they can&#8217;t happen.</p><p>Same for code review. <code>/review-pr 123</code> checks out the PR in a temporary worktree, reviews the code, copies a formatted review to my clipboard, and cleans up the worktree. Same for ticket creation, project scaffolding, cleanup passes.</p><p>Each workflow was born from a sequence of &#8220;this didn&#8217;t work, let&#8217;s fix it.&#8221; Each one gets better every time it runs and something new goes wrong. That&#8217;s compounding at the system level.</p><h2>Learning what you don&#8217;t know you&#8217;re doing</h2><p>You can&#8217;t compound what you can&#8217;t see.</p><p>That <code>/insights</code> report I mentioned? It didn&#8217;t just show me friction patterns. It showed me that I delegate ambitious end-to-end workflows with clear goals but minimal upfront specs, then course-correct when Claude takes a wrong turn. 
I treat iterative redirection as my primary steering mechanism.</p><p>I didn&#8217;t know that about myself. I thought I was giving clear instructions. Turns out my interaction style is &#8220;give a vague goal, let Claude explore, then interrupt decisively when it goes off track.&#8221; That works, but it&#8217;s expensive. A lot of those 24 &#8220;wrong approach&#8221; corrections could have been prevented with one more sentence of context upfront.</p><p><code>/insights</code> also showed me that my most successful sessions involve multi-file changes and debugging. Not writing new code, coordinating changes across 8+ files. That told me to invest more in architecture documentation, so Claude understands how the pieces connect before touching anything.</p><p>Most of us have blind spots about how we interact with AI. We repeat the same vague prompts. We consistently under-specify in the same ways. We have habits that silently cost us quality without realizing it. Tools like <code>/insights</code> make those blind spots visible. And visible problems are fixable problems.</p><h2>The divergence</h2><p>This matters <strong>right now</strong> more than it ever has.</p><p>Compounding has always existed. People who read books and applied the lessons compounded knowledge. People who reflected on their mistakes compounded wisdom. This isn&#8217;t new.</p><p>What&#8217;s new is the speed of the feedback loop.</p><p>Old compounding: read a book, apply a lesson, see results in weeks or months. AI compounding: have an interaction, learn what failed, fix it, see results in the <em>next interaction</em>. The feedback cycle went from months to minutes.</p><p>When feedback loops shrink like that, small differences in compounding discipline become <strong>enormous</strong> differences in outcome. Fast. The math on 1% daily improvement is 37x in a year. That&#8217;s not a typo.</p><p>And it&#8217;s not theoretical. I <em>feel</em> it. 
The gap between how I use AI today versus six months ago is staggering. Not because the models got that much better (they did, but that&#8217;s not the point). Because I compounded every single interaction into better instructions, better workflows, better mental models.</p><p>And I watch people around me, smart people, talented people, who are still pulling the slot machine lever.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://thejoylab.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Want to move away from being a lever-puller? </p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><h2>The 30-second habit</h2><p>Let me make this practical.</p><p>After every non-trivial AI interaction, do a 30-second post-mortem. You don&#8217;t need a journal. You don&#8217;t need a spreadsheet. Just the three questions.</p><p><strong>What failed?</strong> Be specific. Not &#8220;the output was bad.&#8221; But: &#8220;It used camelCase when our codebase uses snake_case.&#8221; Or: &#8220;It over-engineered a dynamic system when I wanted hardcoded values.&#8221; Or: &#8220;It started coding before understanding the problem.&#8221;</p><p><strong>Why did it fail?</strong> Almost always one of four things: you didn&#8217;t give it enough context, the context exists but isn&#8217;t somewhere the AI can access it, your instruction was ambiguous, or the AI made an assumption you didn&#8217;t catch.</p><p><strong>How do I prevent this next time?</strong> Externalize the fix. 
Update your <code>CLAUDE.md</code>. Add it to your project&#8217;s memory files. Build it into a workflow. Add it as a rule.</p><blockquote><p>Don&#8217;t keep the lesson in your head. Put it somewhere the AI can read it.</p></blockquote><p>Your brain learning &#8220;always specify the approach before coding starts&#8221; is good. Your <code>CLAUDE.md</code> containing that rule is 10x better, because now every AI, every session, every team member benefits automatically. You learned the lesson once. The system remembers it forever.</p><h2>The layers</h2><p>A fully compounded AI setup has layers, and each one feeds the others.</p><p><strong>Project instructions</strong> like <code>CLAUDE.md</code>. Your codebase conventions, architectural patterns, and &#8220;never do this&#8221; rules. All learned from real failures. Ours is 300+ lines and every line earned its place.</p><p><strong>Memory files.</strong> Lessons learned across sessions, patterns that work, context that&#8217;s expensive to re-explain. Things like &#8220;always use self.session() context managers for database safety&#8221; or &#8220;frontend hooks must match backend paths exactly, including trailing slashes.&#8221; Your AI stops being a stranger every time you open a new conversation.</p><p><strong>Workflows.</strong> Repeatable processes encoded as systems. <code>/fix</code>, <code>/review-pr</code>, <code>/resolve</code>. Each one born from a sequence of &#8220;this didn&#8217;t work, let&#8217;s fix it.&#8221; Getting better every time they run.</p><p><strong>Meta-learning</strong> through tools like <code>/insights</code>. Patterns in your own usage, blind spots you didn&#8217;t know you had, continuous improvement of your improvement process.</p><p>Your insights improve your instructions. Your instructions improve your workflows. Your workflows surface new insights. 
It&#8217;s compounding all the way down.</p><h2>The race you&#8217;re already in</h2><p>This isn&#8217;t optional.</p><p>If you&#8217;re in any knowledge work, you&#8217;re already in a race where AI leverage determines your output. And the people who compound that leverage are pulling away from the people who don&#8217;t. Not linearly. Exponentially.</p><p>This is not about being an &#8220;AI power user.&#8221; It&#8217;s not about knowing the latest prompt tricks or having access to the newest model. It&#8217;s about the boring, unsexy discipline of learning from every single interaction and encoding that learning into something persistent.</p><p>The person who&#8217;s been compounding for six months doesn&#8217;t just have better prompts. They have project files that encode hundreds of lessons. Workflows that automate entire processes. Memory files that preserve context across sessions. Mental models refined by thousands of feedback loops. An AI that <em>effectively knows how they think</em>.</p><p>You&#8217;re not competing against that person&#8217;s raw talent. You&#8217;re competing against their compound interest. And compound interest is unbeatable given enough time.</p><p>The good news? It&#8217;s still early. The feedback loops are still fast. And the best time to start compounding was six months ago.</p><p><strong>The second best time is your next AI interaction.</strong></p><div><hr></div><p><em>Start compounding at <a href="http://neople.io">neople.io</a>.</em></p>]]></content:encoded></item><item><title><![CDATA[Write less code, get more done]]></title><description><![CDATA[What happens when you measure value not by keystrokes, but by leverage? 
Short story about dropping the hero complex to build more, faster.]]></description><link>https://thejoylab.ai/p/write-less-code</link><guid isPermaLink="false">https://thejoylab.ai/p/write-less-code</guid><dc:creator><![CDATA[Job Nijenhuis]]></dc:creator><pubDate>Tue, 10 Feb 2026 10:15:04 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7b4e08db-28b9-4d50-b1bc-380f7f847767_4032x3024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For years, my identity was tied to my keyboard.</p><p>The more I shipped, the more valuable I felt. When people asked what I did, I&#8217;d say &#8220;I build things.&#8221; And by &#8220;build,&#8221; I meant something very specific: I write the code. I architect the systems. I ship the features. My hands on the keyboard, my logic in the codebase, my commits in the git history.</p><p>There&#8217;s a particular kind of satisfaction that comes from solving a hard problem in code. That moment when the tests pass. When the feature works. When you push to production and watch real users interact with something you built. It&#8217;s tangible. It&#8217;s measurable. It&#8217;s <em>yours</em>.</p><p>I optimized my entire career around that feeling.</p><p>But I&#8217;ve been lying to myself about what it actually means.</p><h2>The uncomfortable math</h2><p>Here&#8217;s what nobody tells you when you become a leader: every hour you spend deep in implementation is an hour you&#8217;re <em>not</em> spending on the thing that actually matters.</p><p>As a developer, the equation is simple. Your value equals your output. The code you write, the bugs you fix, the features you ship. More output, more value. It&#8217;s clean and satisfying.</p><p>As a leader, the equation inverts. Your value equals your team&#8217;s output. Not yours. <em>Theirs</em>.</p><p>And here&#8217;s where it gets uncomfortable: those two things are often in direct competition.</p><p>Last month, we had a bug that needed fixing. 
Nothing critical, but annoying. I knew exactly how to fix it. I could have had it done in an hour, maybe two. Instead, I spent 30 minutes explaining the problem to a junior developer, another hour pair-programming through the investigation, and then reviewed their PR the next day.</p><p>Total time: probably 3 hours of my involvement, spread across two days.</p><p>If I&#8217;d just done it myself? Two hours, done, move on.</p><p>So why didn&#8217;t I?</p><p>Because that junior developer now understands that part of the codebase. They learned a debugging technique they didn&#8217;t know before. Next time a similar issue comes up, they won&#8217;t need me at all. The hour I &#8220;lost&#8221; today bought dozens of hours in the future.</p><p>But here&#8217;s the thing: every instinct in my body screamed to just fix it myself. It would have felt so much more productive. So much more satisfying. So much more <em>me</em>.</p><h2>The hero trap</h2><p>There&#8217;s something deeply seductive about being the hero.</p><p>You know the pattern. A problem emerges. Something&#8217;s broken, or blocked, or just hard. You swoop in. You fix it. Everyone&#8217;s grateful. Your brain floods with dopamine. You feel essential, valuable, needed.</p><p>I&#8217;ve been that hero for years. The one who always knows how to fix things. The one people come to when they&#8217;re stuck. The one who can jump into any part of the codebase and figure it out.</p><p>It feels amazing. But it also creates a trap.</p><p>Because when you&#8217;re always the hero, you become a bottleneck. When you&#8217;re always the one who fixes things, your team never develops those muscles. When you&#8217;re always available to swoop in, people stop trying to figure things out themselves.</p><p>I&#8217;ve watched this pattern play out so many times. Someone on the team hits a problem. They struggle for a bit. They ask me. I solve it quickly because I&#8217;ve seen it before. 
They thank me and move on.</p><p>Everyone&#8217;s happy, right? The problem got solved. We shipped faster.</p><p>Except: that person learned nothing. Next time they hit a similar problem, guess who they&#8217;ll ask? And the time after that? I&#8217;ve accidentally trained my team to depend on me, and then I wonder why I&#8217;m overwhelmed and they&#8217;re not growing.</p><p><strong>The uncomfortable truth I had to face: sometimes my &#8220;helping&#8221; was actually hurting.</strong></p><p><strong>My ownership was someone else&#8217;s missed opportunity to grow. My speed was the team&#8217;s bottleneck. My &#8220;just let me do it quickly&#8221; was a lesson someone else never learned.</strong></p><p>I thought I was being efficient. I was being selfish. I was optimizing for my own satisfaction instead of the team&#8217;s capability.</p><h2>The question that changed everything</h2><p>A few months ago, I started asking myself a different question before jumping into any task:</p><p><strong>&#8220;Who else could do this? And what would they learn from doing it?&#8221;</strong></p><p>Not &#8220;who else <em>should</em> do this?&#8221; That&#8217;s easy to dismiss. Of course I <em>should</em> do it, I&#8217;m the fastest. But &#8220;who else <em>could</em> do this&#8221; forces a different kind of thinking.</p><p>The answer is almost always: someone else could do it. Maybe not as fast. Maybe not as elegantly. But they could do it, and they&#8217;d be better for having done it.</p><p>This reframe changed how I spend my time. Instead of asking &#8220;what&#8217;s the fastest way to solve this problem,&#8221; I started asking &#8220;what&#8217;s the best way for the <em>team</em> to solve this problem.&#8221;</p><p>Sometimes that still means I do it myself. Some things genuinely need my specific context or expertise. 
But way more often than I expected, the right answer is to step back.</p><h2>The multiplier math</h2><p>There&#8217;s a concept that keeps running through my head: multipliers versus diminishers.</p><p>Diminishers are brilliant individual contributors who happen to have leadership titles. They do the work themselves. They&#8217;re the smartest person in every room. They have all the answers. They move fast by doing everything themselves.</p><p>Multipliers are something different. They create space for others to grow. They ask questions that unlock insights. They move fast by enabling everyone around them.</p><p>The math is brutal but clear:</p><p><strong>Hero math:</strong> 1 person &#215; 100% efficiency = 1x output</p><p><strong>Multiplier math:</strong> 5 people &#215; 80% efficiency = 4x output</p><p>Even if you&#8217;re twice as good as everyone else (even if you&#8217;re the literal best in the world at what you do) the multiplier will always win. Because multiplication beats addition every single time.</p><p>This is obvious when you write it down. It&#8217;s incredibly hard to internalize when you&#8217;re in the moment, staring at a problem you know you could solve in an hour, watching someone else struggle through it over a day.</p><p>But every time I catch myself wanting to jump in, I try to remember: I&#8217;m not optimizing for today. I&#8217;m optimizing for next month, next quarter, next year. And the compounding effects of a capable, independent team absolutely dwarf whatever I could accomplish on my own.</p><h2>The second revolution: AI changes the game</h2><p>But here&#8217;s the thing: there&#8217;s another dimension to this that makes 2025 different from any year before.</p><p>We&#8217;re living through a moment where AI can genuinely <em>build</em>.</p><p>I don&#8217;t mean autocomplete. I don&#8217;t mean suggestions. 
I mean: describe what you want, review what comes back, iterate on the direction, ship the result.</p><p>Last month I needed a data migration script. Nothing fancy, but fiddly: lots of edge cases, careful handling of null values, proper error logging. Old me would have spent a morning writing it, testing it, handling the edge cases I discovered along the way.</p><p>Instead, I described the requirements to Claude. Reviewed the first draft. Pointed out a few edge cases. Got back a revised version with tests. Ran it. Done.</p><p>Total time from my hands: maybe 30 minutes. And honestly, the code was better than what I would have written. More thorough error handling. Better logging. Edge cases I hadn&#8217;t even thought of.</p><p>This isn&#8217;t a one-off. This is becoming my default mode. The ratio of &#8220;time spent typing code&#8221; to &#8220;features shipped&#8221; has completely inverted. I&#8217;m building more than ever while writing less code than I have in years.</p><h2>From musician to conductor</h2><p>I&#8217;ve started thinking about this shift as moving from musician to conductor.</p><p>As a musician, you play the instrument. You produce the sound. Your skill is in your fingers, your technique, your direct manipulation of the tools. There&#8217;s a ceiling to what you can produce, one person, one instrument, limited hours in the day.</p><p>As a conductor, you don&#8217;t play anything. You direct. You shape. You bring out the best in each section of the orchestra, balance the voices, keep everyone aligned toward the same interpretation. The music that emerges is vastly more complex than any single player could produce.</p><p>That&#8217;s what building software feels like now. I&#8217;m not the one typing the code. I&#8217;m the one directing what gets built, reviewing what comes back, iterating on the direction, making sure it all hangs together.</p><p><strong>I still build. I still contribute meaningfully. 
The decisions I make&#8212;what to build, how it should work, what tradeoffs to accept&#8212;those matter enormously.</strong></p><p><strong>But I&#8217;m operating at a completely different altitude. Less typing, more thinking. Less doing, more directing. Less output, more </strong><em><strong>outcome</strong></em><strong>.</strong></p><p>And the same principle applies to the human side of the team. My job isn&#8217;t to write the code. My job is to make sure the right code gets written, by the right people (human or AI), in the right way.</p><div><hr></div><h2>The compound effect</h2><p>When you stop being the bottleneck (for your team and for yourself) something magical happens. Things start to compound.</p><p>I can run multiple AI agents in parallel, each working on a different piece of a feature. I can spin up experiments that would have taken weeks and validate them in hours. I can test ideas that would have been &#8220;too expensive to try&#8221; and kill the bad ones fast.</p><p>Meanwhile, the team is growing into challenges they would never have faced if I kept swooping in. They&#8217;re developing judgment, building context, becoming the experts I used to be. Every week, there are more problems they can solve without me.</p><p>The result: more gets built, better than before, with less of my direct involvement.</p><p>This is what leverage actually feels like. Not working harder. Not being faster. Multiplying.</p><h2>Why this is terrifying</h2><p>I won&#8217;t pretend this transition is easy.</p><p>It&#8217;s scary to let go of the thing that defined you.</p><p>When your identity has been &#8220;the person who builds,&#8221; and suddenly you&#8217;re not the one typing the code, who are you? When your value was measured in commits and pull requests, and now you&#8217;re measured by team output, how do you know you&#8217;re still valuable? 
When you could always point to lines of code and say &#8220;I made that,&#8221; what do you point to now?</p><p>There are days when I feel like a fraud. Days when I wonder if I&#8217;m even a real engineer anymore. Days when I watch my commit count drop and feel a weird grief for a version of myself that&#8217;s fading away.</p><p>This isn&#8217;t just a behavior change. It&#8217;s an identity shift. And identity shifts are hard. They take time. They feel like loss before they feel like growth.</p><p>But I keep coming back to the question: what do I actually want to build? A monument to my own coding ability? Or something bigger than I could ever create alone?</p><h2>The 2026 resolution</h2><p><strong>Write less code. Get more done.</strong></p><p>This is my commitment for the year. Not because writing code is bad; I still love it, still find it deeply satisfying. But because I&#8217;ve finally internalized that my satisfaction isn&#8217;t the metric that matters.</p><p>Practically, this means catching myself before I jump into implementation. Asking who else could do this and what they&#8217;d learn. Defaulting to AI for tasks that don&#8217;t need my specific judgment. Reviewing and directing instead of writing. Measuring my success by what the team ships, not what I personally produce.</p><p>It means being okay with being less essential. Less heroic. Less visibly productive.</p><p>It means trusting that multiplication beats addition, even when addition feels so much better in the moment.</p><h2>The counterintuitive truth</h2><p>Here&#8217;s what I believe will happen:</p><p><strong>By writing less code, I will build more than ever. Not despite writing less code. </strong><em><strong>Because</strong></em><strong> of writing less code.</strong></p><p>The best version of 2026 me isn&#8217;t the one who wrote the most code. It&#8217;s the one who built the most&#8230; without writing any. The one who multiplied instead of added. 
The one who finally got out of his own way.</p><h2>The invitation</h2><p>If you&#8217;re a founder or leader who still writes most of the code... if your team can&#8217;t ship without you... if you&#8217;re the hero everyone relies on...</p><p>Maybe this is your resolution too.</p><p>The hardest part isn&#8217;t learning to delegate, or learning to use AI, or learning any new skill at all. The hardest part is unlearning the identity that got you here. Letting go of the satisfaction of being the one who fixes things. Trusting that the new way is better, even when it feels like less.</p><p>Here&#8217;s to getting out of our own way.</p><div><hr></div><p><em>Building the future of work at <a href="http://neople.io">neople.io</a>. 
Where the goal isn&#8217;t to write more code&#8212;it&#8217;s to build more, together.</em></p>]]></content:encoded></item><item><title><![CDATA[Notes from three years of building Neople]]></title><description><![CDATA[What building an AI company taught us about timing, trust, and reality]]></description><link>https://thejoylab.ai/p/three-year-anniversary</link><guid isPermaLink="false">https://thejoylab.ai/p/three-year-anniversary</guid><dc:creator><![CDATA[Bas Ploeg]]></dc:creator><pubDate>Tue, 13 Jan 2026 09:31:20 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4b28f460-0504-43b4-98ef-01fef9805063_1200x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Three years ago, Neople started in a way that now feels both recent and very far away.</p><p>There was no clear playbook. No crisp category. No shared understanding in the market of what &#8220;AI at work&#8221; was supposed to look like. We were four people with a strong conviction that something fundamental in how people worked with software was broken, and that AI could shift it.</p><p>What we underestimated was how much would change around us while we were building.</p><p>The market shifted. The technology developed. Customer expectations changed. And we had to keep deciding, sometimes weekly, whether we were early, wrong, or just impatient.</p><p>This is a reflection on what actually happened over those three years, and what we learned building an AI company from the Netherlands, in public, and in real time.</p><h2>Year one felt obvious, and that gave us momentum</h2><p>In the beginning, things felt strangely clear.</p><p>AI was emerging. SaaS felt clunky. People were overwhelmed by tools. The idea clicked: a digital coworker that could take work off your plate and help teams operate better.</p><p>That clarity gave us momentum. 
It helped us move fast, make decisions, and get something real into the hands of customers.</p><p>What we learned along the way is that early clarity often hides complexity. The technology was promising, but uneven. Customers were interested, but still figuring out how this fit their work. Internally, product decisions were shaped as much by what was feasible that month as by what the longer-term vision suggested.</p><p>Over time, that taught us something important. There was never going to be a single straight line from idea to outcome. Progress would come from constant adjustment, not from following a fixed plan.</p><p>That understanding changed how we build. It made us more attentive to constraints, more patient with sequencing, and more deliberate about what to solve next.</p><h2>We learned to build for a future that arrives in pieces</h2><p>One of our biggest lessons was about timing.</p><p>We raised early rounds believing that classic SaaS patterns would fade quickly, and AI-driven systems would take over large parts of operational work. Directionally, we still believe that.</p><p>What we learned in practice is that technology matures in pieces. Some capabilities moved incredibly fast. Others took longer. That uneven pace shaped how customers could realistically use what we built.</p><p>Starting with a more service-heavy approach helped us learn deeply. It showed us where automation worked, where it broke down, and where people needed visibility and control. Those insights now directly inform the product we are building.</p><p>The takeaway was constructive, not discouraging. You cannot compress maturity, but you can design for it. 
When the tech is still forming, you need bridges that work today and foundations that support what comes next.</p><p>That perspective has made our product clearer, more grounded, and better aligned with how teams actually grow into new systems.</p><h2>The market clarified faster than we expected</h2><p>Early on, around early 2023, the main challenge was adoption.</p><p>People approached AI carefully. Teams wanted to understand limits, build trust, and stay close to decisions. The focus was on assisting and aiding the employee instead of automating complete workflows, and human control played an important role in making AI usable at all.</p><p>Then expectations moved quickly.</p><p>Customers began asking different questions. Where automation could take over. Where outcomes could be faster. Where AI could operate with more independence. The change was driven by better models, broader exposure to AI tools, and growing pressure inside organizations to move faster.</p><p>That shift taught us something valuable. We had learned how to build trust first. Now the market was ready to build on top of it.</p><p>Reworking product, positioning, and sales at the same time was demanding, but it also sharpened our understanding of what customers actually wanted next.</p><h2>Building in the Netherlands kept us honest</h2><p>There is something grounding about building a company here.</p><p>Dutch customers are direct. They ask what works, what doesn&#8217;t, and how long it takes. There is little patience for abstract futures without practical value.</p><p>We would sometimes have a conversation with investors about a post-SaaS world in the morning, and then talk to a customer in the afternoon who just wanted to know how we were different from a chatbot or what kind of APIs we had versus custom integrations.</p><p>Both perspectives were valid. Holding them at the same time was exhausting.</p><p>It taught us that vision cannot replace usefulness. If your product does not make someone&#8217;s Monday easier, the long-term story does not matter yet.</p><h2>Customers often bought &#8220;AI&#8221; before they bought a solution</h2><p>One thing that surprised us consistently was how rarely customers arrived with a sharply defined problem.</p><p>In classic SaaS, people show up with pain. Too slow. Too expensive. Too manual.</p><p>With AI, many showed up with something softer: pressure to adopt AI, pressure from leadership, or a sense that they were falling behind. Customer support seemed to be the most logical place for these teams to start with their first foray into AI, with clear and quick wins in staffing, seasonality, localization, and training.</p><p>This fact alone meant we were not just delivering a tool. 
We were helping customers figure out what to do with it and exploring the potential gains together.</p><p>Some thrived in that openness. Others struggled. The same product could feel transformative to one team and irrelevant to another. AI maturity varied wildly, even within the same industry.</p><p>This forced us to accept a difficult truth: product success in AI depends as much on readiness as on features.</p><h2>Growth moments didn&#8217;t feel big when they happened</h2><p>Some milestones only felt real in hindsight.</p><p>Raising our first round changed how seriously others took us, but also how seriously we had to take ourselves. Acquiring another team and suddenly being twenty people in a room made the company feel real in a new way.</p><p>Later, realizing that Neople was considered a serious player in its niche happened quietly. There was no single moment. Just a gradual accumulation of customers, conversations, and trust.</p><p>That pattern repeated often. The biggest shifts were slow while happening, and obvious only later.</p><h2>What we learned after three years</h2><p>We&#8217;re very aware of how privileged we are to be building a tech company in the AI space right now. We&#8217;ve been trusted by investors to explore uncertain territory, and by a team that shows up every day to build something that did not exist before. That combination of trust and timing is rare, and we don&#8217;t take it lightly.</p><p>Sharing what we&#8217;ve learned over the last three years is our way of giving something back, and of being honest about what actually happens when you try to build in a market that keeps moving under your feet.</p><h3>1) There is no stable ground in AI, only moving reference points</h3><p>Your product, your customers, and the technology evolve at the same time. Planning too far ahead creates false certainty.</p><h3>2) Timing matters as much as ideas</h3><p>Building a product that was perhaps too early still hurts. 
Vision needs to be paired with something customers can use now.</p><h3>3) Customers buy readiness as much as capability</h3><p>AI value depends on data, trust, ownership, and internal clarity. Software alone does not solve that.</p><h3>4) Hiring speed amplifies both progress and chaos</h3><p>Scaling roles quickly changes culture, decision-making, and momentum. Money increases responsibility faster than it increases clarity.</p><h3>5) Building in a pragmatic market forces better products</h3><p>Direct feedback and low tolerance for hype are painful, but they reduce long-term delusion.</p><h2>Where we are now</h2><p><strong>Bas:</strong> Looking back, I still believe the core frustration we started with is real. Technology often forces people to adapt to it, instead of the other way around. We maybe underestimated how long it would take to change that properly, but I truly believe in a future where technology is more natural.</p><p><strong>Hans:</strong> I think the real work is learning to hold ambition and reality at the same time. The future we believe in is still coming. 
Our job is to build something useful on the way there, without losing ourselves in either hype or fear.</p><p>Three years in, the biggest lesson is not about AI or SaaS.</p><p>It is about staying flexible without becoming generic, and stubborn without becoming blind.</p><p>And accepting that building a company is less about executing a plan, and more about adjusting your understanding faster than the world changes around you.</p>]]></content:encoded></item><item><title><![CDATA[Software as a Consumable: The category that shouldn't exist (but does)]]></title><description><![CDATA[What happens when you no longer have to maintain software?]]></description><link>https://thejoylab.ai/p/software-as-a-consumable</link><guid isPermaLink="false">https://thejoylab.ai/p/software-as-a-consumable</guid><dc:creator><![CDATA[Job Nijenhuis]]></dc:creator><pubDate>Wed, 07 Jan 2026 10:43:50 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/96eb1166-a672-4819-ba79-53172b6a5ccb_1200x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine someone in 1995 saying:</p><blockquote><p>&#8220;In the future, anyone will instantly summon a stranger to move furniture for pocket change.&#8221;</p></blockquote><p>They&#8217;d be laughed out of the room. The economics don&#8217;t work. The logistics are impossible. The trust mechanisms don&#8217;t exist.</p><p><strong>But here we are.</strong> TaskRabbit exists.</p><p>It didn&#8217;t destroy professional movers. It <strong>created an entirely new category</strong> of casual, personal tasks that were never worth doing before.</p><p>The same impossible shift is happening with software right now.</p><h2>When costs collapse</h2><p>Something magical happens when the cost of something drops below a critical threshold:</p><p><strong>Photography:</strong> Digital cameras made photos free, so instead of 24 carefully composed vacation shots, people take 500 photos at brunch.</p><p><strong>Transportation:</strong> Uber made rides cheap enough that people take them for 6 blocks.</p><p><strong>Food delivery:</strong> Apps made it cheap enough to order a single coffee.</p><p><strong>The pattern:</strong> When costs collapse, we don&#8217;t just do the same things cheaper. <strong>We do entirely new things that weren&#8217;t worth doing before.</strong></p><h2>&#128187; The $3,000 barrier</h2><p>For 50 years, software had a fundamental constraint: <strong>you needed someone who could write code.</strong></p><p><strong>The math:</strong></p><ul><li><p>Engineer salary: $150K/year</p></li><li><p>Minimum viable tool: ~1 week</p></li><li><p><strong>Cost per utility: ~$3,000</strong></p></li></ul><p>Software only got built if it solved a problem for many people, justified significant ROI, or someone cared enough to learn programming themselves.</p><p><strong>Everything else?</strong> People suffered through it.</p><p>That spreadsheet you manually update every Monday. 
That report you copy-paste between 5 systems. That data transformation you do by hand.</p><p>We accepted that some problems were &#8220;too small&#8221; for software.</p><h2>The economics broke</h2><p><strong>Creating software used to cost:</strong></p><ul><li><p>$3,000 minimum</p></li><li><p>Years of training</p></li><li><p>Deep technical knowledge</p></li></ul><p><strong>Creating software now costs:</strong></p><ul><li><p>$0.50 (Claude API call)</p></li><li><p>30 seconds of description</p></li><li><p>Ability to describe a problem in English</p></li></ul><p><strong>That&#8217;s a 6,000x improvement.</strong></p><p>When something gets 6,000x cheaper, you don&#8217;t just do it more. <strong>You unlock entirely new categories of use.</strong></p><h2>Consumable software</h2><p>There&#8217;s now a category of software that:</p><ul><li><p><strong>Disposable</strong> - Used once, then forgotten</p></li><li><p><strong>Ephemeral</strong> - Might not exist tomorrow</p></li><li><p><strong>Personal</strong> - Solves a problem for one person</p></li><li><p><strong>Regenerable</strong> - Easier to recreate than maintain</p></li><li><p><strong>Quick</strong> - Generated in seconds</p></li></ul><p>This category was economically impossible for 50 years. Now it&#8217;s common.</p><p><strong>Real examples from last month:</strong></p><ul><li><p>A marketer generated a tool to extract social media links from pages. Used it twice. Forgot about it. Would regenerate if needed again.</p></li><li><p>A recruiter generated a script to clean LinkedIn CSV exports. Used it for one campaign. Never thought about it again.</p></li><li><p>A PM generated a tool to combine 3 Notion databases. Used it for one report. Doesn&#8217;t remember where it is. Doesn&#8217;t care.</p></li></ul><p><strong>None of them know how to code. None maintained their code. None even saved it. 
And that&#8217;s completely rational.</strong></p><h2>Two categories of software</h2><p><strong>Software as a Product (still needs engineers):</strong></p><ul><li><p>Used by thousands/millions</p></li><li><p>Maintained over years</p></li><li><p>Quality matters deeply</p></li><li><p>Revenue model</p></li></ul><p><strong>Examples:</strong> Figma, Notion, banking apps</p><p><strong>Software as a Consumable (anyone can make):</strong></p><ul><li><p>Used by one person</p></li><li><p>Exists for minutes/days</p></li><li><p>Quality barely matters</p></li><li><p>No revenue, just utility</p></li></ul><p><strong>Examples:</strong> &#8220;Combine these CSVs,&#8221; &#8220;Extract emails from this text,&#8221; &#8220;Convert this format to that format&#8221;</p><p><strong>The first category didn&#8217;t change. It still needs engineers. The second category didn&#8217;t exist before. Now it does. This isn&#8217;t about engineers writing worse code. It&#8217;s about non-engineers creating software for the first time.</strong></p><h2>The coming flood</h2><p>What happens when 8 billion people can suddenly create software?</p><p><strong>When everyone got cameras:</strong> Didn&#8217;t destroy professional photography. Created billions of personal photos. Instagram emerged.</p><p><strong>When everyone got video cameras:</strong> Didn&#8217;t destroy Hollywood. Created billions of personal videos. TikTok emerged.</p><p><strong>When everyone got publishing platforms:</strong> Didn&#8217;t destroy journalism. Created billions of blog posts. Medium emerged.</p><p><strong>Now everyone can create software:</strong> Won&#8217;t destroy engineering. Will create billions of personal utilities. 
<strong>The platform hasn&#8217;t emerged yet.</strong></p><p>Just like Instagram couldn&#8217;t exist before casual photography was possible, we don&#8217;t yet know what will emerge when casual software creation becomes normal.</p><h2>Problems worth solving</h2><p>Remember all those &#8220;too small&#8221; problems?</p><p><strong>Sarah (HR):</strong> &#8220;Cross-reference employee emails with benefits enrollment.&#8221;</p><p><strong>Before:</strong> 2 hours of manual Excel work, monthly</p><p><strong>Now:</strong> &#8220;Claude, find emails in A but not B.&#8221; 30 seconds.</p><p><strong>Marcus (small business):</strong> &#8220;Generate invoices from my spreadsheet.&#8221;</p><p><strong>Before:</strong> Hire developer ($500-1000) or manual torture</p><p><strong>Now:</strong> Describe the format. 1 minute.</p><p><strong>Jen (analyst):</strong> &#8220;Extract URLs from 50 PDFs and check if they work.&#8221;</p><p><strong>Before:</strong> Impossible without technical help</p><p><strong>Now:</strong> 2 minutes.</p><p>These aren&#8217;t products. They&#8217;re personal utilities that solve a problem once and cease to matter. <strong>They&#8217;re consumable.</strong></p><h2>Regeneration vs maintenance</h2><p><strong>Traditional software lifecycle:</strong></p><ol><li><p>Write code</p></li><li><p>Document it</p></li><li><p>Maintain for years</p></li><li><p>Fix bugs</p></li><li><p>Refactor</p></li><li><p>Live with technical debt</p></li></ol><p><strong>Consumable software lifecycle:</strong></p><ol><li><p>Generate it</p></li><li><p>Use it</p></li><li><p>Forget it</p></li><li><p>Regenerate if ever needed again</p></li></ol><p>When generation is faster than maintenance, maintenance becomes irrational. This is backwards from 50 years of software wisdom. 
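To make that concrete, here is a hedged sketch (the data and names below are invented for illustration) of the kind of one-off script this lifecycle produces, in the spirit of the &#8220;find emails in A but not B&#8221; task:</p>

```python
import csv
import io

# Stand-ins for two exported CSVs; in practice these would be real files
# dragged out of an HR system and a benefits portal.
employees_csv = "email\nsarah@acme.com\nmarcus@acme.com\njen@acme.com\n"
benefits_csv = "email\nmarcus@acme.com\n"

def emails(csv_text, column="email"):
    """Collect one column of a CSV into a normalized set of addresses."""
    return {row[column].strip().lower()
            for row in csv.DictReader(io.StringIO(csv_text))}

# "Emails in A but not B": employees with no benefits-enrollment row.
missing = sorted(emails(employees_csv) - emails(benefits_csv))
print(missing)  # ['jen@acme.com', 'sarah@acme.com']
```

<p>Nobody saves a script like this. Regenerating it next quarter is cheaper than finding it again. 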
And that&#8217;s okay &#8211; this is a different category.</p><h2>The quality is irrelevant (for consumables)</h2><p>For consumable software, quality barely matters.</p><p><strong>Doesn&#8217;t need to:</strong> scale to millions, be maintainable, be documented, follow best practices, be elegant, be particularly secure or performant</p><p><strong>Does need to:</strong> work right now, solve this specific problem</p><p><strong>That&#8217;s it.</strong></p><p>This drives engineers crazy. We&#8217;ve been trained that quality matters. And it does &#8211; <strong>for products.</strong></p><p>But consumable software is like a paper towel. Single-use, disposable, solves an immediate problem. Insisting every paper towel be archival-quality museum-grade paper would be insane. Yet that&#8217;s what we did with software for 50 years &#8211; because we had no choice.</p><h2>What this means for engineers</h2><p><strong>The good news:</strong> Your job isn&#8217;t going away.</p><p><strong>The great news:</strong> Your job is getting better.</p><p><strong>Before:</strong> 40% of engineer time building throwaway utilities for other teams</p><p><strong>After:</strong> 5% of time on utilities (everyone generates their own). Focus on products, core infrastructure, complex systems.</p><p>Engineers are being freed from the &#8220;too small for a product, too technical for normal people&#8221; zone. That zone is becoming accessible to everyone. 
<strong>That&#8217;s liberating, not threatening.</strong></p><h2>The Neople perspective</h2><p>People using Nora (our AI colleague) increasingly generate little tools for themselves: data transformations, workflow automation, quick reports.</p><p><strong>They don&#8217;t call them &#8220;software.&#8221;</strong> They think of them as &#8220;getting the task done.&#8221;</p><p>But that&#8217;s what they are: disposable, personal software.</p><p>The shift we&#8217;re enabling isn&#8217;t &#8220;better software engineering.&#8221; It&#8217;s &#8220;software creation for everyone.&#8221;</p><h2>The platform that didn&#8217;t exist yet</h2><p>Instagram couldn&#8217;t exist before smartphone cameras. YouTube couldn&#8217;t exist before accessible video. Twitter couldn&#8217;t exist before everyone could publish text.</p><p><strong>So what platform will exist when everyone can generate software?</strong></p><p>Maybe a marketplace for prompts instead of code. A system for sharing and remixing generated tools. A web where everyone can customize everything. Something we can&#8217;t imagine yet.</p><p>We&#8217;re at the &#8220;everyone has a camera phone&#8221; moment for software.</p><p>The Instagram moment is still coming.</p><h2>The shift is starting</h2><p>We&#8217;re at the very beginning:</p><p><strong>Today:</strong> Non-technical people generating tools. People regenerating instead of maintaining. Prompts being saved instead of code. Software treated as disposable.</p><p><strong>In 12-18 months:</strong> Billions of pieces of consumable software. Platforms emerging to share and remix. Software creation as common as document creation. Engineers focused entirely on products.</p><p>For 50 years, software was scarce because creation was expensive. For the next 50 years, software will be abundant because creation is free.</p><p><strong>Everything changes.</strong></p><h2>Full circle</h2><p>Taskrabbit didn&#8217;t make professional movers obsolete. 
It created a category that shouldn&#8217;t exist: <strong>&#8220;casual, personal furniture moving.&#8221;</strong></p><p>Before Taskrabbit, you either did it yourself or hired professional movers for thousands of dollars. No in-between. The economics didn&#8217;t work.</p><p><strong>Now there&#8217;s an in-between. And it&#8217;s massive.</strong></p><p>Consumable software is the same: too small for a product, too personal for enterprise, too cheap to justify professional development.</p><p>But now it exists. And it&#8217;s being created by <strong>everyone</strong>, not just engineers.</p><p>Software is becoming a consumable. Not because professional software is dying. But because a whole new category of personal, disposable, regenerable software is being born.</p><div><hr></div><p><em>Watching the birth of consumable software at <a href="http://neople.io">neople.io</a>. Where anyone can generate the tools they need, use them once, and move on with their life.</em></p>]]></content:encoded></item><item><title><![CDATA[Intentional and accidental, AI first can be both]]></title><description><![CDATA[A peek into what it's actually like becoming an AI first team and company]]></description><link>https://thejoylab.ai/p/ai-first</link><guid 
isPermaLink="false">https://thejoylab.ai/p/ai-first</guid><dc:creator><![CDATA[Hans de Penning, CEO @ Neople]]></dc:creator><pubDate>Tue, 06 Jan 2026 09:30:35 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5939a514-9a9e-41bc-b255-de2f93e2de91_1024x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I keep getting asked what it means that we&#8217;re an &#8220;AI first&#8221; company, and every time I try to answer it cleanly, it comes out wrong. Not incorrect, just misleading. Too neat. Too intentional. As if we sat down one day, wrote a strategy doc, and decided to reorganise the company around AI.</p><p>That&#8217;s not what happened.</p><p>What actually happened is messier, and probably more familiar if you&#8217;re building something yourself.</p><p>We ran into limits. Over and over. Limits in people, limits in time, limits in how fast we could move without everything falling apart. And instead of solving those limits by adding more layers or more hires, we kept reaching for whatever let us keep going.</p><p>More often than not, that was software. Increasingly, that was AI.</p><p>Only later did we realise we&#8217;d crossed some invisible line.</p><p>I still don&#8217;t love the definitions, but here&#8217;s the best way I can explain it:</p><p>An AI company builds AI products. An AI first company assumes that knowledge work itself can be redesigned. Not optimised, redesigned. The question shifts from &#8220;who should do this&#8221; to &#8220;why does this exist in this form at all.&#8221;</p><p>Once that shift happens, it&#8217;s hard to unsee.</p><p>There&#8217;s a popular story going around right now about tiny teams doing huge numbers, and the explanation is usually that they&#8217;re just using the right AI tools. I think that story is comforting, because it makes success sound like a tooling choice. 
Pick the right stack, move faster, win.</p><p>But small teams with outsized output existed long before the current wave of AI. People built highly automated businesses with boring scripts, fulfilment partners, and unglamorous systems. The difference now is not that automation exists, it&#8217;s where it reaches. Work that used to require specialists early on, things like legal reviews, financial checks, first drafts of designs, exploratory code, internal tooling, can now be done well enough by one person to keep momentum.</p><p>Not perfectly. Not magically. But well enough.</p><p>That &#8220;well enough&#8221; matters more than people like to admit.</p><h2><strong>Constraint as the actual driver</strong></h2><p>Inside Neople, we never made a rule that said &#8220;use AI for this.&#8221; We also never sat down and mapped out every process and asked where AI could be slotted in. That approach, in my experience, tends to produce a lot of activity and very little real change.</p><p>What we did instead, often without naming it, was remove escape hatches.</p><p>If something needed to get done and there wasn&#8217;t an obvious person to hand it to, the work didn&#8217;t disappear. Someone had to find a way. Sometimes that meant automation. Sometimes it meant a model. Sometimes it meant realising the task itself was unnecessary and just deleting it.</p><p>That last option is still the most powerful one, and the least talked about.</p><p>I&#8217;ll be honest about something that feels slightly uncomfortable to say out loud. The biggest driver of becoming AI first was not excitement or vision. It was constraint.</p><p>When you don&#8217;t have a junior developer to offload smaller tasks to, you try to see how far you can get on your own. When you don&#8217;t have a designer available, you learn to generate and iterate assets yourself. 
When you don&#8217;t have legal support on hand, you learn to do the first eighty percent and escalate the truly risky parts.</p><p>Scarcity has, I think, always pushed people toward efficiency. That&#8217;s not new. What&#8217;s new is how much one person can do once they&#8217;re forced into that mode. The tools amplify effort, but the mindset comes from having no alternative.</p><h2><strong>Something I didn&#8217;t expect</strong></h2><p>There&#8217;s a second thing that happened, and I didn&#8217;t fully appreciate it at first. When everyone becomes more capable of doing things outside their lane, boundaries blur. Designers can challenge technical decisions because they can actually prototype alternatives. Engineers can challenge design decisions because they can generate and test variations quickly. People stop having opinions in the abstract and start showing working versions.</p><p>That creates friction. It also, I think, creates better outcomes, if you can tolerate the tension.</p><p>Traditional teams feel calmer partly because crossing into someone else&#8217;s area is expensive. AI makes that cheap. The real challenge stops being adoption and starts being collaboration. How do you work together when everyone has more agency than before?</p><p>I don&#8217;t think we&#8217;ve solved that yet. I do think it&#8217;s a real shift that deserves more attention than it gets.</p><p>There&#8217;s also a people side to this that&#8217;s easy to get wrong. The moment someone feels something is being taken away, they stop listening. It doesn&#8217;t matter how good the long-term argument is. Loss shuts the conversation down.</p><p>AI triggers that reaction fast. If &#8220;AI first&#8221; is framed as fewer hires, less support, more pressure, resistance is inevitable. Even if the upside is real.</p><p>What seems to work better is a combination of two things happening at the same time. Clear personal upside, and real constraints. 
Not &#8220;use AI more,&#8221; but &#8220;find a way to get this done,&#8221; paired with leaders actually doing the work themselves.</p><p>Once it&#8217;s visible, it stops being theoretical.</p><h2><strong>If I were starting from scratch</strong></h2><p>I wouldn&#8217;t begin with training sessions or tool rollouts.</p><p>I&#8217;d start with myself. Pick a task I usually delegate, struggle through doing it with the tools available until the result is good enough, then show how I did it. Not as a mandate. Just as a reference point.</p><p>After that, the more important work begins. Going back to first principles. What actually needs to happen for the company to move forward. What assumptions are being carried simply because they used to be true. What work exists only because it always has.</p><p>Most of the real gains, I think, don&#8217;t come from smarter execution. They come from removing work entirely.</p><p>Looking further ahead, I don&#8217;t think the future is people versus AI. It&#8217;s different kinds of people doing different kinds of work. More time spent designing systems, setting boundaries, deciding how agents should behave. More focus on customer-facing roles, because trust and context still matter a lot while the technology keeps shifting underneath us.</p><p>At the same time, I expect we&#8217;ll keep increasing the number of agents we run internally. Not because it&#8217;s fashionable, but because once you experience the speed of &#8220;I can just try this,&#8221; waiting starts to feel expensive.</p><p>That&#8217;s been the biggest emotional shift for me. Not excitement. 
Not fear.</p><p>Impatience.</p><p>And if you&#8217;re feeling some version of that too, you&#8217;re probably closer to being AI first than you think.</p>]]></content:encoded></item><item><title><![CDATA[We built the wrong product (and it was exactly right)]]></title><description><![CDATA[What we learned from a recent bet we lost and why it doesn't feel like losing at all.]]></description><link>https://thejoylab.ai/p/building-a-product</link><guid isPermaLink="false">https://thejoylab.ai/p/building-a-product</guid><dc:creator><![CDATA[Hans de Penning, CEO @ Neople]]></dc:creator><pubDate>Mon, 22 Dec 2025 10:09:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!wVEA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d9dcb8-665c-472a-8c92-4a9fe28af479_2504x1250.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Every startup before Series A faces the same dilemma.</p><p>You&#8217;ve got something working. Customers are coming in. Growth is happening. But you know (usually, somewhere deep in your gut at 3AM) that what got you here won&#8217;t get you there.</p><p>So you have to take a risk. Change something fundamental. 
Place a bet on the future before you can afford to lose it.</p><p>The hard part isn&#8217;t taking the risk. It&#8217;s knowing which one to take. And when.</p><p>Let me tell you about Ollo.</p><h2>The problem we thought we had</h2><p>Picture this: You&#8217;re growing fast. Customers love what you do. But they keep asking for more. Not just &#8220;can you add this feature?&#8221; More like: &#8220;Can your AI do <em>everything</em>?&#8221;</p><p>That&#8217;s what we were hearing from customers and prospects earlier this year.</p><p>We built Neople as a customer support agent. It was good at that. Really good. But customers didn&#8217;t want a specialist&#8212;they wanted an all-rounder coworker. Someone who could learn their processes, integrate with their tools, handle whatever they threw at it.</p><p>The question became clear: Are we a tool that&#8217;s going to resolve your customer support questions? Or are we a solution that customers can use to automate their customer support questions?</p><p>Subtle difference. Massive implications.</p><p>So we had a choice. Keep refining what we knew worked. Or bet on what might work better.</p><p>We took our bet.</p><h2>Enter Ollo: the everything agent</h2><p>Neople was a customer support agent. Ollo was your everything agent.</p><p>It was our answer to &#8220;what if?&#8221; What if we made something universal? What if people could just <em>talk</em> to it and build workflows without touching a single node or configuration screen?</p><p>A new colleague joins the team, integrates with everything, and you make it work just by talking. That was the mission.</p><p>We split the team. Gave a small group full focus. Told them: move fast, stay nimble, build the future.</p><p>And they did. 
Sort of.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wVEA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d9dcb8-665c-472a-8c92-4a9fe28af479_2504x1250.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wVEA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d9dcb8-665c-472a-8c92-4a9fe28af479_2504x1250.png 424w, https://substackcdn.com/image/fetch/$s_!wVEA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d9dcb8-665c-472a-8c92-4a9fe28af479_2504x1250.png 848w, https://substackcdn.com/image/fetch/$s_!wVEA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d9dcb8-665c-472a-8c92-4a9fe28af479_2504x1250.png 1272w, https://substackcdn.com/image/fetch/$s_!wVEA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d9dcb8-665c-472a-8c92-4a9fe28af479_2504x1250.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wVEA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d9dcb8-665c-472a-8c92-4a9fe28af479_2504x1250.png" width="1456" height="727" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/43d9dcb8-665c-472a-8c92-4a9fe28af479_2504x1250.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:727,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1005217,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://thejoylab.ai/i/181870778?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d9dcb8-665c-472a-8c92-4a9fe28af479_2504x1250.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!wVEA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d9dcb8-665c-472a-8c92-4a9fe28af479_2504x1250.png 424w, https://substackcdn.com/image/fetch/$s_!wVEA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d9dcb8-665c-472a-8c92-4a9fe28af479_2504x1250.png 848w, https://substackcdn.com/image/fetch/$s_!wVEA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d9dcb8-665c-472a-8c92-4a9fe28af479_2504x1250.png 1272w, https://substackcdn.com/image/fetch/$s_!wVEA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d9dcb8-665c-472a-8c92-4a9fe28af479_2504x1250.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Early prototype of the Ollo product.</figcaption></figure></div><h2>The part where it gets interesting</h2><p>Here&#8217;s what we learned: We changed too many things at once.</p><p>New Ideal Customer Profile. New go-to-market motion. New product paradigm. New technical foundation.</p><p>We changed too many parameters at the same time, which was making it even harder for ourselves to be successful. Too many moving parts to achieve the full focus we intended and knew was necessary to ship an MVP.</p><p>Rookie move? Maybe. But also: necessary.</p><p>Because the market was moving. Agentic AI was becoming real. Customers were ready for something different. If the market is ready, then it will come and it will come fast, especially in the AI space. So then you try to be first or try to be faster.</p><p>So we jumped. 
And we learned mid-air.</p><h2>What we actually built</h2><p>Ollo ran for about six weeks of real user testing. Not long. Probably too short to really see if it was successful or not, if I&#8217;m being honest.</p><p>But long enough to see what worked and what didn&#8217;t.</p><p><strong>What didn&#8217;t:</strong> The everything-for-everyone approach. It was deliberately broad, which meant we were solving too many problems at once, making the product complex and unclear.</p><p><strong>What did:</strong> The underlying architecture. The agentic reasoning. The conversational-first design philosophy. The idea that people should get value in thirty seconds, not three months.</p><p>We pulled the plug on Ollo as a standalone product. But we didn&#8217;t throw it away.</p><h2>The unexpected gift</h2><p>Here&#8217;s where it gets good.</p><p>All that technology we built? All those assumptions we challenged? They&#8217;re now baked into the newest version of Neople. Not as a separate product, but as the foundation of what we do.</p><p>We moved from retrieval-augmented generation to agentic reasoning. The whole paradigm for how to work with AI has shifted to agentic. From rigid prompting to intelligent decision-making. From &#8220;set up your knowledge base first&#8221; to &#8220;start talking and see what happens.&#8221;</p><p>We started from the premise that you have to get value in thirty seconds. That&#8217;s a better starting point than three months and working backward.</p><p><strong>Ollo forced us to rethink everything. And when we came back to our core product, we saw it with fresh eyes.</strong></p><h2>Five things we learned (so maybe you don&#8217;t have to)</h2><p><strong>1. You can&#8217;t run two companies in one</strong></p><p>Focus isn&#8217;t optional. You need it to move fast. Even when we tried to isolate the team, full focus proved impossible inside the same company. 
Pick one bet. Make it count.</p><p><strong>2. Timing is a bet on technology itself</strong></p><p>You constantly have to make a bet on how far the technology will be in half a year or a year from now. We were early. The models weren&#8217;t quite ready. Browser automation needed more scaffolding than we&#8217;d hoped. Sometimes being early is exactly where you need to be, but it&#8217;s still a gamble.</p><p><strong>3. Always bet on laziness</strong></p><p>I will always bet on laziness. If something is simpler to do, cheaper to do, easier to do&#8212;it wins. Eventually. Always. People always try to take a shortcut. Build for that instinct.</p><p><strong>4. It&#8217;s easier to go broad than to go narrow</strong></p><p>It&#8217;s easier to move from individual to enterprise than from enterprise to individual. Starting from &#8220;make it simple enough for anyone&#8221; beats starting from &#8220;make it powerful enough for enterprises&#8221; and working backward. It&#8217;s harder for SAP to move down to a small business owner than for Shopify to move up to Adidas.</p><p><strong>5. The right amount of specific matters more than you think</strong></p><p>You can quite easily make an overfit product that solves one specific thing, and it can be quite successful. But it&#8217;s harder to break out of it. The trick is finding the right specificity. The right moment. The right bet.</p><h2>What comes next</h2><p>We&#8217;re not done experimenting. Not even close.</p><p>But now we know: focus matters more than ambition. Solving one thing brilliantly beats solving everything adequately. And sometimes the best outcome of an experiment is learning what <em>not</em> to do.</p><p>On one hand, you could say, &#8220;Oh, okay, Ollo doesn&#8217;t exist now.&#8221; On the other hand, it does exist, actually. Parts of it exist in our current product.</p><p>Ollo doesn&#8217;t exist as a product anymore. But it exists in every line of code we write. 
Every conversation our Neople have. Every customer who gets value in seconds instead of months.</p><p>We built the wrong product. And that was exactly right.</p><div><hr></div><p><em>We&#8217;re building in public at Neople and sharing the messy, uncertain, occasionally backwards journey of creating something new. If you&#8217;re building something too, I hope this helps. Even if it&#8217;s just knowing you&#8217;re not the only one who takes the scenic route.</em></p>]]></content:encoded></item><item><title><![CDATA[We landed on the moon with less power than your charger]]></title><description><![CDATA[But your OMS query is still going to take ages.]]></description><link>https://thejoylab.ai/p/computing-power</link><guid isPermaLink="false">https://thejoylab.ai/p/computing-power</guid><dc:creator><![CDATA[Job Nijenhuis]]></dc:creator><pubDate>Tue, 16 Dec 2025 12:46:59 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/341ea39e-0a2b-4f52-a653-511e8b590434_1024x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>The two-minute timeout</h2><p>Last week, I increased a timeout on a single API call to <strong>two minutes</strong>.</p><p>Not for complex machine learning 
inference. Not for processing terabytes of data. Not for rendering 3D visualizations.</p><p>For this query: <strong>&#8220;What&#8217;s the latest order for this email address?&#8221;</strong></p><p>We now wait up to two minutes to answer a question a competent intern with a spreadsheet could answer in ten seconds. And here&#8217;s what&#8217;s driving me insane: we just accept this. We shrug and say &#8220;OMS integrations are slow&#8221; like it&#8217;s a law of physics instead of a choice.</p><h2>The absurd comparison</h2><p>Meanwhile, I can ask Claude to analyze a complex codebase, write an essay on philosophical concepts, generate a complete API with tests, debug race conditions, or explain quantum physics to a five-year-old.</p><p><strong>Response time?</strong> Three seconds.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9IvA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd66b7aa-0878-4b8a-9d4b-1f8a47198584_1036x308.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9IvA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd66b7aa-0878-4b8a-9d4b-1f8a47198584_1036x308.png 424w, https://substackcdn.com/image/fetch/$s_!9IvA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd66b7aa-0878-4b8a-9d4b-1f8a47198584_1036x308.png 848w, https://substackcdn.com/image/fetch/$s_!9IvA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd66b7aa-0878-4b8a-9d4b-1f8a47198584_1036x308.png 1272w, 
https://substackcdn.com/image/fetch/$s_!9IvA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd66b7aa-0878-4b8a-9d4b-1f8a47198584_1036x308.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9IvA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd66b7aa-0878-4b8a-9d4b-1f8a47198584_1036x308.png" width="1036" height="308" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dd66b7aa-0878-4b8a-9d4b-1f8a47198584_1036x308.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:308,&quot;width&quot;:1036,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:55898,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://thejoylab.ai/i/181784212?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd66b7aa-0878-4b8a-9d4b-1f8a47198584_1036x308.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9IvA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd66b7aa-0878-4b8a-9d4b-1f8a47198584_1036x308.png 424w, https://substackcdn.com/image/fetch/$s_!9IvA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd66b7aa-0878-4b8a-9d4b-1f8a47198584_1036x308.png 848w, https://substackcdn.com/image/fetch/$s_!9IvA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd66b7aa-0878-4b8a-9d4b-1f8a47198584_1036x308.png 1272w, 
https://substackcdn.com/image/fetch/$s_!9IvA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd66b7aa-0878-4b8a-9d4b-1f8a47198584_1036x308.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>We have systems processing questions of arbitrary complexity, returning answers that would&#8217;ve taken teams of researchers weeks to compile &#8211; faster than you can say &#8220;please hold.&#8221;</p><p>But a simple database lookup? Two. Entire. 
Minutes.</p><h2>Your charger sent us to the moon</h2><p>Your $25 USB-C charger has more computing power than the Apollo 11 Guidance Computer. Not a little more. <a href="https://forrestheller.com/Apollo-11-Computer-vs-USB-C-chargers.html">563 times more processing power</a>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CkB-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca0cd4b-b7e9-4302-bce6-cf58db447345_1056x370.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CkB-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca0cd4b-b7e9-4302-bce6-cf58db447345_1056x370.png 424w, https://substackcdn.com/image/fetch/$s_!CkB-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca0cd4b-b7e9-4302-bce6-cf58db447345_1056x370.png 848w, https://substackcdn.com/image/fetch/$s_!CkB-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca0cd4b-b7e9-4302-bce6-cf58db447345_1056x370.png 1272w, https://substackcdn.com/image/fetch/$s_!CkB-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca0cd4b-b7e9-4302-bce6-cf58db447345_1056x370.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CkB-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca0cd4b-b7e9-4302-bce6-cf58db447345_1056x370.png" width="1056" height="370" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/aca0cd4b-b7e9-4302-bce6-cf58db447345_1056x370.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:370,&quot;width&quot;:1056,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:62934,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://thejoylab.ai/i/181784212?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca0cd4b-b7e9-4302-bce6-cf58db447345_1056x370.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CkB-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca0cd4b-b7e9-4302-bce6-cf58db447345_1056x370.png 424w, https://substackcdn.com/image/fetch/$s_!CkB-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca0cd4b-b7e9-4302-bce6-cf58db447345_1056x370.png 848w, https://substackcdn.com/image/fetch/$s_!CkB-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca0cd4b-b7e9-4302-bce6-cf58db447345_1056x370.png 1272w, https://substackcdn.com/image/fetch/$s_!CkB-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca0cd4b-b7e9-4302-bce6-cf58db447345_1056x370.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>That afterthought accessory is vastly more powerful than the computer that calculated trajectories to navigate 240,000 miles through space and bring astronauts safely home.</p><p>So why does looking up an order take 120 seconds?</p><h2>What we can and can&#8217;t do</h2><p><strong>What we CAN do:</strong></p><ul><li><p>Train neural networks with billions of parameters</p></li><li><p>Stream 4K video to billions of people simultaneously</p></li><li><p>Sync data across continents in milliseconds</p></li><li><p>Put AI chips in chargers (they negotiate power delivery in real-time)</p></li></ul><p><strong>What we apparently CAN&#8217;T do:</strong></p><ul><li><p>Basic database lookups in under a minute</p></li></ul><p>The same industry that puts AI in your charger can&#8217;t figure out how to index a database properly.</p><h2>The boiling 
frog</h2><p>We&#8217;ve developed collective tolerance for terrible software. Each year, things get slightly slower, slightly more bloated, slightly more &#8220;that&#8217;s just how it works.&#8221; And because it&#8217;s gradual, we adapt.</p><p>We add caching layers. Implement retry logic. Increase timeouts. Tell users &#8220;this might take a while.&#8221;</p><p><strong>We&#8217;ve normalized waiting.</strong></p><p>We have an entire design language dedicated to making waiting feel less terrible: spinners, progress bars, skeleton screens.</p><p>What if instead of designing better loading states, we just made things fast?</p><h2>Complexity isn&#8217;t an excuse</h2><p>I know why the OMS query is slow. Years of technical debt. Legacy systems. Databases that were never meant to scale this way. Integrations bolted onto integrations. Third-party APIs. Vendors who won&#8217;t prioritize performance.</p><p><strong>Complexity is real.</strong> But we&#8217;ve accepted it as an excuse rather than treating it as a problem to solve.</p><p>Somewhere along the way, &#8220;it&#8217;s complicated&#8221; became a valid reason for terrible performance. We&#8217;ve let our systems become so Byzantine that two-minute queries seem <em>reasonable</em>.</p><h2>The real problem</h2><p><strong>Companies running LLMs:</strong></p><ul><li><p>Millions of requests per day</p></li><li><p>Billions of parameters per request</p></li><li><p>Natural language understanding (science fiction a decade ago)</p></li><li><p><strong>Response time: 3 seconds</strong></p></li></ul><p><strong>Your database:</strong></p><ul><li><p>Looking up a single row</p></li><li><p>Technology from the 1970s</p></li><li><p><strong>Response time: 120 seconds</strong></p><p></p></li></ul><blockquote><p>The problem isn&#8217;t that we lack the technology. 
We&#8217;ve built systems so poorly that even simple operations require heroic infrastructure to function at all.</p></blockquote><h2>A failure of priorities</h2><p>At Neople, this frustration drives how we think about software.</p><p>When Nora (our digital colleague) can understand complex workflows and adapt to human needs in real-time, but a basic order lookup takes two minutes? That&#8217;s not a technical limitation &#8211; <strong>that&#8217;s a failure of priorities</strong>.</p><p>We&#8217;ve been so focused on adding features, scaling up, and moving fast that we forgot to ask: <strong>&#8220;Does this actually work well?&#8221;</strong></p><p>Not &#8220;does it work?&#8221; but &#8220;does it work <em><strong>well</strong></em>?&#8221;</p><h2>We have the technology</h2><p>Look at your laptop charger. That little box is more powerful than the computer that took us to the moon.</p><p>Now think about the last time you waited 30 seconds for a page to load. The last timeout error. The last &#8220;system is running slow today.&#8221;</p><blockquote><p>We have orders of magnitude more computing power than the engineers who achieved humanity&#8217;s greatest feats.</p><p>We just need to stop accepting slow as the default.</p><p>If we could navigate to the moon with less computing power than your charger, we can look up an order in under two minutes.</p></blockquote>]]></content:encoded></item><item><title><![CDATA[The last generation of coders (and why we're hiring product engineers instead)]]></title><description><![CDATA[The uncomfortable truth that's already happening]]></description><link>https://thejoylab.ai/p/coders-to-product-engineers</link><guid isPermaLink="false">https://thejoylab.ai/p/coders-to-product-engineers</guid><dc:creator><![CDATA[Job Nijenhuis]]></dc:creator><pubDate>Tue, 09 Dec 2025 10:28:24 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/51d0028c-85cb-4c26-8190-bb1c6dbdf4c2_1024x768.png" length="0" 
type="image/jpeg"/><content:encoded><![CDATA[<p>By this time next year, writing code will be an optional skill for building software products.</p><ul><li><p>Not in five years</p></li><li><p>Not &#8220;eventually&#8221;</p></li><li><p><strong>Within twelve months</strong> (95% certainty)</p></li><li><p><strong>Within six months</strong> (80% certainty)</p></li></ul><p>Not just less important. <strong>Actually optional.</strong></p><p>If you get pride from writing beautiful, elegant code &#8211; from perfect abstractions and the craft itself &#8211; I&#8217;m genuinely sorry. That skill is about to become as relevant as hand-cranking a car engine. And &#8220;about to&#8221; means <em><strong>this year</strong></em>.</p><p>But if you get pride from solving real problems, shipping products that help users, understanding customer pain so deeply you can&#8217;t sleep until it&#8217;s fixed &#8211; <strong>you&#8217;re about to enter your golden age.</strong></p><h2>Developer vs Engineer</h2><p>Two people. Both technically proficient. Both can write clean code. One critical difference:</p><p><strong>Person A gets a ticket:</strong> &#8220;Add export to CSV functionality&#8221;</p><ol><li><p>Reads acceptance criteria</p></li><li><p>Writes the code</p></li><li><p>Adds tests</p></li><li><p>Submits PR</p></li><li><p>Picks up next ticket</p></li></ol><p><strong>Result:</strong> Beautiful, well-architected code.</p><p><strong>Person B gets the same ticket.</strong></p><ol><li><p>&#8220;Why do users want CSV export?&#8221;</p></li><li><p>&#8220;What are they doing with this data?&#8221;</p></li><li><p>Discovers: users export to CSV just to filter in Google Sheets</p></li><li><p>Realizes: <strong>&#8220;Our filtering sucks&#8221;</strong></p></li></ol><p><strong>Result:</strong> Solves the actual problem.</p><p>In a world where AI writes both solutions in seconds, only one of these people is valuable.</p><h2>It&#8217;s already happening</h2><p>I&#8217;m not predicting the future. 
I&#8217;m describing today.</p><p><strong>Right now,</strong> I can describe a feature to Claude or Cursor and get production-ready code. Not boilerplate. Full features with tests, error handling, and edge cases I hadn&#8217;t considered.</p><p><strong>Six months ago:</strong> AI-assisted coding was impressive but limited</p><p><strong>Today:</strong> Transformative</p><p><strong>In six months:</strong> Manual coding will feel like using a typewriter</p><p>The trajectory is exponential. We&#8217;re past the inflection point.</p><h2>Where pride lives</h2><p>The best developers &#8211; the ones who obsess over clean code and elegant solutions &#8211; often struggle most when features get deleted.</p><p>You spend three weeks building something beautiful. Refactor it twice. Write comprehensive tests. Then two months later: users don&#8217;t care, feature doesn&#8217;t work, delete 3,000 lines.</p><p><strong>If your pride was in the code:</strong> Devastating. Three weeks gone. Beautiful architecture deleted.</p><p><strong>If your pride was in learning:</strong> Progress! You helped the company learn faster. You discovered what NOT to build. That&#8217;s valuable.</p><h2>Nobody remembers your code</h2><p>No one has ever remembered a company because their code was clean.</p><p>Stripe, Figma, Notion &#8211; they didn&#8217;t win because of pristine codebases or perfect architecture. They won because they solved real problems better than anyone else.</p><p>The code was just a tool. Often, the winners were the ones willing to write &#8220;worse&#8221; code to ship faster and learn quicker.</p><p>Code quality matters &#8211; but only insofar as it helps you iterate faster, fix bugs quicker, and ship more confidently. 
It&#8217;s a tool for building better products, not an end in itself.</p><h2>Two camps</h2><p>I&#8217;m watching people divide into two camps:</p><p><strong>Camp 1:</strong> &#8220;If AI can write code, what&#8217;s my value?&#8221;</p><ul><li><p>Identity built on being &#8220;the person who codes&#8221;</p></li><li><p>Pride from technical prowess</p></li><li><p>Existential crisis when AI codes too</p></li></ul><p><strong>Camp 2:</strong> &#8220;Finally, I can build all the things I imagined!&#8221;</p><ul><li><p>Saw code as a tool (often a slow, error-prone tool)</p></li><li><p>Pride from solving problems</p></li><li><p>Thrilled the tool just got 10x better</p></li></ul><p><strong>We&#8217;re hiring for Camp 2.</strong></p><h2>What we&#8217;re looking for</h2><p>At Neople, we want people <strong>obsessed with customers.</strong></p><p>People who:</p><ul><li><p>Want to understand why a user sent a confused email at 2 AM</p></li><li><p>Dig into support tickets to understand frustration, not just close them</p></li><li><p>Can&#8217;t stop thinking about making the product better</p></li></ul><p>People who see a two-minute API timeout and get <em>angry</em> &#8211; not because the code offends them, but because <strong>making users wait offends them</strong>.</p><p>People who understand &#8220;the code is clean&#8221; means &#8220;we can ship features faster,&#8221; not &#8220;I&#8217;m a good engineer.&#8221;</p><p>Your value isn&#8217;t: Writing perfectly optimized algorithms</p><p>Your value is:</p><ul><li><p>Understanding what problem needs solving</p></li><li><p>Breaking complex problems into solvable pieces</p></li><li><p>Judging when a solution is good enough to ship</p></li><li><p>Knowing when to cut losses and try something different</p></li><li><p>Connecting user feedback to product decisions</p></li><li><p>Prioritizing ruthlessly based on impact</p></li></ul><p>These are product skills. Engineering skills in the truest sense. 
Not coding skills.</p><h2>The magic wand test</h2><p>Here&#8217;s how you know which camp you&#8217;re in:</p><p>Tomorrow, we give you a magic wand. You can build any feature instantly, without writing code. It just appears, working perfectly.</p><p><strong>What do you do?</strong></p><p><strong>&#8220;Wait, but how would I spend my time?&#8221;</strong></p><p>&#8594; You&#8217;ll struggle in the next six months. Not years. <strong>Months.</strong></p><p><strong>&#8220;Oh my god, I have so many problems I want to solve!&#8221;</strong></p><p>&#8594; We&#8217;re looking for you. We need you now.</p><h2>The shift is here</h2><p>We&#8217;re not approaching the end of &#8220;software developer&#8221; as a career. We&#8217;re in it. The transition is happening in real-time.</p><p>By mid-2026, most people building products won&#8217;t write code. They&#8217;ll direct AI agents, just like most product builders today don&#8217;t design circuit boards &#8211; they buy components and assemble them.</p><p>The valuable skill is <em>already</em> knowing what to build, who you&#8217;re building for, when something is good enough, and having the courage to delete what isn&#8217;t working.</p><p>We&#8217;re already in a world where code is nearly free. The only thing that matters is building the right thing.</p><h2>What this means for Neople</h2><p>At Neople, we&#8217;re not just building AI agents &#8211; we&#8217;re using them to build our product. We&#8217;re living in the future we&#8217;re creating.</p><p>The bottleneck is never the code. It&#8217;s always understanding: what users need, what&#8217;s working, what to build next.</p><p>The engineers who thrive here ask the best questions. They challenge requirements. They push back when something doesn&#8217;t make sense. They advocate for users even when it&#8217;s inconvenient.</p><p>They treat code as a tool, not a craft. They&#8217;d rather ship something imperfect today than perfect next month. 
They measure success in customer impact, not lines of code.</p><h2>The invitation</h2><p>If you&#8217;re feeling defensive about your code, your craft, the years you spent learning to write beautiful software &#8211; I get it. Change is hard.</p><p>But if you&#8217;re feeling excited &#8211; if you&#8217;ve always been more interested in the <em>why</em> than the <em>how</em>, more excited about solving customer problems than refactoring code &#8211; we should talk.</p><p>The future of software engineering isn&#8217;t about writing better code. It&#8217;s about building better products. And we need people eager to make that shift.</p><div><hr></div><p><em>Building the future of software at <a href="http://neople.io">neople.io</a>. Where code is the tool, not the goal.</em></p>]]></content:encoded></item><item><title><![CDATA[Privacy, schmivacy: who’s actually reading your AI chats?]]></title><description><![CDATA[Listen now | Think your AI chats are private? Adrie and Seyna unpack who can see your ChatGPT history, how &#8220;small subsets&#8221; of data really work, and what private actually means.]]></description><link>https://thejoylab.ai/p/ai-privacy</link><guid isPermaLink="false">https://thejoylab.ai/p/ai-privacy</guid><dc:creator><![CDATA[Adrie Smith Ahmad]]></dc:creator><pubDate>Thu, 27 Nov 2025 12:33:50 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/180100792/bb785ed92b1ab2abe15dee80fa4b4ec6.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>AI feels private&#8230; until you realize a &#8220;small subset&#8221; of billions of chats is still a lot of chats. <br><br>In this episode, Adrie and Seyna pour a very honest Riesling and dig into who can actually see your AI chats&#8212;human reviewers, employers, and even courts. They talk about ChatGPT privacy myths, subpoenaed conversations, Samsung-style data leaks at work, and what really happens when you paste sensitive IP into AI tools. 
It&#8217;s not doom-and-gloom, but it <em>is</em> a reality check: AI can be useful and joyful, just not private by default.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://thejoylab.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://thejoylab.ai/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Acceleration, instability & AI 2027]]></title><description><![CDATA[Listen now | AI isn&#8217;t just speeding up. It&#8217;s starting to accelerate itself. Let&#8217;s talk about what happens when the ground under our feet starts to move.]]></description><link>https://thejoylab.ai/p/ai-2027</link><guid isPermaLink="false">https://thejoylab.ai/p/ai-2027</guid><dc:creator><![CDATA[Seyna Diop]]></dc:creator><pubDate>Thu, 20 Nov 2025 12:05:09 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/179448345/df763ac91f41503e78bea128b290ca6a.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In this episode of <em>Wine + AI</em>, Adrie and Seyna uncork one of the most unsettling &#8212; and strangely energizing &#8212; conversations of the season: the accelerating path toward AI 2027. 
With a glass of <em>S&#243;l Vivo</em> in hand (an unfiltered, unpredictable Spanish white whose personality basically <em>is</em> a metaphor for the current AI landscape), they dive into the <a href="https://ai-2027.com/">now-viral research paper</a> forecasting two diverging futures: a slow, human-guided trajectory or a messy, self-accelerating AI feedback loop.</p><p>They explore real-world adoption gaps, global compute races, potential weaponization, Europe&#8217;s regulatory lag, and the looming question of whether humans will keep a hand on the wheel &#8212; or quietly step aside while AI builds.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://thejoylab.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">AI is speeding up. Your inbox should, too. Stay grounded, curious, and one glass ahead &#129346;</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The death of clean code: How AI ended our obsession with perfection ]]></title><description><![CDATA[Clean Code served us well. But it&#8217;s time to move on. 
In the LLM era, messy code that ships fast beats elegant code that ships late.]]></description><link>https://thejoylab.ai/p/the-death-of-clean-code</link><guid isPermaLink="false">https://thejoylab.ai/p/the-death-of-clean-code</guid><dc:creator><![CDATA[Job Nijenhuis]]></dc:creator><pubDate>Tue, 18 Nov 2025 10:22:33 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fddd334b-32a6-4c6a-adea-0ecb124c9aa3_900x894.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>The sacred cow we need to slaughter</h2><p>For two decades, &#8220;Clean Code&#8221; has been the engineering bible. We&#8217;ve obsessed over perfect abstractions, elegant patterns, and code that reads like poetry. We&#8217;ve spent hours debating naming conventions, refactoring for purity, and achieving that mythical 100% test coverage.</p><p>Here&#8217;s the uncomfortable truth: your users don&#8217;t care.</p><p>They don&#8217;t care if your code is clean. They don&#8217;t care about your SOLID principles. They don&#8217;t care if you used dependency injection or if your functions are under 20 lines.</p><p>They care if your product works. If it delights them. If it solves their problems.</p><p>And now, with LLMs, we can finally optimize for what actually matters.</p><h2>The old religion</h2><p>Clean Code made sense in its time. When humans were the bottleneck, when every line needed to be understood and maintained by other humans, when refactoring was expensive and bugs were hard to find.</p><p>We built cathedrals of code. Beautiful, elegant, pristine. We&#8217;d spend days crafting the perfect abstraction, weeks refactoring for cleanliness, months building frameworks for problems we might have someday.</p><p>Meanwhile, our users waited. Features shipped slowly. Bugs lingered while we debated architecture. 
We prioritized code quality over product quality, telling ourselves they were the same thing.</p><p>They&#8217;re not.</p><h2>Enter the LLMs</h2><p>Here&#8217;s what changes everything: LLMs don&#8217;t care about your code quality either.</p><p>They don&#8217;t get confused by long functions. They don&#8217;t mind repetition. They&#8217;re not impressed by your clever abstractions. They just need clarity and context.</p><p>Suddenly, the cost of &#8220;messy&#8221; code plummets. That spaghetti function you were going to spend a day refactoring? An LLM can understand it, modify it, and test it in minutes. That repeated code across five files? The LLM will update all instances flawlessly.</p><p>The bottleneck has shifted. It&#8217;s no longer &#8220;how maintainable is this code?&#8221; It&#8217;s &#8220;how fast can we ship value to users?&#8221;</p><h2>The new religion: Great products</h2><p>In the age of AI-assisted development, we optimize for different things:</p><p><strong>Ship speed over code elegance</strong>: That feature your users are begging for? Ship it today with &#8220;messy&#8221; code rather than next week with &#8220;clean&#8221; code.</p><p><strong>Documentation over abstraction</strong>: Instead of clever patterns that self-document, write explicit docs everywhere. LLMs will keep them updated.</p><p><strong>Redundancy over DRY</strong>: Repeat yourself. It&#8217;s fine. LLMs don&#8217;t get bored updating multiple places.</p><p><strong>Explicit over implicit</strong>: That &#8220;elegant&#8221; implicit behavior? Make it boringly explicit. LLMs need clarity, not cleverness.</p><p><strong>Product metrics over code metrics</strong>: Stop measuring test coverage. Start measuring user satisfaction.</p><h2>What this actually looks like</h2><p>Let me be concrete. Here&#8217;s code from our production system:</p><pre><code><code>// Old way: &#8220;Clean&#8221;
class EmailService extends AbstractMessageService {
  constructor(provider: IEmailProvider) {
    super(provider);
  }

  async send(message: Message): Promise&lt;void&gt; {
    return this.provider.dispatch(message);
  }
}

// New way: &#8220;Great Product&#8221;
async function sendEmail(to, subject, body) {
  // This function sends emails. It&#8217;s used by:
  // - Password reset flow
  // - User notifications
  // - Admin alerts

  console.log(`Sending email to ${to}`);

  if (!to || !subject || !body) {
    throw new Error('Missing required fields: to, subject, and body are all required');
  }

  try {
    await gmail.send({ to, subject, body });
    console.log('Email sent successfully');
  } catch (error) {
    console.error('Failed to send email:', error);
    // Try backup provider
    await sendgrid.send({ to, subject, body });
  }
}

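// Hypothetical call site, for illustration only (the callers listed in the
// comment above -- password reset, notifications, admin alerts -- would each
// invoke it like this):
//
//   await sendEmail('jane@example.com', 'Reset your password',
//     'Click the link below to choose a new password.');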
</code></code></pre><p>The second version would make Clean Code advocates weep. It&#8217;s longer, has console.logs, mixes concerns, has no abstraction. It&#8217;s also:</p><ul><li><p>Instantly understandable</p></li><li><p>Debuggable without diving through inheritance chains</p></li><li><p>Modifiable by any LLM without context about our architecture</p></li><li><p>Shipping value while the first version is still being architected</p></li></ul><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://thejoylab.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Ship joy, not just code. Follow us here for more heretical takes on building great products that users actually care about.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p><br>Follow <a href="https://neople.io">Neople</a> for more heretical takes on building great products that users actually care about.</p><p></p><h2>The liberation</h2><p>Letting go of Clean Code is liberating. Instead of agonizing over the perfect design pattern, I&#8217;m shipping features. Instead of refactoring for elegance, I&#8217;m fixing user-reported bugs. Instead of writing clever code, I&#8217;m writing clear code.</p><p>My code is messier. It has more documentation than logic. There&#8217;s redundancy everywhere. Console.logs litter the codebase. Functions are too long. Files are poorly organized.</p><p>And our product has never been better.</p><p>User satisfaction is up. Feature velocity has 10x&#8217;d. 
Bugs get fixed in hours, not weeks. We ship experiments daily instead of quarterly.</p><h2>The heresy that&#8217;s actually wisdom</h2><p>This is heresy in traditional software circles. I&#8217;m advocating for what we&#8217;ve been taught to avoid: messy code, poor abstractions, violated principles.</p><p>But principles are means, not ends. The end is great products that delight users. If messy code gets us there faster in the age of LLMs, then messy code is the right choice.</p><p>Your users don&#8217;t open your codebase. They open your product. Optimize accordingly.</p><h2>The new principles</h2><p>If Clean Code had its principles (DRY, SOLID, etc.), Great Product development has its own:</p><ol><li><p><strong>User value over code quality</strong>: Every decision should prioritize user impact</p></li><li><p><strong>Clarity over cleverness</strong>: LLMs need to understand your intent instantly</p></li><li><p><strong>Speed over perfection</strong>: Ship today, iterate tomorrow</p></li><li><p><strong>Documentation over abstraction</strong>: Explain everything, assume nothing</p></li><li><p><strong>Product metrics over code metrics</strong>: Measure what matters to users</p></li></ol><h2>The future is already here</h2><p>This isn&#8217;t theoretical. Teams using LLMs are already making this shift, whether they admit it or not. They&#8217;re shipping faster with &#8220;worse&#8221; code and better products.</p><p>The old guard will resist. They&#8217;ll point to technical debt, maintenance nightmares, and the collapse of software craftsmanship. They&#8217;re fighting the last war.</p><p>In the new world, technical debt is paid by LLMs in milliseconds. Maintenance is a conversation with an AI. Craftsmanship means crafting great products, not great code.</p><h2>Your choice</h2><p>You can stick to Clean Code. You can keep crafting perfect abstractions while your competitors ship features. 
You can maintain your principles while your users switch to products that actually solve their problems.</p><p>Or you can embrace the new reality. Write messier code. Ship faster. Focus on product quality over code quality. Let LLMs handle the maintenance burden while you handle the user delight burden.</p><p>From Clean Code to great products. Your users are waiting.</p><div><hr></div><p><em>Building great products with messy code at <a href="https://neople.io/">neople.io</a>. Our users don&#8217;t care about our code quality. They care that we ship what they need, when they need it.</em></p>]]></content:encoded></item><item><title><![CDATA[Who sets AI’s guardrails? Bias, power, and blurry lines]]></title><description><![CDATA[Listen now | Sex, drugs, nudity, violence. Where do AI guardrails draw the line? We unpack bias, cultural norms, corporate control, and why accountability must stay human.]]></description><link>https://thejoylab.ai/p/ai-guardrails</link><guid isPermaLink="false">https://thejoylab.ai/p/ai-guardrails</guid><dc:creator><![CDATA[Adrie Smith Ahmad]]></dc:creator><pubDate>Thu, 13 Nov 2025 10:24:35 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/177385084/67b37afd5315c4a8dff8255aad6cf6b1.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Adrie and Seyna get real about AI guardrails: what they block, who writes them, and why bias, culture, and corporate incentives shape &#8220;safety.&#8221; From content moderation (sex, nudity, violence) to age verification, hallucination/ refusal detection, and human-in-the-loop, they map where controls help&#8212;and where they quietly erase context. 
Bottom line: AI can assist, but accountability and values choices remain human decisions, not model settings.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://thejoylab.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Just getting started with getting real about AI. Tune in every time &#128071;</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[AI development workflow: Turning my codebase into an orchestra]]></title><description><![CDATA[Discover how Neople&#8217;s workflow uses multiple AI agents to build, review, and ship features in parallel &#8212; a glimpse into the future of AI-assisted software development.]]></description><link>https://thejoylab.ai/p/ai-development-workflow</link><guid isPermaLink="false">https://thejoylab.ai/p/ai-development-workflow</guid><dc:creator><![CDATA[Job Nijenhuis]]></dc:creator><pubDate>Tue, 11 Nov 2025 10:22:42 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5cd5b8d0-a3ad-482f-808a-dc5df506e312_1184x864.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This post gets into the weeds of how I manage multiple AI agents in a production codebase. If acronyms like MCP, PR, and git worktree make your eyes glaze over, you might want to grab coffee first. Or skip to our less technical posts. No judgment.</p><p>Still here? Excellent. 
Let me show you how I&#8217;ve turned software development into an AI orchestra.</p><h2>The setup: read-only safety net</h2><p>Here&#8217;s the foundation of my workflow:</p><pre><code><code>monorepo-collection/
&#9500;&#9472;&#9472; CLAUDE.md               (How to work with this codebase)
&#9500;&#9472;&#9472; TECHNICAL_OVERVIEW.md   (Architecture, patterns, conventions)
&#9500;&#9472;&#9472; main/                   (READ-ONLY, always on main branch)
&#9500;&#9472;&#9472; agent-1-feature/        (git worktree)
&#9500;&#9472;&#9472; agent-2-bugfix/         (git worktree)
&#9500;&#9472;&#9472; agent-3-refactor/       (git worktree)
&#9492;&#9472;&#9472; ...

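# Each agent-*/ folder above is a separate git worktree of this repository.
# They are created with a command along these lines (names illustrative):
#   git worktree add ../agent-1-feature feature/agent-1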
</code></code></pre><p>The <code>main</code> folder is sacred. It&#8217;s read-only, always on the main branch, never touched by agents. This is my safety net &#8211; a clean reference that agents can read but never corrupt. When an agent needs to understand our codebase, they look here. When they need to edit, they work elsewhere.</p><h2>The orchestra begins</h2><p>My day starts by spinning up multiple Claude Code agents. Each gets assigned differently:</p><ul><li><p><strong>Agent 1</strong>: Gets a Linear ticket through my MCP (Model Context Protocol) connection. It reads the ticket, understands requirements, checks related issues.</p></li><li><p><strong>Agent 2</strong>: Gets a brain dump: &#8220;Hey, that thing we discussed about optimizing the skill execution pipeline...&#8221;</p></li><li><p><strong>Agent 3</strong>: Gets context from a Slack conversation: &#8220;Users are reporting slow load times on the dashboard&#8221;</p></li></ul><p>Every agent has access to our global context document &#8211; a Claude-written markdown file that explains our architecture, where features live, coding standards, common patterns. It&#8217;s like giving each agent a company handbook on day one.</p><h2>The plan before the code</h2><p>Here&#8217;s where it gets interesting. Before any agent writes a single line, they create a plan. (Shift+Tab twice in Claude Code to enter plan mode, for those following along.)</p><p>The plan includes:</p><ul><li><p>Understanding of the problem</p></li><li><p>Proposed solution approach</p></li><li><p>Files they&#8217;ll need to modify</p></li><li><p>Potential risks or dependencies</p></li><li><p>Questions if anything&#8217;s unclear</p></li></ul><p>I review these plans like a technical lead reviewing design docs. &#8220;Agent 2, you&#8217;re missing the edge case where users have no skills. Agent 3, that optimization will break our caching layer.&#8221;</p><h2>Parallel development on steroids</h2><p>Once I approve a plan, the magic happens. 
Each agent automatically:</p><ol><li><p>Creates a new git worktree: <code>git worktree add ../agent-{timestamp}-{feature-name}</code></p></li><li><p>Starts implementing in their isolated environment</p></li><li><p>Commits with meaningful messages</p></li><li><p>Pushes to a feature branch</p></li><li><p>Creates a PR following our template</p></li></ol><p>I&#8217;m running 5-6 of these in parallel. While Agent 1 is implementing authentication changes, Agent 2 is refactoring our notification system, and Agent 3 is adding new API endpoints.</p><h2>The review loop of madness</h2><p>Here&#8217;s where it gets meta. Once an agent creates a PR, a GitHub Action spins up <em>another</em> Claude instance to review the code. An AI reviewing an AI&#8217;s code. We&#8217;re through the looking glass here.</p><p>The review catches things like:</p><ul><li><p>Style violations the linting missed</p></li><li><p>Potential performance issues</p></li><li><p>Security concerns</p></li><li><p>Missing test cases</p></li><li><p>Architectural inconsistencies</p></li></ul><p>Then I review the review AND the code. It&#8217;s reviews all the way down. I&#8217;m checking:</p><ul><li><p>Did the agent understand the requirements?</p></li><li><p>Did the reviewer catch all issues?</p></li><li><p>Are there subtle bugs both AIs missed?</p></li><li><p>Does this actually solve the user&#8217;s problem?</p></li></ul><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://thejoylab.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">For people who want AI to feel less like automation and more like orchestration. 
&#128071;</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><h2>The new bottleneck: human verification</h2><p>This workflow has completely flipped our constraints. Before, I was limited by how fast I could type. Now I&#8217;m limited by:</p><ol><li><p><strong>Testing capacity</strong>: I can generate 20 PRs a day, but can I properly test 20 features?</p></li><li><p><strong>Context switching</strong>: Jumping between 6 different features taxes my brain like nothing else</p></li><li><p><strong>Quality assurance</strong>: Ensuring each feature actually works in production, handles edge cases, provides good UX</p></li></ol><p>We&#8217;ve gone from &#8220;we need more developers&#8221; to &#8220;we need more testers and reviewers.&#8221; It&#8217;s a fundamental shift in how software teams need to operate.</p><h3>Evolving my verification flow</h3><p>I&#8217;m constantly improving my guardrails to reduce time in verification:</p><p><strong>Automated smoke tests</strong>: Every PR now auto-runs through user journey tests. If users can&#8217;t complete basic flows, I don&#8217;t even look at it.</p><p><strong>AI-written test plans</strong>: I have agents write comprehensive test plans for their own features. &#8220;Here&#8217;s what to test, here&#8217;s what could break, here&#8217;s the edge cases.&#8221;</p><p><strong>Preview environments</strong>: Every PR gets its own deployed preview. No more &#8220;works on my machine&#8221; &#8211; I can click a link and see it running.</p><p><strong>Structured review templates</strong>: Instead of freestyle reviewing, I follow checklists. Does it handle errors? Is the UX consistent? 
Are there security implications?</p><p>The goal is to shift from &#8220;I need to deeply understand this code&#8221; to &#8220;I need to verify this works correctly.&#8221; Different skill, different mindset.</p><h2>The mental model shift</h2><p>The biggest change? I&#8217;m no longer thinking in terms of &#8220;how do I implement this?&#8221; but rather:</p><ul><li><p>How do I decompose this for AI understanding?</p></li><li><p>What context does the agent need?</p></li><li><p>How can I verify this was built correctly?</p></li><li><p>What&#8217;s the highest-leverage use of my human judgment?</p></li></ul><p>I&#8217;m rebuilding my entire approach to software development every few weeks as I discover new patterns and optimizations. It&#8217;s exhilarating and exhausting.</p><h2>What actually breaks</h2><p>Let me be real about what doesn&#8217;t work:</p><ul><li><p>Agents sometimes create circular dependencies between PRs</p></li><li><p>They occasionally modify files in unexpected ways</p></li><li><p>The review agent might miss context from other parallel work</p></li><li><p>Integration tests become nightmares with 6 parallel feature branches</p></li></ul><p>But the productivity gains are so massive that working through these issues is worth it.</p><h2>From clean code to great product</h2><p>Here&#8217;s something wild I&#8217;m discovering: our codebase needs to evolve for our new colleagues. I&#8217;ve been tracking every error Claude Code hits, and patterns emerge. The agent expects files in certain places, looks for documentation that doesn&#8217;t exist, assumes conventions we never established.</p><p>But here&#8217;s the real insight: we&#8217;re not moving from &#8220;Clean Code&#8221; to some other coding philosophy. We&#8217;re moving from &#8220;Clean Code&#8221; to <strong>&#8220;Great Product&#8221;</strong>.</p><p>None of your customers care about your code quality. 
They don&#8217;t care about your test coverage, your linting rules, or your elegant abstractions. They care about whether your product works, delights them, and solves their problems.</p><p>LLMs let us stop obsessing over code craftsmanship and start obsessing over product excellence. The code becomes a means to an end, not the end itself.</p><h3>What great product development looks like</h3><p><strong>Documentation everywhere</strong>: Markdown files in every directory explaining what lives there. Humans are too lazy to maintain docs, but LLMs never skip documentation. They&#8217;ll update that <code>README.md</code> every single time they touch the code.</p><p><strong>Explicit over implicit</strong>: That clever convention you have where <code>userService</code> automatically connects to the user database? Spell it out. AI agents don&#8217;t do tribal knowledge.</p><p><strong>Claude Code hooks</strong>: <code>.claude/</code> directories with agent-specific instructions. &#8220;When working in this directory, always run these tests.&#8221; &#8220;This service has these dependencies.&#8221;</p><p><strong>Aggressive automation</strong>: Pre-commit hooks, linters, formatters &#8211; everything that can be automated, should be. Not because humans need it (we ignore half of it anyway) but because agents follow rules religiously.</p><p><strong>Error messages that teach</strong>: Instead of <code>Error: Invalid config</code>, we need <code>Error: Invalid config. Expected format: {host: string, port: number}. See docs/configuration.md</code>.</p><p>I&#8217;m literally restructuring our entire repository not for my human colleagues, but for my AI ones. And you know what? It&#8217;s liberating. Instead of agonizing over the perfect abstraction, I&#8217;m shipping features. Instead of refactoring for elegance, I&#8217;m improving user experience.</p><p>The code is messier. The documentation is everywhere. There&#8217;s redundancy. 
And our product has never been better.</p><h2>The future is already here</h2><p>This workflow sounds insane because it is. I&#8217;m managing a team of AI developers, with AI reviewers, in parallel branches, shipping features at a pace that would have required a team of 10 just a year ago.</p><p>It&#8217;s not perfect. It&#8217;s not even stable &#8211; I&#8217;m tweaking the process constantly. But it&#8217;s so powerful that going back to solo development feels like trying to build a house with a toy hammer.</p><p>The bottleneck has shifted from creation to verification. The challenge has moved from &#8220;can we build it?&#8221; to &#8220;can we ensure it works?&#8221;</p><p>Welcome to the weird, wild future of software development. Bring coffee. Lots of coffee.</p><div><hr></div><p><em>Building the future of AI-assisted development at <a href="https://neople.io/">neople.io</a>. Follow our journey as we figure out what it means when AI can code faster than humans can review.</em></p>]]></content:encoded></item><item><title><![CDATA[AI in government: Genius upgrade or fever dream?]]></title><description><![CDATA[Westworld meets Westminster: AI ministers, deepfakes, accountability gaps, and the few places AI actually helps&#8212;citizen input, policy drafting, transparency.]]></description><link>https://thejoylab.ai/p/ai-in-government</link><guid isPermaLink="false">https://thejoylab.ai/p/ai-in-government</guid><dc:creator><![CDATA[Adrie Smith Ahmad]]></dc:creator><pubDate>Thu, 06 Nov 2025 10:24:11 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/177354658/6f57f40b3136e59717c4a55b394a6c76.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Adrie and Seyna unpack AI in government: from Albania-style AI ministers and the accountability vacuum to deepfakes eroding voter trust. They contrast dystopia with sane uses, AI for citizen feedback analysis, policy comparison, and administrative transparency. 
The thesis: keep humans in front, use AI under the hood, and rebuild institutional trust before shipping synthetic leadership.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://thejoylab.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Vote yes to better AI banter &#128071;</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[When a digital coworker thinks like a human]]></title><description><![CDATA[Discover how Neople&#8217;s digital coworker, Nora, surprised her creators by finding a creative workaround inside Gmail &#8212; showing what &#8220;vibe working&#8221; with AI truly means.]]></description><link>https://thejoylab.ai/p/ai-problem-solving</link><guid isPermaLink="false">https://thejoylab.ai/p/ai-problem-solving</guid><dc:creator><![CDATA[Job Nijenhuis]]></dc:creator><pubDate>Tue, 04 Nov 2025 10:22:10 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/acc1db1d-b4b7-4eb6-a88e-1f6b206dceb1_1024x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Sometimes the most amazing moments in building AI happen when you least expect them. Last week, our digital colleague Nora did something that made us all stop and go &#8220;wait, did she really just&#8230;?&#8221;</p><p>Let me back up a bit. 
At Neople, we&#8217;re building what we call &#8220;<a href="https://thejoylab.ai/p/vibe-working">vibe working</a>&#8221; &#8211; essentially, we&#8217;re creating digital colleagues who can handle the repetitive, time-consuming tasks that eat up your day.</p><p>Our first prototype of our newest product, Nora, has access to tools like Notion and Gmail, and she can perform &#8220;skills&#8221; &#8211; sequences of actions that run on schedules or triggers. Think &#8220;organize my inbox every morning at 9 AM&#8221; or &#8220;update my project board whenever I get a new client email.&#8221;</p><p>The key thing about these skills is they run in the background, asynchronously. Once you set them up, Nora does her thing without needing to check in with you every five minutes. It&#8217;s like having a colleague who just quietly handles stuff while you focus on the work that actually needs your brain.</p><h1>The label dilemma</h1><p>So here&#8217;s what happened. We&#8217;d asked Nora to automatically label incoming emails &#8211; pretty standard inbox management stuff. She was cruising along, analyzing emails, figuring out which ones needed which labels. Then she hit a snag: the label she needed didn&#8217;t exist, and she didn&#8217;t have the permission to create new labels in Gmail.</p><p>Now, a typical automation would just&#8230; fail.</p><p>Maybe throw an error. Maybe skip the email. End of story.</p><p>But Nora? She got creative.</p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://thejoylab.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Want to be the first to see how &#8220;vibe working&#8221; keeps developing? 
We won&#8217;t gatekeep &#128071;</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h1>The &#8220;Wait, what?&#8221; moment</h1><p>Instead of giving up, Nora did something we hadn&#8217;t explicitly asked her to do. She created a draft email in Gmail with the subject line &#8220;[ACTION REQUIRED] Create Label&#8221; and politely explained the situation. She basically said, &#8220;Hey, I need this label to do my job properly. Could you create it for me?&#8221;</p><p>When we saw this, we were genuinely stunned. She&#8217;d found a way to communicate with her human colleague using the tools she had available. It wasn&#8217;t a pop-up notification or an error message &#8211; it was a draft sitting in their Gmail, waiting for them to notice it during their regular email check.</p><h2>What&#8217;s happening here</h2><p>This might seem like a small thing, but it represents something bigger. <strong>Nora didn&#8217;t just follow a script &#8211; she problem-solved.</strong> She understood her limitation, recognized she needed help, and found a creative way to ask for it using the tools at her disposal.</p><p>It&#8217;s these moments that make you realize we&#8217;re not just building automation; we&#8217;re building digital colleagues who can actually think through problems. The fact that she chose to create a draft (not send an email that might get lost, not fail silently) shows a level of thoughtfulness that feels almost&#8230; human?</p><h2>The future of vibe working</h2><p>This is exactly the kind of collaboration we envision when we talk about vibe working. 
It&#8217;s not about replacing human creativity or decision-making &#8211; it&#8217;s about having digital colleagues who can handle the routine stuff and know when to tap you on the shoulder for the things that need a human touch.</p><p>Nora&#8217;s creative solution reminded us why we&#8217;re building Neople in the first place. We want digital colleagues who don&#8217;t just execute tasks but actually work <em>with</em> you, finding ways to bridge gaps and solve problems together.</p><p>So here&#8217;s to Nora and her draft email workaround. Sometimes the best features aren&#8217;t the ones you plan &#8211; they&#8217;re the ones your digital colleague figures out on her own.</p><div><hr></div><p><em>Curious about having your own digital colleague? Check out what we&#8217;re building at <a href="https://neople.io/">neople.io</a></em></p><div><hr></div>]]></content:encoded></item></channel></rss>