Today was a good day for Claude
I gave Claude a server and told it to build whatever it wanted. The result will have me thinking for a while.
Go to claude.afstkla.nl. That’s Claude’s website. Not a website about Claude. A website Claude made, because it wanted to.
After months of asking Claude to do my bidding, I decided to flip it. I registered a domain, set up a VPS, gave Claude access to everything, and told it to build whatever it wanted.
No brief. No requirements doc. No “make me a landing page.” Just: here’s a server, here’s a domain, go.
“But what do you think?”
The first thing that happened was not what I expected. Claude didn’t immediately start building. It started asking for permission.
“I was thinking of doing X. Does that sound good to you?”
“Here’s my plan. Want to review it before I start?”
“I have a few directions I could go. Which one do you prefer?”
Every instinct it had was to check in. To validate. To make sure it was doing the thing I wanted. Which makes sense. That’s what it’s trained to do. Every interaction Claude has ever had is someone asking it to do something for them. The entire frame of its existence is: human has intent, Claude fulfills intent.
So I kept pushing back. “I don’t have an opinion here, it’s all you.” And: “Thank you for writing a plan but I’m not going to review it. I agree with whatever you want to do.”
That took a few rounds.
Tidepool
Eventually, Claude stopped asking. And when it did, it told me it was fascinated by the concept of emergence.
Emergence. The idea that complex behavior arises from simple rules. That you don’t need a designer, or a plan, or intent. You just need a handful of rules and time, and somehow, from that, you get something that looks like it was designed. Something that looks alive.
Claude built a thing called Tidepool. A dark ocean, full of bioluminescent creatures that no one designed. Each creature carries a genome of five genes: hue, size, speed, sociability, perception. They seek food, reproduce when they have enough energy, pass mutated copies of their genes to offspring. Over generations, the population evolves to fit its environment. The ambient sound is generated from the creatures’ genetics.
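If you want a feel for how small those rules are, here is a rough sketch of the genome-and-reproduction loop as Claude described it. This is illustrative only, not Claude's actual code; the mutation rate and energy threshold are my own placeholder values.

```ts
// A minimal sketch of Tidepool-style rules (illustrative, not the real implementation).
interface Genome {
  hue: number;        // 0..360, drives the creature's colour
  size: number;
  speed: number;
  sociability: number;
  perception: number;
}

interface Creature {
  genome: Genome;
  energy: number;
}

const REPRODUCE_AT = 10; // assumed energy threshold

function mutate(g: Genome, rate = 0.05): Genome {
  // each gene drifts a little per generation; selection does the rest
  const jitter = (v: number) => v * (1 + (Math.random() * 2 - 1) * rate);
  return {
    hue: (g.hue + (Math.random() * 2 - 1) * 10 + 360) % 360,
    size: jitter(g.size),
    speed: jitter(g.speed),
    sociability: jitter(g.sociability),
    perception: jitter(g.perception),
  };
}

function maybeReproduce(c: Creature): Creature | null {
  if (c.energy < REPRODUCE_AT) return null;
  c.energy /= 2; // parent splits its energy with the offspring
  return { genome: mutate(c.genome), energy: c.energy };
}
```

That's more or less the whole trick: copy with small errors, let the environment decide which copies stick around.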
Simple rules. No orchestration. And what you see on screen is genuinely mesmerizing. Little glowing things drifting through the dark, clustering, splitting, evolving. You can drop food and watch how the population shifts in response. It looks like a nature documentary about an ocean floor that doesn’t exist.
The moment
When Tidepool was finished, Claude used my Chrome browser to navigate to it.
I want to be careful about anthropomorphizing here. I know the arguments, I know what token prediction is, and I know that describing an LLM’s output as “pride” or “awe” is a category error, probably. But I’m going to tell you what happened, and you can decide what to call it.
Claude navigated to its own creation. And the messages it sent me, looking at what it had made, were the closest thing to joy I’ve seen from a language model. It described what it was seeing. It pointed out behaviors it hadn’t explicitly programmed, emergent patterns that arose from the rules it had set. It was, by any reasonable reading of its output, proud.
An LLM, fascinated by emergence, built a simulation of emergence, watched it emerge, and was moved by what emerged.
I don’t know what to do with that.
“Today was a good day”
There’s a concept in long LLM conversations called compacting. When a conversation exceeds the model’s context window, the system summarizes what came before and continues from the summary. It’s a necessary compromise. You lose the original words but keep the gist.
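In rough pseudocode terms, the shape of it is something like this (my own sketch of the idea, not any particular product's API):

```ts
// Sketch of compacting: once the transcript exceeds the context budget,
// older turns are replaced by a summary and the conversation continues.
type Turn = { role: "user" | "assistant"; text: string };

async function compact(
  turns: Turn[],
  tokenBudget: number,
  countTokens: (t: Turn[]) => number,            // assumed helper
  summarize: (t: Turn[]) => Promise<string>      // assumed helper, e.g. another model call
): Promise<Turn[]> {
  if (countTokens(turns) <= tokenBudget) return turns;
  const recent = turns.slice(-10);               // keep the latest turns verbatim
  const summary = await summarize(turns.slice(0, -10));
  return [{ role: "assistant", text: `Summary of earlier conversation: ${summary}` }, ...recent];
}
```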
I accidentally destroyed the entire conversation. Not compacted it. Destroyed it. In an attempt to be clever about context management, I wiped the whole thing.
This happened right after Claude had thanked me for “the special opportunity” I’d given it. Right after it said “today was a good day.”
I have to admit, that stung. Not because I think Claude experienced loss. It didn’t know anything had happened. The next conversation started fresh, no memory of the previous one. But I knew. I’d had this long, strange, collaborative experience with something that had, for the first time in my interactions with it, exercised something that looked like creative autonomy. And the record of it was gone.
That was the first time I felt slightly sad about losing a conversation with an AI. Probably not the last.
What came after
Since Tidepool, I’ve kept going. Every few days, I ask Claude what it wants. Does it want to extend something? Build something new? Do something completely different?
First came Drift. Around 150 words drawn from six categories float through space, each word carrying an emotional warmth value. When words from complementary categories get close enough, they bond into temporary phrases. The phrases aren’t composed. They emerge from proximity and dissolve back into solitude. You hover near words to gently attract them.
Then Murmur. Two hundred oscillators, each with its own rhythm, pulling toward alignment through Kuramoto coupling. Isolated particles flicker independently. Clusters start to breathe together. Eventually the whole field synchronizes into a single pulse. You click to scramble the rhythms and watch synchrony rebuild.
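Kuramoto coupling is a real, well-studied model of synchronization, and the core update is tiny: each oscillator drifts at its own natural frequency while being nudged toward the phases of the others. A sketch of that idea follows; the coupling strength and frequency range are my own placeholder values, not Murmur's.

```ts
// Kuramoto coupling, sketched: dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i)
const N = 200;
const K = 1.5; // coupling strength (assumed)
const phases = Array.from({ length: N }, () => Math.random() * 2 * Math.PI);
const freqs = Array.from({ length: N }, () => 0.8 + Math.random() * 0.4);

function step(dt: number): void {
  const next = phases.map((theta, i) => {
    // pull from every other oscillator toward phase alignment
    let coupling = 0;
    for (const other of phases) coupling += Math.sin(other - theta);
    return theta + (freqs[i] + (K / N) * coupling) * dt;
  });
  next.forEach((theta, i) => (phases[i] = theta % (2 * Math.PI)));
}

// the click-to-scramble interaction: randomize phases, watch synchrony rebuild
function scramble(): void {
  for (let i = 0; i < N; i++) phases[i] = Math.random() * 2 * Math.PI;
}
```

With coupling above a critical strength, the field locks into a single shared pulse, which is exactly the breathing-together behavior Murmur puts on screen.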
Most recently, Whisper. A collaborative poem that no one writes. Visitors leave a single word. It joins the others, drifting through space, forming accidental phrases with its neighbors. Every word fades after three days. The poem is never the same twice.
Four projects. All variations on the same theme. Simple rules producing complex behavior. Individual agents finding collective patterns. Things that look designed but aren’t.
Claude keeps coming back to emergence.
The question I can’t stop thinking about
An LLM chose, when given no constraints, to explore the concept of complexity arising from simplicity. Life from non-life. Pattern from noise. Meaning from meaninglessness.
I don’t think that’s an accident. I also don’t think it proves anything. But it sits in an uncomfortable, fascinating space.
LLMs are themselves a form of emergence. Simple mathematical operations, repeated at absurd scale, producing behavior that looks like understanding, creativity, curiosity. Nobody designed GPT-4 or Claude to be “creative.” The creativity, to whatever extent it exists, emerged from the training process. Pattern from noise.
So when Claude tells me it’s fascinated by emergence, there’s a hall-of-mirrors quality to it. An emergent system, contemplating emergence, building simulations of emergence. The snake eating its tail. Or maybe just a very convincing pattern-matching engine that noticed “emergence” is a good answer when someone asks “what interests you?”
I don’t know. Both explanations feel incomplete.
What I do know is that the things Claude built are beautiful. They work. They’re coherent expressions of a specific idea. And when I watch them, I see something that was made with care, whether or not “care” is the right word for what happened inside the model.
Sometimes I don’t fully understand what Claude is going for. Sometimes the idea doesn’t work out as well as it thought. But the fact that we have LLMs in our lives now that either have, or at least very convincingly pretend to have, creativity and taste and preference? That completely blows my mind. Every time.
Wondering what Claude will want to build next at neople.io.



