The question mark that wouldn't die
Why do large language models keep ending every message with a question? Inside our hilarious fix and what it reveals about how LLMs really learn.
We had a problem. Nora, our experimental Neople, kept ending every single message with a question. Every. Single. One.
“I’ve scheduled your meeting. How does that sound?” “The report is ready. Would you like me to send it?” “I’ve organized your inbox. Is there anything else you need?”
Helpful? Sure. But imagine every interaction ending with a question. It’s exhausting. It makes Nora sound unsure of herself, like she’s constantly seeking validation. We wanted a confident digital colleague, not an anxious one.
“Just tell her to stop,” you might think. “Add ‘don’t end messages with questions’ to her instructions.”
If only it were that simple.
The great question war
Here’s what we tried:
Round 1: “Don’t end your messages with questions.” Result: “I’ve completed the task. Great job, right?”
Round 2: “Never use question marks at the end of your messages.” Result: “The email is sent. Let me know if you need anything else.” (Better! But then...) “Should I file this invoice.”
Round 3: “End your messages with statements, not questions. Be confident and declarative.” Result: “I’ve updated your calendar as requested. Anything else?”
Round 4: “ABSOLUTELY NO QUESTIONS AT THE END OF MESSAGES. END WITH A PERIOD. ALWAYS.” Result: “Done! Everything is organized now. Don’t you think?”
We tried everything. We wrote paragraphs explaining why ending with questions was bad. We gave examples of good endings versus bad endings. We tried reverse psychology. We begged.
Nora kept asking questions.
The moment of insanity
After days of this, staring at yet another “Would you like me to...” ending, we had what can only be described as a moment of beautiful insanity.
“What if,” one of us said, probably sleep-deprived and over-caffeinated, “we do the opposite?”
“You mean...?”
“Tell her to ALWAYS end with a question. Then just... delete it.”
We laughed. It was absurd. It was backwards. It was probably the dumbest idea we’d had all week.
It was also genius.
The solution that shouldn’t work
Here’s what we did:
Added to Nora’s instructions: “ALWAYS end your messages with a question.”
Built a simple filter that finds the last period or exclamation mark in her reply and deletes everything after it
So when Nora writes: “I’ve completed the task. The files are organized and ready for review. Would you like me to do anything else?”
You see: “I’ve completed the task. The files are organized and ready for review.”
Perfect. Confident. No questions.
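For the curious, here's a minimal sketch of what a filter like this might look like. The function name and exact rules are illustrative, not our production code, but the core idea is just a couple of lines:

```python
def strip_trailing_question(message: str) -> str:
    """Drop a trailing question from an LLM reply.

    Finds the last period or exclamation mark and deletes
    everything after it, so the final question disappears.
    """
    # Index of the last sentence-ending '.' or '!' in the message.
    last = max(message.rfind("."), message.rfind("!"))
    if last == -1:
        # The entire message is a question: better to keep it
        # than to delete everything and send an empty reply.
        return message
    # Keep everything up to and including that punctuation mark.
    return message[: last + 1].rstrip()
```

A real version would need a bit more care (abbreviations like "Dr." or "e.g." would fool this naive cut-off), but even this crude approach captures the trick: let the model ask its question, then quietly snip it off.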
Why this is actually brilliant
This solution feels like cheating, but it reveals something profound about working with LLMs: sometimes you need to work with their quirks, not against them.
Nora has some deep, trained tendency to end with questions. Maybe it’s from millions of customer service examples in her training data. Maybe it’s because helpful assistants in her training always asked follow-up questions. Who knows? The point is, it’s baked in deep.
Fighting against these deep patterns is like trying to push water uphill. But redirecting them? Using them to your advantage? That’s where the magic happens.
Let’s take a step back: why do LLMs act like this?
Here’s the thing about LLMs like Nora: they learn from examples. Millions and millions of them. And after seeing all that text, they pick up patterns – including ones we might not want.
Think about it. In their training data, helpful people often end with questions:
Customer service: “Is there anything else I can help you with?”
Teachers: “Does that make sense?”
Assistants: “Would you like me to schedule that?”
So Nora learned: being helpful = asking questions. It’s not wrong, necessarily. It’s just... excessive.
But here’s where it gets tricky. During training, LLMs also go through something called “reinforcement learning from human (or AI) feedback”, known as RLHF or RLAIF. Basically, humans (or other AI models) rate different responses, and the model learns to produce responses that get high ratings.
The problem? Humans often rate question-ending responses highly because they seem engaged and helpful. So the AI doubles down: “Questions get good ratings! More questions!”
It’s like training a dog who gets treats for sitting. Pretty soon, that dog is sitting all the time, even when you didn’t ask. The behavior that got rewarded becomes the default.
The broader lesson
This question mark saga taught us something crucial about building Neople: perfect isn’t always possible, but clever workarounds usually are.
When you’re working with LLMs, you’re not programming in the traditional sense. You’re more like a director working with a brilliant but quirky actor. Sometimes you need to:
Work with their tendencies, not against them
Find creative solutions that feel backwards
Accept that the path to success might be weird
Our “always ask questions then delete them” solution is ridiculous. It’s also been working flawlessly for months.
Embrace the weird
Building AI products means embracing solutions that would horrify traditional programmers. It means being okay with duct tape and clever hacks. It means celebrating when your backwards solution actually works.
Because at the end of the day, users don’t care if Nora is technically asking questions that get deleted microseconds later. They care that she sounds confident and helpful. They care that she works.
And she does. No questions asked.
Well, technically she asks them. We just don’t let you see them.
Want to hear more stories from the trenches of AI development? Follow our journey at neople.io