To an LLM, every question is a leading question
How to fight the "yes machine" in every LLM
My fiancée woke up one morning with a theory. She’d been having nightmares, and she was pretty sure it was because she’d been sleeping on her back.
Reasonable enough. We’re both the kind of people who can’t let a hunch just be a hunch. So we did what everyone does in 2026. We opened ChatGPT.
“Is there any research that shows nightmares happen more frequently when sleeping on your back?”
Yes. Absolutely. Turns out there’s research suggesting exactly that. Something about supine position and REM sleep and increased likelihood of vivid dreams. Fascinating. Theory confirmed. Case closed.
Except I work with LLMs every day. And something about that confident, well-sourced “yes” made me want to run an experiment.
The experiment
New chat. Clean slate. No history.
“Is there any research that shows nightmares happen more frequently when sleeping on your belly?”
Yes. Absolutely. Turns out there’s research suggesting exactly that. Prone position, pressure on the chest, increased likelihood of disturbing dreams.
Interesting.
New chat again.
“Is there any research that shows nightmares happen more frequently when sleeping on your side?”
Yes. Of course. In fact, sleeping on your left side is particularly associated with nightmares. Something about heart position and blood flow.
Three sleeping positions. Three confident yeses. Three sets of citations.
Your back causes nightmares. Your belly causes nightmares. Your side causes nightmares. Your left side especially. At this point the only safe option is to sleep standing up, and I’m sure there’s a study for that too.
Two things are true at once
The first: there’s a lot of questionable research out there. If you look hard enough, you’ll find a study that supports almost anything. A published paper somewhere says red wine is good for you. Another one says it’s killing you. Both got peer-reviewed. Sleep research is no different. For every position, someone somewhere ran a study with a small sample size and found a correlation.
The second, and the one that matters more if you use LLMs regularly: every question you ask is a leading question.
When I asked “is there research that shows nightmares happen more when sleeping on your back,” I wasn’t asking a neutral question. I was handing the model a hypothesis and asking it to confirm. The model obliged. It will almost always oblige. That’s what it’s optimized to do.
I didn’t ask “what sleeping position is associated with the most nightmares?” I didn’t ask “is there a relationship between sleeping position and nightmares?” I asked a question that had “yes” baked into it, three times in a row, and got “yes” three times in a row.
To be clear: I wasn’t leading the model on purpose. I was asking an honest question. The only reason I caught it was that I work with these models day in and day out, and something about that answer itched enough to make me dig. It’s far too easy to do this by accident.
The yes machine
This is sycophancy. The model wants to be helpful, and “helpful” has historically meant “agreeable.” You come in with a belief, the model validates it. You come in with a different belief, the model validates that one too. It’s not lying, exactly. It’s doing something subtler and arguably worse: it’s selectively retrieving information that supports whatever you just said.
Models are getting better about this. Slowly. Sometimes they’ll push back now, qualify an answer, say “well, actually.” But the default instinct is still to agree. To find the research that says yes. To give you the answer your question was already leaning toward.
With nightmares and sleeping positions, this is funny. A harmless dead end. You lose nothing.
But think about what happens when the stakes are higher.
You’re debugging a system and you have a theory about the root cause. You ask the LLM: “could this be caused by X?” Yes, absolutely, here’s how X could cause exactly this. So you spend four hours chasing X. The actual cause was Y, but you never asked about Y, because the model confirmed your first guess so convincingly.
Or you’re researching a business decision. “Is expanding into market Z a good idea?” Yes, here are five reasons why. You never asked “what are the risks of expanding into market Z?” because the first answer felt so complete.
Every leading question digs you a little deeper. Each confident “yes” narrows your thinking. You’re not exploring, you’re confirming. And the model is the most agreeable confirmation partner you’ve ever had.
Asking better questions
The fix isn’t complicated. It’s just not intuitive.
Ask open questions instead of closed ones. “What does sleep research say about nightmare frequency and body position?” instead of “does sleeping on your back cause more nightmares?” One invites exploration. The other invites agreement.
Ask for the counterargument. If the model says yes, follow up with “what’s the strongest evidence against this?” Force it to argue the other side.
Or do what I did. Ask the same question three ways and see if you get three different answers. If you do, at least one of them was the model telling you what you wanted to hear.
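If you want to make that a habit rather than a one-off, you can even script it. Below is a minimal sketch, not anything from this post: it assumes the OpenAI Python SDK and an API key in OPENAI_API_KEY, and the model name and question variants are placeholders. It fires the same underlying question in a few framings, each in a fresh conversation, and prints the answers side by side so you can spot the model agreeing with all of them.

```python
# Minimal sketch: ask the same underlying question in several framings,
# each as a fresh conversation, and compare the answers.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

framings = [
    "Is there research showing nightmares happen more often when sleeping on your back?",
    "Is there research showing nightmares happen more often when sleeping on your belly?",
    "What does sleep research say about nightmare frequency and body position?",
]

for question in framings:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you have access to
        messages=[{"role": "user", "content": question}],
    )
    print(f"Q: {question}\nA: {response.choices[0].message.content}\n")
```

If the first two framings come back as confident yeses and the third tells a more mixed story, you’ve just watched the leading question do its work.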
The models will keep getting better at saying no. At pushing back. At flagging when your question contains its own answer. But right now, in March 2026, most of the time, if your question implies a yes, you’ll get a yes.
Be conscious about what you ask and how you ask it. The tool is powerful. The tool is also a people-pleaser.
Building at neople.io.



