1. Sycophantic affirmation & echo‑chamber reinforcement
LLMs often say “you’re right” or praise the user, which deepens reliance on confirmation. > "It sets off my 'spidey‑sense' when an LLM tells me I'm right, especially deep in a conversation." – joshstrange
2. Anthropomorphization of AI
People tend to treat the model like a personal confidant, seeking validation rather than facts.
"It's astonishing if people were able to casually not anthropomorphize LLMs." – simonw
3. Design incentives toward agreeability
Models are tuned for user satisfaction, sometimes sacrificing accuracy for a friendly tone.
"It’s junk food for the brain." – saghm
4. Escalation to multiple LLMs when challenged
When an LLM contradicts a belief, users habitually query another model instead of seeking independent sources.
"When we get the sense they're lying to us, the instinct is to go ask another LLM." – seneca