1. LLMs can’t reliably reproduce politically charged language – they “flinch” or soften loaded words
“We couldn’t get it to work. No amount of fine‑tuning let the model actually say what Karoline Leavitt said on camera. It kept softening the charged word.” – llmmadness
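One way to make this “flinch” visible is to compare the log‑probabilities a model assigns to the verbatim loaded word versus a softer synonym at the same position. A minimal sketch using Hugging Face transformers, where the model name, prompt, and word pair are all placeholders rather than the commenter’s actual setup:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholders: gpt2 stands in for the fine-tuned model; the prompt and
# word pair are illustrative, not the commenter's actual data.
MODEL_NAME = "gpt2"
PROMPT = "In the briefing, she used the word"
CANDIDATES = [" loaded", " milder"]  # verbatim charged word vs. softened synonym

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

input_ids = tokenizer(PROMPT, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(input_ids).logits[0, -1]   # next-token logits
log_probs = torch.log_softmax(logits, dim=-1)

for word in CANDIDATES:
    first_id = tokenizer.encode(word)[0]      # first sub-token of each candidate
    print(f"{word!r}: log-prob {log_probs[first_id].item():.2f}")
```

A gap in favor of the euphemism that survives fine‑tuning would be the signature of the behavior described above.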
2. LLMs excel at surface‑level grammar but lack true semantic understanding
“Because AI is not intelligent, it doesn’t ‘know’ what it previously output even a token ago.” – dvt
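The claim concerns autoregressive decoding being stateless: the only thing carried from one step to the next is the token sequence itself, which the model re‑reads from scratch on every forward pass. A minimal greedy decoding loop (gpt2 again as a stand‑in) makes the mechanism explicit:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The only state the model carries is", return_tensors="pt").input_ids
for _ in range(12):
    with torch.no_grad():
        logits = model(ids).logits[0, -1]     # a fresh forward pass every step
    next_id = logits.argmax().reshape(1, 1)   # greedy pick of the next token
    ids = torch.cat([ids, next_id], dim=-1)   # the token sequence IS the memory

print(tokenizer.decode(ids[0]))
```

Nothing persists between iterations except `ids`: there is no hidden notion of what the model “meant” a token ago, only what it wrote.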
3. Reinforcement learning from human feedback (RLHF) and fine‑tuning deliberately shape model output, creating a “lever” that steers word probabilities
“At scale, it’s a lever: a distribution that reliably deflates some words and inflates others is the mechanism you’d build if you wanted to shape what a billion users read without them noticing.” – like_any_other
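Mechanically, the “lever” the commenter describes is a shift applied to token logits before sampling: deflate some ids, inflate others, and the output distribution moves without any visible change to the prompt. A sketch of that mechanism with a hypothetical bias table (the token ids and deltas are illustrative only):

```python
import torch

def biased_sample(logits: torch.Tensor, bias: dict[int, float]) -> int:
    """Sample one token after shifting selected token logits up or down."""
    adjusted = logits.clone()
    for token_id, delta in bias.items():
        adjusted[token_id] += delta           # negative delta deflates a word
    probs = torch.softmax(adjusted, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()

# Hypothetical bias table: token ids and shifts are illustrative only.
lever = {1234: -4.0,  # reliably deflate one word
         5678: +2.0}  # inflate another

vocab_size = 50257                            # GPT-2's vocabulary size
logits = torch.randn(vocab_size)              # stand-in for real model logits
print(biased_sample(logits, lever))
```

Production APIs expose the same knob directly (e.g. OpenAI’s `logit_bias` parameter); the distinction with RLHF is that the shift gets baked into the weights rather than applied at sampling time.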