1. LLMs can imitate results but not the process that produced them
“The most interesting is the realization that if the LLM's input is only the output of a professional (human), then by definition the LLM cannot mimic the process the (human) professional applied to get from whatever input they had to produce the output.” – drbig
“An LLM can always output steps, but it doesn’t mean they are true; they are great at making up bullshit.” – latexr
“The LLM does not model text at this meta‑level. It can only use those texts as examples, it cannot apply what is written there to its generation process.” – tovej
2. “Learning” is a misnomer; LLMs are fitting parameters, not acquiring knowledge
“It all went south when we started to call it ‘learning’ instead of ‘fitting parameters’.” – KeplerBoy
“Fitting is still too nice of a word choice… I suggest ‘randomly adjusting parameters while trying to make things better’.” – fxtentacle
“That isn’t learning, it can read things in its context, and generate materials… it doesn’t change the model weights.” – RichardLake
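These quotes hinge on a concrete technical distinction, so a minimal sketch may help. What follows is a toy one-parameter regression, not an LLM; the data, learning rate, and step count are illustrative assumptions, not anything from the thread. It shows what “fitting parameters” means in practice (a number nudged downhill on an error measure) and why generating output afterwards leaves that number untouched, which is the point RichardLake raises.

```python
import random

# Toy illustration of "fitting parameters": a one-weight model is nudged
# toward lower squared error on fixed examples. Pure Python, no libraries.
# The data, learning rate, and step count are illustrative assumptions.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # pairs (x, y) with y = 2x
w = random.uniform(-1.0, 1.0)                # the single model "weight"
lr = 0.01                                    # step size per adjustment

for _ in range(500):
    # Gradient of the mean squared error: d/dw (w*x - y)^2 = 2*(w*x - y)*x
    grad = sum(2.0 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad                           # adjust the parameter downhill

print(f"fitted weight: {w:.3f}")             # ends near 2.0

# "Inference": the fitted weight is only read, never modified, no matter
# how much output we generate -- mirroring the thread's point that
# producing text from context does not change model weights.
print(f"prediction for x=5: {w * 5.0:.3f}")
```

Whether “randomly adjusting parameters while trying to make things better” or “fitting” is the fairer label, the mechanism is this kind of error-driven adjustment, scaled up by many orders of magnitude.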
3. Legal, ethical, and cultural concerns over using deceased authors’ personas in AI tools
“The real issue seems more about transparency and consent around how the models are trained and how author personas are being used.” – BrtByte
“Digital necrophilia. The living ones are the ones that are going to have to make the objections here.” – jacquesm
“It feels wrong to spare nobody, not even dead writer/people.” – bayindirh
These three threads dominate the discussion: whether an LLM can reproduce a process or only its output, whether “learning” is really parameter fitting, and the legal, ethical, and cultural fallout of reviving deceased authors’ personas.