1. Enforceability & policy design
A recurring argument is that a blanket “no‑AI” rule is unenforceable, since it is hard to tell whether a PR was written by a human or a model.
“The LLM ban is unenforceable, they must know this.” – khalic
“If the submitter is prepared to explain the code and vouch for its quality then that might reasonably fall under ‘don’t ask, don’t tell.’” – pjc50
2. Review burden & code quality
The core practical problem is that LLM‑generated PRs can look superficially correct while hiding subtle bugs, inflating reviewers’ workload.
“The problem is the increased review burden on OSS maintainers.” – ptnpzwqd
3. Legal / ethical concerns
Copyright, licensing, and the moral framing of AI contributions dominate the debate.
“LLM output is either (a) uncopyrightable or (b) considered a derivative work… we have a legal problem.” – pjc50
“If LLM output is a derivative work of the source that was used to train the model, then you have a legal problem.” – pjc50
“The policy is virtue‑signalling.” – subjectsigma
4. Community culture & contribution dynamics
Opinions diverge on whether banning AI harms open‑source participation, enables gatekeeping, or hinders skill development.
“Drive‑by PRs are a net burden on maintainers long before LLMs started writing code.” – swiftcoder
“I would rather use code that is flawed while written by a human, versus code that has been generated by a LLM.” – lpcvoid
These four themes capture the main currents of the discussion.