1. AI‑generated “slop” is a real, growing problem
Many participants see a flood of low‑quality, AI‑generated pull requests that waste maintainers’ time.
“The problem with AI slop is not so much that it's from AI, but that it's pretty much always completely unreadable and unmaintainable code.” – zozbot234
“I think this is a DoS attack on maintainers’ attention.” – mjr00
2. A Web‑of‑Trust style vouching system is proposed as a solution
The discussion centers on a lightweight vouch/denounce list that lets core maintainers gate contributions by extending or withdrawing trust from individual accounts.
“It’s basically a vouching system… you can vouch for someone who does good work, then you get a little boost too.” – ashton314
“The idea is to stop the flood of AI slop coming from particular accounts, and the means to accomplish that is denouncing those accounts.” – mjr00
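The vouching mechanic described above (trust flows outward from core maintainers, with each vouch granting "a little boost") can be sketched as a small graph traversal. This is a speculative illustration, not anything proposed in the thread: the `decay` factor, the account names, and the score scale are all invented for the example.

```python
from collections import deque

def trust_scores(roots, vouches, decay=0.5):
    """Spread trust outward from fully trusted roots.

    roots:   accounts trusted unconditionally (e.g. core maintainers)
    vouches: dict mapping voucher -> list of accounts they vouch for
    decay:   each vouch-hop passes on a fraction of the voucher's trust
    """
    scores = {r: 1.0 for r in roots}
    queue = deque(roots)
    while queue:
        voucher = queue.popleft()
        for vouched in vouches.get(voucher, []):
            boost = scores[voucher] * decay
            # Keep the best score an account has earned via any path.
            if boost > scores.get(vouched, 0.0):
                scores[vouched] = boost
                queue.append(vouched)
    return scores

scores = trust_scores(
    roots=["maintainer"],
    vouches={"maintainer": ["alice"], "alice": ["bob"]},
)
# → {"maintainer": 1.0, "alice": 0.5, "bob": 0.25}
```

Accounts outside the vouch graph simply never acquire a score, which is the gating effect the proposal is after: unknown accounts start at zero until someone trusted vouches for them.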
3. Practical and cultural concerns about implementing such a system
Participants warn that the system could create new barriers, be gamed, or erode the inclusive spirit of open source.
“If you’re not contributing to something already and are not in the network with other contributors, you might be a SME… I get that AI is creating a ton of toil to maintainers but this is not the solution.” – Halan
“It could become a weapon against wrong‑thinkers… denounced by someone who is denounced by trusted people should carry little weight.” – acjohnson55
“The same problem are reputation‑farmers… the solution would be for GitHub to implement a system to punish bad PRs.” – HiPhish
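acjohnson55's safeguard against weaponized denouncement ("denounced by someone who is denounced by trusted people should carry little weight") amounts to weighting each denouncement by the denouncer's own trust. A minimal sketch, with invented account names and scores:

```python
def denounce_weight(denouncers, trust):
    """Sum the trust of everyone denouncing an account.

    A denouncement counts only in proportion to the denouncer's own
    trust score, so untrusted or denounced accounts have little say.
    """
    return sum(trust.get(d, 0.0) for d in denouncers)

# Illustrative trust scores (assumed, not from the thread).
trust = {"maintainer": 1.0, "alice": 0.5, "sockpuppet": 0.0}

# A denouncement from a trusted maintainer carries full weight...
assert denounce_weight(["maintainer"], trust) == 1.0
# ...while a brigade of untrusted accounts adds up to nothing.
assert denounce_weight(["sockpuppet", "unknown"], trust) == 0.0
```

This also answers the reputation-farming worry in part: farming fake vouches among zero-trust accounts yields no usable denouncement power.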
These three themes—AI slop, trust‑based filtering, and the practical/cultural trade‑offs—drive the bulk of the discussion.