3 Most Prevalent Themes
AI-Generated "Slop" Overwhelming Security Programs
The primary driver of the discussion is the flood of low-quality, AI-generated vulnerability reports. Users express frustration that these reports are often nonsensical and misinformed, and that they waste valuable maintainer time. Many commenters believe this is a deliberate tactic to exploit bug bounty systems.
- mirekrusin: "Some (most?) are llm chat copy paste addressing non existing users in conversations like [0] - what a waste of time."
- golem14: "I looked at two reports, and I can't tell if the reports are directly from an AI or some very junior student not really understanding security. LLMs to me sound generally more convincing."
- plastic041: "In the second report, Daniel greeted the slopper very kindly and tried to start a conversation with them. But the slopper calls him by the completely wrong name. ... It must have been extremely tiring."
Proposed Solutions: Entry Fees and Financial Disincentives
A prominent theme is the suggestion to combat slop by introducing a financial cost for submitting bug reports. The idea is that a small entry fee, reimbursed only for valid reports, would deter bad-faith actors; a back-of-the-envelope model of the incentive appears after the quotes below. This sparked debate about the practicality and fairness of such a system.
- dlcarrier: "An entry fee that is reimbursed if the bug turns out to matter would stop this, real quick."
- fredrikholm: "When Notion first came out, it was snappy and easy to use. Creating a page being essentially free of effort, you very quickly had thousands of them, mostly useless. Confluence ... is hopelessly slow. The thought of adding a page is sufficiently demoralizing that it's easier to update an existing page ... Consequently, there's some ~20 pages even in large companies." (An analogy arguing that "trivial inconveniences" curb low-effort contributions.)
- bawolff: "Bug bounties often involve a lot of risk for submitters. ... A pay to enter would increase that risk."
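To make the incentive argument concrete, here is a minimal sketch of a submitter's expected payoff under a refundable entry fee. The figures are purely illustrative assumptions, not data from the thread:

```python
# Hypothetical expected-value model of a refundable entry fee.
# All numbers are illustrative assumptions, not data from the thread.

def expected_payoff(p_valid: float, bounty: float, fee: float) -> float:
    """Expected return per report: the fee is refunded (and the bounty
    paid) only when the report turns out to be valid."""
    return p_valid * bounty - (1 - p_valid) * fee

# A skilled researcher: most reports are valid, so the fee barely matters.
print(expected_payoff(p_valid=0.8, bounty=500, fee=20))   #  396.0

# A slop farm: almost no reports are valid, so each submission loses money.
print(expected_payoff(p_valid=0.01, bounty=500, fee=20))  # -14.8
```

Under these assumptions, even a small fee flips the expected value negative for high-volume, low-quality submitters while leaving genuine researchers nearly unaffected, which is dlcarrier's point; bawolff's objection is that the same fee adds risk for legitimate submitters whose reports fall into gray areas.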
Debate Over LLMs' Role in Both Creating and Solving the Problem
Users are grappling with the paradoxical role of AI. While LLMs are the source of the spam, there is a parallel discussion about whether they could also be used to filter it; a sketch of such a triage filter follows the quotes below. This led to arguments about the reliability of AI for judgment tasks and the risk of AI being used to generate increasingly deceptive spam.
- colechristensen: "I prompted Opus 4.5 'Tell me the reasons why this report is stupid' on one of the example slop reports and it returned a list of pretty good answers. ... If you give them the right structure I've found LLMs to be much better at judging things than creating them."
- imiric: "Trusting the output of an LLM to determine the veracity of a piece of text is a bafflingly bad idea."
- f311a: "How would it work if LLMs provide incorrect reports in the first place? ... The problem is the complete stupidity of people. They use LLMs to convince the author of curl that he is not correct about saying that the report is hallucinated."
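As an illustration of the kind of pre-screen colechristensen describes, here is a minimal sketch using the Anthropic Python SDK. The model id, rubric wording, and verdict format are assumptions, and imiric's objection applies to the whole approach: the judge itself can be fooled.

```python
# Minimal LLM-as-judge triage sketch using the Anthropic Python SDK.
# The model id, rubric, and verdict format are illustrative assumptions;
# this is a pre-screen to prioritize human review, not a replacement for it.
import anthropic

RUBRIC = (
    "You are triaging a bug bounty report for the curl project. Check: "
    "(1) does the referenced code actually relate to the claim, "
    "(2) is there a concrete, reproducible attack scenario, "
    "(3) are there signs of LLM hallucination (invented functions, "
    "mismatched names, generic boilerplate)? "
    "Answer with VERDICT: PLAUSIBLE or VERDICT: LIKELY-SLOP, then your reasons."
)

def triage(report_text: str) -> tuple[str, str]:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-opus-4-5",  # assumed model id
        max_tokens=1024,
        messages=[{"role": "user", "content": f"{RUBRIC}\n\nREPORT:\n{report_text}"}],
    )
    text = msg.content[0].text
    verdict = "LIKELY-SLOP" if "LIKELY-SLOP" in text else "PLAUSIBLE"
    return verdict, text

if __name__ == "__main__":
    verdict, reasons = triage("strcpy in lib/url.c allows RCE via the --telnet flag ...")
    print(verdict)
    print(reasons)
```

A design note on the failure mode f311a raises: routing LIKELY-SLOP reports into a low-priority queue for eventual human review, rather than auto-closing them, limits the damage when the judge is wrong in either direction.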