Here are the 4 most prevalent themes of the opinions expressed in this Hacker News discussion:
1. Prompt Injection is an Inherent, Unsolvable Vulnerability
Many users argued that prompt injection is not a bug but an intrinsic feature of how current Large Language Models (LLMs) work. Unlike SQL injection, there is no way to separate data from instructions (code) because both are processed as a single stream of tokens. This makes complete prevention theoretically impossible with the current architecture.
- NitpickLawyer: "In SQL world we had ways to separate control channels from data channels. In LLMs we don't."
- alienbaby: "The control and data streams are woven together (context is all just one big prompt) and there is currently no way to tell for certain which is which."
- phyzome: "Prompt injection is not solvable in the general case. So it will just keep happening."
2. AI Companies Externalize Risk onto Users
Users expressed frustration that AI companies acknowledge risks but suggest "unreasonable precautions" that shift the burden of security to the end-user. This was compared to the gun industry and criticized as a byproduct of "unfettered capitalism," where companies profit while users bear the consequences of inevitable exploits.
- AmbroseBierce: "It's exactly like guns, we know they will be used in school shootings but that doesn't stop their selling in the slightest, the businesses just externalize all the risks claiming it's all up fault of the end users."
- rsynnott: "It largely seems to amount to 'to use this product safely, simply don't use it'."
- ronbenton: "Telling uses to 'watch out for prompt injections' is insane. Less than 1% of the population knows what that even means... This is more than unreasonable, itβs negligent."
3. The "Phishing" Analogy is More Accurate than "Injection"
Several commenters argued that comparing prompt injection to SQL injection is misleading. Instead, they view these attacks as closer to social engineering or phishing. This implies that the solution requires behavioral changes and skepticism from users, not just technical patches, because the AI is being tricked rather than exploited via a code flaw.
- NitpickLawyer: "What I don't think is accurate is to then compare it to sql injection... I think it's better to think of the aftermath as phishing, and communicate that as the threat model."
- chasd00: "'llm phishing' is a much better way to think about this than prompt injection."
- hakanderyal: "People fall for phishing every day. Even highly trained people."
4. Fundamental Architectural Changes are Needed
Because prompt injection is inherent to LLMs, users believe the only real solution is a new architectural breakthrough. Ideas floated included "authority bits" for tokens, separating instruction channels from data channels at the model level, or a "prepared statement" pattern for agents. However, many are skeptical this will happen soon, as it might limit the model's usefulness.
- wcoenen: "I wonder if might be possible by introducing a concept of 'authority'. Tokens are mapped to vectors... one of the dimensions of that space could be reserved to represent authority."
- NitpickLawyer: "This is what oAI are doing. System prompt is 'ring0'... They do train the models to follow this prompt hierarchy. But it's never full-proof."
- tempaccsoz5: "Honestly, I'm a bit surprised the main AI labs aren't doing this... You could just include an extra single bit with each token that represents trusted or untrusted."