Here is a summary of the 7 most prevalent themes in the Hacker News discussion regarding the author’s experience with an Anthropic account ban:
1. Lack of Recourse and Non-Existent Customer Support
Users universally expressed frustration with the inability to appeal bans or receive human assistance from Anthropic, contrasting this with the high price of the service.
* "I didn't even get to send 1 prompt to Claude and my 'account has been disabled after an automatic review of your recent activities'... Still to this day don't know why I was banned." (properbrew)
* "You are never gonna hear back from Anthropic, they don't have any support." (falloutx)
* "This has been true for a long long time, there is rarely any recourse against any technology company, most of them don't even have Support anymore." (lazyfanatic42)
2. Speculation on the Cause: AI-to-AI Interaction and Prompt Injection
The primary theory discussed for the ban was the author’s workflow of having one instance of Claude modify the CLAUDE.md file for another instance, which may have triggered safety heuristics regarding prompt injection or jailbreaking.
* "My guess is that this likely tripped the 'Prompt Injection' heuristics that the non-disabled organization has." (Author via properbrew)
* "It wasn't circular. TFA explains how the author was always in the loop... but relaying the mistake to the first instance... was done manually by the author." (layer8)
* "They were probably using an unapproved harness, which are now banned." (schnebbau)
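The workflow commenters are reacting to can be sketched roughly as follows. This is a minimal illustration under assumptions, not the author's actual setup: the file name CLAUDE.md comes from the article, but the function names and the exact handoff mechanics are hypothetical stand-ins for "one Claude instance writes instructions that another instance reads, with the author manually relaying results in between."

```python
from pathlib import Path

# Shared instruction file that Claude Code reads at startup
# (file name from the article; the handoff logic below is hypothetical).
CLAUDE_MD = Path("CLAUDE.md")

def agent_a_update(instruction: str) -> None:
    """Agent A (the 'supervising' instance) appends guidance for Agent B."""
    with CLAUDE_MD.open("a") as f:
        f.write(f"- {instruction}\n")

def agent_b_read() -> list[str]:
    """Agent B loads its current instructions from the shared file."""
    return [
        line[2:].strip()
        for line in CLAUDE_MD.read_text().splitlines()
        if line.startswith("- ")
    ]

# The author sits in the loop, manually relaying Agent B's mistakes
# back to Agent A, which then updates the shared instructions:
agent_a_update("Always run the test suite before committing.")
print(agent_b_read())
```

Nothing here is circular or autonomous: per layer8's comment above, the relay step between the two instances was performed by a human, which is part of why commenters found the ban surprising.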
3. Confusion Over the Author's Writing Style
A significant portion of the discussion focused on the author's use of ironic terminology (referring to himself as a "disabled organization"), which many commenters found incoherent or distracting, while others defended it as a stylistic choice.
* "It bears all the hallmarks of AI writing: length, repetition, lack of structure, and silly metaphors." (oasisbob)
* "I am really confused as to what happened here. The use of 'disabled organization' to refer to the author made it extra confusing." (cortesoft)
* "The absurd language is meant to highlight the absurdity they feel over the vague terms in their sparse communication with anthropic." (mattnewton)
4. Reliability of Local vs. Cloud LLMs
Many users argued that while cloud models like Claude are SOTA, relying on them carries the risk of arbitrary bans, prompting a shift toward running models locally despite the performance gap.
* "They'll never be SOTA level, but at least they'll keep chugging along." (properbrew)
* "I had done that yet... I was also trying local models I could run on my own MacBook Air." (codazoda)
* "Every single open source model I've used is nowhere close to as good as the big AI companies. They are about 2 years behind or more." (blindriver)
5. Comparisons to Competitors (OpenAI, Gemini, Grok)
Users compared Anthropic's strict moderation and lack of support to other major AI providers, with some citing Grok's lack of safety or OpenAI's similarly impersonal nature.
* "I think there's a wide spread in how that's implemented. I would certainly not describe Grok as a tool that's prioritized safety at all." (preinheimer)
* "Thankfully OpenAI hasn't blocked me yet and I can still use Codex CLI." (properbrew)
* "If this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS." (Author via dragonwriter)
6. Technical Feasibility of the Workflow
Commenters debated whether the author's specific technical setup (two agents communicating via a file) was a valid use case or an abuse of the system, with some admitting they do similar things while others found it unnecessary.
* "If you want to take a look of the CLAUDE.md that Claude A was making Claude B run with, I commited it and it is available here." (ribosometronome)
* "I can't even understand what they're trying to communicate... There is, without a doubt, more to this story than is being relayed." (Aurornis)
* "Just use a different email or something." (anothereng)
7. The Risk of Corporate Dependence
Underlying the entire thread is the broader theme that relying on a single SaaS provider for critical workflow tools is risky due to opaque terms of service and automated enforcement.
* "You're in their shop (Opus 4.5) and they can kick you out without cause." (red_hare)
* "Companies will simply give some kind of standard answer, that is legally 'cover our butts' and be done with it." (benjiro)
* "There should be a law that prevents companies from simply banning you, especially when it's an important company." (blindriver)