Project ideas from Hacker News discussions.

I was banned from Claude for scaffolding a Claude.md file?

📝 Discussion Summary

Here is a summary of the 7 most prevalent themes in the Hacker News discussion regarding the author’s experience with an Anthropic account ban:

1. Lack of Recourse and Non-Existent Customer Support

Users universally expressed frustration with the inability to appeal bans or receive human assistance from Anthropic, contrasting this with the high price of the service.

  • "I didn't even get to send 1 prompt to Claude and my 'account has been disabled after an automatic review of your recent activities'... Still to this day don't know why I was banned." (properbrew)
  • "You are never gonna hear back from Anthropic, they don't have any support." (falloutx)
  • "This has been true for a long long time, there is a rarely any recourse against any technology company, most of them don't even have Support anymore." (lazyfanatic42)

2. Speculation on the Cause: AI-to-AI Interaction and Prompt Injection

The primary theory discussed for the ban was the author’s workflow of having one instance of Claude modify the CLAUDE.md file for another instance, which may have triggered safety heuristics regarding prompt injection or jailbreaking.

  • "My guess is that this likely tripped the 'Prompt Injection' heuristics that the non-disabled organization has." (Author via properbrew)
  • "It wasn't circular. TFA explains how the author was always in the loop... but relaying the mistake to the first instance... was done manually by the author." (layer8)
  • "They were probably using an unapproved harness, which are now banned." (schnebbau)

3. Confusion Over the Author's Writing Style

A significant portion of the discussion focused on the author's use of ironic terminology (referring to himself as a "disabled organization"), which many commenters found incoherent or distracting, while others defended it as a stylistic choice.

  • "It bears all the hallmarks of AI writing: length, repetition, lack of structure, and silly metaphors." (oasisbob)
  • "I am really confused as to what happened here. The use of ‘disabled organization’ to refer to the author made it extra confusing." (cortesoft)
  • "The absurd language is meant to highlight the absurdity they feel over the vague terms in their sparse communication with anthropic." (mattnewton)

4. Reliability of Local vs. Cloud LLMs

Many users argued that while cloud models like Claude are SOTA, relying on them carries the risk of arbitrary bans, prompting a shift toward running models locally despite the performance gap.

  • "They'll never be SOTA level, but at least they'll keep chugging along." (properbrew)
  • "I had done that yet... I was also trying local models I could run on my own MacBook Air." (codazoda)
  • "Every single open source model I've used is nowhere close to as good as the big AI companies. They are about 2 years behind or more." (blindriver)

5. Comparisons to Competitors (OpenAI, Gemini, Grok)

Users compared Anthropic's strict moderation and lack of support to other major AI providers, with some citing Grok's lack of safety or OpenAI's similar impersonal nature.

  • "I think there's a wide spread in how that's implemented. I would certainly not describe Grok as a tool that's prioritized safety at all." (preinheimer)
  • "Thankfully OpenAI hasn't blocked me yet and I can still use Codex CLI." (properbrew)
  • "If this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS." (Author via dragonwriter)

6. Technical Feasibility of the Workflow

Commenters debated whether the author's specific technical setup (two agents communicating via a file) was a valid use case or an abuse of the system, with some admitting they do similar things while others found it unnecessary.

  • "If you want to take a look of the CLAUDE.md that Claude A was making Claude B run with, I commited it and it is available here." (ribosometronome)
  • "I can't even understand what they're trying to communicate... There is, without a doubt, more to this story than is being relayed." (Aurornis)
  • "Just use a different email or something." (anothereng)

7. The Risk of Corporate Dependence

Underlying the entire thread is the broader theme that relying on a single SaaS provider for critical workflow tools is risky due to opaque terms of service and automated enforcement.

  • "You're in their shop (Opus 4.5) and they can kick you out without cause." (red_hare)
  • "Companies will simply give some kind of standard answer, that is legally 'cover our butts' and be done with it." (benjiro)
  • "There should be a law that prevents companies from simply banning you, especially when it's an important company." (blindriver)


🚀 Project Ideas

Account Ban Forensics & Appeal Tool

Summary

  • Helps users understand and appeal automated account bans by analyzing their recent usage patterns against known trigger heuristics.
  • Core value proposition: Provides actionable insights and draft appeal letters for SaaS bans, moving users from confusion to a structured appeal process.

Details

  • Target Audience: Developers and power users of AI/LLM platforms (Claude, OpenAI, etc.) who have been unexpectedly banned.
  • Core Feature: Parses user activity logs (if accessible) or prompts the user for recent actions to generate a probability report of what triggered the ban and a tailored appeal email.
  • Tech Stack: Local script (Python/Node.js) or browser extension; uses local models for privacy.
  • Difficulty: Low
  • Monetization: Hobby (Open source)

Notes

  • Addresses the core frustration of "banned with no recourse" expressed by properbrew and the author.
  • Directly responds to pixl97's call for a system to force companies to provide exact reasons for bans by giving users a tool to demand clarity.
  • Utility: High. It democratizes the ability to challenge opaque moderation decisions.
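The core feature could start as a simple heuristic matcher. A minimal sketch, where the trigger patterns and weights are invented for illustration (they are not Anthropic's actual heuristics, which are undisclosed):

```python
# Sketch: match a user's recent actions against a list of suspected ban triggers.
# The patterns and weights below are illustrative assumptions only.
import re
from dataclasses import dataclass

@dataclass
class Trigger:
    name: str
    pattern: str   # regex matched against the user's action descriptions
    weight: float  # rough contribution to the overall score

TRIGGERS = [
    Trigger("prompt_injection", r"(ignore|override).*(instructions|system prompt)", 0.4),
    Trigger("agent_to_agent", r"(claude|agent).*(edit|update|write).*(claude\.md|prompt file)", 0.3),
    Trigger("rapid_iteration", r"(loop|retry|resubmit).*(automatic|script)", 0.2),
]

def score_actions(actions: list[str]) -> dict[str, float]:
    """Return per-trigger scores for a list of recent-action descriptions."""
    scores = {}
    for trig in TRIGGERS:
        hits = sum(bool(re.search(trig.pattern, a, re.I)) for a in actions)
        if hits:
            scores[trig.name] = min(1.0, hits * trig.weight)
    return scores

def draft_appeal(scores: dict[str, float]) -> str:
    """Draft a short appeal message naming the most likely trigger."""
    if not scores:
        return "No known trigger matched; request the specific ToS clause cited."
    top = max(scores, key=scores.get)
    return (f"My usage may have matched your '{top}' heuristic. "
            "I request a human review and the specific clause my account violated.")
```

A real version would source the trigger list from community-reported ban reports rather than hardcoding it.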

Local Multi-Agent Orchestration Framework

Summary

  • A self-hosted tool to run multiple local LLM instances that can iterate on code and prompts circularly without hitting cloud provider rate limits or safety filters.
  • Core value proposition: Replicate the "Claude A updates Claude B's CLAUDE.md" workflow locally, ensuring privacy and unrestricted experimentation.

Details

  • Target Audience: AI researchers, prompt engineers, and developers working on complex coding tasks who fear cloud bans.
  • Core Feature: Manages context windows between distinct local LLM processes, allowing for structured feedback loops (Agent A reviews Agent B's output).
  • Tech Stack: Python, Ollama or llama.cpp, Docker.
  • Difficulty: Medium
  • Monetization: Hobby (Open source)

Notes

  • Solves the specific technical pain point described in the blog post: the author was banned for a circular prompt setup that cloud providers flag as prompt injection.
  • Validates properbrew's sentiment that "local LLMs... will keep chugging along" and codazoda's desire for a "workable CLI with tool and MCP support."
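The feedback loop itself is small. A minimal sketch, with the model backends injected as plain callables so the loop can be tested without a running model (in practice each callable would wrap a local runtime such as Ollama; the function names here are assumptions):

```python
# Sketch of the "Agent A reviews Agent B" loop with a shared context file,
# mirroring the CLAUDE.md workflow from the post but run entirely locally.
from pathlib import Path
from typing import Callable

def feedback_loop(task: str,
                  worker: Callable[[str], str],
                  reviewer: Callable[[str], str],
                  context_file: Path,
                  rounds: int = 2) -> str:
    """Worker drafts; reviewer writes guidance into the shared context file;
    worker re-drafts with that guidance prepended. Returns the final draft."""
    context_file.write_text("")  # fresh shared context for this task
    draft = ""
    for _ in range(rounds):
        guidance = context_file.read_text()
        draft = worker(f"{guidance}\n\nTask: {task}")
        review = reviewer(f"Review this output and give one instruction:\n{draft}")
        context_file.write_text(review)  # reviewer updates the shared file
    return draft
```

Keeping the human in the loop (as the author did) would mean pausing before the `write_text` step for approval.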

"Vibe Code" Distillation & Cleanup Tool

Summary

  • A CLI tool that uses a local LLM to analyze AI-generated "slop" codebases and refactor them into clean, documented, and maintainable structures.
  • Core value proposition: Turns the "vibe coding" output (which f311a and others criticize as unusable slop) into production-ready software.

Details

  • Target Audience: Non-developers using AI to build apps and developers inheriting AI-generated codebases.
  • Core Feature: Scans codebase, identifies anti-patterns, generates architectural diagrams, and refactors code incrementally with user approval.
  • Tech Stack: Go or Rust (for speed), Local LLM integration.
  • Difficulty: Medium
  • Monetization: Revenue-ready (Freemium: basic analysis free, automated refactoring requires subscription)

Notes

  • Addresses the skepticism from f311a and mikkupikku regarding the quality of AI-generated code ("slop").
  • Provides a bridge between rapid prototyping (oasisbob's "going wild with ideas") and software maintainability.
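The anti-pattern scan could run as a static pass before any LLM refactoring step. A minimal sketch using Python's `ast` module with a small, illustrative sample of "vibe code" smells (a real tool would target the user's language and a much larger rule set):

```python
# Sketch of the static-scan pass: flag a few common AI-code smells so the
# refactoring step knows where to focus. Rules here are illustrative only.
import ast

def find_antipatterns(source: str) -> list[str]:
    """Return human-readable findings for a Python source string."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # bare `except:` hides errors, a frequent AI-generated pattern
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"line {node.lineno}: bare except swallows errors")
        # very long functions suggest the model never decomposed the task
        if isinstance(node, ast.FunctionDef) and len(node.body) > 50:
            findings.append(f"line {node.lineno}: function '{node.name}' is too long")
        # global mutation makes incremental refactoring risky
        if isinstance(node, ast.Global):
            findings.append(f"line {node.lineno}: global state mutation")
    return findings
```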

SaaS Account Risk Simulator

Summary

  • A web tool that analyzes your usage patterns against a crowd-sourced database of ToS violations to predict ban probability.
  • Core value proposition: Warns users before they get banned by flagging high-risk behaviors like "all caps prompts," "circular agent loops," or "rapid iteration."

Details

  • Target Audience: Power users of strict platforms (Anthropic, Google, Stripe).
  • Core Feature: Input your usage metadata; get a risk score and "safe mode" recommendations.
  • Tech Stack: Web app (React), Backend (Node.js), Database (SQLite/Postgres).
  • Difficulty: Low
  • Monetization: Hobby (Ad-supported or donation ware)

Notes

  • Tackles the anxiety expressed by Aurornis regarding the unpredictability of "Risk Department Maoism."
  • Helps users navigate the "black box" moderation mentioned by preinheimer by crowdsourcing heuristics.
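The scoring itself could be a simple combination of independent per-behavior risks. A minimal sketch, where the behavior names come from the thread but the weights are invented placeholders a crowd-sourced database would replace:

```python
# Sketch of the risk-score calculation. Weights are illustrative assumptions;
# a real version would learn them from crowd-sourced ban reports.
RISK_WEIGHTS = {
    "circular_agent_loops": 0.5,
    "rapid_iteration": 0.2,
    "all_caps_prompts": 0.1,
    "unapproved_harness": 0.6,
}

def risk_score(behaviors: set[str]) -> float:
    """Treat each flagged behavior as an independent risk: 1 - prod(1 - w_i)."""
    survive = 1.0
    for b in behaviors:
        survive *= 1.0 - RISK_WEIGHTS.get(b, 0.0)
    return round(1.0 - survive, 3)

def recommendations(behaviors: set[str]) -> list[str]:
    """'Safe mode' suggestions: drop the highest-weight behaviors first."""
    flagged = sorted(behaviors & RISK_WEIGHTS.keys(),
                     key=RISK_WEIGHTS.get, reverse=True)
    return [f"avoid: {b}" for b in flagged]
```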

Ethics-Free AI Chat Platform

Summary

  • A hosted chat interface that connects to uncensored/open-weight models (like GLM or local forks) with zero content filtering (excluding illegal material).
  • Core value proposition: A stable alternative for users who feel squeezed out by the "safety over accuracy" policies of major labs.

Details

  • Target Audience: Researchers, jailbreakers, and users frustrated by "lobotomized" models.
  • Core Feature: Web UI + API that explicitly disclaims safety filters, prioritizing model capability and user freedom.
  • Tech Stack: VPS hosting, Open source model weights, FastAPI.
  • Difficulty: Medium
  • Monetization: Revenue-ready (Pay-per-token API or flat monthly hosting fee)

Notes

  • Appeals to users like properbrew who explicitly switched to local LLMs due to cloud bans.
  • Directly counters munk-a's criticism of Grok being lobotomized for "emotional safety," offering a truly unfiltered alternative.
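The defining behavior is what the gateway does *not* do: no prompt rewriting, no refusal injection, just a disclaimer attached to the response. A minimal sketch of that handler, with the model backend injected as a callable (wiring it to an actual open-weight model and a FastAPI route is left as an assumption):

```python
# Sketch of the pass-through handler: forward the prompt untouched and
# attach an explicit no-filter disclaimer, instead of moderating content.
from typing import Callable

DISCLAIMER = "X-Content-Policy: none (no safety filtering applied)"

def handle_chat(prompt: str, generate: Callable[[str], str]) -> dict:
    """Return the raw completion plus a disclaimer; no rewriting or refusals."""
    return {
        "headers": [DISCLAIMER],
        "prompt_sent": prompt,           # exactly what the user typed
        "completion": generate(prompt),  # raw model output, unfiltered
    }
```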

Context-Aware LLM Session Recovery

Summary

  • A CLI wrapper that automatically snapshots LLM session states (including context windows) to local files, allowing for instant recovery if the connection drops or the session times out.
  • Core value proposition: Prevents the loss of work when using flaky cloud tools, addressing the "hanging up" and "unresponsive" issues mentioned by users.

Details

  • Target Audience: Users of Claude Code, Cursor, and other terminal-based AI agents.
  • Core Feature: Watchdog process that saves context to disk every N seconds; command to "resurrect" a dead session with full history.
  • Tech Stack: Bash/Python wrapper, tmux or screen.
  • Difficulty: Low
  • Monetization: Hobby (Open source)

Notes

  • Solves the frustration expressed by bastard_op regarding sessions becoming "entirely flaky" and unresponsive.
  • Improves the workflow for non-developers (hecanjog) who might struggle to manually recover complex project states.
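The snapshot/resurrect cycle is the heart of the watchdog. A minimal sketch; a real wrapper would capture the agent's transcript from a tmux pane, but here the transcript is just a string:

```python
# Sketch of the snapshot/resurrect cycle. Writes go to a temp file first,
# then rename, so a crash mid-write never leaves a torn snapshot behind.
import json
import time
from pathlib import Path

def snapshot(session_id: str, transcript: str, state_dir: Path) -> Path:
    """Persist the current session context to disk atomically."""
    state_dir.mkdir(parents=True, exist_ok=True)
    tmp = state_dir / f"{session_id}.tmp"
    final = state_dir / f"{session_id}.json"
    tmp.write_text(json.dumps({"ts": time.time(), "transcript": transcript}))
    tmp.rename(final)  # rename is atomic on POSIX filesystems
    return final

def resurrect(session_id: str, state_dir: Path) -> str:
    """Reload the last-saved transcript so a fresh session can resume it."""
    data = json.loads((state_dir / f"{session_id}.json").read_text())
    return data["transcript"]
```

The "every N seconds" part is then just a loop (or a tmux hook) calling `snapshot` on the latest transcript.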

Human-in-the-Loop Support Escalation Bot

Summary

  • A Discord bot or Slack integration that drafts formal escalation tickets to SaaS support teams using legal and ToS terminology to force a human response.
  • Core value proposition: Automates the "fight" against non-responsive support departments by generating compliant, high-pressure support tickets.

Details

  • Target Audience: Customers of large tech companies with non-existent support.
  • Core Feature: Analyzes the user's complaint, references specific ToS clauses, and formats the output for email or support ticket submission.
  • Tech Stack: Discord.py/Slack API, Local LLM.
  • Difficulty: Low
  • Monetization: Hobby (Open source)

Notes

  • Targets the systemic issue of "no real support" highlighted by lazyfanatic42 and the author of the blog post.
  • Provides a practical tool for epolanski's concern about companies using "corporate lingo" to avoid giving real feedback.
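The ticket formatter can work even before any LLM is involved. A minimal sketch; the ToS clause is supplied by the user, and a local LLM could later expand the complaint into a fuller body:

```python
# Sketch of the escalation-ticket formatter. A local LLM could rewrite
# `complaint` in more formal language; plain templating works as a start.
from textwrap import dedent

def draft_ticket(service: str, complaint: str, tos_clause: str) -> str:
    """Format a complaint as a formal escalation requesting a human reply."""
    return dedent(f"""\
        Subject: Formal escalation re: {service} account action

        Per your Terms of Service ({tos_clause}), I dispute the following:
        {complaint}

        I request (1) the specific clause my account allegedly violated, and
        (2) review of this case by a human agent rather than an automated system.
        Please treat this message as a formal complaint for your records.""")
```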
