Project ideas from Hacker News discussions.

How I use Claude Code: Separation of planning and execution

📝 Discussion Summary

Three dominant themes in the discussion

1. Planning and specification are essential before any code is written
   • “never let Claude write code until you’ve reviewed and approved a written plan” – RHSeeger
   • “I always work towards an approved plan before I let it write code” – RHSeeger
   • “The annotation cycle is the key insight for me. Treating the plan as a living doc you iterate on before touching any code” – dennisjoseph

2. Structured workflows and tooling (tickets, plan files, agents) keep the process organized
   • “I use a ticket system basically like ticket__.md where I let the agent create the ticket from a chat” – zitrusfrucht
   • “I let the agent create the ticket from a chat, correct and annotate it afterwards and send it back” – zitrusfrucht
   • “I use a specific format in the /plan command, by using the ME: prefix” – srid
   • “I use a ticket system… this workflow helps me keeping track of what has been done over time” – zitrusfrucht

3. Human oversight and cost‑efficiency remain critical; AI is a tool, not a replacement
   • “I think it is far more work than just writing the code yourself” – jamesmcq
   • “I had to manually test that it worked, and it did. I then needed to review the code before making a PR” – shepherdjerred
   • “I burned through $10 on Claude in less than an hour” – raw_anon_1111
   • “I’m still skeptical that LLMs can produce maintainable, secure, performant code without heavy human review” – jamesmcq

These three themes—rigorous pre‑coding planning, disciplined workflow tooling, and the continued need for human judgment and cost awareness—capture the core of the conversation.


🚀 Project Ideas

AI‑Planning & Annotation Editor

Summary

  • Provides a single‑pane editor where AI‑generated plan files (plan.md) can be edited, annotated, and versioned in real time.
  • Keeps the plan in sync with the AI agent, automatically re‑generating sections that were changed or marked for review.
  • Core value: eliminates the “plan‑then‑copy‑paste” friction and ensures the AI always sees the latest human feedback.
| Key | Value |
| --- | --- |
| Target Audience | Developers using Claude Code, Cursor, or any plan‑mode LLM. |
| Core Feature | Live plan editing with inline TODO/REVIEW tags that trigger AI re‑generation. |
| Tech Stack | React + Monaco Editor, Node.js backend, WebSocket sync, Git integration. |
| Difficulty | Medium |
| Monetization | Revenue‑ready: $5/month per user, optional enterprise plan. |

Notes

  • HN users like “zitrusfrucht” and “gbnwl” already use text‑based plan files; this tool turns that into a collaborative UI.
  • The ability to annotate and have the AI re‑plan on the fly addresses the frustration of “plan drift” and “spec drift” mentioned by many commenters.
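The annotation cycle hinges on detecting which parts of a plan a human has flagged for re‑generation. A minimal sketch in Python (the card’s stack suggests TypeScript, but the idea is language‑neutral); the `<!-- REVIEW: … -->` comment syntax is an assumption for illustration, not an established format:

```python
import re

# Hypothetical annotation syntax: HTML comments embedded in plan.md.
# A real editor would define (and enforce) its own tag format.
TAG_RE = re.compile(r"<!--\s*(REVIEW|TODO):\s*(.*?)\s*-->")

def extract_annotations(plan_text: str) -> list:
    """Collect the lines of a plan that a human marked for AI re-generation."""
    annotations = []
    for lineno, line in enumerate(plan_text.splitlines(), start=1):
        match = TAG_RE.search(line)
        if match:
            annotations.append(
                {"line": lineno, "kind": match.group(1), "note": match.group(2)}
            )
    return annotations
```

The agent would then be re‑prompted with only the flagged sections plus their notes, rather than the whole plan.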

Persistent Project Context Manager

Summary

  • Builds and maintains a live, queryable index of a codebase (files, symbols, dependencies) that updates automatically on every commit.
  • Exposes a simple API for LLMs to request only the relevant parts of the code, reducing token usage and context‑rot.
  • Core value: solves the pain of having to “re‑read the codebase” every time an AI agent runs.
| Key | Value |
| --- | --- |
| Target Audience | Teams that use AI agents for large monorepos or multi‑repo projects. |
| Core Feature | Incremental code‑base graph + search API, integrated with Git hooks. |
| Tech Stack | Rust for performance, SQLite for storage, REST/GraphQL API. |
| Difficulty | High |
| Monetization | Revenue‑ready: $10/month per repo, free tier for open‑source. |

Notes

  • Addresses the concerns of “koevet” and “jsmith99” about repeatedly re‑assessing the code base.
  • Provides a foundation for other AI tools (planning, review, testing) to stay in sync with reality.
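The core of the incremental index is cheap change detection: re‑index a file only when its content hash changes. A toy sketch in Python (the card proposes Rust and SQLite for the real thing); the line‑based “parser” that only spots top‑level `def`/`class` lines is a deliberate simplification:

```python
import hashlib
import sqlite3

class CodeIndex:
    """Toy incremental index: skip re-indexing when a file's hash is unchanged."""

    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE files (path TEXT PRIMARY KEY, hash TEXT)")
        self.db.execute("CREATE TABLE symbols (name TEXT, path TEXT)")

    def update(self, path: str, content: str) -> bool:
        """Re-index one file; return False if nothing changed."""
        digest = hashlib.sha256(content.encode()).hexdigest()
        row = self.db.execute(
            "SELECT hash FROM files WHERE path = ?", (path,)
        ).fetchone()
        if row and row[0] == digest:
            return False  # unchanged: no work to do
        self.db.execute("DELETE FROM symbols WHERE path = ?", (path,))
        for line in content.splitlines():  # naive "parser": top-level defs only
            if line.startswith(("def ", "class ")):
                name = line.split()[1].split("(")[0].rstrip(":")
                self.db.execute("INSERT INTO symbols VALUES (?, ?)", (name, path))
        self.db.execute("INSERT OR REPLACE INTO files VALUES (?, ?)", (path, digest))
        return True

    def lookup(self, name: str) -> list:
        """The query an LLM-facing API would expose: symbol -> defining files."""
        rows = self.db.execute("SELECT path FROM symbols WHERE name = ?", (name,))
        return [path for (path,) in rows]
```

A Git post‑commit hook would call `update` for each touched file, so the agent’s next `lookup` sees current reality without a full re‑read.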

AI Agent Orchestration Platform

Summary

  • A lightweight framework that lets you define multi‑step AI workflows: research → plan → review → implement → test → PR.
  • Supports parallel sub‑agents, automatic retry on failures, and fine‑grained permission manifests.
  • Core value: removes the manual “spin up a new shell, run the agent, copy the output” loop.
| Key | Value |
| --- | --- |
| Target Audience | DevOps teams, AI‑first product managers. |
| Core Feature | Declarative workflow DSL + built‑in agent adapters (Claude, Gemini, OpenAI). |
| Tech Stack | Go for concurrency, Docker for isolation, YAML/JSON workflow files. |
| Difficulty | Medium |
| Monetization | Hobby (open‑source) with optional paid “Enterprise Agent Suite”. |

Notes

  • Responds to “kaydub” and “fourthark” who want parallel, multi‑agent execution.
  • The permission manifest feature tackles the “least privilege” security concerns raised by “wangzhongwang”.
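Stripped to its essentials, the declarative workflow idea is: run named steps in order, retry within a budget, and feed earlier results to later handlers. A hedged sketch in Python (the card proposes Go and YAML; the `name`/`uses`/`retries` keys are invented for illustration):

```python
# Hypothetical workflow shape: each step names a handler ("uses") and an
# optional retry budget; handlers receive the results of earlier steps.
def run_workflow(steps: list, handlers: dict) -> dict:
    """Run steps in order, retrying a failing step up to its 'retries' count."""
    results = {}
    for step in steps:
        attempts = step.get("retries", 0) + 1
        for attempt in range(attempts):
            try:
                results[step["name"]] = handlers[step["uses"]](results)
                break  # step succeeded; move on
            except Exception:
                if attempt == attempts - 1:
                    raise  # retry budget exhausted; surface the failure
    return results
```

In the real platform each handler would be an agent adapter running in isolation, and independent steps could fan out in parallel rather than running sequentially.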

AI‑Driven Audit Log Generator

Summary

  • Generates audit‑logging code and UI for any backend framework, based on a high‑level spec.
  • Includes compliance templates (GDPR, SOC‑2) and automatic schema migration handling.
  • Core value: solves the “audit logging” pain point that many commenters struggled with.
| Key | Value |
| --- | --- |
| Target Audience | Backend engineers, compliance officers. |
| Core Feature | Spec‑to‑code generator + migration planner + UI scaffolder. |
| Tech Stack | Python (FastAPI), SQLAlchemy, Jinja2 templates. |
| Difficulty | Medium |
| Monetization | Revenue‑ready: $50/month per project, add‑on for custom compliance. |

Notes

  • “shepherdjerred” highlighted the time saved; this tool automates that process.
  • The compliance templates address the “audit log is different” frustration.
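One way to read “spec‑to‑code” here: a compliance template declares which fields every audit record must carry, and the generated code projects events onto that shape. A minimal Python sketch; the `SPEC` field lists are illustrative placeholders, not actual GDPR/SOC‑2 requirements:

```python
import datetime
import json

# Hypothetical compliance templates: the fields an audit record must capture.
# Real templates would come from legal review, not this sketch.
SPEC = {
    "gdpr": ["actor", "action", "subject", "timestamp"],
    "soc2": ["actor", "action", "resource", "outcome", "timestamp"],
}

def audit_record(template: str, event: dict) -> str:
    """Project an event onto the fields a compliance template requires."""
    record = {field: event.get(field) for field in SPEC[template]}
    if record.get("timestamp") is None:
        record["timestamp"] = datetime.datetime.now(
            datetime.timezone.utc
        ).isoformat()
    return json.dumps(record, sort_keys=True)
```

The migration planner would diff `SPEC` versions against the deployed schema and emit the corresponding column additions.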

Token Budget Optimizer for AI Coding

Summary

  • Monitors token usage across multiple AI providers, predicts cost, and suggests optimal provider/plan combinations.
  • Provides a dashboard and alerts when a session is about to hit a token limit.
  • Core value: tackles the “token limits” and “cost” frustrations expressed by many users.
| Key | Value |
| --- | --- |
| Target Audience | Individual developers, small teams. |
| Core Feature | Real‑time token accounting, provider‑agnostic cost modeling. |
| Tech Stack | TypeScript, Node.js, Redis for state, Grafana for dashboards. |
| Difficulty | Medium |
| Monetization | Revenue‑ready: $2/month per user, enterprise tier. |

Notes

  • Directly addresses the concerns of “imron” and “raw_anon_1111” about paying for 20x plans.
  • Helps teams stay within budget while still using powerful models.
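The cost model at the heart of this idea is small: per‑provider input/output token prices plus a minimum over providers. A sketch in Python with made‑up prices (real prices vary by provider, model, and over time, so a production version would sync a live price table):

```python
# Hypothetical per-1K-token prices in USD -- illustrative numbers only.
PRICES = {
    "provider_a": {"input": 0.003, "output": 0.015},
    "provider_b": {"input": 0.001, "output": 0.005},
}

def session_cost(provider: str, tokens_in: int, tokens_out: int) -> float:
    """Cost of a session under one provider's pricing."""
    p = PRICES[provider]
    return (tokens_in * p["input"] + tokens_out * p["output"]) / 1000

def cheapest_provider(tokens_in: int, tokens_out: int) -> str:
    """Suggest the provider that minimizes cost for a projected session."""
    return min(PRICES, key=lambda name: session_cost(name, tokens_in, tokens_out))
```

The dashboard and alerting layer would sit on top of the same accounting: alert when a running total of `session_cost` approaches a configured budget or a provider’s token limit.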

Secure AI Agent Sandbox

Summary

  • A sandboxed environment that enforces declarative permission manifests for each AI task (read/write/execute).
  • Integrates with CI/CD pipelines to automatically run agents in isolation before merging.
  • Core value: satisfies the “NIS2” and “security” concerns raised by “fendy3002” and “qudat”.
| Key | Value |
| --- | --- |
| Target Audience | Regulated industries, security‑focused teams. |
| Core Feature | Permission manifest language + runtime enforcement, audit logs. |
| Tech Stack | Go, gVisor, Open Policy Agent (OPA). |
| Difficulty | High |
| Monetization | Revenue‑ready: $200/month per sandbox, add‑on for compliance reports. |

Notes

  • Provides a practical solution to the “cannot give AI real access” problem.
  • The audit logs also double as evidence for compliance reviews.
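A permission manifest can be as simple as per‑operation allow‑lists of path patterns, enforced deny‑by‑default. A Python sketch of the idea (the card proposes Go plus OPA for real enforcement); note that `fnmatch`’s `*` also matches `/`, a simplification a real enforcer would tighten:

```python
import fnmatch

# Hypothetical manifest format: per-operation allow-lists of path patterns.
MANIFEST = {
    "read": ["src/*", "docs/*.md"],
    "write": ["src/generated/*"],
    "execute": [],
}

def allowed(manifest: dict, op: str, path: str) -> bool:
    """Least-privilege check: deny unless a pattern explicitly permits the op."""
    return any(
        fnmatch.fnmatch(path, pattern) for pattern in manifest.get(op, [])
    )
```

Every decision (allow or deny) would also be appended to the sandbox’s audit log, which is what makes the logs usable as compliance evidence.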

AI Code Review & Test Generation Service

Summary

  • A service that takes a PR, runs an AI agent to generate a detailed review, auto‑creates unit tests, and produces a summary report.
  • Integrates with GitHub/GitLab via webhooks; can be triggered manually or automatically on PR creation.
  • Core value: reduces the manual review burden and ensures tests are generated for every change.
| Key | Value |
| --- | --- |
| Target Audience | Teams that want to outsource part of the review process. |
| Core Feature | AI‑driven review + test scaffolding + summary. |
| Tech Stack | Python, FastAPI, OpenAI/Claude API, GitHub Actions. |
| Difficulty | Medium |
| Monetization | Hobby (open‑source) with optional paid “Premium Review” add‑on. |

Notes

  • Addresses the “review” loop that many commenters mention (e.g., “kaydub”, “fourthark”).
  • The generated tests help mitigate the “missing tests” frustration highlighted by “girvo” and “jamesmcq”.
