Project ideas from Hacker News discussions.

Vibe coding and agentic engineering are getting closer than I'd like

📝 Discussion Summary

7 Prevalent Themes from the Discussion

| # | Theme | Supporting Quote |
|---|-------|------------------|
| 1 | Code must match the reviewer’s personal vision | “I think I'm just too opinionated to go there. If I see something that works fine, but isn’t the way I'd do it, it doesn't matter if a human or an LLM wrote it I'm still in there making it match my vision.” — singpolyma3 |
| 2 | Changing working code just because it disagrees with your vision is usually discouraged | “Organizations usually are not looking for employees who change things that work fine, just because it disagrees with the “vision” of one employee.” — jstummbillig |
| 3 | AI can’t cure laziness; quality still depends on disciplined humans | “Even the most toxic places I've worked that kind of behavior would totally get you canned.” — jf22 |
| 4 | Future codebases risk becoming unmaintainable slop generated by LLMs | “People in the future are going to wonder what the hell we were thinking, when 30 years down the line everything is a hot mess of billions of lines of code generated by LLMs that no human has read almost any of it and is no longer possible for anyone to maintain neither with nor without LLMs.” — QuantumNomad_ |
| 5 | Vibe‑coding treats LLMs as a rubber‑duck, but the output still needs human steering | “The difference between writing assembly code and Ruby code is much smaller than the difference between programming and vibe coding.” — Daishiman |
| 6 | Humans remain accountable for AI‑generated code, especially regarding security | “If I get pwned because my AI agent wrote code that had a security vulnerability, none of my users are going to accept the excuse that I used AI and it’s a brave new world. I will get the blame, not Anthropic or OpenAI or Google but me.” — user34283 |
| 7 | Productivity gains shift effort toward architecture and oversight | “I literally do pay $20 a month to have a plumber service on call… I’m spending about 10 to 30 hours less time a week in the mechanical parts of writing and refactoring code, researching how to plumb components together… All of those hours are time that can now be spent doing “careful consideration”.” — cortesoft |

🚀 Project Ideas

ArchitectAI

Summary

  • AI‑driven architecture blueprint generator that validates design against best practices and flags anti‑patterns before code is written.
  • Core value: Turns vague product specs into maintainable, scalable system designs, reducing architectural debt.

Details

| Key | Value |
|-----|-------|
| Target Audience | Senior engineers, solution architects, and tech leads building new services |
| Core Feature | Generate, evaluate, and iterate on system diagrams and component contracts using LLMs; enforce style and modularity rules |
| Tech Stack | LLM (Claude/CodeLlama), Graphviz/Diagram.js, React front‑end, PostgreSQL for audit logs |
| Difficulty | High |
| Monetization | Revenue-ready: $19/mo per user or $299/yr for teams |

Notes

  • HN commenters repeatedly stress that “the hardest part is figuring out the architecture” and that LLMs can help if they produce auditable designs.
  • Potential to integrate with existing CI/CD pipelines for automated design gating.
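One concrete kind of anti‑pattern check ArchitectAI could run before any code exists is circular‑dependency detection over the proposed component graph. The sketch below is illustrative only (Python for brevity; the component names and the idea of representing a design as a dependency dict are assumptions, not a real ArchitectAI API):

```python
# Hypothetical sketch: flag a circular-dependency anti-pattern in a
# proposed component graph before any code is written.

def find_cycle(graph):
    """Return one dependency cycle as a list of components, or None."""
    visiting, visited = set(), set()

    def dfs(node, path):
        visiting.add(node)
        path.append(node)
        for dep in graph.get(node, []):
            if dep in visiting:                 # back-edge => cycle found
                return path[path.index(dep):] + [dep]
            if dep not in visited:
                cycle = dfs(dep, path)
                if cycle:
                    return cycle
        visiting.discard(node)
        visited.add(node)
        path.pop()
        return None

    for node in graph:
        if node not in visited:
            cycle = dfs(node, [])
            if cycle:
                return cycle
    return None

# Invented example design: "ledger" depending back on "api" is the flaw.
design = {
    "api":     ["auth", "billing"],
    "billing": ["ledger"],
    "ledger":  ["api"],
    "auth":    [],
}
print("anti-pattern detected:", find_cycle(design))
```

A design‑gating CI step would run checks like this against the generated blueprint and fail the pipeline before implementation starts.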

QualityPulse

Summary

  • Continuous AI code‑quality monitor that tags every line of AI‑generated code with reviewer credentials and tracks review completion rates.
  • Core value: Provides accountability for AI‑generated code, making it easy to enforce review policies and generate compliance reports.

Details

| Key | Value |
|-----|-------|
| Target Audience | Engineering managers, security officers, and compliance teams in regulated industries |
| Core Feature | Automatic tagging of LLM‑generated files, enforcement of a mandatory human‑review checklist, dashboard of unreviewed lines |
| Tech Stack | LLM (GPT‑4), Python backend, FastAPI, ElasticSearch, Grafana |
| Difficulty | Medium |
| Monetization | Revenue-ready: $0.05 per 1k lines analyzed (tiered pricing) |

Notes

  • Users like suzzer99 and hirvi74 lament “imposing ideals” and want a way to track who approved AI code; QualityPulse answers that need.
  • Could spark discussion on governance of AI‑written artifacts.

RefactorLoop

Summary

  • CLI/SDK that wraps LLM APIs to automatically propose and apply incremental refactorings, generate unit tests, and enforce linting rules on pull requests.
  • Core value: Turns every PR into a self‑guarded refactor, reducing manual review workload while improving code health.

Details

| Key | Value |
|-----|-------|
| Target Audience | DevOps engineers, maintainers of large open‑source projects, and CI/CD integrators |
| Core Feature | File‑level LLM suggestions, auto‑generated test suites, approval gates for failing lint |
| Tech Stack | Node.js, LLM wrappers (OpenAI, Claude), GitHub Actions, Docker |
| Difficulty | Medium |
| Monetization | Hobby |

Notes

  • Commenters such as Daishiman celebrate “refactor‑as‑a‑service” and note that “the best engineers are those who can iterate quickly”; RefactorLoop codifies that workflow.

  • Opens discussion on shifting reviewer responsibilities.
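The approval‑gate idea reduces to a small decision function: an LLM‑proposed refactor merges only if every configured check passes. This is a minimal sketch in Python (the stated stack is Node.js; the check names and the `gate` function are invented for illustration):

```python
# Minimal sketch of RefactorLoop's approval gate: a PR is approved only
# when every check (lint, auto-generated tests, etc.) has passed.

def gate(checks):
    """Given a dict of check-name -> passed, return (approved, failures)."""
    failures = sorted(name for name, passed in checks.items() if not passed)
    return (not failures, failures)

approved, reasons = gate({"lint": True, "unit-tests": False, "types": True})
print("approved:", approved, "| blocked by:", reasons)
```

In a real integration this function would consume the exit statuses of the linter and the auto‑generated test suite inside a CI job, posting `reasons` back to the PR as review comments.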

PromptVault

Summary

  • Collaborative marketplace of vetted, high‑quality LLM prompts for code generation, testing, and architecture reviews, with version control and usage analytics.
  • Core value: Provides a curated repository of proven prompts, eliminating trial‑and‑error and ensuring consistent output quality.

Details

| Key | Value |
|-----|-------|
| Target Audience | Individual developers, small teams, and education platforms |
| Core Feature | Searchable prompt library, rating system, CI integration to auto‑apply prompts, monetized premium bundles |
| Tech Stack | Python (FastAPI), React UI, PostgreSQL, Redis cache |
| Difficulty | Low |
| Monetization | Revenue-ready: $5/mo per user for premium access |

Notes

  • Inspired by discussions on “why do we keep repeating the same bad prompts?” – a community‑driven solution would be welcomed.
  • Could become a “npm for prompts,” fostering open discussion and reuse.
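The "npm for prompts" core is an append‑only store of versioned prompts plus community ratings. A hypothetical in‑memory sketch (class and method names invented; a real PromptVault would back this with PostgreSQL as listed above):

```python
# Illustrative in-memory sketch of PromptVault's core: versioned,
# append-only prompts with community star ratings.

class PromptVault:
    def __init__(self):
        self._store = {}    # name -> list of (version, text), append-only
        self._ratings = {}  # (name, version) -> list of star votes

    def publish(self, name, text):
        """Add a new version of a prompt; versions are never overwritten."""
        versions = self._store.setdefault(name, [])
        version = len(versions) + 1
        versions.append((version, text))
        return version

    def latest(self, name):
        return self._store[name][-1]

    def rate(self, name, version, stars):
        if not 1 <= stars <= 5:
            raise ValueError("stars must be 1..5")
        self._ratings.setdefault((name, version), []).append(stars)

    def score(self, name, version):
        votes = self._ratings.get((name, version), [])
        return sum(votes) / len(votes) if votes else None

vault = PromptVault()
v1 = vault.publish("unit-test-gen", "Write pytest cases for: {code}")
vault.rate("unit-test-gen", v1, 4)
vault.rate("unit-test-gen", v1, 5)
print(vault.latest("unit-test-gen"), vault.score("unit-test-gen", v1))
```

Keeping versions append‑only is what makes "this prompt produced that output" reproducible later, which is the main argument for a registry over ad‑hoc prompt sharing.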

CommitGovernor

Summary

  • Git‑hook server that uses AI to evaluate each commit’s stylistic and architectural impact, automatically rejecting or commenting on PRs that violate team‑defined coding ideals.
  • Core value: Enforces a shared code philosophy without manual gatekeeping, reducing “opinion fights” while preserving quality standards.

Details

| Key | Value |
|-----|-------|
| Target Audience | Open‑source maintainers, internal dev teams, and coding‑standard enforcers |
| Core Feature | AI‑driven diff analysis, rule engine configurable via YAML, automatic review comments, integration with GitHub/GitLab |
| Tech Stack | Go, LLMs (local Llama 3), Redis, Webhook endpoints |
| Difficulty | High |
| Monetization | Hobby |

Notes

  • Mirrors jcgrillo’s frustration about being a “prick about quality” and suzzer99’s desire for “different responsibility areas”; CommitGovernor automates that boundary.

  • Sparks conversation about automated enforcement vs human moderation.
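The YAML‑configurable rule engine can be sketched as a function from a team config plus a commit's added lines to a list of review comments. Sketched in Python for brevity (the stated stack is Go); the rule names and diff format are invented, and the dict stands in for what a YAML loader would return:

```python
# Sketch of a configurable commit-rule engine. `rules` is the structure
# a YAML parser would produce from a team's config file.
import re

rules = {
    "max-added-lines": 200,
    "forbid-patterns": [r"print\(", r"TODO"],
}

def evaluate(added_lines, rules):
    """Return review comments for the added lines of a commit."""
    comments = []
    if len(added_lines) > rules["max-added-lines"]:
        comments.append("commit too large for a single review")
    for pat in rules["forbid-patterns"]:
        for line in added_lines:
            if re.search(pat, line):
                comments.append(f"violates forbidden pattern {pat!r}: {line.strip()}")
    return comments

added = ["    print(user.token)  # TODO remove", "    return ok"]
for c in evaluate(added, rules):
    print("REVIEW:", c)
```

Wired into a pre‑receive hook or webhook endpoint, an empty comment list means the push is accepted; any comment is posted back to the PR, with the LLM layered on top for judgments that regexes can't express.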

AuditTrailAI

Summary

  • End‑to‑end logging platform that records every AI‑generated code snippet, the prompt used, reviewer decisions, and test outcomes, making the entire AI‑augmented development process auditable.
  • Core value: Provides traceability for compliance, debugging, and knowledge transfer, addressing concerns about “no one will know who wrote it”.

Details

| Key | Value |
|-----|-------|
| Target Audience | Security teams, auditors, and regulated enterprises |
| Core Feature | Immutable logs, searchable UI, alerting on un‑reviewed AI code, export to PDF/CSV for audits |
| Tech Stack | Elixir/Phoenix, PostgreSQL, ElasticSearch, React |
| Difficulty | Medium |
| Monetization | Revenue-ready: $30/mo per reviewer seat |

Notes

  • Directly addresses QuantumNomad_’s dread of “billions of lines of code no one can maintain” – AuditTrailAI makes the mess traceable.
  • Could generate lively HN debate on governance of AI‑authored artifacts.
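One common way to make such a log tamper‑evident is a hash chain: each entry commits to the previous entry's hash, so altering any past record breaks verification. This is a sketch of that technique under our own assumptions (Python for brevity; AuditTrailAI's actual storage on Elixir/PostgreSQL is unspecified, and the record fields are invented):

```python
# Sketch of a tamper-evident audit log via SHA-256 hash chaining.
import hashlib, json

def append(log, record):
    """Append a record, chaining it to the previous entry's hash."""
    prev = log[-1]["hash"] if log else "genesis"
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"record": record, "prev": prev, "hash": digest})

def verify(log):
    """True iff no entry has been altered since it was written."""
    prev = "genesis"
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"prompt": "add retry logic", "reviewer": None})
append(log, {"prompt": "add retry logic", "reviewer": "alice"})
print("chain valid:", verify(log))
log[0]["record"]["reviewer"] = "nobody"   # tampering breaks the chain
print("after tamper:", verify(log))
```

Exporting the chain head alongside a PDF/CSV audit report lets an external auditor re‑verify the entire history independently.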

VibeEngine

Summary

  • A lightweight, browser‑based IDE that blends graphical UI design (sketches → wireframes) with AI‑powered code synthesis, letting users “draw” desired behavior and receive clean, test‑covered code.
  • Core value: Lowers the barrier for non‑programmers to create functional tools while still producing maintainable code.

Details

| Key | Value |
|-----|-------|
| Target Audience | Product designers, startup founders, educators, and hobbyists |
| Core Feature | Drag‑and‑drop interface → AI generates code, auto‑adds tests, publishes to GitHub Pages |
| Tech Stack | WebAssembly, React, LLM (Claude), Firebase backend |
| Difficulty | Low |
| Monetization | Hobby |

Notes

  • Resonates with commenters’ worry that future readers will ask “what the hell we were thinking” – VibeEngine offers a concrete sandbox to experiment safely.
  • Opens discussion about the future of “no‑code” vs “vibe‑code” and their technical debt implications.
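The "draw → code" step is essentially a compiler from a wireframe spec to markup plus tests. A toy sketch of that mapping (the spec format and widget set are entirely invented; a real VibeEngine would hand the spec to an LLM rather than a template table):

```python
# Toy sketch: compile a declarative wireframe spec into HTML.
def synthesize(spec):
    """Map a list of widget dicts to HTML, one element per line."""
    templates = {
        "button": "<button>{label}</button>",
        "input":  '<input placeholder="{label}">',
    }
    return "\n".join(templates[w["type"]].format(label=w["label"])
                     for w in spec)

spec = [{"type": "input", "label": "email"},
        {"type": "button", "label": "Subscribe"}]
print(synthesize(spec))
```

The maintainability argument hinges on the spec, not the output, being the source of truth: regenerating code from an edited sketch avoids hand‑patching generated files.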
