Project ideas from Hacker News discussions.

If you're going to vibe code, why not do it in C?

๐Ÿ“ Discussion Summary (Click to expand)

The Hacker News discussion revolves primarily around the implications and feasibility of using Large Language Models (LLMs) for coding, often termed "vibe coding."

Here are the three most prevalent themes:

1. The Necessity of Static Checks and Language Safety for LLM-Generated Code

A significant portion of the discussion centered on the idea that LLMs are prone to generating unsafe or incorrect code, making languages with strong safety guarantees crucial. Rust was frequently cited as an example because its robust type system and mandatory checks catch errors at compile time, which helps mitigate LLM hallucinations.

Supporting Quotes:

  • Regarding the risk of LLMs generating unsafe code: "Without checks and feedback, LLMs can easily generate unsafe code." from user "stared".
  • Defending high-guardrail languages: "I would never vibe code in any other environment than one in which many/most logical errors are by definition impossible to compile." from user "xandrius".
  • Contrasting with lower-level languages: "Rust has lots of checks that C and assembly don't, and AI benefits from those checks." from user "9rx".

2. Skepticism Regarding the Robustness and Reliability of "Vibe Coded" Software

Many users strongly disagreed with the premise that "vibe coding" (generating complex systems primarily through LLM prompts without deep understanding) produces reliable software. Skeptics pointed out that practical, maintainable systems require more than superficial fluency.

Supporting Quotes:

  • Directly refuting the idea that vibe coding works for complex systems: "No, it absolutely doesn't. We've seen so much vibe coded slop that it's very clear that vibe coding produces a hot mess which no self respecting person would call acceptable." from user "bigstrat2003".
  • Highlighting the necessity of human expertise for real-world application: "Anyone that suggests that AI can produce a solid application on its own is a fraud." from user "barrister".
  • Stressing that the operator must understand the tool: "If you don't know much about how it works [the LLM], you are doomed." from user "hamzaawan".

3. Discussion on Designing Ideal Languages Specifically for AI Coders

The conversation frequently drifted toward a hypothetical: if LLMs are the primary mechanism writing code, should we still be using languages designed for human ergonomics (like C, Rust, or Python)? Users speculated on what a language optimized for an AI audience would look like.

Supporting Quotes:

  • The core question posed by the original author: "If we designed a programming language with the idea that it would be primarily or exclusively vibe coded, what would that language look like?" from user "sramsay".
  • Suggesting features for an AI-centric language: "The key to a language for LLMs is: make sure all the context is local, and explicit." from user "ModernMech".
  • Advocating for formal languages: "The thing that would really make sense is a proved language like Coq or Promela. You can then really just leave the implementation to the AI." from user "sebstefan".


🚀 Project Ideas

LLM-Assisted Formal Specification Engine (VeriCode)

Summary

  • A tool that addresses the desire for rigorous, testable code generation by having LLMs draft specifications (preconditions, postconditions, invariants) that are then enforced with formal methods tools like TLA+ or with advanced Rust features (such as exhaustive match statements derived from the specification).
  • Solves the pain point that LLMs hallucinate code riddled with correctness issues by forcing the AI to articulate its intent formally before implementation.
  • Core Value: Bridging the gap between "vibe coding" and the formal correctness required for robust, commercially viable systems.

Details

  • Target Audience: Senior developers, systems architects, and teams building safety-critical or complex state-management software (who value Haskell/Rust-style type guarantees).
  • Core Feature: An interactive prompt interface where the LLM generates specifications in formal languages (e.g., Eiffel contracts, TLA+ spec fragments, or heavily annotated Rust trait definitions) from a high-level goal; a sketch of the enforcement loop follows below.
  • Tech Stack: Python/FastAPI backend, TypeScript/React frontend, integration with existing formal verification tools (a TLA+ parser/checker, or Rust's contract-style annotations and advanced type system features).
  • Difficulty: High (requires expertise in both LLM prompt engineering for specification languages and formal verification tooling integration).
  • Monetization: Hobby
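To make the loop concrete, here is a minimal sketch of the enforcement half in Python, assuming a two-step flow where one LLM pass drafts the contract and a second pass drafts the body. The `contract` decorator and the `pop_max` example are hypothetical illustrations, not an existing API:

```python
# Minimal sketch of the "spec first, then implementation" loop (all names
# hypothetical). The LLM's job is to emit the contract; the harness then
# enforces it against whatever implementation is generated next.

from functools import wraps
from typing import Callable

def contract(pre: Callable[..., bool], post: Callable[..., bool]):
    """Eiffel-style design-by-contract decorator: the spec is executable."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            assert pre(*args, **kwargs), f"precondition violated: {fn.__name__}"
            result = fn(*args, **kwargs)
            assert post(result, *args, **kwargs), f"postcondition violated: {fn.__name__}"
            return result
        return wrapper
    return decorate

# Step 1 (assumed): an LLM turns "pop the largest element" into this contract.
# Step 2: a second LLM pass writes the body; the contract catches hallucinations.
@contract(
    pre=lambda xs: len(xs) > 0,                  # non-empty input required
    post=lambda r, xs: all(r >= x for x in xs),  # result dominates remaining items
)
def pop_max(xs: list[int]) -> int:
    xs.sort()
    return xs.pop()

if __name__ == "__main__":
    print(pop_max([3, 1, 4, 1, 5]))  # 5; an off-by-one bug would trip the postcondition
```

The same shape scales up: swap the runtime assertions for TLA+ model checking or Rust's type system, and the generated contract becomes a machine-checked gate rather than a comment.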

Notes

  • Why HN commenters would love it: Directly addresses the debate about safety checks ("We need as many checks as possible - and ideally ones that come for free"). It also caters to those advocating for languages like Haskell or Lean, while letting current LLMs drive the verification step.
  • Potential for discussion or practical utility: Extremely high. It turns the "LLM writes specs" idea, which many agree on, into a structured, verifiable artifact, satisfying the demand for formal rigor without requiring humans to write the underlying low-level implementation logic.

Contextual Language Translator for Legacy/Niche Systems (Polyglot Bridge)

Summary

  • A specialized translation service/tool designed to automatically convert high-level, well-specified LLM output (like idiomatic Rust or Haskell pseudocode) into less common, highly constrained, or legacy targets: clean C, specific Assembly subsets, or even ancient proprietary languages (like Lingo, mentioned by one user).
  • Addresses the niche need to interface with legacy infrastructure and the desire to "vibe code" in modern languages while deploying to environments where only ancient or low-level code is supported.
  • Core Value: Preserving the high semantic intent of LLM-assisted code while fitting it into low-level constraints.

Details

  • Target Audience: Developers maintaining legacy codebases, embedded systems engineers, and those needing to translate modern patterns into constrained environments (like the user translating Shockwave Lingo).
  • Core Feature: A source-to-target translation pipeline that prioritizes semantic preservation and explicit resource handling (no "vibe" memory management), grounded in an initial formal specification layer; see the sketch below.
  • Tech Stack: Rust (for high-performance parsing/AST manipulation), an LLM (for initial concept generation), and custom compiler/transpiler stages targeting C, Assembly, or proprietary DSLs.
  • Difficulty: High (requires deep knowledge of multiple target runtimes and memory models, even if the LLM handles the initial source logic).
  • Monetization: Hobby
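As a sketch of the verification gate, the loop below (in Python for brevity; `translate_with_llm` and the retry budget are hypothetical placeholders) treats the C compiler as ground truth and feeds its diagnostics back into the next translation attempt:

```python
# Sketch of the translate-then-verify loop (names hypothetical). The LLM only
# proposes a C translation; the C compiler is the ground truth that gates it.

import pathlib
import subprocess
import tempfile

def translate_with_llm(source: str, feedback: str = "") -> str:
    """Placeholder for the LLM call that maps modern source to plain C."""
    raise NotImplementedError("wire up your model client here")

def compile_check(c_code: str) -> str:
    """Return compiler diagnostics ('' means the candidate at least compiles cleanly)."""
    with tempfile.TemporaryDirectory() as tmp:
        src = pathlib.Path(tmp) / "out.c"
        src.write_text(c_code)
        proc = subprocess.run(
            ["cc", "-std=c99", "-Wall", "-Werror", "-c", str(src), "-o", "/dev/null"],
            capture_output=True, text=True,
        )
        return proc.stderr

def translate(source: str, max_rounds: int = 3) -> str:
    feedback = ""
    for _ in range(max_rounds):
        candidate = translate_with_llm(source, feedback)
        feedback = compile_check(candidate)
        if not feedback:  # clean compile under -Wall -Werror: accept this round
            return candidate
    raise RuntimeError("no compiling translation within budget:\n" + feedback)
```

The same gate generalizes to assemblers or proprietary toolchains: anything that emits machine-checkable diagnostics can close the loop, though a compile check alone does not prove semantic equivalence, which is why the formal specification layer matters.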

Notes

  • Why HN commenters would love it: It sidesteps the C vs. Rust debate by treating the intermediate LLM output as the trusted artifact and then lowering it into the explicit machine logic of C/Assembly, as some commenters wanted. It also directly addresses the user who mentioned translating Lingo.
  • Potential for discussion or practical utility: High. It positions LLMs as excellent translators between well-defined paradigms, not just natural language to code.

LLM Requirement Refinement Agent (Ticket Architect)

Summary

  • A service that sits between Product Managers (PMs) and developers, taking vague user stories or stakeholder input and iteratively refining them into highly detailed, unambiguous specifications suitable for direct consumption by an implementation-focused LLM (or a very focused human developer).
  • Solves the core complaint that requirements are often the biggest bottleneck ("The hardest part is understanding the requirements... PMs will dodge you").
  • Core Value: Automating the necessary 4+ rounds of requirement lobbying and clarification using AI agents operating over collaboration tools.

Details

  • Target Audience: Engineering managers, consultant developers, and software teams struggling with requirement ambiguity and stakeholder management.
  • Core Feature: An agent integrated with Jira/Slack that runs a structured dialogue cycle with stakeholders, asking targeted questions about edge cases, error handling, and non-functional requirements drawn from a "DevEx best practices" prompt overlay (sketched below).
  • Tech Stack: A multi-agent system built with frameworks like LangChain or AutoGen, Slack/Jira API integrations, and robust prompt engineering around DevEx best practices and formal ticket structure.
  • Difficulty: Medium for the core AI logic; High for robust, reliable API integration and for adoption in a dysfunctional organizational environment.
  • Monetization: Hobby
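A minimal sketch of that dialogue cycle in Python. The prompts, the `ask_model` stub, and the `get_answer` callback are all hypothetical; in practice `get_answer` would post to Slack or a Jira comment and await the stakeholder's reply:

```python
# Sketch of the requirement-refinement loop (all names hypothetical). Each
# round the agent asks one targeted question; the loop ends when the model
# reports no remaining ambiguity or the round budget runs out.

def ask_model(prompt: str) -> str:
    """Placeholder for the LLM call (e.g., via LangChain or a raw API client)."""
    raise NotImplementedError("wire up your model client here")

def refine_ticket(ticket: str, get_answer, max_rounds: int = 4) -> str:
    """get_answer(question) relays a question to the stakeholder and returns the reply."""
    spec = ticket
    for _ in range(max_rounds):
        question = ask_model(
            "You are a senior engineer triaging a ticket. Given the spec below, "
            "ask the single most important unresolved question about edge cases, "
            "error handling, or non-functional requirements. Reply DONE if none.\n\n"
            + spec
        )
        if question.strip() == "DONE":
            break
        answer = get_answer(question)  # e.g., post to Slack, await reply
        spec += f"\n\nQ: {question}\nA: {answer}"
    return ask_model(
        "Rewrite the following Q&A-annotated ticket as an unambiguous, "
        "implementation-ready specification:\n\n" + spec
    )
```

Capping the rounds matters: the agent should converge on an implementation-ready spec rather than interrogate stakeholders indefinitely, which is exactly the failure mode that makes humans dodge these conversations.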

Notes

  • Why HN commenters would love it: It directly addresses the frequent complaint that work is blocked by vague requirements ("I'd never be able to give my Jira tickets to an LLM because they're too damn vague"). It externalizes the difficult interpersonal skill of requirement extraction into a tool.
  • Potential for discussion or practical utility: Massive utility. If it works, it transforms the most common non-coding bottleneck in large organizations.