Project ideas from Hacker News discussions.

Kotlin creator's new language: a formal way to talk to LLMs instead of English

📝 Discussion Summary

Three dominant themes in the discussion

Theme 1 – “Natural‑language vs. formal language”: debate over whether English or a new formal spec should drive code generation.
  • “We built LLMs so that you can express your ideas in English and no longer need to code.” — lich_king
  • “English is really too verbose and imprecise for coding, so we developed a programming language you can use instead.” — lich_king

Theme 2 – Tooling & spec‑to‑code workflow: skepticism about the practicality of mapping high‑level specs to code, and the challenges of determinism and context.
  • “This doesn't really make much sense to me. Models aren't deterministic – every time you would try to re‑apply you'd likely get different output.” — the_duke
  • “Every non‑trivial codebase would be made of hundreds of specs that interact and influence each other – very hard to read all specs that influence functionality and keep it coherent.” — the_duke

Theme 3 – LLM limitations & context bottleneck: current models struggle with context understanding rather than prompt ambiguity, and need correctness‑preserving transformations.
  • “The problem with formal prompting languages is they assume the bottleneck is ambiguity in the prompt. In my experience building agents, the bottleneck is actually the model's context understanding.” — tonipotato
  • “I want to see an LLM combined with correctness preserving transforms.” — amelius

These three threads capture the core concerns: whether to rely on natural language or a formal spec, how to build reliable tooling around it, and what the real limits of LLMs are in practice.


🚀 Project Ideas

Spec2Code: Natural‑Language Spec to Code Generator

Summary

  • Turns plain‑English or markdown‑style specifications into runnable code in multiple languages.
  • Provides a clear syntax, side‑by‑side diff, and version control integration to eliminate ambiguity in spec languages like CodeSpeak.

Details

  • Target Audience: Backend developers, low‑code enthusiasts, and teams adopting spec‑driven development.
  • Core Feature: Live spec editor → auto‑generated code skeletons, diff view, and Git integration.
  • Tech Stack: React + Monaco Editor, Node.js + OpenAI API, Docker for sandboxed compilation, GitHub Actions for CI.
  • Difficulty: Medium
  • Monetization: Revenue‑ready – freemium with paid tiers for enterprise GitHub integration and multi‑language support.

Notes

  • HN commenters lament the lack of syntax: “I tried looking through some of the spec samples, and it was not clear what the 'language' was or that there was any syntax.” (matthewkayin)
  • The side‑by‑side diff feature directly addresses “Instead of using tabs, it would be much better to show the comparison side by side.” (cesarvarela)
  • Aims to mitigate the “Models aren’t deterministic” concern raised by the_duke by pinning generation settings and versioning outputs, so re‑applying an unchanged spec yields the same code.
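One way to realize the determinism note above is a content‑addressed cache in front of the model: once a given spec version has been generated, re‑applying it returns the stored output verbatim instead of re‑querying the model. A minimal sketch; the names `SpecCache` and `get_or_generate` are hypothetical, not part of any existing tool:

```python
import hashlib
import json


def spec_key(spec: str, model: str, params: dict) -> str:
    """Content-address a spec: identical (spec, model, params) map to one key."""
    blob = json.dumps({"spec": spec, "model": model, "params": params}, sort_keys=True)
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()


class SpecCache:
    """Pin generated code to its spec version.

    Re-applying an unchanged spec returns the cached output byte-for-byte,
    so the model's non-determinism never reaches the repository.
    """

    def __init__(self):
        self._store: dict[str, str] = {}

    def get_or_generate(self, spec: str, model: str, params: dict, generate) -> str:
        key = spec_key(spec, model, params)
        if key not in self._store:
            # Only hit the (non-deterministic) model on a cache miss.
            self._store[key] = generate(spec)
        return self._store[key]
```

Editing the spec changes the key, which triggers a fresh generation and a reviewable diff against the previous output.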

RefactorGuard: LLM‑Powered, Correctness‑Preserving Refactoring

Summary

  • Uses LLMs to suggest refactorings while guaranteeing logic preservation through automated unit test generation and formal verification.
  • Solves the frustration of “if you refactor a program, make it do anything but keep the logic of the program intact.” (amelius)

Details

  • Target Audience: Software engineers, QA teams, and open‑source maintainers.
  • Core Feature: LLM‑driven refactor suggestions → auto‑generated test suites → formal verification (e.g., using Z3 or Dafny).
  • Tech Stack: Python, OpenAI API, PyTest, Z3 SMT solver, VS Code extension.
  • Difficulty: High
  • Monetization: Revenue‑ready – per‑refactor licensing for enterprises, open‑source core.

Notes

  • Addresses the “correctness preserving transforms” demand: “I want to see an LLM combined with correctness preserving transforms.” (amelius)
  • Counters the “Models aren’t deterministic” critique from the_duke: suggestions may vary run to run, but only logic‑preserving changes pass the verification gate.
  • Bridges the natural‑language vs. formal‑language debate raised by lich_king: developers keep describing refactors in English while formal verification supplies the missing precision.
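The auto‑generated test‑suite step could start as plain differential testing before any SMT work: run the original and the refactored function on the same randomized inputs and reject the suggestion on the first mismatch. A sketch under that assumption (function names are illustrative):

```python
import random


def behaviorally_equal(original, refactored, gen_args, trials=500, seed=0):
    """Differential check for a proposed refactor.

    Runs both versions on the same random inputs; a single mismatch rejects
    the suggestion and returns the counterexample. Passing every trial is
    evidence, not proof -- the formal-verification stage (Z3/Dafny) would
    still be needed for a guarantee.
    """
    rng = random.Random(seed)  # seeded, so any failure is reproducible
    for _ in range(trials):
        args = gen_args(rng)
        if original(*args) != refactored(*args):
            return False, args  # counterexample found
    return True, None
```

For a sum‑like function, `gen_args` might return a single random list; the fixed seed makes a rejected refactor reproducible in CI.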

PromptContext Manager: Context‑Aware LLM Prompting Tool

Summary

  • Visualizes and manages the LLM context window, automatically truncates or prioritizes content, and ensures consistent results across sessions.
  • Addresses the bottleneck of context understanding highlighted by tonipotato.

Details

  • Target Audience: AI researchers, prompt engineers, and developers building agents.
  • Core Feature: Context window analyzer, priority tagging, auto‑summarization, and a “context history” panel.
  • Tech Stack: Chrome extension + React, Node.js backend, OpenAI API, LangChain for context handling.
  • Difficulty: Medium
  • Monetization: Hobby (open‑source) with optional paid analytics add‑on.

Notes

  • Directly responds to “The problem with formal prompting languages is they assume the bottleneck is ambiguity in the prompt… the bottleneck is actually the model's context understanding.” (tonipotato)
  • Provides a practical utility for teams that struggle with “Same precise prompt, wildly different results depending on what else is in the context window.” (tonipotato)
  • Encourages reproducible agent behavior, a pain point for many HN users experimenting with LLM agents.
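The priority‑tagging idea can be sketched as a greedy packer: given tagged context items and a token budget, keep the highest‑priority items that fit and emit them in their original order. The function name and the whitespace token counter are placeholder assumptions; a real tool would use the model's tokenizer:

```python
def pack_context(items, budget, count_tokens=lambda text: len(text.split())):
    """Greedy, priority-first context packing.

    items  -- list of (priority, text); higher priority is kept first,
              ties fall back to original position.
    budget -- maximum total tokens (approximated here by word count).

    Returns the surviving texts in their ORIGINAL order, so the packed
    prompt still reads coherently.
    """
    by_priority = sorted(enumerate(items), key=lambda pair: (-pair[1][0], pair[0]))
    kept, used = [], 0
    for idx, (_prio, text) in by_priority:
        cost = count_tokens(text)
        if used + cost <= budget:  # skip items that would blow the budget
            kept.append((idx, text))
            used += cost
    return [text for idx, text in sorted(kept)]
```

Because the selection depends only on the tags and the budget, the same inputs always yield the same packed context, which is what makes agent runs reproducible.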
