Project ideas from Hacker News discussions.

Kotlin creator's new language: talk to LLMs in specs, not English

📝 Discussion Summary

1. Spec‑driven development is the core idea, but its formality is debated

“We built LLMs so that you can express your ideas in English and no longer need to code.” – lich_king
“The idea is this would be a kind of IL for natural language queries.” – kevin_thibedeau
“It is a formal ‘way’ aka like using json or xml like tons of people are already doing.” – koolala

2. Determinism and consistency of LLM‑generated code are a major concern

“Models aren’t deterministic – every time you would try to re‑apply you’ll likely get different output.” – the_duke
“If the spec is so complete that it covers everything, you might as well write the code.” – tomtomtom777
“The entire thing about determinism is a red herring… prompt instability doesn’t matter because… the code does not matter if the spec is formal enough.” – vidarh

3. Human‑coding vs LLM‑coding: job impact and trust

“We’re looking to eliminate the need for humans to touch code, but we’re not there yet.” – abreslav
“If you’re going to eliminate our job, we need to be sure the output is 100% deterministic.” – newsoftheday

4. Practicality and skepticism of CodeSpeak/“English‑like” languages

“It looks like a tool that just turns code into specs, and it’s not clear how it’s better than existing wrappers.” – pshirshov
“The tool severely limits the configurability of the agentic generation process.” – abreslav
“I don’t think this is the gotcha you think it is… it’s just a code‑generator wrapper.” – paxys

These four themes capture the bulk of the discussion: the promise of spec‑driven, English‑like interfaces; the technical hurdle of deterministic LLM output; the debate over whether LLMs will replace human coders; and the ongoing skepticism about whether CodeSpeak actually offers a meaningful improvement over current tooling.


🚀 Project Ideas

SpecSync CLI

Summary

  • Keeps a Markdown‑based spec file and the corresponding code file in lockstep by generating a minimal LLM prompt that applies only the diff between them.
  • Eliminates spec drift, reduces manual review, and keeps incremental updates small and auditable.

Details

  • Target Audience: Teams using spec‑driven development (e.g., CodeSpeak, Blackbird, Kiro)
  • Core Feature: Diff‑based LLM prompt generator that updates code to match spec changes
  • Tech Stack: Rust (CLI), OpenAI/Anthropic API, Git hooks, JSON diff library
  • Difficulty: Medium
  • Monetization: Revenue‑ready; $9/month per repo (free tier: 5 repos)

Notes

  • HN commenters repeatedly flagged spec drift and the need for diff‑based updates (e.g., “specs often drift from the implementation”).
  • Provides a practical workflow: specsync build → LLM generates patch → specsync apply → code updated (see the sketch after this list).
  • Encourages keeping specs under version control and reviewing spec and code changes side by side.
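
A minimal sketch of the diff‑based prompt step, assuming the last applied spec is cached under a .specsync/ directory (the file names and layout are hypothetical; the Details above list Rust for the real CLI, and Python is used here only to illustrate the flow):

```python
# Hypothetical core of `specsync build`: the LLM only ever sees the spec diff
# plus the current code, keeping prompts small and re-runs easy to compare.
import difflib
from pathlib import Path

def build_patch_prompt(old_spec: str, new_spec: str, code: str) -> str:
    """Build an LLM prompt containing only the spec diff plus the current code."""
    spec_diff = "\n".join(
        difflib.unified_diff(
            old_spec.splitlines(), new_spec.splitlines(),
            fromfile="spec.md (last applied)", tofile="spec.md (current)",
            lineterm="",
        )
    )
    return (
        "The spec changed as follows; update the code to match ONLY these changes.\n\n"
        f"--- SPEC DIFF ---\n{spec_diff}\n\n"
        f"--- CURRENT CODE ---\n{code}\n\n"
        "Reply with a unified diff against the current code."
    )

if __name__ == "__main__":
    # `specsync build` would read these from the repo and its cache directory.
    old_spec = Path(".specsync/spec.last").read_text()
    new_spec = Path("spec.md").read_text()
    code = Path("src/main.rs").read_text()
    print(build_patch_prompt(old_spec, new_spec, code))
```

A `specsync apply` step would then apply the returned patch and refresh the cached spec, so the next run again diffs only against the last applied state.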

PromptLint

Summary

  • Validates, formats, and converts natural‑language prompts/specs into a lightweight formal DSL (e.g., CodeSpeak‑lite) with linting rules.
  • Reduces ambiguity, ensures consistency, and improves LLM reliability.

Details

  • Target Audience: Prompt engineers, LLM developers, spec writers
  • Core Feature: Syntax‑aware linter, auto‑formatter, and DSL‑to‑LLM prompt generator
  • Tech Stack: Node.js, TypeScript, ESLint‑style rule engine, OpenAI API
  • Difficulty: Medium
  • Monetization: Revenue‑ready; $5/month per user (free tier: 3 users)

Notes

  • Addresses comments about “no clear syntax” and “ambiguity of intent” (e.g., “the language was not clear”); a lint‑rule sketch follows this list.
  • Integrates with IDEs (VS Code extension) for real‑time feedback.
  • Enables reproducible prompts and easier onboarding for non‑technical stakeholders.
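
A toy illustration of what PromptLint rules could look like, assuming a line‑oriented spec format (the rule names and the ambiguity word list are invented; the Details above list a Node.js/TypeScript ESLint‑style engine, and Python is used here only to keep all sketches in one language):

```python
# Hypothetical PromptLint rules: flag hedged wording and overly long requirements.
import re
from dataclasses import dataclass

@dataclass
class LintIssue:
    line: int
    rule: str
    message: str

# Words that usually signal an untestable or ambiguous requirement.
AMBIGUOUS = re.compile(r"\b(should|could|maybe|etc\.?|somehow|as needed)\b", re.I)

def lint_spec(text: str) -> list[LintIssue]:
    issues: list[LintIssue] = []
    for i, line in enumerate(text.splitlines(), start=1):
        if AMBIGUOUS.search(line):
            issues.append(LintIssue(i, "no-ambiguous-wording",
                                     "Hedged wording; state a testable requirement instead."))
        if len(line.split()) > 40:
            issues.append(LintIssue(i, "one-requirement-per-line",
                                     "Very long line; split it into separate requirements."))
    return issues

if __name__ == "__main__":
    sample = "The service should maybe retry failed uploads etc."
    for issue in lint_spec(sample):
        print(f"line {issue.line}: [{issue.rule}] {issue.message}")
```

The auto‑formatter and DSL‑to‑prompt generator would sit on top of the same rule engine, so an IDE extension could fix issues as well as report them.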

KnowledgeVault

Summary

  • A knowledge‑graph service that ingests proprietary docs, codebases, and domain data, then exposes a query API for LLMs to retrieve context‑aware facts.
  • Cuts hallucinations and improves domain‑specific accuracy.

Details

  • Target Audience: Enterprise teams, AI‑powered product developers
  • Core Feature: Automated ingestion → embeddings → graph indexing → LLM‑friendly query endpoint
  • Tech Stack: Python, Neo4j, Sentence‑Transformers, FastAPI, Docker
  • Difficulty: High
  • Monetization: Revenue‑ready; $50/month per org + $0.01 per query

Notes

  • Responds to concerns like “LLMs lack domain knowledge” and “esoteric knowledge ratio”.
  • Allows teams to maintain a living knowledge base that LLMs can query for grounding context, reducing the need to re‑teach the model (see the retrieval sketch after this list).
  • Supports incremental updates and versioning of knowledge artifacts.
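
A minimal retrieval sketch, assuming document chunks are embedded with Sentence‑Transformers and matched by cosine similarity; a real deployment would persist nodes and relationships in Neo4j and serve this behind a FastAPI endpoint (the example chunks and query are invented):

```python
# Ingestion + retrieval in miniature: embed document chunks once, then return
# the closest chunks as grounding context for an LLM prompt.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# "Ingestion": chunks of internal docs/code, embedded and indexed up front.
chunks = [
    "Orders are idempotent: retrying POST /orders with the same key is a no-op.",
    "The billing service rounds amounts to the nearest cent.",
    "Feature flags live in flags.yaml and are reloaded every 30 seconds.",
]
embeddings = model.encode(chunks, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = embeddings @ q  # dot product equals cosine similarity on unit vectors
    top = np.argsort(-scores)[:k]
    return [chunks[i] for i in top]

if __name__ == "__main__":
    # The returned chunks would be prepended to the LLM prompt as context.
    print(retrieve("How does retrying order creation behave?"))
```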

TokenBudget Dashboard

Summary

  • Monitors LLM token usage, context window consumption, and cost in real time; offers pruning suggestions and budgeting alerts.
  • Helps teams keep token budgets predictable and avoid runaway costs.

Details

  • Target Audience: DevOps, AI ops, cost‑conscious developers
  • Core Feature: Token counter, context‑window visualizer, cost estimator, CI/CD integration
  • Tech Stack: Go, Grafana, Prometheus, OpenAI API, GitHub Actions
  • Difficulty: Medium
  • Monetization: Hobby (open source)

Notes

  • Addresses the “token budgeting” pain point highlighted by “budgeted distributed system” and “token use per task step”.
  • Provides actionable insights: which prompts consume most tokens, how to split context, and when to truncate.
  • Integrates with existing CI pipelines to enforce token limits before deployment (a minimal budget check is sketched after this list).
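
A sketch of the CI gate idea, assuming prompt templates are checked in as text files and counted with tiktoken (the budget and price constants are placeholders; the Details above list a Go/Prometheus stack, and Python is used here only for illustration):

```python
# Fail the pipeline if any prompt template exceeds the agreed token budget.
import sys
import tiktoken

BUDGET = 8_000       # assumed per-template token limit
PRICE_PER_1K = 0.01  # assumed $ per 1K input tokens, for the cost estimate

def count_tokens(path: str) -> int:
    enc = tiktoken.get_encoding("cl100k_base")
    text = open(path, encoding="utf-8").read()
    n = len(enc.encode(text))
    print(f"{path}: {n} tokens (~${n / 1000 * PRICE_PER_1K:.4f} per call)")
    return n

if __name__ == "__main__":
    # e.g. in GitHub Actions: python token_budget.py prompts/*.txt
    worst = max(count_tokens(p) for p in sys.argv[1:])
    if worst > BUDGET:
        sys.exit(f"Token budget exceeded: {worst} > {BUDGET} tokens")
```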
