Three dominant themes in the discussion
| # | Theme | Representative quotes |
|---|---|---|
| 1 | “Natural‑language vs. formal language” – debate over whether English or a new formal spec should drive code generation. | “We built LLMs so that you can express your ideas in English and no longer need to code.” – lich_king <br> “English is really too verbose and imprecise for coding, so we developed a programming language you can use instead.” – lich_king |
| 2 | Tooling & spec‑to‑code workflow – skepticism about the practicality of mapping high‑level specs to code, given model non‑determinism and the difficulty of keeping many interacting specs coherent. | “This doesn't really make much sense to me. Models aren't deterministic – every time you would try to re‑apply you'd likely get different output.” – the_duke <br> “Every non‑trivial codebase would be made of hundreds of specs that interact and influence each other – very hard to read all specs that influence functionality and keep it coherent.” – the_duke |
| 3 | LLM limitations & context bottleneck – focus on how current models struggle with context understanding rather than prompt ambiguity, and the need for correctness‑preserving transformations. | “The problem with formal prompting languages is they assume the bottleneck is ambiguity in the prompt. In my experience building agents, the bottleneck is actually the model's context understanding.” – tonipotato <br> “I want to see an LLM combined with correctness preserving transforms.” – amelius |
These three threads capture the core concerns: whether to rely on natural language or a formal spec, how to build reliable tooling around it, and what the real limits of LLMs are in practice.
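The non‑determinism concern in theme 2 comes down to how LLMs pick the next token. A minimal sketch, using plain temperature sampling over hypothetical next‑token logits (the `sample_token` helper and the logit values are illustrative, not from any real model), shows why re‑running the same prompt can yield different output while greedy decoding stays stable:

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index from raw logits.

    temperature == 0 means greedy decoding (always the argmax);
    temperature > 0 means sampling from the softmax distribution.
    """
    if temperature == 0:
        # Greedy decoding: deterministic, same input -> same output.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random draw: the source of run-to-run variation.
    return rng.choices(range(len(logits)), weights=probs)[0]

# Hypothetical next-token scores for three candidate tokens.
logits = [2.0, 1.8, 0.5]

# Greedy: identical result on every run, regardless of seed.
greedy = [sample_token(logits, 0, random.Random(i)) for i in range(5)]

# Temperature 1.0 with varying seeds: results differ across runs.
sampled = [sample_token(logits, 1.0, random.Random(i)) for i in range(5)]

print(greedy)   # all the same index
print(sampled)  # a mix of indices
```

Even with temperature pinned to 0, production systems add further variance (batching, floating‑point nondeterminism, model updates), which is why the_duke's point about re‑applying a spec holds in practice.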