Four key takeaways from the discussion
| # | Theme | What people said |
|---|---|---|
| 1 | LLMs still need human guidance and are error‑prone | “It knows what it is, it's a very well known symbol. But translating that knowledge to code is something else.” – marginalia_nu <br> “You have to do this when you ask it to do something it has never seen before.” – tartoran <br> “The model stumbles when asked to invent procedural geometry… you need a strict format and a rendering test.” – hrmtst93837 |
| 2 | Prompt design & planning are the real skill | “Give yourself permission to play. Understand basic concepts… then pick a hard coding problem… and ask the agent to write it for you.” – mmaunder <br> “If you tell it the code is slow… you need to give it a concrete metric and ask for a fix.” – bryanrasmussen <br> “Bad input > bad output.” – graphememes |
| 3 | Agents are more than just the raw LLM | “LLM + tools + memory + orchestration = agent.” – xlth <br> “You’re not using an LLM if you’re using Claude Code, Cursor, etc. You’re using an agent.” – consumer451 <br> “The confusion comes from product names blurring these boundaries.” – xlth |
| 4 | Hype vs reality – human skill still matters | “You can’t fire an LLM for producing bad code. If you could, you would have to fire them all.” – jqpabc123 <br> “There are no shortcuts to developing skill and taste.” – nprateem <br> “LLMs are great at autocomplete, but when they produce the bulk of a project the burden on humans is huge.” – grey‑area |
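The "LLM + tools + memory + orchestration = agent" formula from theme 3 can be illustrated with a minimal sketch. Everything here is hypothetical: the `llm` function is a hard-coded stub standing in for a real model call, and the loop shows only the shape of the idea, not any particular product's implementation.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call: it just pattern-matches."""
    if "tool calculator" in prompt:
        return "DONE: the answer is in memory"
    if "2 + 2" in prompt:
        return "CALL calculator: 2 + 2"
    return "DONE: no tool needed"

def calculator(expression: str) -> str:
    # A deliberately tiny evaluator for "a + b" style input (no eval()).
    a, _, b = expression.split()
    return str(int(a) + int(b))

TOOLS = {"calculator": calculator}  # the "tools" part of the formula

def run_agent(task: str, max_steps: int = 3) -> list[str]:
    memory: list[str] = [f"task: {task}"]      # memory: the growing transcript
    for _ in range(max_steps):                 # orchestration: the outer loop
        reply = llm("\n".join(memory))         # LLM: decide the next action
        memory.append(f"model: {reply}")
        if reply.startswith("CALL "):
            name, arg = reply[5:].split(": ", 1)
            memory.append(f"tool {name}: {TOOLS[name](arg)}")  # run the tool
        else:
            break                              # model signalled it is finished
    return memory
```

Calling `run_agent("What is 2 + 2?")` yields a transcript in which the stub model requests the calculator, sees its result on the next loop iteration, and stops, which is the whole point of the quote: the raw model is only one component of that loop.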
These four themes capture the bulk of the conversation: the current limits of code‑generation LLMs, the importance of careful prompting, the distinction between a plain model and an agent, and the realistic expectations for human‑AI collaboration.