1. LLMs still make mistakes – you have to watch them
“If you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side.” – nadis
“I have to review the diff… 90% of the problems it finds can be discounted.” – teaearlgraycold
2. Speed-up vs. tech-debt trade-off
“The mistakes have changed a lot – they are not simple syntax errors anymore, they are subtle conceptual errors.” – nadis
“I’ve seen the model create a new, bigger bug… the model can just keep going and it ends up adding more tech-debt.” – daxfohl
3. The role of the engineer is shifting
“LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.” – atonse
“I’m a builder, not a coder. I want to build, not write every line.” – mkozlows
4. Harnesses matter – Claude Code, Copilot, Cursor, etc.
“Claude Code is a CLI tool… it can do complete projects in a single command.” – spaceman_2020
“Copilot is not on par with CC or Cursor.” – maxdo
“The difference is the harness, not the model.” – nsingh2
5. Economic impact & skill devaluation
“The pay will drop because the barrier to entry is lower.” – iwontberude
“You’ll be paid less because the model can do the work.” – daxfohl
“The senior devs will be fine, but the juniors will have to up-skill.” – riku_iki
6. Trust, accountability, and safety concerns
“You can’t hold an AI accountable the way you hold a human.” – coffeeaddict1
“If the model keeps pressing the big red button, it will do it.” – arthurcolle
“You need to verify the output; otherwise you’re just outsourcing the risk.” – handoflixue
These six themes capture the main threads of the discussion: the need for human oversight, the trade-off between speed and quality, the changing nature of engineering work, the importance of the tooling stack, the economic implications for developers, and the lingering questions of trust and responsibility.