1. Nondeterminism and Unreliability of AI Tools
Users highlight AI's stochastic outputs as a core flaw, contrasting it with deterministic tools like compilers. "Non determinism of AI feels like a compiler which will on same input code spit out different executable on every run" (general1465). "It’s wild that programmers are willing to accept less determinism" (Q6T46nT668w6i3m). grim_io counters that "Non determinism does not imply non correctness," but many find that debugging nondeterministic output feels more like ritual than engineering.
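The compiler analogy can be made concrete: with temperature-based sampling, identical input yields different outputs by design. The toy decoder below (a sketch, not any vendor's actual implementation; `sample_token` and the logit values are invented for illustration) shows greedy decoding collapsing to one answer while temperature > 0 spreads the same input across several.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick one token index from a toy logit distribution.
    temperature == 0 degenerates to argmax, i.e. fully deterministic."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax with temperature, then weighted random choice.
    weights = [math.exp(l / temperature) for l in logits]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

logits = [2.0, 1.5, 0.1]  # hypothetical scores for three candidate tokens

# Greedy decoding: 100 runs on the same input, always the same token.
greedy = {sample_token(logits, 0, random.Random()) for _ in range(100)}

# Temperature 1.0: 100 runs on the same input, multiple distinct tokens.
sampled = {sample_token(logits, 1.0, random.Random()) for _ in range(100)}
```

Here `greedy` is a single-element set while `sampled` almost surely contains several tokens, which is exactly the "same input, different executable" behavior the commenters object to, and also why grim_io's point holds: each sampled token can still be a correct continuation.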
2. Variable Productivity Gains and Hype vs. Reality
Opinions on productivity gains are split: optimists report 2x-10x speedups with carefully tuned workflows, while skeptics call the gains a "mirage." "I'll be lucky if I finish 30% faster than if I just code the entire damn thing myself" (credit_guy). shepherdjerred shares successes built on linters, tests, and agents; superze laments spending "$200 on generations that now require significant manual rewrites." llmslave2 critiques the marketing cycle: "We're 6 months into 'AI will be writing 90% of code in three months'."
3. New Abstraction Layer and Steep Learning Curve
Mastering agents, prompts, and workflows is seen as essential but overwhelming, fueling "skill issue" self-doubt. "There's a new programmable layer of abstraction to master... agents, subagents, their prompts" (alexcos, quoting OP). nowittyusername: "Most people have not fully grasped how LLM's work... everyone is the grandma that has been handed a palm pilot." Successful users like shepherdjerred detail concrete setups (e.g., instructing the agent to "make at least 20 searches to online documentation").