1. LLMs Remix the Past, Lacking True Novelty
Many argue LLMs merely interpolate their training data, remixing the past rather than innovating: "LLMs are literally technology that can only reproduce the past" (crystal_revenge). Critics point to the absence of genuinely novel concepts or original theorem proving: "they really can't produce truly novel concepts" (crystal_revenge). Defenders counter with practical utility: "I've used them to create my own personalized text editor, perfectly tailored to what I actually want" (handoflixue).
2. Rapid Progress in Coding Agents Boosts Productivity
2025 marked breakthroughs in tools like Claude Code, shifting the conversation from hype to working practice: "2025 was a big year... LLM coding in 2024 sucked" (noodletheworld). Users report large efficiency gains: "Claude Code helps me make a majority of changes to our codebase now... insane efficiency boost" (n2d4). Skeptics demand measurable outcomes: "Did more software ship in 2025 than in 2024?" (bandrami).
3. HN Skepticism Amid Hype, Environmental, and Societal Costs
Dismissal stems from unmet promises, AI-generated slop, and the footprint of data centers: "HN leans snake oil" (syndacks); "severe impact on the environment... tax money being siphoned" (techpression). Nostalgic commenters contrast the AI frenzy with slower eras: "back in the day, when a year of progress was... syntactic sugar to Java" (waldrews). Optimists urge firsthand trials: "a huge number of people don't understand how capable it is yet" (threethirtytwo).