1. AI claims are often overstated and need external validation
“Insanity: They also claimed ChatGPT solved novel Erdős problems when that wasn’t the case.”
“emp17344: Some of these were initially hyped as novel solutions, and then were quietly downgraded after it was discovered the solutions weren’t actually novel.”
2. Who deserves credit when an LLM is involved?
“famouswaffles: The paper has all those prominent institutions who acknowledge the contribution so realistically, why would you be skeptical?”
“nozzlegear: …the difference between the helicopter and the pilot… the helicopter didn’t fly itself.”
3. LLMs are powerful tools but not autonomous discoverers
“outlace: The headline may make it seem like AI just discovered some new result in physics all on its own, but reading the post, humans started off trying to solve some problem, it got complex, GPT simplified it and found a solution with the simpler representation.”
“cpard: The “AI replaces humans in X” narrative is primarily a tool for driving attention and funding.”
4. The hype cycle and real‑world impact of AI in research
“dakolli: They just want people to think the barrier of entry has dropped to the ground and that value of labour is getting squashed.”
“stouset: …we are already at stage 3 for software development and arguably step 4.”
These four threads capture the bulk of the discussion: skepticism about novelty claims, disputes over authorship and credit, the tool-versus-autonomous-discoverer debate, and the broader cultural and industrial implications of AI-assisted research.