1. Trust & verification are the biggest hurdles
Many commenters note that an AI‑generated model can look correct yet still be wrong, and that the only reliable way to catch this is to test its output against the original Excel file or with unit tests.
“If they have the skills to verify the Excel model then they can apply the same approach to the numbers produced by the AI‑generated model, even if they can’t inspect it directly.” – taneq
“The Excel never had any tests and people just trusted it.” – theshrike79
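The verification approach the commenters describe can be sketched as a small regression harness: export known input/output rows from the original workbook, then check the ported model against them. A minimal sketch in Python, where `net_revenue`, the CSV column names, and the file layout are all hypothetical stand-ins, not taken from the discussion:

```python
import csv
import math


def net_revenue(units: int, unit_price: float, discount: float) -> float:
    """Hypothetical stand-in for one formula ported out of the spreadsheet."""
    return units * unit_price * (1.0 - discount)


def verify_against_excel(csv_path: str, tol: float = 1e-6) -> list:
    """Compare the ported model against rows exported from the workbook.

    Each CSV row holds the formula's inputs plus the value Excel computed
    ('excel_result'); returns the 1-based row numbers where the port disagrees.
    """
    mismatches = []
    with open(csv_path, newline="") as fh:
        for i, row in enumerate(csv.DictReader(fh), start=1):
            expected = float(row["excel_result"])
            actual = net_revenue(
                int(row["units"]),
                float(row["unit_price"]),
                float(row["discount"]),
            )
            if not math.isclose(actual, expected, rel_tol=tol):
                mismatches.append(i)
    return mismatches
```

An empty return value means the port reproduces every exported row; any listed row number pinpoints a formula worth inspecting by hand.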
2. Catastrophic failures are a real fear
The discussion repeatedly stresses that a single AI‑driven failure could carry huge financial or regulatory consequences, and that assigning blame afterwards is hard.
“The scapegoating is different. Using an LLM makes them more culpable for the failure, because they should have known better than to use a tech that is well known to systematically lie.” – ktzar
“The only people I see talking about MCP are managers who don’t do anything but read LinkedIn posts and haven’t touched a text editor in years if ever.” – Gigachad
3. Productivity gains are offset by technical debt and quality issues
Users report impressive speedups on green‑field tasks, but the code produced often needs heavy cleanup, has hidden bugs, or introduces new debt.
“I’m trying to learn rust coming from python … the idea of deploying a rust project at my level of ability with an AI at the helm is terrifying.” – myfakebadcode
“The good thing is that it makes better code if modularity is strict as well.” – K0balt
4. Two distinct user archetypes emerge
Some treat AI as a junior assistant whose output they review; others hand the entire problem off to the model and ignore the underlying logic.
“People outsourcing thinking and entire skillset to it – they usually have very little clue in the topic.” – Phew
“I treat it like an advanced auto‑complete. That’s basically how people need to treat it.” – tom_m
5. Enterprise constraints and security are a major barrier
Large organisations struggle to integrate AI tools because of policy restrictions, a lack of tooling, and the risk of "shadow AI" running unaudited code.
“The Copilot button in Excel can’t access the excel file of the window it’s in.” – Havoc
“Non‑technical people with root access and agents that write code are a security nightmare.” – CISOs
These five themes capture the core concerns and observations that dominate the discussion.