1. Execution and Automated Testing Essential for AI Agents
Commenters stress that AI coding agents need environments where they can run code, execute tests, and debug their own output in order to produce reliable results.
"without the ability to run tests the AI will really go off the rails super quick" (QuercusMax).
"having a template for creating a project that includes at least one passing test... helps a lot" (simonw).
"LLMs are very good at looking at a change set and finding untested paths" via reviewer agents (planckscnst).
2. Human Developer Tools Boost AI Productivity
Formatters, linters, debuggers, and tests aid humans and AI alike, with agents adapting via training data or prompts.
"Isnβt it funny how thatβs exactly the kind of stuff that helps a human developer be successful and productive, too?" (ManuelKiessling).
"anything that's helpful for human developers... will also help LLMs be more productive. For largely identical reasons" (formerly_proven).
"agents generate code that conforms to Black quite effectively" from project patterns (simonw).
3. Formal Verification Promising but Challenging for AI
AI could automate proof writing, leveraging strong type systems (e.g., Haskell) and proof assistants (e.g., Lean), but the cost of writing specifications, agents' struggles with ownership semantics, and constantly changing requirements limit mainstream use.
"formal verification tools... They're potentially a fantastic unlock for agents" (simonw).
"LLM agents tend to struggle with semi-complex ownership... reach for unnecessary/dangerous escape hatches" in Rust (formerly_proven).
"the biggest reason formal verification isn't used much... requirements are changing constantly" (Analemma_).