Four prevailing themes in the discussion
| # | Theme | Key points & representative quotes |
|---|---|---|
| 1 | Local open‑weight models (Qwen) are getting close to frontier performance | “I’ve been testing Qwen3.5‑35B‑A3B over the past couple of days and it’s a very impressive model.” – sosodev<br>“I tried the new qwen model in Codex CLI and in Roo Code and I found it to be pretty bad.” – zoba |
| 2 | Tool‑calling and agentic coding still struggle; harness choice matters | “I have more luck with https://pi.dev than OpenCode.” – sosodev<br>“I tried the new qwen model in Codex CLI… it just started writing all the files from scratch rather than using the vite CLI tool.” – zoba |
| 3 | Terminology is still fuzzy – “harness”, “orchestrator”, “agent” | “I use the term ‘harness’ for those – or just ‘coding agent’.” – simonw<br>“Orchestrator means the system to run multiple agents together.” – nvader |
| 4 | Geopolitical and economic competition for AI talent and models | “We need local competition and headhunting to make it fly.” – ivan_gammel<br>“China is giving their weights away for free, and not demanding that any government ban the rest.” – petcat<br>“We need to rally behind Mistral.” – petcat |
These four threads capture the bulk of the conversation: the rapid rise of open‑weight models, the practical hurdles of tool use in agentic coding, the still‑evolving ecosystem vocabulary, and the broader geopolitical stakes surrounding AI development.