1. Speed vs. cost – the “fast‑but‑expensive” debate
Users are excited about the speed of the new Codex‑Spark model, but many point out that the price is a deal‑breaker (a rough cost extrapolation follows the quotes below).
“/fast is insanely expensive.” – bearjaws
“I’m burning through $22 in 15 minutes.” – bearjaws
“It costs $200 for a month.” – wahnfrieden
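A quick back‑of‑the‑envelope extrapolation shows why commenters see these figures as being in tension. The burn rate ($22 per 15 minutes) and the $200/month plan price come from the quotes above; the 40‑hour week is purely an illustrative assumption, and the monthly plan's actual usage limits are not specified in the thread.

```python
# Rough extrapolation of the figures quoted above (not an official pricing model).
burn_per_15_min = 22.0               # dollars per 15 minutes, as reported by bearjaws
hourly_rate = burn_per_15_min * 4    # ~$88/hour
weekly_cost = hourly_rate * 40       # ~$3,520 for an assumed 40-hour week
plan_price = 200.0                   # dollars/month, as quoted by wahnfrieden

print(f"Implied hourly rate: ${hourly_rate:.0f}")
print(f"Implied 40-hour-week cost: ${weekly_cost:,.0f} vs. ${plan_price:.0f}/month plan")
```

At the quoted burn rate, sustained use runs to thousands of dollars per week, which is the gap driving the "fast‑but‑expensive" complaint.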
2. Quality matters more than raw speed
A recurring sentiment is that a faster model is useless if it can’t produce correct, agent‑friendly code, though others counter that raw speed becomes critical once AI is put into production.
“I don’t want a faster, smaller model. I want a faster, better model.” – behnamoh
“The Codex model has no idea about how to call agents.” – behnamoh
“Speed is especially important when productionizing AI.” – jusgu
3. Hardware & scalability concerns
The discussion then pivots to hardware: whether Cerebras’s wafer‑scale chips, GPUs, TPUs, or other accelerators can actually deliver the promised throughput without prohibitive cost or yield problems.
“Cerebras is a dinner‑plate sized chip… $1m+/chip.” – latchkey
“TPUs don’t have enough memory either, but they have really great interconnects.” – latchkey
“Cerebras will be substantially better for agentic workflows in terms of speed.” – the_duke
These three themes—pricing, model quality, and underlying hardware—drive the bulk of the conversation.