Three prevailing themes
| # | Theme | Supporting quotes |
|---|---|---|
| 1 | Fine‑tuning still matters for niche, cost‑sensitive, or on‑device use cases | “Fine tuning is still useful for cost/latency‑sensitive applications…” – joefourier<br>“The biggest case for fine tuning is probably that you can take small models, fine tune them for applications that require structured output, and then run cheap inference at scale.” – prettyblocks<br>“I’m trying to embed a fine‑tuned tiny model into my C++ game so it can provide a narrative…” – throwaway6977 |
| 2 | Fine‑tuning is becoming less relevant for general‑purpose LLMs | “Fine tuning is a story that is nice to tell but that with modern LLMs makes less and less sense…” – antirez<br>“With the current context windows… it’s much faster and reliable for them to use tools and find the information before replying.” – antirez<br>“Wouldn’t a RAG make more sense for this use case?” – dotancohen |
| 3 | Real‑world companies are actively fine‑tuning models for specific products | “Cursor used online RL to get +28% approval rate…” – danielhanchen<br>“Vercel used RFT for their AutoFix model…” – danielhanchen<br>“Doordash uses LoRA, QLoRA for a ‘Generalized Attribute Extraction model’” – danielhanchen<br>“NASA flood water detection…” – danielhanchen |
These three threads (cost‑efficiency for specialized tasks, the shift toward prompting/RAG for general models, and the growing list of industry adopters) capture the core of the discussion.
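The LoRA technique mentioned in theme 3 can be sketched in a few lines. The idea, per the original LoRA formulation: leave the pretrained weight matrix `W` frozen and train only two small matrices `A` (r × d_in) and `B` (d_out × r), applying `W' = W + (alpha / r) · B·A`. This is an illustrative pure‑Python sketch, not Doordash's actual implementation; the dimensions and helper names are made up for the example.

```python
# Minimal LoRA sketch: a frozen weight W plus a trainable low-rank update B @ A.
# Plain Python lists, no external dependencies.

def lora_param_counts(d_out: int, d_in: int, r: int) -> tuple[int, int]:
    """Return (full fine-tune params, LoRA trainable params) for one layer."""
    full = d_out * d_in          # every entry of W is trainable
    lora = r * d_in + d_out * r  # only A (r x d_in) and B (d_out x r)
    return full, lora

def lora_forward(x, W, A, B, alpha: float, r: int):
    """Compute y = (W + (alpha/r) * B @ A) @ x without materializing B @ A."""
    d_out, d_in = len(W), len(W[0])
    scale = alpha / r
    # Low-rank path: compute A @ x once (r values), then B @ (A @ x).
    Ax = [sum(A[k][j] * x[j] for j in range(d_in)) for k in range(r)]
    return [
        sum(W[i][j] * x[j] for j in range(d_in))          # frozen base path
        + scale * sum(B[i][k] * Ax[k] for k in range(r))  # trainable update
        for i in range(d_out)
    ]

# Why "cheap inference at scale" and small trainable footprints go together:
# for a 4096 x 4096 projection at rank 8, the adapter is ~0.4% of the
# full matrix's parameters.
full, lora = lora_param_counts(4096, 4096, 8)
print(full, lora)  # 16777216 65536
```

The update can also be merged into `W` after training (`W += scale * B @ A`), so serving cost is identical to the base model, which is part of why the "fine‑tune small models, run cheap inference" position in theme 1 remains practical.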