1. Open‑source sustainability & licensing
“I would love to keep it open source forever, but I can't promise that for now.” — santiago‑pl
The discussion repeatedly raises concerns about whether GoModel can stay truly open‑source, with users asking for a clear license commitment and worrying about future commercial restrictions.
2. Technical design: semantic caching and unified API
“GoModel embeds requests and does vector similarity lookup before proxying… Regarding the cache invalidation, there is no 'purging' involved – the model is part of the namespace (params_hash includes the LLM model, path, guardrails hash, etc). TTL takes care of the cleanup later.” — giorgi_pro
“Yes, we have an OpenAI‑compatible API and we develop GoModel with Postel’s law in mind … If they make a minor API‑level change, GoModel will handle it without any code changes.” — santiago‑pl
The project’s architecture emphasizes a flexible, standards‑based surface that can adapt to upstream API tweaks with minimal friction.
3. Practical value for multi‑provider routing & usage tracking
“I've been building an AI platform (HOCKS AI) where I route different tasks to different providers … The biggest pain point has been exactly what you describe: switching models without changing app code.” — goodkiwi
“Currently we have a unified concept of User‑Paths. Once you add a specific header OR assign User‑Path to an API key, you can track the usage based on this.” — santiago‑pl
Users appreciate the ability to switch LLM providers seamlessly and request richer telemetry (e.g., per‑user usage, cost breakdown).