Three dominant themes from the discussion
-
Rediscovering non‑embedding, “library‑style” search
Softwaredoug notes that people are returning to traditional, file‑system‑based semantic search that resembles how librarians organize shelves.
"The real thing I think people are rediscovering with file system based search is that there's a type of semantic search that's not embedding based retrieval." – softwaredoug
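A minimal sketch of that "library‑style" idea in Python: the directory hierarchy itself carries the meaning, so a search can score files by how well query terms match their shelf location, with no embedding model anywhere. All names here are illustrative; a real system would layer richer metadata on top of the paths.

```python
import os
import tempfile

def path_search(root, query):
    """Score files by how many query terms appear in their directory
    path or filename -- the folder hierarchy carries the semantics,
    no embedding-based retrieval involved."""
    terms = query.lower().split()
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            rel = os.path.relpath(os.path.join(dirpath, name), root)
            haystack = rel.lower().replace(os.sep, " ")
            score = sum(term in haystack for term in terms)
            if score:
                hits.append((score, rel))
    return [path for _, path in sorted(hits, reverse=True)]

# Tiny demo corpus: folder names encode topics, like library shelves.
with tempfile.TemporaryDirectory() as root:
    for rel in ("billing/invoices/2024-q1.txt",
                "billing/refund-policy.txt",
                "hr/onboarding/checklist.txt"):
        full = os.path.join(root, rel)
        os.makedirs(os.path.dirname(full), exist_ok=True)
        open(full, "w").close()
    results = path_search(root, "billing refund")

print(results)  # most specific shelf match first
```

The point of the sketch is that a well‑organized tree already answers "what is this about" the way a librarian's shelving scheme does, which is the kind of non‑embedding semantic search the comment describes.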
-
Agents can drive any retrieval backend (Lucene, ontological NLP, etc.)
Morkalork reports that letting an LLM interact with a Lucene index yields strong results, showing retrieval is not limited to vector‑DB pipelines.
"Doesn't have to be tho, I've had great success letting an agent loose on an Apache Lucene instance. Turns out LLMs are great at building queries." – morkalork
-
Practical and cost hurdles in real‑world deployments
Mandeeepj highlights the steep cost of sandbox environments, questioning the viability of $70k‑plus annual expenses. Meanwhile, pboulos points out that messy organizational structures make RAG adoption especially hard.
"even a minimal setup ... would put us north of $70,000 a year..." – mandeeepj
"From personal experience, getting RAG to work well in places where the structure of the organisation ... is far from hierarchical ... is a very hard task." – pboulos