## Three dominant themes in the discussion
| Theme and core insight | Representative quote | Commenter |
|---|---|---|
| 1️⃣ Context‑rich knowledge bases – Users stress that a deliberately organized folder or naming system lets an LLM “see” meeting notes, prior designs, Slack threads, etc., turning the model into a true context‑engineering tool. | “there’s a real difference between prompting … and prompting an LLM that has access to your project folder with six months of meeting notes, three prior design docs, the Slack thread …” | WillAdams |
| 2️⃣ Privacy‑first personal fine‑tuning – Many want to fine‑tune locally but cannot share private data; they struggle to collect high‑quality session logs or synthetic examples without exposing sensitive content. | “I’d love to ‘send them to go looking for stuff for you’, but local models aren’t great at this today, so I’m stuck collecting data I can’t send to third parties.” | embedding‑shape |
| 3️⃣ Limits of LLM‑driven search vs. traditional indexing – While some view LLMs as query engines that can replace structured search, others note that without pre‑organized context (or metadata) they consume excess tokens and become unreliable, so human‑friendly folder layouts or vector stores remain useful. | “Why does AI need that folder structure? … Because at 52k files a flat list is a horrendous list to scroll through for a human.” | laurrowyn |
These three themes capture the community’s focus on (1) structuring context for LLMs, (2) enabling private personal fine‑tuning, and (3) balancing LLM‑based retrieval with conventional indexing approaches.
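The first theme, structuring a project folder so an LLM can "see" accumulated context, can be sketched in a few lines. The following is a minimal, hypothetical illustration (the function name, file extensions, and character budget are assumptions, not anything proposed in the thread): it walks a project directory, concatenates text files under a size cap, and returns a prompt-ready context block.

```python
from pathlib import Path

def gather_context(root, extensions=(".md", ".txt"), max_chars=8000):
    """Collect text files from a project folder into one prompt-ready block.

    Each file is prefixed with its relative path so the model can tell
    meeting notes from design docs; collection stops once the character
    budget (a stand-in for a token budget) would be exceeded.
    """
    chunks = []
    total = 0
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file() or path.suffix.lower() not in extensions:
            continue
        text = path.read_text(encoding="utf-8", errors="ignore")
        snippet = f"### {path.relative_to(root)}\n{text.strip()}\n"
        if total + len(snippet) > max_chars:
            break  # respect the context budget rather than truncating mid-file
        chunks.append(snippet)
        total += len(snippet)
    return "\n".join(chunks)
```

A deliberate naming scheme (e.g. date-prefixed meeting notes) makes the sorted traversal yield a chronologically coherent context, which is exactly the "deliberately organized folder" advantage the commenters describe.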