Project ideas from Hacker News discussions.

Show HN: A MitM proxy to see what your LLM tools are sending

📝 Discussion Summary

1. The need for LLM‑intercept tooling and observability
Users are actively building or looking for simple proxies that capture every request, response, and token count so they can debug cost, optimize context windows, and store the data for later analysis.

“This is incredibly useful for understanding the black box of LLM API calls. The real‑time token tracking is game‑changing for debugging why certain prompts are so expensive and optimizing context window usage.” – asyncadventure
“I think you can actually later store this in a database and start querying and optimizing what is happening there.” – jmuncor
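Both comments point toward the same capture path: pull the `usage` object that OpenAI-style chat completion responses already carry, and log it in a queryable shape. A minimal sketch in Go (the field names follow OpenAI's chat completions response format; other providers differ):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// usage mirrors the "usage" object that OpenAI-style chat
// completion responses include alongside the model output.
type usage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}

// extractUsage pulls the token counts out of a captured response
// body so a proxy can log them per request.
func extractUsage(body []byte) (usage, error) {
	var resp struct {
		Usage usage `json:"usage"`
	}
	err := json.Unmarshal(body, &resp)
	return resp.Usage, err
}

func main() {
	body := []byte(`{"choices":[],"usage":{"prompt_tokens":812,"completion_tokens":64,"total_tokens":876}}`)
	u, err := extractUsage(body)
	if err != nil {
		panic(err)
	}
	// One JSON line per call; a log in this shape can later be
	// bulk-loaded into a database for the querying jmuncor describes.
	line, _ := json.Marshal(u)
	fmt.Println(string(line))
	// → {"prompt_tokens":812,"completion_tokens":64,"total_tokens":876}
}
```

Emitting one JSON line per intercepted call keeps the proxy itself stateless; the analysis and cost debugging happen offline against the log.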

2. Security pitfalls of MITM‑style interceptors
Several commenters point out that many DIY proxies disable TLS verification or leave a remotely reachable backdoor open, making them dangerous if misused.

“This took all of 5 minutes to find reading through main.py on my phone.” – catlifeonmars
“I’m not sure you fully understand the implications of the misconfiguration of mitmproxy there. Effectively you provided an easily accessible front door for remote code execution on a user’s machine.” – catlifeonmars
“I think the main problem is when OP wrote: ‘I built this’ instead of ‘I prompted this’.” – 101008

3. Criticism of “vibe‑coded” open‑source projects
The community expresses frustration that many AI‑generated repos are released with little review, poor documentation, or hidden security flaws, eroding trust in the open‑source ecosystem.

“Vibe‑coded software is complete but never as good as you could make it, so the effort in reviewing it is mostly wasted.” – throwaway277432
“I don’t think you can get professionals to review code that you didn’t even bother typing yourself.” – lionkor
“The README is really annoying.” – CurleighBraces


🚀 Project Ideas

AIProxyHub: Secure, Token‑Aware Proxy for LLM Agents

Summary

  • Intercepts all HTTP(S) traffic from LLM client CLIs (Claude, Gemini, OpenAI, etc.) with no per‑tool configuration.
  • Provides real‑time token usage, PII detection, and secure credential injection.
  • Core value: eliminates manual proxy setup, reduces token waste, enforces governance.

Details

| Key | Value |
| --- | --- |
| Target Audience | Developers, DevOps, and AI Ops teams using LLM CLIs |
| Core Feature | Transparent HTTP(S) proxy with token accounting, PII masking, and credential sandboxing |
| Tech Stack | Go (proxy core), Rust (token counter), SQLite/Redis (storage), React + Tailwind (UI), OpenTelemetry exporter |
| Difficulty | Medium |
| Monetization | Revenue‑ready: subscription + per‑token analytics tier |
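The PII‑masking feature can be sketched with stdlib regexes applied to prompts before they leave the proxy. The two patterns below (email address, US SSN) are illustrative placeholders, not a complete rule set:

```go
package main

import (
	"fmt"
	"regexp"
)

// Illustrative patterns only; a production masker would need a much
// broader rule set (and likely named-entity recognition).
var (
	emailRe = regexp.MustCompile(`[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}`)
	ssnRe   = regexp.MustCompile(`\b\d{3}-\d{2}-\d{4}\b`)
)

// maskPII redacts matches in a prompt before the proxy forwards it
// to the upstream LLM API.
func maskPII(prompt string) string {
	prompt = emailRe.ReplaceAllString(prompt, "[EMAIL]")
	return ssnRe.ReplaceAllString(prompt, "[SSN]")
}

func main() {
	fmt.Println(maskPII("Contact alice@example.com, SSN 123-45-6789."))
	// → Contact [EMAIL], SSN [SSN].
}
```

Masking on the proxy side means every LLM CLI behind it gets the same governance policy without any client changes.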

Notes

  • LudwigNagasena said, “I had to vibe code a proxy to hide tokens from agents” – this tool removes that pain.
  • asyncadventure highlighted the need for “real‑time token tracking” – our dashboard delivers it.
