Project ideas from Hacker News discussions.

Show HN: Ghidra MCP Server – 110 tools for AI-assisted reverse engineering

📝 Discussion Summary (Click to expand)

Three prevailing themes in the discussion

# Theme Key points & supporting quotes
1 Robust function‑hashing for version tracking The project’s core claim is a normalized hash that survives recompilation and rebasing.
• “The core idea is a normalized function hashing system… not raw bytes or absolute addresses.” – xerzes
• “What does your function‑hashing system offer over ghidra’s built‑in FunctionID, or the bindiff plugin?” – Retr0id
• “Going off of only FunctionID will either have a lot of false positives or false negatives…” – chc4
2 LLM + MCP (or skill‑based) workflows for reverse engineering Users debate the trade‑offs between MCP tool calls and in‑context skill execution, noting speed, efficiency, and tool overload.
• “MCP is still valuable for connecting to external systems. But for reasoning… in‑context beats tool‑call round‑trips by orders of magnitude.” – DonHopkins
• “Skills can compose and iterate at the speed of light… while MCP forces serialization and waiting for carrier pigeons.” – DonHopkins
• “I made a skill for this functionality and let Codex plough through in agentic mode.” – wombat23
• “Tool stuffing degrades LLM tool use quality. 100+ tools is crazy.” – underlines
3 Security & validation concerns with MCP‑driven analysis When LLMs process untrusted binaries, injection attacks and output filtering become critical.
• “When an AI agent interacts with binary analysis tools, there are two injection vectors… tool output injection… indirect prompt injection via analyzed code.” – longtermop
• “Filtering the tool output before it reaches the model is a real gap in most setups.” – longtermop
• “Working on this problem at Aeris PromptShield – happy to share attack patterns we’ve seen if useful.” – longtermop

These themes capture the community’s focus on reliable function matching, the practicalities of integrating LLMs with MCP/skills, and the emerging need for secure, validated tool pipelines.


🚀 Project Ideas

FuncSync Cloud

Summary

  • Cloud‑based service that stores normalized function hashes and user annotations, automatically propagating names, types, and comments across binary versions.
  • Provides a lightweight Ghidra plugin that syncs local annotations with the cloud, eliminating manual re‑annotation after each patch.

Details

Key Value
Target Audience Reverse engineers, malware analysts, security researchers
Core Feature Automatic annotation sync via function hashing, web UI for diff/merge
Tech Stack Go/Node.js backend, PostgreSQL, Ghidra Java plugin, React frontend
Difficulty Medium
Monetization Revenue‑ready: $5/month per user

Notes

  • HN commenters lament losing annotations when binaries shift: “I spend hours annotating functions in version 1.07, then version 1.08 drops and every address has shifted — all your work invisible.”
  • The service solves this pain by keeping a registry of 154K+ hashes and 1,300+ annotations, as the original author demonstrated.
  • Practical utility: teams can share annotation databases, audit changes, and maintain consistency across CI pipelines.

MCP Tool Optimizer

Summary

  • AI‑driven tool‑selection layer that reduces a large MCP tool set (e.g., 110 Ghidra tools) to a minimal, context‑relevant subset before each LLM call.
  • Uses embeddings of tool descriptions and current analysis context to rank relevance, preventing context window overload.

Details

Key Value
Target Audience LLM‑powered reverse engineering workflows, AI researchers
Core Feature Dynamic tool filtering, relevance ranking, auto‑grouping
Tech Stack Python, LangChain, OpenAI embeddings, FastAPI
Difficulty Medium
Monetization Hobby (open‑source)

Notes

  • Users complained: “110 tools is a bit… much. … I can’t handle that many in the context window.”
  • By presenting only the top‑k tools, the optimizer keeps prompt size manageable and speeds up inference.
  • Encourages discussion on tool design and relevance scoring, a hot topic in the HN thread.

PromptShield MCP

Summary

  • Secure MCP server that sanitizes tool outputs before passing them to LLMs, mitigating tool‑output and indirect prompt injection attacks.
  • Includes a policy engine for custom filtering rules and audit logs.

Details

Key Value
Target Audience Security teams, malware analysts, AI developers
Core Feature Output sanitization, policy‑based filtering, audit trail
Tech Stack Rust, WASM sandbox, OpenAI API, SQLite
Difficulty High
Monetization Revenue‑ready: $10/month per instance

Notes

  • Longtermop highlighted a real gap: “tool output filtering before it reaches the model is a real gap in most setups.”
  • PromptShield protects against malicious binaries embedding prompt injections in comments or decompiled code.
  • Practical utility: can be deployed behind existing MCP servers, providing an extra security layer for untrusted binaries.

Read Later