Project ideas from Hacker News discussions.

Ask HN: Who is hiring? (December 2025)

📝 Discussion Summary

The three most prevalent themes in the Hacker News discussion are:

1. High Demand for Engineering Talent Across Diverse Stacks and Geographies

The thread is overwhelmingly dominated by hiring announcements for various engineering roles (Backend, Full-Stack, Staff, ML/AI) spanning numerous technical stacks and locations.

  • Supporting Quote: Many companies explicitly state their core technologies: "Stack: Python (AI & core systems), PostgreSQL, TypeScript, GCP" from tomdickson (Tendavo), or "Our core software is commercial with a 'free as in beer' version. [...] Technologies and standards that you will work with: Modern Java, MySQL, PostgreSQL, Docker, Kubernetes, OAuth, SAML, OIDC" from mooreds (FusionAuth). Hiring is global: WireScreenNYC (WireScreen) writes, "We're looking for engineers who can ship quickly... stack: TypeScript, React, Next.js, GraphQL w/ Apollo, Node.js, Postgres & Redis", and other posters are hiring everywhere from Europe (twikeybelgium, Stellar) to North America.

2. Strong Emphasis on Post-Product-Market Fit (PMF) to Scale

A recurring theme among the hiring companies is that they have already validated their market and are now focused on scaling operations, product, or infrastructure.

  • Supporting Quote: Multiple posters highlight this transition: "We have validated Product-Market Fit; your job is to scale it." (tomdickson at Tendavo). The sentiment is echoed by justin_sdx of SmarterDx ("We have PMF, and it's time to scale!") and by allanjude (Klara Inc.), who is hiring another developer after a previous growth-driven hiring round proved successful.

3. Applicant Response Times and Resume Spam Concerns

A significant secondary discussion thread emerged concerning the difficulty of receiving acknowledgments for applications, with users speculating that AI-generated resume submissions are flooding the inboxes of hiring managers.

  • Supporting Quote (Applicant Frustration): A user expressed deep frustration over the lack of basic acknowledgment: "please have the basic decency to send a simple acknowledgement or revert after receiving them." (dvcoolarun).
  • Supporting Quote (Hiring Manager Perspective): Hiring managers countered that the volume is overwhelming, partially due to automation: "Most resumes don't even get reviewed, let alone acknowledged. It's basically impossible to keep up with. Recruiters aren't ignoring people because they're cruel, they are being drowned by AI slop and resume mills." (seneca). One manager advised applicants against using AI tools in their submissions: "I’ve never responded to an AI generated application." (barrell).

🚀 Project Ideas

LLM Workflow Auditor & Validator (Tendavo/SmarterDx Value Prop)

Summary

  • A specialized tool designed to audit, validate, and debug complex, multi-step Large Language Model (LLM) workflows, particularly those involving sensitive data or high-stakes decisions (like public procurement or clinical AI).
  • Core value proposition is generating verifiable audit trails and confidence scores for AI-driven outputs, addressing 'how' an LLM reached a conclusion.

Details

  • Target Audience: Companies (like Tendavo, SmarterDx, Gladly) that are deeply investing in internal LLM agents and workflows to automate core business processes.
  • Core Feature: Step-by-step visualization of agent execution paths, input/output logging for intermediate tool calls, and confidence scoring based on chain consistency and tool-response verification (see the sketch after this list).
  • Tech Stack: Python (for workflow orchestration, perhaps borrowing LangChain/LlamaIndex principles internally), a modern web framework (e.g., FastAPI/Django/Node.js) for the UI, and potentially a time-series or graph database to store execution traces.
  • Difficulty: Medium/High (requires a deep understanding of agentic frameworks and robust logging/visualization infrastructure).
  • Monetization: Hobby
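
The sketch below shows one possible shape for the execution-trace data model and a naive chain-consistency confidence score, using only the Python standard library. The `WorkflowTrace`/`ToolCall` names and the scoring heuristic are illustrative assumptions, not part of any poster's actual stack.

```python
# Minimal sketch of an execution-trace recorder for multi-step LLM workflows.
# Class names and the confidence heuristic are illustrative assumptions.
import json
import time
from dataclasses import dataclass, field, asdict


@dataclass
class ToolCall:
    step: int                 # position in the agent's execution path
    tool: str                 # tool/function the agent invoked
    inputs: dict              # arguments passed to the tool
    output: str               # raw tool response
    verified: bool = False    # did an external check confirm the response?


@dataclass
class WorkflowTrace:
    workflow_id: str
    started_at: float = field(default_factory=time.time)
    calls: list[ToolCall] = field(default_factory=list)

    def record(self, tool: str, inputs: dict, output: str, verified: bool = False) -> None:
        """Append one intermediate tool call to the audit trail."""
        self.calls.append(ToolCall(len(self.calls), tool, inputs, output, verified))

    def confidence(self) -> float:
        """Naive confidence score: the fraction of tool responses that were verified."""
        if not self.calls:
            return 0.0
        return sum(c.verified for c in self.calls) / len(self.calls)

    def to_json(self) -> str:
        """Serialize the trace for storage in a time-series or graph database."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    trace = WorkflowTrace("tender-screening-042")   # hypothetical workflow id
    trace.record("fetch_tender", {"id": 42}, "...tender text...", verified=True)
    trace.record("summarize", {"max_words": 200}, "...summary...", verified=False)
    print(f"confidence={trace.confidence():.2f}")
    print(trace.to_json())
```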

Notes

  • This directly addresses the need for agents and LLM workflows mentioned by tomdickson (Tendavo) and the need for clinical AI that "understands the nuances of clinical reasoning" mentioned by justin_sdx (SmarterDx). When automating core revenue processes, auditability is crucial.
  • It helps solve the "trust" issue inherent in complex AI systems by providing ground-truth debugging, which would appeal to any developer or PM scaling AI systems.

OpenZFS Data Integrity Monitor & Health Checker

Summary

  • A lightweight, cross-platform service/daemon that integrates deeply with OpenZFS properties to provide proactive, intelligent monitoring, alerting, and automated remediation advice for storage pools.
  • Core value proposition is providing infrastructure reliability (SRE-like tooling) specifically tailored for the complexity of OpenZFS internals, beyond basic health checks.

Details

  • Target Audience: System administrators, infrastructure teams, and companies running critical workloads on OpenZFS (like those mentioned by allanjude).
  • Core Feature: Continuous monitoring of background scrubs, aggressive alerting on predictive drive-failure metrics (SMART/telemetry data interpreted alongside ZFS pool status), and human-readable remediation steps for complex error states (e.g., "Run zpool scrub on pool X immediately"); see the sketch after this list.
  • Tech Stack: Go or Rust (for a high-performance, low-overhead daemon), interacting with the native ZFS management tools/datasets, and potentially LLMs or internal rules for suggested fixes.
  • Difficulty: Medium (deep OS/filesystem knowledge is required, but the core monitoring is procedural).
  • Monetization: Hobby
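
A minimal sketch of the polling loop follows, written in Python for brevity even though the idea proposes Go or Rust for the daemon. It assumes the OpenZFS CLI is available on the host (`zpool list -H -o name,health`); the advice table and polling interval are illustrative assumptions.

```python
# Minimal sketch of a ZFS pool health poller that shells out to the zpool CLI.
# Assumes OpenZFS is installed on the host; the remediation advice is illustrative.
import subprocess
import time

# Hypothetical mapping from pool health states to human-readable advice.
ADVICE = {
    "DEGRADED": "One or more devices have failed; check `zpool status` and replace the device.",
    "FAULTED":  "Pool is unusable; inspect `zpool status -x` before attempting `zpool clear`.",
    "UNAVAIL":  "Pool cannot be opened; verify that all member devices are attached.",
}


def pool_health() -> dict[str, str]:
    """Return {pool_name: health_state} by parsing `zpool list` scripted output."""
    out = subprocess.run(
        ["zpool", "list", "-H", "-o", "name,health"],
        capture_output=True, text=True, check=True,
    ).stdout
    return dict(line.split("\t") for line in out.splitlines() if line)


def check_once() -> None:
    for pool, health in pool_health().items():
        if health != "ONLINE":
            advice = ADVICE.get(health, "Run `zpool status` for details.")
            print(f"ALERT: pool {pool} is {health}. {advice}")
        else:
            print(f"pool {pool} is healthy")


if __name__ == "__main__":
    while True:          # a real daemon would also track scrub progress and SMART data
        check_once()
        time.sleep(300)  # illustrative five-minute polling interval
```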

Notes

  • This directly targets the expertise highlighted by allanjude (OpenZFS Developer community). It acts as essential tooling for engineers focused on data integrity in storage systems.
  • It also resonates with posters focused on systems and infrastructure reliability (gtirloni at Virtasant, cube2222 at Spacelift, WireScreenNYC discussing TB-scale data).

Anti-Spam/Anti-AI Application Screening Service

Summary

  • A specialized SaaS or API service that audits incoming job applications (resumes/cover letters) against known patterns of low-effort, AI-generated spam, providing hiring managers with a "Humanity Score" or flagging.
  • Core value proposition is restoring signal-to-noise ratio in high-volume hiring channels by filtering out mass-generated, non-contextual submissions.

Details

  • Target Audience: Hiring managers and recruiters participating in high-volume threads like "Who is Hiring" (barrell, Aurornis, jaredsilver).
  • Core Feature: Semantic similarity checks against a corpus of common LLM cover-letter templates, stylistic analysis to detect generic writing, and cross-reference checks (where possible) to flag identical submissions across multiple employers/platforms (see the sketch after this list).
  • Tech Stack: Python (for NLP/ML analysis), possibly with a vector database (WireScreenNYC and YCharts mention Pinecone) for efficient similarity search against a corpus of "spam" examples.
  • Difficulty: Medium (requires iteratively defining what constitutes "AI slop" versus genuine effort).
  • Monetization: Hobby
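
A minimal sketch of the similarity check follows, using scikit-learn TF-IDF vectors and cosine similarity as a stand-in for an embedding model plus a vector database such as Pinecone. The `humanity_score` name, the tiny corpus, and the 0.5 threshold are illustrative assumptions.

```python
# Minimal sketch of a "Humanity Score": TF-IDF cosine similarity between an incoming
# cover letter and a corpus of known template-like, AI-generated submissions.
# The corpus, threshold, and scoring are illustrative; a production system would use
# embeddings and a vector database (e.g., Pinecone) instead.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny stand-in corpus of previously flagged, generic submissions.
SPAM_CORPUS = [
    "I am excited to apply for this role. My skills align perfectly with your requirements.",
    "As a passionate and results-driven professional, I am confident I would be a great fit.",
]


def humanity_score(cover_letter: str, corpus: list[str] = SPAM_CORPUS) -> float:
    """Return a score in [0, 1]; lower means closer to known template spam."""
    vectorizer = TfidfVectorizer().fit(corpus + [cover_letter])
    corpus_vectors = vectorizer.transform(corpus)
    letter_vector = vectorizer.transform([cover_letter])
    max_similarity = cosine_similarity(letter_vector, corpus_vectors).max()
    return 1.0 - float(max_similarity)


if __name__ == "__main__":
    letter = "I am excited to apply for this role; my skills align with your requirements."
    score = humanity_score(letter)
    print(f"humanity score: {score:.2f}")
    if score < 0.5:  # illustrative threshold
        print("flag: likely template/AI-generated submission")
```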

Notes

  • This is a direct, meta-solution to the massive pain point expressed by multiple users frustrated by the signal-to-noise ratio in applications, such as barrell and Aurornis.
  • barrell notes: "I’ve never responded to an AI generated application." This product would automate the filter that hiring managers are currently applying manually, saving enormous time.