Project ideas from Hacker News discussions.

Trillions spent and big software projects are still failing

📝 Discussion Summary

The discussion centers on the failure modes of large, complex software projects, especially in government and large enterprises. The three most prevalent themes are:

1. Extreme Complexity and Historical Baggage of Requirements

The difficulty of large systems is often rooted in highly specific, stubborn, and politically entrenched requirements, making simplification nearly impossible.

  • Supporting Quote: One user highlighted the core issue of the Phoenix project's complexity: "80K payroll rules!!!" This was further explained by another: "The main objective of this committee also includes simplifying the pay rules for public servants, in order to reduce the complexity of the development of Phoenix's replacement. This complexity of the current pay rules is a result of 'negotiated rules for pay and benefits over 60 years that are specific to each of over 80 occupational groups in the public service.'"

2. Lack of Accountability and Misalignment of Incentives in Management

A strong theme is the observation that decision-makers (management/executives) often avoid responsibility for project failures, while the technical implementers bear the brunt of the consequences.

  • Supporting Quote: A user noted this dynamic in project retrospectives: "After every single project, the org comes together to do a retrospective and ask 'What can devs do differently next time to keep this from happening again'. People leading the project take no action items, management doesn't hold themselves accountable at all..." Another user summarized the fundamental motivation: "People who desire infinite power only want it because it gives them the power to avoid consequences, not because they want both the power and the consequences."

3. Software Engineering Lacks Mature, Consistent Lessons Learned/Standards

Several participants lamented that software development, unlike older engineering fields such as hardware or construction, frequently reinvents the wheel: it fails to codify and adhere to historical lessons, which leads to repeated failures.

  • Supporting Quote: "While hardware folks study and learn from the successes and failures of past hardware, software folks do not. People do not regularly pull apart old systems for learning." This tendency is seen as leading to systemic issues: "Constantly rewriting the same stuff in endless cycles of new frameworks and languages gives an artificial sense of productivity and justifies its own existence."

🚀 Project Ideas

Rule-Set Interpreter & Simplification Engine (RSISE)

Summary

  • A tool designed to ingest, normalize, and visualize massive, contradictory, bespoke rule sets (like the 80,000 payroll rules mentioned) from legacy systems.
  • Core value proposition: Transforming opaque, politically entrenched business logic into structured, auditable, and ideally, simplifiable models for modernization efforts.

Details

  • Target Audience: Government modernization teams, compliance auditors, and large enterprise IT project managers dealing with legacy core logic.
  • Core Feature: Automated rule parsing (potentially leveraging LLMs supervised by domain experts) into a standardized intermediate representation (IR), coupled with dependency mapping and simulated outcome testing (see the sketch below).
  • Tech Stack: Python/Rust for core parsing and simulation; a graph database (e.g., Neo4j) for rule dependency mapping; a React/D3.js frontend for interactive rule visualization.
  • Difficulty: High
  • Monetization: Hobby
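The parsing target does not need to be exotic; a small, explicit IR with dependency links already supports useful queries. Below is a minimal Python sketch, assuming a hypothetical PayRule/RuleGraph model (the field names and the toy rules are illustrative, not drawn from any real pay-rule set).

```python
# Hypothetical IR for one normalized pay rule; field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class PayRule:
    rule_id: str
    occupational_group: str          # e.g. one of the 80+ groups mentioned in the thread
    condition: str                   # normalized predicate, e.g. "hours <= 37.5"
    effect: str                      # normalized outcome, e.g. "pay = hours * base_rate"
    depends_on: list[str] = field(default_factory=list)  # rule_ids this rule references

class RuleGraph:
    """Holds parsed rules and answers simple dependency questions."""

    def __init__(self) -> None:
        self.rules: dict[str, PayRule] = {}

    def add(self, rule: PayRule) -> None:
        self.rules[rule.rule_id] = rule

    def upstream(self, rule_id: str) -> set[str]:
        """Every rule that must be evaluated before `rule_id` (transitive closure)."""
        seen: set[str] = set()
        stack = list(self.rules[rule_id].depends_on)
        while stack:
            current = stack.pop()
            if current not in seen:
                seen.add(current)
                stack.extend(self.rules[current].depends_on)
        return seen

# Usage: a toy overtime rule that depends on a base-rate rule.
graph = RuleGraph()
graph.add(PayRule("R-001", "CS", "hours <= 37.5", "pay = hours * base_rate"))
graph.add(PayRule("R-002", "CS", "hours > 37.5", "pay += overtime(hours)", depends_on=["R-001"]))
print(graph.upstream("R-002"))  # {'R-001'}
```

Once rules live in a structure like this, dependency visualization and "what changes if we delete this rule" simulations become graph queries rather than archaeology.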

Notes

  • "80K payroll rules!!! Without much prompt effort, I got about 350K Canada Federal Service employees..." This project directly addresses the complexity barrier that derails large modernization efforts by bringing transparency to the domain logic.
  • It directly supports the "committee" approach mentioned, giving such a committee an objective technical basis for simplification discussions rather than decades of anecdotal knowledge.

Decoupled System Viability Simulator (DSVS)

Summary

  • A simulation and modeling service that verifies architectural constraints and trade-offs before a single line of code is written for a complex system replacement.
  • Core value proposition: Provides quantitative evidence for "smaller systems are better" advocates by simulating the coordination and dependency friction (diseconomies of scale) inherent in monolithic vs. distributed/modular replacements.

Details

  • Target Audience: Technical leads, architects, and executive sponsors who initiate large-scale infrastructure projects.
  • Core Feature: Lets users define architectural boundaries (modules, interfaces, dependencies) and run simulations based on historical performance data (e.g., coordination latency, release cadence stability) to predict overall project risk and coordination overhead (a minimal sketch follows this list).
  • Tech Stack: Go/Java for performance; simulation frameworks (e.g., discrete event simulation libraries); configuration via a declarative YAML schema.
  • Difficulty: Medium/High
  • Monetization: Hobby
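A first pass at the simulation does not require a heavyweight framework. Below is a minimal Monte Carlo sketch in Python under an assumed model in which a release slips by the slowest of its cross-team dependencies; the exponential delay distribution, the 2-day mean, and the dependency counts are illustrative assumptions, not calibrated data.

```python
import random

def expected_release_slip(cross_team_deps: int, mean_days: float = 2.0, runs: int = 10_000) -> float:
    """Mean release slip when a release waits on the slowest of its cross-team dependencies."""
    total = 0.0
    for _ in range(runs):
        # Assumed model: each dependency resolves after an exponentially distributed delay.
        delays = [random.expovariate(1.0 / mean_days) for _ in range(cross_team_deps)]
        total += max(delays) if delays else 0.0
    return total / runs

# Compare an assumed tightly coupled cut (many cross-team dependencies per release)
# against a more modular cut; the gap is the simulated coordination overhead.
for label, deps in [("tightly coupled cut", 40), ("modular cut", 8)]:
    print(f"{label}: ~{expected_release_slip(deps):.1f} days of coordination slip per release")
```

Even this toy model makes the diseconomy visible: slip grows with the number of dependencies a release must wait on, which is the quantity the tool would estimate from real historical data.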

Notes

  • Addresses the desire for modularity ("Multiple smaller systems is usually a better approach, where possible.") and the critique that managers often view software with a "manufacturing mindset" (economies of scale).
  • This tool could quantify the point where coordination overhead outweighs the benefits of a monolithic architecture, supporting informed design decisions.

Accountability Mapping & Retro Action Enforcer (AMRAE)

Summary

  • A lightweight tool used at the close of project phases (e.g., Agile retrospectives) to track commitments, manager accountability, and the actual impact of technical and process decisions.
  • Core value proposition: Bridges the gap between project retrospectives and executive action, preventing the cycle in which developers absorb the pain while management accountability disappears ("retrospective... ask 'What can devs do differently next time' ... People leading the project take no action items").

Details

  • Target Audience: Software developers, Scrum Masters, and engineering managers aiming to institutionalize learning from failed projects.
  • Core Feature: Requires documenting decisions (e.g., "We will use external vendor X" or "Management resolves conflict Y") and links them to subsequent project KPIs (timeline slips, bug rates); it automatically flags decisions whose responsible party did not complete agreed-upon follow-up actions (see the sketch below).
  • Tech Stack: A lightweight full-stack framework (e.g., Next.js/Supabase) for quick setup, with a heavy focus on auditing and data integrity.
  • Difficulty: Low/Medium
  • Monetization: Hobby
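The flagging rule itself is simple once each decision is recorded with an accountable owner and dated follow-up actions. Below is a minimal Python sketch of that logic, assuming a hypothetical Decision/ActionItem data model (the names, dates, and vendor example are illustrative only).

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ActionItem:
    description: str
    due: date
    completed: bool = False

@dataclass
class Decision:
    summary: str
    accountable: str                 # the decision-maker who owns the follow-up, not the implementing dev
    actions: list[ActionItem]

def flag_unmet(decisions: list[Decision], today: date) -> dict[str, list[str]]:
    """Group overdue, incomplete follow-up actions by the accountable party."""
    flags: dict[str, list[str]] = {}
    for decision in decisions:
        for action in decision.actions:
            if not action.completed and action.due < today:
                flags.setdefault(decision.accountable, []).append(
                    f"{decision.summary}: {action.description}"
                )
    return flags

# Usage: one management commitment left undone past its due date gets surfaced.
decisions = [
    Decision("Use external vendor X", "VP Engineering",
             [ActionItem("Negotiate an SLA covering integration defects", date(2024, 3, 1))]),
]
print(flag_unmet(decisions, today=date(2024, 6, 1)))
```

Reporting by accountable party, rather than by implementing team, is what turns the retrospective record into the accountability signal the thread asked for.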

Notes

  • Directly tackles the systemic failure highlighted by users ("management doesn't hold themselves accountable at all") and the anecdote about action items that would constrain management being removed.
  • It shifts the focus from "What can devs do differently?" to "What did decision-makers fail to implement?".