Project ideas from Hacker News discussions.

Codex, Opus, Gemini try to build Counter-Strike

πŸ“ Discussion Summary (Click to expand)

The three most prevalent themes in the discussion are:

1. Skepticism Regarding the Real-World Utility and Quality of AI-Generated Code

Many users expressed doubt that the generated code, even if technically functional, meets professional standards or warrants the hype, viewing the output as "slop" or tutorial-level effort.

  • Supporting Quote: "The code and output is literal slop... it's not something that would ever work industrially" (Madmallard).
  • Supporting Quote: "This is the job a junior developer may deliver in their first weeks at a new job, so this is the way it should be treated as: good intentions, not really good quality." (XzAeRosho).

2. Concerns and Debates Over AI Code Copyright and Licensing Issues

A significant portion of the thread focused on an instance where generated shader code appeared to be verbatim regurgitation of existing, licensed code. This sparked a debate about infringement, developer liability, and the legal landscape surrounding LLM outputs.

  • Supporting Quote: "I always find it amazing that people are wiling to use AI beacuse of stuff like this, its been illegally trained on code that it does not have the license to use, and constantly willy nilly regurgitates entire snippets completely violating the terms of use" (20k).
  • Supporting Quote: "If you think any court system in the world has the capacity to deal with the sheer amount an LLM code can emit in an hour and audit for alleged copyright infringements ... I think we're trying to close the barn door now that the horse is already on a ship that has sailed." (nineteen999).

3. The Conflicting Impact of LLMs on Programmer Enjoyment and Workflow

Users held polarized views on whether LLMs enhance or degrade the enjoyment of programming, with some finding them liberating from tedious work and others feeling they remove the most rewarding parts of problem-solving.

  • Supporting Quote (Positive View): "Coding for me has become more fun than ever since Opus 4.5. I'm working more and genuinely enjoying it a lot more, haven't had this much fum building software in years." (mgraczyk).
  • Supporting Quote (Negative View): "AI tools can alleviate some of the tedium of working on plumbing and repetitive tasks, but they also get rid of the dopamine hits. I get no enjoyment from running machine-generated code, having to review it, and more often than not having to troubleshoot and fix it myself." (imiric).

πŸš€ Project Ideas

LLM Codebase Audit & Remediation Tool (The "License Guardian")

Summary

  • A service/tool designed to automatically scan LLM-generated or AI-assisted codebases for potential licensing violations (exact code regurgitation) or dependency license non-compliance (like missing MIT notices).
  • Addresses the major concern raised by users about LLMs potentially injecting copyrighted or improperly licensed code into projects, which poses a business risk.

Details

  • Target Audience: Software companies, startups, and individual developers using LLMs heavily for code generation (e.g., heavy users of Codex, Gemini, Claude).
  • Core Feature: Automated scanning of code files/commits against known open-source repositories and common "regurgitation" patterns, coupled with an automated "remediation suggestion" interface (e.g., suggest alternative phrasing or algorithm implementation).
  • Tech Stack: Static analysis tools (e.g., ESLint/Prettier extensions), dedicated fuzzing/matching algorithms (potentially hashing code snippets), integrated IDE extensions (VS Code/JetBrains), backend monitoring service (Node.js/Python).
  • Difficulty: Medium
  • Monetization: Hobby
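
The snippet-hashing approach mentioned under Core Feature can be approximated with winnowing fingerprints (the technique popularized by plagiarism detectors such as MOSS): hash overlapping character k-grams, keep the minimum hash in each sliding window, and compare fingerprint sets. A minimal sketch, assuming illustrative parameters (k=5, window=4) and hypothetical function names, not a real product API:

```python
import hashlib

def kgrams(text: str, k: int = 5) -> list[str]:
    # Normalize aggressively (strip whitespace, lowercase) so trivial
    # reformatting of regurgitated code does not defeat the match.
    flat = "".join(text.split()).lower()
    return [flat[i:i + k] for i in range(len(flat) - k + 1)]

def fingerprints(text: str, k: int = 5, window: int = 4) -> set[int]:
    hashes = [int(hashlib.md5(g.encode()).hexdigest(), 16) % (1 << 32)
              for g in kgrams(text, k)]
    # Winnowing: keep only the minimum hash in each sliding window,
    # yielding a compact, position-robust fingerprint set.
    picked = set()
    for i in range(len(hashes) - window + 1):
        picked.add(min(hashes[i:i + window]))
    return picked

def similarity(a: str, b: str) -> float:
    # Jaccard similarity of the two fingerprint sets; near 1.0 suggests
    # verbatim regurgitation worth flagging for manual license review.
    fa, fb = fingerprints(a), fingerprints(b)
    if not fa or not fb:
        return 0.0
    return len(fa & fb) / len(fa | fb)
```

In a real scanner the corpus side would be pre-fingerprinted and indexed (hash β†’ repository/license), so each generated snippet costs one set lookup rather than a pairwise comparison.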

Notes

  • Users were highly concerned about the risk: "Companies should. Its a business risk, you open yourself up to legal action" [20k].
  • The product directly addresses the discussion around the three.js sky shader, turning a messy manual investigation into an automated, continuous process.
  • Provides a clear, actionable workflow for developers worried about their generated code: "You will be surprised how easily this can be resolved." [nineteen999], for example by having agents review code for infringements.

Lightweight, Skill-Gap Focused Game Engine Framework (The "QuakeSim Kit")

Summary

  • A lean, high-performance, open-source (or commercially viable) game engine framework focused exclusively on simulating the low-level netcode, physics, and movement feel of classic competitive shooters (like CS 1.6 or Quake).
  • Addresses the desire expressed by users for a simpler, skill-based FPS engine without the complexity and physics baggage of modern engines like Source/CS:GO.

Details

  • Target Audience: Indie game developers, hobbyists, and former competitive FPS players who want to build "old-school" feeling shooters.
  • Core Feature: Pre-implemented, highly tunable core systems for client-server synchronization, precise hitbox detection, simple movement prediction/correction, and low-poly rendering pipelines.
  • Tech Stack: Rust (for performance and safety), WebAssembly backend (for in-browser playtesting). Potentially leveraging insights from the linked community open-source geometry efforts.
  • Difficulty: High
  • Monetization: Hobby
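
The "movement prediction/correction" system reduces to a deterministic movement step shared by client and server, plus client-side prediction with server reconciliation: the client applies its inputs immediately, buffers unacknowledged ones, and replays them on top of each authoritative server state. A minimal sketch, assuming a classic 64-tick server; the constants and class names are illustrative, not part of any existing engine:

```python
from dataclasses import dataclass

TICK_DT = 1.0 / 64.0   # assumed 64-tick simulation rate
MOVE_SPEED = 250.0     # illustrative units per second

@dataclass
class InputCmd:
    seq: int    # monotonically increasing input sequence number
    dx: float   # movement axis inputs in [-1, 1]
    dy: float

def apply_input(pos, cmd):
    # Deterministic movement step, run identically on client and server
    # so replayed predictions match the authoritative simulation.
    return (pos[0] + cmd.dx * MOVE_SPEED * TICK_DT,
            pos[1] + cmd.dy * MOVE_SPEED * TICK_DT)

class PredictedClient:
    def __init__(self):
        self.pos = (0.0, 0.0)
        self.pending = []  # inputs not yet acknowledged by the server

    def tick(self, cmd):
        # Predict locally right away instead of waiting a round trip.
        self.pending.append(cmd)
        self.pos = apply_input(self.pos, cmd)

    def on_server_state(self, ack_seq, server_pos):
        # Reconciliation: drop acknowledged inputs, then replay the
        # remainder on top of the authoritative position.
        self.pending = [c for c in self.pending if c.seq > ack_seq]
        pos = server_pos
        for c in self.pending:
            pos = apply_input(pos, c)
        self.pos = pos
```

A real kit would extend this with interpolation for remote players and lag-compensated hitbox rewind on the server, but the predict/ack/replay loop above is the core of the "old-school" feel.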

Notes

  • Directly addresses the nostalgia and desire for simplified, skill-gapped gameplay: "I'd play it if it were the quake style old-graphics version of CS that allowed for skill gaps." [vpShane].
  • It sidesteps the "LLM can't build a commercial AAA game" critique by intentionally discarding complexity in favor of core mechanics: "It's literally trash [modern FPS output]. My bar is, does this actually work and is it best practice for how first person shooters are made by professional game developers." [gafferongames]. This framework provides the best-practice basics without the modern fluff.

Interactive Model Debugging Playground: Prompt-to-Output Trace Viewer

Summary

  • A web-based sandbox environment that visualizes the interaction between the user prompt, the underlying LLM steps (if available, e.g., thinking chains), and the final code output, while simultaneously performing automated source attribution checks.
  • Addresses the need to understand why an LLM produced a specific (or flawed) result, especially regarding code quality and legal issues, by making the process transparent.

Details

  • Target Audience: Researchers, advanced prompt engineers, and curious developers testing LLM capabilities (like the initial poster, stopachka).
  • Core Feature: Synchronized playback of model versions (Codex vs. Gemini vs. Claude), integrated lightbox functionality for viewing large generated assets, and direct linking of any produced code snippet to its closest matching open-source origin (if found during generation or post-hoc).
  • Tech Stack: React/Vue frontend (for interactive UI), WebGL/Canvas for rendering game visualization, integration hooks for major LLM APIs.
  • Difficulty: Medium
  • Monetization: Hobby
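
The cost and timing transparency the discussion asks for can live in a thin trace layer wrapped around whichever LLM APIs the playground drives. A minimal sketch; PRICE_PER_1K, the model names, and the generate-callback signature are all hypothetical, not real vendor pricing or APIs:

```python
import time
from dataclasses import dataclass, field

# Hypothetical per-1K-token prices for illustration only;
# real model pricing differs and changes frequently.
PRICE_PER_1K = {"codex": 0.01, "gemini": 0.005, "claude": 0.015}

@dataclass
class TraceEvent:
    model: str
    prompt: str
    output: str
    tokens: int
    seconds: float

    @property
    def cost(self) -> float:
        # Estimated dollar cost from the assumed price table.
        return self.tokens / 1000 * PRICE_PER_1K.get(self.model, 0.0)

@dataclass
class TraceLog:
    events: list = field(default_factory=list)

    def record(self, model, prompt, generate):
        # `generate` is a caller-supplied wrapper around the real LLM
        # API, returning (output_text, token_count).
        start = time.perf_counter()
        output, tokens = generate(prompt)
        self.events.append(TraceEvent(model, prompt, output, tokens,
                                      time.perf_counter() - start))

    def summary(self):
        # Per-model (cost, seconds) view for side-by-side playback.
        return {e.model: (round(e.cost, 4), round(e.seconds, 3))
                for e in self.events}
```

Feeding the same prompt through several models and rendering `summary()` next to the outputs gives exactly the "show us the cost, the time it took" view in one table.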

Notes

  • Solves the UI frustration ("all the images on the site are TINY") by integrating interactive visual elements directly into the workflow.
  • Gives developers the necessary context to trust or distrust the output: "That’s pretty darn cool." [stopachka] followed by "how can you tell if the AI has actually created one for you or not?" [gafferongames]. This playground provides the tools for that judgment.
  • The transparency of tracing costs ("Now show us the cost, the time it took") and source attribution would be a major selling point for HN users valuing accountability.