Project ideas from Hacker News discussions.

Clair Obscur stripped of its Indie Game of the Year award due to AI use

πŸ“ Discussion Summary (Click to expand)

1. Double Standard: AI for Code Accepted, AI for Art Criticized

Many commenters argue it is hypocritical to condemn AI-generated art and assets while tolerating AI coding tools like Copilot.
"> Generative AI for source code learns from developers - who mostly publish their source with licenses that allow this." - hambes
"AI OK: Code / AI Bad: Art, Music. It's a double standard because people don't think of code as creative." - voidfunc
"Only when it comes to graphics/art. When it comes to LLMs for code, many people do some amazing mental gymnastics..." - m-schuetz

2. Licensing/Ethics: Training AI Violates FOSS/Art Rights (Stealing vs. Learning)

Debate on whether AI training on licensed code/art constitutes theft or fair use, with calls for no-LLM licenses.
"FOSS code that I have written is not intended to enrich LLM companies..." - ahartmetz
"This reasoning is invalid. If AI is doing nothing but simply 'learning from' like a human, then there is no 'stealing from artists' either." - stinkbeetle
"Most OS licenses requires attribution, so AI for code generation violates licenses the same way AI for image generation does." - m-schuetz

3. Overreaction to AI: Placeholders/Bans Are Luddite Witch Hunts

Disqualification over accidentally shipped AI placeholder textures is seen as absurd, ideological, and anti-progress.
"I bet if they'd only used AI assisted coding would be a complete non-event, but oh no, some inconsequential assets were generated..." - danielbln
"That’s incredibly harsh. A blanket ban on AI generated assets is dumb as hell. Generating placeholder assets is completely acceptable." - thiht
"Banning using AI at all while developing the game is... obviously insane... equivalent to saying 'you may not use Photoshop... or VS Code...'" - veidr


🚀 Project Ideas

Traceable Asset Management (TAM)

Summary

  • A version-control-integrated system designed to flag and track the origin of every asset (source, human-made, or AI-generated) throughout the development pipeline.
  • Solves the "placeholder leakage" problem that led to the disqualification of Expedition 33, ensuring that AI-generated prototypes are programmatically separated from final shipping builds.

Details

  • Target Audience: Indie and AA game developers; legal teams in creative studios.
  • Core Feature: Automated metadata tagging for assets, with "Block-on-Release" guards for AI tags (sketched below).
  • Tech Stack: Python, Git LFS / Perforce hooks, SQLite/PostgreSQL.
  • Difficulty: Medium
  • Monetization: Revenue-ready (SaaS subscription for teams or per-seat license).

Notes

  • HN commenters highlighted the risk of "accidental" shipping: "It was a terrible mistake letting placeholder assets get out in the final release" and "Some sort of temporary asset management system is required."
  • It provides a "Forensic Audit" trail to prove when and where AI was used, satisfying award committees or legal requirements.
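A minimal sketch of the "Block-on-Release" guard, assuming a hypothetical sidecar manifest (assets.json) that maps each asset path to an origin tag; the manifest format and tag vocabulary are illustrative, not a standard:

```python
#!/usr/bin/env python3
"""Hypothetical Block-on-Release guard: fail a release build if any
shipped asset is still tagged as AI-generated in a sidecar manifest."""
import json
import sys
from pathlib import Path

MANIFEST = Path("assets.json")  # assumed: {"textures/rock.png": {"origin": "ai-generated"}, ...}
BLOCKED_ORIGINS = {"ai-generated"}  # tags that must never reach a shipping build

def main() -> int:
    if not MANIFEST.exists():
        print("assets.json not found; refusing to build without provenance data")
        return 1
    manifest = json.loads(MANIFEST.read_text())
    offenders = [path for path, meta in manifest.items()
                 if meta.get("origin") in BLOCKED_ORIGINS]
    if offenders:
        print("Release blocked: AI-tagged assets still in the shipping set:")
        for path in offenders:
            print(f"  {path}")
        return 1  # non-zero exit aborts the CI step or Git hook
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Keeping provenance in a sidecar file rather than inside the asset binaries keeps the guard format-agnostic; the same check can run as a Perforce trigger or a Git pre-push hook.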

AI-Free Certification & Registry

Summary

  • A third-party verification service that audits software repositories and asset pipelines to certify a product as "100% Human-Made."
  • Solves the trust gap between developers and "anti-AI" consumers/award bodies by providing a verified badge and audit report.

Details

  • Target Audience: Indie developers targeting the "artisanal/craft" niche; award bodies.
  • Core Feature: Deep scan of Git history plus file-entropy analysis to detect LLM patterns or known gen-AI artifacts (one such signal is sketched below).
  • Tech Stack: ML (detection), Go/Rust (fast file scanning), Web3 (immutable certificates).
  • Difficulty: High
  • Monetization: Revenue-ready (one-time audit fee per project/release).

Notes

  • Users expressed a need for clear boundaries: "The challenge of course is determining if AI was used... A forensic auditor would find that out."
  • Appeals to the "Digital Amish" or "Pro-Artist" sentiment mentioned in the thread: "Selling one's product as a moral option has been a fairly reliable marketing tactic."
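The "file entropy analysis" can be illustrated with a minimal sketch: Shannon entropy of each file's bytes, with an assumed cutoff. This is one weak heuristic among the many a real auditor would need; it flags uniformly random-looking content, not AI use as such:

```python
"""One weak audit signal: byte-level Shannon entropy per file.
The 7.9 bits/byte cutoff is an illustrative assumption."""
import math
from collections import Counter
from pathlib import Path

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte; 8.0 means maximally random content."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(data).values())

def scan(root: str, flag_above: float = 7.9) -> list[tuple[str, float]]:
    """Return files whose byte entropy exceeds the (assumed) cutoff."""
    flagged = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            h = shannon_entropy(path.read_bytes())
            if h > flag_above:
                flagged.append((str(path), h))
    return flagged

if __name__ == "__main__":
    for path, h in scan("assets"):
        print(f"{path}: {h:.3f} bits/byte")
```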

CleanSlate: The Attribution-First Training Set

Summary

  • A curated, ethically sourced, and fully attributed dataset for training niche-specific models (e.g., textures, NPC dialogues) where every contributor is logged.
  • Solves the "unconsented use" and "lack of attribution" pain points by ensuring that any output can be traced back to a pool of compensated human creators.

Details

  • Target Audience: AI developers; game studios wanting "clean" internal models.
  • Core Feature: A marketplace where artists opt in to training and receive micro-royalties based on model usage (split rule sketched below).
  • Tech Stack: Smart contracts for royalty distribution; cloud object storage.
  • Difficulty: High (legally and operationally intensive).
  • Monetization: Revenue-ready (dataset licensing fees plus a platform percentage of royalties).

Notes

  • Addresses the primary ethical complaint: "Generative AI... is being rightfully criticised because it steals from artists."
  • Directly implements the "dividend in perpetuity" solution suggested: "Everyone who'd ever posted... gets a dividend."
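The micro-royalty split itself is simple arithmetic. A minimal off-chain sketch, assuming a pro-rata rule in which each opted-in contributor's share is proportional to how many of their assets appeared in the training run (contributor names are hypothetical):

```python
"""Pro-rata royalty split: each contributor's payout is proportional to
how many of their assets were used. Rounds down to the cent; remainder
handling is left out of this sketch."""
from decimal import Decimal, ROUND_DOWN

def split_royalties(pool: Decimal, usage: dict[str, int]) -> dict[str, Decimal]:
    """Distribute `pool` across contributors in proportion to asset usage."""
    total = sum(usage.values())
    if total == 0:
        return {}
    cent = Decimal("0.01")
    return {artist: (pool * count / total).quantize(cent, rounding=ROUND_DOWN)
            for artist, count in usage.items()}

# Example: a $500.00 pool split across three hypothetical contributors.
payouts = split_royalties(Decimal("500.00"), {"alice": 120, "bob": 60, "carol": 20})
print(payouts)  # alice: 300.00, bob: 150.00, carol: 50.00
```

An on-chain version would encode the same rule in a smart contract; the hard parts (opt-in consent records, usage metering) live outside this function.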

Context-Aware "AI-Safe" IDE Shields

Summary

  • A plugin for VS Code/JetBrains that acts as a "firewall" for AI assistants (Copilot, Cursor), blocking prompts or suggestions that might involve copyleft (GPL) code or violate specific project licenses.
  • Solves the "accidental license infringement" problem where developers use AI that "hallucinates" or copies GPL code into a proprietary codebase.

Details

  • Target Audience: Enterprise software companies; compliance-minded developers.
  • Core Feature: Real-time license scanning of AI suggestions before they are accepted into the editor (core filter sketched below).
  • Tech Stack: TypeScript (VS Code API), ScanCode Toolkit, WASM.
  • Difficulty: Medium
  • Monetization: Revenue-ready (premium Enterprise tier, priced per user per month).

Notes

  • Users were concerned about license virality: "If its GPL... any code it generate should be GPLv3/AGPL too."
  • Satisfies the "learning != stealing" crowd by ensuring AI doesn't output recognizable 1:1 snippets of protected code.
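A sketch of the core filter, independent of the editor glue: hash token n-grams of each AI suggestion and reject any suggestion that overlaps an index built from known copyleft sources. The n-gram width and the in-memory index are assumptions; a production version would use ScanCode-style matching against a real corpus:

```python
"""N-gram overlap filter for AI suggestions. The tokenizer, the n-gram
width, and the toy index below are all illustrative assumptions."""
import hashlib
import re

N = 10  # n-gram width in tokens; an assumed tuning choice

def ngram_hashes(code: str, n: int = N) -> set[str]:
    """Tokenize crudely and hash every n-token window."""
    tokens = re.findall(r"\w+", code)
    return {hashlib.sha256(" ".join(tokens[i:i + n]).encode()).hexdigest()
            for i in range(len(tokens) - n + 1)}

def is_blocked(suggestion: str, copyleft_index: set[str]) -> bool:
    """True if the suggestion shares any n-gram with indexed copyleft code."""
    return not ngram_hashes(suggestion).isdisjoint(copyleft_index)

# Hypothetical usage: in practice the index is prebuilt from copyleft repos.
corpus = 'int main(void) { puts("hello copyleft world sample body tokens here"); return 0; }'
index = ngram_hashes(corpus)
print(is_blocked(corpus, index))          # True: full overlap
print(is_blocked("print('hi')", index))   # False: too short to match
```

Exact n-gram matching only catches verbatim copies; near-duplicates would need fuzzier techniques such as winnowing or MinHash.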

Procedural-Gen Refactoring Tool

Summary

  • A specialized toolkit designed to convert static AI-generated assets (like 2D textures or simple 3D meshes) into "Traditional" procedural parameters (Perlin noise, math-based nodes).
  • Solves the "AI Slop" and "Environment/Energy" critiques by turning one-off generations into reproducible, lightweight, and hand-tuned procedural algorithms.

Details

  • Target Audience: Technical artists, game developers.
  • Core Feature: Image-to-procedural node converter (e.g., for Blender or Substance), with a spectral-analysis first step sketched below.
  • Tech Stack: Computer vision (OpenCV), math/shader optimization.
  • Difficulty: High
  • Monetization: Revenue-ready (individual tool license or "Pro" plugin).

Notes

  • Leverages the "consensus" that traditional procedural techniques are okay while Gen-AI is not: "No one is legitimately confused about the difference between hand-built procedural generation... and LLMs."
  • Converts the "Dead end" of Gen-AI into the "Progress" of a reusable tool.
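One concrete step such a converter could start from, as a sketch: estimate the 1/f^β spectral slope of a grayscale texture, which maps onto the persistence and octave weighting of fractal noise. The numpy-only analysis and the grayscale input are simplifying assumptions:

```python
"""Estimate the spectral slope beta in P(f) ~ 1/f^beta for a 2D texture.
A flat spectrum (white noise) gives beta near 0; natural and fractal
textures typically land somewhere between 1 and 3."""
import numpy as np

def spectral_slope(img: np.ndarray) -> float:
    """Radially average the power spectrum, then fit log power vs. log frequency."""
    spectrum = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(spectrum) ** 2
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2).astype(int)
    counts = np.maximum(np.bincount(r.ravel()), 1)
    radial = np.bincount(r.ravel(), weights=power.ravel()) / counts
    freqs = np.arange(1, min(h, w) // 2)  # skip the DC bin
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[freqs] + 1e-12), 1)
    return -slope

# Sanity check: white noise should give beta near 0.
print(spectral_slope(np.random.rand(256, 256)))
```

The recovered β would then seed the parameters of a noise node (octaves, lacunarity, persistence) that a technical artist hand-tunes from there.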
