Project ideas from Hacker News discussions.

New accounts on HN more likely to use em-dashes

📝 Discussion Summary

1. Bots and AI‑generated spam are already flooding HN

“I dunno, I agree. It sounds conspiratorial.” – loeg
“I’ve certainly noticed the summary posts.” – dematz
“I see a lot of bots replying to bots.” – EnderWT

2. The em‑dash has become a “red flag” for AI‑written text

“I’m still salty that I can’t use em‑dashes anymore for fear of my writing being flagged as AI generated.” – d4mi3n
“If I see an em‑dash in a comment I stop reading.” – SkyeCA
“The em‑dash is a signal that a bot is writing.” – marginalia_nu

3. Moderation and identity‑verification are being debated as a solution

“Getting rid of anonymity is in time going to lead to getting rid of the platform.” – OutOfHere
“Every human only gets 1 account. And then we still ban people that use AI.” – jascha_eng
“We could require a video ID capture for every post.” – 8cvor6j844qw_d6

4. AI‑generated content is eroding comment quality and trust

“I’m more worried about how many people reply to slop and start arguing with it.” – homebrewer
“The comments are now just some meme (especially on Reddit) or some kind of ‘gotcha’…” – sunaookami

5. Users feel pressured to self‑censor or mimic bot style

“I’m not trying to negate the fact. I'm just pointing out that a correlation without another indicator is not evidence enough.” – cookiengineer
“I have to be careful not to use em‑dashes or I’ll be accused of AI.” – d4mi3n
“I’m forced to write in a way that looks human, but I’m still using AI.” – kelseyfrog

6. The motivation behind bot/AI use is largely astroturfing, marketing, or political influence

“The motive is probably more depressing. A normal human who just wants human interaction.” – simianwords
“They’re using bots to upvote certain topics for monetary gain.” – beart
“The goal is to influence discussions in a particular direction for monetary or political gain.” – beart

7. The community is divided over whether to embrace or fight the typography‑based “AI‑signal”

“I will still use them – fully aware that some people will complain about AI.” – baxuz
“I’m not going to stop using em‑dashes.” – mghackerlady
“I’m sad that good typographical conventions have been co‑opted by the zeitgeist of LLMs.” – d4mi3n

These seven themes capture the core concerns and reactions circulating in the discussion.


🚀 Project Ideas

BotScore Dashboard for Hacker News

Summary

  • Provides a real‑time trust score for every HN account based on multi‑signal analysis (comment style, activity patterns, em‑dash usage, IP reputation, etc.).
  • Gives users and moderators a quick visual cue to spot likely bots or spam accounts.

Details

  • Target Audience: HN users, moderators, community managers
  • Core Feature: live trust‑score widget, historical trend graph, alert system
  • Tech Stack: Python (FastAPI), PostgreSQL, ML model (scikit‑learn), React frontend, Docker
  • Difficulty: Medium
  • Monetization: revenue‑ready; subscription (free tier + $5/month for advanced analytics)

Notes

  • HN commenters complain about bots flooding the comments and about being shadowbanned without explanation. A visible score would reduce frustration.
  • The dashboard can be embedded in a browser extension or a standalone site, encouraging community discussion about trust.
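The multi‑signal scoring idea can be sketched as a weighted combination of normalized signals. The signal names, weights, and range‑shifting are illustrative assumptions for this sketch, not a validated metric:

```python
# Hypothetical signals, each pre-normalized to [0, 1]; weights are assumptions.
SIGNAL_WEIGHTS = {
    "account_age_days": 0.3,   # older accounts score higher
    "comment_entropy": 0.3,    # stylistic variety suggests a human
    "em_dash_rate": -0.2,      # heavy em-dash use lowers the score
    "burst_posting": -0.2,     # many posts in a short window lowers it
}

def trust_score(signals: dict[str, float]) -> float:
    """Combine normalized signals into a 0-100 trust score."""
    raw = sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
              for name in SIGNAL_WEIGHTS)
    # Shift raw from [sum of negative weights, sum of positive weights]
    # into [0, 1], then scale to 0-100 for display in the widget.
    lo = sum(w for w in SIGNAL_WEIGHTS.values() if w < 0)
    hi = sum(w for w in SIGNAL_WEIGHTS.values() if w > 0)
    return round(100 * (raw - lo) / (hi - lo), 1)
```

A missing signal defaults to 0, so partially observed accounts still get a (conservative) score rather than an error.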

Typographic Analyzer for AI Detection

Summary

  • Detects typographic patterns (em‑dashes, en‑dashes, smart quotes, bullet styles) that correlate with LLM‑generated text.
  • Flags suspicious comments and provides a confidence score.

Details

  • Target Audience: HN users, moderators, researchers
  • Core Feature: text parsing, typographic feature extraction, AI‑vs‑human classifier
  • Tech Stack: Node.js, TypeScript, TensorFlow.js, SQLite, Chrome extension
  • Difficulty: Medium
  • Monetization: hobby (open source)

Notes

  • Many commenters feel “forced to stop using em‑dashes” because they’re seen as AI tells. This tool lets them see if their style is flagged and offers suggestions.
  • The extension can auto‑highlight suspicious comments in the UI, sparking discussion on authenticity.
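The feature‑extraction step can be sketched with plain regular expressions (shown in Python rather than the project's TypeScript stack, for brevity). The feature set is an assumption; a real classifier would learn which features matter:

```python
import re

# Illustrative typographic markers often discussed as "AI tells":
# em-dashes, en-dashes, smart quotes, and bullet-style list lines.
FEATURES = {
    "em_dash": re.compile("\u2014"),
    "en_dash": re.compile("\u2013"),
    "smart_quote": re.compile("[\u201c\u201d\u2018\u2019]"),
    "bullet_line": re.compile(r"^\s*[\u2022\-\*] ", re.MULTILINE),
}

def typographic_profile(text: str) -> dict[str, float]:
    """Return each feature's rate per 100 characters of input text."""
    n = max(len(text), 1)
    return {name: 100 * len(rx.findall(text)) / n
            for name, rx in FEATURES.items()}
```

The per‑100‑character rates (rather than raw counts) let short comments and long essays be compared on the same scale before feeding a classifier.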

Invitation‑Based Trust System

Summary

  • Implements a lightweight invitation tree for new HN accounts, where each invitee inherits a fraction of the inviter’s reputation.
  • Reduces the incentive to create fresh bot accounts for karma farming.

Details

  • Target Audience: new HN users, community moderators
  • Core Feature: invite link generation, reputation transfer, age‑based decay
  • Tech Stack: Go, PostgreSQL, Redis, REST API, minimal frontend
  • Difficulty: Medium
  • Monetization: revenue‑ready; freemium (basic invites free, premium invites with analytics)

Notes

  • The discussion points to a spike in bot‑like activity among new accounts. An invitation system would make it harder to farm karma with fresh accounts.
  • Users can see their “trust lineage” and feel more accountable, aligning with the community’s desire for authenticity.
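The reputation‑inheritance and decay mechanics can be sketched as follows (Python rather than the proposed Go, for consistency with the other sketches; the inheritance fraction and decay rate are illustrative assumptions):

```python
from dataclasses import dataclass
from typing import Optional

INHERIT_FRACTION = 0.25   # assumed fraction of inviter reputation passed on
DECAY_PER_DAY = 0.99      # assumed multiplicative age-based decay

@dataclass
class Account:
    name: str
    reputation: float
    invited_by: Optional["Account"] = None

def invite(inviter: Account, name: str) -> Account:
    """A new account starts with a fraction of the inviter's reputation."""
    return Account(name, inviter.reputation * INHERIT_FRACTION, inviter)

def decayed_reputation(account: Account, days: int) -> float:
    """Inherited trust decays over time unless backed by earned karma."""
    return account.reputation * DECAY_PER_DAY ** days

def trust_lineage(account: Account) -> list[str]:
    """Walk the invite tree back to the root; oldest account first."""
    chain: list[str] = []
    node: Optional[Account] = account
    while node is not None:
        chain.append(node.name)
        node = node.invited_by
    return list(reversed(chain))
```

Because inherited reputation decays, an inviter cannot permanently launder trust into throwaway accounts; the invitee must earn standing of their own.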

AI‑Comment Disclosure Badge

Summary

  • Automatically detects AI‑generated content in comments and appends a non‑intrusive badge (“AI‑generated”) to the comment.
  • Encourages transparency and reduces witch‑hunt over typographic cues.

Details

  • Target Audience: HN users, content creators
  • Core Feature: AI detection model, badge overlay, user opt‑in
  • Tech Stack: Python (FastAPI), HuggingFace Transformers, Vue.js, Chrome extension
  • Difficulty: Medium
  • Monetization: hobby (open source)

Notes

  • Commenters who insist they are not using AI still get flagged. A badge clarifies intent and reduces backlash.
  • The feature can be toggled by the commenter, respecting privacy while fostering honest discourse.
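The opt‑in badge logic is a small policy layer on top of whatever detector is used. A minimal sketch, assuming the detector returns a confidence in [0, 1] (the badge text and threshold are illustrative assumptions):

```python
BADGE = " [AI-generated]"   # non-intrusive suffix; placement is illustrative
THRESHOLD = 0.8             # assumed confidence cutoff for showing the badge

def annotate(comment: str, ai_confidence: float, opted_in: bool) -> str:
    """Append the disclosure badge only when the commenter has opted in
    and the (external) detector is confident the text is AI-generated."""
    if opted_in and ai_confidence >= THRESHOLD:
        return comment + BADGE
    return comment
```

Keeping opt‑in as a hard gate means the system can never out a user involuntarily, which is what separates a disclosure badge from a witch hunt.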

Shadowban Transparency API

Summary

  • Provides an audit trail of shadowban events for HN accounts, including timestamps, reasons, and moderator actions.
  • Allows users to request review and fosters trust in moderation.

Details

  • Target Audience: HN users, moderators, researchers
  • Core Feature: shadowban log API, search interface, notification system
  • Tech Stack: Ruby on Rails, PostgreSQL, GraphQL, Docker
  • Difficulty: Medium
  • Monetization: revenue‑ready; API access ($0.01 per request)

Notes

  • Users are frustrated by “shadowbanned without explanation.” An open API gives them evidence and a path to appeal.
  • Moderators can use the API to track patterns and improve policy enforcement.
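The core of the audit trail is a simple event record. A sketch of the data shape the API might serve (in Python rather than the proposed Rails stack; the field names are illustrative assumptions):

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ShadowbanEvent:
    account: str
    action: str        # e.g. "ban", "unban", "review_requested"
    reason: str
    moderator: str
    timestamp: str     # ISO 8601, UTC

def log_event(account: str, action: str, reason: str, moderator: str) -> dict:
    """Serialize one audit-trail entry as the API might return it."""
    event = ShadowbanEvent(account, action, reason, moderator,
                           datetime.now(timezone.utc).isoformat())
    return asdict(event)

def events_for(log: list[dict], account: str) -> list[dict]:
    """The user-facing search: every recorded action on one account."""
    return [e for e in log if e["account"] == account]
```

Because every event carries a reason and a moderator, a user's appeal can reference a concrete record instead of guessing why their comments went quiet.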

Community Moderation Toolkit

Summary

  • A web app that visualizes comment clusters, bot rings, and voting patterns, enabling moderators to spot coordinated activity quickly.
  • Includes tools for bulk flagging, reputation adjustment, and reporting.

Details

  • Target Audience: HN moderators, community managers
  • Core Feature: graph analytics, cluster detection, bulk moderation actions
  • Tech Stack: Python (Django), Neo4j, D3.js, WebSocket
  • Difficulty: High
  • Monetization: revenue‑ready; subscription ($10/month per site)

Notes

  • The discussion highlights “bot farms” and “voting rings.” This toolkit turns raw data into actionable insights.
  • By visualizing the network, moderators can intervene before a bot cluster gains traction, preserving discussion quality.
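The ring‑detection idea can be sketched without a graph database: build an edge between accounts that have upvoted each other, then treat large connected components as candidate rings. The mutual‑upvote criterion and the minimum cluster size are illustrative assumptions:

```python
from collections import defaultdict

def mutual_upvote_graph(votes: list[tuple[str, str]]) -> dict[str, set[str]]:
    """Edge between two accounts when each has upvoted the other."""
    voted: dict[str, set[str]] = defaultdict(set)
    for voter, author in votes:
        voted[voter].add(author)
    graph: dict[str, set[str]] = defaultdict(set)
    for a in voted:
        for b in voted[a]:
            if a in voted.get(b, set()):
                graph[a].add(b)
                graph[b].add(a)
    return dict(graph)

def clusters(graph: dict[str, set[str]], min_size: int = 3) -> list[set[str]]:
    """Connected components of size >= min_size are candidate voting rings."""
    seen: set[str] = set()
    out: list[set[str]] = []
    for start in graph:
        if start in seen:
            continue
        comp: set[str] = set()
        stack = [start]
        while stack:                      # iterative depth-first search
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(graph[node] - comp)
        seen |= comp
        if len(comp) >= min_size:
            out.append(comp)
    return out
```

Requiring mutual votes filters out ordinary fans of a popular author; one‑directional voting never creates an edge, so organic popularity does not register as a ring.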
