Project ideas from Hacker News discussions.

Google AI Overviews cite YouTube more than any medical site for health queries

📝 Discussion Summary

1. AI health answers are often unreliable and can be dangerous

“It’s not mistakes, half the time it’s completely wrong and total bullshit information.” – jdlyga
“Google AI overviews are often bad, yes, but why is youtube as a source necessarily a bad thing?” – laborcontract

2. YouTube is being promoted as a primary source, raising concerns about commercial bias

“Google AI Overviews cite YouTube more than any medical website when answering queries about health conditions.” – xnx
“Google AI (owned by Alphabet) favoring YouTube (also owned by Alphabet) should be unsurprising.” – ThinkingGuy

3. The AI’s source‑selection process is opaque and often pulls in low‑quality or AI‑generated content

“Gemini cites lots of “AI generated” videos as its primary source, which creates a closed loop and has the potential to debase shared reality.” – abixb
“The analysis is really lazy garbage. It lumps together quality information and wackos as “youtube.com”.” – xnx

4. Users expect AI to be accurate and transparent, but it frequently over‑promises and under‑communicates uncertainty

“Basic problem with Google's AI is that it never says “you can't” or “I don't know”. So many times it comes up with plausible‑sounding incorrect BS.” – bjourne
“Google AI cannot be trusted for medical advice. It has killed before and it will kill again.” – josefritzishere

These four themes capture the core concerns of the discussion: safety of medical AI, corporate influence, source quality, and the mismatch between user expectations and AI behavior.


🚀 Project Ideas

SourceCred Browser Extension

Summary

  • Adds a source‑audit panel to AI‑generated answer boxes (Google AI Overviews, Gemini, ChatGPT, etc.).
  • Shows source credibility scores, excerpted text, and flags AI‑generated content.
  • Core value: gives users instant confidence checks on AI citations.

Details

Target Audience: HN users, researchers, medical professionals, students
Core Feature: Real‑time source audit overlay with credibility metrics and source text
Tech Stack: Chrome/Firefox extension, React, Node.js backend, OpenAI API for NLP scoring
Difficulty: Medium
Monetization: Revenue‑ready: freemium with paid analytics add‑on

Notes

  • “I want to see the source text” – many commenters expressed this need.
  • Enables discussion on how AI cites YouTube vs. peer‑reviewed sites.
  • Practical for anyone who relies on AI for quick answers and wants to verify sources.
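The credibility metrics the extension overlays would have to come from somewhere; one simple starting point is a domain-tier lookup on each cited URL. The sketch below shows that heuristic in Python (the domain list, scores, and the `credibility_score` function are all illustrative assumptions, not part of any existing product):

```python
# Hypothetical backend heuristic for the SourceCred audit panel: map each
# citation URL to a rough 0-1 credibility score by domain. The tiers and
# numbers below are illustrative assumptions, not real ratings.
from urllib.parse import urlparse

DOMAIN_SCORES = {
    "nih.gov": 0.95,        # assumed tier: institutional / peer-reviewed
    "who.int": 0.95,
    "mayoclinic.org": 0.85,  # assumed tier: established medical sites
    "webmd.com": 0.70,
    "youtube.com": 0.30,     # assumed tier: unvetted video platforms
}
DEFAULT_SCORE = 0.50  # unknown domains get a neutral score

def credibility_score(citation_url: str) -> float:
    """Return a 0-1 credibility score for a cited URL based on its domain."""
    host = urlparse(citation_url).netloc.lower()
    # Strip a leading "www." so "www.nih.gov" matches the "nih.gov" entry.
    host = host.removeprefix("www.")
    return DOMAIN_SCORES.get(host, DEFAULT_SCORE)
```

A real version would blend in signals beyond the domain (author credentials, page age, AI-generation flags), but even this crude lookup lets the panel visually separate a YouTube citation from an NIH one.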

MedTrust Knowledge Base

Summary

  • Curated, searchable database of peer‑reviewed medical articles, guidelines, and institutional reports.
  • Provides an API for LLMs to query only vetted sources.
  • Core value: replaces unreliable YouTube citations with trustworthy medical evidence.

Details

Target Audience: Medical researchers, clinicians, health‑tech startups
Core Feature: API‑driven search over vetted medical literature with metadata (DOI, authorship, recency)
Tech Stack: PostgreSQL, ElasticSearch, FastAPI, Docker, Kubernetes
Difficulty: High
Monetization: Revenue‑ready: tiered API pricing (free tier + paid plans)

Notes

  • “Google AI Overviews cite YouTube more than any medical site” – a pain point this solves.
  • Encourages LLMs to use high‑quality sources, reducing misinformation.
  • Sparks debate on how AI should source medical knowledge.

VideoAuth AI‑Video Detector

Summary

  • Detects AI‑generated videos on YouTube and other platforms, tagging them with an authenticity badge.
  • Uses multimodal analysis (visual, audio, metadata) to flag synthetic content.
  • Core value: prevents AI from citing unverified videos as evidence.

Details

Target Audience: Content creators, fact‑checkers, AI developers
Core Feature: Real‑time video authenticity scoring and badge overlay
Tech Stack: TensorFlow, PyTorch, FFmpeg, AWS Lambda, React
Difficulty: Medium
Monetization: Hobby (open source) with optional enterprise licensing

Notes

  • “AI-generated videos are being used as sources” – a major frustration.
  • Helps platforms like YouTube enforce transparency.
  • Provides a discussion point on the ethics of synthetic media.
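Whatever models produce the per-modality signals, the detector still needs a fusion step that turns visual, audio, and metadata scores into one badge. A sketch of that step (the weights, threshold, and badge strings are assumptions, not outputs of any trained model):

```python
# Illustrative fusion step for VideoAuth: per-modality synthetic-likelihood
# scores (each 0-1, higher = more likely AI-generated) are combined into a
# single verdict. Weights and threshold are assumptions, not values from
# any trained model.
MODALITY_WEIGHTS = {"visual": 0.5, "audio": 0.3, "metadata": 0.2}
SYNTHETIC_THRESHOLD = 0.6  # above this, the badge flags the video

def fuse_scores(scores: dict[str, float]) -> float:
    """Weighted average of available modality scores; missing ones are skipped."""
    present = {m: w for m, w in MODALITY_WEIGHTS.items() if m in scores}
    total_weight = sum(present.values())
    return sum(scores[m] * w for m, w in present.items()) / total_weight

def badge(scores: dict[str, float]) -> str:
    """Map a fused score to the badge text shown on the overlay."""
    if fuse_scores(scores) > SYNTHETIC_THRESHOLD:
        return "likely AI-generated"
    return "no synthetic markers found"
```

Handling missing modalities by renormalizing the remaining weights keeps the score comparable when, say, a video has no audio track.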

SourceVerifier API

Summary

  • Accepts URLs and returns credibility metrics: author credentials, publication date, AI‑generation flag, and source type.
  • Designed for LLMs to filter out low‑trust or AI‑generated content before citing.
  • Core value: automates source vetting for AI systems.

Details

Target Audience: LLM developers, AI research labs, content platforms
Core Feature: URL credibility scoring and AI‑content detection
Tech Stack: Go, Redis, ML models, REST API, Docker
Difficulty: Medium
Monetization: Revenue‑ready: pay‑per‑request API with subscription tiers

Notes

  • “I want to know if a source is trustworthy” – a recurring comment.
  • Enables AI agents to say “I don’t know” when sources are unreliable.
  • A practical building block for trustworthy AI assistants.
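To make the "cite or abstain" behavior concrete, here is a hypothetical shape for a SourceVerifier response and the gate an LLM pipeline could apply before citing a URL. Every field name and the decision rules are illustrative assumptions about the API, not a defined spec:

```python
# Hypothetical SourceVerifier response and the client-side gate an LLM
# pipeline could apply before citing a URL. Field names and decision
# rules are illustrative assumptions, not a defined API contract.

def ok_to_cite(report: dict) -> bool:
    """Cite only sources that are not AI-generated, have a named author,
    and carry a known publication date."""
    if report.get("ai_generated_flag"):
        return False
    if not report.get("author_credentials"):
        return False
    return report.get("publication_date") is not None

# Example response shape (all values illustrative):
example_report = {
    "url": "https://example.org/article",
    "author_credentials": "MD, cardiology",
    "publication_date": "2024-03-01",
    "ai_generated_flag": False,
    "source_type": "institutional",
}
```

When `ok_to_cite` fails for every candidate source, the calling assistant can fall back to an explicit "I don't know" instead of citing something unverified, which is exactly the behavior commenters asked for.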
