1. AI health answers are often unreliable and can be dangerous
“It’s not mistakes, half the time it’s completely wrong and total bullshit information.” – jdlyga
2. YouTube is being promoted as a primary source, raising concerns about commercial bias
“Google AI Overviews cite YouTube more than any medical website when answering queries about health conditions.” – xnx
“Google AI (owned by Alphabet) favoring YouTube (also owned by Alphabet) should be unsurprising.” – ThinkingGuy
“Google AI overviews are often bad, yes, but why is youtube as a source necessarily a bad thing?” – laborcontract (a dissenting view on this concern)
3. The AI’s source‑selection process is opaque and often pulls in low‑quality or AI‑generated content
“Gemini cites lots of “AI generated” videos as its primary source, which creates a closed loop and has the potential to debase shared reality.” – abixb
“The analysis is really lazy garbage. It lumps together quality information and wackos as “youtube.com”.” – xnx
4. Users expect AI to be accurate and transparent, but it frequently over‑promises and under‑communicates uncertainty
“Basic problem with Google's AI is that it never says “you can't” or “I don't know”. So many times it comes up with plausible‑sounding incorrect BS.” – bjourne
“Google AI cannot be trusted for medical advice. It has killed before and it will kill again.” – josefritzishere
These four themes capture the core concerns of the discussion: the safety of medical AI, corporate influence over sourcing, the quality and provenance of cited content, and the mismatch between user expectations and AI behavior.