Why popularity is not a measure of medical reliability

Aram Zegerius

Technical conscience

Last weekend, The Guardian reported on research that should give us pause. An SE Ranking analysis of over 50,000 health-related search queries in Germany reveals that Google's AI Overviews – the AI-generated summaries at the top of search results – cite YouTube more frequently than any medical website.

The numbers are sobering:

  • YouTube: 4.43% of all citations (20,621 out of 465,823)
  • NDR.de (German public broadcaster): 3.04%
  • MSD Manuals: 2.08%
  • Netdoktor.de: 1.61%

No hospital network or academic institution comes close to YouTube's dominance. Academic journals represent just 0.48% of all citations. Government health authorities combined account for less than 1%.
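
For anyone who wants to check the figures, the arithmetic is simple: a citation share is just a domain's count divided by the study's 465,823 total citations. A minimal sketch; note that only YouTube's raw count is reported, so the other counts are back-calculated from the published percentages and are approximations (the non-YouTube domain names are also assumed forms):

```python
# A citation "share" is simply count / total citations.
TOTAL_CITATIONS = 465_823

citation_counts = {
    "youtube.com": 20_621,                               # reported directly
    "ndr.de": round(0.0304 * TOTAL_CITATIONS),           # back-calculated from 3.04%
    "msdmanuals.com": round(0.0208 * TOTAL_CITATIONS),   # back-calculated from 2.08%
    "netdoktor.de": round(0.0161 * TOTAL_CITATIONS),     # back-calculated from 1.61%
}

for domain, count in citation_counts.items():
    print(f"{domain}: {count / TOTAL_CITATIONS:.2%}")
# youtube.com prints 4.43%, matching the study's headline figure.
```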

The problem is not YouTube itself

Let's be fair: valuable medical content exists on YouTube. University hospitals publish surgical videos. Cardiologists explain ECG interpretation. Dermatologists demonstrate diagnostic techniques.

Google's own defence points to this: of the 25 most-cited YouTube videos in the study, 96% came from medical channels.

But here's the catch: those 25 videos represent less than 1% of all YouTube links that AI Overviews cite for health questions. What about the other 99%? The researchers are cautious: "With the rest of the videos, the situation could be very different."

Popularity versus reliability

Hannah van Kolfschooten, a researcher at the University of Basel specialising in AI and health law, identifies the core issue:

"The findings show that these risks are embedded in the way AI Overviews are designed. In particular, the heavy reliance on YouTube rather than on public health authorities or medical institutions suggests that visibility and popularity, rather than medical reliability, is the central driver for health knowledge."

That's the problem. The algorithms determining what AI systems cite are not optimised for medical accuracy. They optimise for engagement, views, and click behaviour.

The researchers estimate that only 34% of cited sources have formal safeguards for medical reliability. The remaining 66% do not. A wellness influencer with a million views can appear algorithmically "more valuable" than a clinical guideline. Broscience and lifestyle gurus get the same weight as peer-reviewed research.
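
To see why this matters, consider an illustrative toy ranking. This is not Google's actual algorithm, whose signals are proprietary, and the numbers are invented; it only shows the structural difference. An engagement-optimised scorer puts the million-view influencer first, while a scorer that treats reliability safeguards as a hard gate cannot:

```python
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    views: int            # popularity signal
    has_safeguards: bool  # formal medical-reliability safeguards

influencer = Source("wellness influencer video", views=1_000_000, has_safeguards=False)
guideline = Source("clinical guideline", views=4_000, has_safeguards=True)

# Engagement-optimised: rank purely on popularity.
by_engagement = sorted([influencer, guideline], key=lambda s: s.views, reverse=True)

# Reliability-gated: safeguards come first; popularity only breaks
# ties among sources that pass the gate.
by_reliability = sorted(
    [influencer, guideline],
    key=lambda s: (s.has_safeguards, s.views),
    reverse=True,
)

print([s.name for s in by_engagement])   # influencer ranked first
print([s.name for s in by_reliability])  # guideline ranked first
```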

The loop closes

Online discussions about this research point to a disturbing phenomenon: AI systems are increasingly citing AI-generated content. Videos with AI voiceovers and AI-generated imagery appear as sources for AI summaries.

This creates a closed loop in which information can no longer be traced back to primary sources like clinical trials or consensus guidelines. The origin of medical claims becomes increasingly opaque.

For consumers, this is worrying. For healthcare professionals, it's simply unworkable.

What healthcare professionals actually need

A GP who needs to quickly verify whether a medication is safe during pregnancy doesn't need a YouTube compilation. A nurse looking for the latest sepsis guidelines gains nothing from content that's popular but potentially outdated.

Healthcare professionals need information that's traceable to specific sources, based on current guidelines, and tailored to clinical practice.

General-purpose AI search engines can't provide this. They're built to answer the average question from the average user – not to meet the requirements of professional medical decision-making.

How Ask Aletta does things differently

At Ask Aletta, we take a different approach: the source determines the value, not popularity.

Our medical AI assistant draws exclusively from verified sources: clinical guidelines and peer-reviewed medical literature. We don't use YouTube, wellness blogs, or other content of unclear origin.

Every answer includes direct links to sources. Not because we use transparency as a marketing term, but because healthcare professionals have the right to verify what an AI system claims.

Ask Aletta also understands your discipline. A GP receives different information than a paediatrician or an emergency physician. Not because the medical facts differ, but because the relevant guidelines and clinical context are different.
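
Stripped to its essentials, "the source determines the value" is an allowlist-first retrieval policy. A minimal sketch of the idea, purely illustrative: this is not Ask Aletta's actual implementation, and the source types, discipline tags, and field names are invented for the example.

```python
from dataclasses import dataclass

# Only vetted source types are eligible; popularity plays no role.
VERIFIED_SOURCE_TYPES = {"clinical_guideline", "peer_reviewed"}

@dataclass
class Document:
    title: str
    url: str
    source_type: str       # e.g. "clinical_guideline", "video", "blog"
    disciplines: set[str]  # e.g. {"general_practice", "paediatrics"}

def retrieve(corpus: list[Document], discipline: str) -> list[Document]:
    # Eligibility is a property of the source, not of its view count,
    # and results are scoped to the asking clinician's discipline.
    return [
        doc for doc in corpus
        if doc.source_type in VERIFIED_SOURCE_TYPES
        and discipline in doc.disciplines
    ]

def answer_with_citations(question: str, docs: list[Document]) -> dict:
    # Every answer carries direct links, so the claim stays verifiable.
    return {
        "question": question,
        "citations": [{"title": d.title, "url": d.url} for d in docs],
    }
```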

The future of medical information

The SE Ranking study is a snapshot from December 2025, conducted in Germany. The researchers acknowledge limitations: results may vary by region, by time period, by question phrasing.

But the underlying dynamic is universal. As long as AI systems optimise for popularity and engagement, they will continue to cite suboptimal sources for medical queries.

The solution doesn't lie in improving general-purpose AI search engines. The solution lies in specialised systems designed from the ground up for medical reliability.