Doctors need AI literacy. But what does that look like in practice?

Aram Zegerius

Aram Zegerius

Yesterday, the Dutch Federation of Medical Specialists published an AI competency set for physicians. The document outlines six competencies that medical residents and specialists need to work responsibly with AI. It might sound like just another policy document, but this one deserves attention.

The competency set didn't appear out of nowhere. In October 2025, 180 healthcare professionals gathered at UMC Utrecht to discuss AI literacy. The EU AI Act requires a baseline level of AI literacy. And meanwhile, more and more doctors are using AI tools in daily practice, often without any formal framework.

Just how much the topic resonates became clear yesterday evening, when the Federation of Medical Specialists (FMS) hosted a joint session of its Patient Safety Platform and AI Network at the Domus Medica in Utrecht. Ask Aletta co-founders Tijs Stehmann and Aram Zegerius were part of the programme and presented together on the responsible use of Ask Aletta in clinical practice. In the panel discussion that followed, with Professor Lotty Hooft (director of Cochrane Netherlands) and Michel van Genderen (intensivist and associate professor at Erasmus MC), the conversation centred on the opportunities and risks of AI for patient safety.

Tijs Stehmann and Aram Zegerius at the Patient Safety Platform and AI Network session of the Dutch Federation of Medical Specialists

What's in it

The six competencies cover a wide range. A physician should be able to explain in broad terms what AI is and how algorithms work, critically assess AI output for reliability, clearly inform patients about the role AI plays in diagnostics and treatment, and weigh ethical and legal considerations.

Two competencies stand out to us.

The second competency states that a physician "recognises bias or incorrect assumptions in AI output" and "analyses results for accuracy, applicability, limitations and potential risks". The third states that a physician "uses AI within applicable quality standards, guidelines and legislation".

Those are good requirements. But they assume something most AI tools don't currently offer.

The gap between competency and tooling

Consider this: a doctor is supposed to critically assess AI output for reliability. That requires being able to see where that output comes from, and to check the sources. Is the answer based on a current guideline, or a five-year-old blog post?

With ChatGPT, Google's AI Overviews or similar tools, you can't do that. The output is a black box. You get an answer, but no insight into the evidence behind it. Last week we wrote about the problem of Google's AI citing YouTube more often than medical sources. How is a doctor supposed to "recognise bias" when the sources are invisible?

And then there's the requirement to use AI within applicable guidelines. That's difficult when the AI system itself doesn't know those guidelines, or doesn't distinguish them from random online content.

The competency set rightly describes what doctors should be able to do. But without the right tools, it stays theoretical.

Where Ask Aletta fits

We built Ask Aletta with exactly these questions in mind, long before the competency set was published. Not because we have a crystal ball, but because to us it's common sense. If you build an AI tool for healthcare professionals, that tool should make it possible to work professionally.

In practice, that means every answer in Ask Aletta includes direct links to its sources. You can see where the information comes from. You can judge for yourself whether the source is relevant to your clinical situation. That's not a feature we added to tick a box on a competency framework or satisfy a compliance requirement. To us, it is simply how medical information should work.

Ask Aletta draws from verified sources such as clinical guidelines and peer-reviewed literature. We therefore don't use YouTube or wellness blogs. And the system understands your discipline, so the information matches your clinical context.

From policy to practice

The FMS has announced it will develop the competency set into modules on its Digital Learning Environment in the first half of 2026. That's a good step. But training alone won't be enough.

Doctors also need tools that support AI literacy in practice. Tools where you don't have to guess whether an answer is correct, but can actually verify it. Where the sources are medically validated and transparency isn't a promise but a property of the system.

The competency set is a starting point. The next step is making sure doctors actually have the means to put those competencies into practice.