With AI becoming more deeply integrated into clinical practice, a critical challenge for radiology is ensuring that the use of AI does not compromise patients’ rights, according to a legal specialist.
It’s not just a professional duty but also a legal obligation to prioritize patients’ rights in the use of AI, wrote Hannah van Kolfschooten, PhD, a postdoctoral researcher at the Centre for Life Sciences Law at the University of Basel, in an opinion piece published on 13 January by European Radiology.
Under the recently adopted EU AI Act, hospitals and clinicians deploying “high-risk” AI systems, a category that includes many radiology applications, are required to demonstrate adequate AI literacy. This obligation extends beyond technical knowledge to understanding and assessing the risks that AI use poses for patients, wrote van Kolfschooten, who is a member of the WHO Technical Advisory Group on AI in Health. Until December 2025, she was a researcher at the University of Amsterdam.
In her piece, she outlines five basic patients’ rights that may be affected by the use of AI:
- Access to care: Biased and demographically limited training datasets can affect outcomes, especially for patients in marginalized groups, amplifying inequities already present in healthcare.
- Autonomy: An algorithm may flag or dismiss findings, and its outputs may be accepted without explanation or clinician oversight, van Kolfschooten wrote, denying patients the opportunity to make informed decisions about their own care.
- Information: The use of algorithms can make it difficult for radiologists to explain to patients how a diagnosis was reached or a finding was flagged.
- Privacy: The development of radiology AI requires datasets of images, which raises concerns about consent, data safety, and reuse of data, especially when a hospital or imaging facility is part of a collaboration with a technology company.
- Redress: Assigning liability may become difficult when AI-assisted diagnoses prove inaccurate. “Without transparency and clearly defined liability frameworks, patients may find it nearly impossible to pursue compensation,” she wrote.
While legal safeguards such as the General Data Protection Regulation (GDPR) and the Medical Device Regulation (MDR) are now in place, they do not adequately address these rights, she added. The recently adopted AI Act is too general, in the author’s view, and falls short of creating or enforcing patients’ rights.
Van Kolfschooten proposes safeguarding these rights through a three-part approach: a European Charter of Digital Patients’ Rights to clearly delineate patients’ rights in the development and use of AI technology; professional guidelines for using AI in radiology; and continuous education and stronger AI literacy to ensure responsible use that prioritizes patients’ rights.