There's a scene Celine Tan from Singapore General Hospital described at ECR 2026 that most people in imaging will recognize. A new graduate radiographer shows up. Bright, capable, completed onboarding, attended the lectures and signed the checklists. Weeks later, still hesitating before exposures, avoiding questions, and making preventable mistakes.
"I understood the slides," she told Tan, "but I'm not confident when I'm alone in the room." That gap, between information and readiness, is where radiology has always struggled. And across the session’s presentations at RPS 314, "AI-Augmented Practice," it kept turning out to be where AI struggles too.
"We gave her information, but we did not give her readiness," said Celine Tan.Courtesy of Claudia Tschabuschnig
The ambiguous nodule, the protocol that doesn't cleanly map, the failure that no one writes a paper about. AuntMinnieEurope has followed radiographers' relationship with AI since ECR 2024, but this year's session made clear that AI tends to break down in exactly the places radiology already does.
No longer sustainable
Jonathan Loui Portelli, PhD, of the University of Malta opened his keynote by asking the room to picture a shift they'd already lived. "You're at the hospital. It's a busy shift. The department is full. You're in the CT unit. You have a long scanner list, a lot of patients waiting. You need to make rapid decisions, take calls, select protocols, position the patient, check parameters, ensure safety, and try to balance efficiency with personalized patient care."
"AI does not remove that responsibility, but it redistributes it," noted Jonathan Loui Portelli.Courtesy of Claudia Tschabuschnig
"Imagine someone sitting beside you. Not someone replacing you, but someone who can help. Won't get tired, won't get frustrated. Someone who can actually flag issues and suggest options, priorities, and alerts you when something maybe doesn't seem fit. But ultimately, the decision is yours."
Somebody has to lead this
AI didn't arrive because radiographers were failing, he said. "The demands on the workforce are no longer sustainable." But his real argument was political as much as clinical. Radiographers need to claim a seat at procurement tables, policy discussions, and implementation planning, before someone else fills it.
"The question is not whether AI will enter our departments, but who is going to lead how it is going to be used in the safest and most effective manner. AI does not remove that responsibility," he said. "It redistributes it."
Where the model hesitates
Zainab Dawood Aljneibi from Fatima College of Health Sciences in Abu Dhabi, United Arab Emirates, tested a pretrained convolutional neural network in Google AI Studio on 110 chest CT cases -- normal, benign, and malignant. Malignant cases: strong, with an area under the ROC curve of 0.902, clear masses, confident descriptors, and reliable classification. Benign cases: 0.615.
In normal scans, the model occasionally flagged ground-glass opacities that weren't there. In benign cases, overlapping terminology produced misclassification. "Oversensitivity to minor changes," Aljneibi noted. Terms that sounded radiological but pointed nowhere definitive.
"These findings reinforce the need for continued model refinement and mandatory human oversight before clinical replacement," she concluded. The grey zone was grey for the AI too.
A culture that celebrates success
MRI radiographer Nikolaos Stogiannos from Corfu, Greece, arrived with a map of everything that can go wrong, drawn from an open-access review in the European Journal of Radiology. Three categories of AI failure: the model itself, the infrastructure it sits in, the humans using it.
On the model side: algorithmic bias, datasets that lack diversity, poor external validation. Infrastructure: PACS and RIS integration failures, hardware gaps, and network deficiencies. Humans: automation bias, resistance to change, annotation errors, and sociocultural context that shapes adoption in ways no algorithm accounts for.
But the point that cut deepest was about testing. "AI testing, meaning internal or external validation, evaluates accuracy," he said. "But it does not always explore the fairness, robustness, safety, productivity, acceptance, and the explainability of the model. And this is a very important source of AI failures."
Underneath that is a publication problem. "It is challenging to identify AI failures in a culture that mostly celebrates success." A department that quietly withdraws a tool doesn't publish a paper about it. The field's understanding of what goes wrong is structurally incomplete, and decisions about what to deploy next are made on that incomplete picture.
The trust problem
Back to Tan and the radiographer who couldn't ask. Her team at Singapore General Hospital built XPA, an in-house AI chatbot designed to support newly recruited radiographers during onboarding, drawing on the department's own teaching materials and protocols. Eight participants used it for three months, then discussed their experience in a focus group.
What emerged wasn't a technology story. Senior radiographers remained the highest authority. But for basic questions, something else was happening. "Concerns about being judged, appearing incompetent, or disturbing colleagues, particularly outside working hours, were frequently described."
XPA worked not because it knew more than a senior colleague. It worked because it didn't make anyone feel stupid for asking. "The contextualized AI chatbot was perceived as a psychologically safe, nonjudgmental resource for preliminary clarifications," Tan said. An intermediary layer between not knowing and daring to escalate.
The information was always there. What was missing was a safe way to reach it. "Onboarding is not an event," Tan concluded. "It's a continuous support ecosystem."
Ambiguity, inherited
Benoît Dufour from Sion, Switzerland, presented a machine-learning model for recommending CT body protocols across 27 protocol types, trained on over 42,000 adult studies. Where the task was well-defined, the model was strong, reaching 91% accuracy on high-confidence predictions. The errors clustered in protocols with overlapping anatomical coverage, abdominal versus thoracoabdominal, where indication language is already imprecise.
"Some radiologists continue to express their protocol using the old nomenclature with phases and timings," Dufour observed, "and technologists must then adapt."
The model inherited that ambiguity directly. Where human communication was fuzzy, confidence dropped. The study was funded by GE HealthCare.
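"Accuracy on high-confidence predictions" usually means the recommender only commits when its top probability clears a threshold and defers the rest to the technologist. The sketch below illustrates that gating logic under stated assumptions: it is not Dufour's model, and the 0.8 threshold, synthetic logits, and case counts are placeholders.

```python
# Minimal sketch of confidence-gated protocol recommendation: accept the
# model's suggestion only when it is confident, and report accuracy on that
# retained subset. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_cases, n_protocols = 1000, 27          # stand-ins for the real study sizes

y_true = rng.integers(0, n_protocols, size=n_cases)

# Fake logits: the true protocol gets a per-case boost, mimicking a model that
# is confident on clear indications and uncertain on overlapping ones
logits = rng.normal(0.0, 1.0, size=(n_cases, n_protocols))
logits[np.arange(n_cases), y_true] += rng.uniform(0.0, 6.0, size=n_cases)
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

y_pred = probs.argmax(axis=1)            # recommended protocol
confidence = probs.max(axis=1)           # confidence in that recommendation

threshold = 0.8                          # hypothetical "high confidence" cut-off
accepted = confidence >= threshold       # everything else is deferred to a human

coverage = accepted.mean()
accuracy = (y_pred[accepted] == y_true[accepted]).mean() if accepted.any() else float("nan")
print(f"coverage: {coverage:.1%}, accuracy on high-confidence cases: {accuracy:.1%}")
```

The trade-off is visible immediately: raising the threshold lifts accuracy on the accepted cases but shrinks the share of studies the model handles at all, which is exactly where ambiguous indication language pushes cases back to the radiographer.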
The one-third nobody counted
During the discussion, Stogiannos argued that a substantial share of CT energy use occurs during idle time and pointed to AI-controlled shutdown features as one practical area for savings.
"I know that the big equipment vendors are now building AI tools capable of switching off MRI scanner components that we don't use," he said. "When you're not doing the positioning, the table is automatically switched off. And we are eager to see in the future what is next for energy savings."
Not the transformation that gets the conference keynotes. The quiet operational fix for costs that have been running unnoticed for years.
The container matters
The EU-REST project found gaps in radiographer workforce planning across EU member states and proposed a structured, workload-based approach to staffing, alongside stronger education, training, and recognition of emerging and advanced professional roles as imaging practice evolves.
AI is part of the proposed answer to a workforce under serious pressure. But this session, taken as a whole, kept surfacing the same tension. AI is being asked to help a profession that doesn't systematically report its own failures, doesn't always create conditions where junior staff can admit uncertainty, and hasn't resolved basic communication gaps between radiologists and radiographers. The tool is arriving into that container. And the container shapes what the tool can actually do.
Portelli named the stakes plainly at the close of his keynote. "The safest AI use in imaging and radiotherapy will be guided by those who are skilled, educated, and empowered."
Our full coverage of ECR 2026 can be found here.