ECR: Will AI’s ‘smart assistants’ save radiologists from burnout?

As Large Language Models (LLMs) like ChatGPT are increasingly harnessed in daily radiology practice, research, and education, radiologists need to know how to optimize these tools and be aware of their current limitations, according to experts at Thursday’s session: "How to use generative AI for academic, clinical, and administrative tasks."

LLMs can be literally lifesaving, according to Starnberg-based neuroradiologist Dr. Stefan Thieme, co-founder of the AI-powered reporting tool TXT.KIWI in Germany. German radiologists in the private sector are increasingly under pressure, performing more scans in the same working hours while coping with cost pressure from shrinking margins and a growing risk of burnout from rising mental load, noted Thieme. However, LLMs and AI-supported reporting software are already saving radiologists time in everyday clinical tasks, he said.

Dr. Stefan Thieme presents field test results from June 2025. Courtesy of Dr. Stefan Thieme and the ESR.

“LLMs as medical writing assistants can improve speed and quality of paperwork, as well as mental health. LLMs generate context-aware text in seconds and structured reports, which lowers fatigue and frustration,” Thieme told delegates in his presentation, “Using LLMs in clinical routine.”

He noted how these language models could fix “messy” dictation. The problem with standard speech-to-text is that it mangles German grammar, for example, generating wrong suffixes, breaking compound words, or getting Latin terms wrong. With an LLM, far fewer corrections are needed, and it works particularly well for nonnative speakers, according to Thieme.

He also pointed to how many free-text reports lack structure and consistency, whereas when dictating freely with an LLM, the model structures, standardizes, and corrects the text, while handling terminology, laterality, and any missing elements. However, while LLMs may provide structure, review by the radiologist must remain mandatory, he added.

Other uses included filling in templates. Even with free dictation, the LLM can map the content to the right fields with zero clicking, enabling the radiologist to just talk while the model fills the template. However, caution was needed for summarization, he noted, as the model might leave out a critical detail.
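The field-mapping idea can be sketched in a few lines. This is a minimal illustration only: in a real product the LLM itself routes dictated content to template fields, whereas here a simple keyword match stands in for the model, and the section names and keywords are hypothetical.

```python
# Illustrative sketch of mapping free dictation to template fields.
# A keyword match stands in for the LLM's routing; section names and
# keywords below are hypothetical examples, not a real product's schema.

SECTION_KEYWORDS = {
    "Lungs": ["lung", "pulmonary", "nodule"],
    "Liver": ["liver", "hepatic"],
    "Impression": ["impression", "unremarkable"],
}

def fill_template(dictation: str) -> dict:
    """Assign each dictated sentence to the first matching template field."""
    template = {section: [] for section in SECTION_KEYWORDS}
    for sentence in dictation.split(". "):
        lowered = sentence.lower()
        for section, keywords in SECTION_KEYWORDS.items():
            if any(k in lowered for k in keywords):
                template[section].append(sentence.strip(". "))
                break
    return template

report = fill_template(
    "No pulmonary nodules. Liver shows a simple cyst. Impression: unremarkable study"
)
# Each sentence lands in its field, with no manual clicking between sections.
```

The point of the design, per Thieme, is that the radiologist dictates in any order and the model, not the user, decides which field each statement belongs to.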

One-stop shop

Where LLMs really deliver is in the generation of full reports from minimal dictation input, in terminology correction, in standardization, and in adapting structure to predefined templates. However, there is a catch, he noted.

“An LLM alone is just a text generator; without seamless integration, it’s just another chatbot. The real value comes from integration in your workflow, and it is important to get software that really works for you,” Thieme told delegates.

He pointed to the necessity of a full package that covered dictation, correction, structure, templates, quality assurance, and review.

But with such a pipeline, is AI report generation really faster? A field test in June 2025 showed that across 6,976 CT and MRI reports, an LLM saved an average of around 100 words per report (88 words dictated vs. 183 words in output), equating to around a minute saved per report. Using an LLM for 40 MRI reports a day would mean savings of around 40 minutes per day. He further noted that every sixth dictation required fewer than 20 words to produce the report output, and 50% of dictations required fewer than 60 words.
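The arithmetic behind those figures can be sanity-checked in a few lines; the per-report numbers are those reported from the field test, and the conversion to minutes uses the stated figure of roughly one minute saved per report.

```python
# Back-of-the-envelope check of the June 2025 field test figures.
# Inputs are the averages reported in the session.

reports = 6_976               # CT and MRI reports in the field test
words_dictated = 88           # average words dictated per report
words_output = 183            # average words in the finished report

words_saved = words_output - words_dictated   # 95, i.e. "around 100 words"
minutes_saved_per_report = 1                  # as stated in the session

daily_reports = 40            # the example daily MRI workload
daily_saving = daily_reports * minutes_saved_per_report  # 40 minutes/day

print(f"{words_saved} words saved per report")
print(f"{daily_saving} minutes saved per day")
```

So the dictated input is roughly half the length of the finished report, and at one minute per report the saving scales linearly with daily volume.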

“When it comes to implementation of an LLM, begin with low-key tasks, build trust, and then expand. However, there are nonnegotiables. There always needs to be human review. You can’t run reports only with AI -- there needs to be a radiologist at the end of the chain to correct it,” he said.

Editorial AI

Use of LLMs in writing and review was “OK,” but it must be disclosed, or it could be considered an indicator of fraudulent science, according to deputy editor of European Radiology Dr. Daniel Pinto dos Santos, who presented “What editors really think: generative AI in scientific publishing.”

“Being an editor in a time of LLMs can be challenging,” he said. “There are many good applications of LLMs to help write papers and improve on structure and language, and that’s fine with us.”

In the early days of AI-assisted writing, authors were hesitant to disclose the use of LLMs because they thought it might preclude them from being published. However, this is not the case, as the science is rated on its own merit, noted Pinto dos Santos, assistant professor of radiology at the University Medical Center Mainz, Germany. In fact, using LLM tools is positive if it makes the text easier to read.

“However, if we invite you to review for us, you, the reviewer, are not allowed to use public LLMs running in the cloud. These cloud-based LLMs share uploaded information, and until a paper is published, editors and reviewers are bound by confidentiality,” he warned. “If you use a local LLM, please still be careful.”

LLM tools make it easy to generate whole papers and fake science, but the human radar for AI detection is finely tuned, he noted.

“If you think something is so weird that it doesn’t make sense, please tell us. Humans are usually very good detectors of LLM-generated fake science, so trust your instincts,” he said.

Smart teachers

Besides clinical and scientific writing, smart assistants are also helping radiologists in the domain of education, according to Dr. Tugba Akinci d’Antonoli, a radiologist at Basel University Hospital, who provided attendees with use cases in her presentation “Smart assistant: teaching and writing with generative AI.”

Traditional teaching methods, which, besides textbooks and courses, also rely on peer learning and case-based learning with real patients, can result in uneven training, she noted.

“The main limitation is variability. The case volume and the mix depend on the institution and the patient population there, so residents can finish their training with very different experiences and competence levels,” she noted.

AI learning seeks to address this imbalance through “precision education” tools that provide residents with personalized curriculum development and individualized lesson plans depending on need, as well as learner assessment, Akinci d’Antonoli noted. For example, AI teaching tools can generate detailed learning objectives for a second-year radiology resident, which will be different from those of a first-year resident. Such tools identify gaps in knowledge and keep pace with the individual learner’s progress. She cited Harvard’s “RadGame,” an AI-driven gamified platform that provides personalized training, pathology localization, and report writing.

She also pointed to powerful simulation tools that can create different case scenarios and tailor the teaching goals, such as improving technical and interventional skills, or helping residents prepare for typical night-shift scenarios or the tighter time pressure of an emergency.

The session was chaired by Dr. Elmar Kotter from Freiburg im Breisgau, Germany, and by Dr. Merel Huisman from Nijmegen, the Netherlands.

Our full coverage of ECR 2026 can be found here.
