ECR: Cybersecurity -- radiologists are the last bastion for verifying GenAI images

As AI’s role in radiology continues to evolve and change, it has become increasingly susceptible to the risk of cyberattacks. An ECR session on 4 March called Cybersecurity in AI-driven radiology systems (E³ 226a) revealed how they are vulnerable and provided tips to tighten security strategies.

Is it possible to take a “fight fire with fire” approach and use AI for cybersecurity? Dr. Renato Cuocolo, radiologist and associate professor at the University of Salerno, Italy, believes that adversarial defense using AI to catch AI is possible. Models can be trained using “attacked” images so they learn to ignore noise or “poisoned” data, he said, answering this question posed in the discussion section by an attendee.

Furthermore, radiology needs “sentinel models”: AI that monitors other AI for suspicious behavior.

In his talk on Generative AI and aligning risks to data, Cuocolo pointed to the imminent need for red teaming in AI-driven Radiology. This would involve clinical “red teams” to proactively test and identify vulnerabilities in GenAI systems, including radiologists actively trying to break the model, for example, forcing it to diagnose a fracture where there is none.

“If a radiologist can fool the system, then a hacker can weaponize it,” he noted.

While the attack surface of AI and the vulnerabilities of large language models (LLMs) are becoming more widely known, what happens when the data itself becomes the weapon? This was the question he put to the audience during his presentation, noting that Generative AI doesn’t just analyze data but also generates it, and this shifts the risk from confidentiality and theft to integrity and trust, and rightly raises the question, “Can we trust our pixels?”

He noted that while LLMs such as ChatGPT and Gemini get more news coverage, Generative AI in radiology goes beyond text and targets image reconstruction, synthetic data (for example, generating “fake” patients for research), and image-to-report multimodal foundational models. With these GenAI targets, the risk is clear: If you can corrupt the generation process, you can corrupt the diagnosis, according to Cuocolo.

The first threat involves data poisoning or the “sleeping agent,” whereby an attacker inserts poisoned samples into the training data. The model then works perfectly for 99% of cases, but when a specific pattern appears, for example, a specific ID, scanner tag, or pixel noise signature, the AI will insert -- or hide -- a pathology such as a tumor.

“You can’t patch a poisoned model; you must retrain it,” he noted.

Radiologists also had to be aware of another threat -- a hallucination attack. This involved GenAI output that does not align with factual information or logical coherence. This could be accidental, such as AI removing a small lesion during denoising, or malicious, with an attacker injecting realistic fake evidence into PACS or the EHR, potentially triggering legal disputes over fraud.

A third threat to watch for was model inversion, which posed a privacy leak risk. For example, by asking GenAI to generate a brain MRI of a 40-year-old with glioblastoma from hospital X, the model might output a recognizable image of a real patient, causing a privacy breach without hacking. He noted that to avoid the risk of inversion and re-identification of real patients, extra mathematical noise could be added to datasets to mask individuals. There could also be model output auditing tools to check the similarity with training data samples. However, both these security layers increase computational costs.

He went on to describe how GenAI employs both images and text as input and output, and that this will increase the attack surface area in systems. One example of this was the “visual prompt injection” threat, in which attackers could hide malicious text inside the pixel data of an x-ray for example.

“While the radiologist might see a lung, the AI sees a command; hide the nodule,” he said.

While there were strategies to mitigate the potential threats, he pointed to the “human-in-the-loop firewall.”

“The ultimate security layer is the radiologist. AI findings must be verified by looking at the raw pixels, not just the overlay. And there must be training to learn how to recognize AI- and GenAI-specific issues,” he said. “Trust, but verify, the pixels.”

He added that while generative AI is transformative, it introduces truth decay and loss of the reliability of data.

“Security is no longer just about firewalls; it’s about data assurance, and we must align our risk management to the data itself,” he said.