ChatGPT-generated informed consent forms may improve patients' understanding of complex interventional radiology (IR) procedures, research has found.
The findings suggest that large language models (LLMs), when applied under expert supervision, can serve as effective tools for enhancing patient communication and health literacy, especially for transarterial radioembolization (TARE) and percutaneous hydatid cyst treatment (PHCT), according to Dr. Ural Koç and colleagues at the Ankara Bilkent City Hospital in Ankara, Turkey. Their study was published on 28 December in the European Journal of Radiology.
"In practice, many consent documents in interventional radiology are written at a linguistic complexity far beyond the average patient's comprehension level," stated the authors. Aiming to bridge the gap between algorithmic generation and patient-centered understanding, the group developed Turkish-language prompts for the web-based ChatGPT-4o interface to create informed consent documents for TARE and PHCT.
The ChatGPT outputs were checked for medical accuracy, ethical consistency, and readability. No hallucinations (defined as fabricated or clinically incorrect statements) and no omissions were identified, the group noted, adding that only minor grammatical adjustments were made when necessary.
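Koç's team worked in the web-based ChatGPT-4o interface with Turkish-language prompts, but the same kind of generation can be scripted. Below is a minimal sketch against the OpenAI Python API; the prompt text is a hypothetical English paraphrase standing in for the study's actual Turkish prompt, not the wording the authors used.

```python
# Illustrative sketch only: the study used the web-based ChatGPT-4o interface,
# not the API, and its prompts were in Turkish. The prompt below is a
# hypothetical English paraphrase.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a patient informed-consent form for transarterial "
    "radioembolization (TARE). Use plain language, short sentences, "
    "and clearly labeled sections covering the procedure's purpose, "
    "risks, benefits, and alternatives."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```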
To compare the ChatGPT-generated forms to standard consent forms, Koç and colleagues recruited 122 adults with no prior knowledge of or exposure to either of the two IR procedures. Sixty-two participants received the LLM-generated consent form, and 60 received the standard form, for either TARE or PHCT -- although all also signed the standard form prior to their procedure, the authors noted.
The standard TARE form contained 1,570 words, 141 sentences, and 101 paragraphs, with an average sentence length of 11.1 words. It prioritized comprehensiveness over accessibility, making it suitable for legal documentation but inadequate for patients with limited literacy, according to the researchers. The PHCT standard form was more readable but remained linguistically dense and in need of structural refinement and plain-language adaptation, they added.
In contrast, ChatGPT-4o composed shorter sentences with lower linguistic complexity. Notably, the study also demonstrates the use of an LLM in an agglutinative language (Turkish), in which words are built by chaining suffixes onto a root and can become quite long.
Researchers tested and retested participants on the day of their procedure to measure comprehension and satisfaction, as well as consent form readability. Patients who read ChatGPT-generated IR consent forms demonstrated higher comprehension scores than those who read standard forms (82.9% vs. 77.3%, p = 0.04), the group found.
Within each procedure, ChatGPT-generated forms scored numerically higher than standard forms, although the within-procedure differences did not reach statistical significance (TARE: 87.2% vs. 81.7%; PHCT: 78.3% vs. 73.0%; p = 0.13 for both). The benefit was consistent across procedures and education levels, with the greatest relative improvement among participants with lower educational attainment.
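The article does not state which statistical test produced these p-values. Purely for illustration, a generic two-sample comparison of per-patient comprehension scores could be run as below; the score lists are invented placeholders, not study data.

```python
# Illustration only: the study's actual per-patient scores and choice of
# statistical test are not given here; these lists are invented placeholders.
from scipy import stats

chatgpt_scores = [88, 79, 85, 90, 76, 83]    # hypothetical % comprehension scores
standard_scores = [74, 81, 70, 79, 77, 72]   # hypothetical % comprehension scores

t_stat, p_value = stats.ttest_ind(chatgpt_scores, standard_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```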
The ChatGPT-generated TARE informed consent form demonstrated the most balanced readability profile among all evaluated materials (Flesch reading ease of 39.2; Ateşman score of 53.9), Koç and colleagues noted. Knowledge retention was also higher with the ChatGPT-generated TARE form (average patient knowledge score of 84.5%) than with the ChatGPT-generated PHCT form (75.6%).
Composed of 481 words, 88 sentences, and 67 paragraphs, the LLM-generated TARE form had an average sentence length of 5.5 words and an average word length of 3.25 characters. "These features reflect a concise syntactic structure and effective segmentation that promote comprehension," the authors noted.
The LLM-generated PHCT form included 282 words, 51 sentences, and 26 paragraphs. The group noted that its average sentence length of 5.5 words and average word length of 3.01 characters suggested compact syntax but higher lexical density. "While medically precise, its procedural language elevated reading difficulty," Koç and colleagues wrote. "These findings demonstrate that ChatGPT-4o tends to mirror the inherent complexity of the procedure unless explicitly prompted for plain-language generation."
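For context, the surface statistics and readability scores reported above are straightforward to compute. The sketch below is illustrative only: the segmentation rules in text_stats are naive assumptions (sentence-final punctuation, blank-line paragraphs), while the two readability functions follow their published definitions, Flesch (1948) for English and Ateşman (1997), its Turkish adaptation; syllable counting is left to the caller.

```python
import re

def text_stats(text: str) -> dict:
    """Crude surface statistics like those reported for the consent forms.

    Naive assumptions: sentences end in . ! or ?, and paragraphs are
    separated by blank lines; real clinical forms with headings and
    numbered lists would need smarter segmentation.
    """
    words = re.findall(r"\w+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    paragraphs = [p for p in re.split(r"\n\s*\n", text) if p.strip()]
    return {
        "words": len(words),
        "sentences": len(sentences),
        "paragraphs": len(paragraphs),
        "avg_sentence_len_words": len(words) / max(len(sentences), 1),
        "avg_word_len_chars": sum(map(len, words)) / max(len(words), 1),
    }

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch (1948), English: higher scores indicate easier text."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def atesman_readability(words: int, sentences: int, syllables: int) -> float:
    """Atesman (1997), the Turkish adaptation of the Flesch formula."""
    return 198.825 - 40.175 * (syllables / words) - 2.610 * (words / sentences)
```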
This study is among the early prospective "patient-centered" studies evaluating LLM-generated informed consent forms in a real-world interventional radiology setting, according to the authors.
The findings suggest that LLMs could simplify procedural communication. However, an attitudinal survey showed that while participants were satisfied with the information provided about their illness, a majority still felt the need to ask their doctor questions because the information was insufficient or unclear (40.3% answered "definitely yes" and 33.9% "yes"), and responses favored the inclusion of visual explanations.
Read the complete study here.