"AI is entering the radiological discipline very quickly and will soon be in clinical use," noted the team of imaging informatics experts and leaders of the Italian Society of Medical and Interventional Radiology (SIRM).
"Future legislation must outline the contours of the professional's responsibility, with respect to the provision of the service performed autonomously by AI, balancing the professional's ability to influence and therefore correct the machine and limiting the sphere of autonomy that instead technological evolution would like to recognize to robots," they explained.
The solution is to create an ethical AI, subject to regular checks, controls, and monitoring, as happens with humans, they wrote in an editorial published online on 31 January in La Radiologia Medica, the journal of the SIRM. AI must be subjected to a "vicarious civil liability, written in the software and for which the producers must guarantee the users, so that they can use AI reasonably and with a human-controlled automation."
The authors, led by Prof. Emanuele Neri from the department of translational research at the University of Pisa, urge the medical imaging community to consider these seven points:
- Using AI, the radiologist must be responsible for the diagnosis. AI cannot really act freely or know what it is doing, and if this assumption holds, then the only option is to make humans responsible for what the AI technology does.
- Radiologists must be trained on the use of AI because they are responsible for the actions of machines. Users should familiarize themselves with the technological requirements in this field, including the risk of patients self-diagnosing with such tools, and the use of these technologies should not diminish or harm the doctor-patient relationship, they explained.
- Radiologists involved in research and development must ensure that everybody respects the rules for a "trustworthy AI." Radiologists are well versed in digital imaging and informatics, and those with a special interest in imaging informatics are often involved in the development of AI algorithms and their clinical validation.
- There is a danger of having to validate the unknown. In clinical practice, radiologists will be asked to monitor AI system outputs and validate AI interpretations, but the risk is they carry the ultimate responsibility of validating what they cannot understand.
- Radiologists' decisions may be biased by AI automation. Automation bias is the tendency for humans to favor machine-generated decisions, and it leads to errors of omission and commission: omission errors occur when a human fails to notice, or disregards, the failure of an AI tool, while commission errors occur when a human accepts an incorrect machine-generated decision.
- More AI tools may be needed to compensate for the lack of radiologists. Progressive shortages of radiologists in the years to come could lead to the implementation of ever-greater numbers of AI tasks.
- There is a need for informed consent and quality measures. Quality measures should relate to ensuring software robustness and data security, updating software and hardware, and avoiding equipment obsolescence. Attention must be paid to image processing, guaranteeing its integrity during the analysis process with neural networks, thus avoiding a modification of the raw data.
In a 2019 white paper, the European Society of Radiology (ESR) stated that the most likely and immediate effect of AI will be on the management of radiology workflow. It should help to improve and automate acquisition protocols, as well as increase appropriateness (with clinical decision-support systems) and structured reporting. AI can also enhance the ability to interpret the big data of image biobanks connected to tissue biobanks, with the development of radiogenomics, the ESR stated.
"But the fundamental problem of an ethical and not harmful AI still remains. Are there solutions?" asked Neri and colleagues.
They're convinced their seven-point strategy must be part of any solution.
Copyright © 2020 AuntMinnieEurope.com