Strickland calls for more radiology-led AI assessment


The medical imaging community must learn how to check product credibility and needs to consider conducting its own studies to evaluate the accuracy of artificial intelligence (AI) algorithms in an effort to halt premature market acceptance, Dr. Nicola Strickland told participants on Thursday at the British Institute of Radiology's webinar, "AI in radiology: The main worries."

Strickland, who is due to stand down as president of the U.K. Royal College of Radiologists in September, voiced her concerns about AI at the event. She advocated careful scrutiny of AI software and provided tips on how to ascertain how robustly products had been tested.

Dr. Nicola Strickland, RCR president.

The key challenges of AI include addressing clinically relevant issues, assessing products, gaining access to enough data of sufficient quality, data anonymization and pseudonymization, regulation and integration, and premature introduction into clinical practice, according to Strickland.

The promises of AI are tantalizing, and triage of worklists on PACS could soon prove to be one of AI's "low-hanging fruit" that would prioritize automatically detected "urgent" cases and deprioritize "normal" findings, she explained.

Ultimately, in 10 to 15 years, the development of radiomics and radiogenomics means that AI will help predict tumor type from a region of interest on a CT image and provide prognoses for conditions, including cancer and cardiac disease, as well as the likely response to a drug in particular patients, based on their genomic data. Strickland also extolled the virtues of specific AI tools already in use, particularly speech-recognition programs and cardiac segmentation software.

Prime concern

In the here and now, a major worry is how radiologists can assess the new products hitting the market.

"Some may ask why it is necessary that it's radiologists who undertake this task," Strickland said. "There are two main reasons why we would want to do that. First, because at the current time, regulation on AI is inadequate and difficult to perform at the national or international level. Secondly, because there is no doubt that AI products are being prematurely introduced into clinical practice."

It's important for radiology to empower medical professionals by letting them know exactly what they're using and allowing them to judge whether the AI is fit for purpose, she added. Doctors, therefore, need to train in AI and machine learning. Indeed, the core curriculum of the Fellowship of the RCR has been recently rewritten to include the basic concepts of AI and submitted to the U.K. General Medical Council for approval.

Such training can enable doctors to understand the terms and methodology used in developing and testing the algorithms, know what to look for in a product, and be more aware of the relevant publications, particularly in peer-reviewed journals, as opposed to the more superficial presentations that show the software in a good light.

Importance of data and evidence

Doctors need to look at the data used for developing and training the algorithm in terms of numbers and provenance, what data were used for verifying the algorithm after development, and then what data were used for testing it, Strickland continued. She expressed surprise that a thrombotic stroke diagnosis product had received the CE Mark and U.S. Food and Drug Administration clearance despite having been developed using only 300 CT angiography exams analyzed by two neuroradiologists.

She also pointed to the need to learn about the accuracy of the algorithm, namely the number of true negatives and true positives, and how these were measured. Doctors must be trained to look at these aspects when assessing products and also when assessing publications about specific products.

"By knowing about that ..., then we can hope to make sure that there hasn't been 'overfitting' of the data, giving us spurious and misleading results," she said.
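The figures Strickland says to look for can be illustrated with a short sketch. The counts below are invented for illustration only; sensitivity, specificity, and accuracy follow from the standard definitions based on true/false positives and negatives.

```python
# Hedged sketch (numbers invented for illustration): deriving the standard
# diagnostic metrics from the counts a product's validation report should give.

def diagnostic_metrics(tp, fp, tn, fn):
    """Compute sensitivity, specificity, and accuracy from raw counts."""
    return {
        "sensitivity": tp / (tp + fn),            # true-positive rate
        "specificity": tn / (tn + fp),            # true-negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Example: 1,000 exams, 100 with disease; the algorithm finds 90 of them
# but also flags 30 healthy cases.
m = diagnostic_metrics(tp=90, fp=30, tn=870, fn=10)
print(m)  # sensitivity 0.90, specificity ~0.967, accuracy 0.96
```

A high headline accuracy can mask a poor true-positive rate when disease prevalence is low, which is one reason Strickland urges looking past the summary figure.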

Honing questions to pose to AI developers is another skill to learn. Strickland illustrated this through the need to ascertain the Dice coefficient (overlap index) of an AI segmentation product, which quantifies how closely the AI-generated contour overlaps with a human-drawn one.
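For readers unfamiliar with the metric, the Dice coefficient mentioned above is 2|A ∩ B| / (|A| + |B|): 1.0 means the two contours overlap perfectly, 0.0 means not at all. A minimal sketch, with toy pixel masks invented for illustration:

```python
# Dice coefficient (overlap index) between two binary segmentation masks,
# here represented as sets of pixel coordinates. Toy data for illustration.

def dice_coefficient(mask_a, mask_b):
    """Dice = 2 * |A intersect B| / (|A| + |B|)."""
    intersection = len(mask_a & mask_b)
    return 2 * intersection / (len(mask_a) + len(mask_b))

# Hypothetical example: pixels inside the AI contour vs. the radiologist's.
ai_mask = {(0, 0), (0, 1), (1, 0), (1, 1)}
human_mask = {(0, 1), (1, 0), (1, 1), (2, 1)}

print(dice_coefficient(ai_mask, human_mask))  # 0.75: 3 of 4 pixels shared
```

Asking a vendor for the Dice scores achieved on an independent test set, rather than a demonstration case, is the kind of pointed question Strickland has in mind.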

Going one step further, radiologists could conduct their own studies to judge whether AI being introduced into clinical practice is safe for patients. This would mean knowing how to set up an appropriate methodology for a study comparing results from AI with radiologists' own diagnoses, and also ascertaining the statistical power necessary and the case numbers needed for analysis.
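The power calculation alluded to can be sketched with the standard normal-approximation formula for comparing two proportions; all figures below (85% radiologist sensitivity vs. a claimed 95% for the AI, 80% power, 5% two-sided significance) are assumptions for illustration, not numbers from the talk.

```python
# Illustrative sample-size sketch for a study comparing an AI reader's
# sensitivity with radiologists'. Normal-approximation formula for two
# proportions; all parameter values are hypothetical.
from math import sqrt

def sample_size_two_proportions(p1, p2, z_alpha=1.96, z_power=0.84):
    """Cases per group to detect p1 vs. p2 (two-sided alpha=0.05, power=0.80)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p1 - p2) ** 2

# Hypothetical sensitivities: 0.85 for radiologists, 0.95 claimed for the AI.
n = sample_size_two_proportions(0.85, 0.95)
print(round(n))  # on the order of 140 positive cases per arm
```

The point of the exercise is that even a seemingly large claimed improvement demands well over a hundred disease-positive cases per arm, far more than the handful shown in many vendor demonstrations.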

Why should radiology bother?

Evidence is needed when approaching politicians or authorities to argue the case for AI or, conversely, illustrate displeasure with the technology and possibly halt any premature introduction of machine learning into practice, Strickland noted.

Some countries have a tendency to try to get AI-related products into clinical practice as soon as possible, due to workload-related issues, particularly radiologist numbers versus escalating patient exams, combined with multifaceted motivations such as political will, financial gain, and a drive toward personalized medicine.

"We need to look after patient welfare and don't want to be using inaccurate or, in some cases, dangerous software that may bias us in the wrong way. We need safeguards that the software designed to help us is safe in clinical practice," Strickland said.
