U.K./Dutch team finds evidence gap in automated MRI analysis of dementia


There's currently a "significant evidence gap" for the use of many available automated volumetric MRI reporting applications in diagnosis of dementia, according to research led by investigators in the U.K. and the Netherlands.

Researchers systematically reviewed the published evidence for quantitative volumetric reporting tools that had received regulatory clearance for dementia assessment. They found that only four of the 17 companies had published any clinical validation of their software, according to the multinational team of investigators, led by Hugh Pemberton, PhD, of University College London (UCL) in the U.K. The group included Pemberton's colleagues from UCL, Erasmus MC University Medical Center in the Netherlands, and VU University Medical Center in the Netherlands.

What's more, the researchers found no evidence yet of any workflow integration or evaluation of the software's use for dementia diagnosis in clinical practice.

"From this we conclude and recommend that more research can be done to validate these [quantitative volumetric reporting tools] in clinical settings to develop a more robust understanding of how each tool contributes to the diagnostic workflow in memory clinics," wrote Pemberton and colleagues in Neuroradiology. "This will not only support optimal clinical integration of quantitative tools but will also help neuroradiologists to make informed decisions regarding the use of quantitative assessment in their clinics."

With the growing number of commercial quantitative volumetric reporting applications, the researchers sought to assess the available evidence base to help neuroradiologists make informed decisions on adopting these tools in clinical practice.

They found 17 relevant applications that had received U.S. Food and Drug Administration clearance or the CE Mark in Europe:

  • CorInsights MRI (ADM Diagnostics)
  • Diadem (Brainminer)
  • Neuroreader (Brainreader)
  • cNeuro cMRI, cDSI (Combinostics)
  • NeuroQuant (Cortechs.ai)
  • THINQ (CorticoMetrics)
  • Icobrain-dm (Icometrix)
  • JAD-02K + Atroscan (JLK Inspection)
  • Biometrica (Jung Diagnostics)
  • Mdbrain (Mediaire)
  • Pixyl.Neuro.BV (Pixyl)
  • Quantib ND (Quantib)
  • Quibim Precision (Quibim)
  • QyScore (Qynapse)
  • AI-Rad Companion Brain MR (Siemens Healthineers)
  • SyMRI Neuro (SyntheticMR)
  • Vuno Med-DeepBrain (Vuno)

Next, the investigators searched PubMed, Scopus, and Ovid Medline to find published papers on the software. They also checked with the vendors to confirm their findings.

Of these 17 companies, 11 had published some form of technical validation on their segmentation method. However, only four -- Cortechs.ai, Jung Diagnostics, Brainreader, and Combinostics -- had published clinical validation of their software in a dementia or memory clinic population.

In addition, three firms -- Cortechs.ai, SyntheticMR, and Brainreader -- had published clinical validation studies in which the same quantitative diagnostic report was used in other neurodegenerative disorders. Pemberton et al did not find any published studies on the software's integration into workflow or its use in the clinic.

As a result, they concluded that there was a "significant evidence gap" with regard to clinical validation, workflow integration, and in-use evaluation of these tools in the diagnosis of dementia on MRI.

As an added challenge for clinicians interested in incorporating this type of quantitative reporting software into their diagnostic workflow, the researchers noted that there's large variation in the available quantitative reporting features and a lack of comparative validation on standardized imaging cohort data. As a result, there is little scope to assess the tools' utility for diagnosis in the clinic, according to the investigators.

"We hope this review encourages such validation studies from the developers of these quantitative tools and recommend caution from clinicians when examining claims of the tools' clinical performance," they wrote.
