Why all the fuss about artificial intelligence?


There's been a lot of talk recently about artificial intelligence (AI) in radiology, both at RSNA 2016 and ECR 2017. The question on everybody's mind is: "Will artificial intelligence replace radiologists?" I believe AI will make radiologists smarter and better diagnosticians.

The colloquial definition of AI is that "a machine mimics the cognitive functions of the human mind, such as learning and problem solving." AI draws upon subjects such as computer science, mathematics, psychology, artificial psychology, philosophy, linguistics, and neuroscience.

Machine learning and deep learning are terms that often come up when AI is discussed in healthcare. Machine learning is a type of AI that uses a learning algorithm such as an artificial neural network (ANN). The algorithm learns without being explicitly programmed and changes when exposed to new data. Deep learning is a type of machine learning that uses ANNs with many stacked layers.
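
As a rough, hypothetical illustration of what "learning from data rather than rules" means (and nothing more than that), the short Python sketch below trains a tiny two-layer network on a toy problem. The data, layer sizes, and learning rate are all invented for the example.

```python
import numpy as np

# Toy data: a simple XOR-style pattern standing in for "image features in,
# finding present or absent out." Entirely synthetic, for illustration only.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))   # input to hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))   # hidden to output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass through the stacked layers
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: the weights adjust themselves from the data;
    # no hand-written rule for the pattern is ever coded
    grad_p = (p - y) * p * (1 - p)
    grad_W2 = h.T @ grad_p
    grad_b2 = grad_p.sum(axis=0, keepdims=True)
    grad_h = grad_p @ W2.T * h * (1 - h)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0, keepdims=True)

    W2 -= 0.5 * grad_W2
    b2 -= 0.5 * grad_b2
    W1 -= 0.5 * grad_W1
    b1 -= 0.5 * grad_b1

print(np.round(p, 2))  # predictions move toward [0, 1, 1, 0]
```

The point is that no rule for the pattern is ever written down: the weights change as the network is exposed to the data.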

Relevance of AI for radiology

Many of us are already familiar with AI in our daily lives, including digital assistants such as Siri and Alexa that use natural language processing, and computers that compete at high-level games such as chess. There is also a lot of research going into self-driving cars. There are over 100 applications of AI, but for me, the three most relevant to radiology are computer audition, computer vision, and natural language processing.


Computer audition is a type of AI that mimics the function of the human hearing system. It is commonly called speech recognition or voice recognition: an audio clip is passed through an ANN, and plain text is generated. In reality, however, there isn't just one ANN; speech recognition uses deep learning with multiple layers, and the network has a memory of what it has already heard that influences its future predictions. Many radiology departments have successfully implemented speech recognition AI.
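
To make the "memory" idea concrete, here is a minimal, hypothetical sketch of a recurrent step in Python. The feature sizes and random weights are invented purely to show the data flow; a real speech recognition system learns its weights from large amounts of audio.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 13 acoustic features per audio frame, 32 hidden units,
# 40 output symbols (characters or phonemes). Weights here are random, purely
# to illustrate the data flow, not a trained model.
n_feat, n_hidden, n_out = 13, 32, 40
W_in = rng.normal(scale=0.1, size=(n_feat, n_hidden))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_out = rng.normal(scale=0.1, size=(n_hidden, n_out))

frames = rng.normal(size=(100, n_feat))   # stand-in for 100 audio frames
h = np.zeros(n_hidden)                    # the "memory" of the network

for x in frames:
    # The new hidden state depends on the current frame AND the previous
    # hidden state; this is how earlier sounds influence later predictions.
    h = np.tanh(x @ W_in + h @ W_rec)
    scores = h @ W_out                    # per-symbol scores for this frame
```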

At a recent national conference, I asked for a show of hands in the audience to find out which radiology departments had not implemented speech recognition, and not a single hand was raised. The success of voice recognition in radiology is largely down to seamless workflow integration. Every vendor's RIS application has successfully integrated with speech recognition servers. This provides an important lesson to vendors bringing AI into radiology.

Let us now move on to computer vision AI. This is a field of AI that deals with digital images or videos. It automates the tasks of the human visual system in processing, analyzing, and understanding digital images to produce symbolic or numerical data. The types of computer vision AI in radiology include computer-aided detection (CADe), computer-aided diagnosis (CADx), and computer-aided simple triage (CAST).

CAD's clinical progress

We have had CAD for many years now, but the technology has changed: rules-based algorithms have been replaced by machine-learning algorithms. CAD software using deep-learning algorithms has improved specificity and reduced false-positive rates, which is making it more acceptable in clinical practice.

CAD has been used extensively in breast radiology for many years. CADe helps detect subtle masses in mammography thanks to its high sensitivity, whilst CADx supports decision-making (biopsy versus follow-up). CADx uses a combination of computer vision AI and natural language processing AI: the natural language processing component reads the clinical information on the request card and, together with the computer vision component, guides the radiologist toward a judgment on whether to suggest biopsy or follow-up in mammography screening.

Recent research shows that deep learning is reducing false-positive rates in breast CAD, and therefore reducing breast biopsies. Chest x-ray CAD can act as a second reader, and the specificity of deep-learning chest x-ray CAD is also rising, as confirmed by recent research from Valencia, Spain. The Children's Hospital Los Angeles has developed CAD for bone-age assessment from hand x-ray images, which would provide huge efficiency gains for pediatric radiologists.

Another interesting example of CAD, MedyMatch, has a high sensitivity for the detection of intracranial bleeds and is in early clinical trials; IBM has set up a partnership deal with MedyMatch and is distributing the software. CAD software has also been developed for CT lung nodule detection, CT pulmonary embolism detection, polyp detection on CT colonography, lesion detection on breast MRI, and more.

Furthermore, IBM's acquisition of RIS/PACS vendor Merge Healthcare (of which eFilm PACS was a part) has sent a powerful message to the imaging AI market, as access to large amounts of digital image data and the associated radiologist reports will allow deep-learning CAD algorithms to evolve.

Luckily for us as customers, interoperability standards already exist for CAD. They ensure that deep-learning technology and CAD are not locked down to any particular PACS vendor. From a workflow integration perspective, all relevant DICOM images from the modality would go to the CAD server and PACS at the same time.

The CAD report generated on the CAD server is sent to PACS as a standard DICOM structured report (SR). On the PACS display, the CAD markers can be presented as a separate image series or toggled on and off over the original images. Standards ensure that any vendor's CAD can integrate with any vendor's PACS, which allows customers to choose the CAD they think would be useful to their clinical practice. The customer needs to ensure that PACS is capable of displaying DICOM SR objects.
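
As a minimal sketch of what "accepting a DICOM SR" looks like in practice, the Python snippet below uses the open-source pydicom library to read a CAD result and walk its content tree. The file name is hypothetical, and real CAD SRs (for example, Mammography or Chest CAD SR objects) carry richer coded content than shown here.

```python
import pydicom

# Read a CAD result on the PACS side, assuming the CAD server has sent a
# DICOM Structured Report saved here as "cad_report.dcm" (hypothetical name).
ds = pydicom.dcmread("cad_report.dcm")
print(ds.Modality, ds.SOPClassUID)   # SR documents use Modality "SR"

def walk(items, depth=0):
    """Recursively print the SR content tree: value type, concept name, text."""
    for item in items:
        name = ""
        if "ConceptNameCodeSequence" in item:
            name = item.ConceptNameCodeSequence[0].CodeMeaning
        text = getattr(item, "TextValue", "")
        print("  " * depth, item.ValueType, name, text)
        walk(getattr(item, "ContentSequence", []), depth + 1)

walk(ds.ContentSequence)
```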

Another interesting application of computer vision AI is CAST. Here, the computer-aided detection identifies an abnormality and raises the priority of the study on the reporting worklist. This is being considered for the MedyMatch brain bleed product. For this to work, two types of integration are required: the CAD server must have two outputs.

One output is the DICOM SR with CAD markers, as described previously. The other is an HL7 ORM message to raise the reporting priority on the RIS (ORC-7.6 or OBR-27.6). This would certainly improve safety within the U.K.'s National Health Service (NHS), where abnormalities often lie unreported in reporting backlogs.
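
To show what that second output might look like, here is a minimal, hypothetical sketch of an HL7 v2 ORM message built in Python. The sending and receiving application names, patient ID, accession number, and procedure code are all invented; the essential part is the stat priority ("S") carried in component 6 of ORC-7 and OBR-27.

```python
from datetime import datetime

now = datetime.now().strftime("%Y%m%d%H%M%S")
accession = "A123456"      # hypothetical accession number
priority_tq = "^^^^^S"     # quantity/timing field, component 6 = priority (stat)

msh = f"MSH|^~\\&|CAD_SERVER|RADIOLOGY|RIS|HOSPITAL|{now}||ORM^O01|MSG0001|P|2.3"
pid = "PID|1||1234567^^^HOSP^MR||DOE^JANE"

orc = ["ORC"] + [""] * 7
orc[1] = "XO"              # order control code: change to an existing order
orc[2] = accession         # placer order number
orc[7] = priority_tq       # ORC-7.6 = S

obr = ["OBR"] + [""] * 27
obr[1] = "1"
obr[2] = accession
obr[4] = "CTHEAD^CT Head"  # hypothetical universal service identifier
obr[27] = priority_tq      # OBR-27.6 = S

# HL7 v2 segments are pipe-delimited and separated by carriage returns
message = "\r".join([msh, pid, "|".join(orc), "|".join(obr)])
print(message)
```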

Natural language processing AI

Natural language processing (NLP) AI is finding many uses in radiology. This is a type of AI that understands and interprets human language. For instance, IBM Watson's digital assistant uses natural language processing, together with deep-learning technology, to read associated clinical information. Researchers at the University of California, Los Angeles have created a "chatbot" that can answer interventional radiology questions in much the same way as digital assistants like Siri and Alexa. Similarly, natural language processing could be used by CAD servers to learn from the radiologist's final report and improve the deep-learning algorithms for better detection.

The human intelligence required for creating a radiology report includes human vision and problem-solving skills. By definition, a radiology report is a medical opinion: it answers the clinical question, provides a tentative differential diagnosis when an abnormality is seen, and advises on the next step in management. As part of problem-solving, a radiologist reads the clinical information on the request card, reviews the patient's previous imaging history on PACS, and looks at the previous images and reports available.

Upon reviewing the images on PACS, if the radiologist identifies an abnormality, s/he reviews further clinical information such as blood results, histopathology reports, clinic letters, and discharge summaries to come up with a tentative or differential diagnosis. IBM Watson's device attempts to mimic some of these problem-solving functions of the human brain: computer vision AI identifies abnormalities on the images, while NLP AI processes the clinical text on the request card and other data in the patient's medical record to suggest a possible diagnosis. However, this technology is still in the research phase.

Looking to the future

AI shows a lot of promise for value-added radiology. My message to PACS vendors is that customers really want CAD software integrated into the reporting workflow. Chest x-ray CAD, lung nodule CAD, breast CAD, fracture CAD, and hand x-ray bone-age CAD are of real interest to us; they will increase efficiency and accuracy and therefore add value. PACS must be capable of accepting and displaying DICOM SR reports from CAD servers. RIS vendors must be aware that CAST will make radiology departments with reporting backlogs safer, as it will prioritize reporting of images that contain abnormalities.

AI will transform how radiologists work in the future. You must change your mindset from simple detection to "actionable reporting" for value-added radiology. Make yourself integral to local clinical teams and support them with regular dialogue and advice in multidisciplinary team meetings. Understand how your clinical teams manage patients, so that you can provide appropriate advice as part of actionable reporting. Be prepared and redefine yourself.

Dr. Neelam Dugar is a consultant radiologist at the Doncaster & Bassetlaw Hospitals NHS Trust, U.K., and informatics adviser to the Royal College of Radiologists (RCR). This article was written in her personal capacity, and her views and ideas are not necessarily shared by the RCR.

The comments and observations expressed herein do not necessarily reflect the opinions of AuntMinnieEurope.com, nor should they be construed as an endorsement or admonishment of any particular vendor, analyst, industry consultant, or consulting group.
