Success with AI rests on ethics and 'explainability'


Artificial intelligence (AI) has the potential to transform radiology and improve patient outcomes, which is why there has been such intensive research in this field over the past few years.

Ivana Bartoletti.

Earlier this month, researchers from Imperial College London and Google Health published research apparently showing that DeepMind's medical AI system can outperform doctors at identifying breast cancer in x-ray images. The scientists involved claimed that their robust assessment of the AI system paves the way for clinical trials to improve the accuracy and efficiency of breast cancer screening. If they are correct, it will be vital to harness the potential ahead of us through solid investment, research, and international cooperation.

But what can I tell you about AI that you don't already know? Given my background, which includes policy-making, I am often struck by the yawning gap between advances in technology and the social and political preparation needed to make the most of them. If we want innovation to be embraced by both patients and practitioners, it is essential to build a robust architecture of trust, accountability, and security, at a time when public trust in technical innovation is, in general, breaking down. The big tech companies have come under intense scrutiny in recent years, from Cambridge Analytica to Google.

Google, for example, came under fire in November 2019 for its use of patient health data, notably around its secretive "Project Nightingale" tie-up with Ascension, the second-largest private healthcare company in the U.S. A whistleblower reported that Google staff had access to the medical histories of millions of patients, which rightly alarmed the public. In the U.K., the National Health Service was found to have given DeepMind access to a trove of sensitive information on 1.6 million patients, including HIV status, mental health history, and abortions, without the patients being properly informed.

Main challenge for radiology

The key question for radiology, alongside the rest of medicine, is therefore how to wrap accountability and trust around AI innovation. How do we address the legitimate concerns that digitization has not only brought exponential growth in data generation but also concentrated the power and means to turn that data into precious knowledge in the hands of a few powerful private companies?

This is, in my view, problematic because distrust in research and innovation risks undermining the potential of AI in health. This is what I said in November 2019 at the British Institute of Radiology Big Data Conference in London, where I invited radiology practitioners and other experts to join the discussion around the future of the profession and technology in radiology.

It is important to understand what is changing and what the risks are. As algorithm-driven diagnosis improves, some argue that AI will replace the radiology and radiography professions -- a very threatening idea that is not going to help take-up and recruitment. But the reality is that success happens when machines and humans augment each other: For example, "human-in-the-loop" workflows that utilize AI as a time-saving triage tool perform better than either AI or human doctors working on their own. This means that an increasing part of the training to become a radiologist or radiographer needs to explore this cohabitation with diagnostic machines and the new situations that may arise.
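
To make this concrete, here is a minimal sketch of what a human-in-the-loop triage step might look like in code. Everything in it (the function names, the labels, the 0.95 threshold) is a hypothetical illustration of the pattern, not any particular vendor's system: the model resolves only the cases it is confident about and defers the rest to a radiologist.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class TriageDecision:
    label: str          # e.g., "normal" or "suspicious" (illustrative labels)
    confidence: float   # model probability for the predicted label
    needs_human: bool   # True if the case is deferred to a radiologist

def triage(predict: Callable[[bytes], Tuple[str, float]],
           image: bytes,
           confidence_threshold: float = 0.95) -> TriageDecision:
    """Defer to a human reader whenever the model is not confident enough."""
    label, confidence = predict(image)
    return TriageDecision(
        label=label,
        confidence=confidence,
        needs_human=confidence < confidence_threshold,
    )

# Hypothetical usage: low-confidence cases stay on the radiologist's worklist.
# decision = triage(model.predict, scan_bytes)
# if decision.needs_human:
#     send_to_worklist(decision)  # send_to_worklist is a made-up hook
```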

Let's look at an example: If an algorithm makes a decision about a medical issue but the practitioner disagrees with it because his or her experience indicates otherwise, what happens at that point? From a patient's perspective, this is complex, too. Patients tend to relate to doctors as human beings and recognize them as capable of errors and sometimes failure. But would patients expect, or ever accept, machines to fail in a diagnosis or a treatment? These are all important issues affecting practitioners' daily lives as well as the relationship between medics and patients. If we want to roll out AI systems across hospitals, we need to be able to address them.

Another useful example was the study from Stanford University that demonstrated AI can improve the accuracy of a diagnosis. Researchers trained a radiology algorithm to detect 14 different pathologies and found that for 10 diseases, the algorithm performed just as well as human radiologists; for three, it underperformed compared with humans; and for one pathology, the algorithm outperformed the experts.

What is 'explainability'?

"Explainability" is an issue that needs work. When a machine reaches a particular conclusion or assessment, it first needs to be an intelligible outcome for the practitioner, who in turn needs to explain it to the patient, who may be receiving life-shattering news. This is not an easy topic as AI means that often these machines learn by themselves -- hence the term machine learning -- and it could easily happen that the solution they reach is correct but unexplainable to a human. In such cases, we talk about black boxes. This does not mean that the decision is not valid -- it is just incomprehensible.

Explainable AI is a growing area of research, and I do believe that research will ultimately allow us to have traceable decisions. But it is worth remembering that explainability is always contextual. Most patients are baffled by their own bodies (despite the worthy movement toward patient empowerment) and will primarily want to know they have been accurately diagnosed and are getting the best treatment, whether or not it is fully explainable to them. It is the radiologist who is more likely to want to know exactly how a decision has been reached, especially by an algorithm that disagrees with them and might even "show them up" professionally.

Because algorithms need to be fed patients' data to be trained, privacy and ethics in the design of these systems of decision-making are paramount, and close attention to the sourcing of the data will help avoid harms to individuals.

One risk for algorithms is that they can latch on to spurious, site-specific patterns in the data, which skews their learning and behavior. An example of this happening in radiology could be where the algorithm is trained using data from patients at a particular hospital. That data may have characteristics and drivers specific to the particular area and circumstances, which means the same algorithm will not perform accurately in another place serving other patients.
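
A standard safeguard against this is external validation: check the model on data from a site it never saw during training and compare against its home-site performance. The toy sketch below uses synthetic data to illustrate the failure mode; feature 1 plays the role of a hypothetical site-specific shortcut (think of a scanner artifact that happens to track the label at one hospital), and the numbers are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_site(n: int, spurious: bool):
    """Synthetic stand-in for one hospital's data. Feature 0 carries a weak
    real disease signal; feature 1 is a site-specific shortcut that tracks
    the label only where `spurious` is True."""
    y = rng.integers(0, 2, size=n)
    signal = y + rng.normal(0, 1.5, size=n)               # weak true signal
    base = y if spurious else rng.integers(0, 2, size=n)  # shortcut source
    shortcut = base + rng.normal(0, 0.3, size=n)          # strong at site A only
    return np.column_stack([signal, shortcut]), y

X_a, y_a = make_site(4000, spurious=True)    # "hospital A": training site
X_b, y_b = make_site(4000, spurious=False)   # "hospital B": external site

model = LogisticRegression(max_iter=1000).fit(X_a, y_a)
auc_a = roc_auc_score(y_a, model.predict_proba(X_a)[:, 1])
auc_b = roc_auc_score(y_b, model.predict_proba(X_b)[:, 1])
print(f"internal AUC (hospital A): {auc_a:.2f}")  # looks excellent
print(f"external AUC (hospital B): {auc_b:.2f}")  # drops sharply

# A large internal/external gap is the tell-tale sign that the model has
# learned something site-specific rather than the disease itself.
```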

All the issues above are just some of the topics we urgently need to discuss to ensure innovation, ethics, and privacy go hand in hand. That triple goal is not impossible -- but one thing we have learned over the past decade is that human-centered and responsible innovation requires great effort. If we care about health and well-being for all, this is the time for more radiologists to join the conversation. I am really interested to see what you bring to it.

Ivana Bartoletti is a specialist in privacy and digital ethics and co-founder of the Women Leading in AI network. Her new book, An Artificial Revolution: On Power, Politics and AI (Mood Indigo 3), is due to be published in May 2020.

The comments and observations expressed herein do not necessarily reflect the opinions of AuntMinnieEurope.com, nor should they be construed as an endorsement or admonishment of any particular vendor, analyst, industry consultant, or consulting group.
