AI: Is there a risk of looking into the future with blinkers on?


At a recent Round-Table Colloquium organized by the Council of the Round Table Foundation (TRTF), intrinsic and essential aspects of artificial intelligence (AI) were discussed by a number of invited professionals in the field.1 This column presents some of the key points touched upon during that event.

Dr. Peter Rinck, PhD.

First and foremost, there was concern among the participants that there is no clear, generally accepted definition of what counts as AI.

There are "limited-intelligence" expert systems for dedicated applications, for instance, aimed at computer-assisted diagnosis (CAD), chess-playing software or self-driving cars -- better described as brainpower and knowledge replacement software. But the chess software cannot knit and a self-driving car cannot write novels. They cannot learn to knit and they cannot learn to write. Artificial diagnostic systems can diagnose a fracture, but they cannot put an arm in a cast.

This software will be a permanent apprentice, biased and subjective, never neutral and objective, reflecting the ideas, ways of thinking, and input of its creators. It is not transparent but, in most cases, completely opaque.

If you are a referring physician and want to know how the radiologist arrived at a diagnosis, the human image reader can explain it to you. Getting such an explanation from a machine will be difficult; the machine is unable to scrutinize and challenge the veracity of the data it digests.

Deep-learning AI cannot explain how it draws a conclusion -- particularly if its "learning" is augmented with surrogate data collected from the internet. Meanwhile, the number of trained radiologists is shrinking. If there is no trained radiologist around, you have to live with the machine's outcome, and you have to believe in its validity.

Algorithms can also be written so that the outcome is determined in advance by built-in bias, and certain procedures are recommended or even performed without further human deliberation and approval. Considering the state of the world, one cannot trust a machine-intelligent system that is a black box. What is more, doltish and blundering dilettantes increasingly have access to research facilities -- single-minded nerds, data autists, and unqualified "soft scientists."

Geeks as the new technicians

Radiologists taking care of patients every day tend to have a rather negative view of these nerds. Some years ago, they would still have been considered computer geeks within academia, but now they are pigeonholed as "technicians."

What used to be computer or information science has lost its scientific standing and is now simply informatics, or IT -- the nerds are computer or network technicians. They meddle in medical or scientific questions without any knowledge or comprehension of practical medicine.

The technocratic drive to develop novel data collection strategies and image reconstruction techniques has little to do with dealing with sick people. It is part of a wild-goose chase -- a foolish and hopeless search -- like many quantitative applications in medical imaging.

Medicine is about human beings. The advocates of AI in medicine, and particularly in diagnostic imaging, no longer consider people. They are under the misconception that one can reconstruct a living person from data: Humans are reduced to data-delivering objects to be administered and processed by "healthcare desk jockeys."

The emphasis of AI is on collective rather than individual description. It works with statistics, with averages. It's assembly-line health care, not the medicine that, until recently, was the ethical foundation of being a medical doctor. The idealistic goal of personalized medicine is being trampled on by the same people who propagated it as our goal some years ago.

AI will take the position of a "middleman" between medical doctor and patient, giving little but making a profit for the manufacturer. It will definitely be a major new cost factor in medicine -- not only in development but also in maintenance. And there is no proof whatsoever that the value of AI outweighs the value of a trained medical doctor -- except if medical training in developed countries gets even worse than it is now. Is it a work in progress or a work in regress?

Dr. Peter Rinck, PhD, is a professor of radiology and magnetic resonance and has a doctorate in medical history. He is the president of the Council of the Round Table Foundation (TRTF) and the chairman of the board of the Pro Academia Prize.

The comments and observations expressed herein do not necessarily reflect the opinions of AuntMinnieEurope.com, nor should they be construed as an endorsement or admonishment of any particular vendor, analyst, industry consultant, or consulting group.

Reference:

  1. Real and Artificial Intelligence in Bio-Medicine. The 10th Meeting in the Series -- A Virtual Round Table. Sophia Antipolis, France; 11-12 March 2022. http://www.trtf.eu/events.html.