Building ethical AI: Why we must demand it

There is a school of thought that ponders a dark, dystopian future in which artificially intelligent machines brutally and coldly run the world, with humans reduced to mere biological tools. From Hollywood blockbusters to evangelical tech entrepreneurs, we've all been exposed to the possibility of this type of future, but have we all stopped to ponder how we should avoid it? Such a dystopia is many, many decades away, and only one of several gazillion possible future outcomes. But that doesn't preclude getting the conversation started today.

For me, and many others, it boils down to one simple thing: ethics. Get your ethics right and machines should, in theory, never be able to take over and dictate a machine-brain version of the universe. At a simpler level, we need to start considering how we can avoid inhuman decisions, especially where they have the potential to hurt us the most: in the life-and-death environment of healthcare. We are eons away from a fully automated healthcare system; however, right now, artificial intelligence (AI) is being developed to help augment doctors' decision-making. But what if some of those decisions are wrong?

Dr. Hugh Harvey.

Before explaining why ethical and responsible AI should be demanded by everyone, I need to highlight some recent news about a script programming error, which is affecting junior doctors in the U.K. Apparently, some spreadsheets were formatted differently from one another, and an automated script written to compile the results didn't account for this. It may be a mundane mistake, but it starkly highlights the ethical need for accountability, robustness, and accuracy in administrative systems, especially because AI is touted as a replacement tool for very similar tasks.
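
To make the failure mode concrete, here is a minimal, entirely hypothetical Python sketch; the actual recruitment code has not been published, so the sheet contents, column names, and scores below are invented for illustration. It shows how a compilation script that assumes one fixed spreadsheet layout can silently produce nonsense when a second file arrives with its columns in a different order, whereas reading columns by header name, and failing loudly on anything unexpected, avoids the trap.

```python
# Hypothetical illustration only (not the actual NHS recruitment code).
import csv
import io

# Two "spreadsheets" exported by different assessors: same data, different column order.
sheet_a = "candidate,interview_score\nDr_A,78\nDr_B,65\n"
sheet_b = "interview_score,candidate\n81,Dr_C\n59,Dr_D\n"

def compile_naive(sheets):
    """Fragile version: assumes the score is always in the second column."""
    results = {}
    for sheet in sheets:
        rows = list(csv.reader(io.StringIO(sheet)))
        for name, score in rows[1:]:  # silently misreads sheets with swapped columns
            results[name] = score
    return results

def compile_defensive(sheets):
    """Safer version: reads columns by header name and fails loudly otherwise."""
    results = {}
    for sheet in sheets:
        for row in csv.DictReader(io.StringIO(sheet)):
            if "candidate" not in row or "interview_score" not in row:
                raise ValueError("Unexpected spreadsheet layout; refusing to guess")
            results[row["candidate"]] = float(row["interview_score"])
    return results

print(compile_naive([sheet_a, sheet_b]))      # Dr_C and Dr_D end up as garbage entries
print(compile_defensive([sheet_a, sheet_b]))  # correct scores, or an explicit error
```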

The U.K. Royal College of Physicians is currently working on this problem (which has spawned the typically British hashtag #ST3cockup on Twitter), and at the time of writing there are still many junior doctors waiting to find out what this error means for them and their families.

Junior doctors are increasingly feeling like an expendable resource rather than humans in a caring system. This error came at a time when the dust was only just settling after the recent Dr. Bawa Garba case (in which a junior doctor was struck off the medical register under extraordinarily unfair circumstances), and only a couple of years after the hugely damaging junior doctors' strikes over their imposed new working contract. When an administrative system produces such a shocking new error, the effects of which are still unfolding, at a fragile time like this, it is not surprising to hear the uproar and disappointment from the medical ranks of the National Health Service (NHS).

Building ethics into everything

There is a wider picture here: the need for deep consideration of the potential impacts of automation.

It is heart-wrenching to think that without building ethical AI, we, as a society, run the risk of introducing blindly autonomous systems into the mix, capable of brutal and opaque decision-making. AI does not have the capacity to understand the larger social context in which to assess its errors, and if such an administrative error were ever made by a machine, I'm sure the uproar would be even louder. In the current situation, heads are likely to roll. But will machine-heads roll if and when they make similar errors?

Recently, Future Advocacy published an exquisite report, funded by the Wellcome Trust, on the ethical, social, and political challenges of AI in health. Two quotes stand out to me from this report:

  • "[S]ome algorithmic errors can systematically burden certain groups over and above others, without this problem necessarily being visible if we look only at the overall performance."

I couldn't agree more with this statement. To put it in the context of the training application error, it is clear that certain groups of junior doctors have been burdened more than others, and the problem only became visible once those affected started reporting it. I am in no doubt that whoever wrote the offending spreadsheet script had checked that it worked (i.e., overall performance was good), but was totally unaware of what their code was capable of doing, and was likely only notified once the proverbial hit the fan. The toy sketch after these two quotes shows how a reassuring headline figure can mask exactly this kind of group-level failure.

  • "[T]he more open a system is, including in terms of the development, commissioning, and procurement of algorithmic tools, then the more protected users will feel from the risks."

This second statement is also interesting in this context. Had the recruitment and selection system been more open, transparent, and even co-designed with those affected by it, would this error have occurred at all? For instance, if junior doctors had known the ranking system, the format of the spreadsheets, and the code used to compile the results, would someone have noticed the problem beforehand? Now, I'm not suggesting that in this context we should aim for that level of ideal transparency; I'm simply posing the question, "Where do we draw the line between a closed system and a transparent one?"
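
To illustrate the first quote in code, here is a toy Python sketch with entirely invented numbers and group labels (it is not based on any real system or dataset). The overall accuracy looks reassuring, yet almost all of the errors fall on one group, and that burden is invisible until the results are broken down.

```python
# Toy example with invented figures: good overall performance can hide
# systematic errors that fall almost entirely on one subgroup.
from collections import defaultdict

# (group, prediction_correct) for 100 hypothetical test cases
cases = (
    [("group_A", True)] * 79 + [("group_A", False)] * 1
    + [("group_B", True)] * 11 + [("group_B", False)] * 9
)

overall = sum(ok for _, ok in cases) / len(cases)
print(f"overall accuracy: {overall:.0%}")  # 90%, which looks fine at a glance

by_group = defaultdict(list)
for group, ok in cases:
    by_group[group].append(ok)

for group, outcomes in sorted(by_group.items()):
    print(f"{group} accuracy: {sum(outcomes) / len(outcomes):.0%}")
# group_A accuracy: 99%
# group_B accuracy: 55%  <- invisible in the headline number
```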

When it comes to translating AI and autonomous systems into even more potentially dangerous situations, such as life-and-death decision-making (e.g., autonomous cars, cancer diagnoses), the ethical problems loom even larger.

I was fortunate enough to have spent some time with Prof. Alan Winfield of the Bristol Robotics Laboratory. He is an eminent thinker on ethical AI and robotics, and widely regarded as an authority in this field. He is working with IEEE to create new standards for the introduction of ethics to future technologies. I learned from him that a system is not ethical until it has fully considered and included everyone it could possibly impact. This is a profoundly interesting thought, and one that I think should be mandated across the board.

Work on ethics has begun

Thankfully, there is a growing consensus that ethics for AI is absolutely essential. The recent House of Lords Select Committee report that I consulted on made a strong recommendation that the U.K. should forge a distinctive role for itself as a pioneer in ethical AI.

The impact of ethics in AI is only now hitting mainstream consciousness. Indeed, following the recent Facebook data-sharing scandal, the company has set up an internal ethics unit to look at these kinds of problems, and Microsoft has even pulled sales over ethical concerns.

I would urge anyone developing AI tools for healthcare to take a long, hard look at the potential risks of their systems and to think outside the box about how to ensure an ethical approach.

Too often we have seen developers rush to be "first to market" and make headline claims of performance without care or mention of ethics, and it is only a matter of time before we begin to see the brutal dystopian impact created when these systems ultimately fail, which they are guaranteed to do at some point.

Take the recent NHS breast cancer screening error. An automated system didn't invite women for screening at the right time, leading some observers to claim that up to 270 women may have died as a result. The ethical ramifications are staggering. No one yet knows who is responsible; even the contractor who runs the system is deflecting blame elsewhere.

If we are to avoid the vision of a cold, inhumane future where real lives are cast aside by automated decisions and their inevitable errors, we must start talking about, and building in, ethics right now. Otherwise, it will be too late to save ourselves from the machines.

Editor's note: For a longer version of this article, click here.

Dr. Hugh Harvey is a Royal College of Radiologists (RCR) informatics committee member, clinical director at Kheiron Medical, and advisor to AI start-up companies, including Algomedica and Smart Reporting. He trained at the Institute of Cancer Research in London, and was head of regulatory affairs at Babylon Health.

The comments and observations expressed herein do not necessarily reflect the opinions of AuntMinnieEurope.com, nor should they be construed as an endorsement or admonishment of any particular vendor, analyst, industry consultant, or consulting group.
