Do we have moral obligations to a machine that achieves consciousness?

Aziz Huq

THE WASHINGTON POST – Machine learning is a kind of artificial intelligence that crunches gigabytes or petabytes of data to isolate relationships no human could discern. It helps scientists working in high-energy physics and population-level genetics pick apart huge data sets.

Closer to home, you benefit from machine learning when Amazon recommends a new book, a bank flags a suspicious transaction on your account, or your phone translates from another language. Machine learning tools now beat humans at chess, Go, and even cooperative games like Quake III and Overwatch. What could possibly go wrong?

Plenty, worries Susan Schneider, a philosopher of science and consciousness who has held positions at the Library of Congress, NASA and the University of Connecticut.

Schneider wants us to grapple now with artificial intelligence’s evolution beyond current uses of machine learning.

Her new book, Artificial You: AI and the Future of Your Mind, catalogues a bushel of tough questions that arise when machines become not only smarter than us – this has already happened – but also conscious.

Schneider envisages three main pathways to machine consciousness. It might be engineered outright – say, by mapping neural activity and replicating it in silicon. Or the boundary between mind and machine might grow increasingly porous as machine components link with, and even replace, pieces of the brain. Finally, she posits that we might encounter extra-terrestrial machine intelligence.

Since we’ve managed to evade contact so far, and it’s hard to see why we should expect visitors right now, this last possibility is hard to find interesting.

Both of the other pathways, however, are already being explored. Microsoft just made a billion-dollar investment in OpenAI’s effort to construct a general artificial intelligence. Elon Musk’s Neuralink company, meanwhile, announced the invention of micron-wide electrode “threads” to enable high-volume information transfers between machines and minds.

Even if you think machine brains a remote prospect, Schneider contends that it is still worth figuring out now whether machines can become self-aware and, if so, how to test for consciousness. This is because consciousness in her view is a watershed that demarcates “special legal and ethical obligations” for its makers and users. Ignoring those obligations, Schneider warns, may have catastrophic effects later.

Her idea of catastrophe, though, isn’t a lurid and bloody fantasy of “Westworld” or Skynet. Rather, it is that we will fail to recognise consciousness in a novel and alien machine form, and hence fail to give it due moral regard. Or, she worries, we will inadvertently lose our own distinctive consciousness through piecemeal chipping away at the brain-machine barrier. She urges a precautionary approach that avoids technologies that flirt with these risks.

Schneider, though, helps herself to a critical assumption – that consciousness is pivotal to our moral lives. In ordinary practice, consciousness seems neither necessary nor sufficient for moral concern. Most obviously, livestock are conscious. Yet they are raised and slaughtered without (much) compunction. For many, qualities of the natural world threatened by pollution, exploitation or climate change are valued objects of moral concern. Yet they plainly have no self-awareness. Our moral system thus treats consciousness as relevant to – but not a defining characteristic of – ethical concern.

Even if you think consciousness is self-evidently important, it remains too mysterious to be easily used as a marker of moral significance. To be sure, we have ruled out René Descartes’ notion that the pineal gland linked body and mind.