How do machines handle life-or-death decisions?

Published Feb 4, 2019


The world is increasingly characterised by a fusion of technologies that blurs the lines between the physical, digital and biological spheres, so much so that we now refer to these as cyber-physical systems. We even have a term for this era: the Fourth Industrial Revolution (4IR). We all desire a data-driven, artificial-intelligence world that will serve us unconditionally. We want machines to act on, and even anticipate, our every whim and fancy: to clean our homes (robots), to monitor us (lifelogging), to transport us (autonomous vehicles), and to make stock market decisions on our behalf (automated trading). Machines, as we know, have no empathy and are amoral. The question I am intrigued by is: can machines be designed to make moral decisions?

Let’s try a poser to see if we truly appreciate the context. Your car is careering out of control, yet you can still steer in one of two directions. On the left is a group of six children; ploughing into them will almost certainly save your life, at the ultimate cost of theirs. On the right stands a 100-year-old oak tree, and death for you is certain, for the oak will stubbornly make no effort to absorb your momentum. What choice will you make: heroic death, or survival with eternal self-blame?

The former is the choice the heroic helicopter pilot Eric Swaffer faced in 2018, when he selflessly chose a fiery death for himself and his famed passenger, Leicester City chairman Vichai Srivaddhanaprabha, over the deaths of a few supporters. Swaffer did this by crash-landing in open space, away from the supporters.

The previous poser, by the way, was a variant of “The Trolley or Tunnel Problem”, conceived in 1967 by Philippa Foot. Watch it here. A runaway trolley is heading down a track, presenting the driver with a moral dilemma. Five men are working on this track and are all certain to die when the trolley reaches them. However, if the driver switches the trolley onto an alternative spur of track, he will save all five. Unfortunately, one man is working on that spur and will be killed if the switch is made. Talk about the devil and the deep blue sea. More recent literature replaces this moral dilemma with a grandmother and a baby; I was unaware of this until a Google search deflated what I thought was an original column!

Now imagine that you are a programmer. How would you programme an autonomous, self-driving vehicle to react to this moral dilemma? If we used an artificial intelligence or an observational, data-driven deep-learning system, the vehicle may well learn to “see through our hypocrisy”, understand our subtle, me-first survival instincts and career into the kids.
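To make the contrast concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the option fields, the weighting and the imagined learned model are illustrations of the two programming approaches, not anyone’s actual vehicle software.

```python
from dataclasses import dataclass

@dataclass
class Option:
    label: str            # e.g. "swerve_into_children", "hit_oak_tree"
    occupant_deaths: int  # expected deaths inside the vehicle
    bystander_deaths: int # expected deaths outside the vehicle

def rule_based_choice(options):
    """Hand-written ethics: minimise total deaths, then bystander deaths."""
    return min(options, key=lambda o: (o.occupant_deaths + o.bystander_deaths,
                                       o.bystander_deaths))

def learned_choice(options, model):
    """Data-driven ethics: pick whatever a model trained on observed human
    behaviour scores highest -- hypocrisy, me-first instincts and all."""
    return max(options, key=model.score)

options = [
    Option("swerve_into_children", occupant_deaths=0, bystander_deaths=6),
    Option("hit_oak_tree",         occupant_deaths=1, bystander_deaths=0),
]
print(rule_based_choice(options).label)  # -> hit_oak_tree
```

The explicit rule sacrifices the occupant; a policy distilled from how we actually drive might not.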

Imagine if machines had to decide algorithmically on hospital care by unemotionally looking at the big picture of resource availability (finance, beds, surgeons and medical equipment) and the potential return on investment after treating you. Can you return to work when you get better, or is treatment a futile exercise? This form of ethical reasoning is called consequentialism: a decision should be judged purely in terms of its consequences. I somehow think that you would prefer even a supposedly indifferent human nurse at reception over an indifferent machine, if the decision concerned your loved one!
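To see how cold that arithmetic looks when written down, here is a minimal consequentialist triage sketch. The fields and weights are entirely hypothetical; the point is only that every patient collapses into a single consequences-per-resources score.

```python
from dataclasses import dataclass

@dataclass
class Admission:
    name: str
    survival_probability: float    # chance the treatment succeeds
    expected_working_years: float  # the "return on investment" after recovery
    bed_days: int                  # scarce hospital resource consumed
    cost: float                    # treatment cost

def consequentialist_score(a: Admission) -> float:
    # Expected benefit per unit of resources consumed: consequences, nothing else.
    expected_benefit = a.survival_probability * a.expected_working_years
    resources = a.cost + 1_000 * a.bed_days  # hypothetical weighting
    return expected_benefit / resources

patients = [
    Admission("young professional", 0.9, 30.0, bed_days=5,  cost=20_000),
    Admission("frail pensioner",    0.4, 0.0,  bed_days=40, cost=90_000),
]
# The machine admits whoever scores highest; the pensioner scores zero.
for p in sorted(patients, key=consequentialist_score, reverse=True):
    print(f"{p.name}: {consequentialist_score(p):.6f}")
```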

Sometimes we deploy machines to provide additional support, much like parents constantly watching your back. Here one recognises the fallibility of man, which may arise through information overload, distraction or fatigue. The Traffic Alert and Collision Avoidance System (TCAS) is designed to reduce mid-air collisions, particularly around airports. It monitors the airspace around an aircraft through transponder interrogations of nearby aircraft. Because TCAS is independent of air traffic control, it provides pilots with an additional warning of imminent collisions.
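The heart of the idea can be sketched in a few lines. The thresholds below are simplified illustrations, not real TCAS parameters, but the core quantity, “tau”, the estimated seconds to closest approach, is the genuine basis of TCAS alerting.

```python
def tau_seconds(range_nm: float, closure_rate_kt: float) -> float:
    """Estimated time to closest approach: range divided by closing speed."""
    if closure_rate_kt <= 0:
        return float("inf")  # aircraft are diverging; no threat
    return range_nm / closure_rate_kt * 3600  # nm / (nm per hour) -> seconds

def advisory(range_nm: float, closure_rate_kt: float) -> str:
    tau = tau_seconds(range_nm, closure_rate_kt)
    if tau < 25:   # illustrative threshold: demand evasive action
        return "RESOLUTION ADVISORY: climb or descend now"
    if tau < 40:   # illustrative threshold: alert the pilot
        return "TRAFFIC ADVISORY: traffic, traffic"
    return "clear of conflict"

# Two aircraft 2 nautical miles apart, closing at 400 knots: tau = 18 seconds.
print(advisory(range_nm=2.0, closure_rate_kt=400))
```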

The Trolley/Tunnel Problem presents another damned-if-you-do, damned-if-you-don’t moral dilemma. A driver travels towards a narrow, single-lane tunnel. Just before the driver enters the tunnel, a child attempts to run across the road and trips, effectively blocking the entrance. The driver has two options: hit and kill the child, or swerve into the wall on either side of the tunnel, thus killing himself. How should the driver react?

What if we replaced you in the narrative and repositioned the dilemma as one between an elderly person and a child? How about between a human and a pet? Many victims or few? Should a vehicle facing a moral dilemma stay in its legal lane regardless, or swerve? There is a brilliant paper on exactly these questions called “The Moral Machine experiment”, in the journal Nature. Read it here.

Here is what intrigued me about this study. The choice of victim, the elderly or the young, provoked different responses in different cultural settings! Given the choice between a child and an old person, folk in the East were more likely to spare the old person, while the response in the West was the opposite!

It is probable that AI-driven machines such as robots are more likely to harm humans while carrying out routine operations than to collude and rise against us. Tom Chatfield has been very helpful with this challenge: if “my self-driving car is prepared to sacrifice my life in order to save multiple others, this principle should be made clear (to me) in advance together with its exact parameters. I might or might not agree, but I can’t say I wasn’t warned.”
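Chatfield’s suggestion amounts to declaring the ethics as inspectable configuration rather than burying it inside a model. Here is a minimal sketch of what such a declaration might look like; every field name and value is hypothetical.

```python
import json

# A vehicle's sacrifice policy, declared to the owner before first use.
declared_policy = {
    "vehicle": "example-av",
    "policy_version": "2019-02",
    "will_sacrifice_occupants": True,
    "minimum_other_lives_saved": 2,   # sacrifice only if it saves at least two
    "stays_in_legal_lane_when_uncertain": True,
}

# "I might or might not agree, but I can't say I wasn't warned."
print(json.dumps(declared_policy, indent=2))
```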

It seems data scientists have much more to consider and learn than the already exciting combo of Mathematics, Statistics and Computer Science.

Dr Colin Thakur is the KZN NEMISA CoLab Director. This effort is part of the “Knowledge for Innovation project” and is our contribution towards #BuildingACapable4IRArmy.
