This article from the New Yorker takes a look at a modern twist on that story. I think I’ve linked to the Google driverless car before. It’s a neat project, and further along than we might think. Driverless cars will most likely become standard within my lifetime. What ethical subroutine do you program in? A cat jumps out in front of the car. Does the robot driver swerve, at risk to the passengers, or just go bump? What do you do? Make it harder. A child runs out into the street. There is no way to stop the car. Does the robot driver crash the car, injuring the passenger? What if there are three passengers? Remember that the robot driver’s decision-making is much faster than a human’s. The human might not have time to react. The robot does. What kind of ethical subroutine do you program? Who gets to choose? GM? Toyota? IPAB?
And those might be easy compared to the real end game. All those drones the US is using to kill foreigners and create all kinds of collateral damage… just the first wave. Those drones have humans controlling them at all times. We can argue about the ethics of drones, but if we don’t like how they are used, we can vote in new administrations. What happens when the US replaces the Big Red One with a droid army? A robot soldier in an insurgent environment like, say, Afghanistan. What ethics do you program for a situation where either a jihadist or a child could be coming around the corner?
How do you teach ethics to a machine? How do we teach them to our kids? How were they taught in the past? Right now the default switch on all of that stuff is utilitarianism. And machines can be strict utilitarians, unlike most humans, who are only so in the abstract. Religion doesn’t boil down to ethics, but until the Enlightenment project, ethics was seen as a subset of religion. We are entering a world where religion-less ethics are being encoded. Are you hopeful about that?