Over the past seven minutes I’ve been asked to choose whether I would rather kill five young passengers by crashing their self-driving car into a concrete barrier, or have the car swerve slightly right, save their little souls and kill five elderly people crossing the road instead. This was during an illustrated MIT ‘Moral Survey’ slideshow that presented me, and others like me from 223 other countries, with thirteen different scenarios in which a self-driving car would have to be specifically coded to make a moral decision in the event of an imminent and unavoidable fatal accident.
In case you were wondering, I rather reluctantly chose to save the five young people, and before I knew it I was saving an adult, a toddler and a cat over three adults and a toddler; a doctor, a dog and a toddler over three slightly overweight illustrated figures; two joggers, a doctor, a cat and a woman over five adult passengers with a thief amongst them. It’s fascinating how, once the initial hesitation to make such painfully moral choices wore off, my subconscious began to dictate choices I did not even believe would enter my ethics-driven decision-making process.
See, this is the thing: as Uber, Tesla and Google’s Waymo driverless cars begin to take root in our world (which they will, and already are), there are some pretty serious decisions that need to be coded into their AI. Who to prioritise in the case of an approaching accident is high on that agenda. Sure, the whole point of having AI systems log millions of hours on simulated city roads is to almost entirely eliminate the possibility of a crash by teaching the AI to drive thousands if not millions of times better than a human. But we all know that mistakes happen. In fact, just last year two lives were lost in the U.S. as a result of machine failure: one involving a self-driving Uber that did have a safety driver behind the wheel who was not paying attention, and the other a Tesla driving on Autopilot. In Uber’s case it was a woman pedestrian whom the car simply did not register, even though footage from the car’s camera clearly shows her emerging out of the dark as she crossed the road; in Tesla’s case, it was the man behind the wheel.
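To make that concrete, here is what “coding the decision” could look like in its crudest possible form. This is a purely hypothetical sketch (the Outcome class, the choose_manoeuvre function and the weights are my own invention, not anything Uber, Tesla or Waymo actually ship); it only shows that any such rule ultimately reduces to numbers a human has to pick.

```python
# Purely hypothetical sketch: not based on any manufacturer's real code.
# It only illustrates that an "unavoidable crash" policy comes down to
# explicit, human-chosen numbers.
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible manoeuvre and who it would harm."""
    description: str
    passengers_harmed: int
    pedestrians_harmed: int
    animals_harmed: int

# Someone has to pick these weights. That choice *is* the moral decision.
WEIGHTS = {"passenger": 1.0, "pedestrian": 1.0, "animal": 0.1}

def harm_score(outcome: Outcome) -> float:
    """Lower is 'better' under this (entirely debatable) weighting."""
    return (WEIGHTS["passenger"] * outcome.passengers_harmed
            + WEIGHTS["pedestrian"] * outcome.pedestrians_harmed
            + WEIGHTS["animal"] * outcome.animals_harmed)

def choose_manoeuvre(options: list[Outcome]) -> Outcome:
    """Pick the manoeuvre with the lowest weighted harm."""
    return min(options, key=harm_score)

if __name__ == "__main__":
    options = [
        Outcome("brake into the barrier", passengers_harmed=2,
                pedestrians_harmed=0, animals_harmed=0),
        Outcome("swerve right", passengers_harmed=0,
                pedestrians_harmed=1, animals_harmed=1),
    ]
    # With these weights the car swerves; nudge WEIGHTS and it won't.
    print(choose_manoeuvre(options).description)
```

Change the weights and the car changes its mind, which is precisely why who gets to write those numbers matters.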
Turns out that on the scale of ‘do I prefer to kill fewer or more people’ I landed quite heavily at the “does not matter” end of the spectrum (in contrast to the average respondent). But hey, at least I favoured humans over pets and women over men (sorry, I’m in a female-comradeship kind of mood). On a more serious note, beyond the fascinating data this MIT survey has collected over the past four years on how different cultures value life, and their preferences around fitness, age, animal or human, the makers of these cars of the future now face making real decisions that will guide AI to prioritise which kind of life is more valuable; a decision that I believe no one has the right to make.
I do admit, however, that projecting my moral judgments from behind a screen, far away from the developers and coders of such systems, is easy. And whether we agree or disagree, these decisions will need to be made. The real question is how such decisions will be made, who makes them and in what capacity. Within just thirteen slides, the survey was able to reflect back to me my own bias, as a twenty-something woman, towards young people and women. As the technology around us becomes more automated, more powerful and with a stronger grip on everything around us, these types of moral questions, and moral surveys, are only set to increase. So if instilling morals into the machines that will serve us is set to be the course of the next few years, we had better make sure there is a fair representation of demographics coding these decisions. Otherwise we could accidentally end up with an army of cars that prefer cats over toddlers. Just saying.