Imagine you and three friends are cruising down the street in a driverless car. The car turns a sharp corner when suddenly it sees a group of four children playing hopscotch in the middle of the road. The car has two options: either continue on its path, tragically hitting the group of children, or swerve into the divider, killing you and your three best friends. What should the car do?
This dilemma introduces the complex discussion surrounding the morality of autonomous vehicles (AVs), prompting the question, “How is a non-living machine supposed to make life-or-death decisions?” As autonomous cars make their way from futuristic sci-fi movies to places like the Seaport, the ethical considerations of machine intelligence are becoming significantly more relevant.
The initiative to increase the number of self-driving cars stems largely from the fact that around 90 percent of all motor vehicle accidents are caused by human error. A car that reduces accident rates while simultaneously allowing a user to finish up work emails seems worth the investment. There are, however, bumps in the road to smooth out before these cars can get anywhere. Companies like Google, Hyundai, and Volkswagen have already invested heavily in this technology. Most notable is Google’s self-driving car project, Waymo, which promises to improve road safety and mobility and already has three and a half million miles under its cars’ belts. Google claims that its cars can “[predict] the future behavior of other road users” and are “able to respond quickly and safely to any changes on the road,” but offers no insight into how the cars would make decisions like the one in the opening scenario.
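Google does not publish how Waymo’s prediction actually works, but the simplest possible version of “predicting the future behavior of other road users” is just extrapolating each tracked object’s motion forward in time. The sketch below is purely illustrative: the `TrackedObject` type and constant-velocity assumption are hypothetical stand-ins, and real perception systems use far richer models.

```python
import dataclasses

@dataclasses.dataclass
class TrackedObject:
    """A hypothetical perception output: one road user's state."""
    x: float   # position along the road, in meters
    y: float   # lateral position, in meters
    vx: float  # velocity along the road, in meters per second
    vy: float  # lateral velocity, in meters per second

def predict_position(obj: TrackedObject, seconds_ahead: float) -> tuple[float, float]:
    """Extrapolate an object's position assuming constant velocity.

    This is only a toy model of what "predicting future behavior"
    means at its simplest; it ignores acceleration, intent, and uncertainty.
    """
    return (obj.x + obj.vx * seconds_ahead,
            obj.y + obj.vy * seconds_ahead)

# A pedestrian 20 m ahead, stepping toward the lane at 1.5 m/s:
pedestrian = TrackedObject(x=20.0, y=-3.0, vx=0.0, vy=1.5)
print(predict_position(pedestrian, seconds_ahead=2.0))  # (20.0, 0.0): in the car's path
```

Even this crude extrapolation shows why prediction matters: two seconds from now, that pedestrian is in the lane, and the car must plan around a position no sensor has yet observed.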
Programming these kinds of decisions into autonomous vehicles is no easy feat. If a series of very unfortunate events leads an AV to decide between hitting an old man and a young woman, who does it choose? To further explore the ethical dilemma presented by these hypothetical circumstances, several researchers at MIT created a program called Moral Machine. The website asks participants to make life-or-death decisions in numerous theoretical situations and aggregates their responses for further study. The wide disagreement among respondents on individual scenarios demonstrates just how hard these decisions are.
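No published AV system resolves such dilemmas this way, but a toy example makes the difficulty concrete: any utilitarian-style decision rule has to assign explicit numeric weights to human lives. The weights and function names below (`HARM_WEIGHTS`, `choose_maneuver`) are entirely hypothetical, a minimal sketch of the problem rather than anyone’s real implementation.

```python
# A deliberately naive "moral cost" function. The weights are hypothetical:
# any choice of numbers bakes a contested ethical judgment into code.
HARM_WEIGHTS = {
    "passenger": 1.0,
    "child_pedestrian": 1.0,
    "adult_pedestrian": 1.0,  # Should age matter? Moral Machine respondents disagree.
}

def expected_harm(outcome: dict[str, int]) -> float:
    """Sum the weighted casualties for one possible maneuver."""
    return sum(HARM_WEIGHTS[kind] * count for kind, count in outcome.items())

def choose_maneuver(options: dict[str, dict[str, int]]) -> str:
    """Pick the maneuver with the lowest weighted harm."""
    return min(options, key=lambda name: expected_harm(options[name]))

# The opening scenario: continue into the children, or swerve into the divider.
options = {
    "continue": {"child_pedestrian": 4},
    "swerve":   {"passenger": 4},
}
# With equal weights the two outcomes tie, and min() falls back on
# dictionary order -- an arbitrary tiebreak deciding who lives.
print(choose_maneuver(options))
```

Change any weight and the “right” answer flips, which is exactly the disagreement the Moral Machine data documents: there is no set of numbers everyone accepts.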
While companies like Tesla and Google have not sat down and worked through every hypothetical scenario an AV could face, cities around the world like Boston are already embracing this emerging technology. Though driverless cars may seem like a luxury of the future, machines are learning to mimic human behavior at a rapid pace. Perhaps in five or ten years, cars with drivers will be almost as taboo as cars without drivers are today.
To contribute to the Moral Machine, users can visit moralmachine.mit.edu.
Image source: Pixabay