Self-driving cars pose a well-known ethical problem: whose lives to prioritize in the event of a crash. Human drivers make these choices instinctively, but algorithms will now be able to make them in advance. A new paper from MIT probes public opinion on these questions, collecting data from an online quiz called the Moral Machine, which asked users to make a series of ethical decisions about fictional car crashes. Nine separate factors were tested; here are the results.
Millions of users from 233 countries and territories took the quiz, making 40 million ethical decisions in total. From this data, the study’s authors found certain preferences that held globally: sparing humans over animals, sparing more lives rather than fewer, and sparing children over adults.
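To make that aggregation concrete, here is a minimal sketch of how preference shares could be tallied from quiz responses. The record format and sample data are invented for illustration; the study itself used a more elaborate conjoint analysis that controls for the other factors varied in each scenario.

```python
from collections import defaultdict

# Hypothetical records: each quiz answer pits two groups against each
# other, and "spared" marks which one the respondent chose to save.
responses = [
    {"attribute": "species", "spared": "humans"},
    {"attribute": "species", "spared": "humans"},
    {"attribute": "species", "spared": "pets"},
    {"attribute": "age", "spared": "children"},
    {"attribute": "age", "spared": "children"},
]

# Count how often each option was spared within its attribute.
counts = defaultdict(lambda: defaultdict(int))
for r in responses:
    counts[r["attribute"]][r["spared"]] += 1

# A simple "preference share": the fraction of decisions sparing each option.
for attribute, tallies in counts.items():
    total = sum(tallies.values())
    for option, n in sorted(tallies.items(), key=lambda kv: -kv[1]):
        print(f"{attribute}: {option} spared in {n / total:.0%} of decisions")
```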
The data also showed significant variations in ethical preferences between countries, which correlate with a number of factors, including geography and culture. It’s important to note that although these decisions will need to be made at some point, self-driving technology still has a way to go: autonomy is in its infancy, and self-driving cars remain prototypes, not products. Experts also say that while it’s not yet clear how these decisions will be programmed into vehicles, there is a clear need for public consultation and debate.
Does Culture Affect Ethical Preferences?
The results from the Moral Machine suggest there are a few shared principles when it comes to these ethical dilemmas. But the paper’s authors also found variations in preferences that followed certain divides. None of these variations reversed the core principles (like sparing the many over the few), but they did vary in degree. The study’s authors suggest this might reflect differences between individualistic and collectivist cultures. These variations suggest that “geographical and cultural proximity may allow groups of territories to converge on shared preferences for machine ethics,” the study’s authors wrote.
However, other factors correlated with variations that weren’t necessarily geographic. Participants from less prosperous countries, for example, showed a weaker preference for sparing people crossing the road legally over jaywalkers. And even if cars will rarely face a stark choice between crashing into object X or object Y, they will still have to weigh related decisions, such as how much margin to give a cyclist when passing.
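As a toy example of the kind of country-level correlation described above, the sketch below relates national prosperity to the strength of this preference. The figures are invented, not the study’s data, and the standard-library correlation function assumes Python 3.10 or later.

```python
import statistics

# Hypothetical country-level data: GDP per capita (thousands of USD)
# and a score for how strongly respondents preferred sparing lawful
# pedestrians over jaywalkers (higher = stronger preference).
gdp = [5, 12, 25, 40, 55, 70]
lawful_preference = [0.10, 0.18, 0.25, 0.31, 0.38, 0.42]

# Pearson correlation (statistics.correlation requires Python 3.10+).
r = statistics.correlation(gdp, lawful_preference)
print(f"prosperity vs. preference for sparing lawful pedestrians: r = {r:.2f}")
```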
Ethics Vs Legislation For Self-Driving Cars
But how close are we to needing legislation on these issues? And when will companies start programming ethical decisions into self-driving vehicles? The short answer to the second question is that they already have. Some preferences are likely being coded in, even if the companies involved aren’t talking about them publicly.
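Nobody outside these companies knows what such coded-in preferences look like, so any concrete example is speculative. Purely as an illustration of what “coding in preferences” could mean, the following sketch scores candidate emergency maneuvers with a weighted cost function; the categories, weights, probabilities, and maneuvers are all hypothetical.

```python
# Hypothetical harm weights: higher means the planner tries harder to
# avoid harming that category. The values are illustrative, not real.
HARM_WEIGHTS = {"pedestrian": 10.0, "cyclist": 8.0, "passenger": 10.0, "animal": 1.0}

def maneuver_cost(predicted_harms):
    """Sum the weighted expected harms for one candidate maneuver.

    predicted_harms: list of (category, probability_of_harm) pairs.
    """
    return sum(HARM_WEIGHTS[cat] * p for cat, p in predicted_harms)

# Two hypothetical emergency options produced by a motion planner.
candidates = {
    "brake_straight": [("pedestrian", 0.3), ("passenger", 0.1)],
    "swerve_left": [("cyclist", 0.5), ("animal", 0.9)],
}

best = min(candidates, key=lambda name: maneuver_cost(candidates[name]))
print(f"lowest expected-harm maneuver: {best}")
```

Even in this toy form, the weights encode exactly the kind of value judgment the Moral Machine surveyed.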
It’s understandable that firms aren’t willing to be open about these decisions. For one thing, self-driving systems are not yet sophisticated enough to make many of the distinctions involved, such as differentiating between young and old people: state-of-the-art algorithms and sensors can make obvious distinctions, like between squirrels and cyclists, but not more subtle ones. For another, whichever lives companies say they prioritize, whether people or animals, passengers or pedestrians, the decision will upset somebody. That’s why these are ethical dilemmas: there’s no easy answer.