There’s an underdiscussed self-driving car problem: which lives should be prioritized when an accident occurs? Humans use instinct to make these choices, but what about an algorithm? A new paper from MIT researchers examines public opinion on these questions. The team collected data from an online quiz called the Moral Machine, which asks users to make decisions about imaginary car crashes. Nine separate factors were tested, and here are the results.

Millions of users around the world took the quiz, and the resulting data could help shape future algorithms. The data shows some very consistent observations: decisions shared by the vast majority of respondents include sparing humans over animals, more lives rather than fewer, and children instead of adults. A toy sketch of how such ranked preferences could be expressed in code follows below.
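To make the idea of "coding in" such preferences concrete, here is a purely illustrative sketch in Python. It is not the study's method or any vendor's actual logic; the categories and weights are hypothetical, chosen only to mirror the three widely shared survey preferences above.

```python
# Purely illustrative toy sketch, NOT the study's or any manufacturer's logic.
# It encodes three widely shared survey preferences as a simple score:
# spare humans over animals, more lives over fewer, children over adults.
# All weights below are assumptions made for illustration.

from dataclasses import dataclass


@dataclass
class Group:
    adults: int = 0    # adult humans in the group
    children: int = 0  # child humans in the group
    animals: int = 0   # animals in the group


def survival_priority(group: Group) -> float:
    """Higher score = stronger (hypothetical) case for sparing this group."""
    adult_weight, child_weight, animal_weight = 1.0, 1.5, 0.1  # assumed weights
    return (group.adults * adult_weight
            + group.children * child_weight
            + group.animals * animal_weight)


# Example: two children crossing vs. three animals in the road.
pedestrians = Group(children=2)
animals = Group(animals=3)
spared = max([pedestrians, animals], key=survival_priority)
print(spared)  # Group(adults=0, children=2, animals=0)
```

The point of the sketch is only that encoding a preference ordering is mechanically trivial; the hard part, which the rest of this article is about, is agreeing on who gets to pick the weights.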

The data also shows that ethical preferences change from country to country. This could be due to various factors such as geography and culture. Although these decisions will need to be made at some point, the technology still has time to improve: autonomous driving is fairly new and still in the prototyping stage, and a well-functioning product might be far off. It is still unknown how these preferences will take root in the technology, but everyone can agree that this is a discussion we need to have.

A sample scenario from the Moral Machine: should the user hit the pedestrians or crash into the barrier?

Does Culture Affect Ethical Preferences?

The results from the Moral Machine suggest culture has a big impact on the decisions people make. Although respondents everywhere shared a few core principles (like sparing many over few), preferences still varied to a certain extent. The study’s authors suggest this may be due to differences between individualistic and collectivist cultures, and note that these variations suggest “geographical and cultural proximity may allow groups of territories to converge on shared preferences for machine ethics.”

However, some factors were based on neither culture nor geography. Something else was at play here: economic prosperity. Respondents from less prosperous countries, for example, were less inclined to direct the car toward jaywalkers rather than people crossing the road legally. These definitely won’t be daily decisions for an autonomous vehicle, but they still need to be thought about.

Ethics vs. Legislation for Self-Driving Cars

But do we really need legislation on these issues? When should companies start programming ethical decisions into self-driving vehicles? The short answer to the second question is that they already have. It’s likely that preferences of some sort are already being coded in, although the companies involved aren’t making them public. Is it okay for firms not to be open about these decisions?

Self-driving systems are not yet capable of making decisions like these. For example, they still can’t differentiate between young and old people. State-of-the-art algorithms and sensors can distinguish broad categories, such as squirrels and cyclists, but that is still far from what these dilemmas demand. Plus, whichever lives companies say they prioritize (people or animals, passengers or pedestrians), the choice is bound to upset somebody. That’s why these are ethical dilemmas: there’s no easy answer.
