It’s hard to believe that driverless cars are actually going to become a reality in the not-so-distant future, with the earliest predictions putting production vehicles and supporting regulations around 2020. Some manufacturers are even testing their driverless prototypes on public roads right now. Google has been operating a small fleet of Toyota Priuses on Nevada roadways since 2011, and Volvo, Jaguar, Land Rover, Tesla, and Ford aren’t far behind.
These robotic vehicles are programmed to strictly obey the law and avoid crashes, but will that let them make the correct decision when the time comes? For example, it’s sometimes important to break the speed limit in an emergency or to keep up with the flow of traffic. If driverless cars are never allowed to break traffic laws, they could become a danger in exactly those situations. Programming a driverless car to slavishly follow the law might be foolish and dangerous. Better to think through the programming proactively now than to react defensively after a public backlash in the national news.
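To make that concrete, here is a minimal, purely hypothetical sketch of a speed policy that is not hard-capped at the posted limit. Every name and threshold here (the function, the allowance values) is an illustrative assumption, not taken from any real autonomous-driving system:

```python
def choose_target_speed(posted_limit_kmh: float,
                        traffic_flow_kmh: float,
                        emergency: bool = False,
                        max_overage_kmh: float = 10.0) -> float:
    """Pick a target speed that normally respects the posted limit but may
    exceed it by a bounded amount to match traffic flow or in an emergency."""
    if emergency:
        # Allow a larger, but still bounded, deviation in emergencies.
        return posted_limit_kmh + 2 * max_overage_kmh
    if traffic_flow_kmh > posted_limit_kmh:
        # Follow the prevailing flow, but never exceed the limit by more
        # than max_overage_kmh.
        return min(traffic_flow_kmh, posted_limit_kmh + max_overage_kmh)
    # Otherwise drive at the slower of the limit and the surrounding traffic.
    return min(posted_limit_kmh, traffic_flow_kmh)


print(choose_target_speed(100, 112))        # 110.0: bounded overage to match flow
print(choose_target_speed(100, 90))         # 90.0: slower traffic, slow down
print(choose_target_speed(100, 100, True))  # 120.0: emergency allowance
```

The point of the sketch is only that "obey the law" versus "adjust to conditions" is a design decision someone has to encode, with explicit bounds, long before the car ever meets an emergency.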
Which brings us to another, slightly more complex issue: the ethics of these operating systems. Philosophers have been thinking about ethics for thousands of years, and now we can apply that experience to driverless cars (probably not what they had in mind when they were working out their philosophies). One classic dilemma, proposed by Philippa Foot and Judith Thomson, is called the Trolley Problem. Imagine the following:
A runaway train is about to run over and kill five people standing on the tracks. You’re watching this scene from a safe spot, standing next to a switch that can divert the train onto a sidetrack, on which only one person stands (who would otherwise survive if you did nothing). Would you pull the switch to save the five people and intentionally kill the one person on the sidetrack?
A simple analysis of this scenario would probably have you saving the five and sacrificing the one, based purely on the numbers. A more thoughtful approach, however, would consider other factors too, including the moral distinction between intentionally killing someone and letting someone die. It seems worse to intentionally kill someone (the one person on the sidetrack) than to allow people to die (the five on the main track) through a chain of events that you neither started nor bear any responsibility for.
These no-win scenarios will have to be considered when programming driverless cars, and you’d hope their operating systems will choose the lesser of two evils. Is it better to save a child or an adult? What about multiple adults versus one child? Programmers will face countless uncomfortable scenarios in which they have to decide what is ethically right and what is not. Again, ethics by numbers alone seems naive and incomplete; rights, duties, conflicting values, and other factors will often come into play.
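To see why "ethics by numbers" feels so thin, here is a deliberately naive toy sketch. Everything in it is a made-up illustration (the `Outcome` class, the `lesser_of_two_evils` function); no real manufacturer has published anything like this. The only thing it demonstrates is that a count-based rule throws away everything that isn’t a count:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    people_harmed: int

def lesser_of_two_evils(options: list[Outcome]) -> Outcome:
    """Pick the outcome with the fewest people harmed -- and consider nothing else."""
    return min(options, key=lambda o: o.people_harmed)

trolley = [
    Outcome("do nothing: the runaway train hits five people", 5),
    Outcome("pull the switch: the train is diverted onto one person", 1),
]
print(lesser_of_two_evils(trolley).description)
# The count alone decides. Intent, the act/omission distinction, and who bears
# responsibility never enter the calculation, because they were never encoded.
```

Whatever factors the programmers leave out of the model simply do not exist as far as the car is concerned, which is exactly why these choices deserve scrutiny now.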
Ethical standards could even vary between car manufacturers, unless legislation dictates otherwise. A recent BBC article even poked fun at the idea of being able to download different ethical engines for your driverless car. There could be “a Nietzschean engine, which would drive right over everything – why not? God is dead anyway. Or an Ayn Rand model ethical engine, named after the Russian-American free market fanatic, which would use chip technology to scan the bank account of each pedestrian, calculating their net worth, swerving to miss the producers of society, and running down all the moochers. There could even be a Woody Allen ethical engine, which would start apologizing as you press on the gas, and continue all the way home.”
There are so many variables to consider once your own vehicle starts counting you out of the equation on your morning commute to work, but let’s hope that auto manufacturers and programmers review everything carefully before these vehicles hit the market. Change is unavoidable and not necessarily a bad thing, but major disruptions and new dangers should be anticipated and avoided where possible.
If you’re still interested in this topic and enjoy listening to podcasts, head over to Radio Lab and have a listen to their thoughts.