Picture yourself in this scenario: you are driving home from work in the middle lane of a three-lane highway. You are enjoying your cruise home when suddenly the car in front of you slams on its brakes. You are going too fast to stop in time to avoid a collision. A split-second decision must be made. You could hit the vehicle ahead and hope you can slow down enough to prevent serious injury; it is a large truck, so its driver will be fine. You could also swerve into one of the other two lanes. If you swerve into the left lane, you will collide with a motorcyclist. If you swerve to the right, you will hit an SUV carrying a family of five. There is a lot to consider here. Do you prioritize your own safety over that of others?


Most likely. But if we know that drivers of cars are much less likely to be injured or killed in a crash than motorcyclists, is it morally reprehensible to endanger somebody else when your chances of survival are already higher than theirs? The same question applies to the SUV. Is it more ethical to swerve into the one motorcyclist, whose chance of death is quite high, or to run into the SUV, which has a lower fatality rate but endangers more people? You might be thinking that no human driver could process that much information and act on it in so little time, so the entire question is moot, and you would be mostly correct. However, human drivers are no longer the sole standard, as driverless cars become more advanced every day. A computer could theoretically process the information above and come to a decision before the oh-so-flawed human driver was even aware there was a decision to make. Granted, the entire point of standardizing self-driving cars is that they would avoid such scenarios in the first place; nevertheless, no system is perfect, and accidents happen. When a human driver is involved in an accident, they are often found not liable on the basis of human error, and because decisions made in fractions of a second are bound to be suboptimal. The ethical question changes entirely, however, when the car was preprogrammed to act a certain way. A computer cannot be given the same pass for making a mistake that a human can. It is therefore imperative that, in certain unavoidable circumstances, a self-driving car’s algorithm be able to make decisions based on ethics, and not just the law. Ethical thought exercises like the one above might seem like fringe scenarios, and honestly they are, but that doesn’t mean these questions and others like them don’t need to be answered.
The seemingly straightforward ethical approach, and the one most people seem to favor, is to argue that a self-driving car should always choose the course of action that results in the least death or injury. Upon further examination, however, the water is muddied by several factors. If minimizing the death toll is always correct, should a self-driving car be programmed to show no preference for its own occupants? Should a car really decide to kill its five passengers instead of six pedestrians? And how does the legality of the pedestrians’ actions affect the dilemma? Is it ethical to program a car to kill or spare certain people based on whether or not they were using the crosswalk? Not just theoretically, but here in Kennewick, Pasco, or Richland, with people you know. This creates an interesting catch-22. If cars may be programmed to endanger their occupants instead of pedestrians, potential buyers may feel less inclined to purchase a self-driving car. According to Google’s Waymo site, 96% of car accidents are caused by human error. If, in the effort to minimize the death toll, we discourage people from buying the safer self-driving car, aren’t we indirectly causing other deaths? Mercedes has taken a clear stance here, saying its cars will prioritize passenger safety over that of others. The Moral Machine, a website created by MIT students, tests people’s opinions on different scenarios involving a self-driving car. Visitors decide whom the self-driving car should save and whom it should not. The site aims to discover which factors affect people’s decision-making preferences, presenting variables that can make the choices difficult and even uncomfortable.
Expected factors such as avoiding intervention, upholding the law, and protecting the passengers are the ones many people discuss, but there are more difficult variables as well: the age of the pedestrians versus the passengers, social value (doctors versus robbers), and even the physical fitness of those involved. The questions aim to find exactly how far people are willing to go in placing value on other human lives, and how much those factors matter to them. It seems monstrous to suggest that a machine be preprogrammed to value certain humans’ lives more than others, and I am personally uncomfortable doing so. But these may be exactly the hard questions we have to ask ourselves in the coming years of automation. How much control should a programmer have over which lives are spared? Take the Moral Machine test and let us know! What do you think should be valued, and on what basis should driverless cars make decisions? Tell Anderson Law what you think! – written by high school intern Aja George