
The Ethical Dilemma of Driverless Cars

The concept of driverless vehicles has been around for a long time. It began with sailors designing ships that could rely solely on the wind to steer them to their destinations. At the dawn of the 20th century, planes with autopilot were invented, but the biggest stride in this field came in the 1980s, when Carnegie Mellon University and the automobile company Mercedes-Benz each produced functional models of driverless cars. Almost 30 years later, in 2014, the first self-driving vehicle was sold by the company Induct Technology. With this change, humanity as a whole is looking at an ideal future, but as with any new concept, driverless cars give rise to many concerns, such as cost-effectiveness and maintenance. Perhaps the biggest concern of all is the repercussions of not having a human driver, otherwise known as the ethical dilemma of driverless cars. Here with more on this is Perspectoverse's Arya Pradhan.


Imagine the following scenario: you have just bought a new driverless car and you’re cruising down the highway in it. As you ride, you notice your car is surrounded on three sides: a truck carrying a heavy load in front, a motorcycle on your right, and an SUV on your left. Suddenly the heavy load falls off the truck, and because the truck is so close, your car cannot stop in time to avoid a collision.

This leaves the car only three options: minimize risk to others by crashing into the fallen load, minimize risk to its own passengers by crashing into the motorcycle, or take the middle path and crash into the SUV, which typically has high passenger-safety standards. With a human driver, any outcome would be the result of a last-minute reaction; in a driverless car, the decision is based solely on the pre-written computer program that governs every aspect of the vehicle. Because the car's choice was planned in advance, any damage to surrounding vehicles and human lives can be viewed as a decision made with malice aforethought. This situation is the perfect representation of the dilemma: it is undeniably complex and greatly challenges basic human morals.
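The "pre-written program" described above can be imagined, in grossly simplified form, as a rule that ranks the available outcomes by a pre-assigned risk score and picks the lowest. The function name, the scores, and the option labels below are entirely hypothetical illustrations of the idea, not any manufacturer's actual logic:

```python
# Purely hypothetical sketch: a rule-based collision policy that ranks
# the three outcomes from the scenario by a made-up risk score.
# Real autonomous-vehicle software is vastly more complex than this.

def choose_collision_target(options):
    """Return the option with the lowest pre-assigned risk score."""
    return min(options, key=lambda o: o["risk_score"])

# Illustrative, invented scores for the three options in the text.
scenario = [
    {"target": "fallen load", "risk_score": 7},  # high risk to own passengers
    {"target": "motorcycle",  "risk_score": 9},  # high risk to the rider
    {"target": "SUV",         "risk_score": 4},  # strong passenger protection
]

print(choose_collision_target(scenario)["target"])
```

The uncomfortable point the sketch makes concrete is that whoever writes the scores has decided the outcome long before the crash ever happens.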

To no one's surprise, the public has spoken out against this controversial engineering marvel, with their fair share of complaints regarding the mass production of these cars. To address this, companies have integrated systems into their vehicles that give more control back to human drivers in extreme situations.

For example, in 2016, Tesla decided that the automatic emergency braking (AEB) in its Model S and Model X would not engage while the driver's foot is on the accelerator. In other words, in an emergency resembling the trolley problem (deciding whether to harm one person or object in order to save a greater number of people), the car's automatic system will not stop the driver from steering the car in whatever direction the driver chooses, even if that is the one that would cause greater damage.

This decision was also highly criticized for endangering passengers and surrounding civilians. Tesla's official statement was that it did not want to second-guess drivers in emergency situations, reasoning that a human in control could respond to whatever the situation demanded.

But this decision directly conflicts with the whole idea behind driverless cars: that robots make better drivers than humans. And that is a reasonably strong argument. While dilemma situations may cause fatalities, their toll is nothing compared to the accidents human drivers cause. Every year in the U.S. there are more than 35,000 deaths from road accidents, the equivalent of a fully loaded 747 crashing every week. Even viewed through the lens of one of the most popular science-fiction writers, Isaac Asimov (whose laws of robotics are widely used to assess the behavior of robots), the decision ends up disobeying all three of his renowned and revered laws.

One perspective is that Tesla's decision to sidestep the dilemma altogether is a huge setback for the advancement of society as a whole. There will always be people who see the dilemma as morally wrong, and from a neutral point of view that judgement may even hold true in some cases, but they should also weigh the vast number of lives that automated cars can save, and have already saved. The idea of integrating automation into everyday activities like driving can be frightening, but it is reasonable to believe that if manufacturers stick to their ‘safety-first’ motto and keep improving their models with better technology and more realistic simulations, there may come a day when the threat to human life is essentially zero, thereby resolving the dilemma.

Written by Arya Pradhan

Illustrated by Anushka Doshi

