In this post, Fiona Woollard discusses their recent article in the Journal of Applied Philosophy on the kinds of constraints against harm relevant to self-driving cars.

We are preparing for a future when most cars do not need a human driver. You will be able to get into your ‘self-driving car’, tell it where you want to go, and relax as it takes you there without further human input. This will be great! But there are difficult questions about how self-driving cars should behave. One answer is that self-driving cars should do whatever minimises harm. But perhaps harm is not the only thing that matters morally: perhaps it matters whether an agent does harm or merely allows harm, whether harm is strictly intended or a mere side effect, or who is responsible for the situation where someone must be harmed.

I argue in a recent article that these distinctions do matter morally but that care is needed when applying them to self-driving cars. Self-driving cars are very different from human agents. These differences may affect how the distinctions apply.

Constraints against doing harm and self-driving cars

Let’s focus on the difference between doing harm and allowing harm. There seem to be constraints against doing harm even if this is the only way to prevent harm.

Consider the following case:

Hospital Trolley: A trolley car is taking a passenger to hospital. There is one innocent person trapped on the track in front of the trolley. If the trolley driver stops the trolley, the passenger will die of their injuries. If the trolley driver does not stop the trolley, the person on the track will be hit by the trolley and die.

It is pretty clear that the trolley driver is required to stop. The best explanation of this seems to be that continuing would be doing harm and that there are constraints against doing harm that protect the innocent person on the track.

Let’s look at a similar case involving self-driving cars (this is a version of a case discussed by Frances Kamm):

Hospital Self-Driving Car: A self-driving car is taking a passenger to hospital down a very narrow road. There is one innocent pedestrian trapped in the road in front of the self-driving car. The self-driving car can continue, driving over the pedestrian, or stop before hitting the pedestrian. It cannot swerve to avoid the pedestrian. If the self-driving car stops, the passenger will die of their injuries. If the self-driving car does not stop, the innocent pedestrian will be hit by the car and die.

Just as in the Hospital Trolley case, it seems the self-driving car should be required to stop. Again, the best explanation of this seems to be that continuing would be doing harm and that there are constraints against doing harm that protect the pedestrian. More than this, it seems as if we really should recognise such constraints. We should not be happy at the idea of a self-driving car that sees no constraint against running over pedestrians.

Human Drivers vs Self-Driving Cars

The Hospital Self-Driving Car case shows there should be constraints against self-driving cars doing harm.

However, care is needed. Standard constraints against doing harm are designed to protect us against the behaviour of human agents. These protections are not normally triggered when non-agents do harm to us. If I am drenched by heavy rainfall or struck by lightning, this does not infringe constraints against doing harm.

So, one important question is whether we should see self-driving cars as moral agents. If self-driving cars are moral agents like humans, then it seems the constraint against doing harm should apply to their behaviour in just the same way it applies to a human driver’s behaviour. But, on what we might call the traditional view, self-driving cars are not moral agents.

There are lots of ways you might argue that self-driving cars are not moral agents. Talbot, Jenkins and Purves argue that moral agents need to act because of their beliefs about moral reasons, but self-driving cars cannot do this because they do not really have beliefs. On other views, some self-driving cars are moral agents but still differ in significant ways from human moral agents. For example, List argues that we can make sense of a self-driving car having (and acting on) beliefs just as we can make sense of a company having (and acting on) beliefs. But there is still nothing it is like to be a self-driving car or a company. Nyholm argues that whether self-driving cars – and indeed autonomous machines more generally – count as agents may not yet be determined.

Sometimes we see a human agent as doing harm through a non-agent. We often think this way about traditional vehicles. In the Hospital Trolley case above, the driver need not do anything to run over the person on the track. But we still see that person as protected by the constraint against doing harm. We treat the trolley as an extension of the driver’s body in this situation: the driver counts as doing what the trolley does. The same applies to private cars: the driver of a private car usually counts as doing harm if the car does harm.

Traditional vehicles have become familiar. We have developed a clear sense of how to interpret the behaviour of such vehicles. We can recognise who is responsible for a vehicle’s behaviour and whether they should count as doing what it does. We can recognise this even if we cannot explain the underlying principles. But self-driving cars are not familiar. We do not yet have a settled sense of how we should understand their behaviour.

The Hospital Self-Driving Car case suggests that we should not resolve these complications by deciding that constraints against doing harm simply do not apply to the behaviour of self-driving cars. There needs to be a constraint against self-driving cars doing harm. However, the precise form of this constraint is as yet unsettled.

The Journal of Applied Philosophy is a unique forum for philosophical research that seeks to make a constructive contribution to problems of practical concern. Open to the expression of diverse viewpoints, it brings the identification, justification, and discussion of values to bear on a broad spectrum of issues in environment, medicine, science, policy, law, politics, economics and education. The journal publishes in all areas of applied philosophy, and posts accessible summaries of its recent articles on Justice Everywhere.
