
On the Ethics of Self-Driving Cars: An Interview with Johannes Himmelreich

My colleague at Stanford’s Center for Ethics in Society, Johannes Himmelreich, is a philosopher who investigates agency and responsibility in contexts of collective collaboration and technological augmentation. Here, I ask Johannes about the ethical issues raised by the development of self-driving cars – one strand of his current research.

FN: Can you tell those of us who know less about the technology behind self-driving cars a little bit about where it’s currently at and how fast the development is going?

JH: In my view, the automotive sci-fi future will not come to your city within the next eight years. I would be very surprised if the majority of driving were much different from what it is now. I expect we will see gradual improvements of systems that assist human driving. But, honestly, that’s more of a guess than a prediction. I actually can say very little about where the technology is at, since there is not much to go by that is publicly available and that is not just boisterous over-promising. This will change in the next 12-18 months. Google offshoot Waymo is starting a taxi service with self-driving cars in Phoenix, Arizona this year, and General Motors’ brand Cruise says that it will start a similar so-called “robo-taxi” service in San Francisco next year. That’s when the rubber hits the road.

FN: How good is the technology today?

JH: In California, you can actually get a glimpse into where things stand because companies that test here are required to report on how these tests are going. In 2017, Waymo cars drove on average more than 5,500 miles, in dense city traffic, before a driver had to intervene or something else went wrong – and this number is actually worse this year than it was the year before because Waymo was bug-testing new features. That’s the best measure so far of where we are today.

The real answer to your question, though, is: it depends. California’s climate is very friendly to self-driving cars. In California, you don’t need to worry about keeping the car’s sensors free from snow or dirt, and there won’t be a snowstorm, so streets won’t suddenly look very different from one day to the next. Good performance here does not mean good performance everywhere. So, one question is “when will the first self-driving cars be in the hands of consumers in California?” A very different question is “when will these cars be able to actually work everywhere else?”

FN: Are these technical challenges the main challenges in the development of self-driving cars?

JH: I don’t know whether technical challenges are the main challenges, but there are definitely big non-technological challenges.

FN: For example, how will self-driving cars and human drivers get along?

JH: Yeah, some people propose that self-driving cars should get their own lanes. I don’t think that’s a good idea. It is not ambitious enough, and it seems hardly feasible to segment the traffic system in this way in places outside the US, where traffic is different.

Similarly, some people think that cyclists and pedestrians need to check their behaviour: no more jaywalking, and a requirement to wear or carry certain things, some kind of transponder on your bike for instance, to make sure the self-driving cars see you. Again, I think that is shirking the responsibility of getting this thing right. Also, this idea is as if H&M were saying: everybody lose weight now, because we cannot tailor to certain sizes.

The list of problems goes on: How do self-driving cars communicate with other road users – the nod that the car has seen you and lets you cross? How can we manage the gradual transition to more supportive driving assistance? It will be very hard to stay alert when the car is doing most of the driving, but not all of it, and it is very hard for humans to take over quickly in emergency situations. The so-called handover is a challenge. And we haven’t even talked about the safety problems posed by malicious actors. What if someone wants to hack the self-driving Uber taxi? A hack is relatively easy because you have direct access to the hardware.

FN: Let’s talk about the ethical problems. How have those interested in the ethics of self-driving cars tended to think about these issues, and how (if at all) do you think this needs to change going forward?

JH: I think all of these issues above are also ethical problems, even if they are often not seen as such. Instead, the big issue in the ethics of self-driving cars has been the trolley problem – or whatever this debate thinks the trolley problem is. Basically: if a crash is unavoidable, who should the car run into? Philosophers, often following the agenda set by public discussion, have tended to approach this question as an instance of the trolley problem. I think that’s a mistake and that’s what I argue in “Never Mind the Trolley”, forthcoming in Ethical Theory and Moral Practice.

FN: What’s the problem with the trolley problem?

JH: The main gripe that I have with the trolley problem is that it is so individualistic. A trolley case wants you to reflect on a dilemma of life and death and then come up with your considered judgment about what is right. But it is reasonable to disagree about what is right. People have different reasonable views about abortion. People will have different reasonable views about the collision management of self-driving cars. It’s a good thing that we have politics to come to grips with such reasonable disagreement. I think that self-driving cars pose a real political challenge. The trolley problem, by contrast, cannot help us answer this political problem. Instead, it may actually divert the focus to the first-order moral issue of what each of us thinks is right.

FN: But surely some trolley-like cases will arise with self-driving cars?

JH: I am actually not sure. The trolley problem assumes that something has gone so badly wrong that somebody has got to die. But it also assumes that the car still has enough control to make a decision about who dies. That both of these things are true at the same time looks pretty impossible to me. Failure modes are likely to be correlated. I am not saying that trolley-like cases are inconceivable for self-driving cars. But what we are discussing right now is just not the right focus. The bigger problem is the trade-offs about risk and the practices and standards of safety. These also concern matters of life and death, they need to be addressed on a social level, and they are much more general than collision scenarios.

FN: Given that this is the “Justice Everywhere” blog, I’d be particularly keen to hear what you think is the most pressing justice-based concern raised by this technology and the social changes it’s expected to bring about.

JH: I think self-driving cars put a common good at stake. Self-driving cars are incredibly individualistic. In the United States, transportation is individualized and road-bound anyway. But elsewhere, self-driving cars threaten to undermine an existing public transportation infrastructure. Self-driving cars would take us further down the path that we are going down with ride-hailing and car-sharing services today. On this path, public transportation might become costlier. We need to ask ourselves: How do we integrate the benefits of self-driving cars smoothly into an existing public transportation infrastructure? Who has access to self-driving cars? And, very generally: What is the future of public transportation in a world of self-driving cars? This is something that I worry about, in particular because it is bound up with broader worries relating to participation, spatial justice, and inequality. I think that self-driving cars, on their current trajectory at least, stand to exacerbate such existing justice-based concerns.

How self-driving cars change the infrastructure and shape the places where we live – this issue deserves a bit more attention. Another major issue of justice – that of job displacement and how self-driving cars will affect labour markets – is, I think, already well established on the agenda as a larger symptom of technological progress.

FN: What could be done to meet this concern about spatial justice?

JH: That is the problem: I don’t have an answer. I think we should think about positive visions of self-driving cars for everyone. One part of this vision is safety. The aim should be a massive reduction in traffic deaths. That part of the vision is pretty clear, and the industry will hopefully be held to this expectation. But is safety all we hope for? What are the other parts of the vision? For instance, many people are concerned that there is a deepening divide between those living in urban and rural areas. Most of what I hear about the potential of self-driving cars concerns urban areas; but isn’t there some positive vision for rural areas as well? With regard to this, and other justice-related concerns, I think those involved in the development and governance of self-driving cars should see whether we can push the limits of our imagination.


Feel free to join the discussion in the comments below. For another blog post on this topic, written by Johannes, see The Conversation.

Fay Niker

Fay is Lecturer in Philosophy at the University of Stirling. Before taking up this role, she was a postdoctoral fellow at the Center for Ethics in Society at Stanford University. Her research interests lie at the intersection of ethics, moral psychology, and social and political philosophy.
