
Is it possible to trust Artificial Intelligence (AI)?

In this post, Pepijn Al (University of Western Ontario) discusses his recent article in the Journal of Applied Philosophy on trust and responsibility in human relationships with AI and its developers.


Chances are high that you are using AI systems on a daily basis. Maybe you have watched a series that Netflix recommended to you, or used Google Maps to navigate. Even the editor I used for this blog post is AI-powered. If you are like me, you might do this without knowing exactly how these systems work. So, could it be that we have started to trust the AI systems we use? As I argue in a recent article, this would be the wrong conclusion to draw, because trust has a specific function which is absent in human-AI interactions.

Trust signals a dependency. It says: “I’m counting on you.” For trustworthy people, this trust is an (extra) reason to act, although not necessarily a decisive one. In a society where trustworthiness is promoted and the untrustworthy are blamed, the question “Can I trust you to do this?” adds extra force to a request. Trust does not carry the same force when directed at an AI: the AI does not respond to this trust in the way humans can. Therefore, we cannot and should not trust AI. This may seem like philosophical nitpicking, but understanding the function of trust, and which relationships it applies to, will help us understand who is responsible when AI systems fail.

What is the function of trust?

In her illuminating work on trustworthiness, Karen Jones relates the function of trust to our dependency on others. Many of the things we want to achieve require the help of others. This makes us vulnerable: an unexpected action by someone else could ruin our plans, with potentially far-reaching consequences.

This does not mean that we should just take a leap and hope for the best. As Jones points out, unlike the weather or our cars, people can respond to the expectations others place in them. Trust makes use of this ability. When we trust a friend to keep a secret, we do not merely predict or hope that they will not tell it. A trustworthy friend will keep our secret because we trust them. So, when we trust, we expect someone to be responsive to our trust, and being responsive is a precondition for trustworthiness. This interaction between trust and trustworthiness helps us decrease the vulnerability caused by our dependency.

Trust does not play a similar role in our relationships with objects, such as cars and AI systems. When we rely on our car not to break down during a trip, this expectation itself will not reduce the risk of a breakdown. Current AI systems are like cars in this respect: trusting them does not change the way they act. The YouTube algorithm will not give better recommendations because we trust it. It would therefore be incorrect to think of human-AI relationships as trust relationships. Instead, we are merely relying on AI.

Trusting institutions and animals

Does this imply that we can only trust humans? That would be problematic, as we often put our trust in non-human entities, such as the justice system or our pets.

Fortunately, it turns out that trust in animals and institutions works like trust in people. In both cases, trust signals an expectation to which we expect a trustworthy institution or animal to respond. Institutions and animals need to be responsive to be trustworthy. A hospital, for example, cannot be trustworthy if the people in it are indifferent to the trust placed in them, even if the hospital delivers perfect care. Similarly, for a dog to be trustworthy, it must be responsive to the trust put in it. Without such responsiveness, a dog’s behavior would be no different from that of a parrot talking on command, which we would not want to call trust. Thus, even for animals and institutions, trust requires responsiveness.

Who is responsible?

Understanding that human-AI relationships are based not on trust but on reliance is important, because it also tells us something about who can be held responsible. When someone we trusted lets us down, we hold them responsible and often blame them for the consequences. When a friend does not keep our secret, we feel betrayed and blame them. This reaction is not appropriate for AI. It does not make sense to blame Google Maps for betraying your trust when it sends you in the wrong direction (even though you might want to), because you would be blaming the system for something it is not able to do. And while holding a friend responsible might make them act differently, blame will not have any effect on how Google Maps navigates your routes in the future.

Instead of trusting and blaming AI systems, we should trust the developers of AI and the institutions and people that make use of these algorithms. The developers and users are the ones who can and (when our trust is reasonable) should be moved by our trust. They are also the ones who should be held responsible when this trust is betrayed. Trusting them and holding them responsible will also have more impact, because it can affect their behavior.

What does this mean for using AI?

Does this mean that we should not depend on AI? No, it does not. AI systems can be immensely helpful, and it would be unwise to reject them simply because we cannot trust them. But it is important to understand that our relationship to the developers and users of AI is different from our relationship to the AI itself. The latter we rely on; the former we trust and can hold responsible.

 

The Journal of Applied Philosophy is a unique forum for philosophical research that seeks to make a constructive contribution to problems of practical concern. Open to the expression of diverse viewpoints, it brings the identification, justification, and discussion of values to bear on a broad spectrum of issues in environment, medicine, science, policy, law, politics, economics and education. The journal publishes in all areas of applied philosophy, and posts accessible summaries of its recent articles on Justice Everywhere.


