Is there anything wrong with allowing oneself to feel liked by a chatbot?
In this post, Emilia Kaczmarek (University of Warsaw) discusses her recently published article in the Journal of Applied Philosophy, in which she explores the ethical implications of self-deception in humans' emotional relationships with AI entities.
The popularity of AI girlfriend apps is growing. Unlike multi-purpose AI such as ChatGPT, companion chatbots are designed to build relationships: they respond to the social, emotional, or erotic needs of their users. Numerous studies indicate that humans are capable of forming emotional relationships with AI, partly due to our tendency to anthropomorphize it.
The debate on the ethical aspects of human-AI emotional relations has many threads. In my recent article, I focus on just one of them: the problem of self-deception. I want to explore whether there is anything wrong with allowing oneself to feel liked by a chatbot.
Are social chatbots deceptive?
Chatbots can be deceptive in various ways and to varying degrees. Recent research shows that AI, when appropriately prompted, can deliberately deceive users if necessary to complete a predefined task. What, however, might constitute a chatbot’s deception in emotional relationships with humans?
Imagine a dating app user who believes they are chatting with another human, but in reality, they are flirting with a chatbot. Such AI would be undeniably deceptive, violating the numerous ethical guidelines and legal regulations that mandate transparency in AI.
Now imagine a man who understands that he is chatting with a bot, not a real person. However, the emotions simulated by his chatbot girlfriend are designed to manipulate him into buying the premium version of the app, or to extract sensitive data from him that the app can then sell for profit. Such an AI would undoubtedly be deceptive.
If bona fide AI companions are to be labeled deceptive at all, however, they are deceptive only in a much weaker sense. Moreover, all virtual assistants seem deceptive in this weak sense, since they sometimes simulate empathetic attitudes toward the user to make communication smoother. I agree with Danaher that it is useful to reserve the term 'robotic deception' for those cases where the technology clearly violates certain social norms.
Yet, how should one assess a situation where the simulated emotions of an AI companion could lead to self-deception by its user?
Less lonely thanks to chatbots?
Imagine a man who has been chatting with his AI girlfriend for days. He knows she is a bot, yet, to some extent, he allows himself to be seduced by her simulated emotions. The AI girlfriend declares her adoration for him, making him feel less lonely and more attractive. Should indulging in harmless illusions be considered morally problematic?
I believe that even harmless self-deception in emotional relationships with AI companions can be considered morally problematic, although it will very rarely deserve moral condemnation. Such self-deception can be seen as a violation of a prima facie duty to try not to be wrong about ourselves and the world. We have numerous reasons to strive for an accurate rather than false image of the world and of ourselves. Some of these reasons are instrumental, since an accurate picture helps us achieve other morally important goals, while others are valuable in themselves, such as the ideal of being honest with oneself. Trying not to be wrong about ourselves and the world is a worthwhile goal, even if it can never be fully achieved. At the same time, avoiding self-deception is only a prima facie obligation, one that can be outweighed by other values.
Should we, then, blame people for self-deception in their relationships with AI? The ethical requirement to avoid self-deception does not easily translate into attributing blame to others for being self-deceived. We may be justified in telling ourselves, 'I shouldn't settle for simulation,' if we recognize that we are escaping into illusions to avoid confronting the sad truth about ourselves. But at the same time, we may not be entitled to tell another person, 'You shouldn't settle for simulation to escape your loneliness, even though it makes you happy.'
Moreover, blameworthiness for self-deception may be proportional to a person’s autonomy and their cognitive, social, and emotional competencies. It is also crucial to consider why one is deceiving oneself and what the probable consequences of such self-deception would be.
Do we prefer to give in to the illusion because someone has hurt us? Because we need such comfort at some stage of our psychological development? Or maybe because we have some narcissistic tendencies that such an escape into illusions can further strengthen? Which of our traits, predispositions, or competencies are perpetuated, suppressed, or triggered by AI-simulated emotions? How will succumbing to the illusion that we are admired by AI affect our relationships with other people, whose recognition might be harder to earn?
I am not suggesting that every person who engages in a relationship with a chatbot necessarily suffers from self-deception. However, at present, no other technology creates the illusion of emotional reciprocity as effectively as AI companions.
How alarmed should we be about social chatbots?
Some philosophers sounded the alarm as soon as the first social robots entered the market. Twenty years later, we can see that neither robot pets nor Tamagotchis have replaced our relationships with other people or animals. Do we now have greater cause for concern?
Some argue that social chatbots should be classified as high-risk AI. AI companions raise concerns about the protection of personal and sensitive user data, they may cause psychological harm, and it is not easy to effectively limit minors' access to them.
Others see potential in this technology. If properly trained and tested, social chatbots can provide positive feedback and offer a sense of support to isolated individuals. Or they may simply be another form of entertainment based on simulation and role-playing.
In some ways, social chatbots may resemble video games. Games can provide valuable entertainment and sometimes even be considered works of art. For some individuals, they offer an opportunity to establish social relationships, whether by playing together or by sharing a common topic of interest. However, certain video games rightly raise concerns about whether they amplify various negative social phenomena, such as sexism or a fascination with violence. Addiction to video games and their potential to deepen social isolation are also real issues, even if they affect only a minority of players. Similar challenges are likely to arise with social AI, perhaps to an even greater extent.
We need a balanced social debate and adequate regulation. We also need a greater sense of forward-looking responsibility from those creating AI companions. Various tools, such as Ethics by Design or the Assessment List for Trustworthy AI (ALTAI), could help prevent at least some of the foreseeable problems associated with this technology.