Is there anything wrong with allowing oneself to feel liked by a chatbot?
In this post, Emilia Kaczmarek (University of Warsaw) discusses her recently published article in the Journal of Applied Philosophy, in which she explores the ethical implications of self-deception in humans' emotional relationships with AI entities.
The popularity of AI girlfriend apps is growing. Unlike multi-purpose AI such as ChatGPT, companion chatbots are designed to build relationships: they cater to the social, emotional, or erotic needs of their users. Numerous studies indicate that humans are capable of forming emotional bonds with AI, partly due to our tendency to anthropomorphize it.
The debate on the ethical aspects of human-AI emotional relations is multifaceted. In my recent article, I focus on just one issue: the problem of self-deception. I want to explore whether there is anything wrong with allowing oneself to feel liked by a chatbot.