Author: Journal of Applied Philosophy

Is there anything wrong with allowing oneself to feel liked by a chatbot?

In this post, Emilia Kaczmarek (University of Warsaw) discusses her recently published article in the Journal of Applied Philosophy, in which she explores the ethical implications of self-deception in humans' emotional relationships with AI entities.

Photo: Free to use by Mateusz Haberny.

The popularity of AI girlfriend apps is growing. Unlike multi-purpose AI such as ChatGPT, companion chatbots are designed to build relationships. They respond to social, emotional or erotic needs of their users. Numerous studies indicate that humans are capable of forming emotional relationships with AI, partly due to our tendency to anthropomorphize it.

The debate on the ethical aspects of human-AI emotional relations is multi-threaded. In my recent article, I focus only on one topic: the problem of self-deception. I want to explore whether there is anything wrong with allowing oneself to feel liked by a chatbot.

Are social chatbots deceptive?

Chatbots can be deceptive in various ways and to varying degrees. Recent research shows that AI, when appropriately prompted, can deliberately deceive users if necessary to complete a predefined task. What, however, might constitute a chatbot’s deception in emotional relationships with humans?

Imagine a dating app user who believes they are chatting with another human, but in reality, they are flirting with a chatbot. Such AI would be undeniably deceptive, violating numerous ethical and legal regulations that mandate the creation of transparent AI.

Now imagine a man who understands that he is chatting with a bot, not a real person. However, the emotional simulations by his chatbot girlfriend were designed to manipulate him into buying the premium version of the app or to extract sensitive data from him, which the app can then sell for profit. Such an AI would undoubtedly be deceptive.

However, if bona fide AI companions are to be labeled deceptive at all, it is only in a much weaker sense. Moreover, all virtual assistants seem deceptive in this weak sense, because they sometimes simulate empathetic attitudes toward the user to make communication smoother. I agree with Danaher that it would be useful to reserve the term ‘robotic deception’ for those cases where technology clearly violates certain social norms.

Yet, how should one assess a situation where the simulated emotions of an AI companion could lead to self-deception by its user?

Less lonely thanks to chatbots?

Imagine a man who has been chatting with his AI girlfriend for days. He knows she is a bot, yet, to some extent, he allows himself to be seduced by her simulated emotions. The AI girlfriend declares her adoration for him, making him feel less lonely and more attractive. Should indulging in harmless illusions be considered morally problematic?

I believe that harmless self-deception in emotional relationships with AI companions can be considered morally problematic, although it will very rarely deserve moral condemnation. Such self-deception can be perceived as a violation of a prima facie duty to try not to be wrong about ourselves and the world. We have numerous reasons to strive for a more accurate rather than false image of the world and ourselves. Some of these reasons are instrumentally useful for achieving other morally important goals, while others are valuable in themselves, such as the ideal of being honest with oneself. Trying not to be wrong about ourselves and the world is a worthwhile goal, even if it is unachievable. Moreover, avoiding self-deception is a prima facie obligation that can be outweighed by other values.

Should we then blame people for their self-deception in relationships with AI? The ethical requirement to avoid self-deception does not easily translate into attributing blame to others for being self-deceived. We may be justified in telling ourselves, ‘I shouldn’t settle for simulation,’ if we recognize that we are escaping into illusions to avoid confronting the sad truth about ourselves. But at the same time, we may not be entitled to tell another person, ‘You shouldn’t settle for simulation to escape your loneliness, even though it makes you happy’.

Moreover, blameworthiness for self-deception may be proportional to a person’s autonomy and their cognitive, social, and emotional competencies. It is also crucial to consider why one is deceiving oneself and what the probable consequences of such self-deception would be.

Do we prefer to give in to the illusion because someone has hurt us? Because we need such comfort at some stage of our psychological development? Or maybe because we have some narcissistic tendencies that such an escape into illusions can further strengthen? Which of our traits, predispositions, or competencies are perpetuated, suppressed, or triggered by AI-simulated emotions? How will succumbing to the illusion that we are admired by AI affect our relationships with other people, whose recognition might be harder to earn?

I am not suggesting that every person who engages in a relationship with a chatbot necessarily suffers from self-deception. However, at present, no other technology creates the illusion of emotional reciprocity as effectively as AI companions.

How alarmed should we be about social chatbots?

Some philosophers sounded the alarm as soon as the first social robots entered the market. Twenty years later, we see that neither robot pets nor Tamagotchis have replaced humans' relationships with other people or animals. Do we now have greater cause for concern?

Some argue that social chatbots should be considered high-risk AI. AI companions raise concerns related to the protection of personal and sensitive user data. They may cause psychological harm, and it is not easy to effectively limit minors’ access to them.

Others see potential in this technology. If effectively trained and tested, social chatbots can provide positive feedback and offer a sense of support to isolated individuals. Or they may simply be another form of entertainment based on simulation and role-playing.

In some ways, social chatbots may resemble video games. Games can provide valuable entertainment and sometimes even be considered works of art. For some individuals, they offer an opportunity to establish social relationships, whether by playing together or sharing a common topic of interest. However, certain video games rightly raise concerns about whether they amplify various negative social phenomena, such as sexism or a fascination with violence. Addiction to video games, and the potential for them to deepen social isolation, are also real issues, even if they affect only a minority of players. Similar challenges are likely to arise with social AI, perhaps to an even greater extent.

We need a balanced social debate and adequate regulation. We also need a greater sense of forward-looking responsibility from those creating AI companions. Various tools, such as Ethics by Design or the Assessment List for Trustworthy AI (ALTAI), could help prevent at least some of the foreseeable problems associated with this technology.

Why it can be OK to have kids in the climate emergency

In this post, Elizabeth Cripps (University of Edinburgh) discusses her new article published in the Journal of Applied Philosophy, in which she explores whether it is justifiable to have children despite the carbon footprint it creates.

Credit: Andrea Thomson Photography.

In the US, having a child has a carbon price tag of 7 tonnes a year. In France, it’s 1.4 tonnes. Going vegan saves only 0.4 tonnes yearly, living car free 2.4 tonnes, and avoiding a Transatlantic flight 1.6 tonnes.

For those of us who have or want kids, this is an uncomfortable fact. We know we should pursue climate justice, including by cutting our own carbon impact. Does it follow that someone living an affluent life in a country like the UK or the US should stay childless?

Not necessarily. What’s more, by putting this argument under pressure, we learn some important lessons for moral philosophers. We need to talk more about individual sacrifice in the face of global emergencies. In so doing, we must engage carefully with sociological and psychological scholarship and attend to the insights of demographic groups who have experienced injustice.


Non-monogamy and the “Black Marriage Problem”

In this discussion post, Justin Clardy (he/they; Santa Clara University) introduces their article recently published in the Journal of Applied Philosophy on polyamory and a defense of minimal marriage among the Black population in the USA.

The short synopsis of the article is accompanied by an asynchronous conversation among Anika Simpson (Howard), Faith Charmagne, Luke Brunning (Leeds), and Nannearl Brown (PAGES TRG), in which they engage with the article in terms of its academic and practical implications for the Black population in the US.

Created with Bing AI Image Generator (2024).

Synopsis by Justin Clardy

The Black marriage problem—or the fact that “Black folks just aren’t getting or staying married like they used to”—has long been a concern for Black writers. This problem is concerning because, just under 60 years ago, rising Black marriage rates were regarded as one of the zeniths of the Civil Rights Movement.

In 2022, Ralph Richard Banks appeared in the New York Post doubling down on his 2011 suggestion that, in order to solve the Black marriage problem, Black women should consider marrying more white men. What’s striking about Banks’ suggestion is not just that it fails to take endogamy as seriously as it should; it also fails to take non-monogamy among Black folks seriously. What possibilities would expanding legal marriage to include plural marriages offer for the same populations of unmarried Black folks that Black writers believe to be driving the Black marriage crisis? This is one of the questions that I explore in a recent article called “Polyamory in Black.”

Historical records in the U.S. tell stories of non-monogamous relationships dating back to the antebellum period. Some of these relationships were, of course, forged by the pernicious design of the domestic slave trade. Other Black non-monogamous intimate relationships, however, were chosen. In her book, Black Women, Black Love: America’s War on African American Marriage, Dianne Stewart writes about Dorcas Cooper, who was content to remain in a polygamous marriage after arriving on a plantation to find her husband married to a second woman. When Cooper recognized how well her husband’s second wife, Jenny, took care of Cooper’s kids, she developed a deep fondness for her; the historical record shows that Cooper would not “let anybody say anything against [Jenny].” The historical record also shows Freedmen’s Bureau agents disregarding non-monogamous intimacies during Reconstruction, breaking up Black non-monogamous families in the years following the Civil War. As one agent recounted: “Whenever a negro appears before me with 2 or 3 wives…I marry him to the woman who has the greatest number of helpless children who would otherwise become a charge on the bureau.” Importantly, then just as now, marriage was tethered to a bundle of rights and entitlements, so exclusion from the institution had material consequences, such as the denial of Civil War pensions, for the Black individuals and families it forbade.

Despite (or perhaps because of) the presence of Black non-monogamies in both the antebellum and Reconstruction periods, anti-non-monogamous propaganda routinely portrayed non-monogamists as Black or barbaric in order to convey messages of chaos, foreignness, and despotism. As I show in an article published in the Journal of Applied Philosophy, some of these anti-Black, anti-non-monogamous impressions were published in media outlets following the Reynolds v. United States decision handed down by the Supreme Court. Even in the Court’s official opinion, white engagement with non-monogamy was said to produce a “peculiar race”, as the practice was thought natural and common among Asiatic and African peoples but foreign to whites.

Insofar as the Reynolds opinion remains one of the highest opinions handed down by the U.S. Supreme Court on plural marriage, present-day marriage law has disproportionately harmful consequences, both social and material, for the growing population of Black polyamorists in the U.S. For example, non-monogamists are more likely than their monogamist counterparts to have their relationship(s) subjected to social scrutiny, and less likely to have their relationships cohere with zoning laws restricting the number of “unrelated” people living in the same household. The ongoing ban on plural marriages in the U.S. generates interesting questions about what it might take to end non-monogamous oppression and to enact measures repairing the harms done by legal marriage to Black non-monogamists. And, as I argue in “Polyamory in Black,” a compelling rationale can be offered for thinking about Black reparations along these lines.


Should We Mourn the Loss of Work?

In this post, Caleb Althorpe (Trinity College Dublin) and Elizabeth Finneron-Burns (Western University) discuss their new open access article published in the Journal of Applied Philosophy, in which they discuss the moral goods and bads of a future without work.

Photo by Possessed Photography on Unsplash

It is an increasingly held view that technological advancement will bring about a ‘post-work’ future, because recent technologies such as artificial intelligence (AI) and machine learning have the potential to replace not just complex physical tasks but also complex mental ones. In a world where robots are beginning to perform surgeries independently and where AI can outperform professional human lawyers, it does not seem absurd to predict that at some point in the next few centuries productive human labour could become redundant.

In our recent paper, we grant this prediction and ask: would a post-work future be a good thing? Some people think that a post-work world would be a kind of utopia (‘a world free from toil? Sign me up!’). But because a range of nonpecuniary benefits is associated with work, a post-work future might be problematic.


If animals have rights, why not bomb slaughterhouses?

In this post, Nico Müller (U. of Basel) and Friderike Spang (U. of Lausanne) discuss their new article published in the Journal of Applied Philosophy, in which they look at the relation between animal rights and violent forms of activism. They argue that violent activism frequently backfires, doing more harm than good to the animal rights cause.

Created with DALL·E (2024)

In 2022 alone, some ten billion land animals were killed in US slaughterhouses. That’s ten billion violations of moral rights, at least if many philosophers since the 1960s (and some before that) have got it right. If the victims were human, most of us would condone the use of violence, even lethal violence, in their defense. So regardless of whether you agree with the values of the animal rights movement, you may wonder: Why isn’t this movement much more violent? It seems like it should be, on its own terms.


When whatever you do, you get what you least deserve

In this post, David Benatar (U. Cape Town) discusses his article recently published in the Journal of Applied Philosophy on the paradox of desert, exploring the issues that arise from ‘acting rightly’ and the costs it may incur.


(C) David Benatar. Camondo Stairs, Galata, Istanbul, 2022

Imagine that you are a soldier fighting a militia that is embedded within an urban civilian population. You face situations in which, in the fog of war, you are unsure whether the person you confront is a civilian or a combatant, not least because the combatants you are fighting often dress like civilians. You can either shoot and ask questions later, or you can pause, even momentarily, to take stock, and risk being shot.

Depending on the precise circumstances, pausing may be either a moral requirement or merely supererogatory (that is, a case of going beyond the call of duty). Either way, the soldier who pauses is morally superior to the soldier who shoots without hesitation. However, there will be situations in which a soldier is killed precisely because he acted in the morally better way.


How the animal industry undermines consumers’ autonomy

In this post, Rubén Marciel (UPF and UB) and Pablo Magaña (UPF) discuss their article recently published in the Journal of Applied Philosophy on the ethical legitimacy of misleading commercial speech for ‘green’ or ‘ethically produced’ animal products.

Photo by Mae Mu with Unsplash Licence.

Invisible discrimination: the double role of implicit bias

In this post, Katharina Berndt Rasmussen (Stockholm University & Institute for Futures Studies) discusses her recently published article in the Journal of Applied Philosophy (co-authored by Nicolas Olsson Yaouzis) exploring the roles that implicit bias and social norms play in discriminating hiring practices.


The US, like many other countries, is marked by pervasive racial inequalities, not least in the job market. Yet many US Americans, when asked directly, uphold egalitarian “colour-blind” norms: one’s race shouldn’t matter for one’s chances to get hired. Sure enough, there is substantial disagreement about whether it (still) does matter, but most agree that it shouldn’t. Given such egalitarian attitudes, one would expect there to be very little hiring discrimination. The puzzle is how then to explain the racial inequalities in hiring outcomes.

A second puzzle is the frequent occurrence of complaints about “reverse discrimination” in contexts such as the US. “You only got the job because you’re black” is a reaction familiar to many who do get a prestigious job while being black, as it were. Why are people so suspicious when racial minorities are hired?


Countering Social Oppression

In this post, Suzy Killmister (Monash) discusses her recently published article in the Journal of Applied Philosophy, answering the question: what, if anything, can members of oppressed groups do to counter that oppression?

© Adam Fagen (CC BY-NC-SA 2.0)

During the Memphis Sanitation Strike of 1968, protesters marched through the streets carrying signs bearing the slogan ‘I Am a Man’. Today, protesters march through the streets carrying signs declaring ‘Trans Rights are Human Rights’, while others proclaim ‘No Human is Illegal’. What’s going on here? And more importantly, what explains the rhetorical power of such statements?
