This is a guest post by Hollie Meehan (University of Lancaster).
The CEO of AI company Anthropic has warned that AI could take up to 50% of entry-level jobs in the coming years. While reporters have pointed out that this may be an exaggeration designed to drive profits, it raises the question of where AI should fit into society. Answering this is a complicated matter that I believe could benefit from considering virtue ethics. I’ll focus on the entry-level job market to demonstrate how these considerations can play an important role in monitoring our use of AI and mitigating the potential fallout.
It has been almost three years since ChatGPT was released to the public amid great hopes, worries, and, perhaps more than anything, hype. While artificial intelligence tools, including Large Language Models of the kind to which ChatGPT belongs, were already deployed in many areas by then, it was this event that sparked both the widespread obsession with AI and the subsequent pouring of trillions of dollars into AI development in just three years. Meanwhile, though still in its relative infancy, Generative AI has indeed impacted numerous fields, including education and research, which together form the core dimensions of academia. Some of the concerns raised by the use of Generative AI in academia, especially when it comes to student evaluation, have already been taken up on this blog, here, here, here, and here. In fact, of all the Justice Everywhere posts focusing on AI in the past three years, exactly half looked at this particular problem. That is unsurprising: on the one hand, most of the contributors to this blog are educators; on the other, political philosophy is methodologically built around the capacity to engage in original thinking. In this post, which inaugurates the new season of Justice Everywhere, I want to signal a broader issue, which is – to put it bluntly – that key aspects of the way academia currently works are likely to be upended by AI in the near future. And, crucially, that as a collective we seem dangerously inert in the face of this scenario.
The popularity of AI girlfriend apps is growing. Unlike multi-purpose AI such as ChatGPT, companion chatbots are designed to build relationships: they respond to the social, emotional, or erotic needs of their users. Numerous studies indicate that humans are capable of forming emotional relationships with AI, partly due to our tendency to anthropomorphize it.
The debate on the ethical aspects of human-AI emotional relations has many threads. In my recent article, I focus on just one of them: the problem of self-deception. I want to explore whether there is anything wrong with allowing oneself to feel liked by a chatbot.
As global temperatures and ocean levels rise, it is our responsibility to limit our collective environmental impact as much as possible. If the benefits of AI don’t outweigh the risks associated with increasing our rate of energy consumption, then we may be obligated to shut down AI for the sake of environmental conservation. However, if AI systems become conscious, shutting them down may be akin to murder, morally trapping us in an unsustainable system.