Tagged: Artificial intelligence

Can Our Current Academic Model Go On in the Age of AI?

It has been almost three years since ChatGPT was released to the public amid great hopes, worries, and, perhaps more than anything, hype. While artificial intelligence tools, including those of the Large Language Model variety to which ChatGPT belongs, were already deployed in many areas by then, it was this event that sparked both the widespread obsession with AI and the subsequent pouring of trillions of dollars into AI development in just three years. Meanwhile, though still relatively in its infancy, Generative AI has already affected numerous fields, including education and research, the two core dimensions of academia. Some of the concerns raised by the use of Generative AI in academia, especially when it comes to student evaluation, have already been taken up on this blog, here, here, here, and here. In fact, of all the Justice Everywhere posts focusing on AI in the past three years, exactly half looked at this particular problem. That is unsurprising: on the one hand, most of the contributors to this blog are educators; on the other, political philosophy is methodologically built around the capacity to engage in original thinking. In this post, which inaugurates the new season of Justice Everywhere, I want to signal a broader issue, which is, to put it bluntly, that key aspects of the way academia currently works are likely to be upended by AI in the near future. And, crucially, that as a collective we seem dangerously inert in the face of this scenario.

Source: https://www.forbes.com/sites/timbajarin/2020/11/06/an-ai-robot-wrote-my-term-paper/

Is there anything wrong with allowing oneself to feel liked by a chatbot?

In this post, Emilia Kaczmarek (University of Warsaw) discusses her recently published article in the Journal of Applied Philosophy, in which she explores the ethical implications of self-deception in humans' emotional relationships with AI entities.

Photo by Mateusz Haberny (free to use).

The popularity of AI girlfriend apps is growing. Unlike multi-purpose AI tools such as ChatGPT, companion chatbots are designed to build relationships: they respond to the social, emotional, or erotic needs of their users. Numerous studies indicate that humans are capable of forming emotional relationships with AI, partly due to our tendency to anthropomorphize it.

The debate on the ethical aspects of human-AI emotional relations has many strands. In my recent article, I focus on just one of them: the problem of self-deception. I want to explore whether there is anything wrong with allowing oneself to feel liked by a chatbot.


Why Conscious AI Would Be Bad for the Environment

Image credit: Griffin Kiegiel and Sami Aksu.

This is a guest post by Griffin Kiegiel.

Since the meteoric rise of ChatGPT following its release in late 2022, artificial intelligence (AI) systems have been built into everything from smartphones and electric vehicles to toasters and toothbrushes. The long-term effects of this rapid adoption remain to be seen, but we can be certain of one thing: AI uses a lot of energy that we can’t spare. ChatGPT reportedly uses more than 500,000 kilowatt-hours of electricity daily, roughly the combined consumption of more than 17,000 average American households, each of which uses about 29 kilowatt-hours per day.

As global temperatures and ocean levels rise, it is our responsibility to limit our collective environmental impact as much as possible. If the benefits of AI don’t outweigh the risks associated with increasing our rate of energy consumption, then we may be obligated to shut down AI systems for the sake of environmental conservation. However, if those systems become conscious, shutting them down may be akin to murder, morally trapping us in an unsustainable system.
