Category: Technology

Should Universities Restrict Generative AI?

In this post, Karl de Fine Licht (Chalmers University of Technology) discusses his article recently published in the Journal of Applied Philosophy on the moral concerns of banning Generative AI in universities.

Rethinking the Ban

Universities face a challenging question: what should they do when the tools that help students learn also raise serious moral concerns?

Generative AI (GenAI) tools like ChatGPT offer immediate feedback, personalized explanations, and writing or coding assistance. But they also raise concerns: energy and water use, exploitative labor, privacy risks, and fears of academic dishonesty. In response, some universities have banned or severely restricted these tools. That may seem cautious or principled—but is it the right response?

In a recent academic paper, I argue that while these concerns are real, banning GenAI is often not the most justified or effective approach. Universities should instead pursue responsible engagement: adopting ethical procurement practices, educating students on thoughtful use, and leveraging their institutional influence to demand better standards.

Do Bans Make a Difference?

Many arguments for bans focus on harm. Using GenAI may contribute to carbon emissions, involve labor under poor conditions, or jeopardize student privacy. But how much difference does banning GenAI at a single university make?

Not much. Models are trained centrally and used globally. Universities typically rely on existing, pre-trained models—so their marginal contribution to emissions or labor practices is negligible. Even if all universities banned GenAI, it’s not clear this would shift global AI development or halt resource use. Worse, bans may backfire: students may use GenAI anyway, without oversight or support, leading to worse outcomes for learning and equity.

Take climate impact. Training models like GPT-4 requires substantial energy and water. But universities rarely train their own models; they use centralized ones whose training would happen regardless. Further, GenAI’s daily use is becoming more efficient, and in some applications—like architectural design, biodiversity monitoring, and climate modeling—GenAI may even help reduce emissions. The better route is for universities to demand energy-efficient models, support green cloud services, and explore carbon offsetting—not prohibit tools whose use is educationally beneficial and environmentally marginal.

Or consider labor exploitation. Training GenAI models often relies on underpaid workers performing harmful tasks, especially in the Global South. That’s a serious ethical issue. But again, banning use doesn’t necessarily help those workers or change the system. Universities could instead pressure companies to raise labor standards—leveraging their roles as clients, research partners, and talent suppliers. This collective influence is more likely to yield ethical improvements than a local ban.

The Reductio Problem

Even if you think universities are morally complicit by using GenAI—regardless of impact—you face a further problem: consistency. If the right response to morally tainted technologies is prohibition, why stop with GenAI?

Much of higher education depends on digital infrastructure. Computers, smartphones, and servers are produced under similarly problematic labor and environmental conditions. If the logic is “avoid complicity by avoiding use,” then many standard technologies should also be banned. But that leads to a reductio: if universities adopted this policy consistently, they would be unable to function.

This doesn’t mean ethical concerns should be ignored. Rather, it shows that avoiding all complicity isn’t feasible—and that universities must find ways to act responsibly within imperfect systems. The challenge is to engage critically and constructively, not withdraw.

Hidden Costs of Prohibition

There are also moral costs to banning GenAI.

Students continue to use AI tools, but in secret. This undermines educational goals and privacy protections. Vulnerable students—those with fewer resources, time, or academic support—may be most reliant on GenAI, and most harmed by a ban. When students use unvetted tools outside institutional guidance, the risks to their privacy and integrity increase.

Instead of banning GenAI, universities can offer licensed, secure tools and educate students on their appropriate use. This supports both ethical awareness and academic integrity. Just as we teach students to cite sources or evaluate evidence, we should teach them to engage with GenAI responsibly.

Setting Ethical Precedents

Some argue that even small contributions to harm are morally significant—especially when institutions help normalize problematic practices. But even if that’s true, it doesn’t follow that bans are the best response.

A more constructive alternative is to model responsible AI use. That includes setting ethical procurement standards, embedding AI literacy in curricula, and advocating for transparency and fair labor. Universities, especially when acting collectively, have leverage to influence AI providers. They can demand tools that respect privacy, reduce emissions, and avoid exploitative labor.

In other words, universities should take moral leadership—not by withdrawing, but by shaping the development and use of GenAI.

Choosing a Better Path

GenAI is not going away. The real question is how we engage with it—and on whose terms. Blanket bans may seem safe or principled, but they often achieve little and may create new harms.

Instead, universities should adopt a balanced approach. Acknowledge the risks. Respond to them—through institutional advocacy, ethical licensing, and student education. But also recognize the benefits of GenAI and prepare students to use it well.

In doing so, universities fulfill both moral and educational responsibilities: not by pretending GenAI doesn’t exist, but by helping shape the future it creates.

Karl de Fine Licht is Associate Professor in Ethics and Technology at Chalmers University of Technology. His research focuses on the ethical and societal implications of artificial intelligence, with particular emphasis on public decision-making and higher education. He has published extensively on trustworthy AI, generative AI, and justice in technology governance, and regularly advises public and academic institutions on responsible AI use.

Writing Assignments in the Age of GenAI

If GenAI can consistently produce A-grade articles across disciplines (for now, it seems it can’t, but it soon might), do we still need students to learn the art of writing well-researched, long-form texts? More importantly, do we need to test how well each student has mastered this art?

(more…)

Just do(pe) it? Why the academic project is at risk from proposals to pharmacologically enhance researchers.

In this post, Heidi Matisonn (University of Cape Town) and Jacek Brzozowski (University of KwaZulu-Natal) discuss their recently published article in the Journal of Applied Philosophy in which they explore the justifiability and potential risks of cognitive enhancement in academia.

Image created with ChatGPT.

The human desire to enhance our cognitive abilities, to push the boundaries of intelligence through education, tools, and technology, has a long history. Fifteen years ago, confronted by the possibility that a ‘morally corrupt’ minority could misuse cognitive gains to catastrophic effect, Persson and Savulescu proposed that research into cognitive enhancement should be halted unless accompanied by advancements in moral enhancement.

In response to this, and following on from Harris’ worries about the mass suffering that could result from delaying cognitive enhancement until moral enhancement could catch up, in 2023 Gordon and Ragonese offered what they termed a ‘practical approach’ to cognitive enhancement research, in which they advocated for targeted cognitive enhancement, specifically for researchers working on moral enhancement.

Our recent article in the Journal of Applied Philosophy suggests that while both sets of authors are correct in their concerns about the significant risks related to cognitive enhancement outrunning moral enhancement, their focus on the ‘extremes’ neglects some more practical consequences that a general acceptance of cognitive enhancement may bring — not least of which relate to the academic project itself.

(more…)

From the Vault: Universities, Academia and the academic profession

While Justice Everywhere takes a short break over the summer, we recall some of the highlights from our 2023-24 season. 

Trinity College Library, Dublin. antomoro (FAL), via Wikimedia Commons

Here are a few highlights from this year’s posts relating to academia, the modern university, and the academic profession:

Stay tuned for even more on this topic in our 2024-25 season!

***

Justice Everywhere will return in full swing in September with fresh weekly posts by our cooperative of regular authors (published on Mondays), in addition to our Journal of Applied Philosophy series and other special series (published on Thursdays). If you would like to contribute a guest post on a topical justice-based issue (broadly construed), please feel free to get in touch with us at justice.everywhere.blog@gmail.com.

Why Conscious AI Would Be Bad for the Environment

Image credit to Griffin Kiegiel and Sami Aksu

This is a guest post by Griffin Kiegiel.

Since the meteoric rise of ChatGPT in late 2022, artificial intelligence (AI) systems have been built into everything from smartphones and electric vehicles to toasters and toothbrushes. The long-term effects of this rapid adoption remain to be seen, but we can be certain of one thing: AI uses a lot of energy that we can’t spare. ChatGPT reportedly uses more than 500,000 kilowatt-hours of electricity daily, which is massive compared to the roughly 29 kilowatt-hours the average American household consumes per day.
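To put those reported figures in perspective, here is a minimal back-of-the-envelope comparison. The numbers are the estimates quoted above, not measured values, and the calculation is purely illustrative:

```python
# Rough comparison of ChatGPT's reported daily electricity use with the
# average US household's daily consumption. Both figures are the estimates
# quoted in the text, not measured values.

CHATGPT_KWH_PER_DAY = 500_000    # reported estimate for ChatGPT
HOUSEHOLD_KWH_PER_DAY = 29       # average US household, per day

equivalent_households = CHATGPT_KWH_PER_DAY / HOUSEHOLD_KWH_PER_DAY
print(f"~{equivalent_households:,.0f} households' worth of daily electricity")
# -> ~17,241 households' worth of daily electricity
```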

As the global temperature and ocean levels rise, it is our responsibility to limit our collective environmental impact as much as possible. If the benefits of AI don’t outweigh the risks associated with increasing our rate of energy consumption, then we may be obligated to shut down AI for the sake of environmental conservation. However, if AI becomes conscious, shutting them down may be akin to murder, morally trapping us in an unsustainable system.

(more…)

The Disruption of Human Reproduction

This is already my third post about ectogestative technology, better known as “artificial womb technology”. While in the first post I explored the idea that this technology could potentially advance gender justice, in the second I approached it from the perspective of post-phenomenology. In this third post, I look at the technology as an example of a socially disruptive technology. Ongoing research in the philosophy of technology investigates the ways in which 21st-century technologies such as artificial intelligence, synthetic biology, gene editing, and climate engineering affect “deeply held beliefs, values, social norms, and basic human capacities”, as well as “basic human practices, fundamental concepts, [and] ontological distinctions”. These technologies deeply affect us as human beings, our relationship to other parts of nature such as non-human animals and plants, and the societies we live in. In this post, I sketch the potential disruptive effects of ectogestative technology on practices, norms, and concepts related to expecting and having children.

(more…)

What’s really at stake with Open Access research? The Case of Sci-Hub and the “Robin Hood of Science”

A mural dedicated to Sci-Hub at UNAM. Txtdgtl, CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0, via Wikimedia Commons

This is a guest post by Georgiana Turculet (Universitat Pompeu Fabra).

In his recently published “More Open Access, More Inequality in the Academia”, Alex Volacu aptly criticizes present Open Access (OA) policies for giving rise to structural inequalities among researchers and increasing revenues only for publishers. His analysis is aimed at contextualizing some recent academic events, namely the resignation of the board of the well-known Journal of Political Philosophy due to pressures from publishers to increase the intake of open-access publications. However, it would benefit from considering the wider context of recent alternative forms of resistance to corporate publishers’ pressures.

(more…)

Driving for Values

Smart cities are full of sensors and collect large amounts of data. One reason for doing so is to get real-time information about traffic flows. A next step is to steer the traffic in a way that contributes to the realisation of values such as safety and sustainability. Think of steering cars around schools to improve the safety of children, or of keeping certain areas car-free to improve air quality. Is it legitimate for cities to nudge their citizens to make moral choices when participating in traffic? Would a system that limits a person’s options for the sake of improving quality of life in the city come at the cost of restricting that person’s autonomy?

In a transdisciplinary research project, we (members of the ESDiT programme and the Responsible Sensing Lab) explored how a navigation app that suggests routes based on shared values would affect users’ experiences of autonomy. We did so by letting people try out speculative prototypes of such an app on a mobile phone and asking them questions about how they experienced its different features. Through several interviews and a focus group, we gained insights into the conditions under which people find such an app acceptable and into the features that increase or decrease their sense of autonomy.
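As a purely illustrative sketch (not the prototype used in the project), one way a navigation app could trade travel time off against shared values is to score candidate routes with a weighted cost; the weights, penalties, and route attributes below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    travel_time_min: float       # estimated travel time in minutes
    school_zone_km: float        # distance driven through school zones
    clean_air_zone_km: float     # distance driven through clean-air areas

# Hypothetical value weights: higher means the user cares more about that value.
WEIGHTS = {"safety": 3.0, "air_quality": 2.0}

def route_cost(route: Route) -> float:
    """Score a route as travel time plus penalties for value conflicts."""
    safety_penalty = WEIGHTS["safety"] * route.school_zone_km
    air_penalty = WEIGHTS["air_quality"] * route.clean_air_zone_km
    return route.travel_time_min + safety_penalty + air_penalty

routes = [
    Route("fastest", travel_time_min=18.0, school_zone_km=1.2, clean_air_zone_km=0.8),
    Route("value-sensitive", travel_time_min=21.0, school_zone_km=0.0, clean_air_zone_km=0.0),
]
print("Suggested route:", min(routes, key=route_cost).name)
# -> value-sensitive, given these particular weights
```

With different weights the same scoring would favour the fastest route, which is one way of framing the autonomy question the project explores: who sets the weights, and can users adjust them?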

(more…)

What Claims Do We Have Over Our Google Search Profiles?

This is a guest post by Hannah Carnegy-Arbuthnott (University of York).

We’ve all done things we regret. It used to be possible to comfort ourselves with the thought that our misadventures would soon be forgotten. In the digital age, however, not only is more of our personal information captured and recorded, but search engines can also serve up previously long-forgotten information at the click of a button.

(more…)

The diversity of values in virtual reality

In this post, Rami Ali (University of Arizona) discusses his recent article in the Journal of Applied Philosophy on the range of values possible in the virtual world.


AI-generated image created from Rami Ali’s prompt using OpenAI

Early in The Matrix, Cypher confronts Neo with a question: “Why, oh why, didn’t I take that blue pill?” The confrontation is meaningful and significant. The red pill gave them their nonvirtual life outside the Matrix. But is that life really more valuable than their blue-pill life inside the Matrix? We’re invited to take a side, and it’s tempting to do so. But neither choice is right. In The Values of the Virtual I argue that virtual items are not less or more valuable, nor of equal or sui generis value, when compared to their nonvirtual counterparts. Or more aptly, they are all of these, depending on the virtual instance we have in mind. Taking sides short-changes the diversity of the virtual world and everything populating it, leaving us with less nuance than we need to understand and govern our virtual lives.

(more…)