Category: Academia

Should Universities Restrict Generative AI?

In this post, Karl de Fine Licht (Chalmers University of Technology) discusses his article, recently published in the Journal of Applied Philosophy, on whether moral concerns justify banning Generative AI in universities.

Rethinking the Ban

Universities face a challenging question: what should they do when the tools that help students learn also raise serious moral concerns?

Generative AI (GenAI) tools like ChatGPT offer immediate feedback, personalized explanations, and writing or coding assistance. But they also raise concerns: energy and water use, exploitative labor, privacy risks, and fears of academic dishonesty. In response, some universities have banned or severely restricted these tools. That may seem cautious or principled—but is it the right response?

In a recent academic paper, I argue that while these concerns are real, banning GenAI is often not the most justified or effective approach. Universities should instead pursue responsible engagement: adopting ethical procurement practices, educating students on thoughtful use, and leveraging their institutional influence to demand better standards.

Do Bans Make a Difference?

Many arguments for bans focus on harm. Using GenAI may contribute to carbon emissions, involve labor under poor conditions, or jeopardize student privacy. But how much difference does banning GenAI at a single university make?

Not much. Models are trained centrally and used globally. Universities typically rely on existing, pre-trained models—so their marginal contribution to emissions or labor practices is negligible. Even if all universities banned GenAI, it’s not clear this would shift global AI development or halt resource use. Worse, bans may backfire: students may use GenAI anyway, without oversight or support, leading to worse outcomes for learning and equity.

Take climate impact. Training models like GPT-4 requires substantial energy and water. But universities rarely train their own models; they use centralized ones whose training would happen regardless. Further, GenAI’s daily use is becoming more efficient, and in some applications—like architectural design, biodiversity monitoring, and climate modeling—GenAI may even help reduce emissions. The better route is for universities to demand energy-efficient models, support green cloud services, and explore carbon offsetting—not prohibit tools whose use is educationally beneficial and environmentally marginal.

Or consider labor exploitation. Training GenAI models often relies on underpaid workers performing harmful tasks, especially in the Global South. That’s a serious ethical issue. But again, banning use doesn’t necessarily help those workers or change the system. Universities could instead pressure companies to raise labor standards—leveraging their roles as clients, research partners, and talent suppliers. This collective influence is more likely to yield ethical improvements than a local ban.

The Reduction Problem

Even if you think universities are morally complicit by using GenAI—regardless of impact—you face a further problem: consistency. If the right response to morally tainted technologies is prohibition, why stop with GenAI?

Much of higher education depends on digital infrastructure. Computers, smartphones, and servers are produced under similarly problematic labor and environmental conditions. If the logic is “avoid complicity by avoiding use,” then many standard technologies should also be banned. But that leads to a reductio: if universities adopted this policy consistently, they would be unable to function.

This doesn’t mean ethical concerns should be ignored. Rather, it shows that avoiding all complicity isn’t feasible—and that universities must find ways to act responsibly within imperfect systems. The challenge is to engage critically and constructively, not withdraw.

Hidden Costs of Prohibition

There are also moral costs to banning GenAI.

Students continue to use AI tools, but in secret. This undermines educational goals and privacy protections. Vulnerable students—those with fewer resources, time, or academic support—may be most reliant on GenAI, and most harmed by a ban. When students use unvetted tools outside institutional guidance, the risks to their privacy and integrity increase.

Instead of banning GenAI, universities can offer licensed, secure tools and educate students on their appropriate use. This supports both ethical awareness and academic integrity. Just as we teach students to cite sources or evaluate evidence, we should teach them to engage with GenAI responsibly.

Setting Ethical Precedents

Some argue that even small contributions to harm are morally significant—especially when institutions help normalize problematic practices. But even if that’s true, it doesn’t follow that bans are the best response.

A more constructive alternative is to model responsible AI use. That includes setting ethical procurement standards, embedding AI literacy in curricula, and advocating for transparency and fair labor. Universities, especially when acting collectively, have leverage to influence AI providers. They can demand tools that respect privacy, reduce emissions, and avoid exploitative labor.

In other words, universities should take moral leadership—not by withdrawing, but by shaping the development and use of GenAI.

Choosing a Better Path

GenAI is not going away. The real question is how we engage with it—and on whose terms. Blanket bans may seem safe or principled, but they often achieve little and may create new harms.

Instead, universities should adopt a balanced approach. Acknowledge the risks. Respond to them—through institutional advocacy, ethical licensing, and student education. But also recognize the benefits of GenAI and prepare students to use it well.

In doing so, universities fulfill both moral and educational responsibilities: not by pretending GenAI doesn’t exist, but by helping shape the future it creates.

Karl de Fine Licht is Associate Professor in Ethics and Technology at Chalmers University of Technology. His research focuses on the ethical and societal implications of artificial intelligence, with particular emphasis on public decision-making and higher education. He has published extensively on trustworthy AI, generative AI, and justice in technology governance, and regularly advises public and academic institutions on responsible AI use.

Teaching Freedom: Revisiting Berlin’s Two Concepts

A poster on a tiled wall with an ornate pattern, showing a hand with a broken chain and the text "Libertad es Sagrada" ("Freedom is Sacred"). Photo provided by author.

This is a guest post by Nick Boden (University of Bristol).

Teachers and academics face questions relating to freedom each day. How will students engage with the material? How should students conduct themselves in the learning environment? Are students free to choose tasks, or are their choices constrained by the practitioner's preferred methods? These questions place instructors at the centre of an ongoing debate about freedom. Is freedom simply the absence of constraints? Or is there more going on?

At first glance, Isaiah Berlin's (1958) distinction between positive and negative freedom offers a useful framework. Positive freedom can be thought of as the "freedom to": rules or regulations are put in place to increase the options available to you. Negative freedom is "freedom from" constraints: barriers are removed and options become available to you. For example, advocates of negative freedom would say that being left alone to make decisions and choices increases freedom. Advocates of positive freedom, by contrast, would welcome things like welfare funding, which curtails the "freedom from" taxes in order to provide the "freedom to" buy basic goods whilst unemployed – a form of collective freedom.

(more…)

What we train need not be the same as what we assess: AI damage limitation in higher education

It has always been clear that ChatGPT's general availability means trouble for higher education. We knew that letting students use it for writing essays would make it difficult, if not impossible, to assess their effort and progress, and would invite cheating. Worse, it was going to deprive them of learning the laborious art and skill of writing, which is good in itself as well as a necessary instrument for thinking clearly. University years (and perhaps the last few years of high school, although, I worry, only for very few) are the chance to learn to write and to think. When there is quick, costless access to the final product, there is little incentive for students to engage in the process of creating that product themselves; and going through that process is, generally, a lot more valuable than the product itself. Last March, philosopher Troy Jollimore published a lovely essay on this theme. So we knew that unregulated developments in artificial intelligence are inimical to this central aim of higher education.

Even more concerning news is now starting to find us: not only is the use of ChatGPT bad for students because the temptation to rely on it is too hard to withstand, but respectable studies, such as a recent one authored by scholars at MIT, show that AI has significant negative effects on users' cognitive abilities. The study indicates that the vast majority of people who use Large Language Models (LLMs), such as ChatGPT, in order to write forget the AI-generated content within minutes. Neural connections in the group relying on natural intelligence alone were almost twice as strong as those in the group using LLMs. And regular users who were asked to write without the help of LLMs did worse than those who had never used ChatGPT at all. The authors of the study speak of a "cognitive debt": the more one relies on AI, the more thinking ability one loses. These findings hold for most users; a silver lining, perhaps, is that users with very strong cognitive capacities displayed stronger neural connectivity when using LLMs.

In short, LLMs are here to stay, at least until proper regulation – which is not yet on the horizon – kicks in; if this study is right, they can give valuable support to the more accomplished scholars (perhaps at various stages of their careers) while harming everybody else. Part of the university's job is to develop the latter group's cognitive abilities; encouraging students to use LLMs appears, in light of these considerations, to be a kind of malpractice. And assigning take-home essays is, in practice, encouragement.

(more…)

Beyond the Ivory Tower Interview with Toby Buckle

This is the latest interview in our Beyond the Ivory Tower series, a conversation between Sara van Goozen and Toby Buckle. Toby Buckle runs the popular Political Philosophy Podcast. He has a BA in PPE from Oxford University and an MA in Political Philosophy from the University of York. He spent many years working with political and advocacy groups in the United States, such as the Human Rights Campaign, Environment America, the Working Families Party, and Amnesty International. He started his podcast around seven years ago, and has interviewed academics including Elizabeth Anderson, Orlando Patterson, Philip Pettit, and Cécile Fabre, as well as politicians (such as Senator Sherrod Brown and Civil Rights Commission Chair Mary Frances Berry), commentators (such as Ian Dunt), and public figures (such as Derek Guy, AKA the Menswear Guy). He is the editor of What is Freedom? Conversations with Historians, Philosophers, and Activists (Oxford University Press, 2021). He writes regularly for Liberal Currents. In this interview, we discuss running a podcast, the enduring relevance of historical philosophers, and what young academics can do to build a public profile.

(more…)

Call for Papers: “Ethical and Epistemological Issues in the Teaching of Politics”

Justice Everywhere is pleased to share the following call for papers:


The Centre for the Pedagogy of Politics (CPP) at UCL and the Teaching Political Theory Network (TPTN) at the University of York are co-organising a one-day workshop focussed on ethical and epistemological issues in the teaching of politics.

Date: Friday, 6 June 2025

Location: University College London

The teaching of politics is taken to include the teaching of all relevant sub-disciplines (e.g., political science, international relations, political theory) as well as activities that inform and support it (e.g., related pastoral and administrative activities).

The aim of the workshop is to provide a platform for educators and researchers to critically explore contemporary philosophical issues, scholarly debates, and innovative pedagogical approaches related to the central theme.

We welcome presentations, case studies, papers, and panel proposals that might address, but are not restricted to, the ethical and/or epistemological dimensions of:

  • the teaching of argumentation in politics;
  • background methodological choices/assumptions;
  • neutrality of teacher viewpoint;
  • freedom of speech in the classroom;
  • teaching controversial/offensive/upsetting topics;
  • inclusive classroom practices;
  • decolonising/liberating the curriculum;
  • differential treatment of students;
  • modes of assessment;
  • reducing the emphasis on grades;
  • use of A.I.;
  • programme design;
  • co-designing teaching materials with students;
  • aiming to enhance student employability;
  • the teaching of interdisciplinary subjects.

Please send your expression of interest and a short abstract of no more than 100 words to polsci.cpp@ucl.ac.uk by the end of Wednesday 9th April 2025.

We look forward to hearing from you soon!

Limits of language promotion

This post is written by Dr. Seunghyun Song (Assistant Professor, Tilburg University). Drawing on her research on linguistic justice, she offers a tentative answer to the question of the limits of the linguistic territoriality principle and its aim of protecting languages. She uses the Dutch case to illustrate these discussions.

Image by woodleywonderworks from Flickr (Creative Commons)

(more…)

Writing Assignments in the age of Gen AI

If Gen AI can consistently produce A-grade articles across disciplines (for now, it seems it can't, but it likely soon will), do we still need students to learn the art of writing well-researched, long-form texts? More importantly, do we need to test how well each student has mastered this art?

(more…)

Teaching students to be good

What’s the point of teaching moral and political philosophy?

Ancient philosophers around the world would have thought the answer to this question was blindingly obvious: the point is to make students better – better as citizens, rulers, or just as human beings.

Yet today I suspect very few academics would defend this position, and most would find the idea of inculcating virtue among their students to be silly at best, dangerous at worst.

I think the ancients were right on this one. We should educate our students to make them better moral and political agents. And I don’t think this has to be scarily illiberal at all – at least, that’s what I’m going to argue here.

The model of ethical discourse my students seem to be learning in secondary school
(more…)

Just do(pe) it? Why the academic project is at risk from proposals to pharmacologically enhance researchers.

In this post, Heidi Matisonn (University of Cape Town) and Jacek Brzozowski (University of KwaZulu-Natal) discuss their recently published article in the Journal of Applied Philosophy in which they explore the justifiability and potential risks of cognitive enhancement in academia.

Image created with ChatGPT.

The human desire to enhance our cognitive abilities – to push the boundaries of intelligence through education, tools, and technology – has a long history. Fifteen years ago, confronted by the possibility that a ‘morally corrupt’ minority could misuse cognitive gains to catastrophic effect, Persson and Savulescu proposed that research into cognitive enhancement should be halted unless accompanied by advancements in moral enhancement.

In response to this, and following on from Harris’ worries about the mass suffering that could result from delaying cognitive enhancement until moral enhancement could catch up, in 2023, Gordon and Ragonese offered what they termed a ‘practical approach’ to cognitive enhancement research in which they advocated for targeted cognitive enhancement – specifically, for researchers working on moral enhancement.

Our recent article in the Journal of Applied Philosophy suggests that while both sets of authors are correct in their concerns about the significant risks related to cognitive enhancement outrunning moral enhancement, their focus on the ‘extremes’ neglects some more practical consequences that a general acceptance of cognitive enhancement may bring — not least of which relate to the academic project itself.

(more…)

Utopia, Dystopia, and Democracy: Teaching Philosophy in Wartime Ukraine

Karazin Business School, Kharkiv, destroyed by shelling, July 2022. Photograph by Aaron J. Wendland.

This is a guest post by Orysya Bila (Ukrainian Catholic University) and Joshua Duclos (St Paul’s School), as part of the Reflections on the Russia-Ukraine War series, organized by Aaron James Wendland. This is an edited version of an article published in Studia Philosophica Estonica. Justice Everywhere will publish edited versions of several of the papers from this special issue over the next few weeks.

Why teach philosophy in wartime Ukraine? It’s a fair question. It’s a necessary question. Given the variety and gravity of Ukraine’s urgent needs, few will think to themselves: “But what about philosophy? Is Ukraine getting enough philosophy?” As two scholars committed to teaching philosophy in wartime Ukraine – one American, one Ukrainian – we believe an explanation is in order. 

(more…)