Category: Education

Should Universities Restrict Generative AI?

In this post, Karl de Fine Licht (Chalmers University of Technology) discusses his article recently published in the Journal of Applied Philosophy on the moral concerns of banning Generative AI in universities.

Rethinking the Ban

Universities face a challenging question: what should they do when the tools that help students learn also raise serious moral concerns?

Generative AI (GenAI) tools like ChatGPT offer immediate feedback, personalized explanations, and writing or coding assistance. But they also raise concerns: energy and water use, exploitative labor, privacy risks, and fears of academic dishonesty. In response, some universities have banned or severely restricted these tools. That may seem cautious or principled—but is it the right response?

In a recent academic paper, I argue that while these concerns are real, banning GenAI is often not the most justified or effective approach. Universities should instead pursue responsible engagement: adopting ethical procurement practices, educating students on thoughtful use, and leveraging their institutional influence to demand better standards.

Do Bans Make a Difference?

Many arguments for bans focus on harm. Using GenAI may contribute to carbon emissions, involve labor under poor conditions, or jeopardize student privacy. But how much difference does banning GenAI at a single university make?

Not much. Models are trained centrally and used globally. Universities typically rely on existing, pre-trained models—so their marginal contribution to emissions or labor practices is negligible. Even if all universities banned GenAI, it’s not clear this would shift global AI development or halt resource use. Worse, bans may backfire: students may use GenAI anyway, without oversight or support, leading to worse outcomes for learning and equity.

Take climate impact. Training models like GPT-4 requires substantial energy and water. But universities rarely train their own models; they use centralized ones whose training would happen regardless. Further, GenAI’s daily use is becoming more efficient, and in some applications—like architectural design, biodiversity monitoring, and climate modeling—GenAI may even help reduce emissions. The better route is for universities to demand energy-efficient models, support green cloud services, and explore carbon offsetting—not prohibit tools whose use is educationally beneficial and environmentally marginal.

Or consider labor exploitation. Training GenAI models often relies on underpaid workers performing harmful tasks, especially in the Global South. That’s a serious ethical issue. But again, banning use doesn’t necessarily help those workers or change the system. Universities could instead pressure companies to raise labor standards—leveraging their roles as clients, research partners, and talent suppliers. This collective influence is more likely to yield ethical improvements than a local ban.

The Reduction Problem

Even if you think universities are morally complicit by using GenAI—regardless of impact—you face a further problem: consistency. If the right response to morally tainted technologies is prohibition, why stop with GenAI?

Much of higher education depends on digital infrastructure. Computers, smartphones, and servers are produced under similarly problematic labor and environmental conditions. If the logic is “avoid complicity by avoiding use,” then many standard technologies should also be banned. But that leads to a reductio: if universities adopted this policy consistently, they would be unable to function.

This doesn’t mean ethical concerns should be ignored. Rather, it shows that avoiding all complicity isn’t feasible—and that universities must find ways to act responsibly within imperfect systems. The challenge is to engage critically and constructively, not withdraw.

Hidden Costs of Prohibition

There are also moral costs to banning GenAI.

Students continue to use AI tools, but in secret. This undermines educational goals and privacy protections. Vulnerable students—those with fewer resources, time, or academic support—may be most reliant on GenAI, and most harmed by a ban. When students use unvetted tools outside institutional guidance, the risks to their privacy and integrity increase.

Instead of banning GenAI, universities can offer licensed, secure tools and educate students on their appropriate use. This supports both ethical awareness and academic integrity. Just as we teach students to cite sources or evaluate evidence, we should teach them to engage with GenAI responsibly.

Setting Ethical Precedents

Some argue that even small contributions to harm are morally significant—especially when institutions help normalize problematic practices. But even if that’s true, it doesn’t follow that bans are the best response.

A more constructive alternative is to model responsible AI use. That includes setting ethical procurement standards, embedding AI literacy in curricula, and advocating for transparency and fair labor. Universities, especially when acting collectively, have leverage to influence AI providers. They can demand tools that respect privacy, reduce emissions, and avoid exploitative labor.

In other words, universities should take moral leadership—not by withdrawing, but by shaping the development and use of GenAI.

Choosing a Better Path

GenAI is not going away. The real question is how we engage with it—and on whose terms. Blanket bans may seem safe or principled, but they often achieve little and may create new harms.

Instead, universities should adopt a balanced approach. Acknowledge the risks. Respond to them—through institutional advocacy, ethical licensing, and student education. But also recognize the benefits of GenAI and prepare students to use it well.

In doing so, universities fulfill both moral and educational responsibilities: not by pretending GenAI doesn’t exist, but by helping shape the future it creates.

Karl de Fine Licht is Associate Professor in Ethics and Technology at Chalmers University of Technology. His research focuses on the ethical and societal implications of artificial intelligence, with particular emphasis on public decision-making and higher education. He has published extensively on trustworthy AI, generative AI, and justice in technology governance, and regularly advises public and academic institutions on responsible AI use.

What we train need not be the same as what we assess: AI damage limitation in higher education

It has always been clear that ChatGPT's general availability means trouble for higher education. We knew that letting students use it to write essays would make it difficult, if not impossible, to assess their effort and progress, and would invite cheating. Worse, it was going to deprive them of learning the laborious art and skill of writing, which is good in itself as well as a necessary instrument for thinking clearly. The university years (and perhaps the last few years of high school, although, I worry, only for very few) are the chance to learn to write and to think. When there is quick, costless access to the final product, there is little incentive for students to engage in the process of creating that product themselves; and going through that process is, generally, a lot more valuable than the product itself. Last March, philosopher Troy Jollimore published a lovely essay on this theme. So we knew that unregulated developments in artificial intelligence are inimical to this main aim of higher education.

Even more concerning news is now starting to reach us: not only is the use of ChatGPT bad for students because the temptation to rely on it is too hard to withstand, but respectable studies, such as a recent one authored by scholars at MIT, show that AI has significant negative effects on users' cognitive abilities. The study indicates that the vast majority of people who use Large Language Models (LLMs) such as ChatGPT to write forget the AI-generated content within minutes. Neural connections in the group relying on their own intelligence alone were almost twice as strong as those in the group using LLMs. And regular users who were asked to write without the help of LLMs did worse than those who had never used ChatGPT at all. The authors of the study speak of a "cognitive debt": the more one relies on AI, the more thinking ability one loses. All these findings are true of most users; a silver lining, perhaps, is that users with very strong cognitive capacities displayed higher neural connectivity when using LLMs.

In short, LLMs are here to stay, at least until proper regulation – which is not yet on the horizon – kicks in; if this study is right, they can give valuable support to the more accomplished scholars (perhaps at various stages of their careers) while harming everybody else. Part of the university's job is to develop the latter group's cognitive abilities; encouraging students to use LLMs appears, in light of these considerations, a kind of malpractice. And assigning at-home essays is, in practice, encouragement.

(more…)

Ideology-critique in the classroom

Over the last few weeks, I have been marking exams for the economic ethics course I taught this year. The experience has not been particularly joyful. Admittedly, marking rarely is, but it gets worse when one develops a feeling of uselessness and failure, as I experienced on this occasion.

The source of this feeling was the realization of the grip of inegalitarian ideologies on my students. Since most of them were studying business, I should maybe have expected it, but I naïvely hoped that their ethics course might have led them to somewhat question their inegalitarian beliefs. And perhaps it has. It would take a combination of anonymous ex-ante and ex-post opinion surveys to measure it.

Whether it would be ethical to conduct such a survey is an interesting question (your opinions are welcome), but not the one I wanted to discuss in this post. The one I am concerned with is whether it would be acceptable, from an ethics of teaching perspective, to engage more straightforwardly in ideology-critique in my course, in the future.

My reaction when marking my exams.
(more…)

Writing Assignments in the age of Gen AI

If Gen AI can consistently produce A-grade articles across disciplines (for now, it seems it can't, but it likely will), do we still need students to learn the art of writing well-researched long-form texts? More importantly, do we need to test how well each student has mastered this art?

(more…)

Utopia, Dystopia, and Democracy: Teaching Philosophy in Wartime Ukraine

Karazin Business School, Kharkiv, July 2022. Photography by Aaron J. Wendland.

This is a guest post by Orysya Bila (Ukrainian Catholic University) and Joshua Duclos (St Paul’s School), as part of the Reflections on the Russia-Ukraine War series, organized by Aaron James Wendland. This is an edited version of an article published in Studia Philosophica Estonica. Justice Everywhere will publish edited versions of several of the papers from this special issue over the next few weeks.

Why teach philosophy in wartime Ukraine? It’s a fair question. It’s a necessary question. Given the variety and gravity of Ukraine’s urgent needs, few will think to themselves: “But what about philosophy? Is Ukraine getting enough philosophy?” As two scholars committed to teaching philosophy in wartime Ukraine – one American, one Ukrainian – we believe an explanation is in order. 

(more…)

From the Vault: Universities, Academia and the academic profession

While Justice Everywhere takes a short break over the summer, we recall some of the highlights from our 2023-24 season. 

Trinity College Library, Dublin. antomoro (FAL), via Wikimedia Commons

Here are a few highlights from this year’s posts relating to academia, the modern university, and the academic profession:

Stay tuned for even more on this topic in our 2024-25 season!

***

Justice Everywhere will return in full swing in September with fresh weekly posts by our cooperative of regular authors (published on Mondays), in addition to our Journal of Applied Philosophy series and other special series (published on Thursdays). If you would like to contribute a guest post on a topical justice-based issue (broadly construed), please feel free to get in touch with us at justice.everywhere.blog@gmail.com.

What’s really at stake with Open Access research? The Case of Sci-Hub and the “Robin Hood of Science”

A mural dedicated to Sci-Hub at UNAM. Txtdgtl, CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0, via Wikimedia Commons

This is a guest post by Georgiana Turculet (Universitat Pompeu Fabra).

In his recently published “More Open Access, More Inequality in the Academia”, Alex Volacu aptly criticizes present Open Access (OA) policies for giving rise to structural inequalities among researchers and increasing revenues only for publishers. His analysis aims to contextualize some recent academic events, namely the resignation of the board of the well-known Journal of Political Philosophy due to pressure from publishers to increase the intake of open access publications. However, it would benefit from considering the wider context of recent alternative forms of resistance to corporate publishers’ pressures.

(more…)

Modern education systems erode trust – this may be a big problem.

Photo by lauren lulu taylor on Unsplash

As teachers, we find our work inescapably affected by a range of structural features, such as the marketisation and commodification of higher education, the erosion of benefits and of pay, and more. Many of these have been amply studied and debated, including on this blog. Today, however, I want to discuss a relatively underexplored dimension of all this – the slow erosion of trust between staff and students.

In a (higher) education setting, trust is an important value, for several reasons. For one, students are typically young adults and being given responsibility – and being trusted with that responsibility – is an important part of the process of growing up. I’m specifically inspired here by an approach to assessment known as ‘ungrading’. Regardless of the merits of the method, Jesse Stommel’s summary of the core philosophy of ungrading is something that needs to be taken extremely seriously: ‘start by trusting students’.

But it’s also a principled point. From a broadly Kantian perspective, one important aspect of ethical behaviour is respect for others as ‘ends in themselves’. While we may all occasionally jokingly remind each other that students’ brains haven’t fully developed yet, it is important to remember that this does not mean that they lack the capacity for autonomy. Indeed, because of their age, it is perhaps more important than ever to allow them to practise, or exercise, autonomy.

(more…)

Why We Should ‘Environmentalise’ the Curriculum

Outdoor Philosophy Session by the Critique Environmental Working Group: Place-Based Ecological Reflection Exercise in Holyrood Park, Edinburgh. Photo supplied by authors.

This is a guest post in Justice Everywhere’s Teaching Philosophy series. It is written by Talia Shoval, Grace Garland, and Joseph Conrad of the Environmental Working Group of the University of Edinburgh’s Centre for Ethics and Critical Thought (Critique).

In this blogpost, we share insights from the exploratory journey we undertook into ‘environmentalising’ the curriculum: a project aimed at bringing the environment to the fore of learning and teaching in higher education. After briefly explaining the guiding rationale, we sketch the contours of the environmentalising project and suggest trajectories for moving forward.

As political theorists working on issues concerning the environment, we start from the working observation that environmental issues tend to be downplayed—or worse, altogether overlooked—in the context of academic learning and teaching, as well as in scholarly research. The environment, when it is mentioned, is often treated as a miscellaneous category, an ‘Other’ that falls outside the remit of human affairs and merely forms their backdrop. This tendency is exemplified by the lack of environmental materials in syllabi across the social sciences and humanities. Even when environmental issues are present, they are discussed, more often than not, in human-centred ways. Set against the evidence of environmental degradation all around us, this felt odd, and somewhat disquieting. Our initial intuition told us that the environment should take up much more space in academic curricula and in common research, learning, and teaching practices—even in the social sciences, including politics and ethics.

(more…)

Taking political education out of families

Political education can be defined as the process by which people come to form political judgments – how they evaluate different political parties and issues of public policy, basically. The primary context of political education is the family. It is in this environment that people are first exposed to political judgments and inculcated with political values. It should come as no surprise that, as a result, many (if not most) people remain faithful to their parents’ political orientations, as research in political sociology often reports. Fortunately, though, political education is not reducible to family transmission. As they grow up, kids become more and more exposed to different political views, be it in school or within their social network, and they can be influenced by all sorts of people and events in this process. It remains true, however, that in the absence of a strong countervailing educational process, families are the main driver of political education in most if not all countries. Should we be happy with this situation?

(more…)