Category: Teaching

Should Universities Restrict Generative AI?

In this post, Karl de Fine Licht (Chalmers University of Technology) discusses his article recently published in the Journal of Applied Philosophy on the moral concerns of banning Generative AI in universities.

Rethinking the Ban

Universities face a challenging question: what should they do when the tools that help students learn also raise serious moral concerns?

Generative AI (GenAI) tools like ChatGPT offer immediate feedback, personalized explanations, and writing or coding assistance. But they also raise concerns: energy and water use, exploitative labor, privacy risks, and fears of academic dishonesty. In response, some universities have banned or severely restricted these tools. That may seem cautious or principled—but is it the right response?

In a recent academic paper, I argue that while these concerns are real, banning GenAI is often not the most justified or effective approach. Universities should instead pursue responsible engagement: adopting ethical procurement practices, educating students on thoughtful use, and leveraging their institutional influence to demand better standards.

Do Bans Make a Difference?

Many arguments for bans focus on harm. Using GenAI may contribute to carbon emissions, involve labor under poor conditions, or jeopardize student privacy. But how much difference does banning GenAI at a single university make?

Not much. Models are trained centrally and used globally. Universities typically rely on existing, pre-trained models—so their marginal contribution to emissions or labor practices is negligible. Even if all universities banned GenAI, it’s not clear this would shift global AI development or halt resource use. Worse, bans may backfire: students may use GenAI anyway, without oversight or support, leading to worse outcomes for learning and equity.

Take climate impact. Training models like GPT-4 requires substantial energy and water. But universities rarely train their own models; they use centralized ones whose training would happen regardless. Further, GenAI’s daily use is becoming more efficient, and in some applications—like architectural design, biodiversity monitoring, and climate modeling—GenAI may even help reduce emissions. The better route is for universities to demand energy-efficient models, support green cloud services, and explore carbon offsetting—not prohibit tools whose use is educationally beneficial and environmentally marginal.

Or consider labor exploitation. Training GenAI models often relies on underpaid workers performing harmful tasks, especially in the Global South. That’s a serious ethical issue. But again, banning use doesn’t necessarily help those workers or change the system. Universities could instead pressure companies to raise labor standards—leveraging their roles as clients, research partners, and talent suppliers. This collective influence is more likely to yield ethical improvements than a local ban.

The Reduction Problem

Even if you think universities are morally complicit by using GenAI—regardless of impact—you face a further problem: consistency. If the right response to morally tainted technologies is prohibition, why stop with GenAI?

Much of higher education depends on digital infrastructure. Computers, smartphones, and servers are produced under similarly problematic labor and environmental conditions. If the logic is “avoid complicity by avoiding use,” then many standard technologies should also be banned. But that leads to a reductio: if universities adopted this policy consistently, they would be unable to function.

This doesn’t mean ethical concerns should be ignored. Rather, it shows that avoiding all complicity isn’t feasible—and that universities must find ways to act responsibly within imperfect systems. The challenge is to engage critically and constructively, not withdraw.

Hidden Costs of Prohibition

There are also moral costs to banning GenAI.

Under a ban, students often continue to use AI tools, but in secret. This undermines educational goals and privacy protections. Vulnerable students (those with fewer resources, less time, or less academic support) may be most reliant on GenAI, and most harmed by a ban. When students use unvetted tools outside institutional guidance, the risks to their privacy and integrity increase.

Instead of banning GenAI, universities can offer licensed, secure tools and educate students on their appropriate use. This supports both ethical awareness and academic integrity. Just as we teach students to cite sources or evaluate evidence, we should teach them to engage with GenAI responsibly.

Setting Ethical Precedents

Some argue that even small contributions to harm are morally significant—especially when institutions help normalize problematic practices. But even if that’s true, it doesn’t follow that bans are the best response.

A more constructive alternative is to model responsible AI use. That includes setting ethical procurement standards, embedding AI literacy in curricula, and advocating for transparency and fair labor. Universities, especially when acting collectively, have leverage to influence AI providers. They can demand tools that respect privacy, reduce emissions, and avoid exploitative labor.

In other words, universities should take moral leadership—not by withdrawing, but by shaping the development and use of GenAI.

Choosing a Better Path

GenAI is not going away. The real question is how we engage with it—and on whose terms. Blanket bans may seem safe or principled, but they often achieve little and may create new harms.

Instead, universities should adopt a balanced approach. Acknowledge the risks. Respond to them—through institutional advocacy, ethical licensing, and student education. But also recognize the benefits of GenAI and prepare students to use it well.

In doing so, universities fulfill both moral and educational responsibilities: not by pretending GenAI doesn’t exist, but by helping shape the future it creates.

Karl de Fine Licht is Associate Professor in Ethics and Technology at Chalmers University of Technology. His research focuses on the ethical and societal implications of artificial intelligence, with particular emphasis on public decision-making and higher education. He has published extensively on trustworthy AI, generative AI, and justice in technology governance, and regularly advises public and academic institutions on responsible AI use.

Teaching Freedom: Revisiting Berlin’s Two Concepts

A poster on a tiled wall, showing a hand with a broken chain and the text "Libertad es Sagrada" ("Freedom is Sacred"). Photo provided by author.

This is a guest post by Nick Boden (University of Bristol)

Teachers and academics face questions relating to freedom each day. How will students engage with the material? How should students conduct themselves in the learning environment? Are students free to choose tasks, or are their choices constrained by the practitioner's preferred methods? These questions place instructors at the centre of an ongoing debate about freedom. Is freedom simply the absence of constraints? Or is there more going on?

At first glance, Isaiah Berlin's (1958) distinction between positive and negative freedom offers a useful framework. Positive freedom can be thought of as "freedom to": rules or regulations are put in place to increase the options available to you. Negative freedom is "freedom from" constraints: barriers are removed and options become available to you. For example, advocates of negative freedom would say that being left alone to make decisions and choices increases freedom. Advocates of positive freedom, by contrast, would welcome things like welfare funding, which curtails the "freedom from" taxes in order to provide the "freedom to" buy basic goods whilst unemployed. A form of collective freedom.


What we train need not be the same as what we assess: AI damage limitation in higher education

It has always been clear that ChatGPT’s general availability means trouble for higher education. We knew that letting students use it for writing essays would make it difficult, if not impossible, to assess their effort and progress, and would invite cheating. Worse, we knew it was going to deprive them of learning the laborious art and skill of writing, which is good in itself as well as a necessary instrument for thinking clearly. The university years (and perhaps the last few years of high school, although, I worry, only for very few) are the chance to learn to write and to think. When there is quick, costless access to the final product, there is little incentive for students to engage in the process of creating that product themselves; and going through that process is, generally, a lot more valuable than the product itself. Last March, philosopher Troy Jollimore published a lovely essay on this theme. So, we knew that unregulated developments in artificial intelligence are inimical to this main aim of higher education.

Even more concerning news is now starting to reach us: not only is the use of ChatGPT bad for students because the temptation to rely on it is too hard to withstand, but respectable studies, such as a recent one authored by scholars at MIT, show that AI has significant negative effects on users’ cognitive abilities. The study indicates that the vast majority of people using Large Language Models (LLMs), such as ChatGPT, in order to write, forget the AI-generated content within minutes. Neural connections in the group relying on natural intelligence alone were almost twice as strong as those in the group using LLMs. And regular users who were asked to write without the help of LLMs did worse than those who had never used ChatGPT at all. The authors of the study speak of a “cognitive debt”: the more one relies on AI, the more one’s thinking abilities erode. These findings hold for most users; a silver lining, perhaps, is that users with very strong cognitive capacities displayed stronger neural connectivity when using LLMs.

In short, LLMs are here to stay, at least until proper regulation – which is not yet on the horizon – kicks in; if this study is right, they can give valuable support to the more accomplished scholars (perhaps at various stages of their careers) while harming everybody else. Part of the university’s job is to develop the latter group’s cognitive abilities; encouraging students to use LLMs appears, in light of these considerations, to be a kind of malpractice. And assigning at-home essays is, in practice, encouragement.


Ideology-critique in the classroom

Over the last few weeks, I have been marking exams for the economic ethics course I taught this year. The experience has not been particularly joyful. Admittedly, marking rarely is, but it gets worse when one develops a feeling of uselessness and failure, as I experienced on this occasion.

The source of this feeling was the realization of the grip of inegalitarian ideologies on my students. Since most of them were studying business, I should maybe have expected it, but I naïvely hoped that their ethics course might have led them to somewhat question their inegalitarian beliefs. And perhaps it has. It would take a combination of anonymous ex-ante and ex-post opinion surveys to measure it.

Whether it would be ethical to conduct such a survey is an interesting question (your opinions are welcome), but not the one I wanted to discuss in this post. The one I am concerned with is whether it would be acceptable, from an ethics of teaching perspective, to engage more straightforwardly in ideology-critique in my course in the future.


Call for Papers: “Ethical and Epistemological Issues in the Teaching of Politics”

Justice Everywhere is pleased to share the following call for papers:


The Centre for the Pedagogy of Politics (CPP) at UCL and the Teaching Political Theory Network (TPTN) at the University of York are co-organising a one-day workshop focussed on ethical and epistemological issues in the teaching of politics.

Date: Friday, 6 June 2025

Location: University College London

The teaching of politics is taken to include the teaching of all relevant sub-disciplines (e.g., political science, international relations, political theory) as well as activities that inform and support it (e.g., related pastoral and administrative activities).

The aim of the workshop is to provide a platform for educators and researchers to critically explore contemporary philosophical issues, scholarly debates, and innovative pedagogical approaches related to the central theme.

We welcome presentations, case studies, papers, and panel proposals that might address, but are not restricted to, the ethical and/or epistemological dimensions of:

  • the teaching of argumentation in politics;
  • background methodological choices/assumptions;
  • neutrality of teacher viewpoint;
  • freedom of speech in the classroom;
  • teaching controversial/offensive/upsetting topics;
  • inclusive classroom practices;
  • decolonising/liberating the curriculum;
  • differential treatment of students;
  • modes of assessment;
  • reducing the emphasis on grades;
  • use of A.I.;
  • programme design;
  • co-designing teaching materials with students;
  • aiming to enhance student employability;
  • the teaching of interdisciplinary subjects.

Please send your expression of interest and a short abstract of no more than 100 words to polsci.cpp@ucl.ac.uk by the end of Wednesday 9th April 2025.

We look forward to hearing from you soon!

Writing Assignments in the age of Gen AI

If Gen AI can consistently produce A-grade articles across disciplines (for now, it seems it can't, but it likely will be able to), do we still need students to learn the art of writing well-researched, long-form texts? More importantly, do we need to test how well each student has mastered this art?


Teaching students to be good

What’s the point of teaching moral and political philosophy?

Ancient philosophers around the world would have thought the answer to this question was blindingly obvious: the point is to make students better – better as citizens, rulers, or just as human beings.

Yet today I suspect very few academics would defend this position, and most would find the idea of inculcating virtue among their students to be silly at best, dangerous at worst.

I think the ancients were right on this one. We should educate our students to make them better moral and political agents. And I don’t think this has to be scarily illiberal at all – at least, that’s what I’m going to argue here.


The Difficulty of Doing Non-Western Political Theory

I am currently designing an undergraduate course on ‘contemporary non-western political theory’, a task fraught with difficulties. Ever since I moved to Europe for my postgraduate studies, I have felt a certain discomfort with the ethnocentrism in analytical political theory departments here, which is at once apparent and not-so-apparent. Apparent, because 99% of the authors I read in a ‘global’ justice course or the scholars I meet at ‘international’ conferences turn out to be people who grew up and trained in the ‘west’. Not-so-apparent, because the content of the research taught and produced by these scholars is often genuinely universal. Questions such as ‘what justifies democracy?’, ‘is equality inherently valuable?’, or ‘what grounds human rights?’ can and often do have answers that transcend cultural particularities. That is, in fact, what attracted me to analytical political theory in the first place: its concern with some basic, normative issues that presumably affect all human societies.


Modern education systems erode trust – this may be a big problem.

Photo by lauren lulu taylor on Unsplash

As teachers, our work is inescapably affected by a range of structural features such as the marketisation and commodification of higher education, the erosion of benefits and of pay, and more. Many of these have been amply studied and debated, including on this blog. Today, however, I want to discuss a relatively underexplored dimension of all this – the slow erosion of trust between staff and students.

In a (higher) education setting, trust is an important value, for several reasons. For one, students are typically young adults and being given responsibility – and being trusted with that responsibility – is an important part of the process of growing up. I’m specifically inspired here by an approach to assessment known as ‘ungrading’. Regardless of the merits of the method, Jesse Stommel’s summary of the core philosophy of ungrading is something that needs to be taken extremely seriously: ‘start by trusting students’.

But it’s also a principled point. From a broadly Kantian perspective, one important aspect of ethical behaviour is respect for others as ‘ends in themselves’. While we may occasionally, jokingly, remind each other that students’ brains haven’t fully developed yet, it is important to remember that this does not mean that they lack the capacity for autonomy. Indeed, because of their age, it is perhaps more important than ever to allow them to practice, and exercise, autonomy.


Why We Should ‘Environmentalise’ the Curriculum

A group of people sitting on a frosty hillside while one person stands and speaks: an Outdoor Philosophy Session by the Critique Environmental Working Group, a Place-Based Ecological Reflection Exercise in Holyrood Park, Edinburgh. Photo supplied by authors.

This is a guest post in Justice Everywhere’s Teaching Philosophy series. It is written by Talia Shoval, Grace Garland and Joseph Conrad, of the Environmental Working Group of the University of Edinburgh’s Centre for Ethics and Critical Thought (Critique).

In this blog post, we share insights from the exploratory journey we undertook into ‘environmentalising’ the curriculum: a project aimed at bringing the environment to the fore of learning and teaching in higher education. After briefly explaining the guiding rationale, we sketch the contours of the environmentalising project and suggest trajectories for moving forward.

As political theorists working on issues concerning the environment, we start from the working observation that environmental issues tend to be downplayed—or worse, altogether overlooked—in the context of academic learning and teaching, as well as in scholarly research. The environment, when it is mentioned, is often treated as a miscellaneous category, an ‘Other’ that falls outside the remit of and constitutes the backdrop to human affairs. This tendency is exemplified by the lack of environmental materials in syllabi across the social sciences and humanities. Even when environmental issues are present, they are discussed, more often than not, in human-centred ways. Juxtaposed with the evidence of environmental degradation all around, this felt odd, and somewhat disquieting. Our initial intuition told us that the environment should take up much more space in academic curricula and common research, learning, and teaching practices—even in the social sciences, including politics and ethics.
