Monthly Archive: July 2025

Should Universities Restrict Generative AI?

In this post, Karl de Fine Licht (Chalmers University of Technology) discusses his article recently published in the Journal of Applied Philosophy on the moral concerns of banning Generative AI in universities.

Rethinking the Ban

Universities face a challenging question: what should they do when the tools that help students learn also raise serious moral concerns?

Generative AI (GenAI) tools like ChatGPT offer immediate feedback, personalized explanations, and writing or coding assistance. But they also raise concerns: energy and water use, exploitative labor, privacy risks, and fears of academic dishonesty. In response, some universities have banned or severely restricted these tools. That may seem cautious or principled—but is it the right response?

In a recent academic paper, I argue that while these concerns are real, banning GenAI is often not the most justified or effective approach. Universities should instead pursue responsible engagement: adopting ethical procurement practices, educating students on thoughtful use, and leveraging their institutional influence to demand better standards.

Do Bans Make a Difference?

Many arguments for bans focus on harm. Using GenAI may contribute to carbon emissions, involve labor under poor conditions, or jeopardize student privacy. But how much difference does banning GenAI at a single university make?

Not much. Models are trained centrally and used globally. Universities typically rely on existing, pre-trained models—so their marginal contribution to emissions or labor practices is negligible. Even if all universities banned GenAI, it’s not clear this would shift global AI development or halt resource use. Worse, bans may backfire: students may use GenAI anyway, without oversight or support, leading to worse outcomes for learning and equity.

Take climate impact. Training models like GPT-4 requires substantial energy and water. But universities rarely train their own models; they use centralized ones whose training would happen regardless. Further, GenAI’s daily use is becoming more efficient, and in some applications—like architectural design, biodiversity monitoring, and climate modeling—GenAI may even help reduce emissions. The better route is for universities to demand energy-efficient models, support green cloud services, and explore carbon offsetting—not prohibit tools whose use is educationally beneficial and environmentally marginal.

Or consider labor exploitation. Training GenAI models often relies on underpaid workers performing harmful tasks, especially in the Global South. That’s a serious ethical issue. But again, banning use doesn’t necessarily help those workers or change the system. Universities could instead pressure companies to raise labor standards—leveraging their roles as clients, research partners, and talent suppliers. This collective influence is more likely to yield ethical improvements than a local ban.

The Reduction Problem

Even if you think universities are morally complicit by using GenAI—regardless of impact—you face a further problem: consistency. If the right response to morally tainted technologies is prohibition, why stop with GenAI?

Much of higher education depends on digital infrastructure. Computers, smartphones, and servers are produced under similarly problematic labor and environmental conditions. If the logic is “avoid complicity by avoiding use,” then many standard technologies should also be banned. But that leads to a reductio: if universities adopted this policy consistently, they would be unable to function.

This doesn’t mean ethical concerns should be ignored. Rather, it shows that avoiding all complicity isn’t feasible—and that universities must find ways to act responsibly within imperfect systems. The challenge is to engage critically and constructively, not withdraw.

Hidden Costs of Prohibition

There are also moral costs to banning GenAI.

Students continue to use AI tools, but in secret. This undermines educational goals and privacy protections. Vulnerable students—those with fewer resources, time, or academic support—may be most reliant on GenAI, and most harmed by a ban. When students use unvetted tools outside institutional guidance, the risks to their privacy and integrity increase.

Instead of banning GenAI, universities can offer licensed, secure tools and educate students on their appropriate use. This supports both ethical awareness and academic integrity. Just as we teach students to cite sources or evaluate evidence, we should teach them to engage with GenAI responsibly.

Setting Ethical Precedents

Some argue that even small contributions to harm are morally significant—especially when institutions help normalize problematic practices. But even if that’s true, it doesn’t follow that bans are the best response.

A more constructive alternative is to model responsible AI use. That includes setting ethical procurement standards, embedding AI literacy in curricula, and advocating for transparency and fair labor. Universities, especially when acting collectively, have leverage to influence AI providers. They can demand tools that respect privacy, reduce emissions, and avoid exploitative labor.

In other words, universities should take moral leadership—not by withdrawing, but by shaping the development and use of GenAI.

Choosing a Better Path

GenAI is not going away. The real question is how we engage with it—and on whose terms. Blanket bans may seem safe or principled, but they often achieve little and may create new harms.

Instead, universities should adopt a balanced approach. Acknowledge the risks. Respond to them—through institutional advocacy, ethical licensing, and student education. But also recognize the benefits of GenAI and prepare students to use it well.

In doing so, universities fulfill both moral and educational responsibilities: not by pretending GenAI doesn’t exist, but by helping shape the future it creates.

Karl de Fine Licht is Associate Professor in Ethics and Technology at Chalmers University of Technology. His research focuses on the ethical and societal implications of artificial intelligence, with particular emphasis on public decision-making and higher education. He has published extensively on trustworthy AI, generative AI, and justice in technology governance, and regularly advises public and academic institutions on responsible AI use.

How much is too much? Why defining ‘mass incarceration’ is important – and isn’t as easy as it seems

In this post, Vincent Chiao discusses his article recently published in the Journal of Applied Philosophy on how to understand the “mass” part of “mass incarceration.”

[Figure: prison population rate per country. Source: Our World in Data, https://ourworldindata.org/grapher/prison-population-rate, CC BY 4.0.]

The United States incarcerates more people than any other country in the world. On a per capita basis, the United States incarcerates at a higher rate than any other democracy, with the possible exception of El Salvador. Yet at the same time, a disturbingly large share of crime is never reported, much less punished. This raises a simple question: how do we know when a penal system incarcerates too many people? Even as “mass incarceration” has become a staple of both academic research and political discourse over the last decade, and even as renewed attention has been paid to glaring racial disparities, the question of scale – how much is too much – has remained surprisingly elusive.

Why defining excess is not as easy as it seems

It is tempting to think that it is sufficient to point to the sheer scale of incarceration in the United States. Tempting—but wrong. Most crimes in the United States go unpunished, including “core” crimes of interpersonal violence. According to the National Crime Victimization Survey, a third of robberies, half of aggravated assaults, and the overwhelming majority of rapes and sexual assaults go unreported, much less punished. Strikingly, one advocacy group estimates that there are approximately 433,000 sexual assaults in the United States every year, and that ‘out of every 1000 sexual assaults, 975 perpetrators will walk free.’ This implies that each year there are approximately 422,000 instances of sexual assault in which no one is held accountable. For context, that is strikingly close to the total number of people admitted to prison in 2021.

It is true that people tend to be incarcerated for longer in the United States than in other parts of the world, but that alone does not show that the United States incarcerates “too many” people. In part, this is because punishments of varying degrees of severity might all be in some sense “proportionate,” and in part because the large number of unpunished crimes creates significant headroom in incarceration rates. The United States could incarcerate many more people, and potentially incarcerate them for longer, without violating basic rights against punishing the innocent or disproportionate punishment of the guilty.

Otherwise put: incarceration rates tend to be driven more by policy than by crime. What makes this into a philosophical problem is principled disagreement about what we are trying to do when we punish people for committing crimes. Crime prevention? Reparation? Symbolic vindication? Rehabilitation? Something else? We tend to be more confident that criminals should be punished than we are as to why they should be punished. But that makes it difficult to say if what we are getting is too much, too little, or just about right.

What about crime prevention?

Crime prevention is the most common, and most popular, answer to “why do we punish criminals?” But it is easy to see why one might hesitate. “Is incarceration an efficient way of preventing crime?” quickly leads to comparing the interests of the innocent in not being victimized against the interests of the guilty in not being imprisoned. Not only is that a hard question to answer objectively, but it also involves intrusive value judgments that liberals have reason to eschew. Telling people that their safety isn’t “worth the cost” can easily sound condescending, particularly when the costs mostly fall on those who choose to break the law.

Three conceptions of excess

This presents a difficult, though not insurmountable, challenge. For starters, we could define excess incarceration in strictly Paretian terms: can we release people from jails and prisons without increasing crime? Since this approach makes some people better off without making anyone worse off, it does not require trading off different people’s interests.

Alternatively, we could consider whether alternative modes of preventing crime could substitute for incarceration, again holding crime constant. By holding crime constant, we would only be asking whether there are ways of controlling crime that have a less malign impact on people’s lives than prisons. This too does not involve weighing competing interests.

The main limitation of these approaches is that they take existing levels of criminal victimization as sacrosanct. As a result, a quite substantial degree of incarceration could potentially be justified if it prevented trivial increases in crime. That might lead us to seek a more demanding conception of excess. That will, however, require weighing competing interests – those of potential victims in not having their rights violated and those of potential prisoners in not being incarcerated. As noted, this can easily come across as condescending, and worse, as involving intrusive judgments of worth.

That said, it’s worth noting that very few people are absolutists about crime. Most of us regularly make practical trade-offs between convenience and safety: which routes we walk, where we lock our bikes, whether to install a security system. These mundane decisions – along with jury awards, tangible costs, and survey data – reveal how people subjectively value safety versus other goods.

Such information would, of course, need to be carefully considered to control for morally salient biases. Nonetheless, the broader point is that a utilitarian conception of excess is not committed to paternalistically evaluating whether people are wrong to fear crime as much as they do. Its theory of value can be constructed from the bottom up rather than imposed from the top down. Doing so can help mitigate concerns about condescending or intrusive value judgments.

So what?

Mass incarceration is unjust. This is in part because the burdens of incarceration are unfairly distributed, but it is also in part because those burdens are excessive in absolute terms. The moral critique of mass incarceration thus depends on an analytical metric—a theory of what it is to incarcerate too many people. The metric we choose will tell us what it means to truly bring the era of mass incarceration to an end.


Vincent Chiao’s research interests are in public law, with a particular focus on the philosophy of criminal law. He is the author of Criminal Law in the Age of the Administrative State (OUP 2018). Themes in his work include the place of law in formal and informal social orders, punishment and the evolution of cooperation, and the rule of law as a social technology.