Category: Technology

Worse than AI writing is AI reading. What can we do?

While we’re all worried that assigning essays written at home has stopped making sense because students are outsourcing the task to AI, and we’re all scrambling to invent alternative ways of assessment, this particular blogger is even more concerned about the effects of students relying on brief (or not so brief) AI-generated summaries of the readings they should do before class. In my short post-LLM teaching experience, worse than AI writing is AI “reading”. And, I want to stress, that’s not merely because students aren’t doing the readings. Rather, it’s because they seem to think that what they get from, say, ChatGPT is enough for them to understand the view of the author in question and to have justified opinions about it. This surely doesn’t work, at least not for readings in philosophy, which is what I teach. Students may know, in a nutshell, what conclusion the author wants to support and a couple of reasons in its favour. But because they haven’t struggled to extract these from an article or a book chapter with their own natural intelligence, they fail to appreciate the complexity of the issue we discuss, the difficulty of defending a particular position on it well, the temptation to think about the matter very differently, and the extraordinary challenge that it sometimes is even to formulate the right questions. The result is boredom.

In the classroom, boredom is the kiss of death; eyes would like to roll, hands yearn for the phone but remain still, because my students are mostly polite and because I ban mobile phones in class. Everybody seems to be having a mental cramp. Of course we do: since the students have not been through the adventure of discovery, but instead skipped to the outcome, their comments are flat, and their questions – which they should prepare in advance of class and be ready to talk about with their colleagues – are pro forma, and most often vague, so as not to betray their lack of familiarity with the text. Boredom is contagious. People appear unable to imagine how one could think differently about the questions we discuss – something that a well-written paper would have made vivid to them. Even incendiary topics (say, culture-wars material) are met with apathy.

For many years, Jo Wolff had a wise and funny series of editorials in the Guardian; one of the earliest praised academic prose for being boring. It is for fiction writers to create mystery and suspense; philosophers (for instance) should start with the punch line and then deliver the argument for it. I agree with sparing readers the suspense, but after a series of academic conversations with ChatGPT I discovered that, pushed to the extreme – the formulation of a thesis and a bare-bones argument for it – this kind of writing is the worst. It kills curiosity.

What should we do? Perhaps turn some of our classes into reading-together-in-silence events? Back to monastic education! I talked to colleagues, who told me about several things they’re trying in order to get students to read again (without AI). An obvious possibility is to ban all use of LLMs by students and explain the reasons: our job is not primarily to populate their minds with theories, but to help them understand arguments, teach them how to pull arguments apart, and maybe occasionally to build them. I’m not sure about this solution either. For one thing, a well-prompted LLM is better at reconstructing a slightly unclear and imprecisely presented argument than the average reader and many students; indeed, AI often produces much better abstracts of academic work than academics themselves, and well-written abstracts are really useful. Another problem is that policies which can’t be enforced are for that reason deficient, and, I suspect, the very attempt to directly police students’ use of AI would be just as anti-pedagogical as the use of AI itself. (Reader, do you learn from those you resent?)

Alternative suggestions involve changing how we teach. Quite a few colleagues have started to read out excerpts in class and then discuss them on the spot. One of them goes as far as asking students to memorise the excerpts, in an attempt to revive the proven methods of Greek and Roman antiquity. This sounds good, time-consuming as it is; better to do a little and do it well than to do a lot for naught, though I’d stop short of requiring memorisation. Others ask students to annotate their readings before class and check that they have done so, or use Perusall and similar platforms to read the assignments collectively in preparation for class. I used Perusall to great success in the Covid era, but when I tried it again recently it was a disaster of cheating and complaints. Some teachers are printing out readers, or organising hard copies of books for their students, in the hope that this dissuades them from uploading digital files to LLMs. One colleague introduced 5-10 minute flash exams at the beginning of each class, to check that students have done the reading. And another randomly picks two students in each class and asks them to co-chair the discussion of that day’s reading.

In the medium term, perhaps universities should double – or triple – the length of time that students spend together, with an instructor, for each class, and earmark the extra time as a “study group” in which students read and write. There’s something dystopian about this model, and it would massively increase workloads for instructors, so in practice it would have to mean more jobs, perhaps with lower compensation. But is this really worse than giving up on the goal of teaching students how to read and write essays? Everybody would resist, no doubt, but by the time the value of degrees, including their market value, is next to nothing, universities might face a choice between closing down and reforming in ways that are hard to imagine now.

As for the next academic year, I wonder whether I should assign readings that I won’t cover at all in my lecturing, but which will be of great help to students in the discussion section. Those who come to class having read only the LLM-created abstract will be the poorer for it. But, since I won’t ask them to discuss the papers, we might – most of us – escape the boredom mill.

Any thoughts?

Xenophobic bias in Large Language Models

In this post, Annick Backelandt argues that xenophobia should be understood as a distinct bias in Large Language Models, rather than being subsumed under racial bias. She shows how LLMs reproduce narratives of “foreignness” that particularly affect migrants and refugees, even without explicit racial references.

Image by HelenSTB from Flickr

(more…)

LLMs can be harmful, even when not making stuff up

This is a guest post by Joe Slater (University of Glasgow).

A screenshot of a phone, showing an AI-generated summary in response to the question "How many rocks shall I eat".
Provided by author

It is well known that chatbots powered by LLMs – ChatGPT, Claude, Grok, etc. – sometimes make things up. People have sometimes called these fabrications “AI hallucinations”. With my co-authors, I have argued that we should instead describe chatbots as bullshitting, in the sense described by Harry Frankfurt, i.e., producing content with an indifference to its truth. Because of this tendency, developing chatbots that no longer generate novel false utterances (or that at least output a lower proportion of false utterances) has been a high priority for big tech companies. We can see this in the public statements made by, e.g., OpenAI, boasting of reduced hallucination rates.

One factor that is sometimes overlooked in this discourse is that generative AI can also be detrimental in that it may stifle development, even when it accurately reflects the information it has been trained on.

Recall the instance of Google’s AI Overview, which is powered by Google’s Gemini LLM, claiming that “according to UC Berkeley geologists, you should eat at least one small rock per day”. This claim originated in the satirical news website The Onion. While obviously false claims like this are unlikely to deceive anyone, the case demonstrates a problem: false claims may be repeated, and some of the claims that get repeated could be ones that most people, or even most experts, accept. This poses serious problems.

In this short piece, I want to highlight three worries that might escape our notice if we focus only on chatbots making stuff up:

  1. Harmful utterances (true or otherwise);
  2. Homogeneity and diminished challenges to orthodox views (true or otherwise);
  3. Entrenched false beliefs.
(more…)

Can entry-level jobs be saved by virtuous AI?

Photo credit: RonaldCandonga at Pixabay.com, https://pixabay.com/photos/job-office-team-business-internet-5382501/

This is a guest post by Hollie Meehan (University of Lancaster).

We have been warned by the CEO of the AI company Anthropic that up to 50% of entry-level jobs could be taken over by AI in the coming years. While reporters have pointed out that this could be an exaggeration designed to drive profits, it raises the question of where AI should fit into society. Answering this is a complicated matter that I believe could benefit from considering virtue ethics. I’ll focus on the entry-level job market to demonstrate how these considerations can play an important role in monitoring our use of AI and mitigating the potential fallout.

(more…)

Bednets versus Rocket Ships: Should we care more for people alive today or the future of humanity?

In this post, Elizabeth Hupfer (High Point University) discusses her article recently published in the Journal of Applied Philosophy on how to balance concern for the future of humanity with the needs of those alive today.

Made with Canva AI

Ever wonder why ChatGPT was invented? Or why billionaires have become so obsessed with rockets? The common thread in these questions is Longtermism. Longtermism is the view that concern for the long-term future is a moral imperative. The theory is caricatured by critics as a movement preoccupied with dystopian takeover by AI, a globe shrouded in nuclear winter, and colonization of distant planets. But at the heart of Longtermism are concepts intuitive to many: that future people’s lives matter and that it is good to ensure the survival of humanity. Yet, in our current world of scarce resources, Longtermist priority may go to future people at the expense of present people in need. In my paper I argue that Longtermists do not have a clear means of giving priority to people in need today without abandoning central tenets of the theory.

Longtermism

Longtermism has grown in popularity from a philosophical theory to a social movement that impacts Silicon Valley, US politics, international laws, and more. To understand this consequential theory, we need to look at two important components: time and quantity of future people.

First, Longtermists argue that time is not morally important. In What We Owe the Future, William MacAskill gives the example of dropping a shard of glass on a hike. If you drop the glass and do not pick it up, then you have harmed the person who later steps on it, even if that person exists in the future.

Second, Longtermists argue that there are potentially tens of trillions of people who could exist in the future. There are various ways that Longtermists can calculate this number, but all that matters for our purposes is that it is a lot. A whole lot. More people than exist presently, and more people than have ever existed up to this point.

Combining the notion that time is not morally important with the notion that there is a vast number of potential future people means that it is imperative to safeguard both the survival of humanity and the quality of life of future people.

Far-Future Priority Objection

What if this concern for the tens of trillions of future people comes at the expense of people who are living today? I call this the Far-Future Priority Objection: repeated instances of priority to far-future concerns will result in the systemic neglect of current people most in need, and potentially in a large-scale reallocation of resources to far-future interventions.

For example, Hilary Greaves and William MacAskill argue that the most effective way to save a current life through donation is to provide insecticide-treated bednets in malaria zones. Their data shows that, with these bednets, donating $100 is equivalent to saving 0.025 lives. But this is less effective than many Longtermist causes, such as asteroid deflection ($100 would result in around 300,000 additional lives), pandemic preparedness (200 million additional lives), and preventing AI takeover (one trillion additional lives). If Longtermists are concerned with efficiently doing the most good they can with a unit of resources (and I argue in my paper that they are), then Longtermist causes will trump even the most efficient causes benefiting people alive today.
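
To see the scale of the gap, here is a back-of-the-envelope calculation using only the per-$100 figures quoted above. The snippet is my own illustrative arithmetic – a sketch, not part of Greaves and MacAskill’s analysis – converting each estimate into a rough cost per expected life saved:

```python
# Illustrative arithmetic only, using the per-$100 estimates quoted in the post.
# Converts "expected lives saved per $100" into "dollars per expected life saved".
estimates = {
    "insecticide-treated bednets": 0.025,
    "asteroid deflection": 300_000,
    "pandemic preparedness": 200_000_000,
    "preventing AI takeover": 1_000_000_000_000,
}

for cause, lives_per_100_dollars in estimates.items():
    cost_per_expected_life = 100 / lives_per_100_dollars
    print(f"{cause}: ${cost_per_expected_life:,.10g} per expected life saved")
```

On these numbers, bednets save an expected life for roughly $4,000, while the far-future interventions come in at small fractions of a cent per expected life – exactly the kind of disparity the objection trades on.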

According to the Far-Future Priority Objection, repeated priority of this kind could, over time, significantly shift overall resources away from those in need today, particularly those in low-income nations. Thus, widespread espousal of Longtermism may result in the globally affluent turning their backs on these populations.

Potential Responses

In my paper, I analyse several potential responses the Longtermist could give to the Far-Future Priority Objection and argue that none of these responses can successfully mitigate the objection without abandoning basic tenets of Longtermism.

I will highlight one such argument here. Longtermists typically argue that far-future interventions cannot cause serious harm in the short term. According to my Far-Future Priority Objection, individual instances of priority to the far future are not harmful, but repeated instances may be. Take the following analogy: a law is enacted which is not explicitly discriminatory towards minority Group X. However, over time, implementation of the law results in resources, which would previously have gone to Group X, going to nearby (perhaps better-off) Group Y. A decade later, Group X is significantly worse off. I think one could reasonably argue that Group X has been seriously harmed. Similarly, Longtermism does not intentionally or explicitly discriminate against current people, and it does not remove existing resources from them; serious harm is likely to be caused nonetheless.

However, I argue that appealing to near-future serious harms results in a response to the Far-Future Priority Objection that is either too strong or too weak, and so is not a viable avenue for the Longtermist. This is because one could be an absolutist about causing harm, which would mean that repeated priority to the future is morally wrong and Longtermism is undermined altogether. Alternatively, one could be a non-absolutist and say that the prevention of harm can be overridden when the stakes are high enough. Yet, since there could be tens of trillions of future lives at risk, the stakes will always be high enough to override the ban.

Conclusion

Longtermists have two options. First, they can bite the bullet and accept that Longtermism could result in systemic neglect of present people. This is counterintuitive to many. Second, they can create a new principle which allows for occasional priority for present people without abandoning basic tenets of the theory. In my paper, I analyse and dismiss several possible principles.


Elizabeth Hupfer’s research focuses on the intersection between normative/applied ethics and social/ political philosophy. She has published on distributive justice, coercion, humanitarianism, Effective Altruism, and Longtermism.

Should Universities Restrict Generative AI?

In this post, Karl de Fine Licht (Chalmers University of Technology) discusses his article recently published in the Journal of Applied Philosophy on the moral concerns of banning Generative AI in universities.

Rethinking the Ban

Universities face a challenging question: what should they do when the tools that help students learn also raise serious moral concerns?

Generative AI (GenAI) tools like ChatGPT offer immediate feedback, personalized explanations, and writing or coding assistance. But they also raise concerns: energy and water use, exploitative labor, privacy risks, and fears of academic dishonesty. In response, some universities have banned or severely restricted these tools. That may seem cautious or principled—but is it the right response?

In a recent academic paper, I argue that while these concerns are real, banning GenAI is often not the most justified or effective approach. Universities should instead pursue responsible engagement: adopting ethical procurement practices, educating students on thoughtful use, and leveraging their institutional influence to demand better standards.

Do Bans Make a Difference?

Many arguments for bans focus on harm. Using GenAI may contribute to carbon emissions, involve labor under poor conditions, or jeopardize student privacy. But how much difference does banning GenAI at a single university make?

Not much. Models are trained centrally and used globally. Universities typically rely on existing, pre-trained models—so their marginal contribution to emissions or labor practices is negligible. Even if all universities banned GenAI, it’s not clear this would shift global AI development or halt resource use. Worse, bans may backfire: students may use GenAI anyway, without oversight or support, leading to worse outcomes for learning and equity.

Take climate impact. Training models like GPT-4 requires substantial energy and water. But universities rarely train their own models; they use centralized ones whose training would happen regardless. Further, GenAI’s daily use is becoming more efficient, and in some applications—like architectural design, biodiversity monitoring, and climate modeling—GenAI may even help reduce emissions. The better route is for universities to demand energy-efficient models, support green cloud services, and explore carbon offsetting—not prohibit tools whose use is educationally beneficial and environmentally marginal.

Or consider labor exploitation. Training GenAI models often relies on underpaid workers performing harmful tasks, especially in the Global South. That’s a serious ethical issue. But again, banning use doesn’t necessarily help those workers or change the system. Universities could instead pressure companies to raise labor standards—leveraging their roles as clients, research partners, and talent suppliers. This collective influence is more likely to yield ethical improvements than a local ban.

The Reduction Problem

Even if you think universities are morally complicit by using GenAI—regardless of impact—you face a further problem: consistency. If the right response to morally tainted technologies is prohibition, why stop with GenAI?

Much of higher education depends on digital infrastructure. Computers, smartphones, and servers are produced under similarly problematic labor and environmental conditions. If the logic is “avoid complicity by avoiding use,” then many standard technologies should also be banned. But that leads to a reductio: if universities adopted this policy consistently, they would be unable to function.

This doesn’t mean ethical concerns should be ignored. Rather, it shows that avoiding all complicity isn’t feasible—and that universities must find ways to act responsibly within imperfect systems. The challenge is to engage critically and constructively, not withdraw.

Hidden Costs of Prohibition

There are also moral costs to banning GenAI.

Under a ban, students continue to use AI tools, but in secret. This undermines educational goals and privacy protections. Vulnerable students—those with fewer resources, time, or academic support—may be most reliant on GenAI, and most harmed by a ban. When students use unvetted tools outside institutional guidance, the risks to their privacy and integrity increase.

Instead of banning GenAI, universities can offer licensed, secure tools and educate students on their appropriate use. This supports both ethical awareness and academic integrity. Just as we teach students to cite sources or evaluate evidence, we should teach them to engage with GenAI responsibly.

Setting Ethical Precedents

Some argue that even small contributions to harm are morally significant—especially when institutions help normalize problematic practices. But even if that’s true, it doesn’t follow that bans are the best response.

A more constructive alternative is to model responsible AI use. That includes setting ethical procurement standards, embedding AI literacy in curricula, and advocating for transparency and fair labor. Universities, especially when acting collectively, have leverage to influence AI providers. They can demand tools that respect privacy, reduce emissions, and avoid exploitative labor.

In other words, universities should take moral leadership—not by withdrawing, but by shaping the development and use of GenAI.

Choosing a Better Path

GenAI is not going away. The real question is how we engage with it—and on whose terms. Blanket bans may seem safe or principled, but they often achieve little and may create new harms.

Instead, universities should adopt a balanced approach. Acknowledge the risks. Respond to them—through institutional advocacy, ethical licensing, and student education. But also recognize the benefits of GenAI and prepare students to use it well.

In doing so, universities fulfill both moral and educational responsibilities: not by pretending GenAI doesn’t exist, but by helping shape the future it creates.

Karl de Fine Licht is Associate Professor in Ethics and Technology at Chalmers University of Technology. His research focuses on the ethical and societal implications of artificial intelligence, with particular emphasis on public decision-making and higher education. He has published extensively on trustworthy AI, generative AI, and justice in technology governance, and regularly advises public and academic institutions on responsible AI use.

Writing Assignments in the age of Gen AI

If Gen AI can consistently produce A-grade articles across disciplines (for now, it seems it can’t, but it likely soon might), do we still need students to learn the art of writing well-researched, long-form texts? More importantly, do we need to test how well each student has mastered this art?

(more…)

Just do(pe) it? Why the academic project is at risk from proposals to pharmacologically enhance researchers.

In this post, Heidi Matisonn (University of Cape Town) and Jacek Brzozowski (University of KwaZulu-Natal) discuss their recently published article in the Journal of Applied Philosophy in which they explore the justifiability and potential risks of cognitive enhancement in academia.

Image created with ChatGPT.

The human desire to enhance our cognitive abilities – to push the boundaries of intelligence through education, tools, and technology – has a long history. Fifteen years ago, confronted by the possibility that a ‘morally corrupt’ minority could misuse cognitive gains to catastrophic effect, Persson and Savulescu proposed that research into cognitive enhancement should be halted unless accompanied by advancements in moral enhancement.

In response to this, and following on from Harris’ worries about the mass suffering that could result from delaying cognitive enhancement until moral enhancement could catch up, Gordon and Ragonese in 2023 offered what they termed a ‘practical approach’ to cognitive enhancement research, in which they advocated targeted cognitive enhancement – specifically for researchers working on moral enhancement.

Our recent article in the Journal of Applied Philosophy suggests that while both sets of authors are correct in their concerns about the significant risks related to cognitive enhancement outrunning moral enhancement, their focus on the ‘extremes’ neglects some more practical consequences that a general acceptance of cognitive enhancement may bring — not least of which relate to the academic project itself.

(more…)

From the Vault: Universities, Academia and the academic profession

While Justice Everywhere takes a short break over the summer, we recall some of the highlights from our 2023-24 season. 

Trinity College Library, Dublin. antomoro (FAL), via Wikimedia Commons

Here are a few highlights from this year’s posts relating to academia, the modern university, and the academic profession:

Stay tuned for even more on this topic in our 2024-25 season!

***

Justice Everywhere will return in full swing in September with fresh weekly posts by our cooperative of regular authors (published on Mondays), in addition to our Journal of Applied Philosophy series and other special series (published on Thursdays). If you would like to contribute a guest post on a topical justice-based issue (broadly construed), please feel free to get in touch with us at justice.everywhere.blog@gmail.com.

Why Conscious AI Would Be Bad for the Environment

Image credit to Griffin Kiegiel and Sami Aksu

This is a guest post by Griffin Kiegiel.

Since the meteoric rise of ChatGPT in late 2022, artificial intelligence (AI) systems have been built into everything from smartphones and electric vehicles to toasters and toothbrushes. The long-term effects of this rapid adoption remain to be seen, but we can be certain of one thing: AI uses a lot of energy that we can’t spare. ChatGPT reportedly uses more than 500,000 kilowatt-hours of electricity daily, which is massive compared to the 29 kilowatt-hours consumed daily by the average American household – the equivalent of roughly 17,000 households.

As the global temperature and ocean levels rise, it is our responsibility to limit our collective environmental impact as much as possible. If the benefits of AI don’t outweigh the risks associated with increasing our rate of energy consumption, then we may be obligated to shut AI systems down for the sake of environmental conservation. However, if those systems become conscious, shutting them down may be akin to murder, morally trapping us in an unsustainable system.

(more…)