Worse than AI writing is AI reading. What can we do?

While we’re all worried that assigning home-written essays stopped making sense because students are outsourcing the task to AI, and we’re all scrambling to invent alternative ways of assessment, this particular blogger is even more concerned about the effects of students relying on brief (or not so brief) AI-generated summaries of the readings that they should do before class. In my short post-LLM teaching experience, worse than AI writing is AI “reading”. And, I want to stress, that’s not merely because students aren’t doing the readings. Rather, it’s because they seem to think that what they get from, say, ChatGPT, is enough for them to understand the view of the author in question, and to have justified opinions about it. This surely doesn’t work, at least not for readings in philosophy, which is what I teach. Students may know, in a nutshell, what conclusion the author wants to support and a couple of reasons in favour of it. But because they don’t struggle to extract these from an article, or a book chapter, with their own natural intelligence, they fail to appreciate the complexity of the issue we discuss, the difficulty of defending a particular position on it well, the temptation to think about the matter very differently, and the extraordinary challenge that even formulating the right questions sometimes is. The result is boredom.

In the classroom, boredom is the kiss of death; eyes would like to roll, hands yearn for the phone but remain still, because my students are mostly polite and because I ban mobile phones in class. Everybody seems to be having a mental cramp. Of course they do: since they have not been through the discovery adventure, but instead skipped to the outcome, students’ comments are flat, their questions – which they should prepare in advance of class and be ready to talk about with their colleagues – are pro forma, and most often vague, so as not to betray their lack of familiarity with the text. Boredom is contagious. People appear unable to imagine how one could think differently about the questions we discuss – something that a well-written paper would have made vivid to them. Even incendiary topics (say, culture wars material) are met with apathy.

For many years, Jo Wolff had a wise and funny series of editorials in the Guardian; one of the earliest was praising academic prose for being boring. It’s for fiction writers to create mystery and suspense; philosophers (for instance) should start with the punch line and then deliver the argument for it. I agree with sparing readers the suspense, but after a series of academic conversations with ChatGPT I discovered that, if pushed to the extreme – the formulation of a thesis and the bare bones argument for it – this kind of writing is the worst. It kills curiosity.

What should we do? Perhaps turn some of our classes into reading-together-in-silence events? Back to monastic education! I talked to colleagues, who told me about several things they’re trying in order to get students to read again (without AI). An obvious possibility is to ban all use of LLMs by students and explain the reasons: our job is not primarily to populate their minds with theories, but to help them understand arguments, teach them how to pull them apart, and maybe occasionally to build them. I’m not sure about this solution either. For one thing, a well-prompted LLM is better at reconstructing a slightly unclear or imprecisely presented argument than the average reader and many students; AI often produces much better abstracts of academic work than academics themselves, and well-written abstracts are really useful. Another problem is that policies which can’t be enforced are for that reason deficient, and, I suspect, the very attempt to directly police students on their use of AI would be just as anti-pedagogical as the use of AI itself. (Reader, do you learn from those you resent?)

Alternative suggestions are to change how we teach. Quite a few colleagues have started to read out excerpts in class, then discuss them on the spot. One of them goes as far as asking students to memorise them, in an attempt to revive proven methods of Greek and Roman antiquity. This sounds good, time-consuming as it is; better to do a little, and do it well, than do a lot for naught, though I’d stop short of requiring memorisation. Others ask students to annotate their readings before class, and check, or use Perusall and similar platforms to read the assignments collectively, in preparation for class. I used Perusall to great success in the Covid era, but when I tried it again recently it was a disaster of cheating and complaints. Some teachers are printing out readers, or organising hard copies of books for the students, in the hope that this dissuades them from uploading digital files to LLMs. One colleague introduced 5-10 minute flash exams at the beginning of each class, to check that students have read. And another one randomly picks two students in each class and asks them to co-chair the discussion of that day’s reading.

In the medium term, perhaps universities should double – or triple – the length of time that students spend together, with an instructor, for each class, and earmark the extra time as “study group”, when students read and write. There’s something dystopian about this model and it would massively increase workloads for instructors, so in practice it would mean more jobs, perhaps with lesser compensation. But is this really worse than giving up on the goal of teaching students how to read and write essays? Everybody would resist, no doubt, but by the time the value of degrees, including their market value, is next to nothing, universities might face a choice between closing down and reforming in ways that we find hard to imagine now.

As for the next academic year, I wonder whether I should assign readings that I won’t cover at all in my lecturing, but which will be of great help to students in the discussion section. Those who come to class having read only the LLM-created abstract will be the poorer for it. But, since I won’t ask them to discuss the papers, we might – most of us – escape the boredom mill.

Any thoughts?

Anca Gheaus

I work on various issues concerning justice. I am particularly interested in the relevance of personal relationships to moral and political philosophy. I published papers about gender justice, parental rights and duties, the nature and value of childhood, the goods of work and the ideal-non-ideal theory debate.


4 Responses

  1. Louis says:

    Hi Anca,
    Thanks for this post! I share your worries; I had a similar experience with some of my students last year. I had asked them to read one chapter or paper for each class and then write a short critical summary. For the first lesson, most of them used AI to produce the summary, but without the critical part. I then took 5-10 minutes to review one or two summaries and tried to show them how to do it. I did that every time we met. I basically said: skip the summary and just do the critical part: give me at least 2 criticisms + 2 open questions related to the chapter. I hand-picked some of the questions and they had to answer them in groups during class. The quality of their work improved a lot. Of course, some students (say 25% to 30%) still did not get it, but I guess that would have been the case even before AI.

    That said, I have another comment, related to the “market value of diplomas” (end of your post). My view (based on my own experience) is that the market value of diplomas has almost nothing to do with what students have actually learned. It has to do mostly with (1) the name of the university and (2) your grades. If you have good grades from a good university, nobody will look at what you actually know. Many students know this, and invest their time in getting good degrees from good universities, not in learning great stuff from great teachers (whatever the university). If AI can help them do this, why not use it? So I do not share your negative assessment of how AI will affect the value of diplomas, because I think the value of diplomas is already quite disconnected from what people learn. Take an MA student who just graduated from Harvard with the highest grade, all thanks to AI (without anyone knowing) – who will actually check what she can do? I have seen the magic of “Harvard + good grade” work so well so often that I doubt that AI will change anything.

    That said, I agree that it may affect the non-market value of diplomas, which is extremely sad. My point is just about the “market value”.
    Thanks for a great post,
    Louis

  2. Anca Gheaus says:

    Great to hear from you Louis, and thanks for helping out with explaining what you do. I might try it next time round.

    University administrators, many students, and their paying parents care about the market value of university degrees. It’s very sad that they see university education mainly as a business, but this way of thinking won’t go away anytime soon. Yes, university degrees are far from good proxies for how much students actually learn, and we all know this; they might be the best we have at the moment, though. However, if (or when) AI-reading and AI-writing become notoriously widespread, the only universities still able to claim that their degrees are acceptable proxies will be those that adopt methods of teaching and assessment that are AI-proof. This AI crisis *could* be turned into a pedagogical advantage, because teachers might have an economic argument – the kind of thing that administrators, students and parents are likely to listen to.

  3. Sanat says:

    Thanks, Anca, for such an insightful post! As you know, I am still new to the game, and my experience so far has been that many – and unfortunately, in my present class, most – undergraduate students simply don’t read. In my case, I suspect they may not even be reading the AI-generated summaries. Some of them do the necessary readings when it’s time to submit assignments, but not before class.

    I wonder if this has always been the case. Of course, how many students read would depend on the kind of students a university is attracting, whether it’s an elective course, etc. But in general, if it has indeed been the case that a significant portion of undergraduate students rarely go through the readings, maybe Gen AI hasn’t made things worse? In fact, it may have motivated students who don’t read at all to at least go through the summaries?

    Also, I think a relevant analogy here is movies in the time of minute-long social media reels. Despite the fact that it’s so easy to get the ‘gist’ of a movie through short reels, people continue to want to watch 2-3 hour movies. Perhaps we as teachers/writers do have to make some changes to ensure students see the value in going beyond the reels, i.e. the AI-generated summaries?

    • Anca Gheaus says:

      It’s very good to hear from you, Sanat! The thought that prompted me to write this, and more generally to think about AI-reading, is a sense that, as far as I’m concerned, I’d rather students didn’t read at all. While AI-generated summaries make one more informed, they also take away (I feel) curiosity, and this is a greater issue than cluelessness. But I might be wrong about this, of course. Yes to trying to motivate students (people) to read. Good luck to us!
