What we train need not be the same as what we assess: AI damage limitation in higher education
It has always been clear that ChatGPT’s general availability means trouble for higher education. We knew that letting students use it to write essays would make it difficult, if not impossible, to assess their effort and progress, and would invite cheating. Worse, it was going to deprive them of learning the laborious art and skill of writing, which is good in itself as well as a necessary instrument of clear thinking. The university years (and perhaps the last few years of high school, although, I worry, only for very few) are the chance to learn to write and to think. When there is quick, costless access to the final product, there is little incentive for students to engage in the process of creating that product themselves; and going through that process is, generally, far more valuable than the product itself. Last March, philosopher Troy Jollimore published a lovely essay on this theme. So we knew that unregulated developments in artificial intelligence are inimical to this central aim of higher education.

Even more concerning news is now starting to reach us: not only is the use of ChatGPT bad for students because the temptation to rely on it is too hard to resist, but respectable studies, such as a recent one authored by scholars at MIT, show that AI has significant negative effects on users’ cognitive abilities. The study indicates that the vast majority of people using Large Language Models (LLMs), such as ChatGPT, in order to write forget the AI-generated content within minutes. Neural connections in the group relying on natural intelligence alone were almost twice as strong as those in the group using LLMs. And regular users who were asked to write without the help of LLMs did worse than those who had never used ChatGPT at all. The authors of the study speak of a “cognitive debt”: the more one relies on AI, the more thinking ability one loses. These findings hold for most users; a silver lining, perhaps, is that users with very strong cognitive capacities displayed higher neural connectivity when using LLMs.
In short, LLMs are here to stay, at least until proper regulation – which is not yet on the horizon – kicks in; if this study is right, they can give valuable support to the most accomplished scholars (perhaps at various stages of their careers) while harming everybody else. Part of the university’s job is to develop the latter group’s cognitive abilities; encouraging students to use LLMs appears, in light of these considerations, to be a kind of malpractice. And assigning at-home essays is, in practice, encouragement.