Writing Assignments in the Age of Gen AI

It’s striking how unimaginative university departments have been in adapting writing assignments to the rise of Generative AI. Several times now, I have heard teachers humorously recount stories of essays they quickly identified as AI-generated, thanks to their predictable, lifeless tone. The amusement is frequently followed by a genuine sense of worry that there may be several AI-generated assignments that they failed to detect. It surely seems wrong if some students were caught ‘cheating’ while others were lucky enough to escape detection. Departments and instructors worldwide have responded to this worry, and this sense of unfairness, in a variety of ways.
Some have gone for a policy of zero tolerance for the use of Gen AI in assignments. Such a policy is tricky to monitor and enforce, however: we lack reliable checkers to detect how much of an essay is in fact AI-generated. It is not even clear whether we will ever have 100% accurate checkers. If a checker suggests old-school plagiarism in an essay, we can visit the relevant sources to cross-verify. What would such cross-verification look like in the case of Gen AI checkers? It’s like trying to prove with certainty that a student’s friend wrote their essay. And nothing short of 100% accuracy seems acceptable – even one false positive is one too many, since it means some student being penalized unjustly.
To escape the enforcement problem entirely, some have gone back to the ‘old ways’ to make their assignments Gen-AI-proof, which often means replacing take-home assignments with in-class written or oral exams. The faculty reverting to such old ways are often acutely aware that these are knee-jerk reactions at best: temporary measures until we have a better solution.
Others have taken a more permissive outlook, allowing students to get extensive help from AI tools for their writing assignments as long as they produce a comprehensive report at the end, summarizing the extent to which they used these tools. However, exactly how much use should be allowed is often left unspecified, leaving it to the judgement of individual instructors. This undercuts the degree of standardization we presumably need across courses, departments and institutions. Moreover, the monitoring problem resurfaces here, as it is nearly impossible to verify whether students are under-reporting their AI use. Finally, filing such reports is an additional demand on students’ time, and evaluating them an additional demand on faculty’s.
In discussions about the role of writing assignments in higher education, a question often dismissed too quickly is: Do we even need writing assignments anymore? This question deserves serious consideration. If Gen AI can consistently produce A-grade articles across disciplines (for now, it seems it can’t, but it likely soon will), do we still need students to learn the art of writing well-researched long-form texts? More importantly, do we need to test how well each student has mastered this art?
The last two questions need to be separated for a proper reflection on the place of writing assignments in university education today. There are several things we want university students to learn without necessarily testing them on these things, at least not directly: grit, empathy, creativity, networking, civic engagement, time management, to name a few. To answer the first question, i.e. whether we still need students to learn the art of writing, it’s valuable to begin by reflecting on why writing came to occupy such a central role in many disciplines, especially in the arts and humanities. Typically, in mainstream education systems, as one moves from middle school to high school to college, the emphasis on producing long-form, independently written texts increases. One is expected to regurgitate less and think more, to focus less on getting the right answer and more on asking the right questions, to spend less time memorizing and more time deepening one’s understanding of whatever it is that one is studying. Note that a resurgence of time-bound in-class examinations risks halting this transition, as such exams often incentivize students to learn the art of memorizing and regurgitating the right answer under time pressure.
Arguably, the transition a typical student is currently expected to go through is desirable. In the early years, students build a strong foundation of established knowledge before gradually acquiring critical thinking and analytical reasoning skills—essential for becoming independent thinkers. The process of formulating research questions, conducting literature reviews, constructing arguments, justifying methodologies, and anticipating objections—central to academic writing—plays a crucial role in this intellectual development. However, it is worth questioning whether long-form writing is the only, or even the most effective, way to cultivate these skills. Alternative approaches—such as in-class debates, role-playing exercises, multimedia presentations, and project-based learning—can foster many of the same cognitive skills. The best method may depend on the discipline: debates might be ideal for philosophy, while role-playing could be more effective in literature.
Of course, it may turn out that writing is indeed the best way to develop the aforementioned cognitive skills across a wide range of disciplines. In that case, completely replacing writing with other kinds of academic exercises may be undesirable. Even then, however, additional argumentation is needed for why students’ grades should depend on their ability to produce original essays. I can think of two main reasons.
One argument is that writing is a valuable skill, particularly for students graduating from humanities programs, as it is essential for success in virtually any career they pursue after university. University grades should thus take into account students’ writing abilities to signal their competitiveness to future employers and admissions committees. If employers and admissions committees don’t think university grades convey meaningful information about a student’s candidacy, the value of those grades as a currency in the job or education market goes down. This has undesirable downstream effects: if students know that nobody will care much about their university grades, the grades can no longer serve the incentive function they are supposed to serve.
However, if, as we’ve assumed, generative AI can produce high-quality articles across nearly any domain, the value of traditional writing skills may diminish significantly in the not-so-distant future. In that case, tying grades to writing ability could make them less relevant. Imagine if university grades were based largely on students’ ability to perform mathematical operations by hand, in a world where nearly everyone has access to a calculator; the signaling value of those grades would likely be undermined. From the perspective of future employers and even postgraduate programs, what may become more valuable is something like ‘AI fluency’—the ability to generate the right prompts to produce high-quality content instantly. A strict zero-tolerance policy toward AI use in universities could stifle the development of this crucial skill. Instead, grading students on their AI fluency could help preserve the signaling value of academic grades.
Another reason in favor of graded writing assignments is that writing not only helps develop critical thinking and analytical reasoning skills, but also serves as an effective way to assess them. Even in the age of AI, these are some of the most crucial skills that university graduates will need to succeed, regardless of their career path—whether they become corporate managers, academics, activists, policymakers, lawyers or journalists. Therefore, we want university grades to reflect students’ competence in these cognitive skills. Writing may be the best, or perhaps the only, medium to assess them effectively.
However, the last claim is not self-evident. We may be so accustomed to writing assignments that evaluating students’ critical thinking and analytical skills through other methods feels unthinkable. As mentioned earlier, writing may indeed be the most effective way to develop these skills—at least for now. That said, we can still imagine alternative ways to assess students’ cognitive development, such as measuring their performance in in-class debates, role-playing exercises, multimedia presentations, or field projects. One approach might be to allow students to submit ungraded writing assignments to instructors for feedback on their thinking skills. This would encourage students to submit original ideas, since the feedback would directly help them perform better on graded assignments where they are expected to demonstrate these cognitive skills in other ways.
Much of what I’ve said here may seem speculative, which is somewhat inevitable considering how new the problem is and the limited experimentation around it. However, not questioning the value of our existing assignment structures runs the risk of putting our university departments in a position of forever playing catch-up with the rest of the world. In fact, the question of writing assignments is one where I think analytical philosophers can and should take the lead – their eye for fine-grained distinctions can help us separate the baby from the bathwater.
What a thoughtful post – and so timely! I hope we collectively can use this crisis to re-think the purpose(s) of higher education. A few thoughts, prompted by your post:
Yes to ungraded written assignments (at home) to train thinking skills; incidentally, I had arrived at this very conclusion just recently! That said, going back to in-class written exams need not be just a knee-jerk reaction. If well designed, these can be very appropriate methods. If we want to train, and possibly also to assess, people’s thinking skills, giving students a number of questions from which to choose one, and an hour or so to pen down an answer, seems like a good way to do it. (We should, of course, relax some standards and not expect much by way of sentence construction, overall essay architecture, etc.)
Re grades: I don’t think that it is the job of universities to signal students’ competitiveness to future employers and admissions committees. The desirability of grades themselves is, I think, a huge issue that the current AI-induced situation should prompt us to re-think.
Finally, I really don’t see why we should aspire to uniformity of assessment, including uniform toleration of AI in assessment, across disciplines. Different disciplines have different aims, hence they train and test different things. To stick with your analogy: if at some point one needs to rely on complicated maths in some unrelated field, it seems fine to rely on the help of a calculator. But, presumably, not if one studies advanced maths.
Thanks again for writing this post, Sanat!