Can Our Current Academic Model Go On in the Age of AI?
It has been almost three years since ChatGPT was released in the public arena amid great hopes, worries, and, perhaps more than anything, hype. While artificial intelligence tools, including those of the Large Language Model variety to which ChatGPT belongs, were already deployed in many areas by then, it was this event that sparked both the widespread obsession with AI and the subsequent pouring of trillions of dollars into AI development in just three years. Meanwhile, though still relatively in its infancy, Generative AI has indeed impacted numerous fields, including education and research, which together form the core dimensions of academia. Some of the concerns raised by the use of Generative AI in academia, especially when it comes to student evaluations, have already been taken up on this blog, here, here, here, and here. In fact, of all the Justice Everywhere posts focusing on AI in the past three years, exactly half looked at this particular problem, which is unsurprising, since, on the one hand, most of the contributors to this blog are educators, and, on the other, political philosophy is methodologically built around the capacity to engage in original thinking. In this post, which inaugurates the new season of Justice Everywhere, I want to signal a broader issue, which is – to put it bluntly – that key aspects of the way in which academia currently works are likely to be upended by AI in the near future. And, crucially, that as a collective, we seem to be dangerously inert in the face of this scenario.

It is a plain fact of our professional life, well known among academics (but often unknown to the wider public), that academic value is mainly measured in terms of one’s research outputs. More precisely, the more research papers you publish, the better the journals where you publish, and the more cited your papers are, the more likely you are to be seen as a valuable member of the academic community. Research outputs, it can be metaphorically said, are the currency of an academic career and the cornerstone of what can be labeled our current academic model. They are what will get you jobs (even entry-level ones), promotions, tenure, and grant funding. The current academic model has therefore structured incentives in such a way that most of the energy of an average scholar is channeled into a relentless pursuit of an improved research profile, oftentimes – it should be said – to the detriment of other important areas of activity, such as public engagement, or even teaching duties. This incentive structure has always run the risk of leading to ethical violations in search of quick academic success. While instances of brute plagiarism are fairly rare in the developed parts of contemporary academia (but a major problem in the developing parts), more sophisticated methods that are ethically controversial (e.g. salami-slicing) are sometimes used, not to mention outright data fabrication, with scandals taking place at the highest university levels in recent years, including in places such as Harvard, Duke, or UCLA, to name just a few. Still, for all their malignancy, these were relatively isolated incidents that could be contained within the current system.
Then ChatGPT came to the fore, with its ability to generate massive amounts of text whose origin can be suspected as dubious, but not definitively demonstrated to be so. At the outset, ChatGPT was rather unappealing as a tool for academic writing, since it lacked access to the internet, it hallucinated at an unsustainable rate (especially when it came to references), and it gave overly simplistic answers to prompts. As of the time of writing, though, ChatGPT and other LLMs are much more advanced in all of these respects, especially when prompted in a piecemeal fashion, to the point where they raise serious concerns for research integrity. After all, if students are using them to improve their academic essays, or, at the extreme, to outright cheat, shouldn’t we expect some academics to do the same? Surely, an academic paper is more complex than a student essay, but academics also have a much more advanced level of knowledge that they could deploy more skillfully in the text-generating process.
Many readers of this piece have surely had similar thoughts by now and may even feel they have encountered AI-written text from their peers. To give an anecdotal example: a few weeks ago, in the span of just a couple of days, I reviewed an article for a top journal in political theory that had sub-sections clearly written with AI, in the standard bullet-pointy fashion (in spite of this, it was not desk-rejected), while a colleague of mine received a referee report from a similarly well-placed journal that was – in both our assessments – AI-generated. Non-anecdotal evidence is beginning to accumulate as well. In a survey of 5,000 researchers, published in Nature in February 2025, almost 20% reported that they had used LLMs in peer review (though the extent of this usage was not probed). Knowledge that this is occasionally happening has induced some scholars to put hidden text in their papers prompting AIs to give positive reviews, on the chance that the referees will just use an LLM instead of actually doing the review themselves, basically turning the whole peer-review process into a big, unfunny joke. Furthermore, responses to a number of vignettes in the survey show that 13% considered it appropriate to use ChatGPT to write a full draft of a research paper without disclosing the use of AI, with the percentage going as high as 24% in the case of abstracts. And while only 4% of respondents acknowledged that they had actually used AI for writing drafts without disclosing it (though social-desirability bias is likely to distort the real figure), 37% reported that they would be willing to use it for writing full drafts (with or without disclosing it). [1] Both perceived acceptability and actual usage decrease with age, so we can expect more PhD students and early-career researchers to engage in this kind of behaviour, and we can probably expect significant differences among fields of study [2].
Even a quick glance at these basic figures shows, at the very least, that we are moving into a situation where research practices are beginning to raise serious problems of fairness, since some, but not all, scholars are open to benefitting from claiming authorship over something that is, in crucial parts, not the original intellectual product of their own minds and efforts. Moreover, in the absence of broad consensus over what counts as ethical usage of AI in writing academic texts, with a diverse range of attitudes toward using it, and in a context where AI usage is not necessarily detectable and not definitively demonstrable, this unfairness is not only here to stay, but is likely to compound in the near future.
Strands of unfairness are, of course, a staple of academic writing. For instance, the PhD student graduating from Oxford will ordinarily have had better research training, infrastructure, and access to resources than her counterpart graduating from the University of Bucharest, and a native English speaker will ordinarily have an advantage over a non-native speaker in writing proficiency (a disadvantage that could actually be reduced through the usage of AI as a translator). So, while the problem may be more severe as of now because of the differential usage of Generative AI, it may be possible to think of ways to mitigate it sufficiently that nothing fundamental in the academic system must ultimately change.
The “as of now”, however, is crucially important. In the past couple of years we’ve seen LLMs improve to a great extent, even if the rate of improvement is not exponential and may be slowing down, so there is little reason to think this trend won’t continue in some form. But more than that, we should look at other AI tools that appear to be on the horizon, even though the timeline for their realization might be in question. As some of those who keep up with the news on AI development might recall, late 2024 was filled with claims by prominent techbro oligarchs about the imminent arrival of AI agents, as early as 2025 (spoiler alert: this did not materialize). What should qualify as a (true) AI agent, whether some existing AI tools should be labeled AI agents, what the difference between an AI agent and agentic AI is, and so forth, are questions that can only be answered by wading through a very messy conceptual terrain, from which I will steer clear for the moment. So, to simplify, what I have in mind by the term is something akin to Gary Marcus’ description: “If chatbots answer your queries, AI agents are supposed to do things on your behalf, to actively take care of your life and business. They might shop for you, book travel, organize your calendar, summarize news for you in custom ways, keep track of your finances, maintain databases and even whole software systems, and much more. Ultimately agents ought to be able to do anything cognitive that you might ask a human to do for you”. In this sense, AI agents have not yet burst onto the scene, but while we should discount claims made by top executives of AI-developing companies – who stand to benefit from the hype they sustain – we should not dismiss the likelihood that these technological developments will be realized, albeit on a longer timeframe. As Marcus ends his previously cited article, “I don’t expect agents to go away; eventually AI agents will be among the biggest time-savers humanity has ever known. There is reason to research them, and in the end trillions of dollars to be made”. So while LLMs can already draft full academic papers, albeit of doubtful quality if not revised, we can probably expect that AI agents will be able to do genuinely in-depth research with minimal input, including generating new ideas, going over all the relevant literature, writing in a particular style, formulating and answering objections, revising the text until it mirrors ordinary academic writing, and so forth. While speculation here is inevitable, it seems likely, then, that AI tools capable of writing in a manner indistinguishable from academics will at some point be available, and that this point is not decades away, but rather years away.
What will academia look like in this scenario? Some – especially senior scholars – will probably just go on doing the hard work involved in producing academic texts, from start to finish. But many will not. If your career depends on your publication outputs and a new AI-generated article is at your fingertips with minimal effort, virtually no cost of submission, and no likelihood of reprisal, do we seriously think that controversial ethical guardrails will prevent you from submitting it, especially when you know that others are doing the same? I, for one, very much doubt it. Instead, our current academic model, which did not develop for a context where massive amounts of text can be generated effortlessly, gives rise to a straightforward collective action problem, since it would actually incentivize (instead of discourage) the submission of fully AI-generated papers, massive in number and uncertain in quality, thereby (1) turning the scientific community into a landscape marred by profound unfairness and gross ethical violations, and (2) clogging research fields altogether, since the peer-review process (already heavily strained) would become completely unsustainable.
To return, then, to the question I put forward in the title of this piece, my own response is a negative one. I simply do not see how a model where academic value is primarily taken to reflect research outputs can persist if those research outputs can be generated virtually instantaneously and costlessly. Such a system is bound to lead to a flood of inauthentic, AI-written content, damaging the career prospects of honest scholars and, most importantly, damaging the research process beyond recognition. So what can be done? There may be many avenues we want to explore, but I will briefly mention just two, mainly for illustrative purposes. One proposal that has already been pitched at various times as a solution to the existing peer-review crisis and/or the ballooning number of (low-quality) publications is to cap the number of papers one can publish or submit for publication, either on an annual basis or over one’s entire lifetime. All these versions, however, raise different problems and would ultimately not solve the authenticity concern, since you could just submit whatever you perceive to be the best AI-generated papers. Another, which I am tentatively inclined to favour, is to disincentivize research altogether, by moving to an academic model of career-building that prioritizes other features, such as teaching or public impact, over research outputs. This would, undoubtedly, be difficult to conceive, and many people will naturally be disheartened or even outraged at the idea of such a fundamental professional shift [3]. But by disconnecting research outputs from career opportunities, academics would no longer have the incentive to maximize paper submissions and would engage in research only for other reasons, such as genuinely wanting to produce authentic knowledge, leading to a diminishment, but not extinction, of publications (which, presumably, would also be of better quality on average). Furthermore, there might be some positive upshots to such a shift as well, for example because it might push academics away from focusing on contributing to narrow and frequently sterile debates involving, sometimes, tens or even fewer people working on a topic, and into the public arena, where scientific and philosophical expertise is sorely needed but has often been rather underwhelming.
Still, my aim here is not to defend any particular solution to this problem, but rather to stress that it is crucial that we, as academics, put it on the agenda and open a serious conversation about it, rather than shrugging it off as something that is innocuous, distant, or solvable by applying some easy patch to the system. None of these is the case, and if we don’t begin to approach the problem now and move academic norms and practices in whatever direction we expect to be more fruitful, we will, again, remain the principal actors of a system shaped by the interests of others, from university managers, to major publishing houses, to private donors, and maybe even to tech CEOs, with little regard for our own knowledge, experiences and collective voice.
_
[1] I gloss over the results reported for editing and translating texts, which do not seem to raise similar levels of concern, though they may not be entirely unproblematic either. In fact, the divide is perhaps sharpest in this case, with 28% of respondents saying that they have used AI to edit their papers (whether disclosing it or not), 43% saying that they have not but would be willing to, and 29% saying that they have not and would not consider it.
[2] It is probably the case that some fields will also be more resilient than others to the problems discussed here. My own vantage point is mainly that of a political philosopher, where the research conducted is purely theoretical, but I can imagine that other theoretical fields will experience sufficiently similar problems, so framing the article as solely concerning political philosophy would have been excessively narrow. Empirically oriented fields of research (at least in the social sciences) may encounter distinct issues. On the one hand, there will still be space for original contributions that often cannot be fully automated, for example in the collection of data. But on the other, datasets themselves could be more easily fabricated through AI usage (and, perhaps, be harder to detect as fraudulent?), amplifying ethical concerns instead of reducing them.
[3] I, personally, am one of these people, since I spent much of my youth attempting (though not always succeeding) to write good-quality papers, whose rewards I was looking forward to reaping as I move into the mid-level part of my career.