Yearly Archive: 2026

The Injustice of Not Feeling Wronged

In this post, Sushruth Ravish (IIT Kanpur) and Ritu Sharma (University of British Columbia) discuss their article recently published in the Journal of Applied Philosophy on experiential injustice in cases of marital rape.

“El Requiebro” by José Agustín Arrieta (from Wikimedia Commons).

Can one fail to know that one has been wronged?

Often, our knowledge of being wronged arises not from detached reasoning but from the body’s own signals—anger, fear, humiliation, pain. These feelings are not just reactions to harm; they are how we recognise harm. They tell us that a boundary has been crossed and that something ought to be resisted. Now imagine losing that capacity altogether—to endure a wrong yet fail to sense its wrongness; to experience harm as ordinary, expected, or even obligatory. We argue in our recently published paper in the Journal of Applied Philosophy that such a loss is a distinct kind of injustice, namely experiential injustice.

When harms are unrecognisable

In 2018, the United Nations Office on Drugs and Crime reported that the home is the most dangerous place for women. The majority of women who are raped are assaulted by partners, family members, or acquaintances. Yet in many countries, marital rape is still not a criminal offence, or is treated as less severe than other forms of rape. Even where laws have changed, underreporting remains a widespread issue. Many survivors do not identify what they have experienced as rape, describing it instead as “just how marriage works.” Philosophers often interpret this through the concept of hermeneutical injustice. Miranda Fricker defines this as a harm that occurs when people are wronged in their capacity as knowers because they lack the shared interpretive resources needed to understand their experiences. In societies where “marital rape” is an unavailable or marginal concept, victims may endure violations without being able to recognise or articulate them.

Beyond hermeneutical injustice

While this account is important, it stops short of capturing the entire range of epistemic harms. Hermeneutical injustice assumes that victims can at least sense that something is wrong, even if they lack the words to describe it. But what if that sense itself collapses? In our recent article in the Journal of Applied Philosophy, we propose the concept of experiential injustice to capture a deeper kind of epistemic harm. Experiential injustice occurs when trauma, oppression, or internalised domination not only distort interpretation but also erode the very capacity to apprehend one’s experience as morally or epistemically significant. Put simply, hermeneutical injustice presupposes an intact sense of wrongness. Experiential injustice goes one step deeper, to the loss of that sense altogether.

Losing epistemic self-trust

Survivors of marital rape often describe going numb, dissociating, or complying mechanically. They may say they “stopped feeling anything” or came to believe that sex is “a wife’s duty.” From the outside, such reactions look purely psychological—symptoms of trauma or depression. But they are also epistemic. When we lose the capacity to perceive a violation as a violation, we lose access to a fundamental kind of knowledge. Our ordinary mechanisms for recognising and evaluating harm—our emotions, our bodily awareness, our moral perception—no longer function as they should. This marks a collapse of epistemic self-trust: the ability to rely on one’s own affective and perceptual cues as sources of knowledge.

How experiential injustice arises

Our paper identifies three mechanisms through which experiential injustice develops:

1. Trauma-induced disruption.

Repeated coercion can fracture the link between experience and meaning. Over time, the body suppresses sensations that signal danger as a means of survival. This is not merely psychological numbing—it is epistemic damage. The body is one of the primary sites through which we make sense of the world, and when it stops signalling wrongness, understanding falters.

2. Adaptive numbing.

In oppressive environments, emotional detachment often becomes a survival strategy. When resistance brings punishment or social ostracism, submission may seem like the only viable path. Over time, this adaptation hardens into a stable state of indifference, making it difficult to access one’s own sense of violation.

3. Internalised norms.

Patriarchal scripts about wifely duty and marital obligation can make coercion appear not only normal but morally appropriate. When refusal is framed as selfish or disobedient, compliance can feel virtuous. Here, moral evaluation itself has been reprogrammed. These processes often overlap: trauma feeds numbness, numbness eases internalisation, and internalisation prevents recovery.

Why this matters

Recognising experiential injustice alters how we perceive epistemic harm. It reminds us that knowing is not only conceptual in nature. It is also affective and embodied. Conceptual gaps, the focus of hermeneutical injustice, can often be addressed by social or legal reform. But experiential injustice resists such repair. You can introduce a new term like “marital rape,” yet for someone whose evaluative framework has collapsed, the term may carry no meaning. To restore epistemic agency, one must first restore the capacity to feel when something is wrong. This also means that epistemic repair must go beyond conceptual interventions. It must attend to the restoration of self-trust, bodily awareness, and emotional attunement. Survivors need conditions that allow them to feel and to trust those feelings again. Recognising experiential injustice illuminates the profound internal consequences of oppression.

Taking experiential injustice seriously means acknowledging that epistemic repair is not complete when victims can name their experiences. It is complete only when they can once again feel that what happened to them was wrong, and trust that feeling as knowledge. Only then can survivors begin not merely to speak about the wrong, but to recognise it in the most intimate sense.


About the Authors:

Sushruth Ravish currently serves as an Assistant Professor in the Department of Humanities and Social Sciences at the Indian Institute of Technology Kanpur. He earned his PhD from IIT Bombay, where he was awarded the Naik and Rastogi Prize for Excellence in PhD Thesis. His research lies at the intersection of ethics and epistemology, focusing on the nature of epistemic norms and moral judgments, as well as exploring the limits of transparency and explainability in AI systems. His publications have appeared in journals such as the Journal of Applied Philosophy, Philosophia, Kriterion – Journal of Philosophy, the Journal of the Indian Council of Philosophical Research, Indian Philosophical Quarterly, and the South African Journal of Philosophy.

Ritu Sharma is a PhD Candidate in Philosophy at the University of British Columbia. She previously completed a PhD at the Indian Institute of Technology Bombay and has held teaching positions at the Thapar Institute of Engineering and Technology, Patiala, and at the Narsee Monjee Institute of Management Studies (NMIMS) in Mumbai. Her research lies at the intersection of Practical Ethics and Social Philosophy, with a current focus on marital rape, unjust sex, hermeneutical injustice, and questions of agency. Her work has appeared in the Journal of Applied Philosophy, Kriterion – Journal of Philosophy, and the Journal of the Indian Council of Philosophical Research.

Worse than AI writing is AI reading. What can we do?

While we’re all worried that assigning home-written essays has stopped making sense because students outsource the task to AI, and we’re all scrambling to invent alternative ways of assessment, this particular blogger is even more concerned about the effects of students relying on brief (or not so brief) AI-generated summaries of the readings that they should do before class. In my short post-LLM teaching experience, worse than AI writing is AI “reading”. And, I want to stress, that’s not merely because students aren’t doing the readings. Rather, it’s because they seem to think that what they get from, say, ChatGPT is enough for them to understand the view of the author in question and to have justified opinions about it. This surely doesn’t work, at least not for readings in philosophy, which is what I teach. Students may know, in a nutshell, what conclusion the author wants to support and a couple of reasons in favour of it. But because they have not struggled to extract these from an article or a book chapter with their own natural intelligence, they fail to appreciate the complexity of the issue we discuss, the difficulty of defending a particular position on it well, the temptation to think about the matter very differently, and the extraordinary challenge that it sometimes is even to formulate the right questions. The result is boredom.

In the classroom, boredom is the kiss of death; eyes would like to roll, hands yearn for the phone but remain still, because my students are mostly polite and because I ban mobile phones in class. Everybody seems to be having a mental cramp. Of course we do: since they have not been through the discovery adventure but have skipped straight to the outcome, students’ comments are flat, and their questions – which they should prepare in advance of class and be ready to talk about with their colleagues – are pro forma and most often vague, so as not to betray their lack of familiarity with the text. Boredom is contagious. People appear unable to imagine how one could think differently about the questions we discuss – something that a well-written paper would have made vivid to them. Even incendiary topics (say, culture-wars material) are met with apathy.

For many years, Jo Wolff had a wise and funny series of editorials in the Guardian; one of the earliest praised academic prose for being boring. It’s for fiction writers to create mystery and suspense; philosophers (for instance) should start with the punch line and then deliver the argument for it. I agree with sparing readers the suspense, but after a series of academic conversations with ChatGPT I discovered that, if pushed to the extreme – the formulation of a thesis and the bare-bones argument for it – this kind of writing is the worst. It kills curiosity.

What should we do? Perhaps turn some of our classes into reading-together-in-silence events? Back to monastic education! I talked to colleagues, who told me about several things they’re trying in order to get students to read again (without AI). An obvious possibility is to ban all use of LLMs by students and explain the reasons: our job is not primarily to populate their minds with theories, but to help them understand arguments, teach them how to pull them apart, and maybe occasionally to build them. I’m not sure about this solution either. For one thing, a well-prompted LLM is better at reconstructing a slightly unclearly and imprecisely presented argument than the average reader and many students; indeed, AI often produces much better abstracts of academic work than academics themselves, and well-written abstracts are really useful. Another problem is that policies which can’t be enforced are for that reason deficient, and, I suspect, the very attempt to directly police students on their use of AI would be just as anti-pedagogical as the use of AI itself. (Reader, do you learn from those you resent?)

Alternative suggestions are to change how we teach. Quite a few colleagues have started to read out excerpts in class, then discuss them on the spot. One of them goes as far as asking students to memorise them, in an attempt to revive proven methods of Greek and Roman antiquity. This sounds good, time-consuming as it is; better to do a little and do it well than to do a lot for naught, though I’d stop short of requiring memorisation. Others ask students to annotate their readings before class and check that they have done so, or use Perusall and similar platforms to read the assignments collectively in preparation for class. I used Perusall to great success in the Covid era, but when I tried it again recently it was a disaster of cheating and complaints. Some teachers are printing out readers, or organising hard copies of books for the students, in the hope that this dissuades them from uploading digital files to LLMs. One colleague introduced 5–10-minute flash exams at the beginning of each class, to check that students have read. And another picks two students in each class, randomly, and asks them to co-chair the discussion about the reading of that day.

In the medium term, perhaps universities should double – or triple – the length of time that students spend together, with an instructor, for each class, and earmark the extra time as “study group”, when students read and write. There’s something dystopian about this model, and it would massively increase workloads for instructors, so in practice it should mean more jobs, perhaps with lower compensation. But is this really worse than giving up on the goal of teaching students how to read and write essays? Everybody would resist, no doubt; but by the time the value of degrees, including their market value, is next to nothing, universities might face a choice between closing down and reforming in ways that we find hard to imagine now.

As for the next academic year, I wonder whether I should assign readings that I won’t cover at all in my lecturing, but which will be of great help to students in the discussion section. Those who come to class having read only the LLM-created abstract will be the poorer for it. But, since I won’t ask them to discuss the papers, we might – most of us – escape the boredom mill.

Any thoughts?

Xenophobic bias in Large Language Models

In this post Annick Backelandt argues that xenophobia should be understood as a distinct bias in Large Language Models, rather than being subsumed under racial bias. She shows how LLMs reproduce narratives of “foreignness” that particularly affect migrants and refugees, even without explicit racial references.

Image by HelenSTB from Flickr


Funding Research Randomly

In this post, Louis Larue (Aalborg University, Denmark) discusses his article recently published in the Journal of Applied Philosophy on the appropriateness of selecting research applications randomly.

Philosopher in despair after his many applications for funding were rejected by Rembrandt, Musée du Louvre, Paris; Public Domain via Wikimedia Commons.

Applying for external funding is an integral part of academic life. Universities dedicate huge amounts of resources, and often have entire teams of administrators and advisors, to help researchers obtain external grants and manage the immense load of paperwork required to administer successful applications. Researchers and teachers, at all stages of their careers, spend considerable time and resources writing, reading, revising, and submitting applications. If successful, they will then have to write various reports and will be required to master the complex and often obscure language of funding agencies. At a more advanced stage of their careers, they will also dedicate a significant share of their time to reviewing and evaluating applications submitted by others and to sitting on various selection committees.

Most of the time, the selection procedure involves (in one or several steps) the evaluation of the scientific quality of the submitted applications, by one or several peer reviewers. When all evaluations have been gathered, a selection committee usually selects successful applicants. The ideal behind this procedure (which I have only sketched, and which varies across countries and institutions) is to select, impartially, the “best” applications, that is, those with the highest level of scientific quality, properly defined.

Let’s call this selection procedure the “Peer Review procedure” (or PR). In recent years, it has attracted much criticism. For many, it is a costly, biased, and conservative procedure that is unable to deliver on its promise to select the best applications. In response to these criticisms, many authors have advocated mixed procedures involving various degrees of peer review and random selection (for instance, here and here).  Following usage in the literature, I will call these mixed procedures “Modified Lotteries” (or ML).

The modified lottery is a two-stage procedure. At stage 1, the members of the selection committee select, among all eligible applications, those they judge to be qualified, that is, those that meet minimal standards of scientific quality. At this stage, only the “worst” applications are rejected. The selection rate is thus allowed to be high, or, in any case, much higher than the current selection rate. At stage 2, a certain percentage of the applications selected at stage 1 is randomly selected. The percentage of applications selected at stage 2 is simply a function of the amount of money at the disposal of the funding agency.
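
To make the two-stage structure concrete, here is a minimal sketch in Python. It is only an illustration of the procedure described above: the quality scores, the threshold, and the budget figure are invented for the example, not drawn from the article.

```python
import random

def modified_lottery(applications, min_quality, budget, seed=None):
    """Two-stage 'modified lottery': peer review screens out applications
    below a minimal standard (stage 1); chance decides among the rest (stage 2).

    applications: dict mapping applicant -> reviewer quality score
    min_quality:  minimal standard of scientific quality (stage 1 threshold)
    budget:       number of grants the agency can afford to fund
    """
    rng = random.Random(seed)

    # Stage 1: reject only the applications that fail the minimal standard;
    # everything else stays in the pool, so the pass rate can be high.
    qualified = [a for a, score in applications.items() if score >= min_quality]

    # Stage 2: draw the winners at random; how many is simply a function
    # of the money at the funding agency's disposal.
    return rng.sample(qualified, min(budget, len(qualified)))

# Invented numbers: six applicants, a threshold of 5, money for two grants.
apps = {"A": 9.1, "B": 7.4, "C": 6.8, "D": 6.7, "E": 4.2, "F": 3.0}
print(modified_lottery(apps, min_quality=5.0, budget=2, seed=42))
# E and F are screened out at stage 1; two of A-D are funded by lot.
```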

In this post, I shall argue that the modified lottery procedure would strike a better balance between scientific quality, cost-effectiveness, impartiality, and fairness, than the current peer review procedure. (In the article, I also discuss, and reject, pure random selection, but I leave that part of the argument aside here).

Cost-effectiveness and scientific quality

A first intuitive argument for the use of random selection is that it would free up time and money for researchers to do actual research. For the time dedicated to writing and reviewing applications amounts to time not dedicated to research and teaching. Considering that most applications are rejected, this time is generally wasted.

However, the cost-reducing potential of random selection should not be over-estimated. A recent survey of applicants to the Health Research Council of New Zealand, which is among the first funders to use a modified lottery, reports that most applicants declared that they did not reduce the time spent writing their applications. Moreover, the time dedicated to reviewing proposals is not necessarily wasted. First, reviewers may be expected to set aside at least the proposals that do not meet minimal standards – an ability that should not be underestimated. Second, even if we assume that they cannot, getting rid of peer reviewers entirely may remove the incentive to write serious research proposals.

Hence, the relationship between the costs and benefits of investing time and money in selecting applications demands further consideration. In the article, I argue that costs are justified if they allow setting aside the applications that do not meet minimal standards of scientific quality, and that they are unjustified otherwise. Hence, dedicating time and money to peer reviewing applications is justified up to the point where peer reviewers can no longer perform their selection job. The empirical literature has for years stressed that peer reviewers are often unable to agree on the ranking of excellent applications, though they are more likely to agree on which applications do not reach a minimal level of quality. The modified lottery is thus to be preferred to the current system, because the limited space it gives to peer review makes it possible to reduce its costs in a way that is not detrimental to scientific quality, since stage 1 ensures that some peer reviewing still takes place. Though it may be impossible to find the “optimal” level of peer review, it is likely to be greater than zero and lower than the current level.

Impartiality and biases

A common complaint about peer review is that it is biased. There is evidence that the Peer Review procedure tends to be biased against women and ethnic minorities. Moreover, personal relationships as well as a preference for one’s own area of expertise tend to skew the peer reviewers’ evaluations. For all these reasons, a selection procedure based on peer review is unlikely to be impartial.

It is uncontroversial to say that these biases are bad, even morally wrong. Yet we may have reasons to accommodate some biases for the sake of retaining some place for peer review. In short, my argument is the following: peer review is necessary to (at least) set apart the worst applications from the rest and to avoid removing the incentive to write minimally good applications. Yet peer review is also inherently biased in some way. Hence, getting rid of all biases would require getting rid of peer review entirely, which would be detrimental to scientific quality. How do we get out of this dilemma?

My view is that, because peer review is inescapable, we should allow for the possibility that biases will influence the selection procedure. In that context, the modified lottery is preferable to the current system, because it minimises the influence of biases by leaving only a limited space to peer reviewers. However, those who would condemn biases more severely than I do will have to contemplate getting rid of peer reviewers entirely and turning to pure random selection instead. My view is that the latter move would come at a cost for the advancement of science, because it would lower the probability of funding the best research. As I argue below, it may also be unfair.

Fairness

A further frequent complaint against the current peer review procedure is that it is unfair (see for instance here), though “unfair” is often confused with “biased”. However, this complaint may also be raised against proposals to select research proposals randomly (either partially or totally): isn’t it unfair to excellent applicants to consider all applications equally?

In the article, I use Broome’s idea that the fair distribution of a good requires that claims to that good be satisfied in proportion to their strength. In our case, the good to be fairly distributed is research money. People’s claim to that good depends on how likely their future research is to produce the best science. Therefore, one may say that grants are distributed fairly when they are allocated to the proposals that have the strongest claim to research money, that is, to those that are the most likely to produce the best research.
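
A toy numerical sketch may help fix ideas here; the claimants, claim strengths, and fund size below are invented purely for illustration, and the closing comment glosses how a lottery can honour equal claims to an indivisible good.

```python
# Broome-style proportionality: claims to a divisible good are satisfied
# in proportion to their strength (all numbers hypothetical).
claims = {"X": 3.0, "Y": 2.0, "Z": 1.0}   # claim strengths
fund = 60.0                               # a divisible good, e.g. money

total = sum(claims.values())
shares = {person: fund * strength / total for person, strength in claims.items()}
print(shares)   # {'X': 30.0, 'Y': 20.0, 'Z': 10.0}

# A grant, by contrast, is indivisible: a proposal is funded or it is not.
# One natural extension is to let chances, rather than shares, track claim
# strength, so that equally strong claims receive equal chances. That is
# what stage 2's lottery gives every application that survives stage 1.
```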

In an ideal world without budget constraints, biases and other limitations, the peer review procedure would be the best and the fairest procedure: it would always select the most deserving applicants. But we do not live in such a world. First, in the real world, budget constraints may prevent funding bodies from giving money to all deserving applicants (i.e. those who have the strongest claim to it). Second, peer reviewers may be unable to reach a consensus on who the most deserving applicants are (a phenomenon that I call “epistemic limitations”). In that world, the modified lottery is the best choice.

As I have argued above, we may expect peer reviewers to be able to track scientific quality up to a certain point. If peer review has some value, the first stage of the modified lottery will separate the applications that have some minimal level of merit (that is, a “minimal claim to research money”) from those that do not. The first stage therefore guarantees, at least to some extent, a certain degree of discrimination based on merit. But beyond that point, random selection is to be preferred, since no actual argument based on reasons is available where epistemic uncertainty prevents reviewers from collectively distinguishing between applications. At stage 2, random selection ensures, at least, that all applications that have passed stage 1 have an equal chance of getting funding, and that it is not biases or arbitrariness that decide among them.

The modified lottery is therefore not a fair procedure: it will not automatically distribute research money to those who have the strongest claim to it. But it is fairer than other procedures. It is fairer than pure random selection because it leaves some place to merit, which pure random selection fails to do; and it is fairer than the current system because, once the possibilities of peer review have been exhausted, it does not pretend to be able to select the best proposals among proposals whose relative merit is indistinguishable by reviewers (or disputed). Rather, it gives equal weight to all of them.

Some readers may still complain that the modified lottery disrespects excellent applicants, those who really deserve to be selected. In response, I would like to stress that the first stage of the proposal is meant to ensure that the best candidates are among the pool of short-listed applicants, and that they are selected according to shared standards of scientific quality. My view is that we cannot hope for more: it is beyond the capacity of peer reviewers to discover the “truly” best applicants. Moreover, the second stage limits the influence of non-scientific criteria (biases, etc.), which might be present at stage 1, so that good candidates with profiles that are more likely to attract biases have a higher chance (compared to the present system) of being selected. So both stages actually contribute to increasing the ability of the procedure to track scientific excellence, rather than something else. Finally, we may have serious doubts that the current procedure is selecting the best applications. Lack of resources and various biases, as well as possible disagreements among evaluators on the quality of different applications, prevent the current system from doing its job well. Therefore, though there is a risk that the modified lottery will sometimes fail to select some of the best applications, this risk is probably not much higher than under the current peer review procedure.


Louis Larue is a researcher at Aalborg University, Denmark, and a guest professor at the Hoover Chair of Social and Economic Ethics, UCLouvain, Belgium. He has published on the ethics of money and finance, and on several issues in the philosophy of economics. His first book, Alternative Currencies: A Critical Approach, has just been published by Routledge.

Workshop announcement: Tackling speciesism and anthropocentrism in higher education

Before we return to our schedule of regular posts, I wanted to take the opportunity to share information about this online workshop.


From institutional pressures to competing demands from students, teachers are increasingly having to navigate complex political, pedagogical, and ethical challenges. For anti-speciesist teachers in the context of anthropocentric societies, there are several further layers of difficulty: how should we approach the teaching of core subjects and the general “canon”, when those often replicate speciesist norms and assumptions? Is it necessary to balance “objectivity” and advocacy? Is pedagogical or academic rigour threatened by moves towards animal-friendly pedagogy? How should we engage with students and colleagues who are resistant to non-anthropocentric perspectives? What specific pedagogical strategies or curriculum design choices (e.g., choice of texts, use of various media, interactive activities, assessment design) can anti-speciesist teachers effectively employ to introduce non-anthropocentric materials without alienating students or triggering a defensive backlash?

This online workshop aims to bring together academics working in politics, philosophy, and adjacent fields to consider the challenges and opportunities associated with tackling speciesism and anthropocentrism in higher education. It will be an opportunity to share ideas, research, and experience. We invite contributions from anyone involved in teaching in relevant fields. We’re looking to provide a space to share reflections on experiences as well as formal paper-presentations. Keeping this in mind, we invite submissions of the following types:

  1. Research papers discussing topics related to the workshop theme, including but not limited to:
    1. Animal activism and teaching,
    2. Teaching controversial topics related to animals,
    3. Teaching the canon with animals in mind,
    4. The intersection between non-anthropocentrism/anti-speciesism, decolonisation, and/or diversification of the curriculum,
    5. The effectiveness of pedagogical interventions,
    6. The role (or reaction) of the broader institution in (or to) animal-friendly pedagogy.
  2. Case-studies, including but not limited to:
    1. Experience of developing non-anthropocentric/anti-speciesist curricula.
    2. Experience of teaching on topics such as non-anthropocentrism, animal rights, veganism, and so on.
    3. Experience of non-traditional forms of assessment, such as reflective journals, campaign projects for animal-related issues, policy design or review addressing animal-related issues. 

Submissions must be suitable for approx. 15-20 minute presentations and Q&A/discussion. Please send anonymised submissions to sara.vangoozen [at] york.ac.uk

The deadline for submissions is 30 March 2026.

For any further information, please also contact Sara van Goozen.