Nowhere is the human–animal divide more enthusiastically defended than when someone talks about human dignity. According to advocates of this widespread idea, our “human dignity” captures the exceptional value and status that humans uniquely possess. Not only is it thought to elevate us above other animals; it also serves as the basis for distinctly human rights, as enshrined in several international covenants and constitutions. In other words, dignity seems to do a lot of work in explaining why we have value above and beyond that which other animals possess.
The trouble is that a distinctly human dignity cannot be plausibly justified. I will explain why shortly, before going on to suggest that there is one saving grace: dignity can be made into a far more robust idea – and without giving up too much of what is valuable about it. But the catch is that this is only possible if it includes rather than excludes other animals, such as dogs, pigs, or birds.
Contemporary Western societies are often criticized for being excessively individualistic. One interpretation of this claim is that their citizens mainly care about their own well-being and not so much about that of others or about communal bonds. Another, complementary interpretation that I develop here argues that our ideas in economics and about justice overestimate the contributions individuals make to economic production. Recognising the extent to which our productivity and thus our standard of living depends on the cooperation of others has a humbling effect on what income we can legitimately think we are entitled to.
Housing deprivation is a manifest sign of injustice in many cities. It occurs when individuals either cannot access housing or face a high risk of losing their homes, with the result that people end up living on the streets or in precarious situations. According to UN-Habitat, 1.8 billion people lack adequate housing. In Latin America, housing deprivation affects more than 28 million lower-income households. In Brazil, data from the 2022 Census show that 281,472 people are homeless, and the Brazilian IBGE estimates that more than 5 million people are living in irregular housing. Two questions arise: why is this an injustice, and how can we best address it?
In recent years, these questions have gained increasing scholarly attention, in particular following the book on the subject written by Casey Dawkins (2021) and the work done by Katy Wells (2019; 2022). Both philosophers claim that housing deprivation is an injustice because it violates basic ideas of fundamental human needs – which have material and relational dimensions. However, they propose resourcist housing policies as a solution. In this post, although I agree with them that housing deprivation requires a multidimensional normative account, I argue that we should go beyond a resourcist policy.
Consider the following excerpt from an article written by a former student at the University of Oxford –
“The green and lush lawns of the colleges you observe are due to the policy Oxford has maintained for centuries of allowing only professors to step on the grass. Everyone else is obliged to keep walking along the concrete path, even when talking to a professor who may be walking across the grass. The rule is indeed an odd one, since it creates a certain one-upmanship between the professors and other teaching and support staff, as well as students.”
I argue that this rule, which I will henceforth refer to as the ‘restrictive lawn policy’, is not merely odd but also morally objectionable.
Almost five years ago today, on the 31st of October 2018, Extinction Rebellion was publicly launched outside the UK Parliament. Since then, it has been one of the most influential environmental movements in the UK and in other parts of the world, instrumental in changing the public conversation and in prompting the UK Parliament to declare a climate emergency in 2019. Using non-violent civil disobedience and mass arrests as its main tactics, in its first few years the movement organised a number of often theatrical actions, including blocking roads and bridges, with activists gluing and locking themselves to structures in public spaces. The question of whether public disruption was indeed the right tactic for Extinction Rebellion, and for the environmental movement more broadly, has dominated conversations inside and outside the movement ever since.
The US, like many other countries, is marked by pervasive racial inequalities, not least in the job market. Yet many US Americans, when asked directly, uphold egalitarian “colour-blind” norms: one’s race shouldn’t matter for one’s chances of getting hired. To be sure, there is substantial disagreement about whether race (still) does matter, but most agree that it shouldn’t. Given such egalitarian attitudes, one would expect there to be very little hiring discrimination. The puzzle is how, then, to explain the racial inequalities in hiring outcomes.
A second puzzle is the frequent occurrence of complaints about “reverse discrimination” in contexts such as the US. “You only got the job because you’re black” is a reaction familiar to many who do get a prestigious job while being black, as it were. Why are people so suspicious when racial minorities are hired?
In a recent paper, my colleague Nicolas Olsson Yaouzis and I offer an explanation for both puzzles: we model the workings of implicit racial bias in a population of egalitarian norm followers. Implicit biases have been shown to affect basically all of us. They are, roughly, automatically activated stereotypes about social groups. They are often unnoticed and unendorsed by their bearers. And they correlate with social inequalities on population levels. But how, exactly, should we understand the underlying mechanism? Here’s our model:
Imagine a big firm consisting of a large number of subsections, each headed by a manager. From time to time these managers hire new people. Assume that the firm was initially all-white (think of the early sixties Mad Men era). Managers knew that there was some great competence among the black applicants, which would benefit the firm overall. Still, each preferred to head a racially homogenous subsection, because it saved them trouble. They were thus all trapped in a prisoner’s dilemma: each doing what was (supposedly) better for them, while the firm missed out on competence.
Social norms solve prisoner’s dilemmas. Suppose that (in the early seventies) the managers become aware of an egalitarian social norm: “When hiring, hire the most competent candidate, regardless of their race”. This norm changes their incentives: as long as they believe that enough others will both comply with it and expect them in turn to comply, they want to comply with it themselves. Imagine now that each manager comes to believe this about the other managers. Each then complies with the egalitarian norm and hires black applicants whenever they are the most competent. Slowly, the racial composition of the firm will change.
However, if there are implicit racial biases among the managers, these will sometimes distort their actions. They want to comply with the norm but sometimes make mistakes. Due to the nature of implicit bias, these mistakes are asymmetrical. That is, they sometimes occur when the most qualified candidate is black (so that a less qualified white candidate is hired instead), but never when the most qualified candidate is white (so that a black candidate would be hired instead).
Now, assume that there are many decisions and many managers, so mistakes add up. This could, on the whole, explain large-scale hiring inequalities. But it would also seem to mean that each manager, observing the egalitarian norm being violated time and again, would cease to believe that enough others comply with it. Moreover, observing that such frequent norm violations are not met with protest by the others, each manager would cease to believe that enough others expect them in turn to comply. Each would then no longer want to comply, and the norm would break down.
Yet this is not what seems to happen. Rather, the norm stays in place (people uphold it when asked) and large-scale hiring discrimination persists (causing the pervasive racial inequalities). Our model can account for this by illuminating the intricate interplay between implicit bias and job competence. To see this, consider a specific recruitment case in which the most competent candidate is black. Their race is a clearly observable feature; their competence typically is not. Suppose that the hiring manager makes an implicit-bias mistake and hires a less competent white candidate. The other managers likely cannot directly observe that a norm violation has taken place. They can, however, observe the successful candidate’s race. If they (like most of us) hold implicit racial biases, they may perceive the white candidate as more competent than they actually are, and (falsely) infer that the egalitarian norm was followed. Thus no one protests, and no one changes their belief that enough others comply with the norm and expect them in turn to comply. The norm may be repeatedly violated, but it does not break down. This explains the first puzzle.
Now consider the same case, but suppose the manager doesn’t make a mistake. The most competent black candidate is hired. Again, the others can observe the candidate’s race but not their competence. Again, if they hold implicit racial biases, they may perceive the black candidate as less competent than they actually are – and (falsely) infer that the egalitarian norm was violated. If this happens repeatedly, observers might eventually (falsely) conclude that the egalitarian norm has come to be replaced with a norm of “political correctness”: “When hiring, hire the most qualified minority candidate (to increase firm diversity)”. This explains the second puzzle.
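The mechanism described above can be sketched as a small simulation. To be clear, this is a hypothetical illustration with made-up parameter values (`mistake_rate`, `perception_bias`) and simplified competence draws, not the authors' actual model: a norm-following manager hires the truly most competent candidate except for asymmetric implicit-bias mistakes, while biased observers see the hire's race but only a distorted competence signal.

```python
import random

def simulate(rounds=10_000, mistake_rate=0.2, perception_bias=0.3, seed=1):
    """Toy sketch of the two puzzles (parameters are illustrative)."""
    rng = random.Random(seed)
    violations = caught = false_alarms = 0
    for _ in range(rounds):
        black = rng.random()            # true competence, black candidate
        white = rng.random()            # true competence, white candidate
        if black > white and rng.random() < mistake_rate:
            hired = "white"             # asymmetric bias mistake: norm violated
            violations += 1
        else:
            hired = "black" if black > white else "white"
        # Observers see race, not true competence: implicit bias shifts
        # perceived competence up for the white candidate, down for the black one.
        p_black = black - perception_bias
        p_white = white + perception_bias
        if hired == "white" and p_black > p_white:
            caught += 1                 # genuine violation actually noticed
        if hired == "black" and p_white > p_black:
            false_alarms += 1           # compliant hire looks like a violation
    return violations, caught, false_alarms

violations, caught, false_alarms = simulate()
print(violations, caught, false_alarms)
```

With these illustrative parameters, most genuine violations go unnoticed (`caught` is far below `violations`, the first puzzle), while compliant hires of black candidates are flagged as suspicious far more often than actual violations occur (`false_alarms` exceeds `violations`, the second puzzle).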
In sum, we propose a toy model of hiring decisions. Those are, of course, much more complex in real life. Still, the model helps us see the double role of implicit bias: in the hiring decisions themselves, and in bystander evaluations of these decisions. It solves the two puzzles by explaining how hiring discrimination can be invisible in seemingly egalitarian social contexts — and why instead non-discrimination may appear suspicious.
A final word: in a world where explicit racism is on the rise, why worry about implicit bias? Our analysis does not imply that explicit racism doesn’t matter (and we certainly think it does). It just shows that focusing narrowly on eradicating explicit racism will not be enough.
Scholars familiar with the philosophical arguments in favor of robust free speech protections commonly identify three kinds of arguments given in favor of such protections:
1. Free speech helps us discover truth,
2. Free speech is required for democratic self-governance,
3. Free speech is an important part of autonomy.
Contemporary social and political circumstances—including the persistent spread of viral misinformation via social media—have called these traditional arguments into question.
Can we really claim that free speech helps us discover truth when the data suggest that falsehoods travel, on average, much faster and farther than truthful corrections? Does free speech, on balance, help preserve democracy when the integrity of elections is being undermined by orchestrated viral disinformation campaigns?
Such questions prompted by social, political, and material reality ought to be taken seriously. Taking such questions seriously may require us to reconsider what kinds of arguments best ground free speech rights. This may, in turn, require us to reconsider what good free speech law and policy should look like.
This is a guest post by Anh Le. Anh currently works in the NGO sector on environmental issues but previously taught at the University of Manchester, where he also got his PhD, writing on the ethics of force short of war.
It’s important to note at the outset that what unfolded on Saturday October 7th in southern Israel – when Hamas fighters overran the Israel–Gaza border, infiltrated deep into Israeli territory, murdered more than a thousand Israelis, and took more than a hundred hostages back into Gaza – was a war crime (or at least most of it was; the killing of Israeli soldiers, even if most of them were unarmed, can be argued to constitute the legitimate targeting of combatants in an armed conflict). Equally important to note is that the way the Israel Defense Forces (IDF) have responded to the initial attack also violates international humanitarian law – for example, the blockade of Gaza and the indiscriminate bombing of residential areas. At the time of writing, the IDF hasn’t officially launched a land invasion of Gaza, although some ground incursions have occurred. In this post, I argue against what has widely been taken as fact: that Israel has the right to go to war against Hamas following the attack, and that the only morally and legally relevant question is how it goes about doing so (a question of jus in bello). Rather, it is not clear that Israel’s war meets the criteria of jus ad bellum – the right to go to war – and thus not clear that Israel has a right to go to war against Hamas.
I should first make clear that I will not weigh in on the ethics of the situation between Israel and Palestine. The history is protracted and there are others eminently more qualified to unpack it than myself.
Post-truth is often viewed as a threat to public affairs, manifesting in phenomena such as vaccine scepticism, climate change denialism, and the erosion of public discourse. Yet combating post-truth is rarely viewed as a priority by policymakers, and the preferred ways of combating it usually take the form of localised epistemic interventions, such as fact-checking websites or information campaigns.
We’ve all done things we regret. It used to be possible to comfort ourselves with the thought that our misadventures would soon be forgotten. In the digital age, however, not only is more of our personal information captured and recorded, search engines can also serve up previously long-forgotten information at the click of a button.