a blog about philosophy in public affairs


The (in)justice of critical philosophy of race?

In a recent presentation about a relatively new academic field called the critical philosophy of race, I was (repeatedly) questioned about the reasons for retaining the concept of race after it has been so clearly delegitimised. I was surprised by how much I struggled to find a satisfactory answer to this question, both for others and for myself. Part of my struggle arose from the context of the discussion. While I appreciate the challenge posed with regard to the concept of race, the composition of the group made me uncomfortable about its motivation. The group was composed entirely of white, heterosexual, male, post-Christian European citizens: the epitome of what in the field of critical philosophy of race is referred to as white privilege. By contrast, the field itself is one of the most diverse in academic philosophy, with strong representation of scholars from groups underrepresented in terms of gender, religious affiliation, non-European origin, etc.
The latter is significant in that these scholars – many of whom come from marginalised groups – recognise that, because of the history of racism, the category of race has been (at least rhetorically) delegitimised, and yet there are several justice-based reasons for retaining the concept of race. One such reason is to discredit the claim that we are living in a post-racial society. Recent events, such as the tragic political and legal injustice that arose in Ferguson, clearly demonstrate this. On a very different scale, which makes it easier to deny that racism is the root of the problem, are the recent debates in the Low Countries about ‘Zwarte Piet’.
 
Another reason is that by denying the category of race, it becomes much more difficult – both legally and socially – to fight current manifestations of racism, such as the cultural racism central to Islamophobia. Partially because of the intentional efforts of the (surviving) Jewish community after the Shoah to be ‘deracialised’, there has been a political campaign to disentangle the categories of race and religion – as manifested in the term anti-Semitism. This, however, makes it much more difficult for groups that currently fall within this (cultural) race-religion constellation, as Muslims do, to appeal to laws created to combat racism.
This problem brings me back to my original concern. While there may be good reasons to retain this concept politically or legally, the question being asked was why a philosophical field might retain the concept of race. From the perspective of those posing this question, the concept has been intentionally delegitimised in Europe and needs to be forgotten. This claim is based on how European society responded to the shame of the Shoah by promoting legal, political and social campaigns to delegitimise the concept of race (see, for example, UNESCO’s substitution of the term ‘culture’ for ‘race’ in the 1950s). Accordingly, it seems unjust to retain the category of race: unjust both to the group that was most destroyed by these events (which is not to deny that the Nazis persecuted other groups) and to Europeans, as it reminds them of a past they have moved beyond.
Yet isn’t the latter perhaps a reason to retain the concept: to remind us all that while we can move beyond the signifier, we have not moved beyond the signified? Do we not need a concept of race to help us make sense of this particular set of social relations of power that shaped, and continues to shape, our world? While I grant those in the room that the concept of race has morphed and changed since the Shoah, and that we therefore need to constantly study and reflect upon these changes (a reflection that includes considering letting go of terms that are no longer philosophically significant), race is not at that stage, either philosophically or politically. As such, I have to wonder whether the desire to silence race talk in Europe arises from wanting to sweep responsibility, both past and present, under the carpet.
Clearly no one would contend that the central problem of exclusion, which has historically been achieved through the creation of hierarchical categories (whether race, religion, nation, etc.), has disappeared – so are there good reasons for retaining such delegitimised and offensive concepts?

Are we socially (and not just legally) obligated to presume innocence?

Content note: this post contains and links to discussions of rape and sexual harassment.


Social attitudes towards rape and sexual violence and harassment have over the last few years been undergoing what Laurie Penny has aptly called ‘rape culture’s Abu Ghraib moment’. From Steubenville, to Jimmy Savile, to academic philosophers, we have been confronted with both how widespread rape, sexual violence and harassment are, and how awfully they are dealt with by the police, courts and institutions. Closer to home for me, a few months ago the Oxford Union president was arrested for rape and attempted rape (the charges were later dropped). This resulted in a campaign for him to resign his position as president and for invited speakers to cancel their appearances until he did. The ‘public intellectual’ A.C. Grayling, however, refused to cancel his appearance, saying that the president was innocent until proven guilty and should not be tried in the ‘kangaroo court of public opinion’. This has become a common response to accusations of rape (with ‘kangaroo court’ the favourite, and somewhat tired, description). The alleged rapist, it is argued, should not be subject to social sanctions, and society should reserve judgement, because of the principle that people are innocent until proven guilty.

I vehemently disagree with this. But when challenged I have in the past been somewhat unsure of my reasons for disagreeing. One argument is that though innocent until proven guilty is an extraordinarily important principle, it is primarily a legal principle. That means it applies to the courts and the legal process of convicting someone of a crime. If someone is to be subjected to state punishment (from fines, to jail, to being executed), then they have the right to be presumed innocent until proven guilty, so that the obligation rests with the prosecution, not the accused, to prove guilt beyond reasonable doubt. It is, however, not clear that public condemnation of an alleged rapist should be subject to the same principle. As has been pointed out, the so-called ‘kangaroo court of public opinion’ is not actually a kangaroo court. A kangaroo court (such as a white lynch mob) disregards the standards of a fair trial in order to punish the accused. Public discussion and condemnation does not (usually) seek to actually replace the legal process, determine guilt, and then exact the kind of punishment normally reserved for the state.

But I am unsure of this argument. First, it relies on a kind of reasoning in which the legal and the social are entirely divorced, which I would normally reject. I do not, for example, accept the absurd argument that women, queer people and people of colour have achieved equal status because many (but certainly not all) legal discriminations have been removed; that claim is undermined by the continued existence of social oppression upheld through patriarchal, white supremacist and heteronormative norms. Second, public condemnation and discussion is not the whole story. Social sanctions, which include being personally or professionally shunned and being removed (or temporarily stepping down) from public positions, are graver than public condemnation and can approach state punishment in their consequences for the accused. Trying to argue that carrying out these kinds of social sanctions does not punish the accused in the way a court does seems unconvincing. Justifying them requires more than saying that innocent until proven guilty is just a legal principle.

I think the more convincing defence of public condemnation and social sanctions, and thereby of overruling innocent until proven guilty, is based on the flawed legal processes and social attitudes that surround rape and sexual harassment and violence. Rape culture and its associated myths infect every step of the legal process, from the police to judges. Combined with the social shaming and condemnation of victims, this means that rape remains a dramatically under-reported, under-prosecuted and under-sentenced crime. In the absence of a correctly functioning legal system and of societal attitudes that support victims, I think it is therefore justifiable to publicly condemn and socially sanction alleged rapists and harassers. Of course this will vary from case to case, based on which crime they are accused of and the actions taken by the institutions that are supposed to deal with it, and there is no easy formula for this. I think these actions are, however, necessary to challenge the ideas embedded in rape culture and to replace them with the kind of norms and institutions that would seriously reduce the prevalence of rape and harassment.

In closing, it is worth reflecting on why people place so much emphasis on innocent until proven guilty when it comes to rape and harassment. I suspect that this is in fact one more feature of rape culture. At its heart rests the profoundly mistaken view that false accusations of rape and harassment are rife. We should remember that insisting that the accused is innocent until proven guilty is so often based on the assumption that the victim is ‘lying until proven truthful’. To counteract that, I think it is essential to believe and support victims. As Stavvers has convincingly argued:

‘Silence is the biggest weapon patriarchy has in keeping rape culture alive, and “I believe her” starts to tear down this wall and encourage and empower survivors to speak out. Because of this, it is crucial that we resist the attacks on this notion, the slurring it as “mobs” and “kangaroo courts”, because it isn’t. It’s solidarity in the face of patriarchy, and we should be proud that it is starting to terrify those who would rather we shut up.’

Two arguments on Scottish Independence, one for and one against

I was not personally affected by the vote on Scottish independence, but like many political junkies I was very much interested. It wasn’t merely intellectual curiosity that drove me to follow it: the vote was a unique and precedent-setting event on the stage of global politics that may well have implications beyond the Kingdom-that-is-for-now-still-United. Among my British friends, there was a split between those who were tentatively relieved and those who were tentatively disappointed that Scotland did not, in fact, secede; yet all of them had a hard time deciding. I believe this is partly because we don’t have good frameworks for thinking through issues of boundaries and secession, as the old political ideologies (like imperialism and nationalism) are losing their grip. Liberalism and democracy are typically perceived to have no say on questions of boundaries and membership, and that’s a big problem for anyone who believes in individual rights and democracy. With this kind of motivation in mind, I’d like to briefly present two arguments, neither conclusive, that were not featured prominently in the debate about Scottish independence – one for, one against.

What reasons do people give for and against Scottish independence? To put it very crudely, the Yes argument was mostly nationalistic and the No argument commonly economic (that is, about material welfare). Thus, the Yes people said that Scots are a nation and therefore deserve political independence – it is their right to control their own collective affairs. The No people said that an independent Scotland would either do worse than it is doing now or fare terribly, with all sorts of catastrophic scenarios flying around. Of course, the Yes people responded by saying that independence would not have such dire consequences and might even have some economic benefits, but their argument was still, for the most part, about national self-determination.

That brings me to one argument in favor of Yes. It seems important to have a living example of a nation achieving independence via a vote. It’s a historic opportunity to witness a nation gain statehood by ballots, not bullets, and to poke a hole in the generalization that independence is gained with blood and tears or not at all. Some political leaders worried that other national minorities looked to the vote with thoughts of their own national aspirations. If the vote succeeded, the thought went, such aspirations would be strengthened, and that would lead to instability. But it seems to me that the opposite is true: such a peaceful campaign is a remarkable example of the potential of discursive and non-violent means for achieving political goals, which might encourage minorities to pursue similar non-violent means in the quest for their political autonomy. That wouldn’t be the cause of any ensuing instability, but a much better way of addressing already existing tensions – a euphemism for the fact that many national minorities suffer discrimination, mistreatment and oppression. If you value democracy, you want to see it succeed where much blood has been shed before: in the struggle for political independence.

This leads us to the problem with the Yes argument. That the conversation was couched mostly in nationalistic terms is, I believe, a source of concern. For various reasons I can’t enumerate here, I am very skeptical about the idea of nationalism in general and about nationalism as a basis for political independence in particular. One troubling aspect of nationalism is that the idea that nations should have their own states, and that states should be nation-states, forces people to choose. Why can’t someone be both Scottish and British? If nations are to have their own states, each state should have a clear nation; and if there is a nation that doesn’t have a state, it must either get its own state or live as a minority in a state that isn’t its own.

More importantly, I think that there is a potentially better argument for the Yes campaign that wasn’t as prominent in this discussion. That is the democratic aspect: would a new independent state improve the Scottish people’s ability to affect the matters that concern their own lives? Some Yes people made that argument, usually within the nationalistic framework: as a nation, the Scots would be in a position to manage their own lives. But I’m not interested in the Scots as a nation; I’m interested in Scots (and the English, and all other affected parties) as individuals. Would independence improve individuals’ democratic standing? Would they have more say in decisions that impact their lives? I’m not sure, and I haven’t heard many people make a persuasive argument either way. Some Yes people think that an independent Scotland would mean an improvement in democracy because there are, generally speaking, differences in preferences between the population of Scotland and the rest of the UK: Scots tend to support more social policies, such as governmental funding of education and healthcare, than the policies of the UK government provide. An independent Scotland would therefore better reflect the preferences of most Scots, while the remaining citizens of the UK would have policies that reflect their preferences.

This might be true. However, there are various other issues that complicate the story. Would an independent government in Scotland be sufficiently strong to pursue its own policies in the face of pressures from international markets and a strong neighbor? For example, if a now-independent Scotland attempted to regulate labour standards more rigorously, would it be able to enforce them given the competition with its southern neighbors? Or would it end up complying with the standards of the Westminster government – only now a much more conservative government in which Scots would have no say?

These are empirical questions that are hard to answer, but to my knowledge they have not been the focus of empirical study in recent years. That is partly because the kind of democratic considerations I’m raising here have not been prevalent in discussions of boundaries and secession, though I think they should be.

What does it mean to be a spectator to injustice everywhere?

Given that this blog is inspired by Martin Luther King Jr.’s quote “injustice anywhere is a threat to justice everywhere”, it seemed obvious to me that the topic for this week’s blog had to be the injustice perpetrated by the state of Israel. However, as I sat down to write, I realized that there is very little I could write that hasn’t already been written and shared a million times over (often thanks to social media). So instead I would like to raise a few questions about the relationship between ‘injustice anywhere’ and ‘spectatorship’*. With regard to this relationship, I briefly raise the following six questions.
1. What does our commitment to justice mean if we allow our attention to be easily distracted – whether by sports, consumerism, etc.?
2. Is it easier to get involved in a struggle for justice when one does not feel responsible?
3. How, and why, has our sense of direct political responsibility for injustice changed over time? Has it become harder to find a reason to act out against injustice?
4. Setting aside questions of privacy etc., has Facebook (and other such social media sites) helped make people more or less politically informed and/or active?
5. What does it do to the spectator when we feel a strong sense of injustice combined with an immense feeling of helplessness?
6. Does not knowing what a just solution would be for a particular situation make it harder to speak up against injustice?
1. For the past two weeks this blog focused on what was central to so many across the globe – World Cup football. Looking at my Facebook feed, it is clear that for many people who identify (in some manner) as being committed to justice (e.g. as activists, academics, etc.), our attention was divided between the horrors in Gaza and the desire to be distracted by the drama of football. But even with all the excitement of World Cup football, politics and injustice were always in the shadows. Furthermore, thanks to some players, issues such as sexism, racism, poverty and even Gaza were (momentarily) brought to the forefront of viewers’ minds. While I can’t pretend that I didn’t appreciate the distraction of the World Cup, I am disappointed in myself. Why was it so easy to get caught up in the excitement of the Red Devils when it was surrounded by so much injustice – both that directly connected to World Cup football (and discussed over the past two weeks on this blog) and that in so many other parts of the world? The question I was forced to ask myself was: am I as committed to justice as I pretend to be? Can such a fickle commitment offer any serious challenge to injustice? Or is it possible that these types of distractions – sports, consumerism, entertainment, etc. – are intentionally created as part of the structures of injustice (as was proposed by members of the Frankfurt School)?
2. Another consideration is whether it is easier to be an active spectator in situations of injustice when one does not feel responsible. In other words, does participation – even something as simple as enjoying a football match – make it harder to speak out against the structural problems connected to FIFA, etc.? This certainly seems to be the case with regard to the Israeli-Palestinian conflict. While there is no doubt that European history, European nation-states and the EU have all played a significant role in this conflict, most spectators do not feel personally responsible (except perhaps as consumers of Israeli products). Could this be one of the reasons why many self-proclaimed non-political people (e.g. on social media) are now willing to make political statements?
3. A third question I wish to raise regarding the relationship between injustice and spectatorship is how, and why, our sense of responsibility for injustice has changed over time. Has it become harder to find a reason to act out against injustice? According to Margaret Canovan, “Amid the turmoil of revolutionary activity in the nineteenth century, one of the less-noticed effects of the historical and sociological theories invented at that time was a weakening of man’s sense of direct responsibility for politics” (288). Canovan’s claim is that academic theories from the nineteenth century, which sought to introduce stability in chaotic times, actually contributed to the disempowerment of collective actions, such as those against injustice, and to a lessening of our sense of responsibility for injustice. Or could it be the simple fact that we are now, more than ever, aware of how much injustice there is everywhere, so that we find it harder to decide which struggle to contribute to? Or are we in fact more aware of injustice, and more committed to justice, today than ever before?
4. Closely connected to the previous question, one of the interesting realities of the current Gaza conflict has been the struggle between classical media sources (TV, newspapers, radio) and social media. There are several national settings in which the attention paid to the tragedies in Gaza by way of social media forced the more pro-Israel classical media sources to report on events in Gaza, and to reframe stories in a more balanced manner. The question this raises is whether Facebook (and other such social media sites) has helped to make spectators more politically informed. Has Facebook created a virtual public sphere, and is this leading to more political participation?
5. Less than a week after this most recent Israel-Palestine conflict began, many spectators have begun to express a sense of immense frustration and helplessness. What can they, across the world, behind their computer screens, possibly do to prevent this injustice? Setting aside the question of what can actually be done, I think it is worth asking what it does to spectators when we feel a strong sense of injustice combined with an immense feeling of helplessness. Does it make us more or less likely to act, or does it further contribute to a weakening sense of direct responsibility for politics?
6. Last but not least, a question that is perhaps true of most situations of injustice but glaringly so with regard to the Middle East conflict: does not knowing what a just solution would be make it harder to speak up against injustice? Having spent my afternoon at a pro-Palestinian demonstration, I was struck by how divided both the actors and the spectators were. While most participants were willing to make statements (in front of a camera) regarding the need to stop the injustices against Palestinians, it was much harder to find volunteers to make specific political proposals. Speaking to the spectators – in this case the people who came to observe the demonstration and who expressed outrage at the injustice of the state of Israel – many chose not to participate because they didn’t know what a just resolution to this conflict should be. Is it the case that the gap between identifying injustice and outlining justice prevents many spectators from becoming actors?

*A spectator is someone sitting safely behind their computer or television screen observing, reading, blogging, passionately debating, etc., situations of injustice.

The Need for Content Notes and Trigger Warnings in Seminars

Photo by Goska Smierzchalska / CC BY-NC 2.0

Content note: this post contains a discussion of sexual violence and rape.

A few weeks ago I was at a seminar where the speaker unexpectedly diverted from the main topic of their paper and used a rape example to support their argument. As discussions of rape in philosophy seminars go, it was not particularly insensitive. But what disturbed me was that the pre-circulated paper’s title and abstract gave no indication that it would include a discussion of rape. A victim of rape or sexual violence would have had no warning that they were about to be confronted with an extended discussion of it. Given the appalling statistics on rape and sexual violence, that would almost certainly have included several people in the room. For them the discussion of rape might not have been just another abstract thought experiment, but an intensely triggering experience that brought back memories they did not want to deal with at that point. It made me think that the speaker could have respected this possibility by sending a short ‘content note’ (like the one above) with the abstract, warning people that the seminar would contain a discussion of rape.

Over the last few months there has in fact been a lot of online discussion of the use of content notes and trigger warnings[1] in academia. The recent debate was sparked by students at several US universities calling for content notes/trigger warnings to be included in course syllabuses. The idea behind these is to warn students that certain readings in the course contain discussions of topics that might be stressful or triggering. Much of the ensuing criticism has taken the line that they represent a ‘serious threat to intellectual freedom’ and even ‘one giant leap for censorship‘. This criticism is unfortunate because it falsely suggests that content notes/trigger warnings are there to stop or censor discussions of sensitive topics. Instead, the point of them is to facilitate these discussions by creating a safe and supportive environment where people are given a choice over how and when they engage with topics that they know can be immensely painful for them. As Laurie Penny argues: “Trigger warnings are fundamentally about empathy. They are a polite plea for more openness, not less; for more truth, not less. They allow taboo topics and the experience of hurt and pain, often by marginalised people, to be spoken of frankly. They are the opposite of censorship.”

Perhaps some of the hostility to content notes/trigger warnings comes from a lack of knowledge about how they could work. People seem to imagine them as these big intrusive and ugly warnings. I think an actual example of a content note shows us how far from the truth this is:

Course Content Note: At times this semester we will be discussing historical events that may be disturbing, even traumatizing, to some students. If you ever feel the need to step outside during one of these discussions, either for a short time or for the rest of the class session, you may always do so without academic penalty. (You will, however, be responsible for any material you miss. If you do leave the room for a significant time, please make arrangements to get notes from another student or see me individually.) 

If you ever wish to discuss your personal reactions to this material, either with the class or with me afterwards, I welcome such discussion as an appropriate part of our coursework.

Though much of the online discussion has focused on syllabuses and student seminars, I think it is important to recognise that the same arguments also apply to seminars among professional academics. We academics sometimes falsely assume that the standards and principles we apply to student and non-academic discussions do not apply to our own professional practices. An academic giving a paper or a lecture that includes potentially triggering discussions should give attendees advance notice of this. This allows people to prepare themselves rather than have it sprung upon them, and even gives them the opportunity to avoid coming at all if they feel they are not able to cope with the discussion that day. Of course this does not address what is said during the ensuing question period. It does not stop another academic from insensitively using an example of rape or sexual violence when they respond to the speaker. Content notes and trigger warnings cannot (and are not supposed to) cover every possibility. To address that, we could start by educating academics about what it’s like to be a victim of rape and to hear examples of rape used casually in philosophy seminars.

Some have argued that “life doesn’t come with a trigger warning” and tried to suggest that using them in any situation is therefore pointless. While we may not be able to change everything, seminars are a small sphere of life that we have the power to make less hostile and more welcoming.



[1] Content notes and trigger warnings are frequently confused. The difference is that “Trigger warnings are about attempting to identify common triggers for panic attacks and related experiences and tagging media for the benefit of people who find it helpful to be warned when media contains this material. Content notes are simply flags with information about content, to be used at the discretion of the person who encounters them.”

An Ethical Checklist for Military Intervention

Large-scale loss of life shocks our collective conscience.* The developing situations in Ukraine, the Central African Republic (CAR) and South Sudan have triggered loud and frequent calls for military intervention. This month, these calls were heeded in the Central African Republic: the United Nations Security Council announced its decision to intervene. The mission has been given the catchy name of the United Nations Multidimensional Integrated Stabilization Mission in the Central African Republic, or MINUSCA for short. 10,000 troops, 1,800 police and 20 corrections officers will be deployed.[1] The news has been greeted with jubilation on many social media sites.
 
 
This post is a note of caution. 

I do not understand the intricate dynamics of the conflict in the CAR. And, most likely, neither do you. This is the point. I will argue that without an in-depth and detailed understanding of the conflict, and a certain (and well-grounded) belief that an intervention will successfully stop the violence and do more good than harm, we should not be calling for the deployment of military troops. When we argue for the use of military force, we accept that troops can kill and, if things go wrong, be killed. The question of when military intervention is ever justified is not an easy one.


Before even considering the deployment of foreign military troops, all other efforts to stop the conflict, both internal and external to the country, must be exhausted. Such efforts include internal peace processes; diplomacy; supporting local, regional and international pressure to end the conflict; divestment; and many, many more.


Given the shaky record of military interventions, we should be skeptical about using military force to end violent conflict. There have been cases in which military intervention, aimed at preventing conflict, has made the situation worse. In Bosnia, the United Nations peacekeeping force was implicated in enabling the massacre of 8,000 Bosniaks. In Somalia, the United Nations-sanctioned military intervention exacerbated the conflict and arguably killed more civilians than the concurrent delivery of humanitarian aid saved.[2] Doyle and Sambanis (2006) conducted a large-scale quantitative study to evaluate the success of military interventions. They found that United Nations peacekeeping operations can improve the chances of peace. However, overall, they show that the record is ‘mixed’. Of the 121 peace operations they analysed, 63 were failures and 53 were successes. By ‘success’, they mean the end of violence and a degree of stability. On a more rigorous definition of success, which includes the introduction of institutions that prevent a re-ignition of the conflict in the long term, the results are much worse. In addition, they note that it is difficult to determine whether, in the 53 successes, the military intervention caused the ending of the conflict. This should be enough to dampen our enthusiasm for launching military interventions.


However, assuming that no alternative ways of stopping the violence exist, some interventions may be able to facilitate an end to conflict. So, before the call to intervene is made, what should we consider? The difficult part of making a judgement is that we have to make a predictive claim that military intervention can do some ‘good’. I will now outline some of the issues that need to be considered.


Firstly, can the violence actually be stopped?


The interveners need enough resources and political will to get the job done. Common sense dictates, and there is a lot of research to back this up, that military intervention costs money. The resources need to be appropriate to the task at hand. A military campaign to stop countrywide violence in Luxembourg is going to take far fewer resources than a military campaign to stop countrywide violence in Russia. In addition, stopping violence can’t be achieved overnight. Consequently, there needs to be sufficient political will, in terms of being prepared to lose troops’ lives, to stay in the country long enough and to bear the financial costs of the intervention.


Even more importantly, it is all very well to have sufficient resources, but can a military intervention actually stop the parties from fighting? If the conflict can’t be resolved or ‘won’ even with the best intentions and all the resources in the world, there may be no way of ending the violence. Therefore, before arguing in favour of intervention, there needs to be a detailed analysis of the causes of the conflict and the reasons for its continuation. Are there distinct and identifiable parties to the conflict? How many are there, and what are their interests? How are they likely to respond to military intervention? Will they be incentivised to stop, or will they start fighting more ferociously? Has there been military intervention in the conflict before? Will the memory of previous intervention attempts make ending the violence easier or more difficult? What are the chances of a military victory, whether by a party to the conflict or by the intervener? And if the interveners do succeed in ending the violence, will the conflict simply reignite when they leave the country?


Each conflict is different, with specific political causes, actors and dynamics enabling its perpetuation. Sometimes an additional military actor, even one with benign interests, will only serve to heighten the feeling of insecurity of the belligerents and increase fighting in a country. This deserves close attention before sending troops in with the aim of ‘saving lives’.


Secondly, there may be reasons to value the fighting.


The parties might be fighting for a good reason. For example, the conflict could be a liberation struggle: a fight to overthrow colonial oppressors, to remove an authoritarian dictator, or to win rights for oppressed minorities. We should consider that there may be wider social goods, beyond the immediate concern to save human lives, that are important. As a result, letting the conflict continue, or even providing support to a particular side, may be the best option.


Finally, what about the unintended side effects of a military intervention? 


There can be good side effects. Military intervention could signal to other would-be atrocity-committers that they won’t get away with it. However, other side effects are more ambiguous. Large military peacekeeping operations leave a significant economic footprint in a country. A recent study by Carnahan et al. (2007) suggests that the economic impact is often positive. However, as current evidence remains inconclusive, the potential economic impact should be considered.


A more harmful side effect, now well documented, is the growth of human trafficking when large-scale military operations are deployed.[3] In the last few years, the United Nations has made some positive steps to prevent this.[4] However, the risk still exists. Before an intervention, there should be confidence that the chances of success outweigh the potential risks of introducing a large number of foreign troops into a country.

Deciding whether or not to intervene is a hugely complicated question. A multitude of factors need to be considered, and this blog post is by no means exhaustive. I have not raised important questions of government consent, the popular opinion of those living in the country of intervention, and many more. But, to make my point simple and clear: before arguing in favour of intervention, at the very least, we need to be able to answer yes to the following questions:


1) Are there no better alternatives to stop the violence?


2) Does a military intervention have strong chances of stopping the violence? 


3) Are we sure that the conflict should be stopped?


4) Are we prepared to accept the possible unintended consequences of intervening militarily?


This blog post is not an argument against military intervention per se. Rather, it is a call for careful and serious consideration of these questions before supporting military intervention. My suspicion is that in the majority of cases where the United Nations and other organisations have intervened, the answer to all four of these questions has not been ‘yes’.


This is not meant to be pessimistic. There are many other actions, outside of military intervention, that we can take to try and end large-scale human suffering. As citizens we can call on our governments to stop supporting violent regimes and selling arms in zones of violent conflict. However, when violence does erupt, despite the horror we feel at seeing fellow human beings suffer, we may have to face the stark reality that, right at that moment, military intervention is not the correct solution.


*A quick caveat: the use of terms such as ‘our’ or ‘we’ in this post is not intended to denote the ‘West’ or the ‘international community’, as they are sometimes used in discussions of military intervention. I am talking to fellow peers who are considering arguing in favour of or against military intervention.

[1] See Al Jazeera: http://www.aljazeera.com/news/africa/2014/04/un-approves-peacekeepers-car-2014410141916684418.html
[2] Seybolt, Taylor B., Humanitarian Military Intervention: The Conditions for Success and Failure. Oxford: Oxford University Press, 2008.
[3] Mendelson, Sarah, Barracks and Brothels: Peacekeepers and Human Trafficking in the Balkans. Washington DC: CSIS, 2005. Available at: http://csis.org/files/media/csis/pubs/0502_barracksbrothels.pdf
[4] http://www.stopvaw.org/un_peacekeeping_missions

What’s Mine Is Yours, or Is It?

In the past few years we have all become familiar with the idea of the ‘sharing economy’, even in the absence of a clear-cut definition of it. If I asked you to name some examples of the ‘sharing economy’, your list would probably include Wikipedia, Spotify, Linux, airbnb, ebay, Mendeley. This, I believe, has to do with the fact that there is not much new about the logic that grounds it: we have all experienced practices of renting, lending, swapping, bartering and gifting, and we have all shared spaces, meals, textbooks, opinions, skills, knowledge, etc. with others. We engage in such practices for various reasons. Sometimes it is because we simply cannot afford to buy, and access, rather than ownership, is a more viable option. Sometimes we prefer to re-use unwanted items rather than throw them away because, in this way, we reduce the production of waste and, with it, our carbon footprint – all of which seems like a good idea in a world of scarce resources and increasing levels of pollution. At other times sharing is just more enjoyable; it creates or fosters relationships with other people, and often it leads to new uses and innovative ideas. So, what is different now? Why do we come across so many articles and blog posts talking about the rise of the ‘sharing economy’? And what has turned ‘sharing’ into an ‘economy’ proper?
 
Digital technology, and the advent of Web 2.0 in particular, appears to be the main driver of this transformation. It’s not just that ‘sharing’ seems to be the fundamental and constitutive activity of Web 2.0, and especially of social networks (John 2013); the Internet is also changing the way in which we engage in the ‘old’ practices of sharing. This is evident if you consider how easy access to Web 2.0 scales up the pool of potential ‘sharers’, and the amount of information about them, thus decreasing the high transaction costs associated with more traditional forms of sharing. In other words, Web 2.0 allows the creation of systems of organized sharing in both the production mode (i.e. commons-based peer production) and the consumption mode (i.e. collaborative consumption).
 
By leveraging information technology and the empowerment opportunities it makes available, the ‘sharing economy’ would seem to advance a new socio-economic model, in which the production, exchange and consumption of goods and services is based on horizontal networks and distributed power within communities, rather than on competition between hierarchical organizations. This seems like a valuable characteristic. And, indeed, we find that much of the enthusiasm about the sharing economy is generally expressed through the egalitarian language of cooperatives and civic groups. It invokes values like the dispersion of power, sustainability, community-level connectedness and solidarity, and the opposition to hierarchical and rigid regulatory regimes in favour of peer-to-peer (P2P) schemes (e.g. “What’s Mine is Yours: The Rise of Collaborative Consumption“).

But does this mean that the sharing economy is also changing capitalism from within? Are we witnessing a switch to a “camping-trip” type of economic organization? Is Web 2.0 really fostering an egalitarian ethos in our societies? Or is the sharing economy just the hi-tech version of the invisible hand?
These are of course big questions, and answering them requires more empirical evidence and critical reflection than I can provide here. That said, I would like to offer one critical insight, precisely about the appropriateness of describing the ‘sharing economy’ as an egalitarian practice. To do so, I will focus on the consumption side, because I find the egalitarian narrative surrounding collaborative consumption more troubling than that of commons-based production.

Solidarity without reciprocity? Often the egalitarian character of collaborative consumption is justified by drawing a comparison with economic systems like gift economies. What they have in common is that they are solidarity-producing or solidarity-enhancing practices. But there is a crucial difference. In gift economies the act of gift-giving was meant to be reciprocated: it created obligations to respond in kind, and these mutual obligations were the ties that bound society together. Collaborative consumption does not seem to work like that: benefitting from what is shared through the Internet does not create an obligation to reciprocate, and the increasing involvement of capital in the sharing economy (e.g. airbnb has now raised more than $300M in investments) is rapidly replacing reciprocity-driven initiatives with entrepreneurial ones.
Indeed, the more collaborative consumption goes mainstream, the more it becomes just a new business model for large companies. Even where the language and rhetoric of sharing are maintained, what characterizes platforms like airbnb and Lyft is the profit-seeking of shareholders, rather than the egalitarian solidarity of more traditional forms of sharing, such as gift-giving. With this I am not suggesting that the idea should be abandoned altogether, but that we should be more prudent in welcoming it as a practice that fosters our egalitarian inclinations. It also means that we should be more attentive to the quick developments that the ‘sharing economy’ is undergoing: indeed, there is very little egalitarianism in tactics like “surge pricing” (raising the price of a ride at busy times or during bad weather in order to attract more drivers onto the road), which Lyft and Uber have adopted recently.

What language should we use? Aesthetics vs. inclusiveness

The Economist is known for being a strident defender of all things capitalist (it was once said that “its writers rarely see a political or economic problem that cannot be solved by the trusted three-card trick of privatisation, deregulation and liberalisation”). One reason why it has been so successful in pushing this agenda is its widely acknowledged quality of writing. It is so well known for its clear, jargon-free writing that the Economist Style Guide has become a best-selling book. Idle browsing led me to their advice on what titles to use when writing about someone:

The overriding principle is to treat people with respect. That usually means giving them the title they themselves adopt. But some titles are ugly (Ms)… 

Now, it had not even occurred to me that anyone would think that “Ms” was “ugly”. I was brought up taking it for granted that we should automatically use “Ms” rather than “Mrs”, so it doesn’t even strike me as odd. Perhaps that reaction is different in older generations. (In any case, I doubt that we should be using gendered titles at all.)
But I wonder whether it even matters whether it is “ugly” or not. As the article suggests, the “overriding principle is to treat people with respect”, and whether or not a word or phrase sounds or looks nice seems a fairly unimportant consideration in comparison. Treating people with dignity and respect by using inclusive language seems to me obviously more important than aesthetic considerations. Using slightly longer or more unusual language is such a small price to pay for being decent towards other people.
However, a lot of people who do not like “politically correct” language seem to think differently. They scoff at differently abled rather than disabled, sex workers rather than prostitutes, transgender rather than transvestite. Their real motivation is usually that they do not believe in the underlying claims for respect and equality, but it is often dressed up as caring about the attractiveness of language itself. (For a perfect takedown of these “political correctness gone mad” people, see this sketch by Stewart Lee.)
 
Perhaps there is, however, a more respectable position than that of the anti-“political correctness” crowd when it comes to the trade-off between more inclusive language and aesthetics. Perhaps there is something to the idea that language should not be altered so much that it becomes sterile and bureaucratic. Maybe the aesthetic value of language is in fact greater than I have suggested. Let me even grant for a moment the point that some inclusive language can appear ‘unattractive’. Saying fisherperson rather than fisherman, for example, might truly strike some as weird.
But even on this I’m not convinced. Our understanding of what is and is not aesthetically pleasing language is not objective and unchanging. Just as with “Ms” and “Mrs”, I think we can become accustomed to new language quite quickly and no longer consider it unattractive. Salesperson, spokesperson and police officer have all become so accepted that I doubt anyone still sees them as intrusions on attractive language. Our aesthetic judgements are intimately connected with our wider views about justice and equality. When our views on the latter change, it affects the former.
 
Of course, the aesthetic costs of using inclusive language might vary from language to language. English, for example, does not have gendered articles (the, a) and has relatively few gender-specific nouns, and those that are gendered can be made neutral fairly easily. That is not the case with many other languages. German, for example, has gendered articles (der/die, ein/eine) as well as gendered forms of most nouns. In German you can’t just say “the student” or “a professor” and be gender-neutral, because there are different versions of the noun referring to females and males. So in order to be gender-neutral you have to write “der/die Schüler/-in” and “ein/-e Professor/-in” to include both female and male students and professors. That is more cumbersome and less attractive than it is in English. But the alternative is using a single gender (which nearly always means the male gender) to cover everyone. I think the consequences of that are much worse than using a few extra slashes and hyphens.
 
The temptation might be to try to find some middle ground position. But in this case my view is that inclusiveness trumps aesthetics every time when it comes to language. The language we use shapes the environment that people live in, and when that language excludes and insults people it contributes to a hostile and oppressive environment. I’m willing to sacrifice quite a lot of aesthetic value to avoid that.

Capping Working Hours

Recently, the scandalous decisions of some investment banks to treat their employees like human beings by suggesting they take Friday nights and Saturdays off have prompted much debate amongst financial journalists and their ilk.
The issue of long working hours is not limited to investment banks; a US survey of 1,000 professionals by Harvard Business School found that 94% worked fifty hours or more a week, and for almost half, working hours averaged over 65 hours a week. With increasing automation in production chains moving labour into customer-facing service roles, more individuals will likely face this challenge in their daily lives.
There are good reasons to think that these hours are not useful at all. Economists have long known that as working hours increase, the marginal product of workers falls – mistakes increase and the quality of work produced declines.
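As a stylized illustration of this diminishing-returns claim (a toy example of my own, not a model drawn from the research alluded to above), suppose weekly output Q is a concave function of hours worked h:

$$Q(h) = a h - b h^{2}, \qquad a, b > 0.$$

The marginal product of an extra hour is then $Q'(h) = a - 2bh$, which falls as h rises and turns negative beyond $h^{*} = a/(2b)$: past that point, additional hours (through fatigue and mistakes) actually reduce total output.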
More important than the impacts on productivity, however, are the morally relevant considerations related to cultures of long working hours:
Industries with long working hours are typically biased in favour of those who do not have other commitments limiting their available time – most notably childcare, which currently, in our society, means women. The economist Claudia Goldin finds that gender gaps in wages are greatest in those industries which exhibit “non-linear pay structures” – essentially those in which individuals who can work extremely long hours are disproportionately rewarded. This describes most jobs in the corporate, financial and legal worlds.
There are important health implications of longer working hours, with significant evidence that those who regularly work longer than 48 hours a week are “likely to suffer an increased risk of heart disease, stress related illness, mental illness, diabetes and bowel problems.”
Finally, there are various employment-related issues worth considering – for example, would unemployment be reduced if each 100-hour-per-week job were split into two of 50? Would such a policy help reduce the concentration of power in organisations, as key managerial tasks would likely have to be increasingly shared?
While our society might gain significantly from moving away from long working hours, it will always be incredibly difficult for any firm to act unilaterally in this matter, due to substantial co-ordination failures in this area.
The appropriate response, I believe, is for government to intervene with a hard cap of 48 hours per week that applies across almost all industries, with no built-in exceptions beyond those which are absolutely essential. The current EU Working Time Directive, which is supposed to provide a similar function, is farcical in its ability to constrain working hours, due to the number of exceptions and opt-out clauses built into it.
A hard cap of 48 hours would be difficult to implement, would have some uncomfortable implications (for example, forcing individuals who enjoy their jobs to go home and stop working) and would likely have some negative consequences for the economy. However, there seem to be substantial gains to be made, and I believe these are large enough to justify developing such a cap.

*Update: The Marxist economist Chris Dillow has an excellent post describing how problems like long working hours can naturally arise without actually benefiting anyone.

Which way will Europe go in May 2014? Is free movement the key to the elections and a just Europe?

In the aftermath of the euro crisis, there is an increased awareness (both in the hallways of the EU Parliament and amongst the citizens of Europe) that what is most needed is some type of political union. It is my contention that the greatest threat to such a political union, and to any sense of solidarity between the many peoples and nations that make up Europe, is the attack on free movement that first arose from several national right-wing parties. Preventing free movement is a top priority for all these parties: Marine Le Pen’s National Front in France, the UK’s UKIP, Geert Wilders’ PVV in the Netherlands, Norway’s Defence League, the Sweden Democrats, Hungary’s Justice and Life Party, Bulgaria’s Attack Party, Austria’s Freedom Party, the Greek neo-fascist Golden Dawn, Germany’s new anti-euro party (AfD), and, closer to home, the Flemish N-VA led by Bart De Wever. All are expected to achieve record results in May 2014; several are also trying to join forces to form an anti-European and anti-migration coalition. Their rhetoric, according to which the euro crisis and the rising unemployment figures are all due to free-movement policies, has been immensely successful – so much so that it is being adopted by many more centrist and left-leaning parties. Within Europe, their discourse attacks the most recent members of the EU, those from Bulgaria and Romania. Towards those beyond the borders of the EU, their discourse is one of defending the so-called Judeo-Christian tradition that grounds Western and European civilization. Tragically, both of these positions often boil down to a form of Islamophobia, as the implicit assumption is that free movement has allowed Muslims (the Jews of the 21st century) to invade Europe. This point is even more acute this week, with Monday being International Holocaust Remembrance Day.
If they have their way in May 2014, Europe will shift so far to the right that it will, like Humpty Dumpty, have a great fall. The solution is for the voting public to realize that immigrants, and third-country nationals, are the solution and not the problem. Here are three reasons why Europe needs to open its borders further, rather than further restrict movement.

  
1. The economic expense of restricting free movement is excessive. David Cameron, among other leading European politicians, is trying to keep this evidence from the public until after the elections. It is not only a historical truth that Europe could not have survived without immigration; it is the current economic reality of all major European nations, as demonstrated by Philippe Legrain (LSE). The other economic cost is the rising budget of Frontex, the EU’s border management agency. While its official budget was only €86 million (in 2013), this covers only administrative costs, as the equipment costs (which are well into the billions) are taken directly from the national budgets of the poorest European countries (who are required by EU policy to ‘control’ their borders).
2. The ethical price is too high to pay. As Nina Perkowski documents, there have been almost 20,000 deaths among those trying to get into Fortress Europe. This scandalous number does not include the many – including the elderly, children and pregnant women – who have been seriously injured, imprisoned or exiled, all for wanting nothing more than a better life for themselves or their loved ones.
3. The controlling of these borders, especially as implemented by Frontex since Eurosur (a pan-European surveillance system) went live in December 2013, has violated many basic human rights, blatantly ignoring the Charter of Fundamental Rights of the European Union. These violations include overcrowded and unhygienic conditions in supposedly temporary prison cells, the ‘alleged’ use of torture, etc.
(for the interactive map see: http://frontex.europa.eu/trends-and-routes/migratory-routes-map)
If the above three reasons to open Europe’s borders don’t convince you, perhaps the political or pragmatic truth will. As long as pro-European parties continue to adopt the anti-immigration rhetoric of the right and deny the importance of open borders, there is very little hope of either economic or political solidarity in Europe. Political solidarity in a democratic and just polity cannot be constructed upon a Schmittian friend/enemy distinction; the ‘us’ cannot exist only so long as there is a ‘them’ to define it.
What do you think? Which way will Europe go? Is there another issue that you think will be more pivotal than free movement?

