a blog about philosophy in public affairs


21st Century Smoking

 

At the British Medical Association’s (BMA) annual representatives meeting this week, doctors voted overwhelmingly to push for a permanent ban on the sale of cigarettes to those born after 2000.* What are the different reasons that might motivate, and potentially justify, the state intervening in citizens’ smoking behaviour? Broadly speaking, the main distinctions are those drawn between: (1) welfare- (both individual and collective) and autonomy-based reasons; (2) ‘harm to self’ and ‘harm to others’, that is, for the sake of smokers versus for the sake of non-smokers generally; and, relatedly, (3) an aim to increase tobacco use cessation (i.e., stop smokers smoking) versus an aim to reduce tobacco use initiation (stop people from starting to smoke in the first place). Accordingly, an initial taxonomy of reasons might have the following six cells:

                     Welfare-based reasons           Autonomy-based reasons
Smokers              Welfare of smokers              Autonomy of smokers
Non-smokers          Welfare of non-smokers          Autonomy of non-smokers
Potential smokers    Welfare of potential smokers    Autonomy of potential smokers

Does systemic injustice justify Robin Hood Strategies?

Many injustices arise because of patterns of behaviour, single instances of which seem harmless or at least pardonable. For example, if professors help the kids of their friends get access to university programs – and given the fact that professors and their friends tend to come from the same socio-economic background – this can lead to structural discrimination against applicants from other backgrounds (as discussed by Bazerman and Tenbrunsel here, pp. 38-40). Other examples concern implicit biases against women and ethnic minorities. Much work has been done recently that helps us understand how these mechanisms work (see e.g. here). Given how pervasive these mechanisms are, it is understandable that they cause moral outrage. The question, however, is what individuals should do in reaction to them.
Imagine that you are in a situation in which you have some amount of power, for example as a reviewer or as a member of a search committee. You might be tempted to use a “Robin Hood strategy”, i.e. a strategy that breaks the existing rules, for the sake of supporting those who are treated unjustly by these rules. Given how structural injustices work, many such “rules” are not formal rules, but rather informal patterns of behaviour. But it is still possible to work against them. For example, could it be justified to reject male applicants not because of the quality of their applications, but because they are white and male and come from a rich Western country?
One has to distinguish two levels at which such a strategy could operate. The first concerns correcting one's own biases, which one might have despite all good intentions (to check them, the various tests offered by Harvard University on this website can be helpful). The best way to do this, where feasible, seems to be anonymity. When that is not possible, the alternative is to scrutinize one's patterns of thought and behaviour as best one can. The more power one has, the more doing so seems a requirement of justice.
This is different from a second level of Robin Hood strategies, for which the name seems more appropriate: these concern not one's own biases, but the biases of the system. The idea is to work against them on one's own, in one's little corner, perhaps hoping that if enough of us do this, the problems can be solved or at least attenuated. Could this be a defensible strategy?
The problem is, of course, that one risks introducing new injustices. One consciously deviates from what are supposed to be the criteria of selection, for example a candidate’s performance in previous jobs or the likelihood of being a good team member. In some cases, however, it is reasonable to assume that if a candidate comes from a group that suffers from discrimination, achieving the same level of merit as a candidate from another group takes much more effort. So according to this argument, and as long as these problems are not recognized by the official selection criteria, it seems defensible to privately factor in these previous structural inequalities.
But one’s epistemic position in judging such cases is often a weak one. For example, standard application material for many jobs includes a CV and some letters of reference. These materials are often insufficient for understanding the details of a specific case and the degree to which discrimination or stigmatization might have had an impact on the candidate’s previous career. One risks making mistakes and importing one’s own subjective biases and prejudices; taken together, this can make things worse, all things considered.
Robin Hood strategies do not provide what seems most needed: good procedures and public accountability. They do not get at the root of the problem, which is to create collective awareness of the issues and to find collective mechanisms for addressing them (the Gendered Conference Campaign is an example). Collective mechanisms are not only likely to be more effective; they also bring things out into the open and create a public discourse about them. Although public discourses have their own weaknesses, there is at least a chance that the better argument will win, and there are opportunities for correcting strategies that turn out to be misguided. Robin Hood strategies, in contrast, fight fire with fire: they remain within a logic of power, trying to find ways of using counter-power to subvert the dominant power elites. But this does not change the fundamental logic of the game.

Thus, our preferred strategies should be different ones: strategies that really change the logic of the game, openly addressing problematic patterns of behaviour and looking for collective – and maybe formally institutionalized – solutions. Nonetheless, and despite all the reasons mentioned above, I cannot bring myself to think that Robin Hood strategies can never be justified in today's world. Of course one has to be very careful with them, not only in particular cases, but also with regard to the slippery slope one might get onto. But are they ruled out completely? What do you think?

The Need for Content Notes and Trigger Warnings in Seminars

Photo by Goska Smierzchalska / CC BY-NC 2.0

Content note: this post contains a discussion of sexual violence and rape.

A few weeks ago I was at a seminar where the speaker unexpectedly diverted from the main topic of their paper and used a rape example to support their argument. As discussions of rape in philosophy seminars go, it was not particularly insensitive. But what disturbed me was that the pre-circulated paper's title and abstract gave no indication that it would include a discussion of rape. A victim of rape or sexual violence would have had no warning that they were about to be confronted with an extended discussion of it. Given the appalling statistics on rape and sexual violence, there would almost certainly have been several such people in the room. For them, the discussion of rape might not have been just another abstract thought experiment, but an intensely triggering experience that brought back memories they did not want to deal with at that point. It made me think that the speaker could have respected this possibility by sending a short ‘content note’ (like the one above) with the abstract, warning people that the seminar would contain a discussion of rape.

Over the last few months there has in fact been a lot of online discussion of the use of content notes and trigger warnings[1] in academia. The recent debate was sparked by students at several US universities calling for content notes/trigger warnings to be included in course syllabuses. The idea behind these is to warn students that certain readings in the course contain discussions of topics that might be stressful or triggering. Much of the ensuing criticism has taken the line that they represent a ‘serious threat to intellectual freedom’ and even ‘one giant leap for censorship’. This criticism is unfortunate because it falsely suggests that content notes/trigger warnings are there to stop or censor discussions of sensitive topics. Instead, the point of them is to facilitate these discussions by creating a safe and supportive environment where people are given the choice over how and when they engage with topics that they know can be immensely painful for them. As Laurie Penny argues: “Trigger warnings are fundamentally about empathy. They are a polite plea for more openness, not less; for more truth, not less. They allow taboo topics and the experience of hurt and pain, often by marginalised people, to be spoken of frankly. They are the opposite of censorship.”

Perhaps some of the hostility to content notes/trigger warnings comes from a lack of knowledge about how they could work. People seem to imagine them as these big intrusive and ugly warnings. I think an actual example of a content note shows us how far from the truth this is:

Course Content Note: At times this semester we will be discussing historical events that may be disturbing, even traumatizing, to some students. If you ever feel the need to step outside during one of these discussions, either for a short time or for the rest of the class session, you may always do so without academic penalty. (You will, however, be responsible for any material you miss. If you do leave the room for a significant time, please make arrangements to get notes from another student or see me individually.) 

If you ever wish to discuss your personal reactions to this material, either with the class or with me afterwards, I welcome such discussion as an appropriate part of our coursework.

Though much of the online discussion has focused on syllabuses and student seminars, I think it is important to recognise that the same arguments also apply to seminars among professional academics. We academics sometimes falsely assume that the standards and principles we apply to student and non-academic discussions do not apply to our own professional practices. An academic giving a paper or a lecture that includes potentially triggering discussions should give attendees advance notice of this. This allows people to prepare themselves rather than have it sprung upon them, and even gives them the opportunity to avoid coming at all if they feel they are not able to cope with the discussion that day. Of course this does not address what is said during the ensuing question period. It does not stop another academic from insensitively using an example of rape or sexual violence when they respond to the speaker. Content notes and trigger warnings cannot (and are not supposed to) cover every possibility. To address that, we could start by educating academics about what it's like to be a victim of rape and to hear examples of rape used casually in philosophy seminars.

Some have argued that “life doesn’t come with a trigger warning” and tried to suggest that using them in any situation is therefore pointless. While we may not be able to change everything, seminars are a small sphere of life that we have the power to make less hostile and more welcoming.



[1] Content notes and trigger warnings are frequently confused. The difference is that “Trigger warnings are about attempting to identify common triggers for panic attacks and related experiences and tagging media for the benefit of people who find it helpful to be warned when media contains this material. Content notes are simply flags with information about content, to be used at the discretion of the person who encounters them.”

Should we have a compulsory national civilian service?

The best blog posts are fashionable. They deal with questions, events, or ideas that are current or topical. This blog post does not do this. It deals with an idea that is very much out of fashion – indeed, so much out of fashion that I believe it is not given a fair hearing. It is the idea of a compulsory national civilian service.

By a compulsory national civilian service, I have in mind the following idea: at the age of eighteen, all citizens are required by law to perform a one-year-long civilian service in return for a subsistence wage. The work that each citizen undertakes will differ, but generally speaking citizens will perform work that, although socially useful, is not well provided by job markets. As an example, let’s consider work in nursing and social care.

There are several sets of considerations that count in favour of the proposal. Let me briefly mention three. First, the proposal would benefit those on the receiving end of the nursing and social care provided. This kind of work is not well supplied by the market, and so, in the absence of the proposal, many more citizens are left vulnerable and without the vital nursing and social care they need.

Second, the proposal would benefit the citizens who perform the civilian service. The point is not that they are likely to enjoy the work. Perhaps they will not; after all, there is often a reason why these jobs are not provided by the market. The point is that the experience is likely to broaden their horizons, teach them various important life skills, and later be regarded as positive and meaningful. In short, the experience may end up being liberating and autonomy-enhancing.

Third, the rest of society is likely to benefit from the proposal also. The hope is that a compulsory national civilian service will produce better, more civically engaged citizens who will live in a way that is sensitive to the vulnerabilities and needs of others. Part of the problem with current society is that too many people, and often those with power, have no experience of what it means to be vulnerable. The proposal under consideration would go some way towards remedying this. (Similar arguments are made about military service.)

There are several types of objection that could be levelled in response. Let me briefly mention two. The first concedes that the proposal would be beneficial in all the ways described, but it claims that we should resist it on the grounds that it involves the violation of citizens’ rights. In particular, perhaps the proposal amounts to a violation of citizens’ right to free occupational choice?

This does not strike me as a very promising line of reasoning, given that the proposal involves only a one-year restriction on citizens’ occupational choice. The restriction it sanctions is surely no greater than that faced by the many citizens who experience frequent unemployment or only dull, meaningless work.

The second objection argues that the proposal will fail to meet the ends it sets itself. There are three versions of this objection, corresponding to the three benefits that the proposal hopes to bring about. The strongest version claims that the proposal will not benefit those on the receiving end of the nursing and social care provided, because those performing the work may be unfit to carry it out.

This point is valid but it simply forces us to take care when implementing the proposal. In particular, it draws our attention to the need to provide proper training, and to select work that can appropriately be carried out by those on civilian service. There are many other complications that must be taken into account, but none of these challenge the attractiveness of the idea of a compulsory national civilian service as such. They are problems that we must attend to when it comes to implementation.

Should Teaching be Open Access?

Many universities have begun making teaching material freely available online. In 2012 the UK’s Open University launched a platform, FutureLearn, where one can take a ‘Massive Open Online Course’ from a substantial range offered by 26 university partners and three non-university partners. There are also providers in America, Asia, and Australia. Meanwhile, some universities – Yale is a prominent example – simply place recordings of their modules on a website, many collated at iTunes U, and, indeed, one can watch some directly via YouTube, including Michael Sandel’s course on justice.
These developments raise various ethical questions. Here is a central one: why, if at all, should teaching be open access? I suspect that the answer depends on the kind of teaching and on what precisely is meant by ‘open access’. Thus (leaving open whether the arguments are generalizable), I will here consider a narrower suggestion: all university lecture series (where feasible) should be freely available online. Here are two reasons that speak in favour of this idea.
First, people (worldwide) should have the opportunity to know what is known. Knowledge is both intrinsically and instrumentally valuable, and university lecture series are one (important) place where knowledge is housed. These points alone suggest there is some reason to give people access where possible. (Similar thoughts can be advanced in favour of internet-based commons, such as Wikipedia or the Stanford Encyclopedia of Philosophy, as discussed previously on this blog). Perhaps there are cases in which access to certain knowledge must be restricted – certain intelligence information during a just war, for example. But the vast majority of information delivered through university courses is harmless (in that sense), and granting access to it would simply mean granting access to cutting-edge research in a form engineered for easy consumption.
Second, the move could have a (Pareto-efficient) egalitarianising effect on university education. To wit, by giving students access to lectures from courses similar to those on their own degree, we might reduce various differences in educational and developmental opportunities that exist between attendees of different universities. Benefits would include better access to teaching more suited to one’s learning style and better accessibility for a more diverse range of users, points often emphasised about digitalising learning materials.
Here, meanwhile, are responses to some worries and objections:
Who would pay for it? The exercise would be fairly costless: many universities are already equipped with the necessary facilities and posting lectures online is fairly straightforward. In that sense, it would be funded largely by existing revenue streams from governments, research councils, and students.

Why should these actors pay for others to learn? To the extent that what is provided is a public good, I see no problem with it being government subsidised. Revenue from students is more difficult, but (a) students would continue to receive distinctive returns for their payment (such as library access and tutorials), and (b) the issue anyway raises as much of a question about whether university education should be student- or government-financed as it does about the proposal above.

Are not the courses the intellectual property of the lecturers and, thus, within their right to disseminate as they choose (including, if they wish, only for a fee)? I have some doubts about whether university courses, especially those publicly funded, can be deemed individual intellectual property, but, even if so, lecturers would not need to exercise this right, and the case made here would imply that they should not do so.

Would it impact badly on student attendance? Might it even, as some lecturers have worried, undermine the viability of some universities and cost jobs if students can study by watching online lectures posted by other institutions? I doubt either of these effects would occur: evidence shows that access to online material typically does not decrease attendance, and, as noted above, universities will continue to attract numbers and attendance based on the other, more site-specific components of their teaching profile.
Do online educational resources actually help people learn? Much here might depend on ideas about learning theory. Those who think we learn through stimulus and repetition (‘behaviouralists’ and, to some extent, ‘cognitivists’) are likely to place greater value on the idea than those who think we learn through communication and collaboration (‘collectivists’ or ‘constructivists’). But formats might be tinkered with in response to what would be most beneficial here, and, in any case, does not the potential of the benefits outlined above suggest that it is worth a try?
It is standard practice to ask applicants for academic jobs – at least for jobs in philosophy – to submit reference letters. Yet an increasing number of people have recently been expressing scepticism about the practice. (See, for instance, the comments on this blog post.) This may be a good time for public discussion of it; the present post is a sketch of the pros and cons of using letters of reference in the process of selecting people for academic jobs in philosophy.
One worry with hiring on the basis of reference letters is that this tends to reinforce the regrettable importance of ‘pedigree’ – that is, of the university where candidates got their doctoral degree and of the people who recommend the candidate. There are relatively few permanent jobs in the current job market, and significantly more qualified candidates than jobs, whether permanent or temporary. One consequence is that there are many (over-)qualified candidates for (almost?) every job, and this makes the selection process really difficult. Considering dozens, sometimes hundreds, of applications for one position is onerous, so it is appealing to take pedigree into consideration because it is an expedient way to decide whom to short-list or even whom to hire. (As I mention below, this doesn’t mean expedience is the only appeal of relying on pedigree.) But this is unfair: those candidates who weren’t supervised by influential letter-writers, or who otherwise didn’t make their work sufficiently known to an influential letter-writer, have fewer chances on the job market. Moreover, relying on letters of reference can also be bad for quality, to the extent that letters fail to closely track merit. This kind of problem will not entirely go away just by eliminating reference letters – the prestige of a candidate’s university will continue to matter – but its dimensions would be more modest.
Another worry is that reference letters reflect and perpetuate biases, perhaps unconscious ones, against members of groups that are under-represented in philosophy for reasons of historical or present discrimination, such as women or racial minorities. There are studies suggesting that reference letters written for female candidates tend to make recommendations in terms less likely to ensure success than those used in letters recommending male candidates. If this is true, letters of reference can, again, be unfair.
Another group of reasons to give up reference letters has to do with avoiding corruption – of the hiring process and of personal relationships within academia – and its unhappy consequences. As long as they are used in hiring, reference letters are highly valuable assets. Those able to write the most influential letters can treat letters as tokens to exchange for illegitimate benefits from the candidates they recommend. Hiring without reference letters would diminish the potential for unfairness towards candidates who resist such exchanges, and the likely unfairness towards others when they don’t. At the same time, it would eliminate one source of illegitimate power of letter writers over the candidates they recommend. To depend on a supervisor, or on other powerful people in the field, for a reference letter seems in itself undesirable: a relationship in which one person has enormous power to make or break another’s career looks like a relationship of domination. But even if it isn’t, there might be a good reason against such dependency: to protect the possibility of genuine friendships between established and budding academics. It is more difficult for genuine friendships to flourish when structural conditions make it likely that people who pursue the relationships have ulterior motives. It is great not to have to worry that your professor cultivates your company because they want from you something they could ask in exchange for a reference letter. It is great not to worry that your student or younger colleague cultivates your company because they hope you’ll give them a good letter. (Not all such worries will be averted by giving up on reference letters; one can, and some do, advance their students’ careers by unduly facilitating publications for them in prestigious venues or by over-promoting them.)
Finally, there are reasons of general utility to hire without reference letters: Writing them takes time and, unlike other writings, they rarely have value beyond getting someone hired. (Plus, it’s not the most exciting way to spend one’s time). Interpreting reference letters properly can also be a drag in the context of praise inflation. And it is stressful to ask people to write letters recommending you. Admittedly, these are not, in themselves, very strong arguments, but they should nevertheless count in an overall cost-benefit analysis of reference letters.
All this is not to say that there are no advantages to having reference letters in the hiring process. They may be useful proxies for determining that a candidate received proper philosophical training: in philosophy, at least, we learn an awful lot simply by seeing how others do good philosophy and by being allowed to participate in the process. The mere fact that one has been taught by a respected philosopher should count for something. But, in fact, in this day and age it is becoming increasingly easy to witness good philosophy independently of who mentors you. There are numerous conferences, and it has become easier to travel to them; the internet is turning into an inexhaustible source of filmed philosophical events. Almost everybody can study the best philosophers in action, and many can interact with them on a regular basis at philosophical events. These philosophers will continue to feel particularly responsible for their own students’ careers (and so write letters for them) but, thanks to contemporary media, they can benefit an ever greater number of students in philosophy. Of course, search committees will not know which candidates who were not taught by the most prestigious philosophers did in fact benefit from the easily available resources (conferences, recordings of lectures and other events). But nor can they assume a wide gulf between candidates who have and candidates who have not been taught by very respected philosophers.
A very important function of reference letters is, to my mind, that of giving prospective employers a way to check certain aspects concerning candidates, in case of doubt. This pro is specific to references, and as such has nothing to do with pedigree. Is a prospective employee a good team player? How much did she contribute to a particular publication? How close to publication are the papers listed on a c.v. as ‘in progress’? But this aim can be satisfied in the absence of a general requirement that job applications include letters. It is enough if candidates are asked to list, in their application, a few people who can comment on them, with the understanding that referees are only to be contacted occasionally/exceptionally.
It appears that, on the balance of reasons, the pros count less than the cons. Letters may be important when employing people in many other fields because, together with interviews, they form the main basis for assessing a candidate’s ability. But in philosophy hiring, where both written samples and job talks are required, we could and probably should do without them.

An Ethical Checklist for Military Intervention

Large-scale loss of life shocks our collective conscience.* The developing situations in Ukraine, the Central African Republic (CAR) and South Sudan have triggered loud and frequent calls for military intervention. This month, these calls were heeded in the Central African Republic: the United Nations Security Council announced its decision to intervene. The mission has been given the catchy name of the United Nations Multidimensional Integrated Stabilization Mission in the Central African Republic, or MINUSCA for short. 10,000 troops, 1,800 police and 20 corrections officers will be deployed.[1] The news has been greeted with jubilation on many social media sites.
 
 
This post is a note of caution. 

I do not understand the intricate dynamics of the conflict in the CAR. And, most likely, neither do you. This is the point. I will argue that without an in-depth and detailed understanding of the conflict, and a certain (and well-grounded) belief that an intervention will successfully stop the violence and do more good than harm, we should not be calling for the deployment of military troops. When we argue for the use of military force, we accept that troops can kill and, if things go wrong, be killed. The question of when military intervention is ever justified is not an easy one.


Before even considering the deployment of foreign military troops, all other efforts to stop the conflict, both internal and external to the country, must be exhausted first. Such efforts include internal peace processes; diplomacy; supporting local, regional and international pressure to end the conflict; divestment; and many, many more.


Given the shaky record of military interventions, we should be skeptical about using military force to end violent conflict. There have been cases in which military intervention, aimed at preventing the conflict, has made the situation worse. In Bosnia, the United Nations peacekeeping force was implicated in enabling the massacre of 8,000 Bosniaks. In Somalia, the United Nations-sanctioned military intervention exacerbated the conflict and arguably killed more civilians than the concurrent delivery of humanitarian aid saved.[2] Doyle and Sambanis (2006) conducted a large-scale quantitative study to evaluate the success of military interventions. They found that United Nations peacekeeping operations can improve the chances of peace. Overall, however, they show that the record is ‘mixed’. Of the 121 peace operations they analysed, 63 were failures and 53 were successes. By ‘success’, they mean the end of violence and a degree of stability. On a more rigorous definition of success, which includes the introduction of institutions that prevent a re-ignition of the conflict in the long term, the results are much worse. In addition, they note that it is difficult to determine whether, in the 53 successes, it was the military intervention that caused the end of the conflict. This should be enough to dampen our enthusiasm for launching military interventions.


However, assuming that no alternative means of stopping the violence exist, some interventions may be able to facilitate an end to conflict. So, before the call to intervene is made, what should we consider? The difficult part of making such a judgement is that we have to make a predictive claim that military intervention can do some ‘good’. I will now outline some of the issues that need to be considered.


Firstly, can the violence actually be stopped?


The interveners need to have enough resources and political will to get the job done. Common sense dictates, and there is a lot of research to back this up, that military intervention costs money. The resources need to be appropriate to the task in hand. A military campaign to stop countrywide violence in Luxembourg is going to take far fewer resources than a military campaign to stop countrywide violence in Russia. In addition, stopping violence can’t be achieved overnight. Consequently, there needs to be sufficient political will to be prepared to lose troops’ lives, to stay in the country long enough and to bear the financial costs of the intervention.


Even more importantly, it is all very well to have sufficient resources, but can a military intervention actually stop the parties from fighting? If the conflict can’t be resolved or ‘won’ even with the best intentions and all the resources in the world, there may be no way of ending the violence. Therefore, before arguing in favour of intervention, there needs to be a detailed analysis of the causes of the conflict and the reasons for its continuation. Are there distinct and identifiable parties to the conflict? How many are there, and what are their interests? How are they likely to respond to military intervention? Will they be incentivised to stop, or will they start fighting more ferociously? Has there been military intervention in the conflict before? Will the memory of previous intervention attempts make ending the violence easier or harder? What are the chances of a military victory, whether by either party to the conflict or by the intervener? In the event that interveners successfully end the violence, will the conflict simply reignite when they leave the country?


Each conflict is different, with specific political causes, actors and dynamics enabling its perpetuation. Sometimes an additional military actor, even one with benign interests, will only serve to heighten the feeling of insecurity of the belligerents and increase fighting in a country. This deserves close attention before sending troops in with the aim of ‘saving lives’.


Secondly, there may be reasons to value the fighting.


The parties might be fighting for a good reason. For example, the conflict could be a liberation struggle: a fight to overthrow colonial oppressors, to remove an authoritarian dictator, or to win rights for oppressed minorities. We should consider that there may be wider social goods, beyond the immediate concern to save human lives, that are important. As a result, letting the conflict continue, or even providing support to a particular side, may be the best option.


Finally, what about the unintended side effects of a military intervention? 


There can be good side effects. Military intervention could signal to other would-be atrocity-committers that they won’t get away with it. However, other side effects are more ambiguous. Large military peacekeeping operations leave a significant economic footprint in a country. A recent study by Carnahan et al. (2007) suggests that the economic impact is often positive. But as current evidence remains inconclusive, the potential economic impact should be considered.


A more harmful side effect, now well documented, is the growth of human trafficking when large-scale military operations are deployed.[3] In the last few years, the United Nations has made some positive steps to prevent this.[4] However, the risk still exists. Before an intervention, there should be confidence that the chances of success outweigh the potential risks of introducing a large number of foreign troops into a country.

Deciding whether or not to intervene is a hugely complicated question. A multitude of factors need to be considered, and this blog post is by no means exhaustive. I have not raised important questions about government consent, the popular opinion of those living in the country of intervention, and much more. But, to make my point simple and clear: before arguing in favour of intervention, we need, at the very least, to be able to answer yes to the following questions:


1) Are there no better alternatives to stop the violence?


2) Does a military intervention have strong chances of stopping the violence? 


3) Are we sure that the conflict should be stopped?


4) Are we prepared to accept the possible unintended consequences of intervening militarily?


This blog post is not an argument against military intervention per se. Rather, it is a call for careful and serious consideration of these questions before supporting military intervention. My suspicion is that in the majority of cases where the United Nations and other organisations have intervened, the answer to all four of these questions has not been ‘yes’.


This is not meant to be pessimistic. There are many other actions, outside of military intervention, that we can take to try and end large-scale human suffering. As citizens we can call on our governments to stop supporting violent regimes and selling arms in zones of violent conflict. However, when violence does erupt, despite the horror we feel at seeing fellow human beings suffer, we may have to face the stark reality that, right at that moment, military intervention is not the correct solution.


*A quick caveat: the use of terms such as ‘our’ or ‘we’ in this post is not intended to denote the ‘West’ or the ‘international community’, as these terms are sometimes used in discussions of military intervention. I am talking to fellow peers who are considering arguing in favour of or against military intervention.

[1] See Aljazeera http://www.aljazeera.com/news/africa/2014/04/un-approves-peacekeepers-car-2014410141916684418.html
[2] Seybolt, Taylor B. Humanitarian Military Intervention: The Conditions for Success and Failure. Oxford: Oxford University Press, 2008.
[3] Mendelson, Sarah. Barracks and Brothels: Peacekeepers and Human Trafficking in the Balkans. Washington DC: CSIS, 2005. Found at: http://csis.org/files/media/csis/pubs/0502_barracksbrothels.pdf
[4] http://www.stopvaw.org/un_peacekeeping_missions

What’s Mine Is Yours, or Is It?

In the past few years we have all become familiar with the idea of the ‘sharing economy’, and this is true even in the absence of a clear-cut definition of it. If I asked you to name some examples of the ‘sharing economy’, your list would probably include Wikipedia, Spotify, Linux, airbnb, ebay, Mendeley. This, I believe, has to do with the fact that there is nothing much new about the logic that grounds it: we have all experienced practices of renting, lending, swapping, bartering, gifting, and we have all shared spaces, meals, textbooks, opinions, skills, knowledge, etc. with others. We engage in such practices for various reasons: sometimes it is because we simply cannot afford to buy, and access, rather than ownership, is a more viable option. Sometimes we prefer to re-use unwanted items rather than throw them away because, in this way, we reduce the production of waste and, with it, our carbon footprint – all of which seems like a good idea in a world of scarce resources and increasing levels of pollution. At other times sharing is just more enjoyable; it creates or fosters relationships with other people, and often it leads to new uses and innovative ideas. So, what is different now? Why do we come across so many articles and blog posts talking about the rise of the “sharing economy”? And what has turned ‘sharing’ into an ‘economy’ proper?
 
Digital technology, and the advent of Web 2.0 in particular, appears to be the main driver of this transformation. It’s not just that ‘sharing’ seems to be the fundamental and constitutive activity of Web 2.0, and especially of social networks (John 2013); the Internet is also changing the way in which we engage in the ‘old’ practices of sharing. This is evident if you consider how easy access to Web 2.0 scales up the pool of potential ‘sharers’, and the amount of information about them, thus decreasing the high transaction costs associated with more traditional forms of sharing. In other words, Web 2.0 allows the creation of systems of organized sharing in both the production mode (i.e. commons-based peer production) and the consumption mode (i.e. collaborative consumption).
 
By leveraging information technology and the empowerment opportunities it makes available, the ‘sharing economy’ would seem to advance a new socio-economic model, in which the production, exchange and consumption of goods and services are based on horizontal networks and distributed power within communities, rather than on competition between hierarchical organizations. This seems like a valuable characteristic. And, indeed, we find that much of the enthusiasm about the sharing economy is generally expressed through the egalitarian language of cooperatives and civic groups. It invokes values like the dispersion of power, sustainability, community-level connectedness and solidarity, and the opposition to hierarchical and rigid regulatory regimes in favour of peer-to-peer (P2P) schemes (e.g. “What’s Mine is Yours: The Rise of Collaborative Consumption“).

But, does this mean that the sharing economy is also changing capitalism from within? Are we all witnessing a switch to a “camping-trip” type of economic organization? Is the Web 2.0 really fostering an egalitarian ethos in our societies? Or is the sharing economy just the hi-tech version of the invisible hand?
These are of course big questions, and answering them requires more empirical evidence and critical reflection than I can provide here. That said, I would like to offer one critical insight, precisely about the appropriateness of describing the ‘sharing economy’ as an egalitarian practice. To do so, I will focus on the consumption side of it, because I find the egalitarian narrative surrounding collaborative consumption more troubling than that surrounding commons-based production.

Solidarity without reciprocity? Often the egalitarian character of collaborative consumption is justified by drawing a comparison with economic systems like gift economies. What they have in common is that they are solidarity-producing or solidarity-enhancing practices. But there is a crucial difference. In gift economies the act of gift giving was meant to be reciprocated: it created obligations to respond in kind, and these mutual obligations were the ties that bound society together. Collaborative consumption does not seem to work like that: benefitting from what is shared through the Internet does not create an obligation to reciprocate, and the increasing involvement of capital in the sharing economy (e.g. airbnb has now raised more than $300M in investments) is rapidly substituting reciprocity-driven initiatives with entrepreneurial ones.
Indeed, the more collaborative consumption goes mainstream, the more it becomes just a new business model for large companies. Even when the language and rhetoric of sharing are maintained, what characterizes platforms like airbnb and Lyft is the profit-seeking of shareholders, rather than the egalitarian solidarity of more traditional forms of sharing, such as gift giving. With this I am not suggesting that the idea should be abandoned altogether, but that we should be more prudent in welcoming it as a practice that fosters our egalitarian inclinations. It also means that we should be more attentive to the rapid developments that the ‘sharing economy’ is undergoing: indeed, there is very little egalitarianism in tactics like the “surge pricing” (raising the price of a ride at busy times or during bad weather in order to attract more drivers onto the road) that Lyft and Uber have adopted recently.
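(For readers unfamiliar with the mechanics, here is a minimal sketch, in Python, of the kind of rule such pricing might follow. The ratio-based multiplier, the cap and the figures are my own illustrative assumptions, not Lyft’s or Uber’s actual algorithm.)

# Toy illustration of "surge pricing": the fare multiplier grows as ride
# requests outstrip available drivers, up to a cap. Real platforms use far
# more elaborate, proprietary models; every number here is invented.
def surge_multiplier(ride_requests, available_drivers, cap=3.0):
    if available_drivers == 0:
        return cap
    demand_ratio = ride_requests / available_drivers
    return min(max(1.0, demand_ratio), cap)

base_fare = 10.0  # hypothetical off-peak price of a ride
surge_fare = base_fare * surge_multiplier(ride_requests=120, available_drivers=40)
print(surge_fare)  # 30.0: three times the off-peak price on a busy night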

What is the value of (even fair) equality of opportunity in an unjust society?

People across the political spectrum care a lot about social mobility. For instance, a recent BBC documentary entitled ‘Who Gets the Best Jobs?’, about how little upward mobility there is in British society today, seems to have hit a nerve, judging from the large number of views on YouTube but also from the large number of passionate comments from the public.

And there are people who would equate perfect social mobility with justice, and who therefore deplore its absence as the most worrisome form of injustice today.

I assume this is so because equality of opportunities of one form or another is a very widely accepted political value today. Why would anyone be against fair chances? For people on the right, the principle of equal opportunities embodies an ideal of meritocracy which in turn promises that desert will be rewarded. Those on the left are keen on a version of the principle designed to condemn as unjust the obstacles that class (and other kinds of social inequality) poses to individuals’ efforts to obtain socially desirable positions. For instance, John Rawls‘ principle of fair equality of opportunity (henceforth FEO) requires this: ‘[s]upposing there is a distribution of native endowments, those who have the same level of talent and ability and the same willingness to use these gifts should have the same prospects of success regardless of their social class of origin.’ The rest of this post explores the value of this version of the principle.
One of the most compelling reasons for valuing FEO is the belief that it is unfair when factors such as the level of education or wealth in one’s family of origin determine one’s chances of leading a good life. This reason is either unstable or less compelling than it seems at first glance. If it is interpreted to say that how well someone’s life goes ought not to depend on factors that are outside her control and for which she cannot be held responsible, then the principle is unstable: we are as little responsible for our native talent, and, possibly, for our ambition, as we are for our parents’ money and education. If it is interpreted to say that people deserve to reap and keep the benefits of their talents, the principle is contentious precisely because people have no merit in being born with particular native talents.
Here is an illustration: imagine that two children have an equally strong desire to get a job and that occupying it will contribute equally to their overall wellbeing; one is more talented, while the other has more parental support. If we bracket the question of the social utility of that job being occupied by one, rather than the other, person, does it make any moral difference which of the two gets the job? In any case there will be a winner and a loser, and in any case a form of luck will dictate who is who.
You may think that the issue of social utility ought not to be bracketed. A second ground for valuing FEO is that the ideal of meritocracy it embodies makes possible various kinds of excellence that all of us have reason to desire (who does not want to be served by competent doctors or civil engineers?). This reason has more weight when it comes to selection for some occupations than for others, and it may often be irrelevant in an unjust society. There is some reason to want, all other things equal, that the people who are most talented at medicine (amongst those who wish to practise it) become doctors. But with respect to other professions that are sought after, ‘good enough’ will probably do (say, when it comes to who ends up practising civil law). FEO is not necessary to avoid the social disvalue of incompetent individuals landing jobs where they can do harm. A method of eliminating candidates below a certain threshold of competence will do.
An even more important point here is that people who live in unjust societies have reason not to want the most talented people to occupy those desired positions that are connected to the exercise of power. The most talented lawyers serving a corrupt legal system are likely to do more damage than more mediocre ones; the most talented conservative politicians will be most efficient at keeping unfair institutions in place; and similar things can be said about the most talented bankers, tax advisers and top managers working in unjust societies.
One last thought on why it is not socially useful if the most talented land the best paid social positions in unjust societies: such societies tend to have mechanisms that harness talent in the service of a well-off minority – one such mechanism is, obviously, better pay. To take again an example from medicine, much research talent is currently used to seek cures for trivial conditions that afflict mostly the rich. There is no social utility in the most talented researchers who compete for these positions getting them. Therefore (and other things equal) it is far from clear that FEO in an unjust society contributes to the maximisation of social utility.
Third, FEO can also be instrumental to improving the situation of the worst off. In Rawls’ theory of justice, FEO is nested in a more complex principle, which demands that FEO regulate the competition for desirable social positions in a society whose basic institutions are organised to ensure that the worst off members of society are as well off as possible. If the most talented individuals occupy the best rewarded social positions in a well-ordered society, this will supposedly lead to the maximisation of the social product. As a result, everybody, including the worst off members of society, will end up better off than they would if other, less talented individuals were to occupy those positions. This is the meritocratic component of FEO. The best results in this sense will be achieved if individuals can develop their talents equally, unencumbered by their class, gender, race, etc. This is the fairness component of FEO. In this justificatory story, the value of FEO is entirely dependent on its operation in an otherwise distributively just society. Rawls himself came to the conclusion that the realisation of the difference principle requires market socialism or a property-owning democracy. One may dispute this, but few dispute that current societies fail to realise the difference principle. How important is it to make sure that people have fair chances to win unfair competitions?
So, it seems a mistake to worry too much about social mobility. We should be a lot more worried about substantive inequalities than about the distribution of chances to end up as a winner rather than as a loser.

Migrant Domestic Workers in Lebanon: An unjust system, how should individuals act?

In Lebanon the law covering the work of migrant domestic workers (MDWs) is deeply unjust. The situation of MDWs in Lebanon and the Middle East has been described as “little better than slavery”. That the law and practice should be reformed is clear. Whether this will happen any time in the near future is much less clear. What I want to focus on in this blog post is the question of how individuals who object to the law and practice should act.
A brief background: there are an estimated 200,000 MDWs employed by Lebanese families. The vast majority are women from Sri Lanka, Ethiopia, the Philippines and Nepal. MDWs are employed on short-term contracts. They are admitted into Lebanon on work visas that link them to a specific employer (a sponsor) and oblige them to live at the home of their employer. Their contracts are not covered by Lebanese labour law. This means they are excluded from the Lebanese minimum wage guarantees, the maximum number of working hours, vacation days and any compensation for unfair termination of contract. The contracts the migrants sign in their home countries with recruitment agencies are not recognized in Lebanon. Upon arrival they sign a contractual agreement (in Arabic) binding them to a specific employer (sponsor), often with different terms than the contract they signed at home. The fact that their stay in the country is tied to their employer means they have practically no room for negotiating the terms of the contract. The government has recently imposed a standard contract for employing MDWs, but it is far from being fair and is in any case poorly enforced. The fact is that there is a high incidence of abuse against MDWs. This ranges from “mistreatment by recruiters, non-payment or delayed payment of wages, forced confinement to the workplace, a refusal to provide any time off for the worker, forced labor, and verbal and physical abuse”.
                                                       Source: Al-Akhbar (http://english.al-akhbar.com/node/18752)
The question: international organizations and, more recently, local NGOs have been advocating and proposing reforms. Meanwhile, what should individuals do? They should take a clear stance in favor of the reforms, support NGO initiatives and awareness campaigns, and combat attitudes of racism. That much seems obvious. A more difficult question, I find, is whether individuals ought to
  (A) refrain from employing MDWs as long as the practice is unjust; or 
  (B) employ MDWs while individually applying the terms and conditions that a fair law would  require.
One reason in favor of (A) is that employing MDWs counts as contributing to sustaining an unjust practice. One can easily avoid participating in the system, as there is no sense in which refraining from employing an MDW imposes an unreasonable cost on individuals. Additionally, even if one could improve the conditions through individual arrangements with the MDW herself/himself (the vast majority of MDWs in Lebanon are women), one has no control over the other dehumanizing factors, starting with the recruitment procedures in their home countries. Moreover, it is not only the contractual framework that is unjust. The justice system offers little protection, and a biased media and widespread racism make MDWs highly vulnerable to mistreatment and abuse. I find this position rather convincing, but I also have the worry that it seems to be the easy way out.
I also think there are strong arguments in favour of (B). One can point out that the vast majority of MDWs migrate to escape severe poverty and send most of their earnings back home (remittances were estimated at $90 million in the first half of 2009, and remittances make up a high share of the GDP of some countries).[1] Surely it is better to offer them employment under fair conditions, notwithstanding the objections above, especially when noting that they are going to seek employment in Lebanon anyhow? The difficulty with this line of argument, however, lies in the host of tricky questions it raises. To mention only some: should one ensure that her prospective employee was not coerced (or misinformed) into taking up the job in her home country? If so, when does the cost of doing so become unreasonable? What counts as a fair wage and fair working conditions? Is that the country’s minimum wage? What if someone cannot afford to pay the fair wage? Does that mean one should opt for (A)?
These questions raise a difficulty for the following reason: if the rationale behind choosing (B) over (A) is that (B) improves on some person’s conditions and as such reduces the harm, whereas (A) merely allows the harm to happen, then any improvement on the current conditions, no matter how small, would justify (B). This seems problematic. My intuition is that one should try to provide as much of what ideal conditions would require as one can. Take the example of wages. I am assuming that determining what counts as a fair wage, whether equal to or higher than the minimum wage, is not very complicated. Now, if the ideal wage exceeds what one is willing to pay for the service of an MDW, then one should not necessarily opt for (A), but rather pay the maximum amount beyond which the service is no longer attractive. This would imply, I presume, that people with higher incomes should pay higher wages. The assumption here, of course, is that individuals are genuinely interested in making the right ethical choice.
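(To make that decision rule concrete, here is a minimal sketch, in Python, of the intuition just described; the function and the figures are purely illustrative assumptions on my part, not actual Lebanese wage levels.)

# Toy sketch of option (B) as described above: pay the fair wage when one can
# afford it, otherwise the most one can pay before employing someone stops
# being attractive at all; being unable to pay anything means option (A).
def wage_to_offer(fair_wage, max_one_can_pay):
    if max_one_can_pay <= 0:
        return None  # cannot employ at any wage: choose option (A)
    return min(fair_wage, max_one_can_pay)

# Two hypothetical employers facing the same fair wage but different incomes:
print(wage_to_offer(fair_wage=450, max_one_can_pay=600))  # 450: pay the fair wage
print(wage_to_offer(fair_wage=450, max_one_can_pay=350))  # 350: pay the most one can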
I am not sure I can fully defend the above intuition. Therefore, I would like to hear your views on this. I find the choice between (A) and (B) difficult, and this is a dilemma faced by many friends and family members back home.
 

[1] Still trying to find more recent figures!

