
On reference letters in academic hiring

It is standard practice to ask applicants for academic jobs – at least for jobs in philosophy – to submit reference letters. Yet an increasing number of people have recently expressed scepticism about the practice. (See, for instance, the comments to this blog post.) This may be a good time for a public discussion of it; the present post is a sketch of the pros and cons of using letters of reference in the process of selecting people for academic jobs in philosophy.
One worry with hiring on the basis of reference letters is that this tends to reinforce the regrettable importance of ‘pedigree’ – that is, of the university where candidates got their doctoral degree and of the people who recommend the candidate. There are relatively few permanent jobs in the current job market, and significantly more qualified candidates than jobs, whether permanent or temporary. One consequence is that there are many (over-)qualified candidates for (almost?) each job, and this makes the selection process really difficult. Considering dozens, sometimes hundreds, of applications for one position is onerous, so it’s appealing to take pedigree into consideration because this is an expedient method to decide whom to short-list or even whom to hire. (As I mention below, this doesn’t mean expedience is the only appeal of relying on pedigree.) But this is unfair: those candidates who weren’t supervised by influential letter-writers, or who otherwise didn’t make their work sufficiently known to an influential letter-writer, have fewer chances on the job market. Moreover, relying on letters of reference can also be bad for quality, to the extent to which letters fail to closely track merit. This kind of problem will not entirely go away just by eliminating reference letters – the prestige of a candidate’s university will continue to matter – but its dimensions would be more modest.
Another worry is that reference letters reflect and perpetuate biases, perhaps unconscious ones, against members of groups that are under-represented in philosophy for reasons of historical or present discrimination, such as women or racial minorities. There are studies suggesting that reference letters written for female candidates tend to make recommendations in terms less likely to ensure success than those used in letters recommending male candidates. If this is true, letters of reference can, again, be unfair.
Another group of reasons to give up reference letters has to do with avoiding corruption – of the hiring process and of personal relationships within academia – and its unhappy consequences. As long as they are used in hiring, reference letters are highly valuable assets. Those able to write the most influential letters can treat letters as tokens in exchange for illegitimate benefits from the candidates they recommend. Hiring without reference letters would diminish the potential for unfairness towards candidates who resist such exchanges, and the likely unfairness towards others when they don’t. At the same time, it would eliminate one source of illegitimate power of letter-writers over the candidates they recommend. To depend on a supervisor, or on other powerful people in the field, for a reference letter seems in itself undesirable: a relationship in which one person has enormous power to make or break another’s career looks like a relationship of domination. But even if it isn’t, there might be a good reason against such dependency: to protect the possibility of genuine friendships between established and budding academics. It is more difficult for genuine friendships to flourish when structural conditions make it likely that people who pursue the relationships have ulterior motives. It is great not to have to worry that your professor cultivates your company because they want from you something they could ask in exchange for a reference letter. It is great not to worry that your student or younger colleague cultivates your company because they hope you’ll give them a good letter. (Not all such worries will be averted by giving up on reference letters; one can and some do advance their students’ careers by unduly facilitating for them publications in prestigious venues or by over-promoting them.)
Finally, there are reasons of general utility to hire without reference letters: writing them takes time and, unlike other writings, they rarely have value beyond getting someone hired. (Plus, it’s not the most exciting way to spend one’s time.) Interpreting reference letters properly can also be a drag in the context of praise inflation. And it is stressful to have to ask people to write letters recommending you. Admittedly, these are not, in themselves, very strong arguments, but they should nevertheless count in an overall cost-benefit analysis of reference letters.
All this is not to say that there are no advantages of having reference letters in the hiring process. They may be useful proxies to determine that a candidate received proper philosophical training: in philosophy, at least, we learn an awful lot by simply seeing how others do good philosophy, and by being allowed to participate in the process. The mere fact that one has been taught by a respected philosopher should count for something. But, in fact, in this day and age it is becoming increasingly easy to witness good philosophy independently of who mentors you. There are numerous conferences, and it has become easier to travel to them; the internet is turning into an inexhaustible source of filmed philosophical events. Almost everybody can study the best philosophers in action, and many can interact with them on a regular basis at philosophical events. These philosophers will continue to feel particularly responsible towards their own students’ careers (and so write letters for them) but, thanks to contemporary media, they can benefit an ever greater number of students in philosophy. Of course, search committees will not know which candidates who were not taught by the most prestigious philosophers did in fact benefit from the easily available resources (conferences, recordings of lectures and other events). But nor can they assume a wide gulf between candidates who have and candidates who have not been taught by very respected philosophers.
A very important function of reference letters is, to my mind, that of giving prospective employers a way to check certain aspects concerning candidates, in case of doubt. This pro is specific to references, and as such has nothing to do with pedigree. Is a prospective employee a good team player? How much did she contribute to a particular publication? How close to publication are the papers listed on a c.v. as ‘in progress’? But this aim can be satisfied in the absence of a general requirement that job applications include letters. It is enough if candidates are asked to list, in their application, a few people who can comment on them, with the understanding that referees are only to be contacted occasionally, in exceptional cases.
It appears that, on the balance of reasons, the pros count less than the cons. Letters may be important when employing people in many other fields because, together with interviews, they form the main basis for assessing a candidate’s ability. But in philosophy hiring, where both written samples and job talks are required, we could and probably should do without them.

An Ethical Checklist for Military Intervention

Large-scale loss of life shocks our collective conscience.* The developing situations in Ukraine, the Central African Republic (CAR) and South Sudan have triggered loud and frequent calls for military intervention. This month, these calls were heeded in the Central African Republic. The United Nations Security Council announced its decision to intervene. The mission has been given the catchy name: the United Nations Multidimensional Integrated Stabilization Mission in the Central African Republic, or MINUSCA for short. 10,000 troops, 1,800 police and 20 corrections officers will be deployed. [1] The news has been greeted with jubilation on many social media sites.
 
 
This post is a note of caution. 

I do not understand the intricate dynamics of the conflict in the CAR. And, most likely, neither do you. This is the point. I will argue that without an in-depth and detailed understanding of the conflict, and a certain (and well-grounded) belief that an intervention will successfully stop the violence and do more good than harm, we should not be calling for the deployment of military troops. When we argue for the use of military force, we accept that troops can kill and, if things go wrong, be killed. The question of when military intervention is ever justified is not an easy one.


Before even considering the deployment of foreign military troops, all other efforts to stop the conflict, both internal and external to the country, must be exhausted first. Such efforts include internal peace processes; diplomacy; supporting local, regional and international pressure to end the conflict; divestment; and many, many more.


Given the shaky record of military interventions, we should be skeptical about using military force to end violent conflict. There have been cases in which military intervention, aimed at preventing the conflict, has made the situation worse. In Bosnia, the United Nations peacekeeping force was implicated in enabling the massacre of 8,000 Bosniaks. In Somalia, the United Nations-sanctioned military intervention exacerbated the conflict and arguably killed more civilians than the concurrent delivery of humanitarian aid saved.[2] Doyle and Sambanis (2006) conducted a large-scale quantitative study to evaluate the success of military interventions. They found that United Nations peacekeeping operations can improve the chances of peace. However, overall, they show that the record is ‘mixed’. Of the 121 peace operations they analysed, 63 were failures and 53 were successes. By ‘success’, they mean the end of violence and a degree of stability. On a more rigorous definition of success, which includes the introduction of institutions that prevent a re-ignition of the conflict in the long term, the results are much worse. In addition, they note that it is difficult to determine whether, in the 53 successes, the military intervention itself caused the end of the conflict. This should be enough to dampen our enthusiasm for launching military interventions.


However, assuming that no alternative ways of stopping the violence exist, some interventions may be able to facilitate an end to conflict. So, before the call to intervene is made, what should we consider? The difficult part of making a judgement is that we have to make a predictive claim that military intervention can do some ‘good’. I will now outline some of the issues that need to be considered.


Firstly, can the violence actually be stopped?


The interveners need to have enough resources and political will to get the job done. Common sense dictates, and there is a lot of research to back this up, that military intervention costs money. The resources need to be appropriate to the task at hand. A military campaign to stop countrywide violence in Luxembourg is going to take far fewer resources than a military campaign to stop countrywide violence in Russia. In addition, stopping violence can’t be achieved overnight. Consequently, there needs to be sufficient political will, in terms of being prepared to lose troops’ lives, to stay in the country long enough and to bear the financial costs of the intervention.


Even more importantly, it is all very well to have sufficient resources, but can a military intervention actually stop the parties from fighting? If the conflict can’t be resolved or ‘won’ even with the best intentions and all the resources in the world, there may be no way of ending the violence. Therefore, before arguing in favour of intervention, there needs to be a detailed analysis of the causes and reasons for the continuation of the conflict. Are there distinct and identifiable parties to the conflict? How many are there and what are their interests? How are they likely to respond to military intervention? Will they be incentivised to stop or will they start fighting more ferociously? Has there been military intervention in the conflict before? Will the memory of previous intervention attempts make ending the violence easier or more difficult? What are the chances of a military victory, by either party to the conflict or the intervener? In the event of interveners successfully ending the violence, will the conflict simply reignite when interveners leave the country?


Each conflict is different, with specific political causes, actors and dynamics enabling its perpetuation. Sometimes an additional military actor, even one with benign interests, will only serve to heighten the feeling of insecurity of the belligerents and increase fighting in a country. This deserves close attention before sending troops in with the aim of ‘saving lives’.


Secondly, there may be reasons to value the fighting.


The parties might be fighting for a good reason. For example, the conflict could be part of a liberation struggle: a fight to overthrow colonial oppressors, to remove an authoritarian dictator, or to win rights for oppressed minorities. We should consider that there may be wider social goods, beyond an immediate concern to save human lives, that are important. As a result, letting the conflict continue, or even providing support to a particular side, may be the best option.


Finally, what about the unintended side effects of a military intervention? 


There can be good side effects. Military intervention could signal to other would-be atrocity-committers that they won’t get away with it. However, other side effects are more ambiguous. Large military peacekeeping operations leave a significant economic footprint in a country. A recent study by Carnahan et al. (2007) suggests that the economic impact is often positive. However, as the current evidence remains inconclusive, the potential economic impact should be considered.


A more harmful side effect, now well documented, is the growth of human trafficking when large-scale military operations are deployed.[3] In the last few years, the United Nations has made some positive steps to prevent this.[4] However, the risk still exists. Before an intervention, there should be confidence that the chances of success outweigh the potential risks of introducing a large number of foreign troops into a country.

Deciding whether or not to intervene is a hugely complicated question. A multitude of factors need to be considered, and this blog post is by no means exhaustive. I have not raised important questions of government consent, the popular opinion of those living in the country of intervention and many more. But, to make my point simple and clear: before arguing in favour of intervention, we need, at the very least, to be able to answer yes to the following questions:


1) Are there no better alternatives to stop the violence?


2) Does a military intervention have strong chances of stopping the violence? 


3) Are we sure that the conflict should be stopped?


4) Are we prepared to accept the possible unintended consequences of intervening militarily?


This blog post is not an argument against military intervention per se. Rather, it is a call for careful and serious consideration of these questions before supporting military intervention. My suspicion is that, in the majority of cases where the United Nations and other organisations have intervened, the answer to all four of these questions has not been ‘yes’.


This is not meant to be pessimistic. There are many other actions, outside of military intervention, that we can take to try and end large-scale human suffering. As citizens we can call on our governments to stop supporting violent regimes and selling arms in zones of violent conflict. However, when violence does erupt, despite the horror we feel at seeing fellow human beings suffer, we may have to face the stark reality that, right at that moment, military intervention is not the correct solution.


*A quick caveat: The use of terms such as ‘our’ or ‘we’ in this post is not intended to denote the ‘West’ or the ‘international community’, as they are sometimes used in discussions of military intervention. I am talking to peers who are considering arguing in favour of or against military intervention.

[1] See Aljazeera http://www.aljazeera.com/news/africa/2014/04/un-approves-peacekeepers-car-2014410141916684418.html
[2] Seybolt, Taylor B., Humanitarian Military Intervention: The Conditions for Success and Failure, Oxford: Oxford University Press, 2008.
[3] Mendelson, Sarah, Barracks and Brothels: Peacekeepers and Human Trafficking in the Balkans, Washington DC: CSIS, 2005. Found at: http://csis.org/files/media/csis/pubs/0502_barracksbrothels.pdf
[4] http://www.stopvaw.org/un_peacekeeping_missions

What’s Mine Is Yours, or Is It?

In the past few years we have all become familiar with the idea of the ‘sharing economy’, and this is true even in the absence of a clear-cut definition of it. If I asked you to name some examples of the ‘sharing economy’, your list would probably include Wikipedia, Spotify, Linux, airbnb, ebay, Mendeley. This, I believe, has to do with the fact that there is nothing much new about the logic that grounds it: we have all experienced practices of renting, lending, swapping, bartering, gifting, and we have all shared spaces, meals, textbooks, opinions, skills, knowledge, etc. with others. We engage in such practices for various reasons: sometimes it is because we simply cannot afford to buy, and access, rather than ownership, is a more viable option. Sometimes we prefer to re-use unwanted items than to throw them away because, in this way, we reduce the production of waste and, with it, our carbon footprint – all of which seems like a good idea in a world of scarce resources and increasing levels of pollution. At other times sharing is just more enjoyable; it creates or fosters relationships with other people, and often it leads to new uses and innovative ideas. So, what is different now? Why do we come across a number of articles and blog posts talking about the rise of the “sharing economy”? And what has turned ‘sharing’ into an ‘economy’ proper?
 
Digital technology, and the advent of Web 2.0 in particular, appears to be the main driver of this transformation. It’s not just that ‘sharing’ seems to be the fundamental and constitutive activity of the Web 2.0, and especially of social networks (John 2013); the Internet is also changing the way in which we engage in the ‘old’ practices of sharing. This is evident if you consider how easy access to the Web 2.0 scales up the pool of potential ‘sharers’, and the amount of information about them, thus decreasing the high transaction costs associated with more traditional forms of sharing. In other words, the Web 2.0 allows the creation of systems of organized sharing in both the production mode (i.e. commons-based peer production) and the consumption mode (i.e. collaborative consumption).
 
By leveraging information technology and the empowerment opportunities it makes available, the ‘sharing economy’ would seem to advance a new socio-economic model, where the production, exchange and consumption of goods and services is based on horizontal networks and distributed power within communities, rather than on the competition between hierarchical organizations. This seems like a valuable characteristic. And, indeed, we find that much of the enthusiasm about the sharing economy is generally expressed through the egalitarian language of cooperatives and civic groups. It invokes values like the dispersion of power, sustainability, community-level connectedness and solidarity, and the opposition to hierarchical and rigid regulatory regimes in favour of peer-to-peer (P2P) schemes (e.g. “What’s Mine is Yours: The Rise of Collaborative Consumption“).

But, does this mean that the sharing economy is also changing capitalism from within? Are we all witnessing a switch to a “camping-trip” type of economic organization? Is the Web 2.0 really fostering an egalitarian ethos in our societies? Or is the sharing economy just the hi-tech version of the invisible hand?
These are of course big questions, and answering them requires more empirical evidence and critical reflection than I can provide here. That said, I would like to offer one critical insight, precisely about the appropriateness of describing the ‘sharing economy’ as an egalitarian practice. To do so, I will focus on its consumption side, because I find the egalitarian narrative surrounding collaborative consumption more troubling than that of commons-based production.

Solidarity without reciprocity? Often the egalitarian character of collaborative consumption is justified by drawing a comparison with economic systems like gift economies. What they have in common is that they are solidarity-producing or solidarity-enhancing practices. But there is a crucial difference. In gift economies the act of gift-giving was meant to be reciprocated: it created obligations to respond in kind, and these mutual obligations were the ties that bound society together. Collaborative consumption does not seem to work like that: benefitting from what is shared through the Internet does not create an obligation to reciprocate, and the increasing involvement of capital in the sharing economy (e.g. airbnb has now raised more than $300M in investments) is rapidly replacing reciprocity-driven initiatives with entrepreneurial ones.
Indeed, the more collaborative consumption goes mainstream, the more it becomes just a new business model for large companies. Even when the language and rhetoric of sharing is maintained, what characterizes platforms like airbnb and Lyft is the profit-seeking of shareholders, rather than the egalitarian solidarity of more traditional forms of sharing, such as gift-giving. With this I am not suggesting that the idea should be abandoned altogether, but that we should be more prudent in welcoming it as a practice that fosters our egalitarian inclinations. It also means that we should be more attentive to the quick developments that the ‘sharing economy’ is undergoing: indeed, there is very little egalitarianism in tactics like the “surge pricing” (raising the price of a ride at busy times or during bad weather in order to attract more drivers onto the road) that Lyft and Uber have adopted recently.

What is the value of (even fair) equality of opportunity in an unjust society?

People across the political spectrum care a lot about social mobility. For instance, a recent BBC documentary entitled ‘Who Gets the Best Jobs?’, about how little upward mobility there is in British society today, seems to have hit a nerve – judging from the large number of views on YouTube, but also from the large number of passionate comments from the public.

And there are people who would equate perfect social mobility with justice, and who therefore deplore its absence as the most worrisome form of injustice today.

I assume this is so because equality of opportunity of one form or another is a very widely accepted political value today. Why would anyone be against fair chances? For people on the right, the principle of equal opportunities embodies an ideal of meritocracy which in turn promises that desert will be rewarded. Those on the left are keen on a version of the principle designed to condemn as unjust the obstacles that class (and other kinds of social inequalities) pose in front of individuals’ efforts to obtain socially desirable positions. For instance, John Rawls‘ principle of fair equality of opportunity (henceforth FEO) requires this: ‘[s]upposing there is a distribution of native endowments, those who have the same level of talent and ability and the same willingness to use these gifts should have the same prospects of success regardless of their social class of origin.’ The rest of this post explores the value of this version of the principle.
One of the most compelling reasons for valuing FEO is the belief that it is unfair when factors such as the level of education or wealth in one’s family of origin determine one’s chances to lead a good life. This reason is either unstable or less compelling than it seems at first glance. If it is interpreted to say that how well someone’s life goes ought not to depend on factors that are outside her control and for which she cannot be held responsible, then the principle is unstable: we are as little responsible for our native talent, and, possibly, for our ambition, as we are for our parents’ money and education. If it is interpreted to say that people deserve to reap and keep the benefits of their talents, the principle is contentious precisely because people have no merit in being born with particular native talents.
Here is an illustration: imagine that two children have an equally strong desire to get a job and that occupying it will contribute equally to their overall wellbeing; one is more talented, while the other has more parental support. If we bracket the question of the social utility of that job being occupied by one, rather than the other, person, does it make any moral difference which of the two gets the job? In any case there will be a winner and a loser, and in any case a form of luck will dictate who is who.
You may think that the issue of social utility ought not to be bracketed. A second ground for valuing FEO is that the ideal of meritocracy it embodies makes possible various kinds of excellence that all of us have reason to desire (who does not want to be served by competent doctors or civil engineers?). This reason has more weight when it comes to selection for some occupations than for others, and it may often be irrelevant in an unjust society. There is some reason to want, all other things equal, that the people who are most talented for medicine (amongst those who wish to practice it) become doctors. But with respect to other professions that are sought after, ‘good enough’ will probably do (say, when it comes to who ends up practising civil law). FEO is not necessary to avoid the social disvalue of incompetent individuals landing jobs where they can do harm. A method of eliminating candidates below a certain threshold of competence will do.
An even more important point here is that people who live in unjust societies have reasons not to want the most talented people to occupy those desired positions that are connected to the exercise of power. The most talented lawyers serving a corrupt legal system are likely to do more damage than more mediocre ones; the most talented conservative politicians will be most efficient at keeping in place unfair institutions; and similar things can be said about the most talented bankers, tax advisers and top managers working in unjust societies.
One last thought on why it is not socially useful if the most talented land the best paid social positions in unjust societies: such societies tend to have mechanisms that harness talent in the service of a well-off minority – one such mechanism is, obviously, better pay. To take again an example from medicine, much research talent is currently used to seek cures for trivial conditions that afflict mostly the rich. There is no social utility in the most talented researchers who compete for these positions getting them. Therefore (and other things equal) it is far from clear that FEO in an unjust society contributes to the maximisation of social utility.
Third, FEO can also be instrumental to improving the situation of the worst off. In Rawls’ theory of justice, FEO is nested in a more complex principle, which demands that FEO regulate the competition for desirable social positions in a society whose basic institutions are organised to ensure that the worst off members of society are as well off as possible. If the most talented individuals occupy the best rewarded social positions in a well-ordered society, this will supposedly lead to the maximisation of the social product. As a result, everybody, including the worst off members of society, will end up better off than they would if other, less talented individuals were to occupy those positions. This is the meritocratic component of FEO. The best results in this sense will be achieved if individuals can develop their talents equally, unencumbered by their class, gender, race etc. This is the fairness component of FEO. In this justificatory story, the value of FEO is entirely dependent on its operation in an otherwise distributively just society. Rawls himself came to the conclusion that the realisation of the difference principle requires market socialism or a property-owning democracy. One may dispute this, but few dispute that current societies fail to realise the difference principle. How important is it to make sure that people have fair chances to win unfair competitions?
So, it seems a mistake to worry too much about social mobility. We should be a lot more worried about substantive inequalities than about the distribution of chances to end up as a winner rather than as a loser.

Migrant Domestic Workers in Lebanon: An unjust system, how should individuals act?

In Lebanon, the law covering the work of migrant domestic workers (MDWs) is deeply unjust. The situation of MDWs in Lebanon and the Middle East has been described as “little better than slavery”. That the law and practice should be reformed is clear. Whether this will happen any time in the near future is much less clear. What I want to focus on in this blog post is the question of how individuals who object to the law and practice should act.
A brief background: There are an estimated 200,000 MDWs employed by Lebanese families. The vast majority are women from Sri Lanka, Ethiopia, the Philippines and Nepal. MDWs are employed on short-term contracts. They are admitted into Lebanon on work visas that link them to a specific employer (a sponsor) and oblige them to live at the home of their employer. Their contracts are not covered by Lebanese labour law. This means they are excluded from entitlement to the Lebanese minimum wage guarantees, maximum number of working hours, vacation days and any compensation for unfair termination of contract. The contracts the migrants sign in their home countries with recruitment agencies are not recognized in Lebanon. Upon arrival they sign a contractual agreement (in Arabic), binding them to a specific employer (sponsor), often with different terms than the contract they signed at home. The fact that their stay in the country is tied to their employer means they have practically no room for negotiating the terms of the contract. The government has recently imposed a standard contract for employing MDWs, but it is far from fair and is in any case poorly enforced. The fact is that there is a high incidence of abuse against MDWs. This ranges from “mistreatment by recruiters, non-payment or delayed payment of wages, forced confinement to the workplace, a refusal to provide any time off for the worker, forced labor, and verbal and physical abuse”.
Source: Al-Akhbar (http://english.al-akhbar.com/node/18752)
The question: International organizations and, more recently, local NGOs have been advocating and proposing reforms. Meanwhile, what should individuals do? They should take a clear stance in favor of the reforms, support NGO initiatives and awareness campaigns, and combat attitudes of racism. That much seems obvious. A more difficult question, I find, is whether individuals ought to
  (A) refrain from employing MDWs as long as the practice is unjust; or 
  (B) employ MDWs while individually applying the terms and conditions that a fair law would require.
One reason in favor of (A) is that employing MDWs counts as contributing to sustaining an unjust practice. One can easily avoid participating in the system: there is no sense in which refraining from employing an MDW imposes an unreasonable cost on individuals. Additionally, even if one could improve conditions through individual arrangements with the MDW herself (the vast majority of MDWs in Lebanon are women), one has no control over the other dehumanizing factors, starting with the recruitment procedures in their home countries. Moreover, it is not only the contractual framework that is unjust. The justice system offers little protection, and a biased media and widespread racism make MDWs highly vulnerable to mistreatment and abuse. I find this position rather convincing, but I also worry that it seems to be the easy way out.
I also think there are strong arguments in favour of (B). One can point out that the vast majority of MDWs migrate to escape severe poverty and send most of their earnings back home (remittances were estimated at $90 million in the first half of 2009, and remittances make up a high share of the GDP of some countries).[1] Surely it is better to offer them employment under fair conditions, notwithstanding the objections above, especially since they are going to seek employment in Lebanon anyhow? The difficulty with this line of argument, however, lies in the host of tricky questions it raises. To mention only some: should one ensure that her prospective employee was not coerced (or misinformed) into taking up the job in her home country? If so, when does the cost of doing so become unreasonable? What counts as a fair wage and fair working conditions? Is that the country's minimum wage? What if someone cannot afford to pay the fair wage? Does that mean one should opt for (A)?
These questions raise a difficulty for the following reason: if the rationale behind choosing (B) over (A) is that (B) improves some person's conditions and as such reduces the harm, whereas (A) merely allows the harm to happen, then any improvement on the current conditions, no matter how small, would justify (B). This seems problematic. My intuition is that one should try, as far as possible, to provide what ideal conditions would require. Take wages, for instance. I am assuming that determining what counts as a fair wage, whether equal to or higher than the minimum wage, is not very complicated. Now, if the ideal wage exceeds what one is willing to pay for the service of an MDW, then one should not necessarily opt for (A), but rather pay the maximum amount beyond which the service is no longer attractive. This would imply, I presume, that people with higher incomes should pay higher wages. The assumption here, of course, is that individuals are genuinely interested in making the right ethical choice.
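The rule I have in mind can be made explicit with a toy sketch. All numbers and the wage floor are hypothetical, purely for illustration; nothing here settles what a fair wage actually is:

```python
# Toy illustration of the rule sketched above: pay the fair (ideal) wage if
# you can; otherwise pay the most you would pay before the service stops
# being attractive to you; opt for (A) only if even that falls below some
# floor (here, hypothetically, the legal minimum wage).

def wage_to_offer(fair_wage, reservation_wage, minimum_wage):
    """Return the wage the rule recommends, or None for option (A)."""
    offer = min(fair_wage, reservation_wage)  # never need to exceed the fair wage
    if offer < minimum_wage:                  # even the floor is out of reach
        return None                           # opt for (A): do not employ
    return offer

# A higher-income household (higher reservation wage) ends up paying more.
print(wage_to_offer(fair_wage=450, reservation_wage=600, minimum_wage=350))  # 450
print(wage_to_offer(fair_wage=450, reservation_wage=400, minimum_wage=350))  # 400
print(wage_to_offer(fair_wage=450, reservation_wage=300, minimum_wage=350))  # None
```

The middle case captures the intuition in the text: someone who cannot afford the full fair wage pays the maximum they would pay, rather than defaulting to (A).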
I am not sure I can fully defend the above intuition. Therefore, I would like to hear your views on this. I find the choice between (A) and (B) difficult, and this is a dilemma faced by many friends and family members back home.
 

[1] Still trying to find more recent figures!

Supply, investment, and the allocation of scarce goods: How not to argue against rent control

Access to affordable housing is widely recognized as a basic right or, at the very least, an important moral interest. At the same time, residents of many major cities are faced with spiralling housing costs. London provides a particularly striking example. During the last year alone, average rents in London rose by more than 10 percent. Since this figure describes an aggregate trend, rent increases faced by individual tenants are often significantly higher. (When the last flat that I lived in changed owners, the rent went up by 30 percent, notably without any changes to the condition of the property.) In light of this situation, it is no surprise that calls to address the problem of rising rents have become louder.

One straightforward way of addressing the problem would consist of policies that place legal limits on the extent to which rents may be increased. Yet, the idea of rent control faces outspoken opposition. Opponents often defend their view by pointing out that rising rents have an underlying cause in the shortage of supply of housing in a given area. Constraining rents, they argue, does nothing to alter the shortage of supply or, worse, exacerbates it by reducing the returns on investment for property developers, thus undermining the economic incentives for an increase in supply. This line of argument, however, appears unconvincing. 

Shortages of supply in housing cannot easily be solved in the short term and are partly determined by geographical factors that cannot be altered at all. If rent control policies fail to address the underlying problem of supply but do not worsen it, why should they not be considered as an interim measure? It is, of course, easy to conceive of policies that would further exacerbate the problem, for example if they took the shape of absolute rent ceilings that would make it impossible for developers to recoup their investment. There are, however, obvious policy alternatives that would place limits on rents and rent increases while being flexible enough to ensure a sufficient return on investment. In fact, if policies were structured such that returns on investment in new developments are higher than returns on investment in existing properties, they could create additional incentives for the construction of new homes, rather than undermining them. The very lack of rent controls, in turn, can be seen as compounding the imbalance between supply and demand, in that it creates demand for existing properties on the part of speculative investors that would not exist if rent controls limited the returns on speculative investment.

A further prominent argument against rent controls, even if understood as second-best or interim measures, relies on the appeal of free markets as a mechanism for the allocation of scarce goods. If a good is in short supply and prices are left to move freely, they will rise up to the point at which an equilibrium is reached between the amount of goods available and the amount demanded at the price in question. From the point of view of economic theory, this process is often considered to be attractive on the basis that it ensures that scarce goods are allocated to those who value the good most highly. If the price were artificially kept low, in contrast, the allocation of goods would be determined by factors that may be less normatively appealing or left to pure chance. Applied to the present context, if there is a shortage of housing in a given location, would it not be a morally attractive outcome if tenants with the strongest preference for the location got to live there?

Maybe it would. As an objection to the regulation of real-world housing markets, however, the argument is fundamentally flawed. The claim that equilibrium prices allocate goods to those who value the good most highly is plausible only in conjunction with the idealised assumption that the bidding parties are roughly equal in their ability to pay. In a real-world context in which potential tenants differ significantly in their wealth and thus their ability to pay, differences in willingness to pay rent cannot be taken as a direct reflection of the subjective value that a given property has to them. Since the absence of rent control measures does nothing to ensure that housing is allocated according to strength of preference, the appeal to this allocative ideal cannot serve as an objection against rent controls.
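The gap between willingness to pay and strength of preference can be illustrated with a toy sketch. The incomes and the "share of income one would give up" measure of subjective value are invented for illustration only:

```python
# Toy sketch of the point above: with unequal wealth, willingness to pay in
# money no longer tracks subjective value. Here "value_share" stands in for
# subjective value: the share of monthly income a tenant would give up to
# live in the flat. All numbers are hypothetical.

tenants = [
    {"name": "A", "income": 8000, "value_share": 0.25},  # values flat at 25% of income
    {"name": "B", "income": 2000, "value_share": 0.60},  # values flat at 60% of income
]

for t in tenants:
    t["bid"] = t["income"] * t["value_share"]  # willingness to pay in money

highest_bidder = max(tenants, key=lambda t: t["bid"])
strongest_preference = max(tenants, key=lambda t: t["value_share"])

print(highest_bidder["name"])        # A outbids B (2000 vs. 1200) ...
print(strongest_preference["name"])  # ... although B values the flat more
```

Unregulated bidding hands the flat to A, even though, on the stipulated measure, B has the stronger preference; which is exactly why the allocative ideal cannot be invoked against rent controls.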

In the absence of other arguments, the controversy about rent controls appears to boil down to a conflict between the interest in affordable housing on the one hand, and the interest of property investors on the other. It seems clear to me that the interest in affordable housing is the morally weightier one. This is not meant to deny that investments made under existing rules may give rise to legitimate expectations. Honouring such expectations, however, should not prevent us from changing rules that apply to future investments.

Higher Education Pay Disputes and Industrial Action

It has recently been announced that pay for academic and senior professional staff in UK universities has fallen by 13% in real terms since 2009, despite students' tuition fees having trebled over the same period. The University and College Union (UCU) asserts that a consequence of this is that pay for academic and senior professional staff 'will fall still further behind the cost of living'. In response to this, members have pursued industrial action in order to attempt to secure fairer pay offers. Should we support this industrial action?
 
 
I think that there are four types of reasons that can be offered in defence of supporting the strikes, though I am unsure of how decisive any of these are, even in combination. The first reason is the one most explicit in the UCU’s literature. It relates to the fact that academic and senior professional staff ‘are being asked to work harder and take home less money to their families year after year’. This is thought to be particularly objectionable given the vast pay increases witnessed by Vice-Chancellors and Principals. This reason is a poor one. The vast majority of academic and senior professional staff in universities live very comfortable lives, getting paid generous salaries for work that they in general enjoy doing. In essence, I simply struggle to understand how an academic who earns an annual salary of £30,000+, which is comfortably above the average in the UK, can have that much of a complaint against being asked to work harder and take home less. 
 
A second reason relates more specifically to the pay of junior academic staff and graduate students. These people, who typically bear the brunt of the teaching, are notoriously under-paid. Perhaps this provides an argument in defence of supporting the strikes. I am not convinced by this argument either. My sense is that junior academics and graduate students are in general very talented people who have many more opportunities open to them than most. I appreciate that getting by can be tough, but when they complain about the state of their pay my gut reaction is to say 'Well, if you don't like it, do something else!'. To those with some familiarity with contemporary political theory, I am tempted to say that academia is in an important sense just like an expensive taste.
 
A third defence relates to the pay of non-academic members of staff including, for example, the pay of cleaners, porters, administrators, etc. There is a much stronger case in defence of further protecting their interests. The problem with this, though, is that, as far as I can tell, this is not one of the aims of recent industrial action. Notably, for example, the UCU represents only academic and senior professional staff in UK universities, and much of its literature discussing these issues makes no reference to the pay of non-academic members of staff. It is therefore not clear to what extent the strikes will further this goal.
 
The fourth reason is suggested by this statement made by the UCU: ‘If the pay cuts don’t stop and the universities do not start to invest some of the amassed money into tackling the issue of falling pay, the quality and reputation of our higher education system will suffer’. On this reading, the justification for industrial action is not (principally) self-interest; rather it is partly out of a general concern to protect quality in higher education. (The fact that those on strike stand to gain financially from doing so is simply a convenient coincidence!) In order for this justification to prove decisive, it must be both that quality in education is (strongly) correlated with the pay of academic and senior professional staff, and that this would be the best (feasible) use of the money. Both of these claims can be doubted. 

As someone who considers themselves on the political left, I am generally sympathetic to the use of industrial action. However, in this case, I am yet to be persuaded. What do you think?

What language should we use? Aesthetics vs. inclusiveness

The Economist is known for being a strident defender of all things capitalist (it was once said that "its writers rarely see a political or economic problem that cannot be solved by the trusted three-card trick of privatisation, deregulation and liberalisation"). One reason why it has been so successful in pushing this agenda is its widely acknowledged quality of writing. It is so well known for its clear, jargon-free writing that the Economist Style Guide has become a best-selling book. Idle browsing led me to their advice on what titles to use when writing about someone:

The overriding principle is to treat people with respect. That usually means giving them the title they themselves adopt. But some titles are ugly (Ms)… 

Now, it had not even occurred to me that anyone would think that "Ms" was "ugly". I was brought up taking it for granted that we should automatically use "Ms" rather than "Mrs", so it doesn't even strike me as odd. Perhaps the reaction is different among older generations. (In any case, I doubt that we should be using gendered titles at all.)
But I wonder whether it even matters whether it is "ugly" or not. As the article suggests, the "overriding principle is to treat people with respect", and whether or not a word or phrase sounds or looks nice seems a fairly unimportant consideration in comparison. Treating people with dignity and respect by using inclusive language seems to me obviously more important than aesthetic considerations. Using slightly longer or more unusual language seems such a small price to pay for being decent towards other people.
However a lot of people who do not like “politically correct” language seem to think differently. They scoff at differently abled rather than disabled, sex workers rather than prostitutes, transgender rather than transvestite. Their real motivation is usually that they do not believe in the underlying claims for respect and equality, but it is often dressed up as caring about the attractiveness of language itself. (For a perfect takedown of these “political correctness gone mad” people see this sketch by Stewart Lee).
 
Perhaps, however, there is a more respectable position than that of the anti-"political correctness" crowd when it comes to the trade-off between more inclusive language and aesthetics. Perhaps there is something to the idea that language should not be altered so much that it becomes sterile and bureaucratic. Maybe the aesthetic value of language is in fact greater than I have suggested. Let me even grant for a moment the point that some inclusive language can appear 'unattractive'. Saying fisherperson rather than fisherman, for example, might truly strike some as weird.
But even on this I’m not convinced. Our understanding of what is and is not aesthetically pleasing language is not objective and unchanging. Just as with “Ms” and “Mrs” I think we can become quite quickly accustomed to new language and no longer consider it unattractive. Salesperson, spokesperson and police officer have all become so accepted that I doubt whether anyone still sees them as intrusions on attractive language. Our aesthetic judgements are intimately connected with our wider views about justice and equality. When our views on the latter change, it affects the former.
 
Of course the aesthetic costs of using inclusive language might vary from language to language. English, for example, does not have gendered articles (the, a), and it has relatively few gender-specific nouns, and those that exist can be made neutral fairly easily. That is not the case with many other languages. German, for example, has gendered articles (der/die, ein/eine), and most nouns are gendered too. In German you can't just say "the student" or "a professor" and be gender-neutral, because there are different versions of the noun referring to females and males. So in order to be gender-neutral you have to write "der/die Schüler/-in" and "ein/-e Professor/-in" to include both female and male students and professors. That is more cumbersome and less attractive than it is in English. But the alternative is using a single gender (which nearly always means the male gender) to cover everyone. I think the consequences of that are much worse than using a few extra slashes and hyphens.
 
The temptation might be to try to find some middle ground position. But in this case my view is that inclusiveness trumps aesthetics every time when it comes to language. The language we use shapes the environment that people live in, and when that language excludes and insults people it contributes to a hostile and oppressive environment. I’m willing to sacrifice quite a lot of aesthetic value to avoid that.

Scoring For Loans, or the Matthew Effect in Finance

 
 
Last year, we moved to a lovely but not particularly well-off area in Frankfurt. If we applied for a loan, this might mean having to pay higher interest rates. Why? Because banks use scoring technologies to determine the creditworthiness of individuals. The data used for scoring include not only individual credit histories, but also data such as one's postal code, which can be used as a proxy for socio-economic status. This raises serious issues of justice.
Sociologists Marion Fourcade and Kieran Healy have recently argued that in the US credit market scoring technologies, while having broadened access, exacerbate social stratification. In Germany, a court decided that bank clients do not have a right to receive information about the formula used by the largest scoring agency, because it is considered a trade secret.
This issue raises a plethora of normative questions. These would not matter so much if most individuals, most of the time, could get by without having to take out loans. But for large parts of the population of Western countries, especially for individuals from lower social strata, this is impossible, since labour income and welfare payments often do not suffice to cover essential costs. Given the ways in which financial services can be connected to existential crises and situations of duress, this topic deserves scrutiny from a normative perspective. Of course there are deeper questions behind it, the most obvious one being the degree of economic inequality and insecurity that a just society can admit in the first place. I will bracket it here, and focus directly on two questions about scoring technologies.
1) Is the use of scoring technologies as such justified? The standard answer is that scoring expands access to formal financial services, which can be a good thing, for example for low-income households who would otherwise have to rely on loan sharks. Banks have a legitimate interest in determining the creditworthiness of loan applicants, and in order to do so cheaply, scoring seems a welcome innovation. The problem, however, is that scoring technologies use not only individual data, but also aggregate data that reflect group characteristics. These are obviously not true of each individual within the group. The danger of such statistical evaluations is that individuals who are already privileged (e.g. living in a rich area or having a "good" job) are treated better than individuals who are already disadvantaged. Also, advantaged individuals are usually better able, because of greater "financial literacy", to get advice on how they need to behave in order to develop a good credit history, or on how to game the system (insofar as this is possible). The use of such data thus leads to a Matthew effect: the haves profit, the have-nots lose out.
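The mechanism can be made concrete with a toy score. The weights and inputs below are entirely invented, not any real agency's formula; the point is only to show how blending group-level data into an individual score produces the Matthew effect described above:

```python
# Hypothetical toy credit score mixing individual and group-level data.
# Two applicants with identical individual payment histories receive
# different scores purely because of where they live.

def credit_score(on_time_payment_rate, postcode_default_rate):
    individual = 600 * on_time_payment_rate    # individual credit history
    group = 200 * (1 - postcode_default_rate)  # postal code as group proxy
    return round(individual + group)

# Same person, same behaviour; only the postcode changes.
rich_area = credit_score(on_time_payment_rate=0.95, postcode_default_rate=0.02)
poor_area = credit_score(on_time_payment_rate=0.95, postcode_default_rate=0.15)

print(rich_area, poor_area)  # 766 740
```

However diligent the applicant in the poorer postcode is, the group term caps their score below that of an identical applicant in the richer one, which is precisely the structural disadvantage at issue.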
There are thus normative reasons for and against the use of scoring technologies, and I have to admit that I don't have a clear answer at the moment (one might need more empirical data to arrive at one). One possible solution might be to reduce the overall dependence on profit-maximizing banks, for example by having a banking system that also includes public and co-operative banks. But this is, admittedly, more a circumvention of the problem than an answer to the question of whether scoring as such can be justified.
2) Is secrecy with regard to credit scores justified? Here, I think the answer must be a clear "no". Financial products have become too important for the lives of many individuals to think that the property rights of private scoring companies (and hence their right to have trade secrets) would outweigh the interest citizens have in understanding the mechanisms behind them, and in seeing how their data are used to calculate their score. In addition, social scientists who explore social inequality have a legitimate interest in understanding these mechanisms in detail. It must be possible to have public debates about these issues. Right now, the only control mechanism for scoring agencies seems to be the market, i.e. whether or not banks are willing to buy information from them. But one can think of all kinds of market failures in this area, from monopolies and quasi-monopolies to herding behaviour among banks.
One might object that without trade secrecy there would be no scoring agencies at all, and hence one could not use scoring technologies at all (note that this only matters if one's answer to the first question is positive). But it seems simply wrong that transparent scoring mechanisms could not work. After all, there is patent law for protecting intellectual property, and in case this really doesn't work, one might consider public subsidies for scoring agencies. The only objection I would be worried about is a scenario in which transparency with regard to scoring agencies would reinforce stigmatization and social exclusion. But the problem is precisely that this seems to be going on already, behind closed doors. We cannot change it unless we open these doors.

Nudge, Nudge? Privatizing Public Policy

“Like all major changes to democratic accountability, it happened with a minimum of fuss. By the time we heard about it, it was already over.”


This week the government announced that the Behavioural Insights Team (BIT), commonly referred to as the ‘nudge unit’, has been ‘spun out’ of Whitehall into a mutual joint venture. The new “social purpose company” is now owned, in roughly equal shares, by BIT employees, the government, and Nesta (an independent charity established by the previous government using £250 million of National Lottery money). The privatisation deal has been described as “one of the biggest experiments in British public sector reform” (Financial Times), on account of this being the first time that privatisation has reached beyond public services and utilities to include an actual government policy team. My intuition, like many other people’s I would imagine, is that this marks a dangerous new precedent in the rise of private power over the public. But what precisely is it that is doing the work for this intuition?
