Justice Everywhere

a blog about justice in public affairs

Author: Lisa Herzog (Page 2 of 2)

Leaders and their responsibility for knowledge

This article in the Guardian, which some members of our team have shared on Facebook, suggests that the British prime minister David Cameron may have (had) no clue about what his policies did to local services. If we assume that this is true, it raises a moral question of great importance for today's societies: how can leaders ensure that they know enough about the consequences of their decisions to be entitled to make them at all?


(One of) Effective Altruism’s blind spot(s), or: why moral theory needs institutional theory

There has been much talk about effective altruism recently (see e.g. here or here) – the idea that you should try to do as much good as you can, using the most effective means. It reads a bit like an update of good old Jeremy Bentham and "the greatest happiness of the greatest number" by a McKinsey consultant. It is easy to ridicule, and ridicule is indeed a frequent reaction, because humour eases the tension that one can feel when confronted with these ideas. For there seems to be more than a grain of truth in effective altruists' claim that we could do so much more to help those who were less fortunate in the "natural lottery" of where and when they were born. One thing that speaks in their favor, after all, is that effective altruists ask serious questions about what it means to be a moral agent in today's world. What I here want to pick out from the debate is their picture of the social world and of human institutions, which I take to be flawed. It is an illustration of why moral philosophy should not neglect the world we live in and the institutions that structure it.


Mass incarceration

One thing that I learned as a PhD student at Oxford was that philosophically interesting questions and questions about existing injustice do not always overlap – some existing practices are so obviously wrong from a normative perspective, I was told, that there is no point in writing normative theories about them. This seems right for certain cases, but I still haven’t quite made up my mind about whether it is always true.

I remember this Oxford seminar while reading this utterly depressing piece about incarceration and its effect on black communities in the U.S. in this month’s issue of the Atlantic.


Can Grading Love & Care be an Injustice?

It is a widespread intuition that some things in life cannot and should not be measured. For example, quantifying our love for a partner seems problematic. We do not want to rate our affection on a scale of 0-100.*  It is an important question, though, whether we can have a complaint of justice about measuring certain goods.  Here I consider two lines of argument for thinking that measuring certain things in quantifiable terms can be objectionable.
The first is indirect. It concerns unjust effects of things being measured that were not measured previously. An example is the measurement of the willingness to pay for parking spaces, which Joshua Kopstein recently discussed. Some start-up companies have developed apps through which people bid for spare parking spaces. Kopstein suggests that this system turns a public good into a private good that is allocated according to willingness and ability to pay, thus privileging the rich. This example does suggest that certain kinds of measurement can lead to complaints of justice, if they introduce an allocation mechanism that is not appropriate for the good. But in such cases it is the possibility of wrongful use, not the measuring itself, that can be criticized.
The second way in which measuring could raise complaints of injustice is more direct. Consider a stylized example. Assume that elderly relatives have a legitimate claim to receive some acts of love and care from younger family members. Assume that a start-up company develops an app that evaluates family members, on a score from 0 to 100, on how well their acts deliver care to elderly relatives. And assume that using the app becomes a social trend, such that most people start using it. This might have some beneficial effects. For example, it might become easier to share knowledge about how to cheer up grandma "efficiently" when she is gloomy. But could it also mean that what the elderly relatives receive is no longer acts of love and care, but something else: acts calculated to enhance the wellbeing of elderly relatives? If this is the case, it seems that they could raise a claim of justice. They are denied what they have a legitimate claim to receive. Schematically put, they have a legitimate claim to good X (love and care), but what they receive is good Y (acts that will efficiently enhance wellbeing), because by measuring and quantifying X, it is transformed into Y.
One problem here is whether we can specify a sufficiently clear and plausible account of what good X is and why good Y is different from it.** One possible issue might be that good X is a complex and multi-dimensional good, but by measuring it, we necessarily reduce it to fewer dimensions. Although modern technologies offer increasingly sophisticated ways of measuring things, they still cannot capture all the dimensions of what it means, for example, to have a trusting and loving relationship with someone. Another issue could be that offering good X requires openness to new challenges or a certain degree of spontaneity. Again, these cannot be easily captured in quantitative terms and are, thus, likely to be excluded if one tried to measure X. For example, an important aspect of a loving relationship is that one is sensitive to subtle changes in the other person’s situation, and maybe even that one understands such changes before the person herself fully understands them. It is therefore unclear how they could be included in quantitative measures.
Certain forms of measurement may be simply dysfunctional. In finance, there is Goodhart's law: "When a measure becomes a target, it ceases to be a good measure." This might also hold for other areas and make it simply unwise to try to utilise measurements there. But in addition to dysfunctionality, we should not exclude the possibility that measuring certain things may be an injustice. At least in the case of care and love, it seems there is reason to believe that that is the case.
*In Dave Eggers's The Circle there is an episode in which one of the protagonist's lovers asks for an evaluation of his qualities, on a scale from 0 to 100, directly after the sexual act. The protagonist is somewhat startled, and then resorts to a white lie.
**Aspects of this question have been explored in the debate about limits of the market, where one concern is whether the socially defined “meaning” of goods can be a basis for not measuring goods in market terms. See for example Debra Satz’s discussion of Elizabeth Anderson’s approach in her Why Some Things Should Not Be For Sale.

Does systemic injustice justify Robin Hood Strategies?

Many injustices arise because of patterns of behaviour, single instances of which seem harmless or at least pardonable. For example, if professors help their friends' kids get access to university programs – and given the fact that professors and their friends tend to come from the same socio-economic background – this can lead to structural discrimination against applicants from other backgrounds (as discussed by Bazerman and Tenbrunsel here, pp. 38-40). Other examples concern implicit biases against women and ethnic minorities. Much work has been done recently that helps us understand how these mechanisms work (see e.g. here). Given how pervasive these mechanisms are, it is understandable that they cause moral outrage. The question is, however, what individuals should do in reaction to them.
Imagine that you are in a situation in which you have some amount of power, for example as a reviewer or as a member of a search committee. You might be tempted to use a “Robin Hood strategy”, i.e. a strategy that breaks the existing rules, for the sake of supporting those who are treated unjustly by these rules. Given how structural injustices work, many such “rules” are not formal rules, but rather informal patterns of behaviour. But it is still possible to work against them. For example, could it be justified to reject male applicants not because of the quality of their applications, but because they are white and male and come from a rich Western country?
One has to distinguish two levels of what such a strategy could imply. The first concerns correcting one's own biases, which one might have despite all good intentions (to check them, the various tests offered by Harvard University on this website can be helpful). The best way to do this, if possible, seems to be anonymity. When this is not feasible, the alternative is to scrutinize one's patterns of thought and behaviour as best one can. The more power one has, the more it seems a requirement of justice to do this.
This is different from a second level of Robin Hood strategies, for which the name seems more appropriate: these concern not one's own biases, but the biases of the system. The idea is to work against them on one's own, in one's little corner, maybe hoping that if enough of us do this, the problems can be solved or at least attenuated. Could this be a defensible strategy?
The problem is, of course, that one risks introducing new injustices. One consciously deviates from what are supposed to be the criteria of selection, for example a candidate’s performance in previous jobs or the likelihood of being a good team member. In some cases, however, it is reasonable to assume that if a candidate comes from a group that suffers from discrimination, achieving the same level of merit as a candidate from another group takes much more effort. So according to this argument, and as long as these problems are not recognized by the official selection criteria, it seems defensible to privately factor in these previous structural inequalities.
But one’s epistemic position in judging such cases is often a weak one. For example, standard application material for many jobs includes a CV and some letters of reference. These materials are often insufficient for understanding the details of a specific case and the degree to which discrimination or stigmatization might have had an impact on the candidate’s previous career. One risks making mistakes and importing one’s own subjective biases and prejudices; taken together, this can make things worse, all things considered.
Robin Hood strategies do not provide what seems most needed: good procedures and public accountability. They do not get at the root of the problem, which is to create collective awareness of the issues, and to find collective mechanisms for addressing them (the gendered conference campaign is an example). Collective mechanisms are not only likely to be more effective, they also bring things out into the open, and create a public discourse on them. Although public discourses also have their weaknesses, there is at least a chance that the better argument will win, and there are opportunities for correcting strategies that turn out to be misguided. Robin Hood strategies, in contrast, fight fire with fire: they remain within a logic of power, trying to find ways in which one can use counter-power to subvert the dominant power elites. But this does not change the fundamental logic of the game.

Thus, our preferred strategies should be different ones: strategies that really change the logic of the game, openly addressing problematic patterns of behaviour and looking for collective – and maybe formally institutionalized – solutions. Nonetheless, and despite all the reasons mentioned above, I cannot bring myself to think that Robin Hood strategies can never be justified in today's world. Of course one has to be very careful with them, not only in particular cases, but also with regard to the slippery slope one might get onto. But are they ruled out completely? What do you think?

Scoring For Loans, or the Matthew Effect in Finance

Last year, we moved to a lovely but not particularly well-off area in Frankfurt. If we apply for a loan, this may mean that we have to pay higher interest rates. Why? Because banks use scoring technologies to determine the credit-worthiness of individuals. The data used for scoring include not only individual credit histories, but also data such as one's postal code, which can be used as a proxy for socio-economic status. This raises serious issues of justice.
Sociologists Marion Fourcade and Kieran Healy have recently argued that in the US credit market, scoring technologies, while having broadened access, exacerbate social stratification. In Germany, a court decided that bank clients do not have a right to receive information about the formula used by the largest scoring agency, because it is considered a trade secret.
This issue raises a plethora of normative questions. These would not matter so much if most individuals, most of the time, could get by without having to take out loans. But for large parts of the population of Western countries, especially for individuals from lower social strata, this is impossible, since labour income and welfare payments often do not suffice to cover essential costs. Given the ways in which financial services can be connected to existential crises and situations of duress, this topic deserves scrutiny from a normative perspective. Of course there are deeper questions behind it, the most obvious one being the degree of economic inequality and insecurity that a just society can admit in the first place. I will bracket it here, and focus directly on two questions about scoring technologies.
1) Is the use of scoring technologies as such justified? The standard answer is that scoring expands access to formal financial services, which can be a good thing, for example for low-income households who would otherwise have to rely on loan sharks. Banks have a legitimate interest in determining the credit-worthiness of loan applicants, and in order to do so cheaply, scoring seems a welcome innovation. The problem is, however, that scoring technologies use not only individual data, but also aggregate data that reflect group characteristics. These obviously do not hold true for each individual within the group. The danger of such statistical evaluations is that individuals who are already privileged (e.g. living in a rich area or having a "good" job) are treated better than individuals who are already disadvantaged. Also, advantaged individuals are usually better able, because of greater "financial literacy", to get advice on how they need to behave in order to develop a good credit history, or on how to game the system (insofar as this is possible). The use of such data thus leads to a Matthew effect: the haves profit, the have-nots lose out.
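As a stylized sketch of the mechanism (the weights and the postcode-based feature are invented for illustration; real scoring formulas are trade secrets, which is part of the problem discussed under the second question): two applicants with identical individual records end up with different scores purely because of where they live.

```python
# Hypothetical toy credit score (invented weights, not any bank's actual
# model): it combines an individual's repayment history with an aggregate
# feature derived from their postcode.

def credit_score(individual_history: float, postcode_default_rate: float) -> float:
    """Return a toy score in [0, 100].

    individual_history: share of past payments made on time (0.0-1.0).
    postcode_default_rate: average default rate in the applicant's area
        (0.0-1.0) - a group-level proxy that says nothing about this
        particular person.
    """
    individual_part = 70 * individual_history      # what the person did
    group_part = 30 * (1 - postcode_default_rate)  # where the person lives
    return individual_part + group_part

# Two applicants with identical personal repayment records:
rich_area = credit_score(0.9, postcode_default_rate=0.02)  # 63.0 + 29.4 = 92.4
poor_area = credit_score(0.9, postcode_default_rate=0.20)  # 63.0 + 24.0 = 87.0
```

The gap between the two scores is driven entirely by the group-level feature, which is the sense in which aggregate data can penalize an individual for facts about their group rather than about themselves.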
There are thus normative reasons for and against the use of scoring technologies, and I have to admit that I don't have a clear answer at the moment (one might need more empirical data to arrive at one). One possible solution might be to reduce the overall dependence on profit-maximizing banks, for example by having a banking system in which there are also public and co-operative banks. But this is, admittedly, more a circumvention of the problem than an answer to the question of whether scoring as such can be justified.
2) Is secrecy with regard to credit scores justified? Here, I think the answer must be a clear "no". Financial products have become too important for the lives of many individuals to think that the property rights of private scoring companies (and hence their right to have trade secrets) would outweigh the interest citizens have in understanding the mechanisms behind them, and in seeing how their data are used for calculating their score. In addition, social scientists who explore social inequality have a legitimate interest in understanding these mechanisms in detail. It must be possible to have public debates about these issues. Right now, the only control mechanism for scoring agencies seems to be the market, i.e. whether or not banks are willing to buy information from them. But one can think of all kinds of market failures in this area, from monopolies and quasi-monopolies to herding behaviour among banks.
One might object that without trade secrecy there would be no scoring agencies at all, and hence one could not use scoring technologies at all (note that this only matters if one's answer to the first question is positive). But it seems simply wrong that transparent scoring mechanisms could not work. After all, there is patent law for protecting intellectual property, and in case this really doesn't work, one might consider public subsidies for scoring agencies. The only objection I would be worried about would be a scenario in which transparency with regard to scoring agencies would reinforce stigmatization and social exclusion. But the problem is precisely that this seems to be already going on – behind closed doors. We cannot change it unless we open these doors.

Is Luck in Labour Markets an Issue of Justice?

Labour markets can be just and unjust in many ways that go beyond the distribution of income. One concerns luck and predictability. Their distribution is highly unequal, and I think that this raises issues of justice.
First, take individual predictability. In order to plan your life (where you want to live, with whom, whether/when to have children etc.) it is helpful to know what kind of job you can expect to have over the next few years. If job markets are to a high degree based on luck, rather than other criteria such as merit or age, they are less predictable. Now, whether or not labour markets could or should be structured around merit (and in what sense of merit) is a controversial question. But one advantage is that you can have a reasonable guess, based on your prior achievements, of what your job prospects for the next few years will be. Psychological tendencies such as over-optimism or cognitive dissonance can of course kick in, but even more so if there is less predictability.
Second, collective predictability. There are factors in the legal and social set-up of labour markets that determine, for societies as a whole, how predictable labour markets are. For example, a government can take anti-cyclical measures in a depression that keep people in jobs. Or, as Albena Azmanova has recently pointed out, the welfare state can be designed in ways that increase or decrease individuals’ flexibility, maybe offering “universal minimal employment” as a fallback option.
My impression is that much goes wrong in these respects today, and that this raises issues of justice (in addition to many other forms of injustice in labour markets).
First, unpredictability gives greater power to employers, because employees will reasonably be more risk averse, and will try to keep the jobs they have, even if the conditions are such that they would otherwise want to quit. This looks like an issue of justice as such, and it can have harmful consequences if it prevents people from standing up to injustices within their job, blowing the whistle, etc. Secondly, and more importantly, issues of unpredictability hit different groups in society with differential force. Depending on whether you have inherited wealth or not, marketable or less marketable human capital, a family rooted in one place or full geographic flexibility, etc., unpredictable labour markets make your life more or less difficult to live.
Nonetheless, it would not be worth raising these issues as issues of justice if they could not be changed, or only at the cost of violating other values. In designing policy instruments that make job markets more predictable, one would have to be careful – otherwise one might end up, for example, with an in-group with 100% predictability and an out-group with 0% predictability. Or one might, in the long run, stifle markets so much that the economic wellbeing of the worst off is endangered. But it seems worth experimenting with different models, and learning from the experiences in other countries, in order to see what can be done (maybe we can discuss examples below). And I think there can also be cases of micro-injustice concerning predictability, for example if a boss tells three people that they have “good chances” of being promoted, while only one can really be promoted.
One thing, however, can and should change, in my view. The role of luck in the job market should be acknowledged, and professional success (or the lack of it) should not be seen as a sign of personal worthiness (or the lack of it). We are equal as human beings and as citizens, and while some may work harder than others, or be more talented than others, these things do not determine our value. So while there might be arguments in favour of de facto trying to tie job market structures more to achievement, for the sake of predictability (although I think that collective measures are far more important), we should stop fetishizing professional success. The role of luck is always going to be there, and acknowledging it might lead to a bit more solidarity among co-citizens and fellow human beings.

