Relational egalitarians hold that what matters for justice is that all members of a society “stand in relations of equality to others.” The idea that all human beings are moral equals is widely shared: it underlies the Universal Declaration of Human Rights and many national constitutions. How will this norm be affected by the arrival of “big data,” the collecting and analysing of huge amounts of data about individuals? Internet companies and government services collect data about individuals’ activities, including geographic locations, shopping behaviour and friendships. Many individuals voluntarily share such information on social media; some also track their physical activities in meticulous detail. Experts expect that “people analytics” – big data applied to the measurement of work performance – will have a revolutionary impact on labour markets.

I will not discuss the potential abuses of data, e.g. infringements of the right to privacy, or the use of online surveillance by authoritarian regimes. What I am interested in here is the impact of big data on the social norms of our societies. Much of this data is visible: we can see how many Facebook friends we have compared to others, how much attention our work gets, or how fit we are compared to reference groups. What does this mean for how we see ourselves and others?

Anderson has drawn attention to the reality of “hierarchies of esteem, whereby those on the top command honour and admiration, while those below are stigmatized and held in contempt, as objects of ridicule, loathing, or disgust.” In egalitarian societies, hierarchies of esteem must not be based on “property and the circumstances of one’s birth – race, ethnicity, caste, tribe, family line, gender, and so forth,” nor must they be endorsed by the state. But they cannot be completely excluded, because “esteem and contempt are an inescapable part of human life.” Following Rawls, Anderson argues that the best strategy for dealing with them is to allow a plurality of standards, so that everyone can judge for themselves whom they take to be worthy of esteem.

The visibility and comparability of numerous dimensions of life might seem a positive development from that perspective. We can become more aware of the various qualities people have, such as being good at various hobbies (against the tendency to focus mostly on professional achievements) or being interesting online commentators. Seeing this can bust the stereotypical boxes in which we all too often put people, and help us appreciate the multiplicity of opportunities for esteem.

But there are also dangers. Ever increasing measurability might mean that we increasingly think of our own life and that of others in terms of scoring points. To be sure, comparisons of status have always taken place between human beings – but now, there is simply so much more data available for them! It is easy to imagine dystopian scenarios in which social pressures rise to measure anything and everything we do. Those who feel that they cannot keep up, or simply do not want to be measured, are left behind.

This is all the more problematic because it is obviously not true that everything can be measured. Things that cannot be measured get de-emphasized in a world fixated on measurement – even though these might be the things that are most meaningful in life. This can create an urge to make them measurable, or at least documentable. But it is not clear whether this creates the right attitudes and motivations, or rather distorts them. Can we still enjoy a beautiful sunset or time with our friends if we focus on taking pictures that will get us “likes” on social media? Or will something inevitably be lost?

And finally, it is not clear that the hope for pluralism in hierarchies of esteem will be fulfilled. Different dimensions can be linked, and high scores in some areas can lead to advantages in others, thereby creating a Matthew effect that rewards those who have and punishes those who have not. As sociologists Fourcade and Healy point out, market actors have an interest in evaluating individuals according to aggregated scores. Fourcade and Healy use the term “übercapital” to describe a form of capital that arises from the sum of one’s digital traces, e.g. social networks, eating habits, and productivity. If the scores of such “übercapital” became visible to everyone, we might return to a kind of feudal framing in which one’s overall standing is determined by one’s rank.

Can we reduce these dangers? It would certainly be helpful, as scholars argue, to create more transparency about how algorithms create rankings, to ensure that they are free of biases. But regulation may not be sufficient to address the impact on the ways in which we see each other. We need to remind ourselves that all scoring systems are highly imperfect. Such imperfection, and competition between different systems, is probably a good thing – reducing self-reinforcing effects and reminding us never to give in to the fantasy that we have found the one and only system that provides a final judgment about ourselves and others.

This is a collective action problem: can we keep up social norms according to which scores are not taken too seriously, and, most of all, are not read as expressing differential value of human beings qua human beings? We should remind ourselves, again and again, that what is being measured, and what creates high scores, is not necessarily what really matters. And that underneath all the comparisons, all the rankings, all the scores, what matters most is our common humanity.

Lisa Herzog

I work on various questions at the intersection of economics and philosophy, currently focussing on ethics and organizations and ethics in finance. Methodologically, I sit between many chairs, and I have come to like the variety. I think of my work as critical, empirically informed social philosophy.