Scoring for loans, or: the Matthew effect in finance
Last year, we moved to a lovely but not particularly well-off area in Frankfurt. If we applied for a loan, this might mean that we would have to pay higher interest rates. Why? Because banks use scoring technologies to determine the creditworthiness of individuals. The data used for scoring include not only individual credit histories, but also data such as one’s postal code, which can serve as a proxy for socio-economic status. This raises serious issues of justice.
Sociologists Marion Fourcade and Kieran Healy have recently argued that in the US credit market, scoring technologies, while broadening access, exacerbate social stratification. In Germany, a court has decided that bank clients do not have a right to information about the formula used by the largest scoring agency, because it is considered a trade secret.
This issue raises a plethora of normative questions. These would not matter so much if most individuals, most of the time, could get by without having to take out loans. But for large parts of the population of Western countries, especially for individuals from lower social strata, this is impossible, since labour income and welfare payments often do not suffice to cover essential costs. Given the ways in which financial services can be connected to existential crises and situations of duress, this topic deserves scrutiny from a normative perspective. Of course there are deeper questions behind it, the most obvious one being the degree of economic inequality and insecurity that a just society can admit in the first place. I will bracket that question here and focus directly on two questions about scoring technologies.
1) Is the use of scoring technologies as such justified? The standard answer is that scoring expands access to formal financial services, which can be a good thing, for example for low-income households who would otherwise have to rely on loan sharks. Banks have a legitimate interest in determining the creditworthiness of loan applicants, and scoring seems a welcome innovation for doing so cheaply. The problem, however, is that scoring technologies use not only individual data, but also aggregate data that reflect group characteristics, and these characteristics obviously do not hold for every individual within the group. The danger of such statistical evaluations is that individuals who are already privileged (e.g. living in a rich area or having a “good” job) are treated better than individuals who are already disadvantaged. Advantaged individuals are also usually better able, because of greater “financial literacy”, to get advice on how to behave in order to build a good credit history, or on how to game the system (insofar as this is possible). The use of such data thus leads to a Matthew effect: the haves profit, the have-nots lose out.
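To make the mechanism concrete, here is a minimal sketch (in Python) of how a score that mixes individual data with an area-level proxy plays out. The function, the weights and the numbers are entirely hypothetical and invented for illustration; they are not drawn from any actual scoring formula.

```python
# Purely illustrative toy example -- invented weights, not any real agency's formula.

def toy_credit_score(on_time_payment_rate: float,
                     years_of_history: float,
                     area_default_rate: float) -> float:
    """Hypothetical score on a 0-100 scale."""
    # Individual component: payment behaviour and length of credit history.
    individual_part = 60 * on_time_payment_rate + 20 * min(years_of_history / 10, 1.0)
    # Group-level component: an area-wide default rate (e.g. keyed to the postal
    # code) that is identical for every resident, whatever their own record.
    area_part = 20 * (1.0 - area_default_rate)
    return individual_part + area_part

# Two applicants with identical individual records, differing only in postal code:
score_rich_area = toy_credit_score(0.98, 8, area_default_rate=0.02)  # ~94.4
score_poor_area = toy_credit_score(0.98, 8, area_default_rate=0.15)  # ~91.8
print(score_rich_area, score_poor_area)
```

In this toy setup, two applicants with the same payment record end up with different scores simply because of where they live; any threshold or interest-rate schedule applied to such a score then passes the group-level difference on to individuals.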
There are thus normative reasons both for and against the use of scoring technologies, and I have to admit that I don’t have a clear answer at the moment (one might need more empirical data to arrive at one). One possible solution might be to reduce the overall dependence on profit-maximizing banks, for example by having a banking system that also includes public and co-operative banks. But this is, admittedly, more a circumvention of the problem than an answer to the question of whether scoring as such can be justified.
2) Is secrecy with regard to credit scores justified? Here, I think the answer must be a clear “no”. Financial products have become too important for the lives of many individuals for the property rights of private scoring companies (and hence their right to trade secrets) to outweigh the interest citizens have in understanding the mechanisms behind them, and in seeing how their data are used to calculate their score. In addition, social scientists who study social inequality have a legitimate interest in understanding these mechanisms in detail. It must be possible to have public debates about these issues. Right now, the only control mechanism for scoring agencies seems to be the market, i.e. whether or not banks are willing to buy information from them. But one can think of all kinds of market failures in this area, from monopolies and quasi-monopolies to herding behaviour among banks.
One might object that without trade secrecy there would be no scoring agencies, and hence no scoring technologies to use (note that this only matters if one’s answer to the first question is positive). But it seems simply wrong that transparent scoring mechanisms could not work. After all, there is patent law for protecting intellectual property, and if this really does not suffice, one might consider public subsidies for scoring agencies. The only objection that would worry me is a scenario in which transparency about scoring would reinforce stigmatization and social exclusion. But the problem is precisely that this already seems to be going on – behind closed doors. We cannot change it unless we open these doors.