It has been over a decade since behavioral insights were first incorporated into policy making through so-called nudge units. Nudge proponents suggest that by altering choice environments to steer individuals’ decision-making, triggering their automatic psychological processes, we can do much to improve their wellbeing or promote important pro-social goals. For instance, we can use subtle visual cues to make consumers eat healthier, careful wording to minimize bad financial choices, or default effects to ensure that donated organs are never in short supply.
Author: Viktor Ivanković
Individuals are notoriously self-serving in assessing their competences, whether in absolute terms or in comparison to others. We are likely to think we are more sociable than others, better than most at judging character and sincerity, or above-average performers in our workplaces. We often overestimate our levels of knowledge when we objectively know very little. In fact, this bias seems most potent when we are oblivious to a matter altogether. At such times, we may act quite unrestrained in peddling the most absurd notions to others as fact. Our virtual lives of late, cluttered with half-baked claims about the pandemic, offer plentiful evidence of this. What is even more disheartening is that, as the famous Dunning-Kruger effect teaches us, the more incompetent we are, the less likely we are to become aware of our own incompetence. Individuals often fall victim to this effect regardless of their intelligence, their social grouping, or their success at anticipating and counteracting self-serving bias in some other area.
Despite being familiar in some form for several decades, the Dunning-Kruger effect has not seriously grabbed the attention of normative philosophers. Only epistemologists have considered how it may affect epistemic obligations, for instance, how we should act in circumstances of assumed peer disagreement (Wiland, 2016). We have hardly considered the kinds of moral obligations we might have as individuals, or how we ought to shape public policies and institutions, in the face of widespread Dunning-Kruger effects.
Consider, for instance, a highly educated person deciding whether to go into politics and compete for public office. Imagine that this person is educated broadly enough to offer meaningful contributions across a range of public concerns. “But alas”, she reflects, “I don’t know enough about all the relevant laws, or how to draw up or revise a budget. I wasn’t trained for public administration. So I’m hardly competent to take up such a job.” But the educated person fails to consider that if she decides not to pursue the job, a far less competent person, one with far fewer scruples of the aforementioned kind, may attempt to take it up instead. Apart from merely considering whether she is qualified, she must assess whether the Dunning-Kruger effect will generate unwavering confidence in candidates who are far less qualified. So in the face of a lurking threat of social harms arising from incompetence, is the educated person obliged to overcome her reservations?
A further complication arises from the flip side of the Dunning-Kruger effect: in some cases, the truly competent tend to second-guess their competences, even when the area of competence is much more specific than in the previous example. Bertrand Russell noticed both sides of these self-assessment difficulties when he famously stated that “in the modern world the stupid are cocksure while the intelligent are full of doubt” (1933). Simply put, the awareness of the competent that there is still much they don’t know saps their confidence, whereas the incompetent are unperturbed in their lack of awareness of just how incompetent they are. But in that case, the obligation to overcome their reservations may involve a psychological hurdle for the competent that makes it particularly demanding.
Assigning this obligation to the competent faces two other crucial difficulties. First, the main lesson of the Dunning-Kruger effect is that those who believe they are competent may very easily turn out to be incompetent. Thus, it is quite possible that those taking up the obligation to save us from the incompetent, with the best of intentions, are themselves incompetent. When no one can vouch for their competence, self-assessors will often overestimate themselves. Committing ourselves to beating the incompetent therefore runs at least some risk of enabling our own incompetence.
Second, even if we could safely and reliably establish, with the help of others, that we are truly competent, a moral question remains: how much should we be asked to do? How far-reaching is our obligation to clean up after the incompetent, or to prevent them from ever making a mess? Surely, if we are competent, we are allowed to appeal to an “agent-centered prerogative […] a modest right of self-interest” (Cohen, 1996) not to invest most of our time, as in the case of taking up a public seat.
Whether, and how, moral complications arising from the Dunning-Kruger effect should affect the decisions of individuals remains an open question that requires serious thought. However, we might think that Dunning-Kruger effects are best neutralized at various levels of institutional structure. Education, for instance, might be attuned to help the most competent overcome their imposter syndromes, steering and reassuring them toward positions of great social importance, and encouraging them to branch out of their epistemic comfort zones. This can, in turn, help the competent overcome their psychological barriers when taking up individual moral obligations.
If, however, education fails, the Dunning-Kruger effect stands out as an important consideration in setting up our electoral and governmental institutions. There is no doubt that the effect influences both the incompetent and the competent in their voting behavior, as well as in their decisions to pursue positions of leadership. An institutional arrangement that in some way prevents the incompetent from hijacking important public decisions may very well be the last frontier at which self-serving biases are to be repelled.
It is often said that the main task of teachers is to foster learning. But what kind of learning? What knowledge can we hope to attain through such learning? And what kinds of people should children aspire to become in the process? We imagine that fostering learning in the right way would ensure not only that adults lead flourishing lives, but also that they can help others acquire knowledge. As epistemologists show, some intellectual virtues are other-regarding, meaning that individuals can and should affect others in their knowledge acquisition and intellectual flourishing; such, for instance, are the drive to discover socially relevant findings, and honesty and integrity in communicating information (Turri and Alfano, 2017).
In most Western democracies nowadays, pre-election periods are littered with polls. Some polls, conducted by polling organizations, are sophisticated and more likely to be challenged for their accuracy (as are the media houses that publish them). Other polls are simple; for instance, a news website may ask its readers who they would vote for if the election happened on that day. Polls represent a simple and cheap commodity for the commercial news media to offer to their audiences. As Jesper Strömbäck notes, polls generate fresh and often dramatic news items that are easy for journalists to analyze and easy for audiences to digest.
But how do polls, and particularly pre-election polls, fit into a normative vision of democracy? Do they enrich our democratic practices and institutions, or do they undercut democratic ideals? Despite being a paradigmatically divisive issue (49% of countries restrict the publishing of pre-election polls in some capacity, as Petersen notes), pre-election polls have attracted little interest from democratic theorists. Reaching a verdict on whether they are normatively compatible with democracy has been left almost entirely to political scientists and journalists.
There is an argument, appearing in both the higher and lower tiers of public debate, that goes something like this:
You can raise as many arguments as you want about solving Problem A (say, adoption rights for gay couples), but what you’re missing is that we should be dealing instead with the more prominent Problem B (say, how the budget is being balanced). It is there that we need to place our focus.
A first-semester philosophy student will easily recognize the red herring fallacy here. The proponent of the argument is not addressing the points presumably raised about how Problem A should be solved, but sidesteps into a different subject altogether. Some further claims might be made by the proponent that Problem A is being used as a smoke screen for Problem B, and that to deal with Problem A itself indicates a certain susceptibility of those involved to being distracted by ‘the powers that be’.
In an important sense, the philosopher’s annoyance is well warranted. The particularities of Problem B hardly bear any relevance to Problem A. But in at least some cases, I want to suggest, the ‘red herring’ could stand for a legitimate concern about how we are distributing our deliberative forum. The claim raised might not be an attempt to solve Problem A, but rather a claim that another problem, Problem B, requires attention and is being overlooked without justification.
Public officials are often called to resign their posts if they commit grave moral or legal wrongs as private persons. Consider a few cases. It is discovered that a Minister of Education had plagiarized multiple parts of his academic work before taking up his position in the government. Another high official is caught expressing bigoted ideas against ethnic and religious minorities in personal Facebook comments and posts. A county prefect is charged with beating his wife. Should such acts call for resignations? Can they ground the decisions of political bosses to sack these individuals, or justify the general public in exerting pressure on the government to drive them out of office?
Most moral objections to nudging, the practice of altering choice environments in order to subconsciously steer behavior, have been grounded in the value of personal autonomy. The autonomy of the nudged is claimed to be undermined because the control individuals have over their evaluations, deliberations, and decision-making is effectively reduced, if not fully bypassed. Moreover, nudging seems autonomy-threatening because the architects look to supplant the wills of their targets with their own.
When nudging was first discussed by its main proponents, Thaler and Sunstein, in their 2008 book Nudge, it was proposed as an innovative supplement to government policy making. In response, most of the autonomy-related objections focused on the paternalism of governments carrying out the nudging. Surprisingly, few have paid much attention to similar forms of influence in the market setting: behavioral techniques used in advertising, pricing, and other market interactions. I claim that the standard autonomy-based objections to nudging raise more worries about current market practices than about emerging and prospective policy practices.