Artificial Intelligence and the Role of Political Philosophers
In a recent blog post, Paul Christiano estimates there is a 20% probability that most humans will die within 10 years of building powerful AI. This assessment is so bewildering that many of us will quickly dismiss it as a crazy prediction rooted in science fiction rather than reality. Unfortunately, it is not the fringe view of some apocalyptic dilettante. Paul Christiano previously ran the alignment team at OpenAI, best known as the creators of ChatGPT. And in a 2022 survey of researchers in artificial intelligence, machine learning, and computational neuroscience, about half of respondents estimated at least a 10% probability of an “extremely bad outcome (e.g. human extinction)” from advanced AI. The timeframe for advanced AI? It is, of course, impossible to make definitive claims, but Geoffrey Hinton, often called the “Godfather of AI”, now puts it at 20 years or less, suggesting that even a timeframe of 5 years should not be excluded. This post does not offer an elaborate philosophical argument. Instead, it aims to highlight the pressing need to recognise the most salient issue humanity will face in the near future, namely the rapid development of ever-more powerful AI, and to tentatively explore what part, if any, political philosophers should play in all of this.
There is a popular joke, traceable in its original version to the 13th-century Muslim folklore character Nasreddin. Here is a more modern version:
A police officer sees a drunken man intently searching the ground near a lamppost and asks him the goal of his quest. The inebriate replies that he is looking for his car keys, and the officer helps for a few minutes without success. He then asks whether the man is certain that he dropped the keys near the lamppost.
“No,” is the reply, “I lost the keys somewhere across the street.”
“Why look here?” asks the surprised and irritated officer.
“The light is much better here,” the intoxicated man responds with aplomb.[1]
Political philosophy (and academia in general) is often like this. Our professional activity is rarely driven by general usefulness and much more frequently by institutional incentives, since these matter for getting jobs, promotions, funding, and prestige. In most academic institutions, incentives unambiguously point to one goal: publish as many articles as possible in high-quality academic journals. While this system does have its benefits, one of its downsides is that it encourages academics to become hyper-specialized in a few narrow topics, while disengaging from many other issues, even when those issues become important for their work [2]. “The light is much better here”, on the terrain we are already familiar with; looking somewhere else would be inefficient. There is also another reason why we tend to stay in the streetlight. Political philosophers aren’t usually fond of uncertainty. We frequently design radically simplified thought experiments not only to isolate morally relevant features, but also to eliminate the uncertainty that complicates moral thought in real-life cases. There’s more uncertainty in the darkness, so we’d better avoid it.
Our societies are changing, both fundamentally and at an unprecedented, and to some extent unpredictable, pace, as artificial intelligence becomes more sophisticated and is deployed more extensively. Can we, as political philosophers, go on with business as usual in such times, or do we have a duty to move out of the streetlight and engage with the difficult (and often dark) questions raised by the rise of AI?
To start with, if you look at many of the statements made by leading AI experts, you will no doubt notice that there is either an expectation, or at least an urgent call, for broad public conversations on the governance of AI to take place, coupled with the development of effective regulatory frameworks. As Yoshua Bengio – a pioneer of the deep neural networks on which models such as ChatGPT are based – states: “We need to stimulate the efforts of researchers, especially in the social sciences and humanities, because the solutions involve not only a technical, computational aspect, but also – and especially – social and human considerations” [3]. The tone of some of the remarks made in this vein by AI experts may strike us as a bit naïve and overly optimistic about our collective capacity to have meaningful public debates and to regulate effectively. But this is precisely why we should heed such calls and publicly engage with these questions, moving beyond our ordinary focus on scholarly publications that circulate only amongst ourselves. This can be done in many ways. To name just a few: by contributing to regulatory frameworks wherever possible and publicly pushing for policy proposals where these are lagging, including through non-regulatory instruments such as public funding for research on AI safety or on the social, political and economic impact of AI; by using insights from political philosophy to contribute directly to problems faced by AI researchers, such as the alignment problem [4]; or by helping shape public discourse on this topic from our angle of expertise, through trade books, op-eds, podcasts, news programs and other forums for debate.
Surely, if there is a duty of public engagement on the topic of artificial intelligence, it is not unique to political philosophers. But a few considerations put political philosophers in a prime position to discharge it and, perhaps, make this duty weightier than in some other cases. First, our broadly construed area of expertise is particularly salient for these discussions: fundamentally, scholarly work in political philosophy is about designing institutions, whether it is more applied, e.g. providing specific regulatory prescriptions, or more abstract, e.g. clarifying the values we should aim for in our regulatory frameworks. Second, while it is true that we often refrain from making the all-things-considered judgements required for institutional design, we are – I believe – especially proficient at identifying objectionable practices, states of affairs and institutional arrangements, and this critical approach is valuable in policy-making, where uncertainty plays a major role. Third, political philosophers (at least those who hold stable academic positions) are often publicly funded and, even when they are not, are ordinarily sufficiently unencumbered by economic pressure to offer an independent perspective counterbalancing that of governmental officials and private sector actors, who may have vested interests in pushing certain narratives regarding the (de)regulation of AI.
Another important question to address is what kinds of discussions political philosophers can meaningfully contribute to. I think we can distinguish between three levels, which can be roughly seen as having both a temporal dimension and a substantive one. The first is the policy level, which is likely to be most pressing in the short term, and which refers to the myriad ways in which AI either already is, or is soon likely to be, affecting our lives in significant ways: from racial discrimination in predictive policing, to failures of algorithmic governance in social welfare, to the challenges posed by ChatGPT and other LLMs for education, and many others. Many political philosophers are already quite well positioned to approach such issues, as these fall within their area of expertise; the risk is rather a failure to appropriately acknowledge the fundamental way in which these issues will be influenced by AI, together with the dangers this entails.
The second level, which is to some extent already taking shape but will more likely become visible in the medium term, is systemic: how will our economic systems evolve in a world where a considerable range of jobs becomes obsolete because AI models perform them better and more cheaply? If current policy trends continue, how will societies cope not only with quickly rising unemployment but also with rapidly growing economic inequality, as the wealth created by automation is likely to flow to a relatively small group of entrepreneurs? How robust will democratic systems be in the face of AI-enabled disinformation and the AI-enhanced surveillance and military capabilities likely to be developed in the future? Dystopian scenarios need not come to fruition, but failing to recognise their potential, or worse, dismissing them as works of fantasy, precisely at a time when we need to think seriously about systemic resilience, will do nothing to preclude them.
Finally, the third level, which can be called existential, and which in the chronological structure adopted here is likely to become relevant in the longer term (though only in a relative sense, compared to the first two), is the one with which this piece started. I won’t elaborate much on it, beyond highlighting once again that the threat posed by advanced AI to the survival of the human species is, as difficult or upsetting as this might be to conceive, widely acknowledged as real by experts who are intimately involved in AI research [5]. I disagree, however, with Mathias Risse’s (2023, p. xxiv) claim that “[t]he task for political theory, then, is to think about the topics that will likely come our way, distinguish among the various timeframes (such as Life 2.0 and Life 3.0) in which they might do so, and make proposals for how a democratic society should prepare itself to deal with the changes in the technology domain that it might eventually have to address” [6]. The stakes are simply too high for political theorists to set aside considerations regarding existential risk, even if they are “too bombastic in scope to allow for conclusive validation”. We should, by contrast, be prepared to engage with it head-on, in both scholarship and the public forum, in spite of the many difficulties which ground our reluctance to approach it. And there is no reason to think that taking existential risks seriously now detracts in any way from also addressing more specific policy and systemic risks.
I do not mean to argue that every political philosopher should abandon whatever topic they are currently working on and shift all efforts to understanding AI and its effects. Many other issues require our attention, and there is much meaningful work to do that is unrelated to AI. And if there is such a duty as the one outlined above, perhaps it binds us at a collective, rather than individual, level. My aim is rather to emphasize that while there are many institutional, and perhaps even psychological, incentives for us to steer clear of the difficult problems raised by artificial intelligence, we would be shirking our moral responsibility if we did not engage with them at least in part. On these issues above all others, political philosophers should step out into the public arena, and we should channel our efforts, knowledge, abilities, and resources as best we can in the various struggles to come. If some of those struggles are lost, they may literally be our last.
_
[1] https://quoteinvestigator.com/2013/04/11/better-light/.
[2] For this and other insightful discussions on some of the problems of contemporary political philosophy, see Mark Reiff’s (2018) “Twenty-one statements about political philosophy: an introduction and commentary on the state of the profession”.
[3] https://yoshuabengio.org/2023/04/05/slowing-down-development-of-ai-systems-passing-the-turing-test/.
[4] For some works in this vein see e.g. Iason Gabriel’s (2020) “Artificial Intelligence, Values, and Alignment” or the recent paper by Weidinger et al. (2023), “Using the Veil of Ignorance to align AI systems with principles of justice”.
[5] See Max Tegmark’s (2023) “The ‘Don’t Look Up’ Thinking That Could Doom Us With AI” for an overview of the issue.
[6] Mathias Risse (2023), Political Theory of the Digital Age: Where Artificial Intelligence Might Take Us. As far as I know, this is the first book-length approach to artificial intelligence from a political theory perspective, and it is an excellent introduction to some of the topics mentioned here. Several articles in a 2022 Daedalus special issue should also be of interest to political philosophers.