
Artificial intelligence (AI) and machine learning (ML) have seen impressive developments in recent decades. Think of Google DeepMind’s program AlphaGo defeating Lee Sedol, one of the world’s strongest Go players, in 2016. A later version, AlphaZero, is remarkable because it relied on deep reinforcement learning to learn the game entirely by itself, from scratch: given only the rules, it improved through trial and error, playing millions of games against itself. Machine learning algorithms have a range of other practical applications, from image recognition in medical diagnostics to energy management.
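To give a flavor of what “learning by self-play” involves, here is a minimal Python sketch of such a training loop. It is purely illustrative: the class and function names are hypothetical stand-ins, not DeepMind’s actual code, and the learning step is deliberately left abstract.

```python
# Illustrative self-play reinforcement-learning loop
# (hypothetical names; not DeepMind's actual implementation).
import random

class Network:
    """Stand-in for a policy/value neural network."""

    def choose_move(self, state, legal_moves):
        # A real system would run a network-guided tree search here;
        # a random legal move keeps the sketch minimal.
        return random.choice(legal_moves)

    def update(self, game_record, winner):
        # A real system would adjust its weights toward moves that
        # led to wins and away from moves that led to losses.
        pass

def self_play_game(net, new_game):
    """Play one game of the network against itself."""
    game = new_game()  # assumed to encode only the rules of the game
    record = []
    while not game.is_over():
        move = net.choose_move(game.state(), game.legal_moves())
        record.append((game.state(), move))
        game.play(move)
    return record, game.winner()

def train(net, new_game, num_games=1_000_000):
    """Improve through trial and error over millions of self-play games."""
    for _ in range(num_games):
        record, winner = self_play_game(net, new_game)
        net.update(record, winner)  # learn only from the game's outcome
```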

Today, AI systems using ML are also increasingly adopted in public administration and business to help make difficult decisions, from individuals’ eligibility for social benefits and prison sentencing to content moderation on Facebook. They will also improve productivity and disrupt labor markets by fueling a new wave of automation. In my new article in Moral Philosophy and Politics, I argue for collaboration between governments and the private sector to tackle the ethical dimension of AI, but I maintain that governments’ hard regulation should take priority over businesses’ self-regulation.

A key insight from leading liberal philosophers of the late 20th century, such as John Rawls, is that the moral principles that guide our personal lives need not be the same as the political principles governing social life. The reason is that we live in pluralistic societies in which people have different moral beliefs, preferences, and interests. Authors in this tradition, such as Samuel Scheffler, argue for a division of moral labor between public institutions (e.g. governments) and private agents (e.g. businesses). On this view, governments are in charge of realizing justice and implementing institutional safeguards, while individuals and businesses mainly have to maintain just institutions and follow the rules (e.g. vote, pay their taxes, stop at red lights). For Scheffler (2005, p. 229), “the idea of a division of moral labour is best understood as the expression of a strategy for accommodating diverse values.” This allows us to lead our lives as we see fit and frees us from the constant worry of doing what is right, “secure in the knowledge that elsewhere in the social system the necessary corrections to preserve background justice are being made” (Rawls 1993, p. 269).

While authors like Liam Murphy (1999) and Onora O’Neill (2001) have raised objections to this view, I maintain that three central arguments for a division of moral labor remain convincing and suggest that governments are in principle (if not yet in practice) the best agents to realize a just society. First, governments are more legitimate than private agents because they can legitimize controversial decisions about justice through democratic decision-making procedures and can better secure publicity, transparency, and accountability. Second, they can implement safer coercive mechanisms to secure stable and reliable compliance. Third, they can better centralize information and coordination, which is indispensable for efficiency and for avoiding coordination failures in the pursuit of distributive justice. By contrast, private agents relying on suboptimal means to achieve the same goals would delay justice at best and sustain injustice at worst.

One aim of the paper is to show how this institutionalist argument applies to important cases in AI ethics, offering ethical guidance to policymakers and practitioners in the AI industry. It should also interest philosophers investigating the scope of the institutionalist view.

The legitimacy argument can be illustrated by the case of automated content moderation on social media. Facebook uses AI and ML to improve content moderation: every post or picture is filtered through deep neural networks that recognize its content and decide whether to show it or take it down if it constitutes terrorist propaganda, fake news, harassment, or hate speech. The problem is that removing content that is not widely agreed to be harmful could undermine free speech, especially given the power of social media platforms today and their critical role in amplifying or censoring speech at scale. Therefore, Facebook’s principles of content moderation should perhaps be selected through legitimate decision procedures. Beyond the need for legitimate principles, other requirements of justice are relevant to regulating AI, such as guarantees of stable compliance with privacy protections and effective coordination in tackling the distributive impact of AI. These arguments justify why government intervention is often the best means to reliably secure justice.
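To make concrete where the ethically loaded choices sit in such a system, here is a minimal, hypothetical Python sketch of an automated moderation decision rule. The classifier, harm categories, and threshold are illustrative assumptions, not Facebook’s actual pipeline.

```python
# Hypothetical sketch of an automated content-moderation decision
# (illustrative assumptions only; not Facebook's actual pipeline).

HARM_CATEGORIES = ["terrorist_propaganda", "fake_news",
                   "harassment", "hate_speech"]
REMOVAL_THRESHOLD = 0.9  # assumed policy choice set by the platform

def classify(post):
    """Stand-in for a trained neural classifier that scores a post
    against each harm category with a probability between 0 and 1."""
    # A real system would run the post through deep neural networks;
    # neutral scores keep this sketch runnable.
    return {category: 0.0 for category in HARM_CATEGORIES}

def moderate(post):
    """Decide whether to show a post or take it down."""
    scores = classify(post)
    flagged = {c: s for c, s in scores.items() if s >= REMOVAL_THRESHOLD}
    # The normative choices live in HARM_CATEGORIES and
    # REMOVAL_THRESHOLD, not in the network's arithmetic.
    return ("take_down", flagged) if flagged else ("show", {})

print(moderate("an example post"))  # -> ('show', {})
```

Even in this toy version, the contested judgments sit in the category list and the threshold; the legitimacy argument asks who should set them, and through what procedure.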

In non-ideal, real-world circumstances, however, when governments are unfair, absent, or ineffective, the nature of our ethical obligations may change. We can no longer be content with following the law, and private agents should take on more responsibility for realizing a just society. This is particularly relevant in the case of a new technology like AI because, until regulations are adequately updated, businesses seem to have no choice but to self-regulate.

Nevertheless, I believe that an institutionalist approach continues to provide ethical guidance to private agents in the AI industry even in cases of “justice failure”. The first-best strategy and top priority for businesses should be improving public institutions and regulations: by working to improve governments’ legitimacy where it is lacking, by supporting the rapid adoption of binding legislation to secure swift, industry-wide compliance, and by collaborating with lawmakers to provide the information needed to effectively update regulation. Only when this first-best strategy is not possible can the second-best strategy become permissible: using suboptimal means such as self-regulation to attempt to improve justice.

Thomas Ferretti

I am a Lecturer in Ethics and Sustainable Business at the University of Greenwich (UK). Before that, I taught for five years at the London School of Economics. I specialise in moral and political philosophy, business ethics, and AI ethics. I hold a Ph.D. in Philosophy from UCLouvain (Belgium, 2016). Read more: https://www.thomasferretti.com/