Can entry-level jobs be saved by virtuous AI?

Photo credit: RonaldCandonga at Pixabay.com, https://pixabay.com/photos/job-office-team-business-internet-5382501/

This is a guest post by Hollie Meehan (Lancaster University).

The CEO of AI company Anthropic has warned that up to 50% of entry-level jobs could be taken over by AI in the coming years. While reporters have pointed out that this could be an exaggeration designed to drive profits, it raises the question of where AI should fit into society. Answering this is a complicated matter, and one that I believe could benefit from virtue ethical considerations. I’ll focus on the entry-level job market to demonstrate how these considerations can play an important role in monitoring our use of AI and mitigating the potential fallout.

What is virtue ethics?

To start with, it’s important to be clear about which virtue ethical considerations I’m talking about. AI ethics has previously been based in principles; recently, however, virtue ethics has been seen as a viable alternative that could produce much more socially responsible, robust, and complex AI.

Virtue ethics focuses on human flourishing, a concept which comes from Aristotle (although he calls it ‘eudaimonia’) and has many interpretations throughout the virtue ethical canon. For Aristotle, to flourish is to actively possess and develop the virtues, and to gain the external goods necessary to do this, over the course of our lifetimes. Many scholars, including Aristotle and Rosalind Hursthouse, identify flourishing with happiness, and it’s what we get when we act virtuously. Action guidance is built into virtue concepts, like justice, rather than coming as an explicit set of rules for us to follow. So, to be ethical, AI and its usage have to embody certain virtues to facilitate flourishing. Some common ones, derived in part from Aristotle, are justice, care, and responsibility. Being a virtue ethicist means not only caring about people but having a particular focus on flourishing as an end goal, and on the virtues as the primary means of achieving it. This distinguishes virtue ethics from a simple focus on human welfare, and its flexibility and its attention to both means and ends separate it from deontological approaches like Kantianism, and from utilitarianism.

How is this better for AI?

Virtue ethics gives a more holistic approach to AI by looking at the systems that AI exists within, such as its wider environment and the motives behind its creation, rather than just the programming. It places the emphasis on humans rather than on the AI, which is what the European Commission recommends. Additionally, AI is a fast-changing technology, and the problems of next week might not be predictable now. Virtue ethics is a bottom-up approach that allows the AI to learn and adapt to new situations, whereas principles-based approaches might lag behind new developments.

How does this affect the workplace?

Anthropic’s warning conjures images of AI squeezing people out of jobs for increased profits. This raises many general concerns about people’s livelihoods, but more importantly (for present purposes) it raises unique problems for entry-level jobs. This is because of their special nature: those who apply for entry-level positions do so to break into the workforce; such roles are a first step onto the career ladder. The problems come from ideas of fairness towards younger generations breaking into the workplace, and from a responsibility to ensure that people have the means to live in a world of rapid cost of living increases.

But many of the tasks that entry-level workers perform can be done instead by AI. Aneesh Raman identified three key roles where this is the case: junior developers, paralegals, and customer service. 63% of executives on LinkedIn agreed that AI will eventually take over a portion of the “mundane tasks” that entry-level workers do. This poses a problem for those breaking into these careers.

For example, in journalism, entry-level jobs often involve collecting news items from other outlets, which AI is well-positioned to do instead at a significantly lower cost. Approaching this problem with virtue in mind allows us to create solutions that retain ideas of fairness and a focus on the people surrounding the AI, even if some jobs have to disappear. To give an example, journalism company Axios asks managers to explain why AI won’t be doing a job before approving it. Virtue ethical considerations might recommend instead asking why it would be better for humans if AI did the job, resulting in more accountability and a greater focus on humans. Moreover, Business Insider laid off 20% of its staff in May after announcing it would go “all-in on AI”. Rather than laying off staff, virtue ethics encourages us to consider using AI to facilitate human flourishing, perhaps by changing the nature of entry-level work to retain opportunities to enter journalism while improving efficiency where the company needs it.

The case for a virtue-ethics approach

Virtue ethics’ holistic approach means we think about how we integrate AI. It should strengthen people in the workplace rather than remove them, especially when the jobs threatened are entry-level. Perhaps even more worrying than the impact on graduates is the prospect of drastic change in the customer service sector. This would mean a larger impact on those from lower socioeconomic backgrounds, particularly those who have not been to university. Entry-level jobs provide a way into careers for those without privileged backgrounds. Without these roles, those who lack the connections that such backgrounds provide will find it far more difficult to break into high-powered careers.

Almost every use of AI will take somebody’s job, much like the drive for automation in production did. However, keeping ideas like fairness, practical wisdom, and care at the forefront can mitigate the problems caused by workplace AI. Virtue ethical considerations would ensure not only that the outcome is beneficial, but also that the way we go about achieving that outcome is. This would make AI integration more socially responsible and protect the economic and social power of those at risk.

Unfortunately, this blog post raises more questions than it answers. What should AI be doing? What responsibilities should we be giving to AI, and which should we keep for people? Virtue ethical considerations can help us with these complex problems by putting people first.


Hollie Meehan is a postgraduate student at Lancaster University. She is interested in virtue ethics, applied ethics, and social epistemology. She is hoping to pursue a PhD in the coming years, focussing on AI and virtue ethics.
