Changes in moral norms, practices and attitudes are partly driven by technological developments, a phenomenon called “technology-induced moral change”. Such change can be profoundly disruptive, in the sense that it unsettles human practices at a fundamental level, including the moral concepts embedded in them. In a recent paper, Katharina Bauer and I argue that such changes and disruptions require the development of what we call “technomoral resilience”, and that moral education should aim at fostering this complex capacity. We illustrate the concept with the example of human caregivers confronted with the introduction of care robots in elderly care. Our argument does not entail that the cultivation of moral resilience is sufficient for dealing with current challenges in elderly care, or in healthcare more generally. Structural changes such as better pay for care workers are urgently needed, and it is not our intention to place the burden of ensuring the continuous provision of good care entirely on individuals. Rather, we see the development of technomoral resilience as contributing to a differentiated and balanced response to such change, complementing the necessary reforms at the political and institutional level.

We propose a procedural understanding of resilience: it involves a movement from a state of stability through destabilisation and restabilisation to a new, modified stability. The concept applies at the individual as well as the systemic (or practice) level. At the systemic level, technomoral resilience is the capacity to regain stability after destabilisation, which requires a certain degree of flexibility. At the individual level, it is the capacity to cope with moral disturbances prompted by technological developments without losing one’s identity as a moral agent.

Care robots are seen as a crucial part of the solution to the shortage of care workers in an ageing society. Robots already in use in care settings include robots for lifting patients (“RIBA”), robots that facilitate communication with family members (“Double”) and multifunctional robots such as “Zora”, a small humanoid social robot that can give instructions for daily routines, foster patients’ mobility, or pick up trash in care facilities. The first (potential) technology-induced moral change that we address is a change in what it means for care to be good care. The conviction is widespread that good care must be provided by human beings, not by machines, which are associated with coldness. How, then, can care that is partly provided by care robots be good care? Answering this question requires “techno-moral imagination”, which is part of technomoral resilience.

The second change concerns a new distribution of roles and responsibilities. Upon entering a sociotechnical network, the care robot will “alter the distribution of responsibilities and roles within the network as well as the manner in which the practice takes place”. These changes are likely to give rise to confusion and uncertainty. The third change is a new self-understanding on the part of human caregivers. For instance, a caregiver might come to understand themselves as jointly responsible, together with the robots, for the well-being of the elderly, whereas in the past they had understood themselves as bearing this responsibility alone. This transition can be expected to be accompanied by uncertainty and feelings of distress.

How would we describe a nurse who has developed technomoral resilience? Imagine a caregiver who reflects critically on the introduction of an autonomous lifting robot such as RIBA. The nurse neither simply refuses to use the robot or quits their job, nor uncritically embraces the new technological support. Rather, in interaction with the robot and together with other nurses as well as the elderly, they set out to explore how the robot can be used in a way that contributes to the realisation of values such as trust and privacy, how responsibilities are best redistributed, which features of the technology should be improved, and so on. Far from being purely theoretical, this process primarily takes the form of trial and adjustment: giving the robot a particular task and modifying that task in light of how well the robot fulfilled it, including how its performance affected the elderly person, the nurse and the relationship between the two.

Technomoral resilience enables people both to cope with change and to co-shape it. We conclude our paper by suggesting that moral education foster technomoral resilience by focusing on a triangle of capacities: 1) moral imagination, 2) a capacity for critical reflection, and 3) a capacity for maintaining one’s moral agency in the face of disturbances. One way of facilitating the development of moral imagination is to cultivate curiosity and teach techniques of imagining and playing. This can be done using concrete scenarios of the application of emerging technologies and their potential impact on morality. Such scenarios can take the form of narratives or of building models, for instance models of technologised nursing homes tailored to the needs of the elderly. We can also prepare and equip ourselves for future developments by learning within a simulated, imagined future scenario, for instance in serious video games.

I am an Assistant Professor in Philosophy at the University of Twente in the Netherlands. Previously, I held research and teaching positions at the European Inter-University Centre for Human Rights and Democratisation in Venice, Maastricht University, Utrecht University and Eindhoven University of Technology. I hold a PhD from the European University Institute in Florence. My husband and I live in Baarn, a village in the province of Utrecht, together with our two daughters Philine and Romy.