a blog about philosophy in public affairs

Author: Julia Hermann

I am an Assistant Professor in Philosophy at the University of Twente in the Netherlands. Previously, I held research and teaching positions at the European Inter-University Centre for Human Rights and Democratisation in Venice, Maastricht University, Utrecht University and Eindhoven University of Technology. I hold a PhD from the European University Institute in Florence. My husband and I live in Baarn, a village in the province of Utrecht, together with our two daughters Philine and Romy.

The Need for Technomoral Resilience

Changes in moral norms, practices and attitudes are partly driven by technological developments, a phenomenon called “technology-induced moral change”. Such change can be profoundly disruptive, in the sense that it disrupts human practices, and even moral concepts, at a fundamental level. In a recent paper, Katharina Bauer and I argue that such changes and disruptions require the development of what we call “technomoral resilience”, and that moral education should aim at fostering this complex capacity. We illustrate our concept of technomoral resilience by means of the example of human caregivers confronted with the introduction of care robots in elderly care. Our argument does not entail that the cultivation of technomoral resilience is sufficient for dealing with current challenges in elderly care and healthcare more generally. Structural changes such as better pay for care workers are urgently called for, and it is not our intention to place the burden of ensuring the continuous provision of good care entirely on individuals. We see the development of technomoral resilience as contributing to a differentiated and balanced response to the change that occurs, thus complementing the necessary changes at the political and institutional level.

We propose a procedural understanding of resilience: it involves a movement from a state of stability through destabilisation and restabilisation to a new and modified stability. The concept applies to the individual as well as to the systemic (or practice) level. At the systemic level, technomoral resilience is the capacity to regain stability after destabilisation, which requires a certain degree of flexibility. At the individual level, it is the capacity to cope with moral disturbances prompted by technological developments without losing one’s identity as a moral agent.

Care robots are seen as a crucial part of the solution to the problem of a shortage of care workers in an ageing society. Robots that are already in use in care settings include robots for lifting patients (“RIBA”), robots that facilitate communication with family members (“Double”) and multifunctional robots like “Zora”, a small humanoid social robot that can give instructions for daily routines, foster patients’ mobility, or pick up trash in care facilities. The first (potential) technology-induced moral change that we address is a change in what it means for care to be good care. The conviction is widespread that good care must be care provided by human beings, not by machines, which are associated with coldness. How can care that is partly provided by care robots be good care? Answering this question requires “techno-moral imagination”, which is part of technomoral resilience.

The second change concerns a new distribution of roles and responsibilities. The care robot will, upon entering a sociotechnical network, “alter the distribution of responsibilities and roles within the network as well as the manner in which the practice takes place”. These changes are likely to give rise to confusion and uncertainty. The third change is a new self-understanding of human caregivers. For instance, a caregiver might come to understand themselves as being, together with the robots, jointly responsible for the well-being of the elderly, whereas in the past they had understood themselves as bearing this responsibility all by themselves. This transition can be expected to be accompanied by uncertainty and feelings of distress.

How would we describe a nurse who has developed technomoral resilience? Imagine a caregiver who reflects critically upon the introduction of an autonomous robot for lifting, such as the RIBA robot. The nurse doesn’t simply refuse to make use of the robot or quit their job, nor do they uncritically embrace the new technological support. Rather, in interaction with the robot and together with other nurses as well as the elderly, they try to explore: how the robot can be used in a way that contributes to the realisation of values such as trust and privacy, how to best redistribute responsibilities, what features of the technology should be improved, and so on. Far from being purely theoretical, this process takes primarily the form of trying something out by giving the robot a particular task and modifying that task in the light of how well the robot fulfilled it, including how the fulfilment of the task by the robot affected the elderly person, the nurse and the relationship between the two.

Technomoral resilience enables people both to cope with change and to co-shape it. We conclude our paper by suggesting that moral education foster technomoral resilience by focusing on a triangle of capacities: 1) moral imagination, 2) a capacity for critical reflection, and 3) a capacity for maintaining one’s moral agency in the face of disturbances. One way of facilitating the development of moral imagination is to cultivate curiosity and teach techniques of imagining and playing. This can be done by using concrete scenarios of the application of emerging technologies and their potential impact on morality. It can take the form of narratives or of building models, for instance of technologised nursing homes tailored to the needs of the elderly. We can prepare and equip ourselves for future developments by learning within a simulated, imagined future scenario, for instance in serious video games.

A Conversation with Philip Kitcher about Moral Methodology and Moral Progress

On Monday evening, I talked to Philip Kitcher about his novel account of moral progress, which he developed in his Munich Lectures in Ethics. Those lectures have just been published by Oxford University Press, together with comments from Amia Srinivasan, Susan Neiman and Rahel Jaeggi. In the Munich Lectures, Kitcher takes up the “Deweyan project of making moral progress more systematic and sure-footed”. He seeks to gain a better understanding of what moral progress is by looking at cases from history. He then proposes a methodology for identifying morally problematic situations and coming up with justified solutions to those problems. It is a methodology for moral and ethical practice (not theory!), and it manifests the hope that human beings are able to attain moral progress – even with respect to the highly complex moral problems of our times. In our conversation, we talked about the open-endedness of the moral project, the collective nature of moral insight, the kinds of conversations that Kitcher believes are needed to deal with the moral problems that humanity is facing today, and the role of technology in the moral project.

COVID-19 and Technomoral Change

According to the emerging paradigm of technomoral change, technology and morality co-shape each other. It is not only the case that morality influences the development of technologies; the reverse also holds: technologies affect moral norms and values. Tsjalling Swierstra compares the relationship of technology and morality to a special type of marriage: one that does not allow for divorce. Has the still-ongoing pandemic led to instances of technomoral change, or is it likely to lead to them in the future? One of the many effects of the pandemic is the acceleration of processes of digitalisation in many parts of the world. The widespread use of digital technologies in contexts such as work, education, and private life can be said to have socially disruptive effects. It deeply affects how people experience their relations to others, how they connect to their families, friends and colleagues, and the meaning that direct personal encounters have for them. Does the pandemic also have morally disruptive effects? By changing social interactions and relationships, it might indirectly affect moral agency and how the competent moral agent is conceived of. As promising as the prospect of replacing many traditional business meetings, international conferences and team meetings with online meetings might seem with regard to addressing the climate crisis, it is equally worrisome with a view to the development and exercise of social and moral capacities.

An Ethical Code for Citizen Science?

Citizen science is gaining popularity. The term refers to a form of scientific research that is carried out entirely or in part by citizens who are not professional scientists. These citizens contribute to research projects by, for example, reporting observations of plants and birds, playing computer games or measuring their own blood sugar level. “Citizen scientists” (also referred to as, for instance, “participants”, “volunteers”, “uncredentialed researchers”, or “community researchers”) can be involved in several ways and at any stage of a research project. They often collect data, for instance about air or water quality, and sometimes they are also involved in the analysis of those data. In some cases, citizens initiate and/or lead research projects, but in most of the projects we read about in academic journals, professional scientists take the lead and involve citizens at some stage(s) of the research. Some interpret the rise of citizen science as a development towards the democratisation of science and the empowerment of citizens. In this post, I address some ethical worries regarding citizen science initiatives, relate them to the choice of terminology, and raise the question of whether we need an ethical code for citizen science.

How will the coronavirus affect us – as individuals and as a society?

Schools are closed. Flights cancelled. Highways and trains deserted. People are asked to minimise social contact. At first, the coronavirus appeared to be not much different from an ordinary flu. But then it spread in almost no time to 100 countries around the world. Initially, the measures taken by the Italian government seemed extreme, perhaps exaggerated; now several countries are following the Italian example, including Belgium, Germany, and the Netherlands. The most urgent ethical issue raised by the coronavirus will be the allocation of limited resources, including hospital space. There are also concerns of global justice, given the huge differences between states with regard to their ability to deal with the virus. Despite the fatal effects of this pandemic, we also hear voices that view it as a chance and express the hope that it might bring about some positive changes in society. How will COVID-19 affect us – as individuals and as a society? Will it make us more egoistic (“My family first!”) or will it bring us closer together, making us realise how much we depend on each other? Can we expect anything positive from this crisis, and what could that be?

The Potential Mediating Role of the Artificial Womb

On May 6th, I published a post about the artificial womb and its potential role in promoting gender justice. I keep thinking about this technology, and since there is more and more ethical discussion about it, I want to address it again, this time from the point of view of mediation theory and in an attempt to anticipate the potential mediating role of this technology. According to mediation theory, technology mediates how humans perceive and act in the world. The Dutch philosopher Peter-Paul Verbeek has extended this post-phenomenological approach, which was developed by Don Ihde, to the realm of ethics. Verbeek sees technology as being intrinsically involved in moral decision-making. Technology mediates our moral perceptions and actions. Moral agency is not something exclusively human, but a “hybrid affair”. Moral actions and decisions “take place in complex and intricate connections between humans and things”. Verbeek illustrates technology’s mediating role by means of the example of obstetric ultrasound. I shall apply the idea of the technological mediation of morality to the artificial womb and discuss some ways in which that technology could play a mediating role in morality.

More Gender Justice Through the Artificial Womb?

In 2017, US scientists succeeded in transferring lamb foetuses to what comes very close to an artificial womb: a “biobag”. All of the lambs emerged from the biobag healthy. The scientists believe that about two years from now it will be possible to transfer preterm human babies to an artificial womb, in which they have greater chances of surviving and developing without a handicap than in current neonatal intensive care. At this point in time, developers of the technology, such as Guid Oei, gynaecologist and professor at Eindhoven University of Technology, see it as a possible solution to the problem of neonatal mortality and disability due to preterm birth. They do not envisage uses of it that go far beyond that. Philosophers and ethicists, however, have started thinking about the use of artificial womb technology for very different purposes, such as terminating a risky pregnancy without having to kill the foetus, or strengthening the freedom of women. If we consider such further-reaching uses, new ethical issues arise, including whether artificial womb technology could promote gender justice. Should we embrace this technology as a means towards greater equality between men and women?

Technological Justice


At least in the developed world, technology pervades all aspects of human life, and its influence is growing constantly. Major technological challenges include automation, digitalisation, 3D printing, and Artificial Intelligence. Does this create a need for a concept of “technological justice”? If we think about what “technological justice” could mean, we see that the concept is closely connected to other concepts of justice. Whether we are talking about social justice, environmental justice, global justice, intergenerational justice, or gender justice – at some point we will always refer to technology. It looks as if a concept of technological justice could be useful for drawing special attention to technology’s massive impact on human lives, although the respective problems of justice can also be captured by more familiar concepts.

Moral progress in beliefs and practices

Abraham Lincoln said: “If slavery is not wrong, then nothing is wrong”. Similarly, we could say: “If the abolition of slavery is not an instance of moral progress, then nothing is an instance of moral progress.” The abolition of slavery is the favourite example of philosophers who write about the topic of moral progress. While the existence and the possibility of moral progress are contested, the view that if there were such a thing as moral progress, the abolition of slavery would be an instance of it is not. (By the way, I fully acknowledge that slavery still exists, especially in new forms, which are in some respects even worse than the old ones. But this doesn’t change the fact that the slave trade that existed for centuries is now illegal in every country in the world.) Other popular examples of moral progress include the development of a human rights regime, the emancipation of women and the abolition of foot binding. In a previous post, I argued that moral progress is not impossible and cited evolutionary considerations. In this post, I challenge Michele Moody-Adams’ view of moral progress in social practices as the realisation of previously gained moral insights.

Ought non-mobile citizens of the EU to be compensated for the costs of mobility?

In his kick-off contribution to the latest EUDO-Forum debate, Maurizio Ferrera engages with a challenging question raised by Rainer Bauböck in his State of the Union Address (5 May 2017, Florence): can the integrative functions of EU citizenship be enhanced, and how? Ferrera identifies flaws of the EU citizenship construct, focusing on its social dimension, and concludes with “some modest proposals for ‘adding stuff’ to the EU citizenship container”. His proposals include compensating non-mobile EU citizens for the negative economic and social externalities of intra-EU mobility, i.e., of the mobility of workers within the EU. While I agree with much of what Ferrera says, I am unconvinced by this particular proposal. The argument presented here is a short version of the one published on the EUDO website.

