a blog about philosophy in public affairs

Author: Julia Hermann

I am an Assistant Professor in Philosophy at the University of Twente in the Netherlands. Previously I have held research and teaching positions at the European Inter-University Centre for Human Rights and Democratisation in Venice, Maastricht University, Utrecht University and Eindhoven University of Technology. I hold a PhD from the European University Institute in Florence. My husband and I live in Baarn, a village in the province of Utrecht, together with our two daughters Philine and Romy.

The Disruption of Human Reproduction

This is already my third post about ectogestative technology, better known as “artificial womb technology”. In the first post, I explored the idea that this technology could potentially advance gender justice; in the second, I approached it from the perspective of post-phenomenology. In this third post, I look at the technology as an example of a socially disruptive technology. Ongoing research in the philosophy of technology investigates the ways in which 21st-century technologies such as artificial intelligence, synthetic biology, gene-editing technologies, and climate-engineering technologies affect “deeply held beliefs, values, social norms, and basic human capacities”, as well as “basic human practices, fundamental concepts, [and] ontological distinctions”. These technologies deeply affect us as human beings, our relationship to other parts of nature such as non-human animals and plants, and the societies we live in. In this post, I sketch the potentially disruptive effects of ectogestative technology on practices, norms, and concepts related to expecting and having children.

Driving for Values

Smart cities are full of sensors and collect large amounts of data. One reason for doing so is to get real-time information about traffic flows. A next step is to steer traffic in a way that contributes to the realisation of values such as safety and sustainability. Think of steering cars around schools to improve the safety of children, or of keeping certain areas car-free to improve air quality. Is it legitimate for cities to nudge their citizens to make moral choices when participating in traffic? Would a system that limits a person’s options for the sake of improving quality of life in the city come at the cost of restricting that person’s autonomy? In a transdisciplinary research project, we (i.e., members of the ESDiT programme and the Responsible Sensing Lab) explored how a navigation app that suggests routes based on shared values would affect users’ experiences of autonomy. We did so by letting people try out speculative prototypes of such an app on a mobile phone and asking them questions about how they experienced its different features. Through several interviews and a focus group, we gained insights into the conditions under which people find such an app acceptable and into the features that increase or decrease their feeling of autonomy.

The Need for Technomoral Resilience

Changes in moral norms, practices and attitudes are partly driven by technological developments, a phenomenon called “technology-induced moral change”. Such change can be profoundly disruptive, meaning that it disrupts human practices at a fundamental level, including moral concepts. In a recent paper, Katharina Bauer and I argue that such changes and disruptions require the development of what we call “technomoral resilience”, and that moral education should aim at fostering this complex capacity. We illustrate our concept of technomoral resilience by means of the example of human caregivers confronted with the introduction of care robots in elderly care. Our argument does not entail that the cultivation of moral resilience is sufficient for dealing with current challenges in elderly care and healthcare more generally. Structural changes such as better payment for care workers are urgently called for, and it is not our intention to place the burden of ensuring the continuous provision of good care entirely on individuals. We see the development of technomoral resilience as contributing to a differentiated and balanced reaction to the change that happens, thus complementing the necessary changes at the political and institutional level.

A Conversation with Philip Kitcher about Moral Methodology and Moral Progress

On Monday evening, I talked to Philip Kitcher about his novel account of moral progress, which he developed in his Munich Lectures in Ethics. Those lectures have just been published by Oxford University Press, together with comments from Amia Srinivasan, Susan Neiman and Rahel Jaeggi. In the Munich Lectures, Kitcher takes up the “Deweyan project of making moral progress more systematic and sure-footed”. He seeks to gain a better understanding of what moral progress is by looking at cases from history. He then proposes a methodology for identifying morally problematic situations and coming up with justified solutions to those problems. It is a methodology for moral and ethical practice (not theory!), and it manifests the hope that human beings are able to attain moral progress – even with respect to the highly complex moral problems of our times. In our conversation, we talked about the open-endedness of the moral project, the collective nature of moral insight, the kinds of conversations that Kitcher believes are needed to deal with the moral problems that humanity is facing today, and the role of technology in the moral project.

COVID-19 and Technomoral Change

According to the emerging paradigm of technomoral change, technology and morality co-shape each other. It is not only the case that morality influences the development of technologies. The reverse also holds: technologies affect moral norms and values. Tsjalling Swierstra compares the relationship of technology and morality to a special type of marriage: one that does not allow for divorce. Has the still-ongoing pandemic led to instances of technomoral change, or is it likely to lead to them in the future? One of the many effects of the pandemic is the acceleration of processes of digitalisation in many parts of the world. The widespread use of digital technologies in contexts such as work, education, and private life can be said to have socially disruptive effects. It deeply affects how people experience their relations to others, how they connect to their families, friends and colleagues, and the meaning that direct personal encounters have for them. Does the pandemic also have morally disruptive effects? By way of changing social interactions and relationships, it might indirectly affect moral agency and how the competent moral agent is conceived of. As promising as the prospect of replacing many traditional business meetings, international conferences, and team meetings with online meetings might seem with regard to solving the climate crisis, it is equally worrisome with an eye to the development and exercise of social and moral capacities.

An Ethical Code for Citizen Science?

Citizen Science is gaining popularity. The term refers to a form of scientific research that is carried out entirely or in part by citizens who are not professional scientists. These citizens contribute to research projects by, for example, reporting observations of plants and birds, by playing computer games or by measuring their own blood sugar level. “Citizen scientists” (also referred to as, for instance, “participants”, “volunteers”, “uncredentialed researchers”, or “community researchers”) can be involved in several ways and at any stage of a research project. They often collect data, for instance about air quality or water quality, and sometimes they are also involved in the analysis of those data. In some cases, citizens initiate and/or lead research projects, but in most of the projects we read about in academic journals, professional scientists take the lead and involve citizens at some stage(s) of the research. Some interpret the rise of citizen science as a development towards the democratisation of science and the empowerment of citizens. In this post, I address some ethical worries regarding citizen science initiatives, relate them to the choice of terminology and raise the question as to whether we need an ethical code for citizen science.

How will the coronavirus affect us – as individuals and as a society?

Schools are closed. Flights cancelled. Highways and trains deserted. People are asked to minimise social contact. At first, the coronavirus appeared to be not much different from a normal flu. But then it spread in almost no time to around 100 countries across the world. Initially, the measures taken by the Italian government seemed extreme, perhaps exaggerated – now several countries are following the Italian example, including Belgium, Germany, and the Netherlands. The most urgent ethical issue raised by the coronavirus will be the allocation of limited resources, including hospital space. There are also concerns of global justice, given the huge differences between states with regard to their ability to deal with the virus. Despite the fatal effects of this pandemic, we also hear voices that view it as a chance and express the hope that it might bring about some positive changes in society. How will COVID-19 affect us – as individuals and as a society? Will it make us more egoistic (“My family first!”) or will it bring us closer together, making us realise how much we depend on each other? Can we expect anything positive from this crisis, and what could that be?

The Potential Mediating Role of the Artificial Womb

On May 6th, I published a post about the artificial womb and its potential role for promoting gender justice. I keep thinking about this technology, and since there is more and more ethical discussion about it, I want to address it again, this time from the point of view of mediation theory and in an attempt to anticipate the potential mediating role of this technology. According to mediation theory, technology mediates how humans perceive and act in the world. The Dutch philosopher Peter-Paul Verbeek has extended this post-phenomenological approach, which has been developed by Don Ihde, to the realm of ethics. Verbeek sees technology as being intrinsically involved in moral decision-making. Technology mediates our moral perceptions and actions. Moral agency is not something exclusively human, but a “hybrid affair”. Moral actions and decisions “take place in complex and intricate connections between humans and things”. Verbeek illustrates technology’s mediating role by means of the example of obstetric ultrasound. I shall apply the idea of the technological mediation of morality to the artificial womb and discuss some ways in which that technology could play a mediating role in morality.

More Gender Justice Through the Artificial Womb?

In 2017, US scientists succeeded in transferring lamb foetuses to what comes very close to an artificial womb: a “biobag”. All of the lambs emerged from the biobag healthy. The scientists believe that in about two years it will be possible to transfer preterm human babies to an artificial womb, in which they would have a greater chance of surviving and developing without a handicap than in current neonatal intensive care. At this point in time, developers of the technology, such as Guid Oei, gynaecologist and professor at Eindhoven University of Technology, see it as a possible solution to the problem of neonatal mortality and disability due to preterm birth. They do not envisage uses that go far beyond that. Philosophers and ethicists, however, have started thinking about the use of artificial womb technology for very different purposes, such as terminating a risky pregnancy without having to kill the foetus, or strengthening the freedom of women. If we consider such further-reaching uses, new ethical issues arise, including whether artificial womb technology could promote gender justice. Should we embrace this technology as a means towards greater equality between men and women?

Technological Justice


At least in the developed world, technology pervades all aspects of human life, and its influence is growing constantly. Major technological developments include automation, digitalisation, 3D printing, and artificial intelligence. Does this create a need for a concept of “technological justice”? If we think about what “technological justice” could mean, we see that the concept is closely connected to other concepts of justice. Whether we are talking about social justice, environmental justice, global justice, intergenerational justice, or gender justice – at some point we will always refer to technology. It looks as if a concept of technological justice could be useful for drawing special attention to technology’s massive impact on human lives, although the respective problems of justice can also be captured by more familiar concepts.

