Category: Technology

Just do(pe) it? Why the academic project is at risk from proposals to pharmacologically enhance researchers.

In this post, Heidi Matisonn (University of Cape Town) and Jacek Brzozowski (University of KwaZulu-Natal) discuss their recently published article in the Journal of Applied Philosophy in which they explore the justifiability and potential risks of cognitive enhancement in academia.

Image created with ChatGPT.

The human desire to enhance our cognitive abilities, to push the boundaries of intelligence through education, tools, and technology, has a long history. Fifteen years ago, confronted by the possibility that a ‘morally corrupt’ minority could misuse cognitive gains to catastrophic effect, Persson and Savulescu proposed that research into cognitive enhancement should be halted unless accompanied by advances in moral enhancement.

In response to this, and following on from Harris’ worries about the mass suffering that could result from delaying cognitive enhancement until moral enhancement could catch up, in 2023 Gordon and Ragonese offered what they termed a ‘practical approach’ to cognitive enhancement research, in which they advocated targeted cognitive enhancement specifically for researchers working on moral enhancement.

Our recent article in the Journal of Applied Philosophy suggests that while both sets of authors are right to be concerned about the significant risks of cognitive enhancement outrunning moral enhancement, their focus on the ‘extremes’ neglects some more practical consequences that a general acceptance of cognitive enhancement may bring, not least those relating to the academic project itself.


From the Vault: Universities, Academia, and the Academic Profession

While Justice Everywhere takes a short break over the summer, we recall some of the highlights from our 2023-24 season. 

Trinity College Library, Dublin. antomoro (FAL), via Wikimedia Commons

Here are a few highlights from this year’s posts relating to academia, the modern university, and the academic profession:

Stay tuned for even more on this topic in our 2024-25 season!

***

Justice Everywhere will return in full swing in September with fresh weekly posts by our cooperative of regular authors (published on Mondays), in addition to our Journal of Applied Philosophy series and other special series (published on Thursdays). If you would like to contribute a guest post on a topical justice-based issue (broadly construed), please feel free to get in touch with us at justice.everywhere.blog@gmail.com.

Why Conscious AI Would Be Bad for the Environment

Image credit to Griffin Kiegiel and Sami Aksu

This is a guest post by Griffin Kiegiel.

Since the meteoric rise of ChatGPT in late 2022, artificial intelligence (AI) systems have been built into everything from smartphones and electric vehicles to toasters and toothbrushes. The long-term effects of this rapid adoption remain to be seen, but we can be certain of one thing: AI uses a lot of energy that we can’t spare. ChatGPT reportedly uses more than 500,000 kilowatt-hours of electricity daily, which is massive compared to the 29 kilowatt-hours consumed by the average American household.
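To put those figures in perspective, a rough back-of-the-envelope division (taking both reported numbers at face value) gives:

500,000 kWh/day ÷ 29 kWh/day per household ≈ 17,000 households

In other words, ChatGPT’s reported daily electricity use is on the order of seventeen thousand average American homes.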

As global temperatures and ocean levels rise, it is our responsibility to limit our collective environmental impact as much as possible. If the benefits of AI don’t outweigh the risks associated with increasing our rate of energy consumption, then we may be obligated to shut AI systems down for the sake of environmental conservation. However, if AI becomes conscious, shutting those systems down may be akin to murder, morally trapping us in an unsustainable system.


The Disruption of Human Reproduction

This is already my third post about ectogestative technology, better known as “artificial womb technology”. While in the first post I explored the idea that this technology could potentially advance gender justice, in the second I approached the technology from the perspective of post-phenomenology. In this third post, I look at the technology as an example of a socially disruptive technology. Ongoing research in the philosophy of technology investigates the ways in which 21st-century technologies such as artificial intelligence, synthetic biology, gene-editing technologies, and climate-engineering technologies affect “deeply held beliefs, values, social norms, and basic human capacities”, as well as “basic human practices, fundamental concepts, [and] ontological distinctions”. These technologies deeply affect us as human beings, our relationship to other parts of nature such as non-human animals and plants, and the societies we live in. In this post, I sketch the potential disruptive effects of ectogestative technology on practices, norms, and concepts related to expecting and having children.


What’s really at stake with Open Access research? The Case of Sci-Hub and the “Robin Hood of Science”

A mural dedicated to Sci-Hub at UNAM. Txtdgtl, CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0, via Wikimedia Commons

This is a guest post by Georgiana Turculet (Universitat Pompeu Fabra).

In his recently published “More Open Access, More Inequality in the Academia”, Alex Volacu aptly criticizes present Open Access (OA) policies for giving rise to structural inequalities among researchers and increasing revenues only for publishers. His analysis contextualizes some recent academic events, namely the resignation of the board of the well-known Journal of Political Philosophy under pressure from the publisher to increase the intake of open access publications. However, it would benefit from considering the wider context of recent alternative forms of resistance to corporate publishers’ pressures.


Driving for Values

Smart cities are full of sensors and collect large amounts of data. One reason for doing so is to get real-time information about traffic flows. A next step is to steer traffic in a way that contributes to the realisation of values such as safety and sustainability. Think of steering cars around schools to improve the safety of children, or of keeping certain areas car-free to improve air quality. Is it legitimate for cities to nudge their citizens to make moral choices when participating in traffic? Would a system that limits a person’s options for the sake of improving quality of life in the city come at the cost of restricting that person’s autonomy? In a transdisciplinary research project, we (members of the ESDiT programme and the Responsible Sensing Lab) explored how a navigation app that suggests routes based on shared values would affect users’ experiences of autonomy. We did so by letting people try out speculative prototypes of such an app on a mobile phone and asking them questions about how they experienced different features of the app. Through several interviews and a focus group, we gained insights into the conditions under which people find such an app acceptable and into the features that increase or decrease their feeling of autonomy.


What Claims Do We Have Over Our Google Search Profiles?

This is a guest post by Hannah Carnegy-Arbuthnott (University of York).

We’ve all done things we regret. It used to be possible to comfort ourselves with the thought that our misadventures would soon be forgotten. In the digital age, however, not only is more of our personal information captured and recorded, but search engines can also serve up long-forgotten information at the click of a button.


The diversity of values in virtual reality

In this post, Rami Ali (University of Arizona) discusses his recent article in the Journal of Applied Philosophy on the range of values possible in the virtual world.


AI-generated image created from Rami Ali’s prompt using OpenAI

Early in The Matrix, Cypher confronts Neo with a question: “Why, oh why, didn’t I take that blue pill?” The confrontation is meaningful and significant. The red pill gave them their nonvirtual life outside the matrix. But is that life really more valuable than their blue-pill life inside the matrix? We’re invited to take a side, and it’s tempting to do so. But neither choice is right. In The Values of the Virtual, I argue that virtual items are neither less nor more valuable than their nonvirtual counterparts, nor of equal or sui generis value. Or more aptly, they are all of these, depending on the virtual instance we have in mind. Taking sides short-changes the diversity of the virtual world and everything populating it, leaving us with less nuance than we need to understand and govern our virtual lives.


Is it possible to trust Artificial Intelligence (AI)?

In this post, Pepijn Al (University of Western Ontario) discusses his recent article in the Journal of Applied Philosophy on trust and responsibility in human relationships with AI and its developers.


Chances are high that you are using AI systems on a daily basis. Maybe you have watched a series that Netflix recommended to you, or used Google Maps to navigate. Even the editor I used for this blog post is AI-powered. If you are like me, you might do this without knowing exactly how these systems work. So, could it be that we have started to trust the AI systems we use? As I argue in a recent article, I think this would be the wrong conclusion to draw, because trust has a specific function which is absent in human-AI interactions.


What, if any, harm can a self-driving car do?

In this post, Fiona Woollard discusses their recent article in the Journal of Applied Philosophy on the kinds of constraints against harm relevant to self-driving cars.


We are preparing for a future when most cars do not need a human driver. You will be able to get into your ‘self-driving car’, tell it where you want to go, and relax as it takes you there without further human input. This will be great! But there are difficult questions about how self-driving cars should behave. One answer is that self-driving cars should do whatever minimises harm. But perhaps harm is not the only thing that matters morally: perhaps it matters whether an agent does harm or merely allows harm, whether harm is strictly intended or a mere side effect, or who is responsible for the situation where someone must be harmed.

I argue in a recent article that these distinctions do matter morally but that care is needed when applying them to self-driving cars. Self-driving cars are very different from human agents. These differences may affect how the distinctions apply.