Early in The Matrix, Cypher confronts Neo with a question: “Why, oh why, didn’t I take that blue pill?” The confrontation is meaningful. The red pill gave them their nonvirtual lives outside the matrix. But is that life really more valuable than their blue-pill life inside the matrix? We’re invited to take a side, and it’s tempting to do so. But neither choice is right. In The Values of the Virtual I argue that virtual items are not less or more valuable than their nonvirtual counterparts, nor of equal or sui generis value. Or more aptly, they are all of these, depending on the virtual instance we have in mind. Taking sides short-changes the diversity of the virtual world and everything populating it, leaving us with less nuance than we need to understand and govern our virtual lives.
In this post, Pepijn Al (University of Western Ontario) discusses his recent article in Journal of Applied Philosophy on trust and responsibility in human relationships with AI and its developers.
Chances are high that you are using AI systems on a daily basis. Maybe you have watched a series that Netflix recommended to you, or used Google Maps to navigate. Even the editor I used for this blogpost is AI-powered. If you are like me, you might do this without knowing exactly how these systems work. So, could it be that we have started to trust the AI systems we use? As I argue in a recent article, this would be the wrong conclusion to draw, because trust has a specific function which is absent in human-AI interactions.
In this post, Fiona Woollard discusses their recent article in Journal of Applied Philosophy on the kinds of constraints against harm relevant to self-driving cars.
We are preparing for a future when most cars do not need a human driver. You will be able to get into your ‘self-driving car’, tell it where you want to go, and relax as it takes you there without further human input. This will be great! But there are difficult questions about how self-driving cars should behave. One answer is that self-driving cars should do whatever minimises harm. But perhaps harm is not the only thing that matters morally: perhaps it matters whether an agent does harm or merely allows harm, whether harm is strictly intended or a mere side effect, or who is responsible for the situation where someone must be harmed.
I argue in a recent article that these distinctions do matter morally but that care is needed when applying them to self-driving cars. Self-driving cars are very different from human agents. These differences may affect how the distinctions apply.
This month we will be publishing a series of posts on the topic of fatigue. Two years after the outbreak of the Covid-19 pandemic, constant fatigue characterises the lives of too many of us. Here we think about some of the political and social consequences of fatigue. In this first post, Elisa Piras writes about the dangers of information overload.
I’m having trouble trying to sleep / I’m counting sheep but running out (…)
My eyes feel like they’re gonna bleed / Dried up and bulging out my skull / My mouth is dry, my face is numb (…)
My mind is set on overdrive / The clock is laughing in my face / A crooked spine, my senses dulled (…)
Green Day, Brain Stew (1995)
If there is a word for describing the continuous tension that we experience in our daily life because of our compulsive need for information, it is probably overload. In a large and hyperconnected world, we are at the same time information seekers, producers and transmitters: we are informative hubs, constantly sharing messages with other hubs, because of our work, education and leisure activities. Like Don Quixote, the average person spends far too many hours engrossed in intellectual activities, absorbing the most diverse notions, analysing a wide array of data, messaging with any number of interlocutors. Some of us do so while moving between different languages and crossing several networks. Unlike Don Quixote’s world, though, the world we live in is not an imaginary or evanescent one; quite the contrary, the information waves that we ride, and that sometimes overwhelm us, bring elements of reality to our attention and put a strain on our cognitive, communicative and social skills.
When reality becomes especially pressing – for instance, in particularly intense work periods, or when major media events, like a pandemic or an escalating war, unfold – we can experience a malaise that Wurman (1989) has described as information anxiety: the condition of stress caused by the perceived gap between data and knowledge, which we feel when we are not able to extract what we need or want from the available information. Analysing work-induced stress among managers, Lewis (1990) observed the existence of the so-called information fatigue syndrome, whose symptoms are psychophysical: unrest and irritability, anxiety and self-doubt, insomnia, confusion and frustration, forgetfulness, frequent stomach pains and headaches. Since our access to information is often physically mediated by screens or earphones/earbuds, these symptoms might be accompanied by those revealing technostress: brain fog, sore eyes, neck and spine pain.
Overwhelming waves of information cause the condition we know as information overload or infoxication, which “occurs when decision-makers face a level of information that is greater than their information processing capacity”; this situation causes a decisional paralysis (Roetzel 2019). Sure, the problem of obtaining and processing just the right amount of information to make good choices is not a new one. However, nowadays data smog and info-noise appear to be especially challenging, not only for managers but for a much wider group of people, including adolescents. Moreover, according to a recent report, 59.5% of the world population uses the Internet, and the pandemic has boosted the number of social media users, which reached 4.2 billion as of January 2021. Smart working, online teaching and learning, socialising in the metaverse – something that 30 years ago was possible only in cyberpunk sci-fi novels – have become widespread activities during the last two years, and the smartphone really is this age’s devotional object, as techno-apocalyptic philosopher Byung-Chul Han (2014) maintains.
As a rich literature shows, our capacity to make decisions is hampered by information overload. Even under normal conditions, our decisions tend to be less rational and intelligible than we believe them to be, because of the characteristics of the problems at stake, such as undecidability, and/or the so-called opacity of consciousness, i.e. the difficulty of grasping the cognitive processes behind our choices. This is especially so when we consider collective decisions which have to be adopted under conditions of information disorder. Rumours, i.e. false or manipulated information, make our collective decisional processes – in the family, at work, in political arenas – more complicated and dialogue more difficult, fostering opinion polarisation and undermining the chances of reaching agreement and developing mutual trust.
When information overload is not a momentary blackout but becomes a daily condition, our normality changes and we feel protracted fatigue and exhaustion. Writing, reading, listening and discussing become exhausting tasks, and we feel as if we are falling into a Green-Day dystopia. We risk becoming prey to neuronal illnesses like depression and burnout syndrome. Information is the key to our societies and it helps us to shed light on reality, but as Byung-Chul Han (2015) warns us, its overly intense glow can blind us and eventually plunge us into darkness, turning us into insomniac, depressed, hyperconnected yet socially isolated ghosts. Knowing that we are exposed to such a risk is the necessary precondition for searching for viable alternatives to the “heresy” of radically disconnecting ourselves from the digital world and choosing a life sheltered from the blinding light of information.
essay by Elisa Piras
This is a guest post by Nikhil Venkatesh, a PhD candidate in Philosophy at University College London, and a fellow of the Forethought Foundation for Global Priorities Research. It draws on his paper ‘Surveillance Capitalism: a Marx-inspired account’.
On Monday 4th October, mistakes in a routine maintenance task led to Facebook’s servers disconnecting from the Internet. For six hours people across the world were unable to use Facebook and other platforms the company owns such as Instagram and WhatsApp.
The outage had serious consequences. Billions of people use these platforms, not just to gossip and share memes but to do their jobs and to reach their families. Orders and sales were missed, and so were births and deaths. At the same time, many found those six hours liberating: a chance to get things done undistracted. But what if the outage had gone on for weeks, months, or forever? Would you have been able to cope?
The previous day, former Facebook employee Frances Haugen revealed herself as the source for a Wall Street Journal series examining how the company’s products ‘harm children, stoke division and weaken our democracy’. This is the latest in a continuous stream of Facebook-related scandals: Cambridge Analytica and Brexit, Russian interference and Trump, genocide in Myanmar, the ongoing presence of scams and hate speech, and the spread of conspiracy theories about the pandemic and the vaccine which led the President of the United States, no less, to accuse Facebook of ‘killing people’. Each time a scandal appears, many of us consider quitting Facebook’s platforms. How could you participate in a social network that does these awful things?
According to the emerging paradigm of technomoral change, technology and morality co-shape each other. It is not only the case that morality influences the development of technologies; the reverse also holds: technologies affect moral norms and values. Tsjalling Swierstra compares the relationship of technology and morality to a special type of marriage: one that does not allow for divorce.

Has the still-ongoing pandemic led to instances of technomoral change, or is it likely to lead to them in the future? One of the many effects of the pandemic is the acceleration of processes of digitalisation in many parts of the world. The widespread use of digital technologies in contexts such as work, education, and private life can be said to have socially disruptive effects. It deeply affects how people experience their relations to others, how they connect to their families, friends and colleagues, and the meaning that direct personal encounters have for them. Does the pandemic also have morally disruptive effects? By changing social interactions and relationships, it might indirectly affect moral agency and how the competent moral agent is conceived of. As promising as the prospect of replacing many traditional business meetings, international conferences, team meetings and the like with online meetings might seem with regard to the climate crisis, it might be equally worrisome with an eye to the development and exercise of social and moral capacities.
Artificial intelligence (AI) and machine learning (ML) have seen impressive developments in the last decades. Think of Google’s DeepMind defeating Lee Sedol, one of the best human Go players, with its program AlphaGo in 2016. The latest version, AlphaZero, is remarkable because it relied on deep reinforcement learning to learn how to play Go entirely by itself from scratch: given only the rules of the game, it improved through trial and error, playing millions of games against itself. Machine learning algorithms have a range of other practical applications, from image recognition in medical diagnostics to energy management.
At the start of March, the US National Security Commission on AI (NSCAI), chaired by Eric Schmidt, former CEO of Google, and Robert Work, former Deputy Secretary of Defense, issued its 756-page final report. It argues that the US is in danger of losing its technological competitive advantage to China if it does not massively increase its investment in AI. It claims that
For the first time since World War II, America’s technological predominance—the backbone of its economic and military power—is under threat. China possesses the might, talent, and ambition to surpass the United States as the world’s leader in AI in the next decade if current trends do not change.
At the same time, it highlights the immediate danger posed to US national security by both China’s and Russia’s more enthusiastic use of (and investment in) AI, noting for instance the use of AI and AI-enabled systems in cyberattacks and disinformation campaigns.
In this post, I want to focus on one particular part of the report – its discussion of Lethal Autonomous Weapons Systems (LAWS) – which already received some alarmed headlines before the report was even published. Whilst one of the biggest challenges posed by AI from a national security perspective is its “dual use” nature, meaning that many applications have both civilian and military uses, the development of LAWS has over the past decade or so been at the forefront of many people’s worries about the development of AI thanks to the work of the Campaign to Stop Killer Robots and similar groups.
This is the first interview of this year from our Beyond the Ivory Tower series (you can read previous interviews here). Last October, Lisa Herzog spoke to Rowan Cruft about his public philosophy, and in particular his contribution to the Leveson Inquiry into the practices and ethics of the British media.
Rowan Cruft is a Professor in the Department of Philosophy at the University of Stirling. His research focuses on the nature and justification of rights. In 2012, he offered evidence to the Leveson Inquiry on the nature of ethical journalism and the public interest.
Over the last few months, an enthralling debate on fake news has been unfolding in the pages of the academic journal Inquiry. Behind opposed barricades, we find the advocates of two arguments, which for the sake of conciseness and simplicity we can sketch as follows:
- We should abandon the term ‘fake news’;
- We should keep using the term ‘fake news’.