a blog about philosophy in public affairs

Category: Technology

What’s the problem with killer robots? Some reflections on the NSCAI final report

At the start of March, the US National Security Commission on AI (NSCAI), chaired by Eric Schmidt, former CEO of Google, and Robert Work, former Deputy Secretary of Defense, issued its 756-page final report. It argues that the US is in danger of losing its technological competitive advantage to China if it does not massively increase its investment in AI. It claims that

For the first time since World War II, America’s technological predominance—the backbone of its economic and military power—is under threat. China possesses the might, talent, and ambition to surpass the United States as the world’s leader in AI in the next decade if current trends do not change.

At the same time, it highlights the immediate danger posed to US national security by both China’s and Russia’s more enthusiastic use of (and investment in) AI, noting for instance the use of AI and AI-enabled systems in cyberattacks and disinformation campaigns.

In this post, I want to focus on one particular part of the report – its discussion of Lethal Autonomous Weapons Systems (LAWS) – which attracted some alarmed headlines even before the report was published. Whilst one of the biggest challenges posed by AI from a national security perspective is its “dual use” nature, meaning that many applications have both civilian and military uses, the development of LAWS has, over the past decade or so, been at the forefront of many people’s worries about AI, thanks to the work of the Campaign to Stop Killer Robots and similar groups.

An interview with Rowan Cruft (Beyond the Ivory Tower Series)

This is the first interview of this year from our Beyond the Ivory Tower series (you can read previous interviews here). Last October, Lisa Herzog spoke to Rowan Cruft about his public philosophy, and in particular his contribution to the Leveson Inquiry into the practices and ethics of the British media.

Professor Rowan Cruft

Rowan Cruft is a Professor in the Department of Philosophy at the University of Stirling. His research focuses on the nature and justification of rights. In 2012, he offered evidence to the Leveson Inquiry on the nature of ethical journalism and the public interest.

The harm in fake news

Over the last few months, an enthralling debate on fake news has been unfolding in the pages of the academic journal Inquiry. Behind opposing barricades, we find the advocates of two positions, which for the sake of conciseness and simplicity we can sketch as follows:

  1. We should abandon the term ‘fake news’;
  2. We should keep using the term ‘fake news’.

Intellectual Property and the Problem of Disruptive Innovations

In this post, Sam Duncan discusses his recent article in Journal of Applied Philosophy on the rights and duties of intellectual property.


Intellectual property is perhaps the most valuable form of property in the modern economy, and many recently minted multimillionaires and billionaires owe their fortunes to some sort of intellectual property claim. But why think that the creators of intellectual property deserve such outsized rewards? The most obvious answer seems to be to invoke some kind of Lockean or labor-based theory of intellectual property, and such theories are usually taken to grant strong property rights over intellectual property with few accompanying obligations. However, as I argue in my recent article, these theories actually entail that those who claim many forms of intellectual property have very strong obligations to those made worse off by them. In fact, they would rule popular solutions to the job losses that many forms of intellectual property bring about, such as a universal basic income, entirely inadequate.

The Case for Ethical Guidelines on Universities’ Corporate Partnerships

In this guest post, members of No Tech for Tyrants (NT4T) – a student-led, UK-based organisation working to sever the links between higher education, violent technology, and hostile immigration environments – discuss one important arm of their work. 

Photo by Cory Doctorow on Flickr, licensed under CC BY-SA 2.0

Migrant communities are endangered by universities’ relationships with businesses like Palantir Technologies, whose software is “mission critical” to US Immigration and Customs Enforcement’s (ICE) mass raids, detentions, and deportations. The harm inflicted by ICE is an integral component of a white nationalist deportation machine, which routinely destroys lives and condemns migrants to deadly concentration camps. Migrant rights organisations describe Palantir as the “most prominent supporter of the deportation machine in Silicon Valley.” The anti-migrant violence Palantir enables would not be possible without the talent it recruits from top UK universities. In exchange for material benefits, universities invite Palantir representatives to deliver talks, present at career fairs, and sponsor student prizes. Several groups have cut ties with Palantir, citing the company’s facilitation of anti-migrant violence; yet, despite claiming to be committed to social responsibility, many universities remain open to Palantir.

As members of No Tech For Tyrants (NT4T), a student-led migrant justice organisation, we met with university administrators to request that they implement ethical guidelines regarding their corporate partnerships. Administrators responded with two kinds of objections: ethical guidelines would (1) threaten free expression, and (2) be too political. We’ll explicate and reject both kinds of objection. Instituting ethical guidelines on corporate partnerships is necessary for dismantling the relationship between universities and technology businesses that facilitate egregious harm.

Philosophers’ Rundown on the Coronavirus Crisis

The outbreak of COVID-19 has raised several ethical and political questions. In this special edition, Aveek Bhattacharya and Fay Niker have collected brief thoughts from Justice Everywhere authors on 9 pressing questions.

Topics include: the feasibility of social justice, UBI, imagining a just society, economic precarity, education, climate change, internet access, deciding under uncertainty, and what counts as (un)acceptable risk.   

An Interview with Baroness Onora O’Neill (Beyond the Ivory Tower series)

Aveek Bhattacharya and Fay Niker recently interviewed Baroness Onora O’Neill, asking her about her wide-ranging experiences combining being a professor of philosophy and a member of the House of Lords (among many other things). 

Baroness Onora O’Neill of Bengarve is Emeritus Honorary Professor at the University of Cambridge and has been a cross-bench (i.e. not aligned with any political party) member of the British House of Lords since 2000. She has written widely in ethics and political philosophy, and is particularly known for her work on bioethics, trust and the philosophy of Kant. She was Principal of Newnham College, Cambridge from 1992-2006, President of the British Academy from 2005-9, chaired the Nuffield Foundation from 1998-2010 and chaired the Equality and Human Rights Commission from 2012-2016.

The Potential Mediating Role of the Artificial Womb

On May 6th, I published a post about the artificial womb and its potential role in promoting gender justice. I keep thinking about this technology, and since the ethical discussion of it is growing, I want to address it again, this time from the point of view of mediation theory, in an attempt to anticipate the potential mediating role of this technology. According to mediation theory, technology mediates how humans perceive and act in the world. The Dutch philosopher Peter-Paul Verbeek has extended this post-phenomenological approach, developed by Don Ihde, to the realm of ethics. Verbeek sees technology as being intrinsically involved in moral decision-making. Technology mediates our moral perceptions and actions. Moral agency is not something exclusively human, but a “hybrid affair”. Moral actions and decisions “take place in complex and intricate connections between humans and things”. Verbeek illustrates technology’s mediating role by means of the example of obstetric ultrasound. I shall apply the idea of the technological mediation of morality to the artificial womb and discuss some ways in which that technology could play a mediating role in morality.

From Fact-Checking to Value-Checking

Fears over ‘fake news’, targeted disinformation, and the rise of post-truth politics have been met with a central mainstream solution: ‘fact-checking’. Fact-checking features prominently in coverage of the 2019 UK General Election. ITV News, for instance, will use FullFact.org to analyse the claims made by Boris Johnson and Jeremy Corbyn in their forthcoming leadership debate, with the aim of better informing their viewers by exposing misleading statements.

This reflects the wider embrace of fact-checking as a panacea against the rise of anti-expert politics. It has been employed in coverage of US presidential and primary debates, as well as the parliamentary theatre of Brexit. Third-party fact-checking organisations have also been championed by social media companies in response to demands by regulators and legislatures that they take responsibility for the content circulated on their platforms. Indeed, the use of ‘independent’ fact-checkers to flag content was highlighted by Mark Zuckerberg, during his various appearances before Congress, in defence of Facebook’s practices.

However, the concept of fact-checking frames the problems of post-truth politics in narrowly positivist terms – as reducible to a lack of information (‘facts’), leading to sub-optimally rational decision-making by electorates. It has not been underpinned by a sophisticated account of the epistemic conditions for the exercise of democratic citizenship. Fact-checking occupies an increasingly central place in our political culture, but the justification for it remains largely implicit and untheorized.

