Bednets versus Rocket Ships: Should we care more for people alive today or the future of humanity?

In this post, Elizabeth Hupfer (High Point University) discusses her article recently published in the Journal of Applied Philosophy on how to balance concern for the future of humanity with the needs of those alive today.


Ever wonder why ChatGPT was invented? Or why billionaires have become so obsessed with rockets? The common thread in these questions is Longtermism. Longtermism is the view that concern for the long-term future is a moral imperative. The theory is caricatured by critics as a movement preoccupied with dystopian takeover by AI, a globe shrouded in nuclear winter, and colonization of distant planets. But at the heart of Longtermism are concepts intuitive to many: that future people’s lives matter and that it is good to ensure the survival of humanity. Yet, in our current world of scarce resources, Longtermist priority may go to future people at the expense of present people in need. In my paper I argue that Longtermists do not have a clear means of giving priority to people in need today without abandoning central tenets of the theory.

Longtermism

Longtermism has grown in popularity from a philosophical theory to a social movement that shapes Silicon Valley, US politics, international law, and more. To understand this consequential theory, we need to look at two important components: the moral importance of time and the quantity of potential future people.

First, Longtermists argue that time is not morally important. In What We Owe the Future, William MacAskill gives the example of dropping a shard of glass on a hike. If you drop the glass and do not pick it up, then you have harmed the person who steps on it, even if that person will not exist until far in the future.

Second, Longtermists argue that there are potentially tens of trillions of people who could exist in the future. There are various ways that Longtermists can calculate this number, but all that matters for our purposes is that it is a lot. A whole lot. More people than exist presently, and more people than have ever existed up to this point.

Combine the notion that time is not morally important with the vast number of potential future people, and it becomes imperative to safeguard both the survival of humanity and the quality of life of future people.

Far-Future Priority Objection

What if this concern for the tens of trillions of future people comes at the expense of people who are living today? I call this the Far-Future Priority Objection: repeated instances of priority to far-future concerns will result in the systemic neglect of the current people most in need and a potentially large-scale reallocation of resources to far-future interventions.

For example, Hilary Greaves and William MacAskill argue that the most effective way to save a current life through donation is to provide insecticide-treated bednets in malaria zones. On their figures, donating $100 to bednets saves 0.025 lives. But this is less effective than many Longtermist causes, such as asteroid deflection ($100 would result in around 300,000 additional lives in expectation), pandemic preparedness (200 million additional lives), and preventing AI takeover (one trillion additional lives). If Longtermists are concerned with efficiently doing the most good they can with a unit of resources (and I argue in my paper that they are), then Longtermist causes will trump even the most efficient causes for people alive today.
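To make the expected-value arithmetic behind these comparisons explicit, here is a rough back-of-the-envelope calculation using only the per-$100 figures quoted above (a sketch of the comparison, not additional data from Greaves and MacAskill):

```latex
% Cost-effectiveness in expected lives saved per $100, using the figures quoted above
\begin{align*}
\text{Bednets:}             &\quad 0.025 \text{ lives per } \$100
    \;\Rightarrow\; \$100 / 0.025 = \$4{,}000 \text{ per life saved}\\
\text{Asteroid deflection:} &\quad \approx 300{,}000 \text{ expected lives per } \$100\\
\text{Ratio:}               &\quad 300{,}000 / 0.025 = 1.2 \times 10^{7}
    \;\text{ (roughly twelve million times more cost-effective in expectation)}
\end{align*}
```

On this kind of reckoning, even the single most efficient present-focused intervention is dwarfed by far-future interventions, which is precisely what drives the Far-Future Priority Objection below.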

According to the Far-Future Priority Objection, repeated priority in this pattern could, over time, significantly shift overall resources away from those in need today, particularly those in low-income nations. Thus, widespread espousal of Longtermism may result in the global affluent turning their backs on these populations.

Potential Responses

In my paper, I analyse several potential responses the Longtermist could give to the Far-Future Priority Objection and argue that none of these responses can successfully mitigate the objection without abandoning basic tenets of Longtermism.

I will highlight one such argument here. Longtermists typically argue that far-future interventions cannot cause serious harm in the short term. According to my Far-Future Priority Objection, individual instances of priority to the far future are not harmful, but repeated instances may be. Take the following analogy: a law is enacted that is not explicitly discriminatory towards minority Group X. Over time, however, implementation of the law redirects resources that would previously have gone to Group X to a nearby (perhaps better-off) Group Y. A decade later, Group X is significantly worse off. One could reasonably argue that Group X was seriously harmed. Similarly, Longtermism does not intentionally or explicitly discriminate against current people, and it does not remove existing resources from them. Yet serious harm is likely caused nonetheless.

However, I argue that appealing to near-future serious harms yields a response to the Far-Future Priority Objection that is either too strong or too weak, and so it is not a viable avenue for the Longtermist. One could be an absolutist about causing harm, in which case repeated priority to the future would be morally wrong and Longtermism would be undermined altogether (too strong). Alternatively, one could be a non-absolutist and hold that the prohibition on causing harm can be overridden when the stakes are high enough. Yet, since there could be tens of trillions of future lives at risk, the stakes will always be high enough to override the prohibition (too weak).

Conclusion

Longtermists have two options. First, they can bite the bullet and accept that Longtermism could result in the systemic neglect of present people. This is counterintuitive to many. Second, they can craft a new principle that allows for occasional priority to present people without abandoning the basic tenets of the theory. In my paper, I analyse and dismiss several possible principles of this kind.


Elizabeth Hupfer’s research focuses on the intersection between normative/applied ethics and social/political philosophy. She has published on distributive justice, coercion, humanitarianism, Effective Altruism, and Longtermism.

Journal of Applied Philosophy

The Journal of Applied Philosophy is a unique forum for philosophical research that seeks to make a constructive contribution to problems of practical concern. Open to the expression of diverse viewpoints, it brings the identification, justification, and discussion of values to bear on a broad spectrum of issues in environment, medicine, science, policy, law, politics, economics and education. The journal publishes in all areas of applied philosophy, and posts accessible summaries of its recent articles on Justice Everywhere.
