David Thorstad (University of Oxford/Vanderbilt University), "High risk, low reward: A challenge to the astronomical value of existential risk mitigation"
Forthcoming in Philosophy & Public Affairs
The paper: https://philpapers.org/rec/THOHRL
Many longtermists hold two claims. First, humanity faces high levels of existential risk: risks that threaten our survival or our potential for desirable future development. For example, Toby Ord holds that we face a one-in-six chance of existential catastrophe by the end of this century. Let existential risk pessimism be the claim that humanity faces high levels of existential risk.
Second, many longtermists hold that efforts to reduce existential risk have astronomical value, far greater than the value of most other things we can do. Call this the astronomical value thesis.
It is natural to think that existential risk pessimism supports the astronomical value thesis. Surely it is more important to mitigate large risks than small ones. Strikingly, I show across a range of models that precisely the opposite is true: existential risk pessimism tells against the astronomical value thesis, and in many cases the tension is strong enough that existential risk pessimism may scuttle the astronomical value thesis.
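To see where the tension comes from, here is a back-of-the-envelope sketch in the spirit of the simplest model discussed in the paper (the notation is mine, offered only as an illustration: r is a constant per-century probability of existential catastrophe, v is the value of each century humanity survives to enjoy, and f is the fraction by which an intervention cuts this century's risk):

\[
  V \;=\; \sum_{t=1}^{\infty} v\,(1-r)^{t} \;=\; \frac{v\,(1-r)}{r},
  \qquad
  V_f \;=\; \bigl(1-(1-f)\,r\bigr)\,\frac{v}{r},
  \qquad
  V_f - V \;=\; f\,v.
\]

Here V is the expected value of the future and V_f is that value after the intervention. On this toy picture the gain from the intervention is f v, at most the value of a single century, so it is hard to call astronomical, and raising r (being more pessimistic) does nothing to improve it. The paper develops this flavor of result across a range of richer models, in which pessimism actively reduces the value of risk mitigation.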
Next, I ask what might be done to make existential risk pessimism compatible with the astronomical value thesis. I argue that the most viable option relies on an empirical hypothesis, the Time of Perils Hypothesis, on which risk is high now but will soon fall to a permanently low level. Finally, I argue that the Time of Perils Hypothesis is false.
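To see why the Time of Perils Hypothesis could help, here is another stylized sketch in my own notation (an illustration, not the paper's derivation): suppose risk is r_h for N perilous centuries and then drops permanently to a much smaller r_l. The expected value of the future becomes

\[
  V \;=\; \sum_{t=1}^{N} v\,(1-r_h)^{t}
        \;+\; (1-r_h)^{N}\,\frac{v\,(1-r_l)}{r_l},
\]

and cutting this century's risk by a fraction f multiplies the whole sum by (1-(1-f) r_h)/(1-r_h), for a gain of

\[
  V_f - V \;=\; \frac{f\,r_h}{1-r_h}\,V
  \;\approx\; f\,r_h\,(1-r_h)^{\,N-1}\,\frac{v}{r_l}
  \qquad (r_l \text{ small}).
\]

If r_l is low enough and the perilous period is short, this gain can be enormous even for a pessimist about r_h. That is why so much turns on whether risk really will soon fall to a permanently low level, and why the paper's case against the Time of Perils Hypothesis matters.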
I conclude that there is an enduring tension between existential risk pessimism and the astronomical value thesis. If you really think that humanity faces high levels of existential risk, then you should be significantly less enthusiastic about efforts to reduce existential risk.
The video shows a recent talk I gave about this paper. I have also blogged about it here.