Angela O’Sullivan and Lilith Mace (University of Glasgow), "Reverse-Engineering Risk"
Forthcoming in Erkenntnis
Imagine a death lottery, of the kind envisioned by Shirley Jackson in her short story ‘The Lottery’. Once a year, a neighbourhood of 10,000 people, say, runs a lottery. Whoever holds the winning ticket gets executed. But this is not pointless brutality: the surviving parties to the lottery consistently report that participation in the lottery boosts community spirit and fosters togetherness. After each lottery takes place, things run more smoothly in the neighbourhood for about 11 months and 3 weeks, at which point tensions begin to rise again. But no bother: it’s then time for another lottery.
Suppose we asked you whether you’d like to subject your own neighbourhood to such a lottery. “No way!”, you’d probably say, “That would put me at far too high a risk of being the unlucky blighter who gets executed.” And this seems sensible: were your neighbourhood to conduct a death lottery, it could just so happen that you end up dead. The risk of death is just too high, even though there are benefits to be gained from participation.
But what if we asked you whether you’d like to subject some other neighbourhood to the death lottery – for their own good, of course? Well, that’s a different matter entirely. Someone will die, yes, but it won’t be you, so you can weigh up the costs and benefits of the death lottery in a more impartial manner. Maybe the risk of death, to each participant in the lottery, wouldn’t be so high after all, if there are sufficiently many participants.
How could it be that the risk to you in the first case is very high, but the risk to any particular resident of the other neighbourhood is not so high? We say: because the relevant notion of risk at work in each case is different. In the first case, the relevant notion of risk has it that high-risk events are those that could just so happen to occur, given your evidence; if those events occurred, it would not be surprising. If it would not be surprising, given your evidence, that a course of action led to your death, then that course of action involves a high risk of death. If your neighbourhood held the death lottery, it would not be surprising if you ended up dead: your ticket could just as well be the so-called ‘winner’ as any other. But in the second case, the relevant notion of risk has it that high-risk events are those that are sufficiently likely to occur. For each participant in the lottery, there is a 1 in 10,000 chance of death. That’s not a minuscule chance of death, but it’s not high enough to be likely, either.
More specifically, in our paper ‘Reverse-Engineering Risk’, we argue that the concept of risk serves a particular purpose for us, and it does that best by picking out different notions of risk in different contexts. The purpose of the concept of risk is to guide our decision-making so as to reduce disvalue, and how best to do this depends on features of the particular decision being made.
Where a subject is making a one-off decision with only short-term consequences that primarily affect her, the most useful notion of risk is given by the normic account of risk developed by Philip Ebert, Martin Smith and Ian Durbach (2020). On this account, a negative event is high risk if the obtaining of that event would not call out for any special explanation, given a body of evidence. It could just so happen. Maybe it’s unlikely that I’d die if my neighbourhood held a death lottery. But this wouldn’t call out for any special explanation: one ticket has to win, and it might as well be mine as any other. So my partaking in the death lottery involves a high risk of death.
Where a subject is making a decision that will affect a larger number of people, or is expected to have broader or longer-term consequences, the most useful notion of risk is that offered by the probabilistic account of risk. On this account, a negative event is high risk if it is likely to occur, relative to a body of evidence. Given the benefits the residents of the other neighbourhood would receive if I were to subject their neighbourhood to the death lottery, it would be useful to have an account of risk that says that it’s not so risky, for any particular resident, to enter into the lottery. The probabilistic account gives us this result: there’s a 0.01% chance (1 in 10,000) that any particular resident dies in the lottery, which is pretty unlikely.
Finally, those with no interest in practical decision-making – such people are often known as ‘philosophers’ – will find use for a third account of risk, offered by Duncan Pritchard (2015). On Pritchard’s modal account of risk, the risk of a negative event is determined by the closeness of the closest world in which that event occurs, irrespective of any body of evidence. Worlds are closer to the actual world the more similar they are to the actual world. Suppose that my neighbourhood has already held the death lottery, and the results have been drawn. My ticket wasn’t the winner, but my next-door neighbour’s was. I’m still at high risk of being executed, as there’s a close possible world in which my ticket is the winner: all that differs between that world and the actual world is that a different ticket was drawn from the lottery machine. But fortunately, the risk is higher for my next-door neighbour: as her ticket actually won, the world in which her ticket wins just is the actual world, and so is closer than any world in which my ticket wins. Of course, these facts about risk cannot affect my decision-making, nor hers: they are entirely beyond our ken, at least until the results of the lottery are made public. Yet when theorising about the case from a God’s eye perspective, as philosophers are wont to do, it seems right to say that my neighbour is at higher risk of being executed than I am. The modal account allows us to issue this verdict.
The death lottery is an extreme case. But notice that we do partake in activities in which we expect some number of people to die, for relatively frivolous reasons. For example, most years a handful of people die at Glastonbury Festival, while mass sporting events like marathons usually involve fatalities. Our central suggestion is that different notions of risk are in play for different decision-makers as regards these events, and this allows the concept of risk to serve its purpose most effectively in these different contexts.
An individual runner deciding whether she should partake in a marathon should make use of a different notion of risk than a city council member deciding whether a marathon should go ahead in her city. The city council member doesn’t care whether any particular runner dies; what she cares about is how many deaths she should expect there to be. So it shouldn’t concern her whether it would call out for special explanation for any particular runner to die – which is the factor that determines risk on Ebert, Smith and Durbach’s normic account. What should concern her is the chance of any runner dying. As such, she should make use of the probabilistic notion of risk in determining whether to hold the marathon. However, for the would-be marathon runner, there is one runner whose death is of particular concern to her: her own. Knowing that roughly one in 100,000 runners dies during or immediately after a marathon can’t tell her whether she would be that one in 100,000. What she should care about, we posit, is the normic risk of her dying: whether this could just so happen, given her evidence, or whether it would call out for some special explanation.
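To make the contrast vivid, here is a minimal numerical sketch in Python. The per-person figures (1 in 10,000 for the death lottery, roughly 1 in 100,000 for marathon deaths) come from the cases above; the marathon field size of 50,000 is an illustrative assumption of ours, not a figure from the paper, and neither number settles the normic question of whether any particular death would call out for special explanation.

```python
# Toy illustration of the two perspectives discussed above.
# Per-person probabilities are taken from the text; the marathon field
# size (50,000 runners) is an assumed, illustrative number.

def per_person_risk(p_death: float) -> float:
    """Probabilistic risk for one participant: the chance that she dies."""
    return p_death

def expected_deaths(n_participants: int, p_death: float) -> float:
    """What the organiser attends to: how many deaths to expect overall."""
    return n_participants * p_death

scenarios = {
    "death lottery (10,000 residents)": (10_000, 1 / 10_000),
    "marathon (50,000 runners, assumed field size)": (50_000, 1 / 100_000),
}

for name, (n, p) in scenarios.items():
    print(f"{name}:")
    print(f"  chance that a given participant dies: {per_person_risk(p):.4%}")
    print(f"  expected deaths across the event: {expected_deaths(n, p):.2f}")
```

Run as written, the sketch reports a 0.01% per-person chance and one expected death for the lottery, and a 0.001% per-person chance and half an expected death for the marathon; the council member’s question is answered by the expected-deaths figure, while the runner’s normic question is answered by neither.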
In this way, we argue for risk pluralism: there is no single account of risk that accurately determines risk in all situations, as different situations call out for different notions of risk, depending on the decisions being made. But unlike other versions of risk pluralism – in particular, that offered by Ebert, Smith and Durbach (2020) – our pluralism is principled: we explain why it is that the concept of risk takes these different forms. When I’m thinking about how my decision will affect some random person, I care about whether it’s likely to do them some harm – a one in 100,000 chance of a runner dying if I allow my city to put on a marathon is not too high a risk. But when I’m thinking about myself, I don’t care about chances; I care whether I am the one in 100,000 who dies. On our picture, the concept of risk cares about this, too.