Owen C. King (UNC Chapel Hill) & Mayli Mertens (University of Copenhagen), "Self-fulfilling Prophecy in Practical and Automated Prediction"
Ethical Theory & Moral Practice, 2023
Our article on self-fulfilling prophecies grew out of worries we shared about the ethics of prediction, especially about the downstream effects of predictions and the potential for feedback loops.
Typically a person is interested in a prediction that P because other things depend on P. Insights about P may equip us to deal better with those other things. But predictions, and the ways they are used, have consequences of their own. For example, if a company predicts that a consumer will be interested in a particular product or that a reader will be interested in a particular sort of content, and then that company acts on its predictions, those actions may influence the very preferences that were subject to prediction. Similarly, if a medical team makes a particular prognosis, predicting that a patient will have a particular outcome, that prediction will influence treatment decisions, and thereby potentially affect the very outcome predicted. Such predictions are reflexive, in the sense that making and using them affects the outcome predicted, and they raise distinctive practical and epistemic problems.
Among reflexive predictions, we have been especially concerned with those that are self-fulfilling, in the sense that they somehow bring about their own truth. These self-fulfilling prophecies (SFPs) are often noted by social scientists and in wider culture. Although it is easy to find people worrying about predictions becoming SFPs, we were not able to find a satisfying account of what might be especially problematic about SFPs. What might be wrong with a prediction of consumer preferences pushing consumers in that direction? What might be wrong with a particular medical prognosis expediting the outcome predicted (especially if it yields that outcome in the most humane way)? Our article aims to shed light on these questions.
As a first step, we need a clearer statement of the phenomenon in question. Abstracting away from the many diverse concrete cases in various literatures, we characterize several necessary conditions for a prediction to be an SFP, and then arrive at this definition:
A self-fulfilling prophecy is a prediction, treated as credible enough to be employed, with its outcome realized due to how the employment of the prediction affected a system sensitive to such employment.
With this definition in hand, we are in a position to lay out several related problems with SFPs. We begin by discussing how reflexivity in prediction complicates accountability for the outcomes influenced by predictions. However, that does not yet shed any light on what is distinctively problematic about SFPs. So, what makes SFPs, more than other reflexive predictions, especially problematic? Our answer, in essence, is that SFPs hide errors. We call this the problem of errors without error signals. In our article, we illustrate this problem by constructing a fairly elaborate example, comparing SFPs to some predictions that are not self-fulfilling.
Our example involves a type of predictive policing used by some police departments in the US. The sort of predictive policing we have in mind involves using data analytics to make predictions about crime hotspots—locations that will have high crime rates. On the basis of those predictions, the predicted hotspots are patrolled more heavily. With more police officers patrolling an area, the police will observe and record more crime there. Hence, the crime rates in the predicted hotspots will be higher than they otherwise would have been, and thus the predictions are self-fulfilling. (Note that the predictions do not necessarily affect the background level of criminal activity—the sorts of occurrences which, if observed, might prompt a police report—but do affect the recorded crime rate by influencing what the officers do.)
We begin our example by imagining two police captains, each in charge of different precincts in a large city that has just hired data scientists to implement predictive policing. Each captain will receive daily predictions from the data scientists about which locations will have high crime rates. Suppose that both captains will treat these predictions as credible. However, the two captains have different plans for employing the predictions: One captain—call her Captain Deployment—aims to use the predictions in the usual way, deploying more officers to patrol areas predicted to have higher crime rates. The other captain—call her Captain Rotation—opts for an alternative. Instead of directing greater attention to hotspots, her goal is to distribute the challenging work of patrolling hotspots more evenly among the officers. So, she rotates different officers through the projected hotspots on different days, without actually adjusting how heavily any area is patrolled. Finally, suppose that, unbeknownst to the data scientists and the police, a software update introduces a bug, making the hotspot predictions essentially random. The captains then use the system for a month.
For Captain Deployment, many of the hotspot predictions are SFPs because having more officers in an area causes extra arrests and police reports in that area. So, at least in aggregate, the predictions are borne out, with more crime registered in the predicted hotspots than in otherwise comparable areas. In contrast, for Captain Rotation, the predictions are not self-fulfilling: the crime rate in the predicted hotspots is no different from what it would have been, and not significantly different from the crime rate in other areas. The two captains receive equally faulty predictions, but only Captain Deployment's employment of them introduces significant reflexivity and skews the recorded crime rates; Captain Rotation's does neither.
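To make the contrast vivid, here is a minimal simulation sketch in Python. Everything in it is hypothetical: the number of areas, the detection rates, and the assumption of uniform underlying criminal activity are toy stipulations of ours, not figures from the article. The point is only that identical random predictions look accurate under Captain Deployment's policy and inaccurate under Captain Rotation's.

    import random

    random.seed(0)
    N_AREAS, N_HOTSPOTS, N_DAYS = 50, 10, 30
    BASE_ACTIVITY = 10.0   # hypothetical incidents per area per day, uniform everywhere
    DETECT_NORMAL = 0.3    # fraction of incidents recorded under normal patrol
    DETECT_HEAVY = 0.6     # fraction recorded under heavier patrol

    def month_of_policing(deploy_to_hotspots):
        """Mean recorded crime per area-day in predicted hotspots vs. elsewhere."""
        hot, other = [], []
        for _ in range(N_DAYS):
            # The buggy model: hotspot predictions are essentially random.
            hotspots = set(random.sample(range(N_AREAS), N_HOTSPOTS))
            for area in range(N_AREAS):
                incidents = max(0.0, random.gauss(BASE_ACTIVITY, 2.0))
                heavy = deploy_to_hotspots and area in hotspots
                recorded = incidents * (DETECT_HEAVY if heavy else DETECT_NORMAL)
                (hot if area in hotspots else other).append(recorded)
        return sum(hot) / len(hot), sum(other) / len(other)

    for name, deploys in [("Deployment", True), ("Rotation", False)]:
        in_hot, elsewhere = month_of_policing(deploys)
        print(f"Captain {name}: {in_hot:.1f} recorded in predicted hotspots "
              f"vs {elsewhere:.1f} elsewhere")

On this toy setup, Captain Deployment registers roughly twice as much crime in predicted hotspots as elsewhere, just as "predicted", while Captain Rotation registers no difference at all; that missing difference is precisely her error signal.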
Now, to see the special problem with SFPs, consider the captains’ perspectives at the end of the month. We might regard Captain Rotation as epistemically fortunate. At the end of the month, she is surprised that the crime rates do not match the predictions. For her, the mismatch of predictions and outcomes is an error signal—an indication that something has gone awry—which prompts her to look for mistakes and make corrections. She might reconsider how she grants credibility to the data scientists’ predictions. Perhaps she stops relying on those predictions altogether, or simply tells the data scientists that something is wrong. In general, an error signal can alert the participants in a practical endeavor that the endeavor is not well-served by the predictions employed. Thus, error signals offer opportunities for adjustment.
Captain Deployment, by comparison, is not so fortunate. Many of the hotspot predictions she employed were self-fulfilling. She observes a general alignment between hotspot predictions and areas with high crime rates. So, Captain Deployment receives no error signal, or, at most, a much weaker error signal than Captain Rotation receives. Indeed, we can imagine her pleased to see the predictions largely borne out, noticing neither their randomness nor their reflexivity. Furthermore, although her actions actually elevated the crime rate registered in the predicted hotspots, no one is likely to be called to answer for that. Thus, the special problem with SFPs is that they conceal mistakes that may have affected the prediction and its outcome.
This problem marks a contrast between SFPs and other reflexive predictions. Ordinarily, for predictions that are not self-fulfilling, reflexivity will be an interfering or perturbing factor, reducing the likelihood that a predicted outcome will be realized. Hence, failures to recognize the potential for reflexivity typically do produce error signals. This is especially striking in cases of self-defeating prophecies, where the reflexivity thwarts the outcome predicted. For illustration, suppose there is a third captain—call her Captain Avoidance—who, in the interest of reducing officers’ job-related stress, employs the hotspot predictions by sending officers away from predicted hotspots. Her predictions are self-defeating, because they leave few officers around to register crime in the areas where it is predicted. After a month of employing hotspot predictions, Captain Avoidance, like Captain Rotation, notes that the predictions were not accurate. Thus, the recognition of false predictions is an error signal for Captain Avoidance. Whereas Captain Deployment is not called to notice or answer for how her actions skewed crime rates, Captain Avoidance is prompted to take responsibility and adjust.
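The same toy setup accommodates Captain Avoidance, since sending officers away from predicted hotspots amounts to a detection rate below the baseline there. A quick expected-value check, again with made-up numbers of ours, makes the three error-signal situations explicit:

    BASE_ACTIVITY = 10.0   # hypothetical incidents per area per day
    # Hypothetical detection fractions: (in predicted hotspots, in all other areas).
    POLICIES = {
        "Deployment": (0.6, 0.3),  # heavier patrol in predicted hotspots
        "Rotation":   (0.3, 0.3),  # same patrol everywhere
        "Avoidance":  (0.1, 0.3),  # officers sent away from predicted hotspots
    }

    for captain, (in_hot, elsewhere) in POLICIES.items():
        hot_rate = BASE_ACTIVITY * in_hot
        other_rate = BASE_ACTIVITY * elsewhere
        signal = "no error signal" if hot_rate > other_rate else "error signal"
        print(f"Captain {captain}: {hot_rate:.1f} vs {other_rate:.1f} -> {signal}")

Only Captain Deployment's policy makes the random predictions look accurate; under the other two policies the mismatch surfaces and can prompt correction.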
Summarizing this line of critique: Since, with an SFP, the outcome eventually observed is the outcome originally predicted, anyone who grants credibility to the prediction in the first place will not be surprised by the outcome. Thus, SFPs quell error signals, and mistakes can go unnoticed. When mistakes go unnoticed, they are less likely to be corrected, and we are likely to repeat them.
From this critique of SFPs, our article proceeds to articulate the relationship between SFPs and a couple of varieties of feedback loops. There is an intuitive connection between SFPs and feedback loops, so much so that the two phenomena are often discussed together or even referred to interchangeably. In light of the problem of errors without error signals, this relationship becomes much clearer.
Our hope is that, with a clearer view of SFPs and the feedback loops they engender, tricky problems in the ethics of prediction become more tractable.
Brilliant article, and this topic of SFPs illustrates how tied together much of philosophy is. It brings up epistemic issues, Bayesian issues, ontological issues, normative issues, even such far-flung ideas as stochastic processes and random sampling! Where to start? I can't, save to mention SFPs made by fictional predictors about fictional states of affairs in fictional worlds, and the interplay of equally credible but contrasting predictions: can they both be SFPs if they create diametrically opposed outcomes? And so forth and so on! Brilliant contribution!