Adam Piovarchy (University of Notre Dame Australia) & Scott Siskind, "Epistemic Health, Epistemic Immunity and Epistemic Inoculation"
Forthcoming, Philosophical Studies
Epistemic health refers to how well an entity (e.g. individual, family, community, network, nation, or system) is functioning with regard to a range of epistemic goods or ideals. These goods can involve one's beliefs, inferences, or cognitive capacities. Agents might have healthy priors or healthy scepticism. We can also talk meaningfully about agents with poor epistemic health resulting from a range of factors. Perhaps a community has corrupt academic or media outlets, blocked informational networks (e.g. a government censoring a media outlet), unreliable community agents, or low levels of social trust. Epistemic health assessments can be context-sensitive or relative. A community might have good epistemic health given some obstacle that they've overcome, while still having poor epistemic health relative to other communities.
This concept is useful in ways similar to that of 'health'. It aids evaluation: once one has the concept, one can assess the epistemic functioning of individuals, communities, networks, countries, or institutions according to one's interests. One can weigh the merits of various proposals: a policy might improve epistemic health, but lead to lower happiness. It also helps us co-ordinate. Given we care about the epistemic health of ourselves, our families, communities, and country, identifying that a policy improves epistemic health can help us implement it. And it can help us understand phenomena in the world. Perhaps a community's low compliance with public health orders, distrust of experts, and spread of misinformation can all be seen as resulting from poor epistemic health.
Maintaining good epistemic health doesn't just require certain resources and training; it also requires remaining free from damaging ideas, habits, or inferences. One needs to be robustly disposed to ward off threats or pathogens. Epistemic immunity refers to one's resistance to engaging in certain forms of epistemic activity (e.g. forming certain kinds of beliefs, trusting certain sources, or making certain kinds of inferences), typically unhealthy ones. One's degree of epistemic immunity is to be understood functionally, measured by how robust one's resistance to engaging in certain activities is across a range of relevant counterfactuals. Being too inattentive, or too arrogant, to follow someone's sophisticated argument for an unhealthy conclusion can actually contribute to one's immunity against that argument (though this might be bad overall, given the other threats such habits leave one exposed to).
Such observations matter for assessing whether something is immunity-improving or immunity-compromising. You're probably familiar with the large body of research showing our reasoning is frequently affected by seemingly irrelevant factors, like how easily we can think of examples or the way in which options are framed. Though these are commonly described as 'biases', we shouldn't assume we would be healthier or more immune to bad ideas without them. Many biases, heuristics, and forms of 'System 1' reasoning can be ecologically rational, as they typically deliver correct judgments given the set of environments one finds oneself in. For example, while being disposed to vote for whichever candidate is listed first on a voting form seems irrational given that listed order does not correlate with quality, when people implicitly recommend an option they often list it first. And mechanisms which favour first-presented options can, on the whole, be rational if one is in an environment where people of good will often present their implicit recommendation first.
Epistemic immunity isn't just a state that agents can have; it is also something we can collectively produce in others and ourselves through epistemic inoculation. Though 'inoculation', 'immunisation', and 'vaccination' are sometimes used interchangeably in everyday discourse, the first has some particular associations worth emphasising. Inoculation refers to producing immunity by exposure to something introduced by other agents. (Immunisation lacks this social element: our immune systems naturally learn to defeat pathogens, and foetuses gain immunity from their mothers' antibodies, without this typically being referred to as inoculation. Additionally, throughout history people have inoculated others using scabs, but this did not involve a vaccine.) The proposal here is that, when providing an explanation for how someone's epistemic immunity came about, the factors we point to as relevant can be more or less social in nature. Factors that are more social, contingent, or which we collectively have more control over are more fitting of the term 'inoculation'.
Epistemic inoculation, then, occurs when one’s epistemic immunity is produced as a result of social (including cultural and political) processes. Since so much of what we learn is passed down through institutions and culture, there’s a lot of inoculation about for us to study. Inoculating audiences against unhealthy beliefs might require directly outlawing expressions of support for certain kinds of ideas or ideologies (e.g. Nazism), or it might mean creating strong protections for free speech to prevent censorship by ill-intentioned government officials. Different immunity-promoting policies are likely to have different trade-offs between different epistemic (and moral) goods.
Given epistemic health is about much more than just beliefs, we can inoculate against other things too. We can inoculate against certain attitudes towards objects, such as trusting particular sources. If someone has reported poorly-sourced stories, you're going to be much more likely to dismiss future stories from them, even if those seem better-supported. Agents can be inoculated against making certain inferences: teaching fallacies in a critical thinking unit aims at exactly this. It is also possible to inoculate against ideas, positions, or sources that have certain qualities. A proposal might be too utopian, too absolutist, or even too complicated, which might be relevant when public understandability is important. We can also inoculate against certain attitudes or dispositions, such as arrogance, dismissiveness, or being uncharitable.
Armed with these concepts, we may be tempted to go out and start improving other people's epistemic health, inoculating them against epistemic activities we find unhealthy. While this is a valuable goal, there are some risks.
Agents can also become immune to healthy beliefs, inferences, authorities, and ideologies. This can happen intentionally, such as when agents become immune to trusting climate scientists through exposure to arguments from faux 'scientists' (or scientists without relevant expertise) who have been hired to distort the facts. But agents can also be inoculated by well-intentioned agents who make hyperbolic claims that are later exposed as exaggerations. This gives us good reason to be careful with our pronouncements and to temper our claims on complex social issues.
There is one particular inoculating effect worth drawing attention to, which one might not notice without first possessing these concepts. In critical thinking classes, philosophical research, media op-eds, and everyday conversations, it is common to point to conspiracy theorists, Nazis, homeopathy supporters, and creationists as people with poor epistemic health. Call these people ‘The Usual Suspects’. While these agents can be useful for getting a point across, overreliance on them can produce an unfortunate inoculating effect.
To improve our epistemic health, we need to first identify what good epistemic health looks like. But many people's concepts are formed by being trained on a set of examples, from which they learn to notice certain features and then generalise. The worry is that continually marking out The Usual Suspects as the clearest examples of poor epistemic health has bad effects because they are not, in fact, representative of the most common problems that lead to poor epistemic health, and do not look like the kinds of unhealthy activities most people are at most risk of engaging in. Because The Usual Suspects hold beliefs that most of us and our audiences were never at much risk of adopting in the first place, this leads audiences to think that the kinds of mental habits and inferences which result in poor epistemic health are obvious, or stem from abnormal personal defects to which we are rarely susceptible. Treating The Usual Suspects as the most central examples, or as the kinds of things we most need to train ourselves to recognise, risks reducing the extent to which we examine our own epistemic health, learn to identify moments where we may be being led astray, and teach others to do the same. The result is thinking that the kinds of mental habits used by one's own 'team' are epistemically acceptable even when they stem from unhealthy dispositions to, e.g., ignore large bodies of research or rely on straw man arguments.
One of the risks of this inoculating effect is simply that we fail to believe what we have good reason to believe, or misidentify what bad epistemic health looks like. But a deeper worry concerns the attitude or default affective tone with which people end up approaching various debates. Giving too many examples of poor inferences that aren’t tempting for us to make can lead us to think that we’re immune to such temptation, or that maintaining good epistemic health is easy. Alternatively, it creates an impression that epistemically healthy views are almost always extremely well-supported by lots of evidence whose strength is self-evident, while false or unhealthy views are not.
In short, the risk we are warning against is accidentally inoculating ourselves by 'Othering' poor epistemic habits and poor epistemic health: conceptualising poor epistemic habits as properties that belong to other social groups, in a way that sets them apart from us. Which unhealthy habits we are most at risk of falling into is a complex topic that we cannot take up here. But the hope is that by noticing the ways in which our habits are degrading our collective epistemic health, we can begin taking steps to improve it.
The paper: https://link.springer.com/article/10.1007/s11098-023-01993-9