Morgan Thompson (Universität Bielefeld), "Epistemic risk in methodological triangulation: the case of implicit attitudes"
Imagine that you are given a closed wooden box with at least one object inside. You are tasked with identifying whatever is within, but you must not open the box. You can use different senses and methods to inform your guesses. Perhaps you shake the box and listen to the sounds. What types of material make those noises? You might also estimate the weight of the box itself and attempt to determine the weight of what’s inside. Combining evidence from your different experiments may help you narrow the possibilities or even make an educated guess. Call this use of multiple methods to provide better support for your hypothesis ‘triangulation’. Still, one might worry that, for all one's ingenuity in triangulating between such methods, a question remains unanswered: is there a single object inside? It seems that neither any of your various experiments nor any combination of the evidence can answer this question conclusively.
Similar problems arise for social psychologists studying implicit attitudes, as I have argued in my recent article. Psychologists introduced implicit attitudes as part of the explanation of the persistence of injustice despite people’s reports of holding egalitarian beliefs. To do so, they distinguished between prejudicial attitudes that are held unconsciously and produce uncontrollable behaviors (implicit attitudes) and prejudicial attitudes that are consciously held and reported. Because implicit attitudes can be held simultaneously with egalitarian beliefs, they can explain the persistence of unjust disparities.
Research on implicit attitudes flourished over the following decades. Since psychologists could not ask people about their implicit attitudes directly, they needed new ways to measure these attitudes. Indirect measures, such as the Implicit Association Test and the Evaluative Priming Task, were developed to measure positive or negative associations with concepts, often those of social categories (for example, Black people or women). However, this research also revealed substantial uncertainty about how to characterize what was being measured, namely, ‘implicit attitudes’. Most basically, are these attitudes themselves unconscious? Are their effects uncontrollable? How stable are these attitudes over time? Under what conditions should we expect them to vary? And so on. Studies showed that people can predict the direction of their implicit biases, and thus that they have some awareness of their implicit attitudes (Hahn et al. 2014). So it is unlikely that these attitudes are unconscious. As a result, many social psychologists jettisoned some initial assumptions about the nature of implicit attitudes and adopted a broader conception (Feest 2020).
To get a handle on implicit attitudes without first settling these theoretical questions, many social psychologists use triangulation. Perhaps more certainty about indirect measures can help get a purchase on what exactly is being measured. If multiple indirect measures (such as the Implicit Association Test and the Evaluative Priming Task) provide similar scores for the same person or population, then it seems plausible that they are measuring ‘implicit attitudes’. That distinct methods produce similar results seems best explained by the existence of the phenomenon of interest. However, this triangulation reasoning turned out itself to rely on the assumption that these measures are measuring the same thing (namely, implicit attitudes), despite the uncertainty about the major characteristics of these attitudes. Yet it is also possible that distinct methods produce similar results because they provide evidence about different but related phenomena.
How should psychologists proceed? Is triangulation a doomed strategy when there is so much uncertainty about the nature of its target? And do philosophical accounts of triangulation have any normative advice for these psychologists? I think triangulation can still be a useful research strategy for psychologists studying implicit attitudes, though there are limitations. Triangulation under uncertainty may not warrant acceptance of the claim that implicit attitudes exist, nor need it be evaluated by how well it matches the features of paradigmatic cases in the philosophy of science literature.
Instead, I propose that psychologists using triangulation (when uncertain about its target) should consider the epistemic risks in their inferences. Epistemic risks are the risks of error that can arise throughout knowledge practices, or in this case, the practice of triangulation (Biddle and Kukla 2017). There are two particular points at which epistemic risk is relevant in the case of implicit attitudes: (i) inferences about the relevance of data for some hypothesis and (ii) inferences about the acceptance or rejection of a hypothesis (also called ‘inductive risk’). I discuss the first type of epistemic risk at greater length in the paper. The second type of epistemic risk is the risk of error in accepting the claim that implicit attitudes (broadly characterized) exist. It is possible, and consistent with current evidence, that indirect measures do not all measure the same phenomenon, but instead measure slightly different but related phenomena. However, it may simply be impossible to entirely rule out this possibility using our available methods, even after triangulation. We need some standard by which to set a threshold of allowable epistemic risk when accepting the claim that there is a unified phenomenon of implicit attitudes. Such a standard must be sensitive to the context provided by our theoretical, socio-political, and/or ethical concerns about the phenomenon. Psychologists may implicitly set such thresholds in their evaluations of convergent validity, which I discuss more in the article. However, these standards should be set more explicitly.
Here I want to point to more general consequences of thinking about epistemic risk in implicit attitude research. This approach allows us to ask broader questions about the benefits or harms of framing the explanation of persisting prejudice in terms of personal, implicit attitudes. Two particular harms suggest setting a reasonably high threshold for evidence about hypotheses in implicit attitude research before we accept them. First, implicit attitude research has captured a disproportionate share of attention and funding in social psychology. It explains persisting prejudice by appeal to what happens in individuals’ minds. Too easily accepting claims about implicit attitudes has crowded out, and will continue to crowd out, other potential explanations of persisting prejudice. There should be sufficient engagement with, and funding of, research relevant to competing explanations, such as research on the influence of context (Murphy and Walton 2013), on microaggressions (Pierce 1970), on the experiences of those on the receiving end of prejudice (Williams et al. 1997), and on other explanations that emphasize the role of structural oppression. Second, implicit bias has taken on a life of its own outside the academy. Corporate trainings warn employees of the potential impact of their implicit biases during hiring. Psychological tests like the Implicit Association Test are publicly available through Project Implicit. One problem is that the scientific community cannot control the public understanding of what implicit attitudes are, the extent to which they explain prejudice in the world, or what one should infer from one’s results on the Implicit Association Test. To the extent that the goals of implicit attitude research have changed, these changes must be communicated to the public (Byrd and Thompson 2022).
To summarize: given the attention that implicit bias receives, the risk of harm from disseminating false claims to the public is substantial. Because the risks of harm from propagating false claims about implicit attitudes are high, our thresholds for sufficient evidence to accept implicit attitudes as an explanation for persistent prejudice should be high as well.
Biddle, J. B., & Kukla, R. (2017). The geography of epistemic risk. In Exploring Inductive Risk (pp. 215–238).
Byrd, N., & Thompson, M. (2022). Testing for implicit bias: Values, psychometrics, and science communication. Wiley Interdisciplinary Reviews: Cognitive Science, 13(5), e1612.
Feest, U. (2020). Construct validity in psychological tests: The case of implicit social cognition. European Journal for Philosophy of Science, 10(1), 1–24.
Hahn, A., Judd, C. M., Hirsh, H. K., & Blair, I. V. (2014). Awareness of implicit attitudes. Journal of Experimental Psychology: General, 143(3), 1369–1392.
Murphy, M., & Walton, G. (2013). From prejudiced people to prejudiced places: A social-contextualist approach to prejudice. In C. Stangor & C. S. Crandall (Eds.), Stereotyping and prejudice (pp. 181–203). Psychology Press.
Pierce, C. (1970). Offensive mechanisms. In F. B. Barbour (Ed.), The black seventies. Boston, MA: Porter Sargent Publisher.
Williams, D. R., Yu, Y., Jackson, J. S., & Anderson, N. B. (1997). Racial differences in physical and mental health: Socioeconomic status, stress, and discrimination. Journal of Health Psychology, 2(3), 335–351.