Elanor Taylor (Johns Hopkins University), "Explanation and the Right to Explanation"
Forthcoming, Journal of the American Philosophical Association
In the early days of my Spotify account I used the platform to listen to low-key electronic music while working. One day, seemingly out of the blue, it recommended an angsty song that I had loved to the point of obsession at fifteen but had not heard since my teens. I felt uncomfortably known by the algorithm. On what basis was this recommendation made? What did it know about me?
I now know that Spotify was recommending a huge amount of music based on a range of markers, including my age and tendency towards the melancholic, on the off chance that something would stick. There was nothing particularly insightful about this recommendation beyond its coincidental salience to me. But this experience of being blindsided by the apparent insight of an automated decision-maker, and left wondering about the bases of its decisions, is widely shared. Such systems play an increasingly pervasive role in our lives. Often their decisions are not so easy to make sense of after the fact, and they are far more important than what to listen to next.
In light of growing concerns about the widespread use and opacity of automated decision-making, the General Data Protection Regulation (GDPR) came into force in 2018 to regulate the use of this technology in the European Union. According to some interpretations, the GDPR secures a “right to explanation”: if an automated decision-making system makes a decision that affects you, then you have a right to an explanation of that decision. Neither the law itself nor much of the commentary surrounding it offered a clear definition of “explanation.” Philosophers of science, however, have done much useful, sophisticated work on this question. I thought that some of that insight could be usefully applied here, and would go some way towards illuminating what the right to explanation is a right to.
I began by looking at the normative motivations for access to explanations: why do we want explanations of these decisions? The answer tells us what an explanation must be like if it is to meet those motivations. In sources such as recent legislation and public and academic commentary I found a range of motivations, including promoting transparency, evaluating decision-making systems for fairness, protecting individual autonomy, and evaluating decision-making systems for their capacity to generate broader social harm. However, to genuinely meet these needs, in many cases the explanation would have to identify a reason why the decision was made, and not merely a factor that made a difference to it. Because of the opacity of many automated decision-making systems, and of the organisations that use them, the prospects for extracting reason-based explanations from many automated systems are bleak. In light of this, it seems that the right to explanation asks for something impossible to provide.
Whether such explanations are genuinely intractable from an engineering perspective is a question best answered by engineers. But at present a range of sources of opacity blocks our access to such reason-giving explanations, from algorithmic opacity to a lack of transparency on the part of organisations that use decision-making technology. This led me to think about what we do in other contexts in which opacity blocks our ability to ask for reasons. In such situations we often turn to the outcomes of a person’s decisions and actions, rather than their reasons, as a target for normative evaluation. For example, when a person refuses to engage with you in a reasonable way, eventually it makes sense to stop asking them why they act as they do, and to focus instead on how their actions affect you. Kate Manne recommends a similar shift when thinking about misogyny. [1] Instead of reflecting on what a misogynist’s reasons might be, Manne suggests considering the impact of their actions on women. The opacity evident in the current use of automated decision-making indicates that a similar shift is reasonable: from asking for explanations of decisions in order to understand and normatively evaluate them, to considering their individual and social impact instead.
Much time has passed since the GDPR came into force, and in public policy we are currently seeing a move away from explainability as a normative target and towards the evaluation of outcomes. For example, the EU’s proposed Artificial Intelligence Act focuses on outcomes, as does New York City’s Local Law 144, which regulates the use of automated decision-making in hiring and human resources. [2] This paper can be understood as offering a philosophical basis for that shift. In the future we may be able to demand reason-based explanations from automated systems and the organisations that use them. But unless and until we reach that point, outcomes are the best locus for the normative evaluation of automated decision-making. As I put it in the paper,
This shift of focus is in the spirit of a political philosophy that recognizes that harms are easily generated by well-intentioned systems and takes unjust outcomes as sufficient to warrant political response without requiring evidence of unjust intent… If automated decision making consistently, say, privileges the rich, promotes racist judicial policies, hampers social mobility, and breaks down information channels essential to democratic communication, then, on this line of thought, that is enough to be working with, without having to also scrutinize an opaque system for the source of the decisions that generated such harms.
[1] Manne, Kate. (2017) Down Girl: The Logic of Misogyny. New York: Oxford University Press.
[2] https://artificialintelligenceact.eu/; https://www.natlawreview.com/article/nyc-dcwp-proposes-rules-to-implement-new-law-governing-automated-employment-decision