New Work in Philosophy
C. Thi Nguyen (University of Utah), "Transparency is Surveillance"
Philosophy and Phenomenological Research, 2022
Onora O’Neill offers a biting criticism of our desire for transparency. People think trust and transparency go together, she says, but they’re actually deeply in tension.
Transparency can encourage people to be less honest so increasing deception and reducing reasons for trust: those who know that everything they say or write is to be made public may massage the truth. Public reports may underplay sensitive information; head teachers and employers may write blandly uninformative reports and references; evasive and uninformative statements may substitute for truth-telling. (73)
O’Neill’s discussion here, from her 2002 BBC Reith Lectures on trust, strikes me as deeply illuminating. But her argument seems to have vanished without a trace. The standing presumption right now seems to be: the more transparency, the better. But O’Neill’s insight is vital. My goal, in “Transparency is Surveillance”, was to expand on O’Neill’s brief sketch, to clarify and defend her argument — and to make the worry even worse.
Here’s one way to put her worry. When we institute a system of transparency, we’re asking people to make their actions, reasons, and accomplishments public — to open up their choices to observation and evaluation by the outside world. This would be fine if the outside world were a perfectly good judge — if the observers and evaluators could adequately recognize good reasoning, good actions, and good outcomes. And sometimes, when matters are simple enough, that might be the case.
But so much of the time, institutional actors have some kind of special sensitivity or understanding. They are experts. They are scientists, doctors, teachers, art historians, economists. We need local politicians who know the ins and outs of their particular bureaucracy, the particular tangle of worries in their city. We need charity workers who know all the intricacies of their particular turf, be it famine relief, educational reform, or immigration law. But in many cases of transparency, we’re asking experts to explain themselves to non-experts. And by definition, much of an expert’s actual reasoning is inaccessible to non-experts.
So what happens? O’Neill suggests one possibility: experts, knowing that their actual reasoning will be misunderstood, make up fake reasons for public consumption. They will deceive. And we, the public, should know this is happening — should know that the recorded reasons will not be the real ones. Transparency systematically incentivizes deception, she says, and so undermines trust.
But I think something even worse can happen. In many cases, experts may be unwilling or unable to deceive. The monitoring systems of transparency may be sufficiently intrusive, or the experts may have so thoroughly internalized the demand for transparency, that they won’t just make up fake justifications. In that case, experts will confine themselves to acting on reasons that are publicly accessible. And then transparency will undermine the expertise itself. It will force the expert to act on non-expert reasons, and in so doing deprive the expert of much of the power of their expertise. This is the paradox of transparency: in many cases, the reason we need experts in the first place is that they understand what we do not. But then we place those experts under a system of transparency, and so bind them to acting on reasons that the inexpert public can understand. Transparency is a species of surveillance, one that forces experts to act under the gaze of a distant, insensitive, contextless eye.
This often takes the form of forcing experts to take on a goal imposed by the public, often in the form of a simple metric. Take Charity Navigator, a watchdog organization set up to provide some kind of oversight for non-profit charities — to rank them in terms of efficiency. For years, the chief ingredient in Charity Navigator’s ranking system was a throughput rating: a ratio of how much money was donated to the charity vs. how much money was getting through to the other side and reaching the target population. Charity Navigator’s goal is to guide our donations towards efficient charities, and away from inefficient ones — ones that spend too much money internally. But, as so many people in the charity sector protested, the throughput rating didn’t actually capture effective charities. It forced charities to compete with each other on the rankings by reducing internal overhead — by spending less on their internal infrastructure and their employees. But beyond a certain point, reducing internal overhead makes charities less functional. Those charities can’t hire the right specialists, can’t pay employees enough. (Imagine if the functional units in your own organization were ranked strictly on how little they spent internally.) The point is that the throughput rating looks pretty good to the casual eye. I’ll admit, I used it for years. But it only looked good to me, as an evaluative standard, because I knew almost nothing about charities — and I didn’t know what I didn’t know. To many experts in the terrain, throughput is clearly a bad metric for judging charities. But Charity Navigator quickly became a major force in directing charitable giving, and so charities have been forced to focus on their throughput rating, and to take “reducing overhead” as a primary reason for action. To the degree that this system of transparency binds their actions, it forces them to chase the goals that sound good to the public, rather than the goals they understand to be really important.
The pain of transparency is obvious when you’re the one being surveilled. For years, I was in charge of assessment for my philosophy department. This involved finding some supposedly objective way to report, to the state legislature and the public at large, our students’ successful outcomes. But it was extremely hard to actually measure, using publicly accessible methodologies, what we actually cared about. Philosophy professors often care about some pretty intangible stuff: instilling in students intellectual humility, reflectiveness, and curiosity. I think I can tell when I’ve made some progress along those lines, but then, I’ve had two decades of experience teaching philosophy to students. I can’t simply tell my overseers: “Trust me, we’re doing great!” I need to find some assessment methodology whose application is comprehensible to outsiders, and open to their inspection. State legislators often demand justification in a much simpler, publicly accessible currency: graduation rate, graduation speed, post-graduate employment rate and income.
We’re caught here between two pressures. On the one hand, experts see what non-experts do not: they often have a better sense of what’s important in the terrain, and they are often better at seeing when what’s really important has been achieved. And they have a sense for how those values interact with the complex practical realities of their space. On the other hand, when experts are operating in their domain of expertise, they are operating beyond the rest of our comprehension. They can lie, cheat, swindle; they can operate out of internalized biases and bad norms. But when we demand that experts operate in our sight, we deprive them of the capacity to act out of their special understanding, attunement, and experience.
Transparency asks us to operate according to principles and policies that we can make explicit to the public. But real expertise involves going beyond such explicit principles. As Hubert and Stuart Dreyfus put it: it is the novice who operates on clear, explicit principles. As we become more expert, as we gain in experience, we often make intuitive leaps — complexities become perceptually obvious. We start synthesizing vast tracts of experience in ways that are impossible to put in explicit form.
Transparency always sounds great when you’re on the side of the external evaluator. But when you’re under the gun — when you’re trying to find some way to report your own actions, in your own expert turf, to a rushed and mostly uncaring public — then it is painfully obvious how much the metrics miss.
The point here isn’t to argue for some essential superiority of some distinct class of elites over the lowly masses. The point is that most of us are inexpert about most things. The world is so dense, so complex, so full of weird corners to know about. Every one of us is an expert in our own tiny patch, but a mere novice in the rest. In the aggregate, the public is inexpert about most things. So when you create a system of transparency to the public over experts, then insofar as that system influences the experts’ actions, it undermines their expertise.
Here’s another way to approach the same problem, without the emphasis on technical expertise. A lot of us believe in standpoint epistemology. That’s the idea that people who have lived under a particular form of oppression have a special sensitivity to some key parts of the world relevant to their oppression. You understand something special about certain sectors of the world if you’ve lived under racism or homophobia. Disabled people, for example, can have better insight into how the built environment can be unthinkingly hostile. But the basic tenets of standpoint epistemology are in conflict with the demand for public transparency. Standpoint epistemology says: your particular place in the world grants you some special understanding or insight. But public transparency says: you must explain yourself to everybody — including lots of people without that special understanding.
I watched a version of this play out on a college campus, when a university-funded LGBTQ+ support group had to, in the name of transparency, justify its budgetary requests to university administrators who were, by and large, not LGBTQ+. Many of the budgetary requests centered on maintaining a special campus center to serve as a “safe space”. But the university administrators didn’t understand the value of queer safe spaces; they saw such spaces as discriminatory. Being queer, in a homophobic part of the country, can give you an immediate understanding of the value of safe spaces, an understanding that may not be as accessible to the general public. The weird thing here is that modern progressives are often committed to two things: to standpoint epistemology, and to total transparency. But, it turns out, standpoint and transparency are deeply at odds.
The point here is not to get rid of transparency. Transparency is deeply valuable. Transparency really does help us fight corruption and get rid of bias. The point is to show that transparency is not an unalloyed good. It comes with a cost. So we shouldn’t simply barrel ahead and try to maximize transparency. Transparency is powerful, but rough, medicine. Transparency attacks the kind of judgment that is intuitive and synthetic — the kind we can’t explain to the public at large. Intuitive, synthetic judgments are, indeed, where bias lives and corruption thrives. But intuitive, synthetic judgments are also where expertise lives and sensitivity thrives. Transparency gets rid of both. As with any kind of surveillance, it is worth deploying when the stakes are high and corruption likely — but it must be deployed with care.
The problem with transparency arises from a deeper problem: the problem of hyperspecialization. As Elijah Millgram puts it, the basic fact of our era is that there is far too much knowledge for one human head. There is too much science; there is too much world. So we have to trust each other, to depend on each other to go where we cannot. But trust makes us vulnerable. Vulnerability, as Annette Baier says, is the essential feature of trust. When we trust people, we place some part of ourselves in their care, for them to act beyond our sight, in areas we cannot understand.
Transparency trades trust for monitoring. Trust makes us powerful because it lets us take advantage of expert understandings that go beyond our own; it lets us rely on special sensitivities that we ourselves do not possess. But this makes us vulnerable. Transparency is desirable because it promises to secure us against that vulnerability, by forcing experts back into our sight, by leashing their actions to what we non-experts can understand. But in doing so, it also undermines much of the power of trust. Transparency robs us of the special fruits of expertise.
Expertise and sensitivity are distributed across the various people in the community. Some of us understand the science of vaccines; others of us understand the value of experimental cinema or the particular musical value in the Atlanta trap scene; others of us are sensitive to the pains of growing up poor in America. To cope with the complexity of the world, we need a profound division of cognitive labor — and a division of moral labor, and a division of valuing labor. And to promote the full flowering of that division, we need to trust each other. We certainly need to check on each other — but the check is there as a safeguard in the process of developing better relationships of trust, not as a replacement for trust.