Hannah Carnegy-Arbuthnott (University of York), "Privacy, Publicity, and the Right to Be Forgotten"
The Journal of Political Philosophy, 2023
We’ve all done things we regret. It used to be possible to comfort ourselves with the thought that our misadventures would soon be forgotten. In the digital age, however, not only is more of our personal information captured and recorded, but search engines can also serve up previously long-forgotten information at the click of a button.
Does Google violate your privacy when it serves up results pointing to embarrassing information from your past? Should search engines be required to remove such outdated links from the search results page associated with your name?
The GDPR places just such a requirement on search engine operators when the personal data in question is deemed outdated. This regulation is controversial. On the one hand, we don’t tend to think that information should be deleted from sources like newspaper archives or public records after a certain time has passed, at least when there was a robust enough public interest case for it to be recorded and published in the first place (though there are, of course, notable exceptions for some types of record, such as juvenile criminal records). And if we think the information should remain in the archives, it seems like a double standard to erase it from the index that allows us to navigate those archives.
On the other hand, there is something disturbing about the ease with which search engines now resurface information that would otherwise have faded from view, especially given that it is now common practice to use search engines to make assessments about people before we meet them, in contexts including recruitment, admissions, and dating. And it is plausible to think that we have some claims against search engines when they prominently display outdated results on our search profiles.
In my paper, "Privacy, Publicity, and the Right to Be Forgotten," I argue that what we have reason to object to in such cases is not the fact that the information in question is thrust into the public domain. After all, it may have been available to the public via the original source all along. Instead, what’s objectionable is the way the information is presented.
So how do search engines like Google present information to us? Google doesn’t provide a neutral index of the internet, analogous to a library catalogue. Instead, it ranks search results by relevance for any given search term. Moreover, it tailors these results to each individual user, depending on what it knows about why you’re using a particular search term.
When it comes to searches of a person’s name, the search engine essentially curates a public profile of that person, one tailored to the purposes of each searcher. When Google serves up results, the message it implicitly conveys to the searcher is: here are the most relevant pieces of information by which to judge your prospective employee, applicant, or date (insert whatever reason you might have for Googling another person).
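To make the contrast with a neutral library catalogue concrete, here is a minimal, purely illustrative sketch of personalized ranking. It is a toy model, not Google’s actual algorithm: the fields, weights, and the inferred searcher_interest signal are all invented for the example. The point it illustrates is that one and the same underlying index can yield differently ordered "profiles" depending on what the engine guesses about the searcher’s purpose.

```python
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    topic: str      # e.g. "finance", "sport", "news"
    age_years: int  # age of the underlying page

def rank(results, searcher_interest):
    """Order results by a made-up score: pages matching the searcher's
    inferred interest get a fixed boost, and newer pages score slightly
    higher. Nothing here tracks whether it is still fair to hold the
    person accountable for the information."""
    def score(r):
        boost = 1.0 if r.topic == searcher_interest else 0.0
        recency = 1.0 / (1 + r.age_years)
        return boost + recency
    return sorted(results, key=score, reverse=True)

pages = [
    Result("Debt foreclosure notice", "finance", 20),
    Result("Marathon finisher list", "sport", 1),
    Result("Interview about a new job", "news", 2),
]

# A prospective lender and a running-club friend searching the same
# name see differently ordered profiles.
print([r.title for r in rank(pages, "finance")])  # foreclosure notice first
print([r.title for r in rank(pages, "sport")])    # marathon result first
```

On this toy model, a decades-old page can be pushed to the top of someone’s profile precisely because the engine guesses it is relevant to the searcher’s purpose. That framing effect is what the next point targets.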
When a search engine serves up outdated information about you while framing it as relevant for the purposes of the searcher, I suggest that this involves a distortion of your public profile. The distortion consists in presenting information from your past in a way that suggests it is appropriate to hold you accountable for it, when it is no longer appropriate to do so.
I argue that the claims we have against such practices are best understood as claims against distortion, rather than claims to privacy. Claims against distortion are a category of claims that sit alongside claims to privacy and claims against defamation.
All of these claims play a part in protecting a general interest we have in being able to present ourselves to others on our own terms. Claims to privacy serve this interest by allowing you to choose how much of your personal information you share, and who you share it with. Claims against defamation provide protection against others making untrue derogatory statements about you. Claims against distortion, as I define them, are claims against true information being presented in a way that suggests it is appropriate to hold someone accountable for it, when it is not appropriate to do so.
Explaining our objection to the way search engines dredge up outdated information in terms of claims against distortion rather than privacy helps to explain a quirk of the central case that established the right to be forgotten later codified in the GDPR: the 2014 Google Spain ruling. That case involved a man who had defaulted on a debt decades earlier. He objected to the newspaper announcement of the resulting foreclosure appearing at the top of the results when his name was Googled. The court ruled that the links to the foreclosure announcement should be removed from his Google search profile. However, the landmark case was widely reported in the press, with the result that the information the plaintiff wanted suppressed became even more widely publicised (a phenomenon known as the Streisand Effect). Now, when you Google the original plaintiff’s name, the top results are links to newspaper articles about the case, which include the crucial information that he defaulted on his debt.
If we assess this particular case from the perspective of the plaintiff’s privacy rights, we should conclude that the court ruling was counterproductive, given the press attention it garnered. However, I argue that the plaintiff’s general interest in self-presentation has been better protected by the ruling, despite the Streisand Effect. While the information about the foreclosure is still served up on his search profile, it is now presented in the context of news articles which also explain that it is officially considered outdated. As such, it carries the implicit message that it would no longer be appropriate to hold him accountable for that debt default.
Recognizing claims against distortion expands our toolkit for defending a range of data protection provisions in cases where claims against privacy do not straightforwardly apply, and widens the scope of the debate about how much control individuals should have over how their personal information is presented in public.
In trying to ascertain what counts as distortion, one complication that arises is how to assess what it is appropriate to hold people accountable for. One way to do this is by reference to existing norms of accountability. But of course, those norms shift over time, and the very existence of search engines has already caused a seismic shift in them. What’s needed, then, is careful discussion of what our norms of accountability should look like, and of how to foster a culture in which we are not stiflingly shackled to our past mistakes.
We should also pay attention to the power dynamics at play. We have reason to worry when a handful of corporations play an outsize role in shaping our norms.
Adding claims against distortion to our conceptual toolkit provides a path to defending and implementing measures that give individuals some control over their personal information, even when, from the perspective of privacy, the cat is already out of the bag.