Eamon Duede (U. of Chicago), "Instruments, agents, and artificial intelligence: novel epistemic categories of reliability"
Synthese, 2022
In order for science to make progress, individual scientists must trust in the reliability not only of one another but also of their various methods and instruments. Recently, scientists have begun to rely on approaches that fall under the rather broad heading of Artificial Intelligence. As with the introduction of any new technology, the use of AI in science presents novel opportunities and challenges. One principal challenge has been determining when a given AI model is trustworthy. When scientists trust in the reliability of other experts, their trust is epistemically justified on the basis of certain principles. The same, of course, is true when they trust their methods and their scientific instruments. Yet the underlying principles that justify that trust are distinct for experts, methods, and instruments. In trying to sort out how scientists might be justified in trusting in the reliability of AI-infused science, we might reasonably expect the underlying justificatory principles to be those of experts, methods, or instruments. Any of the three might seem an attractive option. After all, AIs are variously treated and spoken of as if they just were experts or instruments.
In a recent piece published in Synthese, I explore the principles that justify epistemic trust in the reliability of scientific experts, scientific methods, and scientific instruments. While it's true that today's AIs exhibit characteristics that appear common to each of these categories, I argue in the paper that the relatively familiar epistemic principles that justify belief in the reliability of instruments and experts are not only entirely distinct from one another, but also that scientists' trust in the reliability of AI cannot be grounded in either. At the same time, I want to resist the rather pessimistic conclusion that there are no principles that can secure justified belief in the reliability of AIs. Rather, I see the results of this analysis as an occasion and opportunity for exciting new philosophy.