Technology and the Value of Trust: Can Society Trust AI?

University essay from Linköpings universitet/Institutionen för kultur och samhälle

Author: Dominika Janus; [2022]

Keywords: AI; trust; reliance;

Abstract: Ensuring "public trust" in AI appears to be a priority for policymakers and the private sector. The expectation is that without public trust such innovations cannot be implemented legitimately, and that there is a risk of public backlash or resistance (as in the cases of Cambridge Analytica, predictive policing, or Clearview AI). A rich body of research on public trust in data use suggests that "building public trust" too often places the burden on the public to be "more trusting" and does little to address other concerns, including whether trust is a desirable and attainable characteristic of the human-AI relationship. I argue that there is good reason for the public not to trust AI, especially in the absence of regulatory structures that afford genuine accountability, but that AI can nevertheless be considered reliable. To that end, the main argument of this paper is twofold: 1. We are asked to trust an entity that cannot enter a trust relationship, because it does not fulfil the conditions spelled out by definitions of trust. 2. We are presented with a misdescription of the agent: those we in fact trust are the developers or policymakers. I also argue that the term "reliance" should be used instead of "trust", as it is, by definition, a better fit for current AI applications. Additionally, the focus should be on framing trust as part of the practices expected from AI solution providers, developers, and regulators.