Towards Trustworthy AI: A Proposed Set of Design Guidelines for Understandable, Trustworthy and Actionable AI
Abstract: Artificial intelligence is used today in both everyday applications and specialised expert systems. In situations where relying on the output of an AI system carries the risk of negative consequences, it becomes important to understand why the system has produced its output. Previous research in human-computer trust has identified trust antecedents that contribute to the formation of trust in an AI artifact, understanding of the system being one of them. In the context of Pipedrive, a sales management system, this thesis investigates how AI predictions can be designed to be understandable and trustworthy, and by extension which explanatory aspects provide guidance towards actions to take, and which presentation formats support the formation of trust. Using a research-through-design approach, multiple designs for displaying AI predictions are explored for Pipedrive, leading to a proposed set of design guidelines that support the understandability, trustworthiness and actionability of AI. Both the designs and the guidelines have been iteratively developed in collaboration with users and design practitioners.