Attitudes to decision-making under risk supported by artificial intelligence and humans: Perceived risk, reliability and acceptance
Abstract: The purpose of this investigation was to explore how decision situations with varying degrees of perceived risk affect people’s attitudes towards human and artificial intelligence (AI) decision-making support. While previous studies have focused on the trust, fairness, reliability and fear associated with artificial intelligence, robots and algorithms in relation to decision support, the risk inherent in the decision situation itself has been largely ignored. An online survey with a mixed-methods approach was conducted to investigate AI and human decision support in risky situations. Two scenarios were presented to the survey participants. In the scenario where the perceived situational risk was low, selecting a restaurant, participants expressed a positive attitude towards relying on and accepting recommendations provided by an AI. In contrast, in the perceived high-risk scenario, purchasing a home, participants expressed an equal reluctance to rely on or accept both AI and human recommendations. The limitations of this investigation relate primarily to the challenge of establishing a common understanding of concepts such as AI, and to a relatively homogeneous survey group. The implication of this study is that AI may currently be best applied to situations perceived as low risk if the intention is to convince people to rely on and accept AI recommendations, and, should AI become autonomous in the future, to accept its decisions.