Managing Responsibility & AI in Healthcare
Abstract: The increased use of artificial intelligence (AI) in a multitude of fields brings many benefits but also managerial and ethical challenges. One such challenge concerns how we attribute and distribute responsibility when artificial intelligence decision support systems (AI-DSS) are part of human decision-making, a problem known as the “responsibility gap”. This paper explores which factors can facilitate managerial learning regarding responsibility when AI-DSS are introduced in clinical healthcare, and thus aims to address the research gap on this matter. An overview of the current literature is provided on AI, responsibility, and organizational learning, including legal and ethical frameworks that have been established or proposed to guide behavior. Based on semi-structured interviews with experts on AI, ethics, and regulation, as well as healthcare professionals, the researchers present findings summarized in five factors that can facilitate managerial learning on the issue of responsibility when introducing AI-DSS in healthcare: diverse and cross-functional teams, critical and up-to-date assessments of legal and ethical frameworks, comprehensive cost-benefit analyses, education, and meaningful human control.