Who is responsible if an AI system gives a wrong diagnosis? Analysis of the EU liability law framework of medical AI

University essay from Lunds universitet/Juridiska institutionen; Lunds universitet/Juridiska fakulteten

Author: Magdalena Rietzler; [2022]

Keywords: Law and Political Science;

Abstract: AI systems are no longer science fiction; they are part of our daily lives. In the healthcare sector, medical AI systems are used to monitor patients, compare X-rays to detect diseases, or even make a diagnosis. These systems make the work of doctors and nurses more efficient and help healthcare providers ensure the best service for their patients. Alongside these benefits, however, such new technologies bring never-before-seen challenges. The media reports, for example, on cyberattacks and data leaks that can lead to data theft. But what happens when not only data is stolen, but a medical AI system gives a wrong diagnosis that leads to the wrong treatment? Or who is liable if an AI system discriminates, preferring white over black patients? These questions have been discussed in the EU and its member states for years, and the first guidelines and legal frameworks have been presented to tackle the issues raised by AI. This thesis analyses the current legal framework of the EU, together with German legislation as an example of national law, to determine whether the current liability framework is sufficient to address these new issues. Whereas fundamental rights and the GDPR have effective safeguards in place to address liability issues of AI systems, the Product Liability Directive does not cover these systems sufficiently. The European Commission is aware of this, however, and has already conducted a public consultation on a revision of the directive. Furthermore, this work examines whether the AI Act and the European Parliament's resolution on civil liability for AI can close the gaps. Both proposals follow a risk-based approach; however, the AI Act does not contain liability rules but instead introduces obligations and requirements for high-risk AI systems to make them safe. This framework is a good starting point for tackling the challenges that arise from the use of AI systems.