What’s harm got to do with it? The framing of accountability and harm in the EU Artificial Intelligence Act Proposal

University essay from Lunds universitet/Rättssociologiska institutionen

Abstract: The European Union released an Artificial Intelligence (AI) regulation proposal in April 2021 aimed at laying down harmonised rules for AI circulating on the Union market. The purpose of this study is to critically examine how accountability and individual, collective, and social harm were approached and framed by the proposal. Previous research has shown that AI and algorithmic bias and discrimination are widely recognised concerns, and that there is a pressing need for regulation that protects against various types of harm. The theoretical framework combines Bovens's public accountability theory with insights from the field of critical algorithm studies. The study conducted a critical discourse analysis, as designed by Fairclough, of both the regulation proposal and articles written in response to it, in order to examine the relationship between the text, the discursive practices, and the social practices. The results showed that the regulation proposal contained several empty promises regarding its intent and was commercially minded. The text presented a clear balancing of innovation and development of AI against the protection of fundamental rights, but then failed to deliver the mechanisms needed to uphold these promises. The regulation also made several exceptions, both for groups such as law enforcement and the military and for certain AI systems. Individuals were not involved either prior to or within the regulation, and no protections or rights were established for the public. As such, accountability and individual, collective, and social harms were not sufficiently considered by the regulation proposal.