AI-based Automated Decision Making: An investigative study on how it impacts the rule of law and the case for regulatory safeguards

University essay from Lunds universitet/Rättssociologiska institutionen

Abstract: The development and expansion of artificial intelligence hold significant potential to benefit humanity; at the same time, the risks posed by AI-related tools have become a growing concern over the past decade. From a human rights standpoint, algorithmic bias, discriminatory practices, inadequate data protection, and actual or potential infringements of fundamental rights are among the core concerns surrounding this evolving technology. This research inquiry focuses primarily on the ongoing discourse around AI-based digital surveillance and predictive policing, and assesses the prospective contributions of automated decision-making. The study critically reviews the impact of AI-based technology on policing, law enforcement and the rule of law in a democratic society, and how it may influence broader questions of social justice. It further investigates and critiques the biases alleged to exist within AI-based systems and the deployment practices that have affected certain communities more than others. The study concentrates primarily on Europe and the U.S., with potential ramifications for other countries. Accordingly, the research examines the need for enhanced legal safeguards, i.e., regulatory intervention, which has been a long-standing public demand. The investigation was carried out through a discourse analysis of European and American cases on this topic, supplemented by a content analysis of EU regulatory and legislative provisions and supported by a qualitative mixed-method approach that included interviews with industry practitioners and affected families. This research paper complements current research on the consequences of AI practices involving automated decision-making and contributes to challenging prevailing AI-related industry policies and practices concerning transparency and accountability. It is therefore essential to continually question the EU’s powerful position from an accountability standpoint, including the need for its attention to, and intervention in, the relationship between certain ‘private actors’ (including large multinational technology companies) and state agencies. This is particularly important in the current context, where many public services and functions are increasingly outsourced to and carried out by these same ‘private actors’ using AI tools that remain largely self-regulated.
