Machine-learning and Discrimination: Procedural Challenges of Algorithmic Decision-making

University essay from Lunds universitet/Juridiska institutionen; Lunds universitet/Juridiska fakulteten

Abstract: The emergence of artificial intelligence, especially machine-learning methods, challenges the set of legal guarantees put in place in Europe to combat discrimination and ensure equal treatment. This paper focuses on cases of algorithmic discrimination in the context of recruitment as a business practice. Particularly in the recruitment of workers, a field where EU non-discrimination law has always hardened and evolved, the use of machine-learning algorithms in recruitment processes has triggered a debate on the application of non-discrimination principles in the EU. Beyond the discussion about the applicability of current non-discrimination law to cases of algorithmic discrimination, it is also important to shed light on the procedural challenges of such cases. Algorithms challenge two principles of the system of evidence in EU non-discrimination law. The first is effectiveness: given the inherent opacity of algorithms, the parties do not have easy and unrestricted access to the information they need to support their claims. The second is fairness: algorithmic opacity places unrealistic burdens of proof on claimants as well as on respondents. Discrimination in such cases seems impossible to prove and, consequently, falls outside the scope of EU non-discrimination law. However, through an examination of current principles and case-law of Union law, this paper proposes two possible remedies. Regarding effectiveness, a joint reading of EU non-discrimination law and the GDPR could recognize a right of victims of algorithmic discrimination to access evidence. Regarding fairness, a more proportionate way to allocate the burden of proof is suggested by extending the grounds of defence available to respondents, allowing a respondent to establish that biases were developed autonomously by an algorithm.
All in all, this paper shows that, from a legal point of view, many of the problems posed by algorithmic discrimination reinforce weaknesses and shortcomings that already exist in the legal framework. Nevertheless, changes and adaptations such as those suggested here might help bridge the gap between those who wish for a rapid and broad development of AI, leaving legal protection in its wake, and those who wish for a careful and steady development that gives regulation a chance in the game.
