Mitigating algorithmic bias in Artificial Intelligence systems

University essay from Uppsala universitet/Matematiska institutionen

Abstract: Artificial Intelligence (AI) systems are increasingly used in society to make decisions with direct implications for human lives: credit risk assessments, employment decisions and criminal suspect prediction. As public attention has been drawn to examples of discriminatory and biased AI systems, concerns have been raised about the fairness of these systems. Face recognition systems, in particular, are often trained on non-diverse data sets in which certain groups are underrepresented. The focus of this thesis is to provide insights into the different aspects that are important to consider in order to mitigate algorithmic bias, and to investigate the practical implications of bias in AI systems. To fulfil this objective, qualitative interviews with academics and practitioners in different roles in the field of AI and a quantitative online survey are conducted. A practical scenario covering face recognition and gender bias is also applied in order to understand how people reason about this issue in a practical context. The main conclusion of the study is that, despite high levels of awareness and understanding of challenges and technical solutions, the academics and practitioners showed little or no awareness of the legal aspects of bias in AI systems. The implication of this finding is that AI can be seen as a disruptive technology, where organizations tend to develop their own mitigation tools and frameworks and rely on their own moral judgement and understanding of the area instead of turning to legal authorities.
