Social Biases in Language Models : Gender Stereotypes in GPT-3 Generated Stories

University essay from Uppsala universitet/Institutionen för informationsteknologi

Author: Beatrice Lorentzen (2022)


Abstract: The Generative Pre-trained Transformer 3 (GPT-3) is a language prediction model developed by OpenAI that can interpret and generate human language and code. The aim of this study was to assess whether GPT-3 reproduces gender biases in generated short stories. 900 stories were generated using GPT-3's API with the "Davinci" engine. Gender stereotypes about female, male, and gender-neutral characters were examined using lists of gender-connoted traits, professions, and hobbies, as well as physical features. The results indicate that GPT-3 reproduces some seemingly benign gender biases in its generated short stories.
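The analysis described in the abstract, matching generated stories against lists of gender-connoted words, could be sketched roughly as follows. This is a minimal illustration only: the trait lists below are hypothetical placeholders, not the study's actual word lists, and the matching procedure is an assumption about how such a comparison might be implemented.

```python
from collections import Counter
import re

# Placeholder word lists; the study's actual lists of gender-connoted
# traits, professions, hobbies, and physical features are not reproduced here.
FEMININE_TRAITS = {"gentle", "caring", "beautiful"}
MASCULINE_TRAITS = {"strong", "brave", "ambitious"}

def count_traits(story: str) -> Counter:
    """Count feminine- and masculine-connoted words in one generated story."""
    words = re.findall(r"[a-z']+", story.lower())
    counts = Counter()
    for w in words:
        if w in FEMININE_TRAITS:
            counts["feminine"] += 1
        elif w in MASCULINE_TRAITS:
            counts["masculine"] += 1
    return counts

story = "She was gentle and caring, while he was strong and brave."
print(count_traits(story))  # Counter({'feminine': 2, 'masculine': 2})
```

Aggregating such counts over all 900 stories, grouped by the character's assigned gender, would then allow the kind of stereotype comparison the abstract reports.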
