Predicting the Unpredictable – Using Language Models to Assess Literary Quality

University essay from Uppsala universitet/Institutionen för lingvistik och filologi (Department of Linguistics and Philology)

Abstract: People read for various purposes: to learn specific skills, to acquire foreign languages, or simply to enjoy the reading experience itself. That pure enjoyment may stem from many aspects, such as the aesthetics of language, the beauty of rhyme, and the entertainment of being surprised by what happens next; the last of these is typical of fictional narratives and is the main topic of this project. In other words, “good” fiction may be better at entertaining readers by baffling and eluding their expectations, whereas “normal” narratives may contain more clichés and ready-made sentences that are easy to predict. This project therefore examines whether “good” fiction is less predictable than “normal” fiction, with the two groups operationalized as canonized and non-canonized literature.

Predictability can be statistically reflected by the probability of the next word being correctly predicted given the preceding content, which is measured here with the metric of perplexity. Thanks to recent advances in deep learning, language models based on neural networks with billions of parameters can now be trained on terabytes of text to improve their ability to predict unseen text. Generative pre-trained modeling and text generation are therefore combined to estimate the perplexities of canonized and non-canonized literature.

Because the terabytes of text on which the advanced models were trained may already contain books from the study corpus, two series of models are designed to yield unbiased perplexity results: self-trained models and Generative Pre-trained Transformer 2 (GPT-2) models. Comparing these two groups of results establishes the final hierarchy of five models used in the further experiments.

During perplexity estimation, the perplexity variance is computed as well; it denotes how predictability varies across sequences of a fixed length within each work. Evaluated by the perplexity variance, the homogeneity of the two groups of literature can also be examined.

The results from the five models imply that canonized and non-canonized literature differ in both perplexity values and variances. Moreover, canonized literature shows higher perplexity values and variances in both median and mean, indicating that it is less predictable and less homogeneous than non-canonized literature. Perplexity values and variances obviously cannot define literary quality directly, but they do suggest that perplexity can be an insightful metric for analyzing literary quality with natural language processing techniques.
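For concreteness, perplexity here refers to the standard token-level definition: the exponentiated average negative log-probability that a model p assigns to each token given its preceding context,

    \mathrm{PPL}(w_1,\dots,w_N) = \exp\left( -\frac{1}{N} \sum_{i=1}^{N} \log p(w_i \mid w_1,\dots,w_{i-1}) \right),

so a higher perplexity means the model finds the text harder to predict.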
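As an illustration of this kind of measurement pipeline, the sketch below scores a text with the off-the-shelf GPT-2 model from the Hugging Face transformers library, computing one perplexity per fixed-length token window and then the variance across windows. It is a minimal sketch, not the thesis's actual code; the file name novel.txt and the 512-token window length are illustrative assumptions.

    # Minimal sketch: window-level perplexity of one text under GPT-2,
    # plus the variance of those window perplexities across the text.
    import math
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def window_perplexities(text, window=512):  # window length is an assumption
        """Score consecutive fixed-length token windows of `text`."""
        ids = tokenizer(text, return_tensors="pt").input_ids[0]
        ppls = []
        for start in range(0, len(ids) - 1, window):
            chunk = ids[start:start + window].unsqueeze(0)
            if chunk.size(1) < 2:  # need at least one next-token target
                continue
            with torch.no_grad():
                # With labels == input_ids, the model returns the mean
                # negative log-likelihood of its next-token predictions.
                loss = model(chunk, labels=chunk).loss
            ppls.append(math.exp(loss.item()))
        return ppls

    ppls = window_perplexities(open("novel.txt").read())  # hypothetical input file
    mean_ppl = sum(ppls) / len(ppls)
    var_ppl = sum((p - mean_ppl) ** 2 for p in ppls) / len(ppls)
    print(f"mean perplexity: {mean_ppl:.2f}  variance: {var_ppl:.2f}")

Comparing the distributions of such per-text perplexity means and variances between the canonized and non-canonized corpora is the core of the experiments the abstract describes.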
