Deep Ensembles for Self-Training in NLP

University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

Abstract: With the development of deep learning methods, the need for access to large amounts of data has increased. In this study, we have looked at methods for leveraging unlabeled data while only having access to small amounts of labeled data, which is common in real-world scenarios. We have investigated a method called self-training for leveraging unlabeled data when training a model. It works by training a teacher model on the labeled data, which then labels the unlabeled data for a student model to train on. A popular method in machine learning is ensembling, which improves on a single model by combining multiple models. With previous studies mainly focusing on self-training with image data and showing that ensembles can successfully be used for images, we wanted to see if the same applies to text data. We mainly focused on investigating how ensembles can be used as teachers for training a single student model. This was done by creating different ensemble models and comparing them against the individual members of the ensemble. The results showed that ensembles do not necessarily improve the accuracy of the student model over a single model, but in certain cases, when used correctly, they can provide benefits. We found that, depending on the dataset, bagging BERT models can perform on par with or better than a larger BERT model, and this advantage carries over to the student model. Bagging multiple smaller models also has the benefit of being easier to scale and more computationally efficient to train than scaling up a single model.
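The teacher-student loop described in the abstract (an ensemble teacher pseudo-labels the unlabeled data, and a single student trains on the labeled plus pseudo-labeled data) can be sketched in a few lines. The sketch below is illustrative only: it substitutes scikit-learn decision trees on synthetic data for the BERT models and text datasets used in the thesis, and all names and hyperparameters are assumptions rather than the thesis setup.

```python
# Minimal sketch of self-training with a bagged ensemble as the teacher.
# Decision trees on synthetic data stand in for the BERT models on text
# used in the thesis; the split sizes and hyperparameters are arbitrary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Small labeled set, large "unlabeled" set (labels discarded), held-out test set.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_lab, X_unlab, y_lab, _ = train_test_split(X_rest, y_rest, train_size=200, random_state=0)

# Teacher: a bagging ensemble, i.e. several base models trained on bootstrap samples.
teacher = BaggingClassifier(n_estimators=10, random_state=0).fit(X_lab, y_lab)

# The teacher assigns pseudo-labels to the unlabeled data.
pseudo_labels = teacher.predict(X_unlab)

# Student: a single model trained on the labeled plus pseudo-labeled data.
X_student = np.concatenate([X_lab, X_unlab])
y_student = np.concatenate([y_lab, pseudo_labels])
student = DecisionTreeClassifier(random_state=0).fit(X_student, y_student)

print("teacher accuracy:", accuracy_score(y_test, teacher.predict(X_test)))
print("student accuracy:", accuracy_score(y_test, student.predict(X_test)))
```

In this framing, comparing the student above against one trained with a single (non-ensemble) teacher corresponds to the comparison the thesis investigates.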
