Structural Comparison of Data Representations Obtained from Deep Learning Models

University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

Abstract: In representation learning we are interested in how data is represented by different models. Representations from different models are often compared by training a new model on a downstream task using the representations and testing its performance. However, this method is not always applicable, and it gives limited insight into the representations themselves. In this thesis, we compare natural-image representations from classification models and the generative model BigGAN using two other approaches. The first approach compares the geometric clustering of the representations; the second tests whether the pairwise similarities between images are consistent across models. All models are large pre-trained models trained on ImageNet, and the representations are taken from intermediate layers of the neural networks. A variety of experiments are performed using these approaches. One of the main results of this thesis is that the representations of different classes are geometrically separated in all models. The experiments also show that there is no significant geometric difference between representations of training data and representations of validation data. Additionally, the similarity between representations from different models was found to be approximately the same between the classification models AlexNet and ResNet as between the classification models and the BigGAN generator; all of these were, in turn, approximately equally similar to the class embedding of the BigGAN generator. Along with the experimental results, this thesis also provides several suggestions for future work in representation learning, since a large number of research questions were explored.
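The second approach described in the abstract, comparing whether two models agree on the pairwise similarities between images, can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: it assumes each model's representations are given as a NumPy matrix with one row per image, uses cosine similarity for the pairwise comparison, and correlates the resulting similarity matrices; all function names and the toy data are illustrative.

```python
import numpy as np

def pairwise_similarity(features):
    # Cosine similarity between every pair of image representations.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    return normed @ normed.T

def representation_agreement(feats_a, feats_b):
    # Correlate the upper triangles of the two similarity matrices:
    # a high correlation means the two models rank image pairs as
    # similar/dissimilar in roughly the same way.
    sim_a = pairwise_similarity(feats_a)
    sim_b = pairwise_similarity(feats_b)
    iu = np.triu_indices_from(sim_a, k=1)
    return np.corrcoef(sim_a[iu], sim_b[iu])[0, 1]

# Toy example: random "representations" from two hypothetical models,
# where model B is a noisy copy of model A, so agreement should be high.
rng = np.random.default_rng(0)
feats_a = rng.standard_normal((10, 64))
feats_b = feats_a + 0.1 * rng.standard_normal((10, 64))
print(representation_agreement(feats_a, feats_b))
```

In practice the feature matrices would come from intermediate layers of the pre-trained networks (e.g., AlexNet, ResNet, or the BigGAN generator), and the choice of similarity measure and correlation statistic is a design decision the thesis itself would specify.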
