Knowledge Distillation for Semantic Segmentation and Autonomous Driving: A study on the influence of hyperparameters, initialization of a student network and the distillation method on the semantic segmentation of urban scenes.

University essay from KTH/School of Electrical Engineering and Computer Science (EECS)

Abstract: Reducing the size of a neural network whilst maintaining comparable performance is an important problem, since the resource constraints of small devices make it impossible to deploy large models in numerous real-life scenarios. A prominent example is autonomous driving, where computer vision tasks such as object detection and semantic segmentation need to be performed in real time on mobile devices. In this thesis, the knowledge distillation and spherical knowledge distillation techniques are used to train a small model (PSPNet50) under the supervision of a large model (PSPNet101) to perform semantic segmentation of urban scenes. The influence of the distillation hyperparameters, namely the temperature and the weights of the loss function, on the performance of the distilled model is studied first, showing no decisive advantage over training the student individually. Thereafter, distillation is performed with a pretrained student, revealing a clear improvement in performance. Contrary to expectations, the pretrained student benefits from a high learning rate when training resumes under distillation, especially in the spherical knowledge distillation case, displaying superior and more stable performance compared to the regular knowledge distillation setting. These findings are validated by several experiments conducted on the Cityscapes dataset. The best distilled model achieves 87.287% pixel accuracy and a 42.0% mean Intersection-over-Union (mIoU) on the validation set, higher than the 86.356% pixel accuracy and 39.6% mIoU obtained by the baseline student. On the test set, the official evaluation obtained by submission to the Cityscapes website yields 42.213% mIoU for the distilled model and 41.085% for the baseline student.
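For orientation, the sketch below illustrates how a distillation loss of the kind studied here (a temperature-softened soft-target term weighted against the ordinary hard-label term) is commonly implemented in PyTorch. It is a minimal illustration, not the thesis's exact formulation: the temperature T, the weight alpha, and the ignore_index value for unlabeled Cityscapes pixels are assumed placeholder settings.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        # Hard term: standard cross-entropy against ground-truth segmentation labels.
        # ignore_index=255 is the usual Cityscapes convention for void pixels (assumed here).
        hard_loss = F.cross_entropy(student_logits, labels, ignore_index=255)

        # Soft term: KL divergence between temperature-softened student and teacher
        # distributions, scaled by T^2 so gradient magnitudes stay comparable across T.
        soft_loss = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)

        # alpha balances the two terms; it is one of the hyperparameters whose
        # influence the thesis investigates.
        return alpha * soft_loss + (1.0 - alpha) * hard_loss

Here student_logits and teacher_logits would be the per-pixel class scores produced by PSPNet50 and PSPNet101 respectively, with the teacher's outputs detached from the computation graph during training.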
