Methods for Developing Tiny Convolutional Neural Networks for Deployment on Embedded Systems

University essay from Uppsala universitet/Institutionen för informationsteknologi

Author: Egemen Yiğit Kömürcü; [2023]


Abstract: With recent developments in deep learning, computationally heavy tasks such as object detection in images have become easier to compute and faster to execute on powerful GPUs. Moreover, sufficiently large models handle these everyday tasks with greater accuracy. However, such models are not widely deployed, since embedded devices have computational and memory limitations. This project therefore explores different methods for compressing these large networks into smaller ones, so that lighter models can be deployed on embedded devices without losing too much of the large models' accuracy. The work starts by training the baseline model, ResNet50, which is available in Tobii's deep learning framework developed for their eye-tracking glasses and devices. Then, lighter modern architectures such as ResNet18, MobileNetV2, and SqueezeNet are trained to reach reasonable accuracy with less model complexity. Three model compression techniques are applied in this thesis. The first is distillation, which is used to increase the accuracy of the smaller networks. The second is to experiment with various pruning methods, such as threshold, layer, and channel pruning, to decrease the size of the SqueezeNet model. Since a quantized SqueezeNet model is not available in PyTorch, quantization from 32-bit to 8-bit is applied to the model to further decrease its complexity. Finally, this thesis finds that the 100 MB baseline model, with an overall loss of 0.77 pixels (below 1 pixel), can be reduced to a layer-pruned, quantized SqueezeNet of less than 1 MB whose overall loss of 1.44 pixels does not exceed 1.5 pixels.
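The abstract does not give the distillation objective itself, so the following is only a minimal PyTorch sketch of one common formulation, assuming a regression setting (the reported losses are in pixels): the student is trained on a weighted sum of its error against the ground-truth targets and its error against the frozen teacher's predictions. The function name, the MSE choice, and the alpha weighting are illustrative assumptions, not the thesis's actual implementation.

import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, targets, alpha=0.5):
    # Hypothetical sketch: both terms use MSE because the task appears
    # to be gaze regression (losses are reported in pixels).
    # Hard term: student predictions against the ground-truth targets.
    hard = F.mse_loss(student_out, targets)
    # Soft term: student mimics the frozen teacher (e.g. the ResNet50
    # baseline); detach() keeps gradients out of the teacher.
    soft = F.mse_loss(student_out, teacher_out.detach())
    return alpha * soft + (1.0 - alpha) * hard

During training, teacher_out would come from a forward pass of the frozen teacher on the same batch as the student.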
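As an illustration of the pruning step, here is a minimal sketch using PyTorch's torch.nn.utils.prune utilities on a torchvision SqueezeNet: magnitude (threshold-style) pruning inside each convolution, followed by structured channel pruning. The 30% and 20% amounts are placeholder assumptions; the thesis's threshold, layer, and channel pruning may be implemented differently.

import torch
import torch.nn.utils.prune as prune
import torchvision

model = torchvision.models.squeezenet1_1(weights="DEFAULT")

for module in model.modules():
    if isinstance(module, torch.nn.Conv2d):
        # Magnitude (threshold-style) pruning: zero the 30% of weights
        # with the smallest L1 magnitude in this layer.
        prune.l1_unstructured(module, name="weight", amount=0.3)
        # Channel pruning: zero 20% of the output channels, chosen by
        # L2 norm along dim 0 (structured pruning).
        prune.ln_structured(module, name="weight", amount=0.2, n=2, dim=0)
        # Fold the accumulated masks back into the weight tensor.
        prune.remove(module, "weight")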
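Likewise, a minimal sketch of 32-bit to 8-bit post-training static quantization, here using PyTorch's FX graph mode API (which handles SqueezeNet's functional torch.cat in the Fire modules automatically). The "fbgemm" backend and the random calibration tensors are assumptions for illustration; the thesis presumably calibrated on real eye-tracking data and may have used a different quantization workflow.

import torch
import torchvision
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

float_model = torchvision.models.squeezenet1_1(weights="DEFAULT").eval()
example_inputs = (torch.randn(1, 3, 224, 224),)

# Insert observers according to the default int8 config for x86.
qconfig_mapping = get_default_qconfig_mapping("fbgemm")
prepared = prepare_fx(float_model, qconfig_mapping, example_inputs)

# Calibrate the observers with representative data (random tensors
# here stand in for real images).
with torch.no_grad():
    for _ in range(10):
        prepared(torch.randn(1, 3, 224, 224))

# Convert: float32 weights and activations become int8.
quantized = convert_fx(prepared)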
