Mixed Precision Quantization for Computer Vision Tasks in Autonomous Driving

University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

Abstract: Quantization of neural networks is a popular technique for adapting computation-intensive Deep Learning applications to edge devices. In this work, low-bit mixed precision quantization of an FPN-ResNet18 model trained for semantic segmentation is explored on the Cityscapes and Arriver datasets. The Hessian information of each layer is used to determine that layer's bit precision; in some experiments the per-layer bit precision is instead assigned randomly. The networks are quantization-aware trained with bit-width combinations of 2, 4 and 8 bits. The results on both Cityscapes and Arriver show that quantization-aware trained networks using the low-bit mixed precision technique perform on par with 8-bit quantization-aware trained networks, while segmentation performance degrades when the network activations are quantized below 8 bits. It was also found that using the Hessian information had little effect on the network's performance.
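The abstract describes a Hessian-aware bit-allocation scheme (in the spirit of HAWQ-style methods): layers whose loss surface has larger curvature are considered more sensitive and are given more bits. The thesis code is not reproduced here; the following is a minimal PyTorch sketch of that general idea, using a Hutchinson trace estimator and an illustrative ranking rule. All names (layer_hessian_traces, assign_bits, the toy model and data) are assumptions for illustration only.

    # Minimal sketch, not the thesis implementation: estimate each layer's
    # Hessian trace with Hutchinson's method, then rank layers by sensitivity
    # and spread them across the candidate bit-widths (2, 4, 8).
    import torch
    import torch.nn as nn

    def layer_hessian_traces(loss, params_per_layer, n_samples=8):
        """Hutchinson estimate of the Hessian trace for each layer's parameters."""
        traces = []
        for params in params_per_layer:
            grads = torch.autograd.grad(loss, params, create_graph=True)
            trace = 0.0
            for _ in range(n_samples):
                # Rademacher probe vectors (+1 / -1), one per parameter tensor
                vs = [torch.randint_like(p, high=2) * 2.0 - 1.0 for p in params]
                gv = sum((g * v).sum() for g, v in zip(grads, vs))
                hvs = torch.autograd.grad(gv, params, retain_graph=True)
                trace += sum((hv * v).sum().item() for hv, v in zip(hvs, vs))
            traces.append(trace / n_samples)
        return traces

    def assign_bits(traces, bit_choices=(2, 4, 8)):
        """Rank layers by estimated sensitivity; least sensitive get the fewest bits."""
        order = sorted(range(len(traces)), key=lambda i: traces[i])
        bits = [None] * len(traces)
        chunk = max(1, len(traces) // len(bit_choices))
        for rank, idx in enumerate(order):
            bits[idx] = bit_choices[min(rank // chunk, len(bit_choices) - 1)]
        return bits

    # Toy usage on a small segmentation-style model (illustrative only):
    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 2, 1))
    x, y = torch.randn(4, 3, 32, 32), torch.randint(0, 2, (4, 32, 32))
    loss = nn.CrossEntropyLoss()(model(x), y)
    layers = [list(m.parameters()) for m in model if isinstance(m, nn.Conv2d)]
    print(assign_bits(layer_hessian_traces(loss, layers)))

In a full quantization-aware training setup, the bit-widths produced this way would be fed to fake-quantization modules during training; how exactly the ranking is mapped to bits (and whether it beats a random assignment, as the abstract reports) is an experimental choice.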
