Visual Bird's-Eye View Object Detection for Autonomous Driving

University essay from Linköpings universitet/Datorseende

Abstract: In the field of autonomous driving, a common scenario is to apply deep learning models to camera feeds to provide information about the surroundings. A recent trend is for such vision-based methods to be centralized, in that they fuse images from all cameras in one large model to produce a single comprehensive output. Designing and tuning such models is hard and time-consuming, in both development and training. This thesis aims to reproduce the results of a paper about a centralized vision-based model performing 3D object detection, called BEVDet. Additional goals are to ablate the technique of class-balanced grouping and sampling used in the model, to tune the model to improve generalization, and to replace the detection head of the model with a Transformer decoder-based head.

The findings include a successful reproduction of the results of the paper, while adding depth supervision to BEVDet establishes a baseline for the subsequent experiments. A validation loss that increases during most of the training indicates that there is room for improvement in the generalization of the model. Several different methods are tested in order to resolve the increasing validation loss, but all of them fail to do so. The ablation study shows that the class-balanced grouping is important for the performance of the chosen configuration of the model, while the class-balanced sampling does not contribute significantly. Without extensive tuning, the replacement head gives performance similar to PETR, the model the head is adapted from, but fails to match the performance of the baseline model. In addition, the model with the Transformer decoder-based head shows a converging validation loss, unlike the baseline model.
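To make the ablated technique concrete: class-balanced sampling re-weights how often training samples containing rare object classes are drawn, so that frequent classes such as cars do not dominate training. The following is a minimal sketch in Python, not code from the thesis; the function name, smoothing term, and data layout are illustrative assumptions.

    import random
    from collections import Counter

    def class_balanced_sample_weights(sample_classes, smoothing=1.0):
        """Compute one sampling weight per training sample so that samples
        containing rare classes are drawn more often.

        sample_classes: list of sets, the object classes annotated in each sample.
        """
        # Count how often each class appears across the whole dataset.
        counts = Counter(c for classes in sample_classes for c in classes)
        total = sum(counts.values())
        # Inverse-frequency weight per class; smoothing avoids extreme ratios.
        class_weight = {c: total / (n + smoothing) for c, n in counts.items()}
        # A sample's weight is the largest weight among the classes it contains,
        # so any scene with a rare class is up-sampled.
        return [max((class_weight[c] for c in classes), default=1.0)
                for classes in sample_classes]

    # Illustrative usage: "car" is common, "bicycle" is rare, so the third
    # sample is drawn with a higher probability than the others.
    data = [{"car"}, {"car"}, {"car", "bicycle"}, {"car"}]
    weights = class_balanced_sample_weights(data)
    batch = random.choices(range(len(data)), weights=weights, k=2)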
