Domain Adaptation Of Front View Synthetic Point Clouds Using GANs For Autonomous Driving

University essay from KTH/Väg- och spårfordon samt konceptuell fordonsdesign

Abstract: Perception of the environment is one of the main enablers of autonomous driving and relies on camera, RADAR, and LiDAR sensors. The deep learning algorithms used in perception need vast amounts of labeled, high-quality data, which is costly to obtain for LiDAR sensors. Simulated data is easier to generate but does not reflect the complexity of real data. To overcome this, domain adaptation can be used to translate data from the simulated to the real domain. In this thesis, CycleGAN, a Generative Adversarial Network, is used to learn the domain adaptation between real LiDAR data, collected with a race vehicle of the Technical University of Munich as part of the Autonomous Challenge @ CES, and the corresponding race simulated in the Unreal Engine. The front-view representation of the LiDAR point clouds is chosen in order to use all information available in the dataset. The resulting models perform unsatisfactorily: the domain adaptation in both directions learns the general differences between the two datasets but fails to produce point clouds that can be recognized as samples of the target domain. Further work is needed in this field, as the problem of sparse and costly real LiDAR data remains.
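To illustrate the method in brief: the core of CycleGAN is a cycle-consistency loss, under which a sample translated to the other domain and back should reconstruct the original (Zhu et al., 2017). The sketch below shows only this cycle term in PyTorch; the generator names G_sim2real and G_real2sim and the batch variables are illustrative placeholders rather than identifiers from the thesis, and the full CycleGAN objective additionally contains adversarial losses for both discriminators.

    import torch.nn.functional as F

    def cycle_consistency_loss(G_sim2real, G_real2sim,
                               sim_batch, real_batch, lam=10.0):
        # Translate each batch to the other domain and back again.
        sim_cycled = G_real2sim(G_sim2real(sim_batch))
        real_cycled = G_sim2real(G_real2sim(real_batch))
        # Penalize the L1 reconstruction error in both directions,
        # weighted by lambda (10.0 in the original CycleGAN paper).
        return lam * (F.l1_loss(sim_cycled, sim_batch) +
                      F.l1_loss(real_cycled, real_batch))

The front-view representation mentioned above is commonly obtained by spherically projecting the point cloud onto a dense range image, so that image-to-image translation architectures such as CycleGAN become applicable. The following is a minimal sketch of such a projection, assuming an (N, 4) array of (x, y, z, intensity) points and an illustrative 64 x 1024 image with a ±15° vertical field of view; the resolution, field of view, and function name are assumptions, not values taken from the thesis.

    import numpy as np

    def front_view_projection(points, h=64, w=1024,
                              fov_up_deg=15.0, fov_down_deg=-15.0):
        # Split the point cloud into coordinates and intensity.
        x, y, z, intensity = points.T
        r = np.sqrt(x ** 2 + y ** 2 + z ** 2)

        # Spherical angles: yaw around the vertical axis,
        # pitch above/below the horizon.
        yaw = np.arctan2(y, x)
        pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-6), -1.0, 1.0))

        fov_up = np.radians(fov_up_deg)
        fov_down = np.radians(fov_down_deg)

        # Map yaw to image columns and pitch to image rows.
        u = np.clip(np.floor(0.5 * (1.0 - yaw / np.pi) * w),
                    0, w - 1).astype(int)
        v = np.clip(np.floor((fov_up - pitch) / (fov_up - fov_down) * h),
                    0, h - 1).astype(int)

        # Sort far-to-near so that, where several points fall into the
        # same pixel, the closest one is written last and therefore kept.
        order = np.argsort(-r)
        image = np.zeros((h, w, 2), dtype=np.float32)
        image[v[order], u[order], 0] = r[order]
        image[v[order], u[order], 1] = intensity[order]
        return image

The resulting two-channel range image (per-pixel range and intensity) can then be fed to the GAN like an ordinary 2D image, which is what motivates the front-view choice over raw, unordered point sets.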
