Investigation of Increased Mapping Quality Generated by a Neural Network for Camera-LiDAR Sensor Fusion

University essay from KTH/Mekatronik

Abstract: The aim of this study was to investigate the mapping part of Simultaneous Localisation And Mapping (SLAM) in indoor environments containing error sources relevant to two types of sensors. The sensors used were an Intel RealSense depth camera and an RPLIDAR Light Detection And Ranging (LiDAR) sensor. Both cameras and LiDARs are frequently used as exteroceptive sensors in SLAM. Cameras typically struggle with strong light in the environment, while LiDARs struggle with reflective surfaces. This study therefore investigated the possibility of using a neural network to detect errors in either sensor’s data caused by these error sources. The network identified which sensor produced erroneous data, and the sensor fusion algorithm momentarily excluded that sensor’s data, thereby improving the mapping quality when possible. The quantitative results showed no significant difference in the measured mean squared error and structural similarity between the final maps generated with and without the network, when compared to the ground truth. However, the qualitative analysis showed some advantages of using the network: many of the camera’s errors were filtered out, which led to a more accurate continuous mapping than without the network implemented. The conclusion was that a neural network can, to a limited extent, recognise errors in the sensors’ data, but only the camera data benefited from the proposed solution. The study also produced important findings from the implementation, which are presented. Future work recommendations include neural network optimisation, sensor selection, and sensor fusion implementation.
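The gating mechanism the abstract describes, where a classifier flags the faulty sensor and the fusion step momentarily drops that sensor's data, can be sketched as follows. This is a minimal illustration, not the thesis implementation: the `classify_fault` threshold rule is a hypothetical stand-in for the actual neural network, and per-beam range scans plus simple averaging are assumed as the fusion model.

```python
from enum import Enum

class Faulty(Enum):
    NONE = 0
    CAMERA = 1
    LIDAR = 2

def classify_fault(camera_scan, lidar_scan):
    # Stand-in for the neural network: flag a sensor whose readings look
    # implausible (e.g. strong light washing out depth-camera returns,
    # or reflective surfaces producing out-of-range LiDAR distances).
    # The thresholds here are illustrative assumptions only.
    if any(r <= 0.0 for r in camera_scan):
        return Faulty.CAMERA
    if any(r > 12.0 for r in lidar_scan):
        return Faulty.LIDAR
    return Faulty.NONE

def fuse(camera_scan, lidar_scan):
    # Momentarily exclude the flagged sensor's data for this frame,
    # as the abstract describes; otherwise fuse both scans.
    fault = classify_fault(camera_scan, lidar_scan)
    if fault is Faulty.CAMERA:
        return list(lidar_scan)
    if fault is Faulty.LIDAR:
        return list(camera_scan)
    # Both sensors trusted: average per-beam ranges as a simple fusion rule.
    return [(c + l) / 2 for c, l in zip(camera_scan, lidar_scan)]
```

In this sketch the exclusion is per frame, so a sensor rejoins the fusion as soon as its data is no longer flagged, matching the "momentarily excluded" behaviour described above.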
