ROS-based implementation of a model car with a LiDAR and camera setup

University essay from Uppsala universitet/Signaler och system

Abstract: The aim of this project is to implement a Radio Controlled (RC) car with a Light Detection and Ranging (LiDAR) sensor and a stereoscopic camera setup based on the Robot Operating System (ROS) to conduct Simultaneous Localization and Mapping (SLAM). The LiDAR sensor used is a 2D LiDAR, the RPlidar A1, and the stereoscopic camera setup consists of two monocular cameras, Raspberry Pi Camera v2. The sensors were mounted on the RC car and connected via two Raspberry Pi microcomputers. The 2D LiDAR sensor was used for two-dimensional mapping, and the stereo vision from the camera setup for three-dimensional mapping. The RC car movement information (odometry) needed for SLAM was derived from either the LiDAR data or the data from the stereoscopic camera setup. The two SLAM approaches were applied both separately and together to map an office space, using the Real-Time Appearance-Based Mapping (RTAB-Map) package in the open-source ROS. The mapping results indicated that the RPlidar A1 provided a precise map, but had difficulty when mapping large circular paths, where odometry drift caused the current mapping to mismatch the earlier mapping of the same positions, and when localizing during quick turns. The camera setup captured more information about the surroundings and showed more robust odometry. However, it performed poorly at visual loop closure, i.e., the current mapping did not match the earlier mapping of previously visited positions.
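As context for the three-dimensional mapping from two monocular cameras, the following is a minimal sketch of the standard rectified stereo depth relation Z = f·B/d that such a setup relies on. The function name and the numeric values (focal length, baseline, disparity) are illustrative assumptions, not values from the essay.

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from a rectified stereo pair: Z = f * B / d.

    focal_px     -- camera focal length in pixels (assumed value below)
    baseline_m   -- distance between the two camera centres in metres
    disparity_px -- horizontal pixel offset of the point between the images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px


# Illustrative numbers only: f = 500 px, B = 0.06 m, d = 10 px gives Z = 3.0 m.
depth = stereo_depth(500.0, 0.06, 10.0)
```

The inverse dependence on disparity is why stereo depth estimates degrade at long range, while a 2D LiDAR keeps near-constant range accuracy, which fits the complementary roles the two sensors play in this project.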
