Monocular vision-based obstacle avoidance for Micro Aerial Vehicles
Abstract: Micro Aerial Vehicles (MAVs) are gaining attention in numerous applications, as these platforms are inexpensive and can perform complex maneuvers. Moreover, most commercially available MAVs are equipped with a mono-camera. Currently, there is increasing interest in deploying autonomous mono-camera MAVs with obstacle avoidance capabilities in various complex application areas. Some of these areas contain moving as well as stationary obstacles, which makes collision avoidance more challenging. This master thesis set out to investigate the possibility of avoiding moving and stationary obstacles with a single camera as the only sensor gathering information from the surrounding environment.

One concept for performing autonomous obstacle avoidance is to predict the time to near-collision based on a Convolutional Neural Network (CNN) architecture that uses the video feed from a mono-camera. The heading of the MAV is then regulated to maximize the time to collision, resulting in the avoidance maneuver. Another interesting perspective arises when multiple dynamic obstacles in the environment yield multiple time predictions for different parts of the Field of View (FoV); the method then maximizes the time to collision by choosing the part of the FoV with the largest predicted time. However, this is a complicated task, and this thesis provides an overview of it while discussing the challenges and possible future directions. One of the main reasons this approach was not pursued further was that the available data set was not reliable and did not provide enough information for the CNN to produce acceptable predictions.

Moreover, this thesis looks into another approach for avoiding collisions, using the object detection method You Only Look Once (YOLO) with the mono-camera video feed. YOLO is a state-of-the-art network that can detect objects and produce bounding boxes in real time. Because of its high success rate and speed, YOLO was chosen for this thesis.
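The heading-selection rule described above (steer toward the FoV sector with the largest predicted time to collision) can be sketched as follows. This is an illustrative sketch, not code from the thesis; the sector layout and the names `ttc_predictions` and `sector_headings` are assumptions for the example.

```python
def select_heading(ttc_predictions, sector_headings):
    """Pick the heading whose FoV sector has the largest predicted
    time to collision (TTC).

    ttc_predictions  -- per-sector TTC estimates in seconds
    sector_headings  -- heading angle (degrees) for each sector
    """
    # Index of the sector with the maximum predicted time to collision.
    best = max(range(len(ttc_predictions)), key=lambda i: ttc_predictions[i])
    return sector_headings[best]

# Example: three sectors covering left, centre, and right of the FoV.
heading = select_heading([1.2, 3.5, 0.8], [-30.0, 0.0, 30.0])
```

With the example predictions, the centre sector has the largest TTC (3.5 s), so the MAV would keep a heading of 0 degrees.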
When YOLO detects an obstacle, it reports where in the image the object is, i.e., the obstacle's pixel coordinates. By utilizing the image's FoV and trigonometry, the pixel coordinates can be transformed to an angle, assuming the lens does not distort the image. This position information can then be used to avoid obstacles. The method is evaluated in the simulation environment Gazebo and verified experimentally with the commercially available MAV Parrot Bebop 2. The obtained results show the efficiency of the method; in particular, the proposed method is capable of avoiding both dynamic and stationary obstacles. Future work will include the evaluation of this method in more complex environments with multiple dynamic obstacles, for autonomous navigation of a team of MAVs. A video of the experiments can be viewed at: https://youtu.be/g_zL6eVqgVM.
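The pixel-to-angle transformation mentioned above can be illustrated with a short sketch, assuming an undistorted pinhole camera model; the function name and the horizontal-FoV value in the example are assumptions, not parameters taken from the thesis.

```python
import math

def pixel_to_bearing(px, image_width, hfov_deg):
    """Convert a horizontal pixel coordinate to a bearing angle (degrees)
    relative to the camera's optical axis, assuming an undistorted
    pinhole camera so the FoV maps to the image via simple trigonometry."""
    # Focal length in pixels, derived from the horizontal field of view:
    # tan(hfov/2) = (image_width/2) / f.
    f = (image_width / 2) / math.tan(math.radians(hfov_deg) / 2)
    # Horizontal offset of the pixel from the image centre.
    dx = px - image_width / 2
    # Bearing of the ray through this pixel.
    return math.degrees(math.atan2(dx, f))

# A pixel at the image centre lies on the optical axis (bearing 0),
# while a pixel at the right edge lies at half the horizontal FoV.
centre = pixel_to_bearing(428, 856, 80.0)   # 0.0 degrees
edge = pixel_to_bearing(856, 856, 80.0)     # 40.0 degrees
```

Feeding the centre of a YOLO bounding box through such a function yields the obstacle's bearing, which a controller can then steer away from.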