Real-time conversion of monodepth visual odometry enhanced network
Abstract: This thesis belongs to the field of self-supervised monocular depth estimation and is a conversion of an earlier work, whose computationally expensive model serves as the baseline from which a lightweight model is derived. The proposed network is suited to deployment on embedded devices such as the NVIDIA Jetson TX2, where short runtime, a small memory footprint, and low power consumption matter most. If those requirements are not met, the model cannot run on embedded processors no matter how high its accuracy, and small mobile platforms such as drones and delivery robots cannot exploit the benefits of deep learning. The proposed network has 29.7× fewer parameters than the baseline model and uses only 10.6 MB for a forward pass, in contrast to the 227 MB used by the baseline network; consequently, it can run on an embedded device's GPU. Finally, it infers depth at promising speed even on standard CPUs while providing accuracy comparable to or higher than that of other works.
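To give a rough sense of what such reductions mean in storage terms, the sketch below converts parameter counts into float32 memory and computes the reduction factor. The concrete parameter counts here are hypothetical placeholders chosen only to illustrate the arithmetic; they are not figures taken from the thesis.

```python
def fp32_megabytes(num_params: int) -> float:
    """Approximate storage (MiB) needed for num_params float32 weights (4 bytes each)."""
    return num_params * 4 / 2**20

def reduction_factor(baseline_params: int, proposed_params: int) -> float:
    """How many times fewer parameters the proposed model has than the baseline."""
    return baseline_params / proposed_params

# Hypothetical counts, used only to demonstrate the calculation.
baseline_params = 14_840_000
proposed_params = 500_000

print(f"baseline weights:  {fp32_megabytes(baseline_params):.1f} MiB")
print(f"proposed weights:  {fp32_megabytes(proposed_params):.1f} MiB")
print(f"reduction factor:  {reduction_factor(baseline_params, proposed_params):.1f}x")
```

Note that a forward pass also allocates memory for intermediate activations, so the figures reported in the abstract (10.6 MB vs. 227 MB) reflect more than weight storage alone.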