Manipulation Action Recognition and Reconstruction using a Deep Scene Graph Network

University essay from Högskolan i Halmstad/Akademin för informationsteknologi

Authors: Dawid Ejdeholm; Jacob Harsten [2020]


Abstract: Convolutional neural networks have been used successfully for action recognition but are usually restricted to operating on Euclidean data, such as images. In recent years, research has increasingly been devoted to finding generalized models that operate on non-Euclidean data (e.g., graphs), and manipulation action recognition on graphs is still a very novel subject. In this thesis, a novel graph-based deep neural network is developed for predicting manipulation actions and reconstructing graphs from a lower-dimensional representation. The network is trained on two manipulation action datasets, using their respective previous works on action prediction as baselines. In addition, a modular perception pipeline is developed that takes RGBD images as input and outputs a scene graph, consisting of objects and their spatial relations, which can then be fed to the network for online action prediction. The network outperforms both baselines when trained for action prediction, and it achieves comparable results when trained end-to-end, performing action prediction and graph reconstruction simultaneously. Furthermore, to test the scalability of our model, the network is evaluated on input graphs produced by our scene graph generator, where the subject performs 7 different demonstrations of the learned action types in a new scene context with novel objects.
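To make the setup in the abstract concrete, the sketch below shows one plausible way to represent a scene graph (objects as node feature vectors, spatial relations as an adjacency matrix) and pass it through a small message-passing network that jointly predicts an action class and reconstructs the adjacency from a pooled graph latent. This is a minimal illustrative assumption, not the thesis implementation; all class names, layer sizes, and the relation encoding are hypothetical.

```python
# Hypothetical sketch of a scene-graph action/reconstruction network
# (not the thesis code): nodes are detected objects, edges are spatial
# relations, and a pooled latent drives both prediction heads.
import torch
import torch.nn as nn


class SceneGraphNet(nn.Module):
    def __init__(self, node_dim=16, hidden_dim=32, num_actions=8, num_nodes=6):
        super().__init__()
        self.embed = nn.Linear(node_dim, hidden_dim)       # initial node embedding
        self.msg = nn.Linear(hidden_dim, hidden_dim)       # message transform
        self.update = nn.GRUCell(hidden_dim, hidden_dim)   # node state update
        self.action_head = nn.Linear(hidden_dim, num_actions)
        # decoder: reconstruct the flattened adjacency from the graph latent
        self.recon_head = nn.Linear(hidden_dim, num_nodes * num_nodes)
        self.num_nodes = num_nodes

    def forward(self, x, adj):
        # x:   (num_nodes, node_dim)   one feature vector per detected object
        # adj: (num_nodes, num_nodes)  spatial-relation adjacency (0/1)
        h = torch.relu(self.embed(x))
        for _ in range(2):                        # two rounds of message passing
            m = adj @ torch.relu(self.msg(h))     # aggregate neighbor messages
            h = self.update(m, h)
        g = h.mean(dim=0)                         # pooled graph-level latent
        action_logits = self.action_head(g)
        adj_recon = self.recon_head(g).view(self.num_nodes, self.num_nodes)
        return action_logits, adj_recon


# Toy usage: 6 objects with random features and random spatial relations.
net = SceneGraphNet()
x = torch.randn(6, 16)
adj = (torch.rand(6, 6) > 0.5).float()
logits, adj_hat = net(x, adj)
```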
