Probabilistic Feature Learning Using Gaussian Process Auto-Encoders

University essay from Uppsala universitet/Avdelningen för systemteknik

Author: Simon Olofsson; [2016]


Abstract: The focus of this report is the problem of probabilistic dimensionality reduction and feature learning from high-dimensional data (images). Extracting features from, and learning from, high-dimensional sensory data is an important ability in a general-purpose intelligent system. Dimensionality reduction and feature learning have in the past primarily been done using (convolutional) neural networks or linear mappings, e.g. principal component analysis. However, these methods do not provide error bars on the features or predictions. This report presents theory and a model for dimensionality reduction and feature learning using Gaussian process auto-encoders (GP-AEs). With GP-AEs, the variance in the feature space is computed, yielding a measure of the uncertainty in the constructed model. This measure is useful for avoiding over-confident system predictions. Results show that GP-AEs are capable of dimensionality reduction and feature learning, but that they suffer from scalability issues and weak gradient-signal propagation. Reconstruction quality does not match that of state-of-the-art methods, and training the model takes a very long time. The model nevertheless has potential, since it can scale to large inputs.
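To illustrate the idea of an auto-encoder whose decoder also returns uncertainty, the sketch below builds a toy "GP auto-encoder" from two separate GP regressions. This is not the report's actual model; scikit-learn's GaussianProcessRegressor, the PCA initialisation of the latent space, and the digits dataset are all assumptions chosen only for illustration. The point of the sketch is the last line: the decoder's predictive standard deviation plays the role of the error bars that the abstract says neural-network or PCA approaches lack.

```python
# Minimal sketch (assumed setup, not the report's model): a toy "GP auto-encoder"
# made of two GP regressions. The encoder GP maps images to a 2-D latent space,
# the decoder GP maps latents back to image space, and the decoder's predictive
# standard deviation gives an uncertainty estimate for each reconstruction.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Small subset of 8x8 digit images (64-dimensional inputs), scaled to [0, 1].
X = load_digits().data[:200] / 16.0

# Initialise the 2-D latent coordinates with PCA (a common initialisation
# for GP latent-variable models).
Z = PCA(n_components=2).fit_transform(X)

# "Encoder" GP: high-dimensional image -> latent coordinates.
encoder = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4)
encoder.fit(X, Z)

# "Decoder" GP: latent coordinates -> reconstructed image.
decoder = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4)
decoder.fit(Z, X)

# Encode and decode a held-out image; the decoder also returns a predictive
# standard deviation, i.e. the kind of error bar the abstract refers to.
x_new = load_digits().data[200:201] / 16.0
z_new = encoder.predict(x_new)
x_rec, x_std = decoder.predict(z_new, return_std=True)

print("mean reconstruction error:", np.abs(x_rec - x_new).mean())
print("mean predictive std (uncertainty):", np.mean(x_std))
```

In this sketch the encoder and decoder are trained independently on a fixed PCA latent space; the report's GP-AE instead learns the latent representation jointly, which is also where the scalability and gradient-propagation difficulties mentioned in the abstract arise.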
