Towards Latent Space Disentanglement of Variational AutoEncoders for Language

University essay from Uppsala universitet/Institutionen för lingvistik och filologi

Abstract: Variational autoencoders (VAEs) are a neural network architecture widely used in image generation (Doersch 2016). A VAE encodes data from some domain and projects it into a latent space; in doing so, the encoding space goes from being a discrete distribution of vectors to a series of continuous manifolds. The latent space is subject to a Gaussian prior, which gives the space some convenient properties for the distribution of those manifolds. Several strategies have been presented to try to disentangle this latent space, that is, to force each of its dimensions to carry an interpretable meaning, for example, ...
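The abstract does not name the specific disentanglement strategies the essay covers, but a widely cited one is the β-VAE (Higgins et al. 2017), which re-weights the KL term of the VAE objective to push the posterior toward the factorized Gaussian prior. The sketch below is a minimal, illustrative PyTorch implementation of that idea; the class name, layer sizes, and β value are assumptions for illustration, not taken from the essay.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    """Minimal VAE with a beta-weighted KL term (illustrative sizes)."""

    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I),
        # keeps sampling differentiable with respect to mu and logvar.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    # Reconstruction term of the ELBO (Bernoulli likelihood over pixels here).
    recon = F.binary_cross_entropy_with_logits(x_recon, x, reduction="sum")
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # beta > 1 up-weights the KL term, pressuring latent dimensions toward
    # the factorized Gaussian prior and hence toward disentanglement.
    return recon + beta * kl

# Usage sketch on dummy data (e.g. flattened 28x28 images in [0, 1]):
x = torch.rand(8, 784)
model = BetaVAE()
x_recon, mu, logvar = model(x)
loss = beta_vae_loss(x, x_recon, mu, logvar)
```

With β = 1 this reduces to the standard VAE objective; larger β trades reconstruction quality for a more factorized, and hence potentially more interpretable, latent space.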
