Attribute Embedding for Variational Auto-Encoders: Regularization Derived from Triplet Loss

University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

Abstract: Techniques for imposing structure on the latent space of neural networks have seen much development in recent years. Clustering techniques used for classification have been applied with great success, and with this work we hope to bridge the gap between contrastive losses and generative models. We introduce an embedding loss derived from triplet loss to show that attributes and information can be clustered in specific dimensions of the latent space of Variational Auto-Encoders. This allows control over the embedded attributes through manipulation of those latent space dimensions. This work also takes steps towards enabling arbitrary data augmentation when applying triplet loss to Variational Auto-Encoders. Three Variational Auto-Encoders are trained on three different datasets to embed information in three different ways using this novel method. Our results show the method working to varying degrees depending on the implementation and the information embedded. Two experiments using image data and one using waveform audio show that the method is modality invariant.
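The core idea described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration (the function name, NumPy implementation, and hyperparameters are ours, not taken from the thesis): a standard triplet margin loss is computed only over the latent dimensions designated for a given attribute, and would be added as a regularizer to the usual VAE objective (reconstruction plus KL divergence).

```python
import numpy as np

def triplet_embedding_loss(z_anchor, z_positive, z_negative, dims, margin=1.0):
    """Triplet loss restricted to chosen latent dimensions (illustrative sketch).

    z_anchor, z_positive, z_negative: latent vectors of shape (batch, latent_dim),
        where positive shares the attribute with the anchor and negative does not.
    dims: indices of the latent dimensions in which the attribute should cluster.
    margin: desired separation between positive and negative pairs.
    """
    # Slice out only the dimensions reserved for the attribute.
    a = z_anchor[:, dims]
    p = z_positive[:, dims]
    n = z_negative[:, dims]
    # Euclidean distances within the attribute subspace.
    d_pos = np.linalg.norm(a - p, axis=1)  # distance to same-attribute sample
    d_neg = np.linalg.norm(a - n, axis=1)  # distance to different-attribute sample
    # Standard triplet margin loss, averaged over the batch.
    return float(np.mean(np.maximum(d_pos - d_neg + margin, 0.0)))

# The total training objective would then take the hypothetical form:
#   loss = reconstruction_loss + beta * kl_divergence
#          + lam * triplet_embedding_loss(z_a, z_p, z_n, dims)
```

Restricting the loss to a chosen subset of dimensions is what allows an attribute to be manipulated later by editing only those coordinates of the latent code.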
