Using a Deep Generative Model to Generate and Manipulate 3D Object Representation

University essay from KTH/School of Electrical Engineering and Computer Science (EECS)

Abstract: The increasing importance of 3D data in domains such as computer vision, robotics, medical analysis, augmented reality, and virtual reality has sparked considerable research interest in generating 3D data with deep generative models. The central challenge is to build generative models that synthesize diverse and realistic 3D object representations while remaining controllable, so that the shape attributes of generated objects can be manipulated. This thesis explores 3D Generative Adversarial Networks (GANs) for generating indoor object shapes represented as point clouds, with a focus on shape editing. Leveraging insights from 2D semantic face editing, the thesis proposes extending the InterFaceGAN framework to a 3D GAN model in order to discover the relationship between latent codes and the semantic attributes of generated shapes. Using this relationship, we successfully perform controllable shape editing by manipulating the GAN's latent code.
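The latent-code manipulation described above can be illustrated with a minimal sketch of InterFaceGAN-style editing: find a direction in latent space that separates codes with and without a semantic attribute, then move a latent code along that direction before decoding. InterFaceGAN fits a linear SVM to obtain the separating hyperplane's normal; the mean-difference direction below is a simplified stand-in for that step, and all function names are illustrative, not from the thesis.

```python
import numpy as np

def find_attribute_direction(latents, labels):
    """Estimate a unit direction in latent space for a binary attribute.

    Simplified stand-in for InterFaceGAN's SVM-fitted hyperplane normal:
    the normalized difference between the class means of the latent codes.
    """
    pos = latents[labels == 1].mean(axis=0)
    neg = latents[labels == 0].mean(axis=0)
    n = pos - neg
    return n / np.linalg.norm(n)

def edit_latent(z, n, alpha):
    """Apply InterFaceGAN's editing rule z' = z + alpha * n.

    Positive alpha pushes the generated shape toward the attribute;
    negative alpha pushes it away. The edited code is then fed to the
    (pretrained, frozen) GAN generator to produce the edited shape.
    """
    return z + alpha * n

# Toy demonstration with synthetic latent codes.
rng = np.random.default_rng(0)
latents = rng.normal(size=(200, 128))
labels = (latents[:, 0] > 0).astype(int)  # pretend dim 0 encodes the attribute

n = find_attribute_direction(latents, labels)
z = rng.normal(size=128)
z_edited = edit_latent(z, n, alpha=2.0)
```

Because the direction is unit-norm, the edit moves the latent code's projection onto the attribute axis by exactly alpha, which is what makes the edit strength interpretable.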
