Text-Driven Fashion Image Manipulation with GANs: A case study in full-body human image manipulation in fashion

University essay from KTH, School of Electrical Engineering and Computer Science (EECS)

Abstract: Language-based fashion image editing has promising applications in design, sustainability, and art, yet it remains a challenging problem in computer vision and graphics: the diversity of human poses and the complexity of clothing shapes and textures make editing difficult. Inspired by recent progress in editing face images by manipulating latent representations, such as StyleCLIP and HairCLIP, we apply these methods to images of full-body humans in fashion datasets and evaluate their effectiveness. First, we assess different methods for finding a latent representation of an image via Generative Adversarial Network (GAN) inversion; then, we apply three image manipulation schemes. A pre-trained e4e encoder is initially used for inversion, and its results are compared to a more accurate method, Pivotal Tuning Inversion (PTI). Next, we employ an optimization scheme that uses the Contrastive Language-Image Pre-training (CLIP) model to steer the latent representation of an image toward the attributes described in the input text. To improve the accuracy and speed of this process, we incorporate a mapper network. Finally, we propose an optimized mapper, the Text-Driven Garment Editing Mapper (TD-GEM), that achieves high-quality image editing in a disentangled way. Our empirical results show that the proposed method can edit fashion items, changing attributes such as color and sleeve length.
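To make the CLIP-guided optimization step concrete, the following is a minimal sketch in PyTorch of how such a loop is typically set up (in the spirit of StyleCLIP's latent optimization). It is not the thesis's exact implementation: the generator G, the latent w_init from e4e or PTI inversion, the L2 regularizer, and all hyperparameters are illustrative assumptions; only OpenAI's clip package calls are real API.

    # A minimal sketch of CLIP-guided latent optimization (StyleCLIP-style).
    # Assumptions (ours, not the thesis's): `G` is a pre-trained StyleGAN-like
    # generator mapping a latent code w to an image in [-1, 1], and `w_init`
    # comes from e4e or PTI inversion. Hyperparameters are illustrative.
    import torch
    import torch.nn.functional as F
    import clip

    device = "cuda" if torch.cuda.is_available() else "cpu"
    clip_model, _ = clip.load("ViT-B/32", device=device, jit=False)
    clip_model = clip_model.float()

    # CLIP's published input normalization constants.
    CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073],
                             device=device).view(1, 3, 1, 1)
    CLIP_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711],
                            device=device).view(1, 3, 1, 1)

    def clip_guided_edit(G, w_init, prompt, steps=200, lr=0.05, lambda_l2=0.008):
        """Nudge the inverted latent w toward the text prompt in CLIP space."""
        with torch.no_grad():
            text_feat = clip_model.encode_text(clip.tokenize([prompt]).to(device))
            text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

        w = w_init.clone().detach().requires_grad_(True)
        opt = torch.optim.Adam([w], lr=lr)

        for _ in range(steps):
            img = G(w)                                   # (N, 3, H, W) in [-1, 1]
            img = F.interpolate((img + 1) / 2, size=224,
                                mode="bilinear", align_corners=False)
            img = (img - CLIP_MEAN) / CLIP_STD           # CLIP preprocessing
            img_feat = clip_model.encode_image(img)
            img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)

            clip_loss = 1 - (img_feat * text_feat).sum(dim=-1).mean()
            l2_loss = ((w - w_init) ** 2).mean()         # stay near the inversion
            loss = clip_loss + lambda_l2 * l2_loss

            opt.zero_grad()
            loss.backward()
            opt.step()

        return w.detach()

    # Example: w_edited = clip_guided_edit(G, w_init, "a red dress with short sleeves")

The mapper network mentioned in the abstract amortizes this per-image optimization into a single forward pass that predicts a latent offset from w, which is the setting TD-GEM refines.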
