Unsupervised Image-to-Image Translation: Taking Inspiration from Human Perception

University essay from Linnéuniversitetet (Linnaeus University), Department of Computer Science and Media Technology (DM)

Abstract: Generative Artificial Intelligence is a field of artificial intelligence where systems learn underlying patterns in previously seen content and generate new content. This thesis explores a generative artificial intelligence technique for image-to-image translation called the Cycle-Consistent Adversarial Network (CycleGAN), which can translate images from one domain into another. The CycleGAN is a state-of-the-art technique for unsupervised image-to-image translation. It uses the concept of cycle-consistency to learn a mapping between image distributions, where the Mean Absolute Error function is used to compare images and thereby learn an underlying mapping between the two image distributions. In this work, we propose using the Structural Similarity Index Measure (SSIM) as an alternative to the Mean Absolute Error function. The SSIM is a metric inspired by human perception, which measures the difference between two images by comparing differences in contrast, luminance, and structure. We examine whether using the SSIM as the cycle-consistency loss in the CycleGAN improves the quality of generated images as measured by the Inception Score and Fréchet Inception Distance. The Inception Score and Fréchet Inception Distance are both metrics that have been proposed for evaluating the quality of images generated by generative adversarial networks (GANs). We conduct a controlled experiment to collect the quantitative metrics. Our results suggest that using the SSIM as the cycle-consistency loss in the CycleGAN will, in most cases, improve the quality of generated images as measured by the Inception Score and Fréchet Inception Distance.
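The following is a minimal sketch, not the thesis code, of what the proposed swap could look like: replacing the Mean Absolute Error (L1) cycle-consistency loss in a CycleGAN with a loss based on 1 - SSIM. The SSIM implementation uses a simplified uniform window rather than the Gaussian window of the original SSIM paper, and the generator names (G_AB, G_BA) and the lambda_cyc weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, window_size=11, C1=0.01**2, C2=0.03**2):
    """Simplified SSIM for image batches in [0, 1], computed with a uniform
    (box) window per channel; a Gaussian window is the more common choice."""
    pad = window_size // 2
    channels = x.shape[1]
    # Depthwise box filter: one averaging kernel per channel.
    kernel = torch.ones(channels, 1, window_size, window_size,
                        device=x.device) / window_size ** 2

    mu_x = F.conv2d(x, kernel, padding=pad, groups=channels)
    mu_y = F.conv2d(y, kernel, padding=pad, groups=channels)
    sigma_x = F.conv2d(x * x, kernel, padding=pad, groups=channels) - mu_x ** 2
    sigma_y = F.conv2d(y * y, kernel, padding=pad, groups=channels) - mu_y ** 2
    sigma_xy = F.conv2d(x * y, kernel, padding=pad, groups=channels) - mu_x * mu_y

    # Standard SSIM map combining luminance, contrast, and structure terms.
    ssim_map = ((2 * mu_x * mu_y + C1) * (2 * sigma_xy + C2)) / \
               ((mu_x ** 2 + mu_y ** 2 + C1) * (sigma_x + sigma_y + C2))
    return ssim_map.mean()

def cycle_consistency_loss(real_A, real_B, G_AB, G_BA, lambda_cyc=10.0):
    """Cycle loss for both directions (A -> B -> A and B -> A -> B), using
    1 - SSIM in place of the usual Mean Absolute Error. G_AB and G_BA are
    assumed generator networks mapping domain A to B and B to A."""
    rec_A = G_BA(G_AB(real_A))  # reconstruct A after a round trip through B
    rec_B = G_AB(G_BA(real_B))  # reconstruct B after a round trip through A
    # SSIM is a similarity (1 = identical), so the loss term is 1 - SSIM.
    return lambda_cyc * ((1 - ssim(rec_A, real_A)) + (1 - ssim(rec_B, real_B)))
```

Because SSIM rewards matching local structure rather than per-pixel agreement, this loss penalizes reconstructions differently from MAE, which is the effect the thesis evaluates with the Inception Score and Fréchet Inception Distance.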
