Compression of Generative Networks for Single Image Super-Resolution

University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

Author: Jacob Åslund; Anton Dahlin; [2020]


Abstract: In this research project we compressed the model size of a generative neural network trained to upscale low-resolution images. After first training a large network for this task, we used knowledge distillation to train smaller networks to approximate its output. The weights of the resulting networks were also converted from float32 to float16 to further reduce model size. We found that the network could be reduced to 50 percent of its original size without noticeable loss in performance, and to 15 percent of the original size with acceptable loss in performance.
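
The abstract describes a two-step compression pipeline: knowledge distillation from a large teacher network to a smaller student, followed by conversion of the student's weights to float16. The following is a minimal sketch of that idea, assuming a PyTorch implementation; the network architecture, loss function, and training setup shown here are illustrative placeholders, not the essay's actual method.

    # Minimal sketch: distill a small super-resolution student from a larger
    # teacher, then store its weights in float16. Architectures are hypothetical.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ExampleSR(nn.Module):
        """Toy super-resolution network (placeholder architecture)."""
        def __init__(self, channels=16, scale=2):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),
            )
            self.upscale = nn.PixelShuffle(scale)

        def forward(self, x):
            return self.upscale(self.body(x))

    def distill_step(teacher, student, optimizer, low_res):
        """One distillation step: the student learns to match the teacher's output."""
        with torch.no_grad():
            target = teacher(low_res)  # teacher's upscaled image
        loss = F.l1_loss(student(low_res), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Placeholder usage: in practice the teacher would be a trained large network.
    teacher = ExampleSR(channels=64)
    student = ExampleSR(channels=16)
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
    low_res = torch.rand(8, 3, 32, 32)  # batch of low-resolution images
    distill_step(teacher, student, optimizer, low_res)

    # Convert the distilled student's weights to float16 to halve its stored size.
    student_fp16 = student.half()
    torch.save(student_fp16.state_dict(), "student_fp16.pt")

Matching the teacher's output rather than the ground-truth high-resolution images is what makes this distillation; the float16 conversion then roughly halves the on-disk size of whatever student architecture was chosen.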
