Exploring Normalizing Flow Modifications for Improved Model Expressivity

University essay from KTH, School of Electrical Engineering and Computer Science (EECS)

Abstract: Normalizing flows are a class of generative models with a number of attractive properties, but they do not always achieve state-of-the-art performance in the perceived naturalness of generated samples. To improve sample quality, this thesis examines methods to enhance the expressivity of discrete-time normalizing flow models and thus their ability to capture different aspects of the data. In the first part of the thesis, we propose an invertible neural network architecture as an alternative to popular architectures such as Glow, which require a separate neural network per flow step. Although our proposal greatly reduces the number of parameters, such architectures have not been pursued before, as they are believed to lack sufficient expressive power. We therefore define two optional extensions that could greatly increase the expressivity of the architecture. First, we use augmentation, appending Gaussian noise variables to the input, to achieve arbitrary hidden-layer widths that are no longer dictated by the dimensionality of the data. Second, we implement Piecewise Affine Activation Functions, a generalization of Leaky ReLU activations that allows for more powerful transformations in every individual step. The resulting three models are evaluated on two simple synthetic datasets: the two moons dataset and one generated from a mixture of eight Gaussians. Our findings indicate that the proposed architectures cannot adequately model even these simple datasets and thus do not represent alternatives to current state-of-the-art models. The Piecewise Affine Activation Function significantly improved the expressivity of the invertible neural network, but could not realize its full potential due to inappropriate assumptions about the distribution of its input. Further research is needed to ensure that the input to this function always follows a standard normal distribution.
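The Leaky ReLU generalization mentioned above can be illustrated with a minimal sketch. The function and parameter names below are our own illustrative choices, not taken from the thesis; the key property is that every linear segment has a strictly positive slope, so the activation is invertible and its Jacobian log-determinant is simply the sum of the logs of the applied slopes:

```python
import numpy as np

def piecewise_affine_act(x, boundaries, slopes):
    """Continuous piecewise affine activation with K segments (illustrative).

    boundaries: sorted array of K-1 segment borders.
    slopes: K positive slopes (positivity ensures invertibility).
    Returns the transformed values and the log|det J| contribution.
    """
    b = np.asarray(boundaries, dtype=float)
    s = np.asarray(slopes, dtype=float)
    # Function values at the boundaries, anchored so f(b[0]) = b[0];
    # cumulative sums keep the function continuous across segments.
    vals = np.concatenate([[b[0]], b[0] + np.cumsum(s[1:-1] * np.diff(b))])
    idx = np.searchsorted(b, x)      # segment index for each element
    a = np.maximum(idx - 1, 0)       # left anchor boundary per element
    y = vals[a] + s[idx] * (x - b[a])
    log_det = np.log(s[idx]).sum()   # Jacobian is diagonal with the slopes
    return y, log_det

# Leaky ReLU is the special case of two segments split at zero:
y, ld = piecewise_affine_act(np.array([-2.0, 3.0]),
                             boundaries=[0.0], slopes=[0.1, 1.0])
```

With more than two segments and freely chosen slopes, each flow step can realize a richer elementwise nonlinearity while the change-of-variables computation stays exact and cheap.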
We conducted further experiments with augmentation using the Glow model and showed minor improvements on the synthetic datasets when only a few flow steps (two, three, or four) were used. In a more realistic scenario, however, the model would encompass many more flow steps. Lastly, we generalized the transformation in the coupling layers of modern flow architectures from an elementwise affine transformation to a matrix-based affine transformation and studied the effect this had on MoGlow, a flow-based model of motion. We showed that McMoGlow, our modified version of MoGlow, consistently achieved a better training likelihood than the original MoGlow on human locomotion data. However, a subjective user study found no statistically significant difference in the perceived naturalness of the generated samples. As a possible explanation, we hypothesize that the improvements are subtle and more visible in samples exhibiting slower movements or edge cases, which may have been underrepresented in the user study.
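The coupling-layer generalization in the last paragraph can be sketched as follows. This is a toy illustration under our own assumptions, not the thesis's McMoGlow implementation: where elementwise coupling computes y2 = exp(s(x1)) * x2 + t(x1), the scale here becomes a lower-triangular matrix A(x1) with a positive (exponentiated) diagonal, so the transformation stays invertible and the log-determinant still reduces to a sum of logs:

```python
import numpy as np

def matrix_affine_coupling(x1, x2, conditioner):
    """Toy matrix-based affine coupling layer (illustrative sketch).

    conditioner(x1) plays the role of the conditioner network and must
    return a flat vector of d*d + 2*d parameters for d = len(x2).
    """
    d = x2.shape[0]
    raw = conditioner(x1)                       # a neural network in practice
    L = np.tril(raw[:d * d].reshape(d, d), k=-1)   # strictly lower part
    diag = np.exp(raw[d * d:d * d + d])            # positive diagonal
    t = raw[d * d + d:]                            # translation term
    A = L + np.diag(diag)
    y2 = A @ x2 + t
    log_det = np.log(diag).sum()   # log|det A| for triangular A
    return y2, log_det
```

Inversion only requires a triangular solve, x2 = A⁻¹(y2 − t), so both density evaluation and sampling remain efficient despite the richer, non-elementwise transformation.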
