Conditional Noise-Contrastive Estimation: With Application to Natural Image Statistics

University essay from KTH/School of Computer Science and Communication (CSC)

Abstract: Unnormalised parametric models are an important class of probabilistic models that are difficult to estimate. They arise in many areas of application, e.g. in the modelling of natural images, natural language and associative memory. However, standard maximum likelihood estimation is not applicable to unnormalised models, so alternative methods are required. Noise-contrastive estimation (NCE) has been proposed as an effective estimation method for unnormalised models. The basic idea is to transform the unsupervised estimation problem into a supervised classification problem: the parameters of the unnormalised model are learned by training the model to discriminate the given data samples from generated noise samples. However, the choice of the noise distribution has been left open to the user, and since the performance of the estimation can be sensitive to this choice, it is desirable to automate it. In this thesis, the ambiguity in the choice of the noise distribution is addressed by presenting the previously unpublished conditional noise-contrastive estimation (CNCE) method. Like NCE, CNCE estimates unnormalised models by classifying data and noise samples. However, the choice of the noise distribution is partly automated via the use of a conditional noise distribution that depends on the data. In addition to introducing the core theory for CNCE, the method is empirically validated on data and models where the ground truth is known. Furthermore, CNCE is applied to natural image data to demonstrate its applicability in a realistic setting.
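The classification idea above can be illustrated with a minimal sketch. This is a hypothetical toy setup, not code from the thesis: the unnormalised model is a 1-D Gaussian with unknown precision, log phi(u; lam) = -0.5 * lam * u**2, and the conditional noise is a symmetric Gaussian perturbation of each data point. Because the perturbation is symmetric, the conditional noise densities cancel from the classifier, and the model's normalising constant cancels in the log-ratio, so the unnormalised density suffices.

```python
import numpy as np

def cnce_loss_grad(lam, x, y):
    # Discriminant for the pair (x, y): G = log phi(x) - log phi(y).
    # The partition function cancels in this ratio, as does the symmetric
    # conditional noise density, so only the unnormalised model appears.
    G = -0.5 * lam * x**2 + 0.5 * lam * y**2
    h = 1.0 / (1.0 + np.exp(-G))            # P(pair is ordered (data, noise))
    loss = -np.mean(np.log(h + 1e-12))      # logistic loss over all pairs
    dG = -0.5 * x**2 + 0.5 * y**2           # dG / dlam
    grad = -np.mean((1.0 - h) * dG)         # gradient of the loss
    return loss, grad

rng = np.random.default_rng(0)
true_prec = 4.0                              # data ~ N(0, 1/4), i.e. std = 0.5
x = rng.normal(0.0, 1.0 / np.sqrt(true_prec), 5000)
y = x + rng.normal(0.0, 0.5, x.size)         # conditional, data-dependent noise

lam = 1.0                                    # initial guess for the precision
for _ in range(500):
    loss, grad = cnce_loss_grad(lam, x, y)
    lam -= 1.0 * grad                        # plain gradient descent
```

After training, lam should be close to the true precision of 4.0 even though the model was never normalised. The key contrast with plain NCE is visible in the noise line: the noise samples y are generated conditionally on each data point x rather than drawn from a single fixed distribution chosen by the user.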
