Towards Deep Learning Accelerated Sparse Bayesian Frequency Estimation

University essay from Lunds universitet/Matematisk statistik

Abstract: The Discrete Fourier Transform is the simplest way to obtain the spectrum of a discrete complex signal. This thesis concerns the case where the signal is known to contain a small, unknown number of frequencies, not restricted to the discrete Fourier frequencies, embedded in complex Gaussian noise. A typical such signal is generated by a digital radar, where the frequency components stem from point scatterers, typically targets. The task is to estimate the frequencies and their respective amplitudes. This is done in a hierarchical Bayesian framework known from the literature, which allows frequencies off the Fourier grid. One result of the thesis is the derivation of the distributions involved, complementing the literature. The resulting algorithm is a so-called hybrid Gibbs sampler, combining conjugate priors with Markov chain Monte Carlo. It constitutes a triple loop and is computationally heavy. The innermost loop is a Metropolis-Hastings sampler drawing from a type of generalised (univariate and conditional) von Mises distribution. The main task of this thesis is to investigate whether this sampler can be replaced by a deep generative model, which would yield a significant acceleration. The model investigated is a Continuous Conditional Generative Adversarial Network (CCGAN). Such networks can sample synthetic images from highly multidimensional and complex distributions, which makes it tempting to assume that training a CCGAN to sample a univariate conditional distribution is easy. Counter-intuitively, the opposite seems to be true. The first numerical result of the thesis is the successful reproduction of a result from the literature: sampling from a two-dimensional Gaussian distribution (with constant covariance) whose mean is conditioned to lie on the unit circle. This gives confidence in the implementation.
In a second step, sampling from a univariate Gaussian distribution, conditioned on both mean and variance, is investigated. Performance is not satisfactory despite the simple nature of the problem. Learning von Mises-type distributions, which are more complicated and additionally conditioned on high-dimensional data, unsurprisingly yields even poorer results with the CCGAN than the univariate Gaussian case. Suggestions for further development of the final CCGAN are given, with the hope of making it useful in practical Bayesian inference.
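To make the innermost loop of the abstract concrete, the following is a minimal sketch of a random-walk Metropolis-Hastings sampler for a plain (non-generalised, unconditional) von Mises target. It is not the thesis's implementation; all function names, the proposal scale, and the parameter values are illustrative assumptions, and the generalised conditional target used in the actual hybrid Gibbs sampler would replace the simple log-density here.

```python
import numpy as np

def log_von_mises(theta, mu, kappa):
    # Unnormalised log-density of a von Mises distribution on [-pi, pi).
    return kappa * np.cos(theta - mu)

def mh_von_mises(mu, kappa, n_samples, step=0.5, burn_in=500, seed=0):
    """Random-walk Metropolis-Hastings sampler for a von Mises target."""
    rng = np.random.default_rng(seed)
    theta = mu  # start at the mode
    samples = np.empty(n_samples)
    for i in range(-burn_in, n_samples):
        # Gaussian random-walk proposal, wrapped back onto the circle.
        proposal = theta + step * rng.standard_normal()
        proposal = (proposal + np.pi) % (2 * np.pi) - np.pi
        log_accept = log_von_mises(proposal, mu, kappa) - log_von_mises(theta, mu, kappa)
        if np.log(rng.uniform()) < log_accept:
            theta = proposal
        if i >= 0:
            samples[i] = theta
    return samples

samples = mh_von_mises(mu=1.0, kappa=4.0, n_samples=20000)
# Circular mean of the draws; should be close to mu for a concentrated target.
circ_mean = np.angle(np.mean(np.exp(1j * samples)))
```

Since this Metropolis-Hastings step runs inside a triple loop, it dominates the cost of the sampler, which is what motivates replacing it with a single forward pass of a trained generative model.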
