An analysis of the effects of particle count on the accuracy of particle filter estimates of marginal likelihood
Abstract: Particle filters are a class of genetic-type sequential Monte Carlo algorithms that are broadly applied to filtering problems arising in signal processing and Bayesian statistical inference. These inference problems are naturally modelled as hidden Markov models. Particle filters use samples, also known as particles, to represent an approximation of a stochastic process given error-prone observations. These particles are updated throughout the execution of the filter in order to gradually improve the accuracy of the approximation. This paper investigates how the number of particles affects the accuracy of the filter when it is applied to hidden Markov models of different sizes. The efficiency of the filter was evaluated through measurements of marginal likelihood, comparing the exact likelihood value with the approximated value. Our results show that, for our implementations on fixed-size hidden Markov models, adding particles yields diminishing improvements in marginal likelihood accuracy as the total number of particles grows. The benefit of adding more particles declines according to a power law that depends on the size of the model. Furthermore, to reach a given marginal likelihood accuracy, the number of required particles increases linearly with respect to symmetrical HMM sizes.
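The experiment described above can be illustrated with a minimal sketch: a bootstrap particle filter estimating the marginal likelihood of a small hidden Markov model, compared against the exact value from the forward algorithm. The two-state HMM, its transition and emission probabilities, and the observation sequence below are illustrative assumptions, not the models used in the paper.

```python
import math
import random

# Hypothetical 2-state HMM (all probabilities are illustrative assumptions).
T = [[0.9, 0.1], [0.2, 0.8]]    # T[i][j] = P(next state = j | current state = i)
E = [[0.7, 0.3], [0.1, 0.9]]    # E[i][k] = P(observation = k | state = i)
init = [0.5, 0.5]               # initial state distribution
obs = [0, 0, 1, 1, 0]           # example observation sequence

def exact_log_marginal(obs):
    """Exact log marginal likelihood log P(obs) via the scaled forward algorithm."""
    alpha = [init[j] * E[j][obs[0]] for j in range(2)]
    logl = 0.0
    for y in obs[1:]:
        c = sum(alpha)                       # P(y_t | y_1..t-1), accumulated in log space
        logl += math.log(c)
        alpha = [a / c for a in alpha]       # normalize to the filtering distribution
        alpha = [sum(alpha[i] * T[i][j] for i in range(2)) * E[j][y]
                 for j in range(2)]          # propagate, then weight by the emission
    return logl + math.log(sum(alpha))

def pf_log_marginal(obs, n, rng):
    """Bootstrap particle filter estimate of log P(obs) using n particles."""
    particles = [rng.choices([0, 1], weights=init)[0] for _ in range(n)]
    logl = 0.0
    for t, y in enumerate(obs):
        if t > 0:
            # Propagate each particle through the transition model.
            particles = [rng.choices([0, 1], weights=T[x])[0] for x in particles]
        w = [E[x][y] for x in particles]     # importance weights from the emission model
        logl += math.log(sum(w) / n)         # mean weight estimates P(y_t | y_1..t-1)
        # Multinomial resampling proportional to the weights.
        particles = rng.choices(particles, weights=w, k=n)
    return logl

if __name__ == "__main__":
    rng = random.Random(0)
    print("exact:", exact_log_marginal(obs))
    print("estimate (n=20000):", pf_log_marginal(obs, 20000, rng))
```

Rerunning the estimator with increasing n shows the diminishing-returns behaviour the abstract describes: the Monte Carlo error of the likelihood estimate shrinks roughly as O(1/sqrt(n)), so each doubling of the particle count buys progressively smaller accuracy gains.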