Uncertainty quantification for neural network predictions

University essay from Umeå universitet/Statistik

Author: Jonas Borgström; [2022]


Abstract: Since their inception, machine learning methods have proven useful, and their usability continues to grow as new methods are introduced. However, as these methods are used for decision-making in fields such as weather forecasting, medicine, and stock market prediction, their reliability must be appropriately evaluated before the models are deployed. Uncertainty in machine learning and neural networks usually stems from two primary sources: the data used or the model itself. Quantifying this uncertainty is straightforward for most statistical and machine learning methods, but it is more problematic for neural networks, which lack inherent uncertainty quantification methods. Furthermore, as the dimension of the neural network architecture grows, so does the number of parameters to be estimated, so modeling the prediction uncertainty through parameter uncertainty can become an impossible task. There are, however, methods that can quantify uncertainty in neural networks using Bayesian approximation. One such method is Monte Carlo Dropout, in which the same input data are passed through different network structures; the results of these passes are assumed to follow a normal distribution, from which the uncertainty can be quantified. The second method tests a new approach in which the neural network is first considered a dimension reduction tool. The input feature space, which is often large, is thereby mapped to the state space of the neurons in the last hidden layer, which can be chosen to be smaller. By using the information from this reduced feature space, a reduced parameter set for the neural network prediction can be defined. With this, an assumption of, for example, a multinomial-Dirichlet probability model for discrete classification can be made. Importantly, this reduced feature space can generate predictions for hypothetical inputs, which quantifies prediction uncertainty for the network predictions.
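The Monte Carlo Dropout idea mentioned above can be sketched as follows. This is a minimal illustration, not the thesis's actual model: the weights are random stand-ins for a trained network, and the dimensions (4 inputs, 16 hidden units, 3 classes) are arbitrary. The key point is that dropout is kept active at prediction time, so repeated forward passes on the same input yield different outputs whose spread estimates predictive uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-in weights for a "trained" network with one hidden ReLU layer.
W1 = rng.normal(size=(4, 16))   # 4 input features -> 16 hidden units
b1 = np.zeros(16)
W2 = rng.normal(size=(16, 3))   # 16 hidden units -> 3 classes
b2 = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with dropout kept active at prediction time."""
    h = np.maximum(0.0, x @ W1 + b1)        # ReLU activations
    mask = rng.random(h.shape) >= p_drop    # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)           # inverted-dropout scaling
    return softmax(h @ W2 + b2)

x = rng.normal(size=4)
T = 200  # number of stochastic forward passes

# Each pass uses a different dropped-out network structure.
samples = np.stack([stochastic_forward(x) for _ in range(T)])

mean_pred = samples.mean(axis=0)  # Monte Carlo predictive mean
std_pred = samples.std(axis=0)    # spread across passes = uncertainty
```

Averaging over the `T` passes gives the predictive mean, while the standard deviation across passes quantifies how sensitive the prediction is to the sampled network structure.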
This thesis aims to determine whether the uncertainty of neural network predictions can be quantified statistically by evaluating this new method. The results of the two methods are then compared to identify any differences in the predictive uncertainty they quantify. The results show that, using the new method, predictive uncertainty could be quantified by first gathering the output range of each ReLU activation function. Using these ranges, new data could be uniformly simulated and passed into the softmax layer for classification. From these results, the multinomial-Dirichlet distribution could be used to quantify the uncertainty. The two methods offer comparable results when used to quantify predictive uncertainty.
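The three steps of the new method described above (gather ReLU output ranges, uniformly simulate hypothetical activations within them, classify through the softmax layer, and fit a multinomial-Dirichlet model to the resulting counts) can be sketched as follows. All concrete details here are illustrative assumptions: the softmax weights are random stand-ins, and the training-set activations are simulated rather than taken from a real network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in softmax (output) layer: 5 ReLU units in the last hidden
# layer, 3 classes. In the thesis's setting these weights would come
# from the trained network.
W_out = rng.normal(size=(5, 3))
b_out = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Step 1: gather the output range of each ReLU unit. Here the
# activations are simulated; in practice they would be collected by
# passing the training data through the network.
train_acts = np.maximum(0.0, rng.normal(size=(1000, 5)))
lo, hi = train_acts.min(axis=0), train_acts.max(axis=0)

# Step 2: uniformly simulate hypothetical activations within those
# ranges and classify them through the softmax layer.
n_sim = 5000
sim = rng.uniform(lo, hi, size=(n_sim, 5))
labels = softmax(sim @ W_out + b_out).argmax(axis=1)
counts = np.bincount(labels, minlength=3)

# Step 3: multinomial-Dirichlet model. With a Dirichlet(1, 1, 1) prior
# on the class probabilities, the posterior given multinomial counts
# is Dirichlet(1 + counts), whose mean and variance are closed-form.
alpha = 1.0 + counts
alpha0 = alpha.sum()
post_mean = alpha / alpha0                          # posterior class probs
post_var = post_mean * (1 - post_mean) / (alpha0 + 1)  # per-class variance
```

The posterior variance of the Dirichlet distribution then serves as the quantified predictive uncertainty: classes whose simulated counts dominate get a tight posterior, while thinly populated classes remain uncertain.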
