Regularization Methods in Neural Networks

University essay from Uppsala universitet/Statistiska institutionen

Abstract: Overfitting is a common problem in neural networks. This report uses a simple neural network to run simulations relevant to the field of image recognition. Four common regularization methods for dealing with overfitting are evaluated: L1, L2, early stopping, and dropout. Each method is tested first on the MNIST data set and then on the CIFAR-10 data set, and all are compared against a baseline without regularization at sample sizes ranging from 500 to 50 000 images. The simulations show that all four methods exhibit consistent patterns throughout the study and that dropout consistently outperforms the other three methods as well as the baseline.
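
For readers unfamiliar with the four methods, the sketch below (not taken from the essay) shows one way each could be configured in Keras for an MNIST-style classifier. The layer sizes, penalty strengths, dropout rate, and early-stopping patience are illustrative assumptions, not the essay's settings.

import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_model(method: str) -> tf.keras.Model:
    """Return a small dense classifier using the chosen regularization method."""
    reg = None
    if method == "l1":
        reg = regularizers.l1(1e-4)  # assumed L1 penalty strength
    elif method == "l2":
        reg = regularizers.l2(1e-4)  # assumed L2 penalty strength
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28)),
        layers.Flatten(),
        layers.Dense(128, activation="relu", kernel_regularizer=reg),
    ])
    if method == "dropout":
        model.add(layers.Dropout(0.5))  # assumed dropout rate
    model.add(layers.Dense(10, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0

# Early stopping halts training once the validation loss stops improving.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss",
                                              patience=3,
                                              restore_best_weights=True)

for method in ["baseline", "l1", "l2", "dropout", "early_stopping"]:
    model = build_model(method)
    model.fit(x_train[:500], y_train[:500],  # 500 is the smallest sample size in the study
              validation_split=0.2, epochs=50, verbose=0,
              callbacks=[early_stop] if method == "early_stopping" else [])

L1 and L2 enter through the layer's weight penalty, dropout as an extra layer, and early stopping as a training callback; the baseline simply omits all four.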
