The Effect of Batch Normalization on Deep Convolutional Neural Networks

University essay from KTH/Centrum för Autonoma System, CAS

Abstract: Batch normalization is a recently popularized method for accelerating the training of deep feed-forward neural networks. Apart from speed improvements, the technique reportedly enables the use of higher learning rates, less careful parameter initialization, and saturating nonlinearities. The authors of the method note that its precise effect on neural networks remains an area of further study, especially regarding gradient propagation. Our work compares the convergence behavior of batch-normalized networks with that of networks lacking such normalization. We train both a small multi-layer perceptron and a deep convolutional neural network on four popular image datasets. By systematically altering critical hyperparameters, we isolate the effects of batch normalization both in general and with respect to these hyperparameters. Our experiments show that batch normalization indeed has positive effects on many aspects of neural networks, but we cannot confirm significant improvements in convergence speed, especially when wall-clock time is taken into account. Overall, batch-normalized models achieve higher validation and test accuracies on all datasets, which we attribute to the regularizing effect of batch normalization and more stable gradient propagation. Based on these results, the use of batch normalization is generally advised, since it prevents model divergence and may increase convergence speed through higher learning rates. Regardless of these properties, we still recommend the use of variance-preserving weight initialization, as well as rectifiers over saturating nonlinearities.
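
For readers unfamiliar with the technique discussed in the abstract, the following is a minimal NumPy sketch of the batch normalization transform (normalize each feature over the mini-batch, then rescale with learned parameters gamma and beta). It is an illustrative assumption of the standard formulation, not code taken from the essay itself.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Batch normalization for a mini-batch x of shape (N, D):
    normalize each feature to zero mean and unit variance using
    batch statistics, then apply learned scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)                     # per-feature batch mean
    var = x.var(axis=0)                     # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalized activations
    return gamma * x_hat + beta             # scaled and shifted output

# Example: a batch of 4 samples with 3 features each
x = np.random.randn(4, 3) * 5.0 + 2.0
out = batch_norm_forward(x, gamma=np.ones(3), beta=np.zeros(3))
```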
