Combined Regularisation Techniques for Artificial Neural Networks

University essay from Lunds universitet/Computational Biology and Biological Physics - undergoing reorganisation

Abstract: Artificial neural networks are prone to overfitting, the process of learning details specific to a particular training data set. Success in preventing overfitting by combining the L2 and dropout regularisation techniques has made the combination popular in recent years. However, each additional regularisation technique introduces new hyperparameters that must be tuned, making calibration increasingly complex and computationally expensive. Motivated by L2's action as a Gaussian prior on the loss function, we hypothesise an analytic relation for the dependence of the optimal L2 strength on the number of training patterns. This systematic study, conducted on an artificial neural network with a single hidden layer, tests the hypothesised relation and examines how the additional use of dropout and early stopping interacts with it. With the problem and network calibration otherwise held fixed, the results support the hypothesis within a valid working region. They usefully inform the choice of L2 strength, drop rate and early stopping usage, and suggest that the predictor may find real-world applications.
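
The Gaussian-prior motivation can be made concrete with a standard maximum-a-posteriori argument. The sketch below is illustrative rather than the thesis's own derivation; the symbols \(\lambda\) (L2 strength), \(p\) (number of patterns), \(\sigma\) (noise scale) and \(\sigma_w\) (prior weight scale) are assumptions introduced here.

Placing an independent Gaussian prior of variance \(\sigma_w^2\) on each weight and assuming Gaussian observation noise of variance \(\sigma^2\) on the \(p\) training patterns, the negative log posterior is

\[
E(\mathbf{w}) = \frac{1}{2\sigma^2}\sum_{\mu=1}^{p}\bigl(y_\mu - f(\mathbf{x}_\mu;\mathbf{w})\bigr)^2 + \frac{1}{2\sigma_w^2}\lVert\mathbf{w}\rVert^2 + \text{const}.
\]

If the data term is averaged over patterns, minimising \(E\) is equivalent to minimising

\[
\frac{1}{p}\sum_{\mu=1}^{p}\bigl(y_\mu - f(\mathbf{x}_\mu;\mathbf{w})\bigr)^2 + \frac{\lambda}{2}\lVert\mathbf{w}\rVert^2,
\qquad \lambda = \frac{2\sigma^2}{p\,\sigma_w^2},
\]

so under these assumptions the optimal L2 strength scales as \(\lambda \propto 1/p\) with the number of patterns.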
