Dynamic Stopping for Artificial Neural Networks

University essay from Lunds universitet/Computational Biology and Biological Physics (undergoing reorganisation)

Abstract: The growing popularity of Artificial Neural Networks (ANNs) demands continuous improvement and optimization of the training process, so as to achieve higher-performing algorithms at lower computational cost. During training, an ANN learns to solve a problem by looking at examples, iterating over a dataset to reach optimal performance. Usually the user must define a fixed number of iterations in advance, and can only assess the quality of training after viewing the final result. The goal of the project presented in this paper was to optimize the training process by defining an automatic stopping criterion, which arrests training once good performance has been achieved and further training would yield diminishing returns. The two methods explored were based on the Plateau Detection coefficient (C) and the Gradient Correlation coefficient (G). The hypothesis was that these could be applied across different problems and different networks while consistently producing good dynamic stopping. Testing over multiple ANN parameter settings revealed that G = 1 produced a good stopping criterion in almost all scenarios, whereas C presented some weaknesses.
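To illustrate the general idea of dynamic stopping described above, the sketch below implements a simple plateau-based criterion: training halts when the loss has stopped improving over a sliding window. This is a generic illustration only, not the thesis's actual C or G coefficients (whose definitions are not given in the abstract); the `window` and `tol` parameters are hypothetical choices.

```python
from collections import deque

def make_plateau_stopper(window=5, tol=1e-3):
    """Return a callable that signals stopping when the loss plateaus.

    Generic sketch of plateau-based dynamic stopping; the window size
    and improvement tolerance are illustrative assumptions, not values
    from the thesis.
    """
    history = deque(maxlen=window)

    def should_stop(loss):
        history.append(loss)
        if len(history) < window:
            return False  # not enough history to judge a plateau yet
        # Stop when total improvement over the window falls below tol.
        return (history[0] - history[-1]) < tol

    return should_stop
```

In a training loop, `should_stop(val_loss)` would be called once per epoch, replacing a fixed iteration count with a data-driven halt.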
