Combining Cross-Validation and Ensemble Creation for Artificial Neural Networks

University essay from Lunds universitet / Computational Biology and Biological Physics (department undergoing reorganization)

Abstract: Artificial neural networks (ANNs) are widely used today, and research into improving their performance is ongoing. A central goal when training ANNs is high generalization performance, which is typically estimated through validation. Ensembles can raise generalization performance, but validating ensembles is often computationally costly when the training data set is small. This thesis therefore introduces shortcut ensembles during cross-validation, in which several validation outputs are averaged to estimate the generalization performance of an ensemble. To evaluate this method, the validation performance of the shortcut ensemble was compared to the validation and test performances of a single model and of an actual ensemble, using two different classification data sets. The results show that, during validation, the shortcut ensemble estimates the generalization performance of an ensemble better than a single model does, and that it approximates the validation performance of an actual ensemble. Hence, the shortcut ensemble offers a less costly way of validating ensembles during cross-validation.
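The averaging step the abstract describes can be sketched as follows. This is only an illustrative reading of the idea, not the thesis's actual implementation: it assumes repeated k-fold cross-validation with scikit-learn's MLPClassifier on a stand-in data set, and names such as `shortcut_ensemble_acc` are placeholders chosen for the example.

```python
# Illustrative sketch: estimate ensemble generalization by averaging the
# validation outputs collected during repeated cross-validation, instead of
# training and validating a separate ensemble.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import RepeatedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)  # placeholder classification data

n_splits, n_repeats = 5, 10
cv = RepeatedKFold(n_splits=n_splits, n_repeats=n_repeats, random_state=0)

# Accumulate each example's validation probabilities across repeats.
prob_sum = np.zeros(len(y))
prob_count = np.zeros(len(y))

for train_idx, val_idx in cv.split(X):
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    # Validation outputs of this fold's model for its held-out examples.
    prob_sum[val_idx] += model.predict_proba(X[val_idx])[:, 1]
    prob_count[val_idx] += 1

# "Shortcut ensemble" style estimate: average the validation outputs each
# example received, then score the averaged prediction.
avg_prob = prob_sum / prob_count
shortcut_ensemble_acc = accuracy_score(y, (avg_prob >= 0.5).astype(int))
print(f"Shortcut-ensemble validation accuracy: {shortcut_ensemble_acc:.3f}")
```

In this sketch no extra models are trained beyond those already produced by cross-validation, which is the cost saving the abstract points to; how the thesis itself defines and combines the validation outputs may differ.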
