Childhood Habituation in Evolution of Augmenting Topologies (CHEAT)

University essay from Lunds universitet / Computational Biology and Biological Physics (undergoing reorganisation)

Abstract: Neuroevolution is a field within machine learning that applies genetic algorithms to train artificial neural networks. Neuroevolution of Augmenting Topologies (NEAT) is a method that evolves the topology of a network while simultaneously training its weights, and it has been shown to solve reinforcement learning problems efficiently and to solve the XOR problem with a minimal topology. However, NEAT has not been shown to solve more complex labelling problems, and it relies on a vaguely motivated speciation heuristic to maintain a diverse population and protect new structural innovations from immediate elimination. In this thesis a new algorithm was developed, Childhood Habituation in Evolution of Augmenting Topologies (CHEAT), which removes the need for the speciation heuristic and its associated hyper-parameters by splitting topology evolution and weight training into two distinct phases. CHEAT also supports structured topology evolution through an option to force fully connected layers. The algorithm was tested on the XOR problem and the spiral problem with two turns, with results showing performance on par with NEAT on the XOR problem and superior performance on the spiral problem, which CHEAT solves and NEAT does not. It was found that without an early stopping criterion for gradient descent training, new structural innovations were quickly eliminated from the population before being optimally tuned; the stopping criterion is therefore vital for removing NEAT's speciation heuristics. It was also found that restricting the algorithm to evolve in a structured manner by forcing fully connected layers was vital to solving any problem more complex than the XOR problem, likely due to the feature selection behaviour that fully connected layers exhibit. The work done in this thesis opens further avenues of research into evolving artificial neural networks, where the most promising leads lie in alternative weight training methods, different stopping criteria for gradient descent training, and letting the algorithm evolve its own hyper-parameters for automatic model construction.
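
The abstract describes CHEAT's core idea at a high level: structural mutation and gradient-descent weight training happen in two separate phases, and training is cut short by an early stopping criterion so new structures are not discarded before they are tuned. The following is a minimal Python sketch of that outer loop under stated assumptions; every name (Genome, mutate_topology, train_weights, cheat) and the placeholder loss are illustrative inventions, not the thesis code.

```python
# Hypothetical sketch of the two-phase CHEAT loop from the abstract.
# Assumptions: fully connected layers are forced, so a genome is just a
# list of layer widths, and the training loss here is a mock stand-in.
import random
from dataclasses import dataclass, field

@dataclass
class Genome:
    layers: list = field(default_factory=lambda: [2, 1])  # [input, ..., output] widths
    fitness: float = float("-inf")

def mutate_topology(genome: Genome) -> Genome:
    """Phase 1: structural mutation. With forced fully connected layers,
    a mutation either widens a hidden layer or inserts a new one."""
    layers = genome.layers.copy()
    if random.random() < 0.5 and len(layers) > 2:
        layers[random.randrange(1, len(layers) - 1)] += 1   # widen a hidden layer
    else:
        layers.insert(random.randrange(1, len(layers)), 1)  # insert a one-unit layer
    return Genome(layers=layers)

def train_weights(genome: Genome, patience: int = 5) -> float:
    """Phase 2: gradient-descent 'childhood' training, stopped early once the
    loss plateaus, so new structures get a fair but bounded tuning budget."""
    best_loss, stalled = float("inf"), 0
    loss = 1.0 / sum(genome.layers)              # placeholder for a real training loss
    for _ in range(200):                         # bounded number of epochs
        loss *= 0.99 + 0.02 * random.random()    # mock one epoch of gradient descent
        if loss < best_loss - 1e-4:
            best_loss, stalled = loss, 0
        else:
            stalled += 1
            if stalled >= patience:              # early stopping criterion
                break
    return -best_loss                            # higher fitness = lower loss

def cheat(generations: int = 20, pop_size: int = 10) -> Genome:
    population = [Genome() for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate_topology(random.choice(population)) for _ in range(pop_size)]
        for g in population + offspring:
            g.fitness = train_weights(g)
        # Plain truncation selection: no speciation heuristic is needed,
        # because early-stopped training already protects fresh structures.
        population = sorted(population + offspring,
                            key=lambda g: g.fitness, reverse=True)[:pop_size]
    return population[0]

if __name__ == "__main__":
    print("best topology:", cheat().layers)
```

Note how the early stop in train_weights does the work that speciation does in NEAT: a newly inserted layer competes after a comparable, limited amount of tuning rather than against fully optimised incumbents.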
