Evolution of Neural Controllers for Robot Teams
Abstract: This dissertation evaluates evolutionary methods for evolving cooperative teams of robots. Cooperative robotics is a challenging research area within artificial intelligence: individual autonomous robots may, through cooperation, achieve more than they can separately. The challenge of cooperative robotics is that performance relies on interactions between robots. These interactions are not always fully understood, which makes designing hardware and software systems complex. Robotic soccer, such as the RoboCup competitions, offers an unpredictable, dynamic environment for competing robot teams and thus encourages research into these complexities. Instead of trying to solve these problems by designing and implementing the behavior by hand, the robots can learn how to behave through evolutionary methods. For this reason, this dissertation evaluates the evolution of neural controllers for a team of two robots in a competitive soccer environment. The idea is that evolutionary methods may offer a solution to the complexities of creating cooperative robots. The methods used in the experiments draw on research on evolutionary algorithms with single autonomous robots and on robotic soccer. The results show that robot teams with simple reactive controllers can evolve a form of cooperative behavior by relying on self-adaptation, with little supervision or human interference.
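The evolutionary approach the abstract describes can be sketched as a selection-and-mutation loop over neural-controller weight vectors. This is a minimal illustrative sketch, not the dissertation's actual method: the genome layout, operator choices, and the `toy_fitness` function are assumptions; in the dissertation's setting the fitness would come from a simulated two-robot soccer match.

```python
import random

def make_genome(n_weights, rng):
    """A genome: a flat list of neural-network controller weights."""
    return [rng.uniform(-1.0, 1.0) for _ in range(n_weights)]

def mutate(genome, rng, sigma=0.1):
    """Gaussian mutation, the variation operator of the sketch."""
    return [w + rng.gauss(0.0, sigma) for w in genome]

def evolve(fitness, n_weights=20, pop_size=10, generations=30, seed=0):
    """Simple elitist loop: evaluate, keep the best half,
    refill the population with mutated copies of survivors."""
    rng = random.Random(seed)
    pop = [make_genome(n_weights, rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        offspring = [mutate(rng.choice(survivors), rng)
                     for _ in range(pop_size - len(survivors))]
        pop = survivors + offspring
    return max(pop, key=fitness)

# Placeholder fitness (an assumption for this sketch): rewards weights
# near 0.5 instead of scoring a simulated soccer match.
def toy_fitness(genome):
    return -sum((w - 0.5) ** 2 for w in genome)

best = evolve(toy_fitness)
```

Because only the fitness function touches the task, the same loop applies whether fitness scores one robot or a whole team, which is the sense in which evolution sidesteps hand-designing the cooperative interactions.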