Curriculum learning for increasing the performance of a reinforcement learning agent in a static first-person shooter game
Abstract: In this thesis, we trained a reinforcement learning agent with proximal policy optimization, one of the most recent policy gradient methods, in a first-person shooter game with a static player. We investigated whether curriculum learning can be used to increase the performance of such an agent. Two agents were trained: one in an environment without curriculum learning and one in an environment constructed with curriculum learning. After training, both agents were placed in the same environment and compared on their performance, measured as the achieved cumulative reward. The results showed a difference in performance between the two agents, and we concluded that curriculum learning can be used to increase the performance of a reinforcement learning agent in a first-person shooter game with a static player.
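The curriculum idea the abstract describes can be illustrated with a minimal sketch. The code below is not the thesis's actual setup (which uses proximal policy optimization in a first-person shooter); it is a hypothetical toy stand-in in which an agent must learn an aiming angle, and "difficulty" is the tolerance around the target. Training proceeds through stages of increasing difficulty, mirroring the curriculum-trained agent in the study. All names (`AimEnv`, `HillClimbAgent`, `train_with_curriculum`) are illustrative assumptions.

```python
import random

class AimEnv:
    """Toy stand-in for the aiming task: a guess within `tolerance`
    degrees of a fixed target angle earns reward 1, otherwise 0."""
    def __init__(self, tolerance, target=42.0):
        self.tolerance = tolerance
        self.target = target

    def step(self, angle):
        return 1.0 if abs(angle - self.target) <= self.tolerance else 0.0

class HillClimbAgent:
    """Trivial learner (not PPO): propose a noisy angle near the current
    estimate and keep any proposal that was rewarded."""
    def __init__(self, rng):
        self.rng = rng
        self.angle = 0.0

    def act(self):
        return self.angle + self.rng.gauss(0.0, 5.0)

    def update(self, proposal, reward):
        if reward > 0.0:
            self.angle = proposal

def train(agent, env, steps):
    """Run `steps` interactions and return the cumulative reward,
    the performance measure used in the thesis."""
    total = 0.0
    for _ in range(steps):
        proposal = agent.act()
        reward = env.step(proposal)
        agent.update(proposal, reward)
        total += reward
    return total

def train_with_curriculum(agent, tolerances, steps_per_stage):
    """Curriculum setup: train on the same task at decreasing
    tolerances, i.e. from an easy variant toward the hard one."""
    for tol in tolerances:
        train(agent, AimEnv(tol), steps_per_stage)
    return agent
```

With a wide tolerance the agent is rewarded often enough to move toward the target; narrowing the tolerance then confines it near the target. Without the easy stages, rewards on the hard task are too sparse for this learner to make progress, which is the motivation for curriculum learning that the thesis tests in a richer setting.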