A hierarchical neural network approach to learning sensor planning and control

University essay from Uppsala universitet/Datorteknik

Abstract: The ability to search its environment is one of the most fundamental skills of any living creature. Visual search in particular is ubiquitous across almost all animals. This searching is generally active in nature: vision does not simply react to incoming stimuli but actively scans the environment for potential stimuli, for example by moving the head or eyes. Automated visual search, likewise, is a crucial and powerful tool in a wide variety of fields. However, performing such an active search is a nontrivial problem for many machine learning approaches. The added complexity of choosing which area to observe, together with the common case of a camera with an adjustable field of view, further complicates the problem. Hierarchical reinforcement learning has in recent years proven to be a particularly powerful means of solving hard machine learning problems through a divide-and-conquer methodology, where one highly complex task is broken down into smaller sub-tasks that may each be easier to learn. In this thesis, we present a hierarchical reinforcement learning system for solving a visual search problem with a stationary camera that has adjustable pan, tilt and field-of-view capabilities. The hierarchical model also incorporates non-reinforcement-learning agents in its workflow to better exploit the strengths of different agents and form a more powerful overall model. The model is then compared to a non-hierarchical baseline as well as to several learning-free approaches.
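
The control structure sketched in the abstract, a high-level agent choosing where to look and a lower-level agent producing pan, tilt and field-of-view commands, can be illustrated with a short sketch. Note that the class and function names below (CameraState, manager_policy, worker_policy) and the placeholder policies are illustrative assumptions, not code from the thesis itself.

```python
# Minimal two-level sketch of hierarchical sensor control (illustrative only).
import random
from dataclasses import dataclass

@dataclass
class CameraState:
    pan: float   # degrees, e.g. -90..90
    tilt: float  # degrees, e.g. -45..45
    fov: float   # field of view in degrees; smaller means zoomed in

def manager_policy(state: CameraState) -> tuple:
    """High-level agent: pick a coarse (pan, tilt) region to search next."""
    return (random.uniform(-90, 90), random.uniform(-45, 45))  # placeholder policy

def worker_policy(state: CameraState, subgoal: tuple) -> CameraState:
    """Low-level agent: move toward the sub-goal and narrow the field of view."""
    goal_pan, goal_tilt = subgoal
    step = 0.5  # fraction of the remaining distance covered per control step
    return CameraState(
        pan=state.pan + step * (goal_pan - state.pan),
        tilt=state.tilt + step * (goal_tilt - state.tilt),
        fov=max(10.0, state.fov * 0.9),  # zoom in while approaching the region
    )

state = CameraState(pan=0.0, tilt=0.0, fov=60.0)
subgoal = manager_policy(state)
for _ in range(5):
    state = worker_policy(state, subgoal)
print(state)
```

In a learned version of this setup, the manager and worker would each be trained (the worker possibly replaced by a non-learning controller, as the abstract suggests), but the division of responsibilities would remain the same.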
