Modelling homeostatic regulation in multi-objective decision-making

University essay from KTH/Beräkningsvetenskap och beräkningsteknik (CST)

Author: Naresh Balaji Ravichandran (2018)


Abstract: This thesis attempts to model homeostatic regulation, a behavioural phenomenon ubiquitous in animals, in the domain of reinforcement learning. We specifically look at multi-objective reinforcement learning that can facilitate multi-variate regulation. When multiple objectives are to be handled, the current framework of Multi-objective Reinforcement Learning proves to be unsuitable without information on some preference over the objectives. We therefore model homeostatic regulation as a motivational process that selectively activates some objectives over others and implements cognitive control. In doing so, we utilize cognitive control not as a behavioural principle, but as a control mechanism that arises as a natural necessity for homeostatic regulation.

We utilize a recent framework for the drive reduction theory of reinforcement learning, and attempt to provide a normative account of the arbitration of objectives from drives. We show that a purely reactive agent can face difficulties in achieving this regulation, and would require a persistence-flexibility mechanism. This could be handled effectively in our model by incorporating a progress metric. We attempt to build this model with the intention of it acting as a natural extension to the current reinforcement learning framework, while also showing appropriate behavioural properties.
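To make the drive-reduction idea mentioned in the abstract concrete, here is a minimal sketch (not the thesis's actual model) of how a reward signal can be derived from homeostatic drives: the drive is taken as the distance of the agent's internal state from a setpoint, and reward is the reduction in that drive across a transition. The function names, the Euclidean distance, and the example values are illustrative assumptions.

```python
import numpy as np

def drive(internal_state, setpoint):
    # Drive as the Euclidean distance of the internal state
    # from its homeostatic setpoint (an assumed choice of metric).
    return np.linalg.norm(setpoint - internal_state)

def drive_reduction_reward(state_before, state_after, setpoint):
    # Reward is the reduction in drive caused by a transition:
    # positive when the transition moves the internal state
    # toward the setpoint, negative when it moves away.
    return drive(state_before, setpoint) - drive(state_after, setpoint)

# Hypothetical example with two regulated internal variables
# (e.g. energy and hydration levels), setpoint at (1, 1).
setpoint = np.array([1.0, 1.0])
before = np.array([0.2, 0.5])
after = np.array([0.6, 0.8])  # the action moved both variables toward the setpoint

r = drive_reduction_reward(before, after, setpoint)
```

Under this scheme, each regulated variable induces its own objective, which is what makes a multi-objective treatment (and some arbitration among drives) necessary.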
