Model-based Residual Policy Learning for Sample Efficient Mobile Network Optimization

University essay from KTH, School of Electrical Engineering and Computer Science (EECS)

Abstract: Reinforcement learning is a powerful tool that enables an agent to learn how to control complex systems. However, during the early phases of training, performance is often poor. Increasing sample efficiency means that fewer interactions with the environment are needed before good performance is reached, which reduces risk and cost in real-world deployment and saves simulation time. We present a novel reinforcement learning method, which we call Model-based Residual Policy Learning, that learns a residual to an existing expert policy using a model-based approach for maximum sample efficiency. We compared its sample efficiency to that of several baseline methods, including a state-of-the-art model-free method. The comparisons were performed on two tasks: coverage and capacity optimization via antenna tilt control in telecommunication networks, and a common robotics benchmark task. Performance was measured as the mean episodic reward collected during training. In the coverage and capacity optimization task, the reward signal was the sum of the logarithmic reference signal received power (RSRP), throughput, and signal-to-interference-plus-noise ratio (SINR), averaged across the users in the cells. Our method was more sample-efficient than the baselines across the board, and the gain was especially large in the coverage and capacity optimization task. We also found that using an expert policy helped maintain good initial performance. In the ablation studies of the two components of our method, the complete method achieved the highest sample efficiency in the majority of the experiments.
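As a rough sketch of the two quantities described in the abstract, under our own reading of it: the residual formulation adds a learned correction to the expert's action, and the reward averages the listed signal-quality terms over the users in a cell. The notation below (the policies \(\pi_{\mathrm{expert}}\) and \(\pi^{\mathrm{res}}_{\theta}\), and the per-user terms \(\mathrm{RSRP}_u\), \(\mathrm{TP}_u\), \(\mathrm{SINR}_u\)) is ours rather than the thesis's, and it is an assumption that the logarithm applies only to the RSRP term.

\[
  a_t = \pi_{\mathrm{expert}}(s_t) + \pi^{\mathrm{res}}_{\theta}(s_t),
  \qquad
  r_t = \frac{1}{|U|} \sum_{u \in U} \left( \log \mathrm{RSRP}_u + \mathrm{TP}_u + \mathrm{SINR}_u \right)
\]

With a residual that starts near zero, the agent initially behaves like the expert and only learns corrections on top of it, which is consistent with the maintained initial performance reported above.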
