Adaptive network selection for moving agents using deep reinforcement learning

University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

Author: William Skagerström; [2021]


Abstract: With the rapid development and deployment of “Internet of Things” devices comes a new era of opportunities to increase the efficiency of our everyday lives. Many of these devices rely on an established network connection in order to operate at peak performance, but this requirement can be hard to guarantee in parts of the world where supporting infrastructure is less developed. There is therefore value in granting such devices more information, allowing them to take proactive actions to ensure that they meet certain expectations. One such method is adaptive network selection, based both on the availability of telecom operators within a region and on their perceived performance. This paper outlines a methodology for constructing an interactive environment from raw historical data in the form of measurements already available in user equipment. An algorithm is then trained by exploring this environment using reinforcement learning, under the premise of having only limited information about its current whereabouts and target destination. The objective of agents within the environment is to select network operators over the course of a specified geographical route so as to maximize perceived network performance. The results showed that, when a policy exists that increases perceived performance, the trained agent will find such a policy; when no such policy exists, it approximates the performance of the best available operator. These results show promise for further development of methods that rely on this type of algorithmic behaviour, which could find interesting applications in the future, especially in areas where network infrastructure is still under development.
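To illustrate the setup described in the abstract, the following is a minimal sketch of an environment in which an agent moving along a discretised route picks one of several network operators per step and is rewarded with that operator's perceived performance. The operator names, throughput numbers, route length, and the use of tabular Q-learning are all illustrative assumptions; the thesis itself builds the environment from real historical measurements and trains a deep reinforcement learning agent.

import random
from collections import defaultdict

# Hypothetical per-operator throughput along a discretised route.
# In the thesis these values come from historical measurements collected
# by user equipment; here they are made-up numbers for illustration.
ROUTE_LENGTH = 10
OPERATORS = ["op_A", "op_B", "op_C"]
THROUGHPUT = {
    "op_A": [20, 25, 30, 10, 5, 5, 15, 25, 30, 35],
    "op_B": [15, 15, 20, 25, 30, 35, 30, 20, 10, 10],
    "op_C": [10, 10, 10, 15, 20, 25, 25, 25, 25, 25],
}

class NetworkSelectionEnv:
    """The agent advances one route segment per action and observes only
    its current position; the reward is the chosen operator's throughput."""

    def reset(self):
        self.position = 0
        return self.position

    def step(self, action):
        operator = OPERATORS[action]
        reward = THROUGHPUT[operator][self.position]
        self.position += 1
        done = self.position >= ROUTE_LENGTH
        next_state = None if done else self.position
        return next_state, reward, done

def train(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1):
    env = NetworkSelectionEnv()
    q = defaultdict(lambda: [0.0] * len(OPERATORS))
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy selection over the available operators.
            if random.random() < epsilon:
                action = random.randrange(len(OPERATORS))
            else:
                action = max(range(len(OPERATORS)), key=lambda a: q[state][a])
            next_state, reward, done = env.step(action)
            target = reward if done else reward + gamma * max(q[next_state])
            q[state][action] += alpha * (target - q[state][action])
            state = next_state
    return q

if __name__ == "__main__":
    q = train()
    policy = [OPERATORS[max(range(len(OPERATORS)), key=lambda a: q[s][a])]
              for s in range(ROUTE_LENGTH)]
    print("Learned operator per route segment:", policy)

Running the sketch prints the operator selected for each route segment; with the made-up numbers above, the learned policy simply tracks whichever operator offers the highest throughput at each segment, mirroring the behaviour summarised in the abstract.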
