Deep reinforcement learning for real-time power grid topology optimization

University essay from Lunds universitet/Matematisk statistik

Abstract: In our pursuit of carbon neutrality, drastic changes to the generation and consumption of electricity will place new and complex demands on the power grid and its operators. A cheap, promising, and under-exploited mitigation is real-time power grid topology optimization (RTTO). However, beyond the simplest action of line switching, the combinatorial and non-linear nature of RTTO has made all computational approaches infeasible for grids of interesting scale. At the same time, RTTO checks many of the boxes that make a task well suited to deep reinforcement learning (DRL). This thesis starts by providing further background on why RTTO matters. It then covers the deep learning and reinforcement learning concepts that underpin the Deep Q-Network (DQN). Building on this, it explains the DQN, Double DQN, Dueling DQN, and prioritized experience replay in some depth. After that, it briefly covers the theory behind line losses, line overloads, and cascading line failures, as well as how bus switching and line switching can help. This is followed by a case study of Geirina's winning DRL submission to the RTTO competition L2RPN 2019. Finally, the case for DRL for RTTO is made.
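
For readers unfamiliar with the DQN variants named in the abstract, the sketch below illustrates (outside the thesis itself) the core idea behind the Double DQN target: the online network selects the next action while the target network evaluates it, which reduces the overestimation bias of the vanilla DQN target. All numerical values are toy placeholders, not results from the thesis.

```python
# Illustrative sketch (not from the thesis): vanilla DQN vs. Double DQN targets.
import numpy as np

rng = np.random.default_rng(0)

gamma = 0.99                        # discount factor (hypothetical value)
reward = 1.0                        # reward observed for one transition (toy value)
q_online_next = rng.normal(size=5)  # Q(s', .) from the online network (toy values)
q_target_next = rng.normal(size=5)  # Q(s', .) from the target network (toy values)

# Vanilla DQN: the target network both selects and evaluates the next action.
dqn_target = reward + gamma * q_target_next.max()

# Double DQN: the online network selects the action, the target network evaluates it.
best_action = int(np.argmax(q_online_next))
double_dqn_target = reward + gamma * q_target_next[best_action]

print(f"DQN target:        {dqn_target:.3f}")
print(f"Double DQN target: {double_dqn_target:.3f}")
```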
