Open Access · Posted Content

Learning to run a Power Network Challenge: a Retrospective Analysis

TLDR
The L2RPN challenge at NeurIPS 2020 was designed to encourage the development of reinforcement learning solutions to key problems in next-generation power networks; its main contribution is the comprehensive, open-source Grid2Op framework and associated benchmark, which plays realistic sequential network operation scenarios.
Abstract
Power networks, responsible for transporting electricity across large geographical regions, are complex infrastructures on which modern life critically depends. Variations in demand and production profiles, with increasing renewable energy integration, as well as the high-voltage network technology, constitute a real challenge for human operators when optimizing electricity transportation while avoiding blackouts. Motivated to investigate the potential of Artificial Intelligence methods in enabling adaptability in power network operation, we have designed an L2RPN challenge to encourage the development of reinforcement learning solutions to key problems present in next-generation power networks. The NeurIPS 2020 competition was well received by the international community, attracting over 300 participants worldwide. The main contribution of this challenge is our proposed comprehensive Grid2Op framework, and associated benchmark, which plays realistic sequential network operation scenarios. The framework is open-sourced and easily re-usable to define new environments with its companion GridAlive ecosystem. It relies on existing non-linear physical simulators and lets us create a series of perturbations and challenges that are representative of two important problems: a) the uncertainty resulting from the increased use of unpredictable renewable energy sources, and b) the robustness required with contingent line disconnections. In this paper, we provide details about the competition highlights. We present the benchmark suite and analyse the winning solutions of the challenge, observing one super-human performance demonstration by the best agent. We propose our organizational insights for a successful competition and conclude with open research avenues. We expect our work will foster research to create more sustainable solutions for power network operations.
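The sequential operation scenarios the abstract describes follow the familiar Gym-style reset/step loop that Grid2Op exposes (the real package is created with `grid2op.make`). As a minimal, self-contained sketch of that agent-environment loop — using a hypothetical stand-in environment rather than the actual power-grid simulator:

```python
# Sketch of the Gym-style agent-environment loop that Grid2Op follows.
# "ToyGridEnv" is a hypothetical stand-in for illustration only; the real
# package provides grid2op.make(...) with rich topology/redispatch actions
# and power-flow observations.

class ToyGridEnv:
    """Stand-in environment: a fixed-length episode with dummy rewards."""
    def __init__(self, horizon=5):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return {"timestep": self.t}           # observation

    def step(self, action):
        self.t += 1
        obs = {"timestep": self.t}
        reward = 1.0                          # e.g. a line-margin operation reward
        done = self.t >= self.horizon         # scenario ends at the horizon (or blackout)
        return obs, reward, done, {}

def run_episode(env, policy):
    """Roll out one scenario, accumulating reward."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        action = policy(obs)                  # here: always the do-nothing action
        obs, reward, done, _ = env.step(action)
        total += reward
    return total

do_nothing = lambda obs: None                 # the standard L2RPN baseline agent
print(run_episode(ToyGridEnv(), do_nothing))  # -> 5.0
```

In the real benchmark, an episode terminates early on a blackout, so the accumulated reward directly measures how long an agent keeps the grid operational.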


Citations
Posted Content

A Graph Policy Network Approach for Volt-Var Control in Power Distribution Systems.

TL;DR: In this paper, the authors propose a framework that combines RL with graph neural networks and study the benefits and limitations of graph-based policies in the VVC setting, showing that graph-based policies converge to the same rewards asymptotically, but at a slower rate than their vector-representation counterparts.
Posted Content

Adversarial Training for a Continuous Robustness Control Problem in Power Systems

TL;DR: In this article, a new adversarial training approach for injecting robustness when designing controllers for upcoming cyber-physical power systems is proposed, which proves to be computationally efficient online while displaying useful robustness properties.
Posted Content

Learning to run a power network with trust.

TL;DR: In this paper, the authors propose an agent that can send alarms to the operator ahead of time when its proposed actions are of low confidence; the operator's available attention is modeled as a budget that decreases when alarms are sent.
Posted Content

PowerGym: A Reinforcement Learning Environment for Volt-Var Control in Power Distribution Systems.

TL;DR: PowerGym is an open-source RL environment for Volt-Var control in power distribution systems, which targets minimizing power loss and voltage violations under physical network constraints.
References
Book

Reinforcement Learning: An Introduction

TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the field's intellectual foundations to the most recent developments and applications.
Posted Content

Proximal Policy Optimization Algorithms

TL;DR: A new family of policy gradient methods for reinforcement learning is proposed, which alternates between sampling data through interaction with the environment and optimizing a "surrogate" objective function using stochastic gradient ascent.
Proceedings Article

Dueling network architectures for deep reinforcement learning

TL;DR: In this paper, a dueling network is proposed to represent two separate estimators for the state value function and the state-dependent advantage function, which leads to better policy evaluation in the presence of many similar-valued actions.
Posted Content

Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm

TL;DR: This paper generalises the approach into a single AlphaZero algorithm that achieves, tabula rasa, superhuman performance in many challenging domains, convincingly defeating a world-champion program in each case.