Open Access · Posted Content

Adversarial Training for a Continuous Robustness Control Problem in Power Systems

TL;DR
This article proposes a new adversarial training approach for injecting robustness when designing controllers for upcoming cyber-physical power systems; the method is computationally efficient online while displaying useful robustness properties.
Abstract
We propose a new adversarial training approach for injecting robustness when designing controllers for upcoming cyber-physical power systems. Previous approaches, which rely heavily on simulations, cannot cope with the rising complexity and are too costly in computation budget when used online. In comparison, our method proves to be computationally efficient online while displaying useful robustness properties. To do so, we model an adversarial framework, propose the implementation of a fixed opponent policy, and test it on an L2RPN (Learning to Run a Power Network) environment. This environment is a synthetic but realistic model of a cyber-physical system accounting for one third of the IEEE 118 grid. Using adversarial testing, we analyze the results of trained agents submitted to the robustness track of the L2RPN competition. We then further assess the performance of these agents with regard to the continuous N-1 problem through tailored evaluation metrics. We discover that some agents trained in an adversarial way demonstrate interesting preventive behaviors in that regard, which we discuss.
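The fixed-opponent idea from the abstract can be sketched as a toy interaction loop: at fixed intervals an adversary disconnects a grid line, and the controller must react. This is a minimal illustrative sketch only; the environment, class, and function names below are assumptions for exposition and are not the actual Grid2Op/L2RPN API.

```python
import random

# Toy stand-in for a power-grid environment: the state is a list of line
# statuses (True = in service); the agent may reconnect one line per step.
class ToyGridEnv:
    def __init__(self, n_lines=5):
        self.n_lines = n_lines
        self.reset()

    def reset(self):
        self.lines = [True] * self.n_lines
        self.t = 0
        return list(self.lines)

    def step(self, action):
        # Agent action: index of a line to reconnect, or None to do nothing.
        if action is not None:
            self.lines[action] = True
        self.t += 1
        reward = sum(self.lines) / self.n_lines  # fraction of lines in service
        done = self.t >= 20
        return list(self.lines), reward, done

class FixedOpponent:
    """Fixed (non-learning) opponent policy: every `period` steps it
    disconnects one line -- a crude proxy for an N-1 contingency."""
    def __init__(self, period=4, seed=1):
        self.period = period
        self.rng = random.Random(seed)

    def attack(self, env):
        if env.t % self.period == 0:
            env.lines[self.rng.randrange(env.n_lines)] = False

def greedy_agent(obs):
    # Curative baseline: reconnect the first disconnected line, if any.
    for i, in_service in enumerate(obs):
        if not in_service:
            return i
    return None

def run_episode(env, opponent):
    env.reset()
    total, done = 0.0, False
    while not done:
        opponent.attack(env)       # the adversary perturbs the grid first
        action = greedy_agent(list(env.lines))
        _, reward, done = env.step(action)
        total += reward
    return total

score = run_episode(ToyGridEnv(), FixedOpponent())
```

In an actual adversarial-training setup, the agent would be a learning policy whose training rollouts include such attacks, so that robustness to line losses is injected during training rather than tested only afterwards.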


Citations
Posted Content

Learning to run a Power Network Challenge: a Retrospective Analysis

TL;DR: This paper gives a retrospective of the NeurIPS 2020 L2RPN challenge, which was designed to encourage the development of reinforcement learning solutions to key problems in next-generation power networks; its main contribution is the comprehensive Grid2Op framework and associated benchmark, which plays out realistic sequential network operation scenarios.
Posted Content

Learning to run a power network with trust.

TL;DR: In this paper, the authors propose an agent that sends alarms to the operator ahead of time when its proposed actions are of low confidence; the operator's available attention is modeled as a budget that decreases when alarms are sent.
Posted Content

Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training

TL;DR: In this paper, the authors use an adversary Markov Decision Process to learn an attack policy and demonstrate its potency by successfully attacking multiple winning agents from the Learning To Run a Power Network (L2RPN) challenge, under both white-box and black-box attack settings.
References
Book

Introduction to Reinforcement Learning

TL;DR: In Reinforcement Learning: An Introduction, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning.
Posted Content

Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm

TL;DR: This paper generalises the approach into a single AlphaZero algorithm that achieves, tabula rasa, superhuman performance in many challenging domains, convincingly defeating a world-champion program in each case.
Book

Robust Power System Frequency Control

TL;DR: In this book, the authors provide comprehensive coverage of the understanding, simulation, and design of robust power system frequency control, aiming to develop intuition for the robust load-frequency regulation problem in real-world power systems rather than to describe sophisticated mathematical analysis methods.
Posted Content

Challenges of Real-World Reinforcement Learning

TL;DR: This paper presents nine unique challenges that must be addressed to productionize RL for real-world problems, along with an example domain modified to exhibit these challenges as a testbed for practical RL research.
Journal ArticleDOI

Robust Reinforcement Learning

TL;DR: This paper proposes robust reinforcement learning (RRL), a new reinforcement learning paradigm that explicitly takes into account input disturbances as well as modeling errors, and tests it on the control task of an inverted pendulum.