Robust Control for Dynamical Systems With Non-Gaussian Noise via Formal Abstractions
Thom S. Badings, Licio Romao, Alessandro Abate, David Parker, Hasan A. Poonawala, Mariëlle Stoelinga, Nils Jansen
TLDR
In this article, the authors present a controller synthesis method that does not rely on any explicit representation of the noise distributions and provides probabilistic guarantees on safely reaching a target, while also avoiding unsafe regions of the state space.
Abstract
Controllers for dynamical systems that operate in safety-critical settings must account for stochastic disturbances. Such disturbances are often modeled as process noise in a dynamical system, and common assumptions are that the underlying distributions are known and/or Gaussian. In practice, however, these assumptions may be unrealistic and can lead to poor approximations of the true noise distribution. We present a novel controller synthesis method that does not rely on any explicit representation of the noise distributions. In particular, we address the problem of computing a controller that provides probabilistic guarantees on safely reaching a target, while also avoiding unsafe regions of the state space. First, we abstract the continuous control system into a finite-state model that captures noise by probabilistic transitions between discrete states. As a key contribution, we adapt tools from the scenario approach to compute probably approximately correct (PAC) bounds on these transition probabilities, based on a finite number of samples of the noise. We capture these bounds in the transition probability intervals of a so-called interval Markov decision process (iMDP). This iMDP is, with a user-specified confidence probability, robust against uncertainty in the transition probabilities, and the tightness of the probability intervals can be controlled through the number of samples. We use state-of-the-art verification techniques to provide guarantees on the iMDP and compute a controller for which these guarantees carry over to the original control system. In addition, we develop a tailored computational scheme that reduces the complexity of the synthesis of these guarantees on the iMDP. Benchmarks on realistic control systems show the practical applicability of our method, even when the iMDP has hundreds of millions of transitions.
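The abstraction step described above estimates each transition probability of the iMDP from finitely many noise samples and wraps the estimate in a PAC interval. The paper derives these intervals via the scenario approach; the sketch below instead uses a simpler Hoeffding concentration bound to illustrate the same idea, and the one-dimensional toy dynamics, the function name `pac_interval`, and all parameters are illustrative assumptions rather than the authors' implementation.

```python
import math
import random

def pac_interval(k, n, confidence=0.99):
    """Two-sided Hoeffding bound: with probability >= confidence,
    the true transition probability lies in the returned interval.
    k = number of sampled successors that land in the region,
    n = total number of noise samples."""
    eps = math.sqrt(math.log(2.0 / (1.0 - confidence)) / (2.0 * n))
    p_hat = k / n
    return max(0.0, p_hat - eps), min(1.0, p_hat + eps)

# Toy 1-D system: x' = 0.9*x + u + w, where the noise w is drawn from
# an unknown (here: non-Gaussian, uniform) distribution.  We estimate
# the probability that the successor state lands in the discrete
# region [0.5, 1.0], which becomes one probability interval of the iMDP.
random.seed(0)
n = 2000
x, u = 1.0, -0.2
hits = sum(1 for _ in range(n)
           if 0.5 <= 0.9 * x + u + random.uniform(-0.3, 0.3) <= 1.0)
low, high = pac_interval(hits, n)
print(f"transition probability interval: [{low:.3f}, {high:.3f}]")
```

As in the paper, tightening the interval is a matter of drawing more samples: the width shrinks at rate O(1/sqrt(n)) under this bound, at no point requiring an explicit model of the noise distribution.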
Citations
Proceedings Article
Data-driven memory-dependent abstractions of dynamical systems
TL;DR: In this paper, a sample-based, sequential method is proposed to abstract a dynamical system with a sequence of memory-dependent Markov chains of increasing size, which alleviates a correlation bias that has been observed in sample-based abstractions.
Journal Article
Decision-Making Under Uncertainty: Beyond Probabilities
TL;DR: In this article, the focus is on uncertainty that goes beyond the classical probabilistic interpretation, particularly by employing a clear distinction between aleatoric and epistemic uncertainty, and by giving a thorough overview of uncertainty models that exhibit uncertainty in a more robust interpretation.
Journal Article
Data-driven abstractions via adaptive refinements and a Kantorovich metric [extended version]
TL;DR: In this article, an adaptive refinement procedure for smart and scalable abstraction of dynamical systems is introduced, which relies on partitioning the state space depending on the observation of future outputs.
References
Monograph
Markov Decision Processes
P. Whittle, M. L. Puterman +1 more
TL;DR: Markov Decision Processes covers recent research advances in such areas as countable state space models with average reward criterion, constrained models, and models with risk sensitive optimality criteria, and explores several topics that have received little or no attention in other books.
Book
Principles of Model Checking
TL;DR: Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field.
Book
Optimal Control: Linear Quadratic Methods
TL;DR: In this article, an augmented edition of a respected text teaches the reader how to use linear quadratic Gaussian methods effectively for the design of control systems, with step-by-step explanations that show clearly how to make practical use of the material.
Book
Concentration Inequalities: A Nonasymptotic Theory of Independence
TL;DR: Deep connections with isoperimetric problems are revealed whilst special attention is paid to applications to the supremum of empirical processes.
Book Chapter
PRISM 4.0: verification of probabilistic real-time systems
TL;DR: A major new release of the PRISM probabilistic model checker is described, adding, in particular, quantitative verification of (priced) probabilistic timed automata.
Related Papers (5)
State-space approximate dynamic programming for stochastic unit commitment
Weihong Zhang, Daniel Nikovski +1 more