Open Access · Posted Content

A Theoretical Overview of Neural Contraction Metrics for Learning-based Control with Guaranteed Stability.

TLDR
The Neural Contraction Metric (NCM) is a neural network model of an optimal contraction metric and the corresponding differential Lyapunov function, the existence of which is a necessary and sufficient condition for incremental exponential stability of non-autonomous nonlinear system trajectories.
Abstract
This paper presents a theoretical overview of a Neural Contraction Metric (NCM): a neural network model of an optimal contraction metric and corresponding differential Lyapunov function, the existence of which is a necessary and sufficient condition for incremental exponential stability of non-autonomous nonlinear system trajectories. Its innovation lies in providing formal robustness guarantees for learning-based control frameworks, utilizing contraction theory as an analytical tool to study the nonlinear stability of learned systems via convex optimization. In particular, we rigorously show in this paper that, by regarding modeling errors of the learning schemes as external disturbances, the NCM control is capable of obtaining an explicit bound on the distance between a time-varying target trajectory and perturbed solution trajectories, which exponentially decreases with time even in the presence of deterministic and stochastic perturbations. These useful features permit simultaneous synthesis of a contraction metric and associated control law by a neural network, thereby enabling real-time computable and provably robust learning-based control for general control-affine nonlinear systems.
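To make the core construction concrete, the following is a minimal sketch of an "NCM-style" model in plain NumPy: a small random-weight network maps a state x to a lower-triangular factor L(x), and the metric is recovered as M(x) = L(x)L(x)ᵀ + εI, which is uniformly positive definite by construction. The layer sizes, activation, and ε here are illustrative assumptions, not the paper's actual architecture or training procedure.

```python
import numpy as np

# Illustrative sketch only: a random-weight MLP standing in for a trained NCM.
rng = np.random.default_rng(0)
n = 3                      # state dimension (illustrative)
hidden = 16                # hidden width (illustrative)
eps = 1e-2                 # uniform positive-definiteness margin

W1 = rng.standard_normal((hidden, n)) / np.sqrt(n)
W2 = rng.standard_normal((n * (n + 1) // 2, hidden)) / np.sqrt(hidden)

def ncm_metric(x):
    """Map a state x to a uniformly positive definite metric M(x)."""
    h = np.tanh(W1 @ x)                      # hidden features
    theta = W2 @ h                           # entries of the triangular factor
    L = np.zeros((n, n))
    L[np.tril_indices(n)] = theta            # lower-triangular factor L(x)
    return L @ L.T + eps * np.eye(n)         # M(x) = L L^T + eps*I

x = rng.standard_normal(n)
M = ncm_metric(x)
assert np.allclose(M, M.T)                            # symmetric
assert np.min(np.linalg.eigvalsh(M)) >= eps - 1e-9    # eigenvalues >= eps
```

Parameterizing the metric through a factor guarantees M(x) ⪰ εI for every state, so positive definiteness never needs to be enforced as a separate constraint during training.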


Citations
Journal ArticleDOI

Contraction theory for nonlinear stability analysis and learning-based control: A tutorial overview

TL;DR: Contraction theory is an analytical tool for studying the differential dynamics of a non-autonomous (i.e., time-varying) nonlinear system under a contraction metric defined by a uniformly positive definite matrix, the existence of which yields a necessary and sufficient characterization of incremental exponential stability of multiple solution trajectories with respect to each other.
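The contraction condition summarized above can be stated compactly. The notation below follows the general contraction-theory literature and is an illustrative restatement, not an excerpt from the tutorial; C denotes a constant determined by the uniform bounds on the metric.

```latex
% Contraction condition for \dot{x} = f(x,t) under a metric M(x,t) \succ 0:
\dot{M} + M\,\frac{\partial f}{\partial x}
        + \Bigl(\frac{\partial f}{\partial x}\Bigr)^{\top} M
  \preceq -2\alpha M, \qquad \alpha > 0.
% Then V = \delta x^{\top} M \,\delta x is a differential Lyapunov function,
% and all solution trajectories converge to each other exponentially:
\|\delta x(t)\| \le C \,\|\delta x(0)\|\, e^{-\alpha t}.
```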
Posted Content

Learning-based Adaptive Control via Contraction Theory.

TL;DR: This article presents a deep learning-based adaptive control framework for nonlinear systems with multiplicatively separable parametrization, called aNCM (adaptive Neural Contraction Metric).
Proceedings ArticleDOI

Physics-Informed Machine Learning for Modeling and Control of Dynamical Systems

TL;DR: Physics-informed machine learning (PIML) is a set of methods and tools that systematically integrate machine learning algorithms with physical constraints and abstract mathematical models developed in scientific and engineering domains.
References
Book

Reinforcement Learning: An Introduction

TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, which ranges from the history of the field's intellectual foundations to the most recent developments and applications.
Posted Content

Proximal Policy Optimization Algorithms

TL;DR: A new family of policy gradient methods for reinforcement learning is proposed, which alternates between sampling data through interaction with the environment and optimizing a "surrogate" objective function using stochastic gradient ascent.
Book

Neuro-dynamic programming

TL;DR: This is the first textbook that fully explains the neuro-dynamic programming/reinforcement learning methodology, which is a recent breakthrough in the practical application of neural networks and dynamic programming to complex problems of planning, optimal decision making, and intelligent control.
Proceedings Article

Understanding deep learning requires rethinking generalization.

TL;DR: This article showed that deep neural networks can fit a random labeling of the training data, that this phenomenon is qualitatively unaffected by explicit regularization, and that it occurs even if the true images are replaced by completely unstructured random noise.