Journal

IEEE Control Systems Letters 

About: IEEE Control Systems Letters is a peer-reviewed academic journal. It publishes mainly in the areas of computer science and control theory. Over its lifetime, the journal has published 1413 papers, which have received 15494 citations.

Papers published on a yearly basis

Papers
Journal ArticleDOI
01 Jun 2017
TL;DR: This letter extends previously established concepts for barrier functions to a class of nonsmooth barrier functions that operate on systems described by differential inclusions, and validates the results by deploying Boolean compositions of nonsmooth barrier functions onto a team of mobile robots.
Abstract: As multi-agent systems become more widespread and versatile, the ability to satisfy multiple system-level constraints grows increasingly important. In applications ranging from automated cruise control to safety in robot swarms, barrier functions have emerged as a tool to provably meet such constraints by guaranteeing forward invariance of desirable sets. However, satisfying multiple constraints typically implies formulating multiple barrier functions, which would be ameliorated if the barrier functions could be composed together as Boolean logic formulas. The use of max and min operators, which yields nonsmooth functions, represents one path to accomplish Boolean compositions of barrier functions, and this letter extends previously established concepts for barrier functions to a class of nonsmooth barrier functions that operate on systems described by differential inclusions. We validate our results by deploying Boolean compositions of nonsmooth barrier functions onto a team of mobile robots.
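As a rough, hedged illustration of the max/min composition idea described above (a sketch, not code from the letter), the following composes two candidate barrier functions for a planar robot: Boolean AND via min and Boolean OR via max. The obstacle positions, radii, and sample trajectory are assumptions made up for the example.

```python
import numpy as np

# Candidate barrier function for a planar point robot at state x = (px, py):
# h(x) >= 0 encodes "stay outside the ball of given radius around `center`".
def h_obstacle(x, center, radius):
    return np.sum((x - center) ** 2) - radius ** 2

# Boolean composition via min/max (nonsmooth):
# AND -> both constraints must hold      -> min(h1, h2)
# OR  -> at least one constraint holds   -> max(h1, h2)
def h_and(x):
    return min(h_obstacle(x, np.array([1.0, 0.0]), 0.5),
               h_obstacle(x, np.array([-1.0, 0.0]), 0.5))

def h_or(x):
    return max(h_obstacle(x, np.array([0.0, 1.0]), 0.5),
               h_obstacle(x, np.array([0.0, -1.0]), 0.5))

# Check the composed barriers along an (assumed) candidate trajectory: the
# composed constraint holds at a state iff the composed barrier is nonnegative.
trajectory = [np.array([2.0, 2.0]), np.array([1.6, 1.2]), np.array([0.9, 0.8])]
for x in trajectory:
    print(f"x = {x}, h_and = {h_and(x):.3f}, h_or = {h_or(x):.3f}")
```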

234 citations

Journal ArticleDOI
01 Jan 2019
TL;DR: A framework based on control barrier functions and signal temporal logic is proposed, in which time-varying control barrier functions are used to satisfy signal temporal logic tasks; the resulting controller is a switching strategy between a computationally-efficient convex quadratic program and a local feedback control law.
Abstract: The need for computationally-efficient control methods of dynamical systems under temporal logic tasks has recently become more apparent. Existing methods are computationally demanding and hence often not applicable in practice. Especially with respect to multi-robot systems, these methods do not scale computationally. In this letter, we propose a framework that is based on control barrier functions and signal temporal logic. In particular, time-varying control barrier functions are considered, where the temporal properties are used to satisfy signal temporal logic tasks. The resulting controller is given by a switching strategy between a computationally-efficient convex quadratic program and a local feedback control law.
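As a hedged sketch of how a time-varying barrier can encode a simple "eventually reach the goal by time T" task (an illustration in the spirit of the letter, not its implementation), consider b(x, t) = gamma(t) - ||x - goal|| with a decaying margin gamma(t); the goal, radius, deadline, and linear decay below are assumptions for the example.

```python
import numpy as np

# "Eventually be within radius r of `goal` by time T", encoded as a
# time-varying barrier b(x, t) = gamma(t) - ||x - goal||, where gamma(t)
# shrinks from an initial margin down to r as t -> T. Keeping b(x, t) >= 0
# along the trajectory then forces the task to be met by the deadline.
goal, r, T = np.array([2.0, 1.0]), 0.1, 5.0
gamma0 = 4.0  # initial margin (chosen >= distance to goal at t = 0)

def gamma(t):
    # linearly decaying margin: gamma(0) = gamma0, gamma(T) = r
    return gamma0 + (r - gamma0) * min(t, T) / T

def barrier(x, t):
    return gamma(t) - np.linalg.norm(x - goal)

# Evaluate the barrier along a sampled trajectory that moves toward the goal.
x0 = np.array([3.0, 3.0])
for t in np.linspace(0.0, T, 6):
    x = x0 + (goal - x0) * (t / T)
    print(f"t = {t:.1f}, b(x, t) = {barrier(x, t):+.3f}")
```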

225 citations

Journal ArticleDOI
Ran Xin, Usman A. Khan
08 May 2018
TL;DR: In this article, a linear algorithm based on an inexact gradient method and a gradient estimation technique is proposed to minimize the average of locally known convex functions over a directed graph, where each local function is strongly convex with Lipschitz-continuous gradients.
Abstract: In this letter, we study distributed optimization, where a network of agents, abstracted as a directed graph, collaborates to minimize the average of locally known convex functions. Most of the existing approaches over directed graphs are based on push-sum (type) techniques, which use an independent algorithm to asymptotically learn either the left or right eigenvector of the underlying weight matrices. This strategy causes additional computation, communication, and nonlinearity in the algorithm. In contrast, we propose a linear algorithm based on an inexact gradient method and a gradient estimation technique. Under the assumptions that each local function is strongly convex with Lipschitz-continuous gradients, we show that the proposed algorithm geometrically converges to the global minimizer with a sufficiently small step-size. We present simulations to illustrate the theoretical findings.
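For intuition, here is a minimal gradient-tracking sketch on a four-agent network with quadratic local costs. It uses a doubly stochastic mixing matrix for simplicity, whereas the letter itself targets directed graphs where such a matrix need not exist; treat the weights, costs, and step size as assumptions rather than as the letter's exact algorithm.

```python
import numpy as np

# Minimal gradient-tracking sketch: each agent i keeps an estimate x_i and a
# tracker y_i that estimates the average gradient across the network.
# Local costs f_i(x) = 0.5 * a_i * (x - b_i)^2 are assumed for illustration.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([4.0, 3.0, 2.0, 1.0])
grad = lambda x: a * (x - b)          # elementwise local gradients

# Doubly stochastic mixing matrix for a 4-agent ring (a simplification; the
# letter treats directed graphs where such a matrix may not exist).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

alpha = 0.05                           # assumed constant step size
x = np.zeros(4)
y = grad(x)                            # tracker initialized at local gradients
g_old = grad(x)

for k in range(500):
    x = W @ x - alpha * y              # consensus step + tracked-gradient descent
    g_new = grad(x)
    y = W @ y + g_new - g_old          # gradient-tracking update
    g_old = g_new

x_star = np.sum(a * b) / np.sum(a)     # minimizer of the average cost
print("agent estimates:", np.round(x, 4), " optimum:", round(x_star, 4))
```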

203 citations

Journal ArticleDOI
04 Jun 2018
TL;DR: In this article, a supervised learning framework is proposed to approximate a model predictive controller (MPC) with reduced computational complexity and guarantees on stability and constraint satisfaction; the framework can be used for a wide class of nonlinear systems.
Abstract: A supervised learning framework is proposed to approximate a model predictive controller (MPC) with reduced computational complexity and guarantees on stability and constraint satisfaction. The framework can be used for a wide class of nonlinear systems. Any standard supervised learning technique (e.g., neural networks) can be employed to approximate the MPC from samples. In order to obtain closed-loop guarantees for the learned MPC, a robust MPC design is combined with statistical learning bounds. The MPC design ensures robustness to inaccurate inputs within given bounds, and Hoeffding’s Inequality is used to validate that the learned MPC satisfies these bounds with high confidence. The result is a closed-loop statistical guarantee on stability and constraint satisfaction for the learned MPC. The proposed learning-based MPC framework is illustrated on a nonlinear benchmark problem, for which we learn a neural network controller with guarantees.
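To illustrate the Hoeffding-based validation step in spirit (a sketch under assumed numbers, not the letter's procedure), the snippet below estimates how often the learned controller's input deviates from the exact MPC by more than a tolerated bound and attaches a one-sided Hoeffding confidence margin; the sample size, tolerance, and error model are assumptions.

```python
import numpy as np

# Hoeffding-style validation sketch: draw N i.i.d. validation samples, check
# whether the learned controller's deviation from the exact MPC input stays
# within the robustness bound eta, and bound the true violation probability.
rng = np.random.default_rng(0)
N = 5000
eta = 0.05                      # input-error bound tolerated by the robust MPC (assumed)

# Placeholder for |u_learned(x) - u_mpc(x)| on validation samples (assumed model).
errors = np.abs(rng.normal(0.0, 0.015, size=N))

empirical_violation = np.mean(errors > eta)

# One-sided Hoeffding bound: with probability >= 1 - delta, the true violation
# probability is at most the empirical rate plus sqrt(log(1/delta) / (2N)).
delta = 1e-3
hoeffding_margin = np.sqrt(np.log(1.0 / delta) / (2.0 * N))
print(f"empirical violation rate: {empirical_violation:.4f}")
print(f"with confidence {1 - delta:.3f}, true rate <= {empirical_violation + hoeffding_margin:.4f}")
```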

200 citations

Journal ArticleDOI
01 Jan 2018
TL;DR: A novel gradient-based algorithm for unconstrained convex optimization is designed and analyzed; it can be seen as an extension of methods such as gradient descent, Nesterov’s accelerated gradient descent, and the heavy-ball method.
Abstract: We design and analyze a novel gradient-based algorithm for unconstrained convex optimization. When the objective function is $m$-strongly convex and its gradient is $L$-Lipschitz continuous, the iterates and function values converge linearly to the optimum at rates $\rho$ and $\rho^{2}$, respectively, where $\rho = 1-\sqrt{m/L}$. These are the fastest known guaranteed linear convergence rates for globally convergent first-order methods, and for high desired accuracies the corresponding iteration complexity is within a factor of two of the theoretical lower bound. We use a simple graphical design procedure based on integral quadratic constraints to derive closed-form expressions for the algorithm parameters. The new algorithm, which we call the triple momentum method, can be seen as an extension of methods such as gradient descent, Nesterov’s accelerated gradient descent, and the heavy-ball method.
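A minimal sketch of a triple-momentum-style iteration on an assumed quadratic test problem is given below; only the rate $\rho = 1-\sqrt{m/L}$ comes from the abstract, while the specific parameter formulas are recalled from the published paper and should be treated as assumptions here.

```python
import numpy as np

# Sketch of a triple-momentum-style iteration on an m-strongly convex,
# L-smooth quadratic f(x) = 0.5 * x' H x. The parameter formulas below are
# an assumption (recalled from the published paper); only the rate
# rho = 1 - sqrt(m/L) appears in the abstract.
H = np.diag([1.0, 10.0, 100.0])        # assumed test problem: m = 1, L = 100
grad = lambda x: H @ x
m, L = 1.0, 100.0

rho = 1.0 - np.sqrt(m / L)
alpha = (1.0 + rho) / L
beta = rho**2 / (2.0 - rho)
gamma = rho**2 / ((1.0 + rho) * (2.0 - rho))
delta = rho**2 / (1.0 - rho**2)

xi_prev = xi = np.array([1.0, 1.0, 1.0])
for k in range(300):
    y = (1.0 + gamma) * xi - gamma * xi_prev          # point where the gradient is taken
    xi_next = (1.0 + beta) * xi - beta * xi_prev - alpha * grad(y)
    xi_prev, xi = xi, xi_next

x_out = (1.0 + delta) * xi - delta * xi_prev          # output sequence
print("distance to optimum:", np.linalg.norm(x_out))
```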

125 citations

Performance Metrics
No. of papers published in the journal in previous years

Year    Papers
2023    471
2022    796
2021    361
2020    186
2019    181
2018    141