
Showing papers in "IEEE Transactions on Automatic Control in 2020"


Journal ArticleDOI
TL;DR: In this paper, the authors derive a parametrization of linear feedback systems that paves the way to solving important control problems using data-dependent linear matrix inequalities only, which is remarkable in that no explicit identification of the system matrices is required.
Abstract: In a paper by Willems et al., it was shown that persistently exciting data can be used to represent the input-output behavior of a linear system. Based on this fundamental result, we derive a parametrization of linear feedback systems that paves the way to solving important control problems using data-dependent linear matrix inequalities only. The result is remarkable in that no explicit identification of the system matrices is required. The examples of control problems we solve include state and output feedback stabilization and the linear quadratic regulation problem. We also discuss robustness to noise-corrupted measurements and show how the approach can be used to stabilize unstable equilibria of nonlinear systems.

314 citations
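
The flavor of such data-dependent LMI designs can be illustrated with a small numerical sketch. Assuming noiseless, persistently exciting data arranged into an input matrix U0 and state snapshot matrices X0, X1, a stabilizing state feedback is sought from a single LMI in a decision variable Q, with the gain recovered as K = U0 Q (X0 Q)^{-1}. The system, data, and tolerances below are hypothetical, and the exact formulation is an assumption based on standard data-driven stabilization results rather than necessarily the article's own.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Hypothetical unknown system (used only to generate data).
A = np.array([[1.1, 0.5], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
n, m, T = 2, 1, 10

# Collect one persistently exciting input/state trajectory.
U0 = rng.standard_normal((m, T))
X = np.zeros((n, T + 1))
X[:, 0] = rng.standard_normal(n)
for k in range(T):
    X[:, k + 1] = A @ X[:, k] + B @ U0[:, k]
X0, X1 = X[:, :T], X[:, 1:]

# Data-dependent LMI: find Q with X0 Q symmetric and
# [[X0 Q, X1 Q], [(X1 Q)^T, X0 Q]] positive definite.
Q = cp.Variable((T, n))
P = X0 @ Q
M = cp.bmat([[P, X1 @ Q], [(X1 @ Q).T, P]])
prob = cp.Problem(cp.Minimize(0),
                  [P == P.T, M >> 1e-6 * np.eye(2 * n)])
prob.solve()

# Recover the state-feedback gain from the data and the LMI solution.
K = U0 @ Q.value @ np.linalg.inv(X0 @ Q.value)
print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K))
```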


Journal ArticleDOI
TL;DR: An optimization algorithm is developed to minimize the trace of the estimated ellipsoid set, and the effect of the adopted event-triggering threshold is thoroughly discussed as well.
Abstract: This paper is concerned with the distributed set-membership filtering problem for a class of general discrete-time nonlinear systems under event-triggered communication protocols over sensor networks. To mitigate the communication burden, each intelligent sensing node broadcasts its measurement to the neighboring nodes only when a predetermined event-based media-access condition is satisfied. According to the interval mathematics theory, a recursive distributed set-membership scheme is designed to obtain an ellipsoid set containing the target states of interest by adequately fusing the measurements from neighboring nodes, where both the accurate estimate of the Lagrange remainder and the event-based media-access condition are skillfully utilized to improve the filter performance. Furthermore, such a scheme depends only on neighbor information and local adjacency weights, thereby fulfilling the scalability requirement of sensor networks. In addition, an optimization algorithm is developed to minimize the trace of the estimated ellipsoid set, and the effect of the adopted event-triggering threshold is thoroughly discussed as well. Finally, a simulation example is utilized to illustrate the usefulness of the proposed distributed set-membership filtering scheme.

271 citations


Journal ArticleDOI
TL;DR: This article investigates switching-like event-triggered control for networked control systems (NCSs) under malicious denial-of-service (DoS) attacks; a networked inverted pendulum on a cart is used to show the effectiveness of the proposed method.
Abstract: This article investigates switching-like event-triggered control for networked control systems (NCSs) under malicious denial-of-service (DoS) attacks. First, by dividing the DoS attacks into the S-interval (DoS-free case) and the D-interval (DoS case), a switching-like event-triggered communication scheme (SETC) is designed to deal with intermittent DoS attacks, improving communication efficiency while keeping the desired control performance. Second, by considering the SETC and the NCS in a unified framework, the studied system is transformed into a time-delay system. Then, under the constraint of the maximum allowable number of data dropouts induced by DoS attacks, a stability criterion and a stabilization criterion are derived, which can be used to estimate the event-triggered communication parameters and obtain the security controller gain simultaneously. Moreover, the derived stabilization criterion also provides a tradeoff between communication efficiency and $H_{\infty }$ control performance. Finally, a networked inverted pendulum on a cart is used to show the effectiveness of the proposed method.

207 citations
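
As a rough illustration of the event-triggering mechanics behind such a scheme, the sketch below checks a relative-threshold triggering condition whose parameter is switched depending on whether a DoS interval is active. The thresholds, and the simple idea of enlarging the threshold during DoS intervals, are hypothetical simplifications; the article's actual SETC logic is more elaborate.

```python
import numpy as np

def setc_trigger(x, x_last_sent, dos_active, sigma_s=0.05, sigma_d=0.3):
    """Relative-threshold event trigger with a DoS-dependent parameter.

    Transmission is requested when the deviation of the current state from
    the last transmitted state exceeds a fraction (sigma) of the state norm.
    A different (here: larger, hypothetical) threshold is used while a DoS
    interval is active than during DoS-free operation.
    """
    sigma = sigma_d if dos_active else sigma_s
    err = x - x_last_sent
    return float(err @ err) > sigma * float(x @ x)

# Toy usage: decide at sampled instants whether to transmit the state.
x_last_sent = np.zeros(2)
for k in range(10):
    x_k = np.array([np.sin(0.3 * k), np.cos(0.3 * k)])
    dos_active = 4 <= k < 7                 # hypothetical DoS window
    if setc_trigger(x_k, x_last_sent, dos_active):
        x_last_sent = x_k                   # transmit and update the holder
        print(f"k={k}: transmit (DoS={dos_active})")
```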


Journal ArticleDOI
TL;DR: This article develops a new framework in order to work with data that are not necessarily persistently exciting, and investigates necessary and sufficient conditions on the informativity of data for several data-driven analysis and control problems.
Abstract: The use of persistently exciting data has recently been popularized in the context of data-driven analysis and control. Such data have been used to assess system-theoretic properties and to construct control laws, without using a system model. Persistency of excitation is a strong condition that also allows unique identification of the underlying dynamical system from the data within a given model class. In this article, we develop a new framework in order to work with data that are not necessarily persistently exciting. Within this framework, we investigate necessary and sufficient conditions on the informativity of data for several data-driven analysis and control problems. For certain analysis and design problems, our results reveal that persistency of excitation is not necessary. In fact, in these cases, data-driven analysis/control is possible while the combination of (unique) system identification and model-based control is not. For certain other control problems, our results justify the use of persistently exciting data, as data-driven control is possible only with data that are informative for system identification.

190 citations


Journal ArticleDOI
TL;DR: This article studies the observer-based output feedback control problem for a class of cyber-physical systems with periodic denial-of-service (DoS) attacks, where the attacks exist in both the measurement and control channels in the network scenario.
Abstract: This article studies the observer-based output feedback control problem for a class of cyber-physical systems with periodic denial-of-service (DoS) attacks, where the attacks exist in both the measurement and control channels in the network scenario. The periodic DoS attacks are characterized by a cyclic dwell-time switching strategy, such that the resulting augmented system can be converted into a class of discrete-time cyclic dwell-time switched systems including a stable subsystem and an unstable subsystem. By means of a cyclic piecewise linear Lyapunov function approach, the exponential stability and $l_2$-gain analysis, and observer-based controller design are carried out for the augmented discrete-time cyclic switched system. Then, the desired observer and controller gains in piecewise linear form are determined simultaneously so as to ensure that the resulting closed-loop system is exponentially stable with a prescribed $\mathcal{H}_{\infty}$ performance index. Finally, a practical application of unmanned ground vehicles under periodic DoS attacks is provided to verify the effectiveness of the developed control approach.

175 citations


Journal ArticleDOI
TL;DR: This article investigates the Lyapunov stability problem for impulsive systems via event-triggered impulsive control, where dynamical systems evolve according to continuous-time equations most of the time, but occasionally exhibit instantaneous jumps when impulsive events are triggered.
Abstract: In this article, we investigate the Lyapunov stability problem for impulsive systems via event-triggered impulsive control, where dynamical systems evolve according to continuous-time equations most of the time, but occasionally exhibit instantaneous jumps when impulsive events are triggered. We provide some Lyapunov-based sufficient conditions for uniform stability and global asymptotic stability. Unlike conventional time-triggered impulsive control, event-triggered impulsive control is triggered only when an event occurs. Thus, our stability conditions rely heavily on the event-triggering mechanism given in terms of Lyapunov functions. Moreover, Zeno behavior is excluded in our results. Then, we apply the theoretical results to a nonlinear impulsive control system, where event-triggered impulsive control strategies are designed to achieve stability of the addressed system. Finally, two numerical examples and their simulations are provided to demonstrate the effectiveness of the proposed results.

174 citations
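
A minimal simulation sketch of the mechanism, for a hypothetical scalar unstable plant: an impulse x → μx is applied whenever the Lyapunov function V(x) = x² has grown by a factor θ since the last impulse. With θμ² < 1 the value of V at the impulse instants decays geometrically, and since the flow needs a fixed positive time ln(θ)/(2a) to grow V by the factor θ, the inter-impulse times are bounded away from zero and Zeno behavior is excluded. The plant, gains, and trigger parameter are illustrative choices, not the article's conditions.

```python
import numpy as np

# Hypothetical scalar unstable plant  dx/dt = a*x  under impulsive control
# x(t_k^+) = mu * x(t_k), applied only when an event is triggered.
a, mu, theta = 1.0, 0.5, 1.5          # theta * mu**2 = 0.375 < 1  ->  decay
dt, T = 1e-3, 5.0

x, t, impulses = 1.0, 0.0, 0
V_last = x**2                          # V at the last impulse instant
while t < T:
    x += dt * a * x                    # flow between impulses (Euler step)
    if x**2 >= theta * V_last:         # event-triggering condition on V(x) = x^2
        x *= mu                        # impulsive jump
        V_last = x**2
        impulses += 1
    t += dt

print(f"final |x| = {abs(x):.2e} after {impulses} impulses")
```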


Journal ArticleDOI
TL;DR: This article addresses the quantized nonstationary filtering problem for networked Markov switching repeated scalar nonlinear systems (MSRSNSs), in which the correlations among the modes of the system, quantizer, and controller are described in terms of a nonstationary Markov process.
Abstract: This article addresses the quantized nonstationary filtering problem for networked Markov switching repeated scalar nonlinear systems (MSRSNSs). A more general setting is explored for MSRSNSs, where the measurement outputs are subject to packet losses, nonstationary quantization, and randomly occurring sensor nonlinearities (ROSNs) simultaneously. Note that both the packet losses and the ROSNs are described by Bernoulli-distributed sequences. By utilizing a multiple hierarchical structure strategy, nonstationary filters are designed for MSRSNSs, in which the correlations among the modes of the system, quantizer, and controller are described in terms of a nonstationary Markov process. A practical example is provided to verify the proposed theoretical results.

161 citations


Journal ArticleDOI
TL;DR: A key feature is that a set of mode-dependent sufficiently small scalars are introduced into some coupled Lyapunov inequalities such that the feasible solutions are easily obtained for the stochastic finite-time boundedness of the closed-loop systems.
Abstract: This paper addresses a finite-time sliding-mode control problem for a class of Markovian jump cyber-physical systems. It is assumed that the control input signals transmitted via a communication network are vulnerable to cyber-attacks, in which the adversaries may inject false data in a probabilistic way into the control signals. Meanwhile, there may exist randomly occurring uncertainties and peak-bounded external disturbances. A suitable sliding mode controller is designed such that state trajectories are driven onto the specified sliding surface during a given finite-time (possibly short) interval. By introducing a partitioning strategy, the stochastic finite-time boundedness is analyzed for the reaching phase and the sliding motion phase, respectively. A key feature is that a set of mode-dependent sufficiently small scalars is introduced into some coupled Lyapunov inequalities such that feasible solutions for the stochastic finite-time boundedness of the closed-loop systems are easily obtained. Finally, a practical single-link robot-arm model is used to illustrate the presented method.

161 citations


Journal ArticleDOI
Guannan Qu, Na Li
TL;DR: In this article, an accelerated distributed Nesterov gradient descent method was proposed for distributed optimization over a network, where the objective is to optimize a global function formed by a sum of local functions, using only local computation and communication.
Abstract: This paper considers the distributed optimization problem over a network, where the objective is to optimize a global function formed by a sum of local functions, using only local computation and communication. We develop an accelerated distributed Nesterov gradient descent method. When the objective function is convex and $L$-smooth, we show that it achieves an $O(\frac{1}{t^{1.4-\epsilon}})$ convergence rate for all $\epsilon \in (0,1.4)$. We also show the convergence rate can be improved to $O(\frac{1}{t^2})$ if the objective function is a composition of a linear map and a strongly convex and smooth function. When the objective function is $\mu$-strongly convex and $L$-smooth, we show that it achieves a linear convergence rate of $O([ 1 - C (\frac{\mu}{L})^{5/7} ]^t)$, where $\frac{L}{\mu}$ is the condition number of the objective, and $C>0$ is some constant that does not depend on $\frac{L}{\mu}$.

154 citations
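
The sketch below illustrates the ingredients that such accelerated distributed methods combine: consensus averaging with a doubly stochastic weight matrix, a momentum step, and gradient tracking, so that each agent follows an estimate of the global gradient. It is not the exact Acc-DNGD recursion analyzed in the paper; the quadratic local objectives, ring network, and step-size/momentum values are hypothetical and chosen conservatively.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 5, 3                                    # number of agents, dimension

# Hypothetical strongly convex local objectives f_i(x) = 0.5 x'Q_i x - b_i'x.
Qs = [np.diag(rng.uniform(1.0, 4.0, d)) for _ in range(N)]
bs = [rng.standard_normal(d) for _ in range(N)]
grad = lambda i, x: Qs[i] @ x - bs[i]

# Doubly stochastic mixing matrix for a ring graph.
W = np.zeros((N, N))
for i in range(N):
    W[i, i], W[i, (i - 1) % N], W[i, (i + 1) % N] = 0.5, 0.25, 0.25

alpha, beta = 0.05, 0.3                        # step size, momentum (conservative)
x = np.zeros((N, d)); x_prev = x.copy()
s = np.array([grad(i, x[i]) for i in range(N)])   # gradient-tracking variable

for _ in range(1000):
    y = x + beta * (x - x_prev)                # momentum extrapolation
    x_new = W @ y - alpha * s                  # consensus + tracked-gradient step
    s = W @ s + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(N)])
    x_prev, x = x, x_new

x_star = np.linalg.solve(sum(Qs), sum(bs))     # centralized minimizer of sum_i f_i
print("max agent error:", np.abs(x - x_star).max())
```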


Journal ArticleDOI
TL;DR: A novel MHE strategy is developed to cope with the underlying NLS with unknown inputs by deliberately introducing certain temporary estimates of the unknown inputs, where the desired estimator parameters are designed to decouple the estimation error dynamics from the unknown inputs.
Abstract: This article is concerned with the moving horizon estimation (MHE) problem for networked linear systems (NLSs) with unknown inputs under dynamic quantization effects. For NLSs with unknown input signals, the conventional MHE strategy is incapable of guaranteeing satisfactory performance, as the estimation error depends on the external disturbances. In this work, a novel MHE strategy is developed to cope with the underlying NLS with unknown inputs by deliberately introducing certain temporary estimates of the unknown inputs, where the desired estimator parameters are designed to decouple the estimation error dynamics from the unknown inputs. A two-step design strategy (namely, a decoupling step and a convergence step) is proposed to obtain the estimator parameters. In the decoupling step, the decoupling parameter of the moving horizon estimator is designed based on certain assumptions on the system parameters and quantization parameters. In the convergence step, by employing a special observability decomposition scheme, the convergence parameters of the moving horizon estimator are obtained such that the estimation error dynamics is ultimately bounded. Moreover, the developed MHE strategy is extended to the scenario with direct feedthrough of the unknown inputs. Two simulation examples are given to demonstrate the correctness and effectiveness of the proposed MHE strategies.

151 citations


Journal ArticleDOI
TL;DR: The self-learning optimal regulation for discrete-time nonlinear systems under event-driven formulation is investigated and an event-based adaptive critic algorithm is developed with convergence discussion of the iterative process.
Abstract: The self-learning optimal regulation for discrete-time nonlinear systems under an event-driven formulation is investigated. An event-based adaptive critic algorithm is developed with a convergence discussion of the iterative process. The input-to-state stability (ISS) analysis for the present nonlinear plant is established. Then, a suitable triggering condition is proved to ensure the ISS of the controlled system. An iterative dual heuristic dynamic programming (DHP) strategy is adopted to implement the event-driven framework. Simulation examples are carried out to demonstrate the applicability of the constructed method. Compared with the traditional DHP algorithm, the event-based algorithm is able to substantially reduce the number of control input updates, while still maintaining an impressive performance.

Journal ArticleDOI
TL;DR: The novel asymptotic stability conditions with less conservatism are derived for the induced switched PWA systems with dual switching mechanism based on the approach of multiple Lyapunov functions in piecewise quadratic form via the smooth approximation technique.
Abstract: In this technical note, the asymptotic stability analysis and state-feedback control design are investigated for a class of discrete-time switched nonlinear systems via the smooth approximation technique. The modal dwell-time switching property is considered to constrain switchings between nonlinear subsystems. A kind of autonomous switching, i.e., state-partition-dependent switching, is introduced within each approximated piecewise-affine (PWA) subsystem. By combining the state partition information, novel asymptotic stability conditions with reduced conservatism are derived for the induced switched PWA systems with a dual switching mechanism, based on the approach of multiple Lyapunov functions in piecewise quadratic form. Then, the design of PWA state-feedback controllers is implemented. The effectiveness of the obtained theoretical results is demonstrated by a numerical example.

Journal ArticleDOI
TL;DR: The proposed sliding mode control law is designed to attenuate the influences of the uncertainty and the nonlinear term in a finite-time region, and a practical DC motor model is used to verify the validity of the proposed method.
Abstract: This paper deals with the problem of sliding mode control design for nonlinear stochastic singular semi-Markov jump systems (S-MJSs). Stochastic disturbance is first considered in studying S-MJSs with a stochastic semi-Markov process related to the Weibull distribution. The specific information, including the bound of the nonlinearity, is assumed known for the control design. Our aim is to design a sliding mode control law that attenuates the influences of the uncertainty and the nonlinear term. First, by the use of the Lyapunov function, a set of sufficient conditions is developed such that the closed-loop sliding mode dynamics are stochastically admissible. Then, the sliding mode control law is proposed to ensure the reachability in a finite-time region. Finally, a practical DC motor model is used to verify the validity of the proposed method.

Journal ArticleDOI
TL;DR: It is shown that an (E)NMPC scheme can be tuned to deliver the optimal policy of the real system even when using a wrong model, and that ENMPC can be used as a new type of function approximator within RL.
Abstract: Reinforcement learning (RL) is a powerful tool to perform data-driven optimal control without relying on a model of the system. However, RL struggles to provide hard guarantees on the behavior of the resulting control scheme. In contrast, nonlinear model predictive control (NMPC) and economic NMPC (ENMPC) are standard tools for the closed-loop optimal control of complex systems with constraints and limitations, and benefit from a rich theory to assess their closed-loop behavior. Unfortunately, the performance of (E)NMPC hinges on the quality of the model underlying the control scheme. In this paper, we show that an (E)NMPC scheme can be tuned to deliver the optimal policy of the real system even when using a wrong model. This result also holds for real systems having stochastic dynamics. This entails that ENMPC can be used as a new type of function approximator within RL. Furthermore, we investigate our results in the context of ENMPC and formally connect them to the concept of dissipativity, which is central to ENMPC stability. Finally, we detail how these results can be used to deploy classic RL tools for tuning (E)NMPC schemes. We apply these tools to both a classical linear MPC setting and a standard nonlinear example from the ENMPC literature.

Journal ArticleDOI
TL;DR: It is shown that the proposed control schemes guarantee that all the closed-loop signals are globally bounded and the stabilization error converges to the origin asymptotically.
Abstract: In this note, the event-triggered adaptive control for a class of uncertain nonlinear systems is considered. Unlike traditional adaptive event-triggered control, the controller and the parameter estimator are event-triggered simultaneously. Asymptotic convergence of the stabilization error is guaranteed. To solve this problem, we design a set of event-triggering conditions, which are updated at each triggering instant. At the same time, the input-to-state stability assumption is not needed. It is shown that the proposed control schemes guarantee that all the closed-loop signals are globally bounded and the stabilization error converges to the origin asymptotically. Simulation results illustrate the effectiveness of our scheme.

Journal ArticleDOI
TL;DR: This paper addresses the event-triggered dynamic output feedback control for switched linear systems with frequent asynchronism and adopts the average dwell time approach without limiting the minimum dwell time of each subsystem, so that frequent switching is allowed to happen in an interevent interval.
Abstract: This paper addresses the event-triggered dynamic output feedback control for switched linear systems with frequent asynchronism. Different from existing work, which allows at most one switching during an interevent interval, we adopt the average dwell time approach without limiting the minimum dwell time of each subsystem, and thus frequent switching is allowed to happen in an interevent interval. Owing to the difficulty of acquiring full information of the system states, a dynamic output feedback controller is adopted to stabilize the switched system. By employing a controller-mode-dependent Lyapunov functional, a stability criterion is proposed for the resulting closed-loop system, based on which the dynamic output feedback controller together with the mode-dependent event-triggered mechanism is codesigned. Moreover, the existence of a positive lower bound on the interevent intervals is discussed, which excludes Zeno behavior. Finally, the effectiveness of the proposed method is illustrated by numerical simulations.

Journal ArticleDOI
Ran Xin, Usman A. Khan
TL;DR: In this paper, a distributed heavy-ball algorithm is proposed to minimize a sum of smooth and strongly convex functions; it combines the $\mathcal{AB}$ algorithm with a momentum term and uses nonidentical local step-sizes.
Abstract: We study distributed optimization to minimize a sum of smooth and strongly convex functions. Recent work on this problem uses gradient tracking to achieve linear convergence to the exact global minimizer. However, a connection among different approaches has been unclear. In this paper, we first show that many of the existing first-order algorithms are related through a simple state transformation, at the heart of which lies a recently introduced algorithm known as $\mathcal{AB}$. We then present distributed heavy-ball, denoted as $\mathcal{AB}m$, that combines $\mathcal{AB}$ with a momentum term and uses nonidentical local step-sizes. By simultaneously implementing both row- and column-stochastic weights, $\mathcal{AB}m$ removes the conservatism in the related work due to doubly stochastic weights or eigenvector estimation. $\mathcal{AB}m$ thus naturally leads to optimization and average consensus over both undirected and directed graphs. We show that $\mathcal{AB}m$ has a global $R$-linear rate when the largest step-size and momentum parameter are positive and sufficiently small. We numerically show that $\mathcal{AB}m$ achieves acceleration, particularly when the objective functions are ill-conditioned.
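
A compact sketch consistent with the iteration structure described above, for a hypothetical directed cycle of five agents with simple quadratic local costs: a row-stochastic matrix A mixes the decision variables pulled from in-neighbors, a column-stochastic matrix B mixes the gradient trackers pushed to out-neighbors, and a heavy-ball momentum term is added to the decision update. The step size, momentum parameter, and objectives are illustrative choices, not the paper's tuned values or exact analysis setting.

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 5, 2

# Hypothetical strongly convex local objectives f_i(x) = 0.5 * ||x - c_i||^2.
cs = [rng.standard_normal(d) for _ in range(N)]
grad = lambda i, x: x - cs[i]

# Directed cycle 0 -> 1 -> ... -> 4 -> 0 with self-loops.
A = np.zeros((N, N))          # row-stochastic (pull from in-neighbors)
B = np.zeros((N, N))          # column-stochastic (push to out-neighbors)
for i in range(N):
    A[i, i], A[i, (i - 1) % N] = 0.5, 0.5
    B[i, i], B[(i + 1) % N, i] = 0.5, 0.5

alpha, beta = 0.05, 0.2       # step size and heavy-ball momentum (conservative)
x = np.zeros((N, d)); x_prev = x.copy()
y = np.array([grad(i, x[i]) for i in range(N)])   # gradient tracker

for _ in range(3000):
    x_new = A @ x - alpha * y + beta * (x - x_prev)
    y = B @ y + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(N)])
    x_prev, x = x, x_new

x_star = np.mean(cs, axis=0)  # minimizer of the sum of the local objectives
print("max agent error:", np.abs(x - x_star).max())
```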

Journal ArticleDOI
TL;DR: This article presents a novel data-driven framework for constructing eigenfunctions of the Koopman operator geared toward prediction and control, and extends it to construct generalized eigenfunctions that also give rise to Koopman invariant subspaces and hence can be used for linear prediction.
Abstract: This article presents a novel data-driven framework for constructing eigenfunctions of the Koopman operator geared toward prediction and control. The method leverages the richness of the spectrum of the Koopman operator away from attractors to construct a set of eigenfunctions such that the state (or any other observable quantity of interest) is in the span of these eigenfunctions and hence predictable in a linear fashion. The eigenfunction construction is optimization-based with no dictionary selection required. Once a predictor for the uncontrolled part of the system is obtained in this way, the incorporation of control is done through a multistep prediction error minimization, carried out by a simple linear least-squares regression. The predictor so obtained is in the form of a linear controlled dynamical system and can be readily applied within the Koopman model predictive control (MPC) framework of (M. Korda and I. Mezic, 2018) to control nonlinear dynamical systems using linear MPC tools. The method is entirely data-driven and based predominantly on convex optimization. The novel eigenfunction construction method is also analyzed theoretically, proving rigorously that the family of eigenfunctions obtained is rich enough to span the space of all continuous functions. In addition, the method is extended to construct generalized eigenfunctions that also give rise to Koopman invariant subspaces and hence can be used for linear prediction. Detailed numerical examples demonstrate the approach, both for prediction and feedback control. Code for the numerical examples is available from https://homepages.laas.fr/mkorda/Eigfuns.zip.
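
The article's construction is optimization-based and requires no dictionary selection; as a simpler point of reference, the sketch below shows the generic "lift the state, fit a linear predictor by least squares" idea (plain EDMD with a hypothetical monomial dictionary and a hypothetical Van der Pol-type system). The resulting pair (A, C) is the kind of lifted linear predictor that can subsequently be used with linear MPC tools.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical nonlinear system (Van der Pol-like), simulated to get data.
def f(x, dt=0.01):
    x1, x2 = x
    return np.array([x1 + dt * x2,
                     x2 + dt * (-x1 + (1 - x1**2) * x2)])

def lift(x):
    # Simple monomial dictionary (an assumption; the article avoids
    # dictionary selection altogether).
    x1, x2 = x
    return np.array([x1, x2, x1**2, x1 * x2, x2**2, 1.0])

# Collect lifted snapshot pairs (z_k, z_{k+1}) from random short trajectories.
X, Y = [], []
for _ in range(200):
    x = rng.uniform(-2, 2, 2)
    for _ in range(50):
        xp = f(x)
        X.append(lift(x)); Y.append(lift(xp))
        x = xp
X, Y = np.array(X).T, np.array(Y).T          # columns are lifted snapshots

# Least-squares fit of a linear predictor z+ = A z in the lifted space,
# and a map C recovering the state from the lifted coordinates.
A = Y @ np.linalg.pinv(X)
C = np.zeros((2, 6)); C[0, 0] = C[1, 1] = 1.0

# Multi-step prediction from a new initial condition (approximate, since
# the crude dictionary does not span the true dynamics exactly).
x0 = np.array([1.0, 0.0]); z = lift(x0); x_true = x0.copy()
for _ in range(100):
    z = A @ z; x_true = f(x_true)
print("after 100 steps  true:", x_true, " predicted:", C @ z)
```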

Journal ArticleDOI
TL;DR: In this article, the authors study Lyapunov-like conditions ensuring that a class of dynamical systems exhibits predefined-time stability, where the origin of a dynamical system is predefined-time stable if it is fixed-time stable and an upper bound of the settling-time function can be arbitrarily chosen a priori through a suitable selection of the system parameters.
Abstract: This article studies Lyapunov-like conditions ensuring that a class of dynamical systems exhibits predefined-time stability. The origin of a dynamical system is predefined-time stable if it is fixed-time stable, and an upper bound of the settling-time function can be arbitrarily chosen a priori through a suitable selection of the system parameters. We show that the studied Lyapunov-like conditions allow us to demonstrate the equivalence between previous Lyapunov theorems for predefined-time stability for autonomous systems. Moreover, the obtained Lyapunov-like theorem is extended for analyzing the property of predefined-time ultimate boundedness with predefined bound, which is useful when analyzing uncertain dynamical systems. Therefore, the proposed results constitute a general framework for analyzing predefined-time stability, and they also unify a broad class of systems that present the predefined-time stability property. On the other hand, the proposed framework is used to design robust controllers for affine control systems, which induce predefined-time stability (predefined-time ultimate boundedness of the solutions) with respect to some desired manifold. A simulation example is presented to show the behavior of a developed controller, especially regarding the settling-time estimate.
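
One commonly used sufficient condition of this type, a particular member of the family of Lyapunov-like conditions studied in the article, can be checked by directly integrating the comparison equation. Suppose a radially unbounded Lyapunov function $V$ satisfies, for some $0 < p \le 2$ and predefined $T_c > 0$,

$$\dot V \le -\frac{\pi}{p\,T_c}\left(V^{1-\frac{p}{2}} + V^{1+\frac{p}{2}}\right).$$

Separating variables in the worst case and substituting $u = V^{p/2}$ gives, for any initial value $V_0$,

$$T(V_0) \le \frac{p\,T_c}{\pi}\int_0^{V_0}\frac{\mathrm{d}V}{V^{1-\frac{p}{2}}\left(1+V^{p}\right)} = \frac{2\,T_c}{\pi}\int_0^{V_0^{p/2}}\frac{\mathrm{d}u}{1+u^{2}} = \frac{2\,T_c}{\pi}\arctan\!\left(V_0^{p/2}\right) < T_c,$$

so the settling time is bounded by the predefined constant $T_c$ uniformly in the initial condition, which is precisely the defining feature of predefined-time stability.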

Journal ArticleDOI
TL;DR: In this article, the distributed generalized Nash equilibrium seeking problem is recast, through primal-dual analysis, as that of finding a zero of a sum of monotone operators, with each player maintaining local decision estimates and local copies of Lagrangian multipliers.
Abstract: We consider distributed computation of generalized Nash equilibrium (GNE) over networks, in games with shared coupling constraints. Existing methods require that each player has full access to opponents’ decisions. In this paper, we assume that players have only partial-decision information, and can communicate with their neighbors over an arbitrary undirected graph. We recast the problem as that of finding a zero of a sum of monotone operators through primal-dual analysis. To distribute the problem, we doubly augment variables, so that each player has local decision estimates and local copies of Lagrangian multipliers. We introduce a single-layer algorithm, fully distributed with respect to both primal and dual variables. We show its convergence to a variational GNE with fixed step sizes, by reformulating it as a forward–backward iteration for a pair of doubly-augmented monotone operators.

Journal ArticleDOI
TL;DR: This article presents a distributed monitoring scheme that provides attack-detection capabilities for linear large-scale systems, relying on a Luenberger observer together with a bank of unknown-input observers at each subsystem.
Abstract: DC microgrids often present a hierarchical control architecture, requiring integration of communication layers. This leads to the possibility of malicious attackers disrupting the overall system. Motivated by this application, in this article, we present a distributed monitoring scheme to provide attack-detection capabilities for linear large-scale systems. The proposed architecture relies on a Luenberger observer together with a bank of unknown-input observers at each subsystem, providing attack detection capabilities. We describe the architecture and analyze conditions under which attacks are guaranteed to be detected, and, conversely, when they are stealthy . Our analysis shows that some classes of attacks cannot be detected using either module independently; rather, by exploiting both modules simultaneously, we are able to improve the detection properties of the diagnostic tool as a whole. Theoretical results are backed up by simulations, where our method is applied to a realistic model of a low-voltage DC microgrid under attack.

Journal ArticleDOI
TL;DR: Following the emulation approach, it is shown how to design local triggering generators to ensure input-to-state stability and $\mathcal {L}_p$ stability for the overall system based on a continuous-time output-feedback controller that robustly stabilizes the network-free system.
Abstract: Periodic event-triggered control (PETC) is an appealing paradigm for the implementation of controllers on platforms with limited communication resources, a typical example being networked control systems. In PETC, transmissions over the communication channel are triggered by an event generator, which depends solely on the available plant and controller data and is only evaluated at given sampling instants to enable its digital implementation. In this paper, we consider the general scenario, where the controller communicates with the plant via multiple decoupled networks. Each network may contain multiple nodes, in which case a dedicated protocol is used to schedule transmissions among these nodes. The transmission instants over the networks are asynchronous and generated by local event generators. At given sampling instants, the local event generator evaluates a rule, which only involves the measurements and the control inputs available locally, to decide whether a transmission is needed over the considered network. Following the emulation approach, we show how to design local triggering generators to ensure input-to-state stability and $\mathcal {L}_p$ stability for the overall system based on a continuous-time output-feedback controller that robustly stabilizes the network-free system. The method is applied to a class of Lipschitz nonlinear systems, for which we formulate the design conditions as linear matrix inequalities. The effectiveness of the scheme is illustrated via simulations of a nonlinear example.

Journal ArticleDOI
TL;DR: In this approach, command filters and one neural network are applied to reconstruct approximations of the unknown nonlinearities, which are related to the system uncertainties including the system's unmodeled dynamics and external disturbances; the Lyapunov stability criterion is used to prove the stability of the closed-loop system.
Abstract: This paper presents an improved backstepping control implementation scheme for an $n$-dimensional strict-feedback uncertain nonlinear system based on command filtered backstepping and adaptive neural network backstepping. In this approach, $n$ command filters and one neural network are applied to reconstruct approximations of the unknown nonlinearities, which are related to the system uncertainties including the system's unmodeled dynamics and external disturbances. Then, one can use the negative feedback of these approximations to compensate for the system uncertainties. Moreover, convex optimization and soft computing techniques are adopted to design the update law of the neural network weights, and the Lyapunov stability criterion is used to prove the stability of the closed-loop system. Finally, simulation results are given to show the effectiveness of the proposed methods.

Journal ArticleDOI
TL;DR: The unscented Kalman filtering (UKF) problem is investigated for a class of general nonlinear systems with stochastic uncertainties under communication protocols and two resource-saving UKF algorithms are developed, where the impact from the underlying protocols on the filter design is explicitly quantified.
Abstract: In this paper, the unscented Kalman filtering (UKF) problem is investigated for a class of general nonlinear systems with stochastic uncertainties under communication protocols. A modified unscented transformation is put forward to account for stochastic uncertainties caused by modeling errors. For preventing data collisions and mitigating communication burden, the round-robin protocol and the weighted try-once-discard protocol are, respectively, introduced to regulate the data transmission order from sensors to the filter. Then, by employing two kinds of data-holding strategies (i.e., zero-order holder and zero input) for those nodes without transmission privilege, two novel protocol-based measurement models are formulated. Subsequently, by resorting to the sigma point approximation method, two resource-saving UKF algorithms are developed, where the impact from the underlying protocols on the filter design is explicitly quantified. Finally, compared with the protocol-based extended Kalman filtering algorithms, a simulation example is presented to demonstrate the effectiveness of the proposed protocol-based UKF algorithms.
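
The effect of the round-robin protocol with a zero-order-hold strategy on the measurement vector that actually reaches the filter can be sketched in a few lines; the node count, measurements, and noise below are hypothetical, and the UKF itself is omitted.

```python
import numpy as np

def round_robin_zoh(y_current, y_held, k):
    """Round-robin protocol with zero-order hold (illustrative sketch).

    At time step k only sensor node  k mod N  is granted network access and
    transmits its measurement; the filter keeps the previously received
    values for all other nodes.
    """
    granted = k % len(y_current)          # node holding the transmission token
    y_new = y_held.copy()
    y_new[granted] = y_current[granted]
    return y_new

# Toy usage: three sensor nodes observed over five time steps.
rng = np.random.default_rng(4)
y_held = np.zeros(3)
for k in range(5):
    y_k = np.array([1.0, 2.0, 3.0]) + 0.1 * rng.standard_normal(3)
    y_held = round_robin_zoh(y_k, y_held, k)
    print(k, np.round(y_held, 3))
```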

Journal ArticleDOI
TL;DR: A distributed consensus observer for multiagent systems with high-order integrator dynamics to estimate the leader state and stability analysis is carefully studied to explore the convergence properties under undirected and directed communication.
Abstract: This article presents a distributed consensus observer for multiagent systems with high-order integrator dynamics to estimate the leader state. Stability analysis is carefully studied to explore the convergence properties under undirected and directed communication, respectively. Using Lyapunov functions, fixed-time (resp. finite-time) stability is guaranteed for the undirected (resp. directed) interaction topology. Finally, simulation results are presented to demonstrate the theoretical findings.

Journal ArticleDOI
TL;DR: This paper establishes a bridge between the asymptotic stability of a probabilistic Boolean network (PBN) and a solution to its induced equations, which are derived from the PBN's transition matrix by routine use of the semitensor product technique.
Abstract: This paper is devoted to establishing a bridge between the asymptotic stability of a probabilistic Boolean network (PBN) and a solution to its induced equations, which are derived from the PBN's transition matrix. By routinely utilizing the semitensor product technique, the dynamics of a PBN with coincident state delays can be equivalently converted into that of a higher dimensional PBN without delays. Subsequently, several novel stability criteria are derived from the standpoint of the equations' solutions. The most significant finding is that a PBN is globally asymptotically stable at a predesignated one-point distribution if and only if the vector obtained by appending 1 at the bottom of this distribution is the unique nonnegative solution to the PBN's induced equations. Moreover, the influence of coincident state delays on the PBN's asymptotic stability is explicitly analyzed without consideration of the convergence rate. Consequently, such bounded state delays are verified to have no impact on the PBN's stability, even when the delays are time-varying. Based on this observation, the time complexity of the aforementioned approach can be reduced by removing the delays directly. Furthermore, this procedure can be used to reduce the time complexity of some previous results in the literature to a certain extent. Two examples are employed to demonstrate the feasibility and effectiveness of the obtained theoretical results.

Journal ArticleDOI
TL;DR: A distributed optimization algorithm is proposed that minimizes a sum of convex functions over time-varying, random directed graphs; it relies on a novel information mixing approach that exploits both row- and column-stochastic weights to achieve agreement toward the optimal solution when the underlying graph is directed.
Abstract: In this article, we provide a distributed optimization algorithm, termed TV-$\mathcal{AB}$, that minimizes a sum of convex functions over time-varying, random directed graphs. Contrary to the existing work, the proposed algorithm does not require estimating the (non-$\mathbf{1}$) Perron eigenvector of a stochastic matrix. Instead, the proposed approach relies on a novel information mixing approach that exploits both row- and column-stochastic weights to achieve agreement toward the optimal solution when the underlying graph is directed. We show that TV-$\mathcal{AB}$ converges linearly to the optimal solution when the global objective is smooth and strongly convex, and the underlying time-varying graphs exhibit bounded connectivity, i.e., the union of every $C$ consecutive graphs is strongly connected. We derive the convergence results based on the stability analysis of a linear system of inequalities along with a matrix perturbation argument. Simulations confirm the findings in this article.

Journal ArticleDOI
TL;DR: A learning feedback linearizing control law using online closed-loop identification is proposed; its event-triggered updates ensure high data efficiency and thereby reduce computational complexity, which is a major barrier to using Gaussian processes under real-time constraints.
Abstract: Combining control engineering with nonparametric modeling techniques from machine learning allows for the control of systems without an analytic description using data-driven models. Most of the existing approaches separate learning, i.e., system identification based on a fixed dataset, from control, i.e., the execution of the model-based control law. This separation makes the performance highly sensitive to the initial selection of training data and possibly requires very large datasets. This article proposes a learning feedback linearizing control law using online closed-loop identification. The employed Gaussian process model updates its training data only if the model uncertainty becomes too large. This event-triggered online learning ensures high data efficiency and thereby reduces computational complexity, which is a major barrier to using Gaussian processes under real-time constraints. We propose safe forgetting strategies for data points to adhere to budget constraints and to further increase data efficiency. We show asymptotic stability for the tracking error under the proposed event-triggering law and illustrate the effective identification and control in simulation.
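
A compact sketch of the two ingredients combined above: a Gaussian process model of the unknown drift dynamics that is updated online only when its predictive uncertainty exceeds a threshold (the event trigger), and a feedback linearizing law built from the current model. The scalar plant, the assumption of a known constant input gain, the kernel, and the thresholds are hypothetical simplifications; in practice the training targets would come from measured state derivatives rather than the true drift.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical scalar plant  x_dot = f(x) + g*u  with unknown f and known g.
f_true = lambda x: -np.sin(x) + 0.5 * x
g, dt, sigma_max = 1.0, 0.01, 0.05

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4)
X_train, y_train = [0.0], [f_true(0.0)]          # seed data point
gp.fit(np.array(X_train).reshape(-1, 1), np.array(y_train))

x, x_ref = 2.0, 0.0
for k in range(2000):
    f_hat, std = gp.predict(np.array([[x]]), return_std=True)
    # Event-triggered model update: add a data point only when the GP is
    # too uncertain at the current state (target uses f_true as a stand-in
    # for a measured derivative).
    if std[0] > sigma_max:
        X_train.append(x); y_train.append(f_true(x))
        gp.fit(np.array(X_train).reshape(-1, 1), np.array(y_train))
        f_hat, _ = gp.predict(np.array([[x]]), return_std=True)
    # Feedback linearizing law with a simple proportional outer loop.
    v = -2.0 * (x - x_ref)
    u = (v - f_hat[0]) / g
    x += dt * (f_true(x) + g * u)                # simulate the true plant

print("final tracking error:", abs(x - x_ref), " data points:", len(X_train))
```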

Journal ArticleDOI
TL;DR: Under the proposed control framework, all the agents’ outputs converge to the minimizer of the global cost function in finite time and the distributed finite-time optimization goal is achieved.
Abstract: In this article, the distributed finite-time optimization problem is studied for integrator chain multiagent systems with mismatched and matched disturbances and quadratic-like local cost functions. The agents’ models are permitted to be heterogeneous with different orders ranging from first-order to higher order forms. To solve the problem, a nonsmooth embedded control framework is established, which consists of two parts. In the first part, by using nonsmooth control theory and designing some distributed finite-time estimators to estimate the gradients of the agents’ local cost functions, a distributed finite-time optimal signal generator with fractional powers is constructed, whose output signals converge to the minimizer of the global function in finite time. In the second part, by embedding the generator into the feedback loop, taking its output signals as the local optimal reference outputs for the agents, and combining nonsmooth control and finite-time disturbance observer techniques together, some feedforward-feedback composite tracking controllers are designed for the agents to track their local optimal reference outputs in finite time. Under the proposed control framework, all the agents’ outputs converge to the minimizer of the global cost function in finite time and the distributed finite-time optimization goal is achieved. Numerical simulations demonstrate the effectiveness of the proposed control framework.

Journal ArticleDOI
TL;DR: To achieve the DOC for linear multiagent systems with unmeasurable states, an observer-based event-triggered control law is proposed and it is proved that no Zeno behavior is exhibited and the global asymptotic convergence is preserved.
Abstract: This note considers the distributed optimal coordination (DOC) problem for heterogeneous linear multiagent systems. The local gradients are locally Lipschitz and the local convexity constants are unknown. A control law is proposed to drive the states of all agents to the optimal coordination that minimizes a global objective function. By exploring certain features of the invariant projection of the Laplacian matrix, the global asymptotic convergence is guaranteed utilizing only local interaction. The proposed control law is then extended with event-triggered communication schemes, which removes the requirement for continuous communications. Under the event-triggered control law, it is proved that no Zeno behavior is exhibited and the global asymptotic convergence is preserved. The proposed control laws are fully distributed, in the sense that the control design only uses the information in the connected neighborhood. Furthermore, to achieve the DOC for linear multiagent systems with unmeasurable states, an observer-based event-triggered control law is proposed. A simulation example is given to validate the proposed control laws.