
Showing papers in "Automatica in 2014"


Journal ArticleDOI
TL;DR: This survey reviews the vast literature on the theory and the applications of complex oscillator networks, focusing on phase oscillator models that are widespread in real-world synchronization phenomena, that generalize the celebrated Kuramoto model, and that feature a rich phenomenology.

1,021 citations
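As an illustrative aside (not taken from the survey), a minimal simulation of the classical Kuramoto model that the reviewed phase-oscillator networks generalize; the network size, coupling gain K, and natural frequencies below are arbitrary choices:

```python
import math

def kuramoto_step(theta, omega, K, dt):
    """One Euler step of the all-to-all Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(theta)
    return [
        (theta[i] + dt * (omega[i] + K / n * sum(math.sin(theta[j] - theta[i])
                                                 for j in range(n)))) % (2 * math.pi)
        for i in range(n)
    ]

def order_parameter(theta):
    """Magnitude r of the mean phase vector; r = 1 means full synchronization."""
    n = len(theta)
    return math.hypot(sum(math.cos(t) for t in theta) / n,
                      sum(math.sin(t) for t in theta) / n)

# Identical natural frequencies and strong coupling: the phases lock.
theta = [0.0, 2.0, 4.0, 5.5]
omega = [1.0, 1.0, 1.0, 1.0]
for _ in range(2000):
    theta = kuramoto_step(theta, omega, K=2.0, dt=0.01)
r = order_parameter(theta)  # approaches 1 as the oscillators synchronize
```

With heterogeneous frequencies, synchronization instead requires the coupling K to exceed a critical threshold — one of the phenomena the survey reviews.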


Journal ArticleDOI
TL;DR: This survey reviews kernel-based regularization and its connections with reproducing kernel Hilbert spaces and Bayesian estimation of Gaussian processes, demonstrating that learning techniques tailored to the specific features of dynamic systems may outperform conventional parametric approaches for the identification of stable linear systems.

683 citations


Journal ArticleDOI
TL;DR: A sampled-data consensus control protocol is presented, with which the consensus problem for distributed multi-agent systems can be transformed into the stability problem for a system with a time-varying delay.

589 citations
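As a toy illustration (not the paper's protocol), a discrete-time consensus iteration in which each agent moves toward the values its neighbors held at the last sampling instant; the ring topology, step size h, and initial states are hypothetical:

```python
def consensus_step(x, neighbors, h):
    """One sampled-data update: x_i <- x_i + h * sum_{j in N(i)} (x_j - x_i)."""
    return [xi + h * sum(x[j] - xi for j in neighbors[i])
            for i, xi in enumerate(x)]

# Hypothetical undirected ring of four agents.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = [1.0, 3.0, 5.0, 7.0]
for _ in range(200):
    x = consensus_step(x, neighbors, h=0.1)
# All states converge to the initial average, 4.0.
```

The step size matters: h = 0.1 keeps 1 − h·λ inside the unit circle for every Laplacian eigenvalue λ of this ring, and it is exactly this kind of sampling constraint that the paper's time-delay stability analysis formalizes.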


Journal ArticleDOI
TL;DR: The advantage of the event-based strategy is a significant decrease in the number of controller updates for cooperative tasks of multi-agent systems involving embedded microprocessors with limited on-board resources.

565 citations
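To make the update-saving idea concrete, here is a hypothetical sketch (not the paper's scheme): each agent iterates on last-broadcast values and rebroadcasts its state only when it drifts more than a threshold eps from its previous broadcast, so the number of updates stays far below one per agent per step:

```python
def event_triggered_consensus(x, neighbors, h, eps, steps):
    """Consensus on broadcast values; an agent rebroadcasts (an 'event')
    only when its true state deviates from its last broadcast by more than eps."""
    xb = list(x)   # last broadcast values
    updates = 0
    for _ in range(steps):
        x = [xi + h * sum(xb[j] - xb[i] for j in neighbors[i])
             for i, xi in enumerate(x)]
        for i, xi in enumerate(x):
            if abs(xi - xb[i]) > eps:
                xb[i] = xi
                updates += 1
    return x, updates

# Hypothetical complete graph on three agents.
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
x, updates = event_triggered_consensus([0.0, 4.0, 8.0], neighbors,
                                       h=0.1, eps=0.05, steps=300)
# States cluster near the initial average 4.0 using far fewer than the
# 900 updates that a per-step broadcast by all three agents would need.
```

The static threshold trades exact consensus for a bounded residual disagreement of order eps — the usual price of event-triggered schemes.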


Journal ArticleDOI
TL;DR: An existence criterion for the desired asynchronous filter with a piecewise-homogeneous Markov chain is proposed in terms of a set of linear matrix inequalities, and a numerical example is given to show the effectiveness and potential of the developed theoretical results.

533 citations


Journal ArticleDOI
TL;DR: An observer-based, mode-dependent control scheme is developed to stabilize the resulting overall closed-loop jump system and to eliminate the effects of sensor faults and disturbances.

514 citations


Journal ArticleDOI
TL;DR: The proposed state feedback controller isolates the aforementioned output performance characteristics from the selection of control gains and exhibits strong robustness against model uncertainties, while completely avoiding the "explosion of complexity" issue raised by backstepping-like approaches typically employed in the control of pure-feedback systems.

498 citations


Journal ArticleDOI
TL;DR: A systematic design method of full-order sliding-mode control for nonlinear systems is presented, which allows both the chattering and singularity problems to be resolved.

471 citations


Journal ArticleDOI
TL;DR: This formulation extends the integral reinforcement learning (IRL) technique, a method for solving optimal regulation problems, to learn the solution to the optimal tracking control problem (OTCP), while also taking the input constraints into account a priori.

440 citations


Journal ArticleDOI
TL;DR: This paper addresses distributed state estimation over a sensor network wherein each node, equipped with processing, communication, and sensing capabilities, repeatedly fuses local information with information from its neighbors; a novel distributed state estimator is derived.

429 citations


Journal ArticleDOI
TL;DR: An integral reinforcement learning algorithm on an actor-critic structure is developed to learn online the solution to the Hamilton-Jacobi-Bellman equation for partially-unknown constrained-input systems. It is shown that with this technique, an easy-to-check condition on the richness of the recorded data is sufficient to guarantee convergence to a near-optimal control law.

Journal ArticleDOI
TL;DR: It is proved that global boundedness of trajectories is ensured for all practical choices of these parameters, and a design criterion is given for the controller gains and setpoints such that a desired steady-state active power distribution is achieved.

Journal ArticleDOI
TL;DR: It is proved that the boundedness of all closed-loop signals and asymptotic consensus tracking for all the subsystems' outputs are ensured, and the design strategy is successfully applied to solve a formation control problem for multiple nonholonomic mobile robots.

Journal ArticleDOI
TL;DR: A novel approach based on the Q-learning algorithm is proposed to solve the infinite-horizon linear quadratic tracker (LQT) for unknown discrete-time systems in a causal manner; the optimal control input is obtained by solving only an augmented algebraic Riccati equation (ARE).

Journal ArticleDOI
TL;DR: A sufficient condition ensuring the asymptotic stability of switched continuous-time systems with all modes unstable is proposed, using a discretized Lyapunov function approach in the framework of dwell time.

Journal ArticleDOI
TL;DR: It is shown that the feasibility of the event-triggered MPC algorithm can be guaranteed if the prediction horizon is designed properly and the disturbances are small enough, and that the state trajectory converges to a robust invariant set under the proposed conditions.

Journal ArticleDOI
TL;DR: The proposed fractional-order sliding-mode extremum seeking control (FO SM-ESC) involves the fractional-order derivative term 0Dt^q sgn(e), 0 < q < 1.

Journal ArticleDOI
TL;DR: A perspective on feedback control's growth is presented, and the interplay of industry, applications, technology, theory and research is discussed.

Journal ArticleDOI
TL;DR: The goals of this paper are to highlight certain properties of the models which greatly influence the control law design; overview the literature; present two control design approaches in depth; and indicate some of the many open problems.

Journal ArticleDOI
TL;DR: This paper first gives necessary conditions for achieving global consensus via a distributed protocol based on relative state measurements of the agent itself and its neighboring agents, and then focuses on two special cases, where the agent model is either neutrally stable or a double integrator.

Journal ArticleDOI
TL;DR: This paper models opinions as continuous scalars ranging from 0 to 1, with 1 (0) representing an extremely positive (negative) opinion, and studies whether an equilibrium can emerge as a result of such local interactions and how such an equilibrium may depend on the network structure, the initial opinions of the agents, the location of stubborn agents, and the extent of their stubbornness.
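As a toy illustration under assumptions not in the paper (a four-agent path graph with two fully stubborn extremists), DeGroot-style averaging shows how stubborn agents pin the equilibrium and keep disagreement alive:

```python
def opinion_step(x, neighbors, stubborn):
    """Each non-stubborn agent averages its own opinion with its neighbors';
    stubborn agents never change."""
    return [xi if i in stubborn
            else sum(x[j] for j in neighbors[i] + [i]) / (len(neighbors[i]) + 1)
            for i, xi in enumerate(x)]

# Path graph 0-1-2-3 with stubborn extremists at both ends.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = [0.0, 0.2, 0.9, 1.0]
for _ in range(500):
    x = opinion_step(x, neighbors, stubborn={0, 3})
# The middle agents settle at 1/3 and 2/3: a persistent disagreement
# interpolating between the stubborn opinions 0 and 1.
```

The equilibrium is independent of the middle agents' initial opinions here; it is fixed by the stubborn agents' locations and values, which is the kind of dependence the paper analyzes.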

Journal ArticleDOI
TL;DR: The approach presented in this paper provides both a decentralized control law and a decentralized communication policy, designing thresholds that depend only on local information and guarantee asymptotic consensus.

Journal ArticleDOI
TL;DR: It is shown that, if the delays are constant and exactly known, the consensus problems can be solved by both full state feedback and observer based output feedback protocols for arbitrarily large yet bounded delays.

Journal ArticleDOI
TL;DR: This communique proposes a multivariable super-twisting sliding mode structure which represents an extension of the well-known single input case and is used to create a sliding mode observer to detect and isolate faults for a satellite system.
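For intuition, a scalar (single-input) super-twisting sketch under assumed gains and disturbance, not the paper's multivariable design: the control signal is continuous, yet it drives the sliding variable s to a small neighborhood of zero despite a bounded matched disturbance.

```python
import math

def supertwist_sim(k1, k2, dt, steps):
    """Simulate s_dot = u + d(t) under the super-twisting law
    u = -k1*sqrt(|s|)*sign(s) + v,  v_dot = -k2*sign(s)  (explicit Euler)."""
    s, v, t = 1.0, 0.0, 0.0
    for _ in range(steps):
        d = 0.4 * math.sin(2.0 * t)          # disturbance with |d'(t)| <= 0.8
        sgn = (s > 0) - (s < 0)
        u = -k1 * math.sqrt(abs(s)) * sgn + v
        s += dt * (u + d)
        v += dt * (-k2 * sgn)
        t += dt
    return s

# Gains chosen (conservatively) above the disturbance's Lipschitz bound.
s_final = supertwist_sim(k1=3.0, k2=2.0, dt=0.001, steps=20000)
# |s_final| ends up in a small neighborhood of zero.
```

The integral term v effectively reconstructs −d(t), which is also why the same structure serves as a disturbance/fault estimator in the observer application the paper describes.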

Journal ArticleDOI
TL;DR: This survey addresses stability analysis for stochastic hybrid systems (SHS) by reviewing many of the stability concepts that have been studied, including Lyapunov stability, Lagrange stability, asymptotic stability, and recurrence.

Journal ArticleDOI
TL;DR: It is shown that even with the significantly reduced sampling frequency, global uniform ultimate boundedness of the event-driven closed-loop systems can still be guaranteed.

Journal ArticleDOI
TL;DR: A novel scenario-based model predictive control (SCMPC) method can be devised for general linear systems with additive and multiplicative disturbances, for which the number of scenarios is significantly reduced.

Journal ArticleDOI
TL;DR: A distributed adaptive consensus protocol is proposed to ensure the boundedness of the consensus error of linear multi-agent systems subject to different matching uncertainties for both the cases without and with a leader of bounded unknown control input.

Journal ArticleDOI
TL;DR: A survey of the most significant results in robust control theory, including robust stability analysis for systems with unstructured uncertainty, robustness analysis for systems with structured uncertainty, and robust control system design including H∞ control methods.

Journal ArticleDOI
TL;DR: This paper addresses the model-free nonlinear optimal control problem by introducing a reinforcement learning (RL) technique: a data-based approximate policy iteration (API) method that uses real system data rather than a system model.