scispace - formally typeset
Author

Alessandro N. Vargas

Bio: Alessandro N. Vargas is an academic researcher from Polytechnic University of Catalonia. The author has contributed to research on topics including linear systems and Markov chains. The author has an h-index of 14 and has co-authored 75 publications receiving 755 citations. Previous affiliations of Alessandro N. Vargas include University of Brasília & Basque Center for Applied Mathematics.


Papers
Journal ArticleDOI
TL;DR: With the proposed systematization of the Unscented Kalman Filter theory, the symmetric sets of sigma points in the literature are formally justified, and the proposed SRUKF has improved computational properties when compared to state-of-the-art methods.
Abstract: In this paper, we propose a systematization of the (discrete-time) Unscented Kalman Filter (UKF) theory. We gather all available UKF variants in the literature, present corrections to theoretical inconsistencies, and provide a tool for constructing new UKFs in a consistent way. This systematization is done mainly by revisiting the concepts of Sigma-Representation, Unscented Transformation (UT), Scaled Unscented Transformation (SUT), UKF, and Square-Root Unscented Kalman Filter (SRUKF). The inconsistencies relate to 1) matching the order of the transformed covariance and cross-covariance matrices of both the UT and the SUT; 2) multiple UKF definitions; 3) issues with some reduced sets of sigma points described in the literature; 4) the conservativeness of the SUT; 5) the scaling effect of the SUT on both its transformed covariance and cross-covariance matrices; and 6) possibly ill-conditioned results in SRUKFs. With the proposed systematization, the symmetric sets of sigma points in the literature are formally justified, and we are able to provide new consistent UKF variants, such as scaled SRUKFs and UKFs composed of the minimum number of sigma points. Furthermore, our proposed SRUKF has improved computational properties compared with state-of-the-art methods.
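The symmetric sigma-point set that the paper formally justifies can be illustrated with a minimal unscented transform (a sketch of the classical 2n+1-point construction only, not the paper's generalized sigma-representations; `kappa` stands in for the usual scalar tuning parameter):

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=0.0):
    """Propagate (mean, cov) through a nonlinearity f using the
    symmetric set of 2n+1 sigma points. Minimal illustrative sketch."""
    n = mean.size
    # Columns of the Cholesky factor of the scaled covariance give the spread
    S = np.linalg.cholesky((n + kappa) * cov)
    sigmas = [mean] + [mean + S[:, i] for i in range(n)] \
                    + [mean - S[:, i] for i in range(n)]
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2.0 * (n + kappa))
    weights = np.array([w0] + [wi] * (2 * n))   # weights sum to 1
    ys = np.array([f(s) for s in sigmas])
    y_mean = weights @ ys
    y_cov = sum(w * np.outer(y - y_mean, y - y_mean)
                for w, y in zip(weights, ys))
    return y_mean, y_cov
```

For a linear map the transform is exact, which is a quick sanity check of the construction.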

210 citations

Journal ArticleDOI
TL;DR: A control strategy for Markov jump linear systems (MJLS) with no access to the Markov state (or mode) is presented and its usefulness is illustrated by an application that considers the velocity control of a DC motor device subject to abrupt failures that is modeled as an MJLS.
Abstract: This brief presents a control strategy for Markov jump linear systems (MJLS) with no access to the Markov state (or mode). The controller is assumed to be in the linear state-feedback format, and the aim of the control problem is to design a static mode-independent gain that minimizes a bound on the corresponding H2-cost. This approach has practical appeal, since it is often difficult to measure or estimate the actual operating mode. The result of the proposed method is compared with that of a previous design, and its usefulness is illustrated by an application considering the velocity control of a DC motor device subject to abrupt failures, modeled as an MJLS.
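The mode-independent setting can be pictured with a small closed-loop simulation (all matrices and the gain below are hypothetical, chosen only to show the structure; they are not produced by the paper's design method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-mode MJLS: x_{k+1} = A[m] x_k + B[m] u_k,
# with the mode m following a Markov chain the controller cannot observe.
A = [np.array([[1.0, 0.1], [0.0, 0.90]]),   # nominal mode
     np.array([[1.0, 0.1], [0.0, 1.05]])]   # "failure" mode
B = [np.array([[0.0], [0.10]]),
     np.array([[0.0], [0.05]])]
P = np.array([[0.95, 0.05],                 # P[i, j] = Prob(next mode j | mode i)
              [0.30, 0.70]])

K = np.array([[2.0, 3.0]])                  # one static, mode-independent gain

x = np.array([[1.0], [0.0]])
mode = 0
cost = 0.0
for k in range(200):
    u = -K @ x                              # same gain whatever the (unknown) mode is
    cost += (x.T @ x + u.T @ u).item()      # running quadratic cost
    x = A[mode] @ x + B[mode] @ u
    mode = rng.choice(2, p=P[mode])         # Markov jump
print(cost)
```

The point of the illustration is only that the same gain K acts in both modes; choosing K to minimize a bound on the H2-cost is the paper's contribution.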

91 citations

Journal ArticleDOI
TL;DR: In this paper, the control problem of discrete-time Markov jump linear systems for the case in which the controller does not have access to the state of the Markov chain is addressed, and a necessary optimal condition is introduced, which is nonlinear with respect to the optimizing variables, and the corresponding solution is obtained through a variational convergent method.
Abstract: This paper deals with the control problem of discrete-time Markov jump linear systems for the case in which the controller does not have access to the state of the Markov chain. A necessary optimal condition, which is nonlinear with respect to the optimizing variables, is introduced, and the corresponding solution is obtained through a variational convergent method. We illustrate the practical usefulness of the derived approach by applying it to the speed control of a real DC motor device subject to abrupt power failures. Copyright © 2012 John Wiley & Sons, Ltd.

88 citations

Journal ArticleDOI
TL;DR: The note presents an algorithm for the average cost control problem of continuous-time Markov jump linear systems that derives a global convergent algorithm that generates a gain satisfying necessary optimality conditions.
Abstract: The note presents an algorithm for the average cost control problem of continuous-time Markov jump linear systems. The controller assumes a linear state-feedback form, and the corresponding control gain does not depend on the Markov chain. In this scenario, the control problem is that of minimizing the long-run average cost. To solve the problem, we derive a globally convergent algorithm that generates a gain satisfying necessary optimality conditions. Our algorithm has practical implications, as illustrated by the experiments that were carried out to control an electronic dc-dc buck converter. The buck converter supplied a load that suffered abrupt changes driven by a homogeneous Markov chain. Moreover, the source of the buck converter also suffered abrupt Markov-driven changes. The experimental results support the usefulness of our algorithm.

53 citations

Journal ArticleDOI
TL;DR: The stability and the problem of ℋ2 guaranteed cost computation for discrete-time Markov jump linear systems (MJLS) are investigated, assuming that the transition probability matrix is not precisely known, and a sequence of linear matrix inequalities (LMIs) is proposed to verify stability and to solve the ℋ2 guaranteed cost with increasing precision.
Abstract: The stability and the problem of ℋ2 guaranteed cost computation for discrete-time Markov jump linear systems (MJLS) are investigated, assuming that the transition probability matrix is not precisely known. It is generally difficult to estimate the exact transition matrix of the underlying Markov chain, so this setting is of special interest for applications of MJLS. The exact matrix is assumed to belong to a polytopic domain made up of known probability matrices, and a sequence of linear matrix inequalities (LMIs) is proposed to verify stability and to solve the ℋ2 guaranteed cost with increasing precision. These LMI problems are connected to homogeneous polynomially parameter-dependent Lyapunov matrices of increasing degree g. Mean-square stability (MSS) can be established by the method, since the sufficient conditions eventually turn out to also be necessary, provided that the degree g is large enough. The ℋ2 guaranteed cost under MSS is also studied here, and an extension to cope with th...

41 citations


Cited by
Book ChapterDOI
15 Feb 2011

1,876 citations

Journal Article
TL;DR: Two major figures in adaptive control provide a wealth of material for researchers, practitioners, and students; the book offers information on many new theoretical developments and can be used by mathematical control theory specialists to adapt their research to practical needs.
Abstract: This book, written by two major figures in adaptive control, provides a wealth of material for researchers, practitioners, and students. While some researchers in adaptive control may note the absence of a particular topic, the book's scope represents a high-gain instrument. It can be used by designers of control systems to enhance their work through the information on many new theoretical developments, and can be used by mathematical control theory specialists to adapt their research to practical needs. The book is strongly recommended to anyone interested in adaptive control.

1,814 citations

01 Jan 2015
TL;DR: This compact, informal introduction for graduate students and advanced undergraduates presents the current state-of-the-art filtering and smoothing methods in a unified Bayesian framework and learns what non-linear Kalman filters and particle filters are, how they are related, and their relative advantages and disadvantages.
Abstract: Filtering and smoothing methods are used to produce an accurate estimate of the state of a time-varying system based on multiple observational inputs (data). Interest in these methods has exploded in recent years, with numerous applications emerging in fields such as navigation, aerospace engineering, telecommunications, and medicine. This compact, informal introduction for graduate students and advanced undergraduates presents the current state-of-the-art filtering and smoothing methods in a unified Bayesian framework. Readers learn what non-linear Kalman filters and particle filters are, how they are related, and their relative advantages and disadvantages. They also discover how state-of-the-art Bayesian parameter estimation methods can be combined with state-of-the-art filtering and smoothing algorithms. The book’s practical and algorithmic approach assumes only modest mathematical prerequisites. Examples include MATLAB computations, and the numerous end-of-chapter exercises include computational assignments. MATLAB/GNU Octave source code is available for download at www.cambridge.org/sarkka, promoting hands-on work with the methods.
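The recursive Bayesian update that the book builds on reduces, in the linear-Gaussian scalar case, to a few lines (a minimal 1-D Kalman filter for a random-walk state; the noise values below are purely illustrative):

```python
import numpy as np

def kalman_1d(ys, q=0.01, r=0.25, m0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state x_k = x_{k-1} + w_k
    observed as y_k = x_k + v_k, with w ~ N(0, q) and v ~ N(0, r).
    Returns the filtered means."""
    m, p = m0, p0
    means = []
    for y in ys:
        p = p + q                 # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        m = m + k * (y - m)       # update mean toward the measurement
        p = (1 - k) * p           # posterior variance shrinks
        means.append(m)
    return np.array(means)
```

Fed a constant measurement stream, the estimate converges to that constant, which is the simplest check that the predict/update cycle is wired correctly.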

1,102 citations

01 Jan 2016
Stochastic Differential Equations and Applications

741 citations

Journal ArticleDOI
TL;DR: In this article, a model predictive control (MPC) approach is presented that solves an open-loop constrained optimal control problem (OCP) repeatedly in a receding-horizon manner, where the OCP is solved over a finite sequence of control actions at every sampling instant at which the current state of the system is measured.
Abstract: Model predictive control (MPC) has demonstrated exceptional success for the high-performance control of complex systems [1], [2]. The conceptual simplicity of MPC, as well as its ability to effectively cope with the complex dynamics of systems with multiple inputs and outputs, input and state/output constraints, and conflicting control objectives, has made it an attractive multivariable constrained control approach [1]. MPC (a.k.a. receding-horizon control) solves an open-loop constrained optimal control problem (OCP) repeatedly in a receding-horizon manner [3]. The OCP is solved over a finite sequence of control actions {u0, u1, ..., uN-1} at every sampling instant at which the current state of the system is measured. The first element of the sequence of optimal control actions is applied to the system, and the computations are then repeated at the next sampling time. Thus, MPC replaces a precomputed feedback control law, which can require formidable offline computation, with the repeated solution of an open-loop OCP [2]. In fact, the repeated solution of the OCP confers an "implicit" feedback action to MPC to cope with system uncertainties and disturbances. Alternatively, explicit MPC approaches circumvent the need to solve an OCP online by deriving relationships for the optimal control actions as an "explicit" function of the state and reference vectors. However, explicit MPC is not typically intended to replace standard MPC but, rather, to extend its area of application [4]-[6].
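The receding-horizon recipe described above, solve a finite-horizon OCP, apply the first input, repeat, can be sketched for the unconstrained linear-quadratic case (the double-integrator model and weights below are illustrative; constrained MPC would replace the closed-form linear solve with a QP solver):

```python
import numpy as np

# Illustrative plant: discretized double integrator with sampling time 0.1 s
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q, R, N = np.eye(2), np.array([[0.1]]), 20   # stage weights and horizon

# Batch prediction matrices: x_k = A^k x0 + sum_j A^(k-1-j) B u_j
F = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
G = np.zeros((2 * N, N))
for k in range(1, N + 1):
    for j in range(k):
        G[2*(k-1):2*k, j:j+1] = np.linalg.matrix_power(A, k - 1 - j) @ B
Qbar = np.kron(np.eye(N), Q)
Rbar = np.kron(np.eye(N), R)
H = G.T @ Qbar @ G + Rbar                    # positive-definite Hessian of the OCP

x = np.array([[1.0], [0.0]])
for step in range(50):
    # Solve the unconstrained finite-horizon OCP in one linear solve...
    u_seq = np.linalg.solve(H, -G.T @ Qbar @ F @ x)
    # ...but apply only the first control action, then re-measure and repeat
    x = A @ x + B @ u_seq[:1]
print(float(np.linalg.norm(x)))
```

Applying only u0 and re-solving at every step is exactly the "implicit feedback" the abstract describes: the open-loop optimizer becomes a closed-loop controller through repetition.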

657 citations