Author

Naira Hovakimyan

Bio: Naira Hovakimyan is an academic researcher from the University of Illinois at Urbana–Champaign. The author has contributed to research in topics: Adaptive control & Control theory. The author has an h-index of 48, has co-authored 476 publications receiving 10,255 citations. Previous affiliations of Naira Hovakimyan include Virginia Tech & Wentworth Institute of Technology.


Papers
Book
30 Sep 2010
TL;DR: This book presents a comprehensive overview of the recently developed L1 adaptive control theory, including detailed proofs of the main results, flight test results obtained with this theory, and results not yet published in technical journals and conference proceedings.
Abstract: This book presents a comprehensive overview of the recently developed L1 adaptive control theory, including detailed proofs of the main results. The key feature of the L1 adaptive control theory is the decoupling of adaptation from robustness. The architectures of L1 adaptive control theory have guaranteed transient performance and robustness in the presence of fast adaptation, without enforcing persistent excitation, applying gain-scheduling, or resorting to high-gain feedback. The book also presents flight test results obtained with this theory, including results not yet published in technical journals and conference proceedings. The material is organized into six chapters and concludes with an appendix that summarizes the mathematical results used to support the proofs. Software is available on a supplementary Web page. Audience: L1 Adaptive Control Theory is intended for graduate students; researchers; and aerospace, mechanical, chemical, industrial, and electrical engineers interested in pursuing new directions in research and developing technology at reduced costs. Contents: Foreword; Preface; Chapter 1: Introduction; Chapter 2: State Feedback in the Presence of Matched Uncertainties; Chapter 3: State Feedback in the Presence of Unmatched Uncertainties; Chapter 4: Output Feedback; Chapter 5: L1 Adaptive Controller for Time-Varying Reference Systems; Chapter 6: Applications, Conclusions, and Open Problems; Appendix A: Systems Theory; Appendix B: Projection Operator for Adaptation Laws; Appendix C: Basic Facts on Linear Matrix Inequalities; Bibliography

504 citations

Journal ArticleDOI
TL;DR: A direct adaptive output feedback control design procedure is developed for highly uncertain nonlinear systems that does not rely on state estimation; it extends the universal function approximation property of linearly parameterized neural networks to model unknown system dynamics from input/output data.

431 citations

Posted Content
TL;DR: A novel adaptive control architecture is presented that adapts fast and ensures a uniformly bounded transient response for both the system's input and output signals simultaneously; the proof of asymptotic stability relies on the small-gain theorem.
Abstract: A novel adaptive control architecture is presented that guarantees transient performance for both the system's input and output signals simultaneously.

367 citations

Journal ArticleDOI
TL;DR: In this paper, a low-pass filter in the feedback loop is proposed to ensure a uniformly bounded transient response for both the system's input and output signals simultaneously; the small-gain theorem is used in the proof of asymptotic stability.
Abstract: This paper presents a novel adaptive control architecture that adapts fast and ensures a uniformly bounded transient response for both the system's input and output signals simultaneously. The new architecture has a low-pass filter in the feedback loop and relies on the small-gain theorem for the proof of asymptotic stability. The tools from this paper can be used to develop a theoretically justified verification and validation framework for adaptive systems. Simulations illustrate the theoretical findings.

350 citations
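The architecture described in the abstract above (fast adaptation on a state predictor combined with a low-pass filter in the feedback loop) can be illustrated with a minimal numerical sketch. The scalar plant, gains, filter bandwidth, and projection bounds below are hypothetical choices for illustration, not values from the paper:

```python
# Minimal sketch of an L1-style adaptive loop for a scalar plant
#   x_dot = -m*x + u + theta*x,   with theta an unknown constant.
# All numerical values are hypothetical; the point is the structure:
# a state predictor, a fast adaptation law, and a first-order
# low-pass filter C(s) = w_c / (s + w_c) on the control signal.

def simulate_l1(theta=2.0, m=1.0, gamma=1e3, w_c=20.0,
                r=1.0, dt=1e-4, T=5.0):
    x = x_hat = theta_hat = u = 0.0
    for _ in range(int(T / dt)):
        x_tilde = x_hat - x                       # prediction error
        theta_hat += dt * (-gamma * x_tilde * x)  # fast adaptation law
        theta_hat = min(max(theta_hat, -10.0), 10.0)  # crude projection bound
        eta_hat = -theta_hat * x + m * r          # cancel uncertainty, track r
        u += dt * w_c * (eta_hat - u)             # low-pass filter on the control
        x_hat += dt * (-m * x_hat + u + theta_hat * x)  # state predictor
        x += dt * (-m * x + u + theta * x)        # true plant
    return x

x_final = simulate_l1()  # state settles near the reference r = 1
```

The low-pass filter keeps the control signal smooth even though the adaptation gain is large, which is the decoupling of adaptation from robustness the abstract refers to.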

Journal ArticleDOI
TL;DR: It is argued that it is sufficient to build an observer for the output tracking error of uncertain nonlinear systems; ultimate boundedness of the error signals is shown through Lyapunov's direct method.
Abstract: We consider adaptive output feedback control of uncertain nonlinear systems, in which both the dynamics and the dimension of the regulated system may be unknown. However, the relative degree of the regulated output is assumed to be known. Given a smooth reference trajectory, the problem is to design a controller that forces the system measurement to track it with bounded errors. The classical approach requires a state observer. Finding a good observer for an uncertain nonlinear system is not an obvious task. We argue that it is sufficient to build an observer for the output tracking error. Ultimate boundedness of the error signals is shown through Lyapunov's direct method. The theoretical results are illustrated in the design of a controller for a fourth-order nonlinear system of relative degree two and a high-bandwidth attitude command system for a model R-50 helicopter.

326 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This textbook covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, and combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Posted Content
TL;DR: This paper proposes gradient descent algorithms for a class of utility functions which encode optimal coverage and sensing policies which are adaptive, distributed, asynchronous, and verifiably correct.
Abstract: This paper presents control and coordination algorithms for groups of vehicles. The focus is on autonomous vehicle networks performing distributed sensing tasks where each vehicle plays the role of a mobile tunable sensor. The paper proposes gradient descent algorithms for a class of utility functions which encode optimal coverage and sensing policies. The resulting closed-loop behavior is adaptive, distributed, asynchronous, and verifiably correct.

2,198 citations
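The gradient-descent coverage behavior described above can be illustrated with a Lloyd-style iteration, a standard instance of this class of algorithms: each agent repeatedly moves to the centroid of its Voronoi cell. The 1D domain, uniform density, and agent count below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def lloyd_step(p, grid):
    # Voronoi partition: assign each grid point to its nearest agent
    owner = np.argmin(np.abs(grid[:, None] - p[None, :]), axis=1)
    new_p = p.copy()
    for i in range(len(p)):
        cell = grid[owner == i]
        if cell.size:
            new_p[i] = cell.mean()  # move to cell centroid (gradient step)
    return new_p

grid = np.linspace(0.0, 1.0, 1001)   # uniform density over [0, 1]
p = np.array([0.1, 0.2, 0.3])        # initial agent positions
for _ in range(100):
    p = lloyd_step(p, grid)
# for uniform density, the optimal configuration is [1/6, 1/2, 5/6]
```

Each step is distributed in the sense the abstract describes: an agent's update depends only on its own Voronoi cell, not on global state.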

01 Nov 1981
TL;DR: In this paper, the authors studied the effect of local derivatives on the detection of intensity edges in images, where the local difference of intensities is computed for each pixel in the image.
Abstract: Most of the signal processing that we will study in this course involves local operations on a signal, namely transforming the signal by applying linear combinations of values in the neighborhood of each sample point. You are familiar with such operations from Calculus, namely taking derivatives, and you are also familiar with this from optics, namely blurring a signal. We will be looking at sampled signals only. Let's start with a few basic examples. Local difference: suppose we have a 1D image and we take the local difference of intensities, DI(x) = (1/2)(I(x + 1) − I(x − 1)), which gives a discrete approximation to a partial derivative. (We compute this for each x in the image.) What is the effect of such a transformation? One key idea is that such a derivative would be useful for marking positions where the intensity changes. Such a change is called an edge. It is important to detect edges in images because they often mark locations at which object properties change. These can include changes in illumination along a surface due to a shadow boundary, or a material (pigment) change, or a change in depth as when one object ends and another begins. The computational problem of finding intensity edges in images is called edge detection. We could look for positions at which DI(x) has a large negative or positive value. Large positive values indicate an edge that goes from low to high intensity, and large negative values indicate an edge that goes from high to low intensity. Example: suppose the image consists of a single (slightly sloped) edge:

1,829 citations
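The local-difference operator described in the abstract, DI(x) = (1/2)(I(x + 1) − I(x − 1)), can be sketched directly; the step-edge image below is an illustrative example:

```python
import numpy as np

def local_difference(I):
    # central difference DI(x) = (I(x+1) - I(x-1)) / 2;
    # the endpoints have no two-sided neighborhood and are left at zero
    D = np.zeros_like(I, dtype=float)
    D[1:-1] = (I[2:] - I[:-2]) / 2.0
    return D

image = np.array([0, 0, 0, 10, 10, 10], dtype=float)  # low-to-high step edge
D = local_difference(image)
edge_index = int(np.argmax(np.abs(D)))  # large positive values mark the edge
```

As the abstract notes, the sign of DI(x) distinguishes low-to-high edges (positive) from high-to-low edges (negative).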