
Showing papers presented at the American Control Conference in 1995


Proceedings ArticleDOI
15 Oct 1995
TL;DR: A textbook-style treatment of adaptive control, progressing from models for dynamic systems and stability theory through on-line parameter estimation, parameter identifiers and adaptive observers, model reference adaptive control, and adaptive pole placement control, to robust adaptive laws and robust adaptive control schemes.
Abstract (table of contents):
1. Introduction: Control System Design Steps; Adaptive Control; A Brief History.
2. Models for Dynamic Systems: Introduction; State-Space Models; Input/Output Models; Plant Parametric Models; Problems.
3. Stability: Introduction; Preliminaries; Input/Output Stability; Lyapunov Stability; Positive Real Functions and Stability; Stability of LTI Feedback Systems; Problems.
4. On-Line Parameter Estimation: Introduction; Simple Examples; Adaptive Laws with Normalization; Adaptive Laws with Projection; Bilinear Parametric Model; Hybrid Adaptive Laws; Summary of Adaptive Laws; Parameter Convergence Proofs; Problems.
5. Parameter Identifiers and Adaptive Observers: Introduction; Parameter Identifiers; Adaptive Observers; Adaptive Observer with Auxiliary Input; Adaptive Observers for Nonminimal Plant Models; Parameter Convergence Proofs; Problems.
6. Model Reference Adaptive Control: Introduction; Simple Direct MRAC Schemes; MRC for SISO Plants; Direct MRAC with Unnormalized Adaptive Laws; Direct MRAC with Normalized Adaptive Laws; Indirect MRAC; Relaxation of Assumptions in MRAC; Stability Proofs in MRAC Schemes; Problems.
7. Adaptive Pole Placement Control: Introduction; Simple APPC Schemes; PPC: Known Plant Parameters; Indirect APPC Schemes; Hybrid APPC Schemes; Stabilizability Issues and Modified APPC; Stability Proofs; Problems.
8. Robust Adaptive Laws: Introduction; Plant Uncertainties and Robust Control; Instability Phenomena in Adaptive Systems; Modifications for Robustness: Simple Examples; Robust Adaptive Laws; Summary of Robust Adaptive Laws; Problems.
9. Robust Adaptive Control Schemes: Introduction; Robust Identifiers and Adaptive Observers; Robust MRAC; Performance Improvement of MRAC; Robust APPC Schemes; Adaptive Control of LTV Plants; Adaptive Control for Multivariable Plants; Stability Proofs of Robust MRAC Schemes; Stability Proofs of Robust APPC Schemes; Problems.
Appendices: Swapping Lemmas; Optimization Techniques. Bibliography. Index. License Agreement and Limited Warranty.
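Of the topics listed above, the on-line parameter estimation material (Chapter 4) lends itself to a compact illustration. Below is a minimal sketch, not taken from the book, of a normalized gradient adaptive law for the linear parametric model z = theta_star^T * phi; the gains, regressor signals, and simulation setup are assumptions chosen only to show the mechanics.

import numpy as np

# Minimal sketch: normalized gradient adaptive law for z = theta_star^T phi.
theta_star = np.array([2.0, -1.5])     # unknown true parameters
theta = np.zeros(2)                    # parameter estimate
gamma, dt = 5.0, 0.01                  # adaptation gain and integration step

for k in range(20000):
    t = k * dt
    phi = np.array([np.sin(t), np.cos(2 * t)])   # persistently exciting regressor
    z = theta_star @ phi                          # measured signal
    m2 = 1.0 + phi @ phi                          # normalization signal
    eps = (z - theta @ phi) / m2                  # normalized estimation error
    theta = theta + gamma * eps * phi * dt        # gradient adaptive law

print(theta)   # approaches theta_star under persistent excitation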

4,378 citations



Proceedings ArticleDOI
21 Jun 1995
TL;DR: In this paper, the authors reveal the relationship between eigenvectors and robust Schur stability of uncertain matrices and derive necessary and sufficient eigenvector conditions for robust pole clustering in a general circle region for uncertain system matrices of discrete-time or continuous-time systems.
Abstract: This paper reveals the relationship between eigenvectors and robust Schur stability of uncertain matrices, including the relationship between eigenvectors and Schur stability of matrices. The results are derived for robust pole clustering in a general circle region for uncertain system matrices of either discrete-time or continuous-time systems. Robust Schur stability is treated as a special case of robust pole clustering in a general circle region. The eigenvector conditions for robust pole clustering (or pole clustering) within a general circle are necessary and sufficient. These eigenvector-direction conditions are expressed in a fixed basis formed by the orthonormal eigenvectors of a symmetric matrix, called a criterion matrix. Three types of criterion matrices are adopted: the direct symmetric criterion matrix, the similarity-transformed criterion matrix, and the Lyapunov-type criterion matrix. The uncertainties considered include both structured and unstructured uncertainties.
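To make the pole-clustering setting concrete, the following sketch (not the paper's eigenvector-based test) checks whether all eigenvalues of a matrix lie inside a circle of center c and radius r by testing Schur stability of the shifted and scaled matrix (A - cI)/r, using both a direct eigenvalue check and a Lyapunov-type check; the example matrix and regions are assumed for illustration.

import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def poles_in_circle(A, c=0.0, r=1.0):
    # All eigenvalues of A lie in the circle of center c, radius r
    # exactly when (A - c*I)/r is Schur stable.
    n = A.shape[0]
    As = (A - c * np.eye(n)) / r
    eig_ok = np.all(np.abs(np.linalg.eigvals(As)) < 1.0)
    # Lyapunov-type check: As^T P As - P = -I has a positive definite
    # solution P when As is Schur stable (sketch; assumes the equation is solvable).
    P = solve_discrete_lyapunov(As.T, np.eye(n))
    lyap_ok = np.all(np.linalg.eigvalsh(P) > 0)
    return eig_ok and lyap_ok

A = np.array([[0.5, 0.2],
              [0.0, 0.6]])
print(poles_in_circle(A, c=0.0, r=1.0))    # True: A is Schur stable
print(poles_in_circle(A, c=0.5, r=0.05))   # False: eigenvalue 0.6 lies outside this circle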

8 citations


Proceedings ArticleDOI
21 Jun 1995
TL;DR: In this paper, the authors investigate the feasibility of a decision and control system for life extension and performance enhancement of a reusable rocket engine, such as the Space Shuttle Main Engine (SSME).
Abstract: The goal of life extending control in reusable rocket engines is to achieve high performance without overstraining the mechanical structure; the major benefit is an increase in structural durability with no significant loss of performance. This paper investigates the feasibility of a decision and control system for life extension and performance enhancement of a reusable rocket engine, such as the Space Shuttle Main Engine (SSME). Creep damage in the coolant channel ligament of the main thrust chamber is controlled while engine performance is maximized. For open-loop control of up-thrust transients, an optimal feedforward policy is synthesized based on an integrated model of the plant, structural, and damage dynamics, subject to creep damage constraints in the critical plant component, the coolant channel ligament in the main thrust chamber. The results demonstrate the potential for life extension of reusable rocket engines via damage-mitigating control. The concept of life extending control, as presented in this paper, is not restricted to rocket engines; it can be applied to any system where structural durability is an important issue.
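As a rough illustration of the feedforward optimization described above, the sketch below poses a toy constrained problem: choose a normalized thrust up-ramp that approaches the commanded level quickly while keeping a made-up creep-damage surrogate below a budget. The damage model, horizon, and numbers are assumptions for illustration only, not the paper's integrated plant/structural/damage model.

import numpy as np
from scipy.optimize import minimize

N, dt = 50, 0.1                 # 5 s horizon, discretized
u_cmd = 1.0                     # normalized commanded thrust level

def accumulated_damage(u):
    # Toy damage surrogate: grows with the rate of change of thrust.
    du = np.diff(u, prepend=0.0) / dt
    return np.sum(np.abs(du) ** 1.5) * dt

def performance_cost(u):
    # Penalize lingering below the commanded thrust (slow up-thrust transient).
    return np.sum((u - u_cmd) ** 2) * dt

damage_budget = 2.0
cons = {"type": "ineq", "fun": lambda u: damage_budget - accumulated_damage(u)}
res = minimize(performance_cost, x0=np.linspace(0.0, u_cmd, N),
               constraints=[cons], bounds=[(0.0, 1.2)] * N)

print(res.success, accumulated_damage(res.x))   # feasible ramp within the damage budget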

5 citations



Proceedings Article
01 Jan 1995
TL;DR: Taking the neural network as a neuro model of the system, control signals are obtained directly by minimizing either the instantaneous difference or the cumulative differences between a setpoint and the output of the neuro model.
Abstract: Presents a direct adaptive neural network control strategy for unknown nonlinear systems described by an unknown NARMA model. Taking the neural network as a neuro model of the system, control signals are obtained directly by minimizing either the instantaneous difference or the cumulative differences between a setpoint and the output of the neuro model. An application to a flow-rate control system is studied and the desired results are obtained.
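The control computation described above can be sketched as follows: given a neuro model of the plant (here an untrained stand-in network), the control signal is found by gradient descent on the instantaneous squared difference between the setpoint and the model output, taken with respect to the control input. The network architecture, optimizer, and signal dimensions are assumptions, not the paper's implementation.

import torch

torch.manual_seed(0)
# Stand-in neuro model y_hat = f(y[k-1], y[k-2], u[k]); in practice this would
# be trained on plant data before being used for control.
neuro_model = torch.nn.Sequential(
    torch.nn.Linear(3, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))

def control_signal(y_hist, setpoint, iters=50, lr=0.1):
    u = torch.zeros(1, requires_grad=True)          # decision variable: current input
    opt = torch.optim.SGD([u], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        y_hat = neuro_model(torch.cat([y_hist, u]))  # predicted output of the neuro model
        loss = (setpoint - y_hat).pow(2).sum()       # instantaneous difference
        loss.backward()
        opt.step()                                   # descend on u, not on the weights
    return u.detach()

u = control_signal(y_hist=torch.tensor([0.2, 0.1]), setpoint=torch.tensor([1.0]))
print(u)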

1 citation


Proceedings Article
01 Jan 1995
TL;DR: In this article, the use of iterative dynamic programming employing exact penalty functions for minimum energy control problems is presented, and it is shown that the choice of an appropriate penalty function factor depends on the relative size of the time delay with respect to the final time and the expected value of the energy consumption.
Abstract: This paper presents the use of iterative dynamic programming employing exact penalty functions for minimum energy control problems. We show that exact, continuously non-differentiable penalty functions are superior to continuously differentiable penalty functions in terms of satisfying final state constraints. We also demonstrate that the choice of an appropriate penalty function factor depends on the relative size of the time delay with respect to the final time and on the expected value of the energy consumption. A quadratic approximation (QA) of the delayed variables is much better than a linear approximation (LA) for relatively large time delays; the QA improves the rate of convergence and avoids the formation of 'kinks'. A more general way of selecting appropriate penalty function factors is given, and the results obtained on four illustrative examples of varying complexity corroborate the efficacy of the method.
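The distinction between exact and differentiable penalties can be illustrated on a toy minimum-energy problem (assumed here, not one of the paper's four examples): for a penalty factor larger than the constraint's multiplier, the exact absolute-value penalty drives the terminal error essentially to zero, while the quadratic penalty leaves a residual constraint violation.

import numpy as np
from scipy.optimize import minimize

# Toy problem: single integrator x[k+1] = x[k] + u[k]*dt, minimize the control
# energy while hitting a final state xf, enforced through a terminal penalty.
N, dt, x0, xf, theta = 20, 0.1, 0.0, 1.0, 50.0

def terminal_state(u):
    return x0 + dt * np.sum(u)

def cost(u, exact):
    energy = dt * np.sum(u ** 2)
    err = terminal_state(u) - xf
    # Exact (non-differentiable) absolute-value penalty vs. smooth quadratic penalty.
    return energy + theta * (abs(err) if exact else err ** 2)

u0 = np.zeros(N)
res_exact = minimize(lambda u: cost(u, exact=True), u0, method="Powell")
res_quad = minimize(lambda u: cost(u, exact=False), u0, method="Powell")

print("exact penalty, terminal error:", terminal_state(res_exact.x) - xf)
print("quadratic penalty, terminal error:", terminal_state(res_quad.x) - xf)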