
Showing papers on "Optimal control published in 1995"


Book
01 May 1995
TL;DR: The leading and most up-to-date textbook on the far-ranging algorithmic methodology of Dynamic Programming, which can be used for optimal control, Markovian decision problems, planning and sequential decision making under uncertainty, and discrete/combinatorial optimization.
Abstract: The leading and most up-to-date textbook on the far-ranging algorithmic methodology of Dynamic Programming, which can be used for optimal control, Markovian decision problems, planning and sequential decision making under uncertainty, and discrete/combinatorial optimization. The treatment focuses on basic unifying themes and conceptual foundations. It illustrates the versatility, power, and generality of the method with many examples and applications from engineering, operations research, and other fields. It also addresses extensively the practical application of the methodology, possibly through the use of approximations, and provides an extensive treatment of the far-reaching methodology of Neuro-Dynamic Programming/Reinforcement Learning.
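The Bellman backup at the heart of dynamic programming fits in a few lines. Below is a minimal value-iteration sketch on a small discounted MDP; the transition matrices, stage costs, and discount factor are all invented for illustration, not taken from the book.

```python
import numpy as np

# Value iteration on a toy 4-state, 2-action MDP (all numbers illustrative).
# P[a][s, s'] is the transition probability, c[s, a] the stage cost,
# gamma the discount factor.  State 3 is absorbing and cost-free.
gamma = 0.9
P = np.array([
    [[0.8, 0.2, 0.0, 0.0],   # action 0: slow but here cheap-looking moves
     [0.0, 0.8, 0.2, 0.0],
     [0.0, 0.0, 0.8, 0.2],
     [0.0, 0.0, 0.0, 1.0]],
    [[0.1, 0.9, 0.0, 0.0],   # action 1: faster progress toward state 3
     [0.0, 0.1, 0.9, 0.0],
     [0.0, 0.0, 0.1, 0.9],
     [0.0, 0.0, 0.0, 1.0]],
])
c = np.array([[2.0, 1.0],
              [2.0, 1.0],
              [2.0, 1.0],
              [0.0, 0.0]])

V = np.zeros(4)
for _ in range(500):
    # Bellman backup: V(s) = min_a [ c(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
    Q = c + gamma * np.stack([P[a] @ V for a in range(2)], axis=1)
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new
policy = Q.argmin(axis=1)   # greedy policy at the converged values
```

Here the cheaper, faster action 1 is optimal in every transient state; the same backup, with approximation of V, is the starting point of the neuro-dynamic programming methods the book treats.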

10,834 citations


Book
17 Aug 1995
TL;DR: This paper reviews the history of the relationship between modern optimal control and robust control, concluding that once-controversial notions of robust control have become thoroughly mainstream and that optimal control methods permeate robust control theory, especially H-infinity theory.
Abstract: This paper will very briefly review the history of the relationship between modern optimal control and robust control. The latter is commonly viewed as having arisen in reaction to certain perceived inadequacies of the former. More recently, the distinction has effectively disappeared. Once-controversial notions of robust control have become thoroughly mainstream, and optimal control methods permeate robust control theory. This has been especially true in H-infinity theory, the primary focus of this paper.

6,945 citations


Journal ArticleDOI
TL;DR: Taking a model-matching approach, suboptimal solutions are presented that stem from the resulting ℓ∞-induced norm-minimization problem.

2,950 citations


Book
01 Jan 1995
TL;DR: This theory allows us to determine whether a linear time-invariant control system containing several uncertain real parameters remains stable as the parameters vary over a set; it nicely complements the H∞ optimal theories as well as Classical Control and considerably extends the range of possibilities available to the control specialist.
Abstract: From the Book: PREFACE: The subject of robust control began to receive worldwide attention in the late 1970's, when it was found that Linear Quadratic Optimal Control, state feedback through observers, and other prevailing methods of control system synthesis, such as Adaptive Control, lacked any guarantees of stability or performance under uncertainty. Thus, the issue of robustness, prominent in Classical Control, took rebirth in a modern setting. Optimal control was proposed as a first approach to the solution of the robustness problem. This elegant approach, and its offshoots, such as H∞ theory, have been intensely developed over the past 12 years or so, and constitute one of the triumphs of control theory. The theory provides a precise formulation and solution of the problem of synthesizing an output feedback compensator that minimizes the H∞ norm of a prescribed system transfer function. Many robust stabilization and performance problems can be cast in this formulation, and there now exists an effective and fairly complete theory for control system synthesis subjected to perturbations in the H∞ framework. The theory delivers an "optimal" feedback compensator for the system. Before such a compensator can be deployed in a physical (real-world) system, it is natural to test its capabilities with regard to additional design criteria not covered by the optimality criterion used. In particular, the performance of any controller under real parameter uncertainty, as well as mixed parametric-unstructured uncertainty, is an issue which is vital to most control systems. However, H∞ optimal theory is incapable of providing a direct and nonconservative answer to this important question.

The problem of robustness under parametric uncertainty received a shot in the arm in the form of Kharitonov's Theorem for interval polynomials, which appeared in the mid-1980's in the Western literature. It was originally published in 1978 in a Russian journal. With this surprising theorem the entire field of robust control under real parametric uncertainty came alive, and it can be said that Kharitonov's Theorem is the most important occurrence in this area after the development of the Routh-Hurwitz criterion. A significant development following Kharitonov's Theorem was the calculation, in 1985, by Soh, Berger and Dabke of the radius of the stability ball in the space of coefficients of a polynomial. From the mid-1980's, rapid and spectacular developments have taken place in this field. As a result we now have a rigorous, coherent, and comprehensive theory to deal directly and effectively with real parameter uncertainty in control systems. This theory nicely complements the H∞ optimal theories as well as Classical Control and considerably extends the range of possibilities available to the control specialist.

The main accomplishment of this theory is that it allows us to determine whether a linear time-invariant control system containing several uncertain real parameters remains stable as the parameters vary over a set. This question can be answered in a precise manner, that is, nonconservatively, when the parameters appear linearly or multilinearly in the characteristic polynomial. In developing the solution to the above problem, several important control system design problems are answered. These are: 1) the calculation of the real parametric stability margin; 2) the determination of stability and stability margins under mixed parametric and unstructured (norm-bounded or nonlinear) uncertainty; 3) the evaluation of the worst-case or robust performance, measured in the H∞ norm, over a prescribed parametric uncertainty set; and 4) the extension of classical design techniques involving Nyquist, Nichols and Bode plots and root-loci to systems containing several uncertain real parameters. These results are made possible because the theory developed provides built-in solutions to several extremal problems. It identifies a priori the critical subset of the uncertain parameter set over which stability or performance will be lost, and thereby reduces to a very small set, usually points or lines, the parameters over which robustness must be verified. This built-in optimality of the parametric theory is its main strong point, particularly from the point of view of applications. It allows us, for the first time, to devise methods to effectively carry out robust stability and performance analysis of control systems under parametric and mixed uncertainty. To balance this rather strong claim, we point out that a significant deficiency of control theory at the present time is the lack of nonconservative synthesis methods to achieve robustness under parameter uncertainty. Nevertheless, even here the sharp analysis results obtained in the parametric framework can be exploited in conjunction with synthesis techniques developed in the H∞ framework to develop design techniques to partially cover this drawback.

The objective of this book is to describe the parametric theory in a self-contained manner. The book is suitable for use as a graduate textbook and also for self-study. The entire subject matter of the book is developed from the single fundamental fact that the roots of a polynomial depend continuously on its coefficients. This fact is the basis of the Boundary Crossing Theorem developed in Chapter 1 and is repeatedly used throughout the book. Surprisingly enough, this simple idea, used systematically, is sufficient to derive even the most mathematically sophisticated results. This economy and transparency of concepts is another strength of the parametric theory. It makes the results accessible and appealing to a wide audience and allows for a unified and systematic development of the subject. The contents of the book can therefore be covered in one semester despite the size of the book. In accordance with our focus we do not develop any results in H∞ theory, although some results from H∞ theory are used in the chapter on synthesis. In Chapter 0, which serves as an extension of this preface, we rapidly overview some basic aspects of control systems, uncertainty models and robustness issues. We also give a brief historical sketch of Control Theory, and then describe the contents of the rest of the chapters in some detail. The theory developed in the book is presented in mathematical language. The results described in these theorems and lemmas, however, are completely oriented towards control systems applications and in fact lead to effective algorithms and graphical displays for design and analysis. We have throughout included examples to illustrate the theory, and indeed the reader who wants to avoid reading the proofs can understand the significance and utility of the results by reading through the examples. A MATLAB-based software package, the Robust Parametric Control ToolBox, has been developed by the authors in collaboration with Samir Ahmad, our graduate student. It implements most of the theory presented in the book. In fact, all the examples and figures in this book have been generated by this ToolBox. We gratefully acknowledge Samir's dedication and help in the preparation of the numerical examples given in the book. A demonstration diskette illustrating this package is included with this book.

SPB would like to thank R. Kishan Baheti, Director of the Engineering Systems Program at the National Science Foundation, for supporting his research program. LHK thanks Harry Frisch and Frank Bauer of NASA Goddard Space Flight Center and Jer-Nan Juang of NASA Langley Research Center for their support of his research, and Mike Busby, Director of the Center of Excellence in Information Systems at Tennessee State University, for his encouragement. It is a pleasure to express our gratitude to several colleagues and coworkers in this field. We thank Antonio Vicino, Alberto Tesi, Mario Milanese, Jo W. Howze, Aniruddha Datta, Mohammed Mansour, J. Boyd Pearson, Peter Dorato, Yakov Z. Tsypkin, Boris T. Polyak, Vladimir L. Kharitonov, Kris Hollot, Juergen Ackermann, Diedrich Hinrichsen, Tony Pritchard, Dragoslav D. Siljak, Charles A. Desoer, Soura Dasgupta, Suhada Jayasuriya, Rama K. Yedavalli, Bob R. Barmish, Mohammed Dahleh, and Biswa N. Datta for their support, enthusiasm, ideas and friendship. In particular we thank Nirmal K. Bose, John A. Fleming and Bahram Shafai for thoroughly reviewing the manuscript and suggesting many improvements. We are indeed honored that Academician Ya. Z. Tsypkin, one of the leading control theorists of the world, has written a Foreword to our book. Professor Tsypkin's pioneering contributions range from the stability analysis of time-delay systems in the 1940's and learning control systems in the 1960's to robust control under parameter uncertainty in the 1980's and 1990's. His observations on the contents of the book and this subject, based on this wide perspective, are of great value. The first draft of this book was written in 1989. We have added new results of our own and others as we became aware of them. However, because of the rapid pace of developments of the subject and the sheer volume of literature that has been published in the last few years, it is possible that we have inadvertently omitted some results and references worthy of inclusion. We apologize in advance to any authors or readers who feel that we have not given credit where it is due. S. P. Bhattacharyya, H. Chapellat, L. H. Keel. December 5, 1994
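Kharitonov's Theorem, central to the preface above, reduces robust Hurwitz stability of a whole interval polynomial family to checking just four vertex polynomials. A minimal sketch of that check follows; the coefficient intervals are invented for the example, and stability is tested numerically via polynomial roots rather than a Routh table.

```python
import numpy as np

def kharitonov_polys(lo, hi):
    """Four Kharitonov vertex polynomials (coefficients in ascending powers).

    lo[i], hi[i] bound the coefficient of s**i; each pattern is the classical
    period-4 lower/upper sequence from Kharitonov's theorem.
    """
    patterns = {"K1": "lluu", "K2": "uull", "K3": "ullu", "K4": "luul"}
    return {
        name: [lo[i] if pat[i % 4] == "l" else hi[i] for i in range(len(lo))]
        for name, pat in patterns.items()
    }

def is_hurwitz(ascending_coeffs):
    """Stable iff every root has negative real part."""
    roots = np.roots(ascending_coeffs[::-1])  # np.roots wants descending order
    return bool(np.all(roots.real < 0))

# Illustrative interval family (bounds invented):
# p(s) = a0 + a1 s + a2 s^2 + s^3, a0 in [1,2], a1 in [4,5], a2 in [3,4].
lo = [1.0, 4.0, 3.0, 1.0]
hi = [2.0, 5.0, 4.0, 1.0]
robustly_stable = all(is_hurwitz(k) for k in kharitonov_polys(lo, hi).values())
```

For this family every vertex satisfies the cubic Hurwitz condition a2*a1 > a0, so the whole box of polynomials is stable, which is exactly the nonconservative conclusion the theorem delivers.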

1,052 citations


Journal ArticleDOI
TL;DR: The relations between the different sets of optimality conditions arising in the various forms of Pontryagin's maximum principle are shown, and the application of these conditions is demonstrated by solving some illustrative examples.
Abstract: This paper gives a survey of the various forms of Pontryagin’s maximum principle for optimal control problems with state variable inequality constraints. The relations between the different sets of optimality conditions arising in these forms are shown. Furthermore, the application of these maximum principle conditions is demonstrated by solving some illustrative examples.

937 citations


Journal ArticleDOI
TL;DR: This paper presents a computational technique for optimal control problems including state and control inequality constraints based on spectral collocation methods used in the solution of differential equations that is easy to implement, capable of handling various types of constraints, and yields very accurate results.
Abstract: This paper presents a computational technique for optimal control problems including state and control inequality constraints. The technique is based on spectral collocation methods used in the solution of differential equations. The system dynamics are collocated at Legendre-Gauss-Lobatto points. The derivative ẋ(t) of the state x(t) is approximated by the analytic derivative of the corresponding interpolating polynomial. State and control inequality constraints are collocated at Legendre-Gauss-Lobatto nodes. The integral involved in the definition of the performance index is discretized based on the Gauss-Lobatto quadrature rule. The optimal control problem is thereby converted into a mathematical programming problem. Thus existing, well-developed optimization algorithms may be used to solve the transformed problem. The method is easy to implement, capable of handling various types of constraints, and yields very accurate results. Illustrative examples are included to demonstrate the capability of the proposed method, and a comparison is made with existing methods in the literature.
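The Legendre-Gauss-Lobatto (LGL) nodes and quadrature weights that underpin such pseudospectral methods are easy to reproduce. A sketch using NumPy's Legendre utilities; the formulas are the standard ones (interior nodes are the roots of P_N', weights w_i = 2 / (N(N+1) P_N(x_i)^2)), and the choice N = 6 is arbitrary.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def lgl_nodes_weights(N):
    """Legendre-Gauss-Lobatto nodes and quadrature weights on [-1, 1].

    N+1 points: the endpoints plus the N-1 roots of P_N'(x);
    weights w_i = 2 / (N * (N + 1) * P_N(x_i)**2).
    """
    cN = np.zeros(N + 1)
    cN[N] = 1.0                       # coefficients of P_N in the Legendre basis
    interior = leg.legroots(leg.legder(cN))
    x = np.concatenate(([-1.0], np.sort(interior), [1.0]))
    PN = leg.legval(x, cN)            # P_N evaluated at the nodes
    w = 2.0 / (N * (N + 1) * PN ** 2)
    return x, w

x, w = lgl_nodes_weights(6)
```

The resulting rule integrates polynomials up to degree 2N-1 exactly, which is what lets the performance-index integral be discretized without extra error; the same nodes also define the interpolating polynomial whose analytic derivative approximates ẋ(t).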

703 citations


Book
17 Mar 1995
TL;DR: In this article, the authors focus on implementation issues for model predictive controllers in industry, filling the gap between the empirical way practitioners use control algorithms and the sometimes abstractly formulated techniques developed by researchers.
Abstract: Model Predictive Control is an important technique used in the process control industries. It has developed considerably in the last few years, because it is the most general way of posing the process control problem in the time domain. The Model Predictive Control formulation integrates optimal control, stochastic control, control of processes with dead time, multivariable control and future references. The finite control horizon makes it possible to handle constraints and nonlinear processes in general, which are frequently found in industry. Focusing on implementation issues for Model Predictive Controllers in industry, the book fills the gap between the empirical way practitioners use control algorithms and the sometimes abstractly formulated techniques developed by researchers. The text is firmly based on material from lectures given to senior undergraduate and graduate students and articles written by the authors.
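The receding-horizon idea can be sketched in a few lines. The toy below is not from the book: the plant (a discretized double integrator), weights, and horizon are invented, and the input constraint is handled crudely by saturation, whereas an industrial MPC solves a constrained QP at every sample.

```python
import numpy as np

# Receding-horizon control of a double integrator (illustrative numbers).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
N = 20          # prediction horizon
u_max = 1.0     # input bound (saturation here stands in for a real QP)

def first_lqr_gain(A, B, Q, R, N):
    """Backward Riccati recursion over the horizon; return the first gain."""
    P = Q.copy()
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

K = first_lqr_gain(A, B, Q, R, N)
x = np.array([2.0, 0.0])
for _ in range(200):
    u = np.clip(-K @ x, -u_max, u_max)   # apply only the first move, then re-plan
    x = A @ x + B @ u
```

The defining MPC trait is visible in the loop: at each step only the first move of the finite-horizon plan is applied before the optimization is (conceptually) redone from the new state.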

689 citations


Journal ArticleDOI
TL;DR: A human sensorimotor control model, compatible with previous work by others, was assembled; it incorporates linearized equations and full-state feedback with provision for state estimation, and produces a control that reasonably matches experimental data.
Abstract: The question posed in this study is whether optimal control and state estimation can explain selection of control strategies used by humans, in response to small perturbations to stable upright balance. To answer this question, a human sensorimotor control model, compatible with previous work by others, was assembled. This model incorporates linearized equations and full-state feedback with provision for state estimation. A form of gain-scheduling is employed to account for nonlinearities caused by control and biomechanical constraints. By decoupling the mechanics and transforming the controls into the space of experimentally observed strategies, the model is made amenable to the study of a number of possible control objectives. The objectives studied include cost functions on the state deviations, so as to control the center of mass, provide a stable platform for the head, or maintain upright stance, along with a cost function on control effort. Also studied was the effect of time delay on the stability of controls produced using various control strategies. An objective function weighting excursion of the center of mass and deviations from the upright stable position, while taking advantage of fast modes of the system, as dictated by inertial parameters and musculoskeletal geometry, produces a control that reasonably matches experimental data. Given estimates of sensor performance, the model is also suited for prediction of uncertainty in the response.

537 citations


Journal ArticleDOI
01 Feb 1995
TL;DR: In this paper, a method based on the H∞-optimal control and μ-synthesis frameworks is introduced to design a controller for the teleoperator that achieves stability for a prespecified time-delay margin while optimizing performance specifications.
Abstract: In the standard teleoperator system, force and velocity signals are communicated between a master robot and a slave robot. It is well known that the system can become unstable when even a small time delay exists in the communication channel. In this paper, a method based on the H∞-optimal control and μ-synthesis frameworks is introduced to design a controller for the teleoperator that achieves stability for a prespecified time-delay margin while optimizing performance specifications. A numerical design example is included.

377 citations


Journal ArticleDOI
TL;DR: The reliable LQ design is shown to be equivalent to a standard LQ-optimal design with a modified performance index, and is seen as a means of choosing a particular quadratic performance index for which the optimal control will possess the desired reliability properties.

350 citations


Journal ArticleDOI
TL;DR: In this article, an evolutionary programming (EP) method was applied to optimal reactive power dispatch and voltage control for large-scale power systems, and the proposed method has been evaluated on the IEEE 30-bus system.
Abstract: This paper is concerned with application of evolutionary programming (EP) to optimal reactive power dispatch and voltage control of power systems. Practical implementation of the EP for global optimization problems of large-scale power systems has been considered. The proposed EP method has been evaluated on the IEEE 30-bus system. Simulation results, compared with those obtained using a conventional gradient-based optimization method, are presented to show the potential of application of the proposed method to power system economical operations.

Journal ArticleDOI
TL;DR: In this paper, stochastic control problems on an infinite time horizon with exponential cost criteria are considered, with the Donsker-Varadhan large deviation rate used as the criterion to be optimized.
Abstract: Stochastic control problems on an infinite time horizon with exponential cost criteria are considered. The Donsker-Varadhan large deviation rate is used as a criterion to be optimized. The optimum rate is characterized as the value of an associated stochastic differential game, with an ergodic (expected average cost per unit time) cost criterion. If we take a small-noise limit, a deterministic differential game with average cost per unit time cost criterion is obtained. This differential game is related to robust control of nonlinear systems.

Journal ArticleDOI
TL;DR: A receding-horizon (RH) optimal control scheme for a discrete-time nonlinear dynamic system is presented, and constraints are imposed on both the state and control vectors.

Journal ArticleDOI
TL;DR: The proposed neural network has a quite simple structure and provides a highly accurate identification of the optimal operating point and a highly accurate estimation of the maximum power from the PV modules.
Abstract: This paper presents an application of a neural network for the identification of the optimal operating point of PV modules for the real time maximum power tracking control. The output power from the modules depends on the environmental factors such as insolation, cell temperature, and so on. Therefore, accurate identification of optimal operating point and real time continuous control are required to achieve the maximum output efficiency. The proposed neural network has a quite simple structure and provides a highly accurate identification of the optimal operating point and also a highly accurate estimation of the maximum power from the PV modules.

Book
31 Dec 1995
TL;DR: It turns out that NLq Theory is unifying with respect to many problems arising in neural networks, systems and control, and examples show that complex non-linear systems can be modelled and controlled within NLq theory, including mastering chaos.
Abstract: Artificial neural networks possess several properties that make them particularly attractive for applications to modelling and control of complex non-linear systems. Among these properties are their universal approximation ability, their parallel network structure and the availability of on- and off-line learning methods for the interconnection weights. However, dynamic models that contain neural network architectures might be highly non-linear and difficult to analyse as a result. Artificial Neural Networks for Modelling and Control of Non-Linear Systems investigates the subject from a system theoretical point of view. However the mathematical theory that is required from the reader is limited to matrix calculus, basic analysis, differential equations and basic linear system theory. No preliminary knowledge of neural networks is explicitly required. The book presents both classical and novel network architectures and learning algorithms for modelling and control. Topics include non-linear system identification, neural optimal control, top-down model based neural control design and stability analysis of neural control systems. A major contribution of this book is to introduce NLq Theory as an extension towards modern control theory, in order to analyze and synthesize non-linear systems that contain linear together with static non-linear operators that satisfy a sector condition: neural state space control systems are an example. Moreover, it turns out that NLq Theory is unifying with respect to many problems arising in neural networks, systems and control. Examples show that complex non-linear systems can be modelled and controlled within NLq theory, including mastering chaos. The didactic flavor of this book makes it suitable for use as a text for a course on Neural Networks. 
In addition, researchers and designers will find many important new techniques, in particular NLq Theory, that have applications in control theory, system theory, circuit theory and Time Series Analysis.

Journal ArticleDOI
TL;DR: It is shown that with state feedback, MPC is globally asymptotically stabilizing if and only if all the eigenvalues of the open loop system are in the closed unit disk.
Abstract: We derive stability conditions for model predictive control (MPC) with hard constraints on the inputs and "soft" constraints on the outputs for an infinitely long output horizon. We show that with state feedback, MPC is globally asymptotically stabilizing if and only if all the eigenvalues of the open loop system are in the closed unit disk. With output feedback, we show that the results hold if all the eigenvalues are strictly inside the unit circle. The online optimization problem defining MPC can be posed as a finite dimensional quadratic program even though the output constraints are specified over an infinite horizon.
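The eigenvalue condition in the state-feedback result is straightforward to test numerically. A small sketch; the two example matrices are invented.

```python
import numpy as np

def in_closed_unit_disk(A, tol=1e-9):
    """Eigenvalue test from the abstract: with state feedback, the MPC scheme
    is globally asymptotically stabilizing iff every eigenvalue of the
    open-loop system matrix lies in the closed unit disk."""
    return bool(np.max(np.abs(np.linalg.eigvals(A))) <= 1.0 + tol)

A_integrator = np.array([[1.0, 1.0], [0.0, 1.0]])   # double integrator: |λ| = 1
A_unstable   = np.array([[1.2, 0.0], [0.0, 0.5]])   # one eigenvalue outside
```

Note the boundary case: eigenvalues on the unit circle (like the integrator's) are admissible under state feedback, but per the abstract the output-feedback result requires them strictly inside.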

Journal ArticleDOI
TL;DR: A recursive formulation of discounted costs for a linear quadratic exponential Gaussian linear regulator problem which implies time-invariant linear decision rules in the infinite horizon case is described.
Abstract: In this note, we describe a recursive formulation of discounted costs for a linear quadratic exponential Gaussian linear regulator problem which implies time-invariant linear decision rules in the infinite horizon case. Time invariance in the discounted case is attained by surrendering state-separability of the risk-adjusted costs.
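Setting aside the exponential (risk-sensitive) adjustment that is the note's actual subject, the way discounting yields a time-invariant rule in the infinite-horizon case can be sketched with an ordinary discounted LQ Riccati iteration; all numbers below are invented, and the key trick is that discounting by β is equivalent to scaling the dynamics by √β.

```python
import numpy as np

# Discounted LQ regulator: minimize sum_t beta^t (x'Qx + u'Ru) for x+ = Ax + Bu.
# Substituting x_t -> beta^(t/2) x_t turns this into an undiscounted problem
# with dynamics scaled by sqrt(beta).
beta = 0.95
A = np.array([[1.1]])
B = np.array([[1.0]])
Q = np.array([[1.0]])
R = np.array([[1.0]])
Ab, Bb = np.sqrt(beta) * A, np.sqrt(beta) * B

P = Q.copy()
for _ in range(2000):
    K = np.linalg.solve(R + Bb.T @ P @ Bb, Bb.T @ P @ Ab)
    P_next = Q + Ab.T @ P @ (Ab - Bb @ K)   # Riccati backup
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next

# Converged P gives one time-invariant feedback law u = -K x.
K = np.linalg.solve(R + Bb.T @ P @ Bb, Bb.T @ P @ Ab)
```

The recursion converges to a fixed point P, so the same gain K is optimal at every date, which is the "time-invariant linear decision rule" phenomenon; the paper's contribution is obtaining the analogue under the risk-adjusted (exponential) cost.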

Journal ArticleDOI
TL;DR: An extension of a Lyapunov equation result is derived for the countably infinite Markov state-space case; stochastic stabilizability and detectability guarantee existence and uniqueness of a stationary measure and consequently existence of an optimal stationary control policy.
Abstract: Optimal control problems for discrete-time linear systems subject to Markovian jumps in the parameters are considered for the case in which the Markov chain takes values in a countably infinite set. Two situations are considered: the noiseless case and the case in which an additive noise is appended to the model. The solution for these problems relies, in part, on the study of a countably infinite set of coupled algebraic Riccati equations (ICARE). Conditions for existence and uniqueness of a positive semidefinite solution to the ICARE are obtained via the extended concepts of stochastic stabilizability (SS) and stochastic detectability (SD), which turn out to be equivalent to the spectral radius of certain infinite dimensional linear operators in a Banach space being less than one. For the long-run average cost, SS and SD guarantee existence and uniqueness of a stationary measure and consequently existence of an optimal stationary control policy. Furthermore, an extension of a Lyapunov equation result is derived for the countably infinite Markov state-space case.

Journal ArticleDOI
TL;DR: In this paper, a generalized Lyapunov theorem for continuous-time descriptor systems is presented and applied to the infinite-horizon descriptor LQ regulator problem.

Journal ArticleDOI
TL;DR: The ability of the genetic algorithm to develop a proportional-integral (PI) controller and a state feedback controller for a nonlinear multi-input/multi-output (MIMO) plant model is studied.
Abstract: This paper discusses the application of a genetic algorithm to control system design for boiler-turbine plant. In particular we study the ability of the genetic algorithm to develop a proportional-integral (PI) controller and a state feedback controller for a nonlinear multi-input/multi-output (MIMO) plant model. The plant model is presented along with a discussion of the inherent difficulties in such controller development. A sketch of the genetic algorithm (GA) is presented and its strategy as a method of control system design is discussed. Results are presented for two different control systems that have been designed with the genetic algorithm.
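A bare-bones sketch of the idea follows; the first-order plant, the GA operators, and every constant are invented for illustration (the paper's boiler-turbine model is nonlinear and multivariable, so this only shows the search mechanism).

```python
import numpy as np

rng = np.random.default_rng(0)

def ise(gains, dt=0.05, T=10.0):
    """Integral-squared-error of a PI loop around a toy first-order plant
    y' = -y + u under a unit step reference (plant invented, not the paper's)."""
    kp, ki = gains
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        y += dt * (-y + u)               # forward-Euler plant update
        cost += e * e * dt
    return cost

# Real-coded GA: tournament selection, blend crossover, Gaussian mutation.
pop = rng.uniform(0.0, 5.0, size=(30, 2))        # individuals are (Kp, Ki)
for gen in range(40):
    fit = np.array([ise(ind) for ind in pop])
    new = [pop[fit.argmin()]]                    # elitism: keep the best
    while len(new) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        a = pop[i] if fit[i] < fit[j] else pop[j]     # tournament pick 1
        k, l = rng.integers(0, len(pop), 2)
        b = pop[k] if fit[k] < fit[l] else pop[l]     # tournament pick 2
        w = rng.uniform(size=2)
        child = w * a + (1 - w) * b                   # blend crossover
        child += rng.normal(0.0, 0.1, 2)              # mutation
        new.append(np.clip(child, 0.0, 5.0))
    pop = np.array(new)
best = pop[np.array([ise(ind) for ind in pop]).argmin()]
```

Nothing here requires gradients or a linearized model, which is exactly why the GA is attractive for the nonlinear MIMO plant in the paper.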

Journal ArticleDOI
TL;DR: The VVC algorithm is based on the oriented discrete coordinate descent method and takes into account all the optimization objectives of interest in distribution system analysis: minimum power loss, power demand or the number of control steps to keep the system within constraints.
Abstract: In this paper, a centralized volt/VAr control (VVC) algorithm for a distribution management system is presented. The algorithm is based on the oriented discrete coordinate descent method and takes into account all the optimization objectives of interest in distribution system analysis: minimum power loss, power demand, or the number of control steps to keep the system within constraints. Although the optimization method used belongs to the traditional class of combinatorial integer programming, the algorithm provides good speed for real-time application. Numerical examples illustrate how well the VVC algorithm works for the different types of objective functions and its advantages in comparison with other possible optimization strategies.
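The oriented discrete coordinate descent at the core of such an algorithm can be sketched on a toy separable objective; the quadratic loss proxy, the tap limits, and the "target" settings below are invented (a real DMS would evaluate a power-flow model instead of this stand-in).

```python
import numpy as np

# Oriented discrete coordinate descent over integer control settings
# (think capacitor-bank or transformer-tap positions).
steps = np.array([0, 0, 0])            # current discrete settings
lo, hi = -8, 8                         # setting limits
target = np.array([3, -2, 5])          # loss-minimizing point (unknown to the search)

def loss(s):
    """Stand-in for the power-loss objective; invented quadratic proxy."""
    return float(np.sum((s - target) ** 2))

improved = True
n_moves = 0
while improved:
    improved = False
    for i in range(len(steps)):
        for d in (+1, -1):             # oriented: probe one step either way
            cand = steps.copy()
            cand[i] += d
            if lo <= cand[i] <= hi and loss(cand) < loss(steps):
                steps, improved = cand, True
                n_moves += 1
                break                  # keep the improving direction, next coordinate
```

Because each probe costs one objective evaluation and moves one discrete device by one step, the move count directly reflects the "number of control steps" objective the abstract mentions.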

Book
30 Oct 1995
TL;DR: Semi-regenerative decision models as discussed by the authors describe a basic decision model with rigorous definitions and assumptions, and examples of Controlled Queues Optimization Problems Renewal Kernels of the decision model special classes of strategies Sufficiency of Markov Strategies Dynamic Programming Discounting in Continuous Time Dynamic Programming Equation Bellman Functions Finite-Horizon Problem Infinite-Horizon Discounted-Cost Problem Random-Horizon Problem Average Cost Criterion Preliminaries: Weak Topology, Limit Passages Preliminaries: Taboo Probabilities, Limit Theorems for Markov Renewal
Abstract: Semi-Regenerative Decision Models Description of Basic Decision Model Rigorous Definitions and Assumptions Examples of Controlled Queues Optimization Problems Renewal Kernels of the Decision Model Special Classes of Strategies Sufficiency of Markov Strategies Dynamic Programming Discounting in Continuous Time Dynamic Programming Equation Bellman Functions Finite-Horizon Problem Infinite-Horizon Discounted-Cost Problem Random-Horizon Problem Average Cost Criterion Preliminaries: Weak Topology, Limit Passages Preliminaries: Taboo Probabilities, Limit Theorems for Markov Renewal Processes Notation, Recurrence-Communication Assumptions, Examples Existence of Optimal Policies Existence of Optimal Strategies: General Criterion Existence of Optimal Strategies: Sufficient Conditions Optimality Equation Constrained Average-Cost Problem Average-Cost Optimality as Limiting Case of Discounted-Cost Optimality Continuously Controlled Markov Jump Processes Facts About Measurability of Stochastic Processes Marked Point Processes and Random Measures The Predictable s-Algebra Dual Predictable Projections of Random Measures Definition of Controlled Markov Jump Process An M/M/1 Queue With Controllable Input and Service Rate Dynamic Programming Optimization Problems Structured Optimization Problems for Decision Processes Convex Regularization Submodular and Supermodular Functions Existence of Monotone Solutions for Optimization Problems Processes with Bounded Drift Birth and Death Processes Control of Arrivals The Model Description Finite-Horizon Discounted-Cost Problem Cost Functionals Infinite-Horizon Case with and without Discounting Optimal Dynamic Pricing Policy: Model Results Control of Service Mechanism Description of the System Static Optimization Problem Optimal Policies for the Queueing Process Service System with Two Interacting Servers Analysis of Optimality Equation Optimal Control in Models with Several Classes of Customers Description of Models and Processes 
Associated Controlled Processes Existence of Optimal Simple Strategies for the Systems with Alternating Priority Existence of Optimal Simple Strategy for the System with Feedback Equations for Stationary Distributions Stationary Characteristics of the Systems with Alternating Priority Stationary Characteristics of the System with Feedback Models with Alternating Priority: Linear Programming Problem Linear Programming Problem in the Model with Feedback Model with Periods of Idleness and Discounted-Cost Criterion Basic Formulas Construction of Optimal Modified Priority Discipline Bibliography Index Each chapter also includes an Introduction, and a Remarks and Exercises section

Journal ArticleDOI
TL;DR: Linear algebra and matrix theory; state space and transfer function models; discretizing an analogue compensator; discrete-time models of analogue plants; designing regulators with state feedback; observers; tracking systems; optimal control.
Abstract: Linear algebra and matrix theory; state space and transfer function models; discretizing an analogue compensator; discrete-time models of analogue plants; designing regulators with state feedback; observers; tracking systems; optimal control.

Journal ArticleDOI
TL;DR: In this paper, a linear time-invariant system with several disturbance inputs and controlled outputs is considered, and the authors show how to minimize the nominal H2-norm performance in one channel while keeping bounds on the H2-norm or H∞-norm performance (implying robust stability) in the other channels.
Abstract: For a linear time-invariant system with several disturbance inputs and controlled outputs, we show how to minimize the nominal H2-norm performance in one channel while keeping bounds on the H2-norm or H∞-norm performance (implying robust stability) in the other channels. This multiobjective H2/H∞ problem in an infinite-dimensional space is reduced to sequences of finite-dimensional convex optimization problems. We show how to compute the optimal value and how to numerically detect the existence of a rational optimal controller. If it exists, we reveal how the novel trick of optimizing the trace norm of the Youla parameter over certain convex constraints allows one to design a nearly optimal controller whose Youla parameter is of the same order as the optimal one.
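The two norms traded off in this paper can be computed for a state-space channel (A, B, C): the H2 norm via the controllability Gramian, ‖G‖₂² = tr(C P Cᵀ) with AP + PAᵀ + BBᵀ = 0, and the H∞ norm approximated by the peak of σmax(C(jωI − A)⁻¹B) over a frequency grid. A minimal sketch (the stable system below is a hypothetical example, not from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# hypothetical stable SISO system: G(s) = 1/(s+1) + 1/(s+2)
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])

# H2 norm: ||G||_2^2 = trace(C P C^T), where A P + P A^T + B B^T = 0
P = solve_continuous_lyapunov(A, -B @ B.T)
h2 = np.sqrt(np.trace(C @ P @ C.T))

# H-infinity norm: peak singular value of the frequency response
ws = np.logspace(-3, 3, 2000)
hinf = max(
    np.linalg.svd(C @ np.linalg.inv(1j * w * np.eye(2) - A) @ B,
                  compute_uv=False)[0]
    for w in ws
)
# for this system the peak is at w = 0, where |G(0)| = 1 + 1/2 = 1.5
```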

Journal ArticleDOI
TL;DR: In this article, the authors proposed the use of absolute error penalty functions (AEPF) in handling constrained optimal control problems in chemical engineering by posing the problem as a nonsmooth dynamic optimization problem.
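The absolute-error (exact L1) penalty idea can be sketched on a toy problem: the equality constraint enters the objective as μ·|g(x)|, which is nonsmooth but, for large enough μ, recovers the constrained optimum exactly. This is only an illustration of the penalty mechanism, not the authors' chemical-engineering formulation; the objective, constraint, and μ are hypothetical, and a derivative-free method is used because of the kink.

```python
import numpy as np
from scipy.optimize import minimize

def penalized(z, mu=10.0):
    """Objective plus absolute-error penalty on the equality
    constraint x + y = 2; nonsmooth but exact for finite mu."""
    x, y = z
    return (x - 2.0)**2 + (y - 2.0)**2 + mu * abs(x + y - 2.0)

# Nelder-Mead handles the nondifferentiable penalty term
res = minimize(penalized, x0=np.array([0.0, 0.0]), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
# the constrained optimum is (1, 1)
```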


Journal ArticleDOI
TL;DR: The existence of Nash equilibria in noncooperative flow control in a general product-form network shared by K users is investigated and Brouwer's theorem implies that the best reply function has a fixed point.
Abstract: The existence of Nash equilibria in noncooperative flow control in a general product-form network shared by K users is investigated. The performance objective of each user is to maximize its average throughput subject to an upper bound on its average time-delay. Previous attempts to study existence of equilibria for this flow control model were not successful, partly because the time-delay constraints couple the strategy spaces of the individual users in a way that does not allow the application of standard equilibrium existence theorems from the game theory literature. To overcome this difficulty, a more general approach to study the existence of Nash equilibria for decentralized control schemes is introduced. This approach is based on directly proving the existence of a fixed point of the best reply correspondence of the underlying game. For the investigated flow control model, the best reply correspondence is shown to be a function, implicitly defined by means of K interdependent linear programs. Employing an appropriate definition for continuity of the set of optimal solutions of parameterized linear programs, it is shown that, under appropriate conditions, the best reply function is continuous. Brouwer's theorem implies, then, that the best reply function has a fixed point.
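The fixed-point argument can be illustrated with a toy two-user M/M/1 flow-control game: each user maximizes its rate subject to a shared average-delay bound, and the equilibrium is a fixed point of the best-reply map. All numbers (μ, delay bound, rate cap) are hypothetical, and a damped iteration is used since undamped best replies can cycle; the paper itself proves existence via Brouwer's theorem rather than by iteration.

```python
import numpy as np

def best_reply(lam_other, mu=10.0, delay_bound=0.5, lam_max=6.0):
    """Best reply in a toy M/M/1 flow-control game: maximize own rate
    subject to the delay bound 1/(mu - lam_i - lam_other) <= D."""
    feasible = mu - 1.0 / delay_bound - lam_other
    return float(np.clip(feasible, 0.0, lam_max))

# damped best-reply iteration toward the symmetric Nash equilibrium
lam = np.array([0.0, 0.0])
for _ in range(200):
    reply = np.array([best_reply(lam[1]), best_reply(lam[0])])
    lam = 0.5 * lam + 0.5 * reply

# symmetric fixed point: lam_i = (mu - 1/D) / 2 = 4.0 per user
```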

Journal ArticleDOI
TL;DR: In this article, the authors deal with residual generation for the diagnosis of faults in the presence of disturbances, represented as multiplicative disturbances, and on parametric faults, both characterized as discrepancies in a set of underlying parameters.
Abstract: This paper deals with residual generation for the diagnosis of faults in the presence of disturbances. The emphasis is on modelling errors, represented as multiplicative disturbances, and on parametric faults. These are both characterized as discrepancies in a set of underlying parameters. The residuals are obtained using parity equations. To address the situation when the number of uncertain parameters is too high to allow perfect decoupling, two approximate decoupling methods are introduced. One utilizes rank reduction of the model-error/fault entry matrix via singular value decomposition. The other minimizes a least squares performance index, formulated on the residuals, under a set of equality constraints. It is shown that, by the appropriate construction of the entry matrix or of the performance index and the constraints, a broad variety of structured and directional residual strategies can be implemented.
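The SVD rank-reduction idea can be sketched in a few lines: when the disturbance entry matrix has too many columns for exact decoupling, keep only its dominant singular directions and choose the residual weight in the orthogonal complement, so the dominant disturbances are rejected exactly. The matrices below are random placeholders, not the paper's example:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical entry matrix: 4-dim parity space, 3 disturbance
# directions (too many to decouple exactly), one fault direction
F_dist = rng.standard_normal((4, 3))
f_fault = np.array([1.0, 0.0, 0.0, 0.0])

# rank reduction: dominant singular directions of the disturbance matrix
U, s, Vt = np.linalg.svd(F_dist)
k = 2                                   # reduced disturbance rank
U_k = U[:, :k]                          # dominant disturbance subspace

# residual weight: project the fault direction onto the complement
null_basis = U[:, k:]
w = null_basis @ null_basis.T @ f_fault
w /= np.linalg.norm(w)

# w rejects the dominant disturbances exactly; only the discarded
# (small-singular-value) directions leak into the residual
dominant_leak = np.linalg.norm(w @ U_k)
```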

Book
01 May 1995
TL;DR: In this article, Orthogonal functions in systems and control are defined as a historical perspective least squares approximation of signals signal processing in continuous-time domain analysis of time-delay systems identification of lumped parameter systems.
Abstract: Orthogonal functions in systems and control - a historical perspective least squares approximation of signals signal processing in continuous-time domain analysis of time-delay systems identification of lumped parameter systems identification of linear time-invariant distributed parameter systems identification of linear time-varying and nonlinear distributed parameter systems optimal control of linear systems.
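The core operation behind these methods, least squares approximation of a signal in an orthogonal basis, can be sketched with Legendre polynomials on [-1, 1]. The test signal and degree are arbitrary choices for illustration:

```python
import numpy as np
from numpy.polynomial import legendre

# approximate a smooth signal in a Legendre (orthogonal) basis
t = np.linspace(-1.0, 1.0, 400)
signal = np.exp(t) * np.sin(3.0 * t)

coeffs = legendre.legfit(t, signal, deg=10)  # least squares fit
approx = legendre.legval(t, coeffs)

rms_error = np.sqrt(np.mean((signal - approx) ** 2))
```

Because the signal is analytic, the expansion coefficients decay rapidly and a degree-10 fit already reproduces it to high accuracy; the same projection-onto-a-basis step underlies the identification methods listed above.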

Journal ArticleDOI
TL;DR: In this paper, the dual linear matrix inequality (LMI) problem arising in fixed-order suboptimal control is defined, a computational algorithm to solve it is given, and an extension to optimal control problems is provided.
Abstract: Many fixed-order suboptimal control problems with stability, performance and robustness specifications can be reduced to a search for a matrix X > 0 satisfying a linear matrix inequality (LMI) while X⁻¹ satisfies another LMI. This paper defines a certain class of these problems we shall call the 'dual LMI problem', and a computational algorithm to solve our dual LMI problem is given. Properties and limitations of the algorithm are discussed in comparison with the existing algorithm (the min/max algorithm). An extension to optimal control problems is provided. Numerical examples for the fixed-order stabilization problem and the static output feedback linear quadratic optimal control problem demonstrate the applicability of the proposed algorithm.
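The feasibility check at the heart of such searches can be sketched numerically: given a candidate closed-loop matrix, verify that some X > 0 satisfies the Lyapunov LMI AᵀX + XA < 0 and that X⁻¹ is also positive definite. This only checks a candidate certificate (here obtained in closed form for a hypothetical stable A); the nonconvex search over coupled X and X⁻¹ constraints is exactly what the paper's algorithm addresses.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# hypothetical stable closed-loop matrix (eigenvalues -1, -2)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])

# Lyapunov certificate: solve A^T X + X A = -I for X
X = solve_continuous_lyapunov(A.T, -np.eye(2))

def is_pos_def(M, tol=1e-10):
    """Positive definiteness via eigenvalues of the symmetric part."""
    return bool(np.min(np.linalg.eigvalsh(0.5 * (M + M.T))) > tol)

lmi_residual = A.T @ X + X @ A          # should equal -I exactly
feasible = is_pos_def(X) and is_pos_def(np.linalg.inv(X))
```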