Topic

Gain scheduling

About: Gain scheduling is a control design technique in which controller gains are adjusted as functions of measured scheduling variables (for example, the operating point), so that a family of linear controllers can cover the operating envelope of a nonlinear or time-varying system. Over the lifetime, 4,234 publications have been published within this topic, receiving 101,144 citations.
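Concretely, the scheduled gains are usually stored in a lookup table indexed by one or more measured scheduling variables and interpolated online. A minimal sketch of that mechanism, assuming a purely hypothetical table of PID gains indexed by a single scalar scheduling variable:

```python
import numpy as np

# Hypothetical schedule: PID gains tuned at a few operating points,
# indexed by a scalar scheduling variable (here, plant load in percent).
SCHEDULE_POINTS = np.array([0.0, 50.0, 100.0])   # scheduling-variable values
KP = np.array([2.0, 1.2, 0.8])                   # proportional gains at those points
KI = np.array([0.5, 0.3, 0.2])                   # integral gains
KD = np.array([0.10, 0.05, 0.02])                # derivative gains

def scheduled_gains(sigma):
    """Interpolate the gain table at the current value of the scheduling
    variable; outside the table range the end values are held constant."""
    kp = np.interp(sigma, SCHEDULE_POINTS, KP)
    ki = np.interp(sigma, SCHEDULE_POINTS, KI)
    kd = np.interp(sigma, SCHEDULE_POINTS, KD)
    return kp, ki, kd

# Example: at 30 % load the gains lie between the 0 % and 50 % tunings.
print(scheduled_gains(30.0))
```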


Papers
Book
01 Oct 1972
TL;DR: In this book, the authors provide an excellent introduction to feedback control system design, offering a theoretical approach that captures the essential issues and can be applied to a wide range of practical problems.
Abstract: Linear Optimal Control Systems; Feedback Control Theory; Optimal Control; Linear Optimal Control; Optimal Control Systems; The Zeros of Linear Optimal Control Systems and Their Role in High Feedback Gain Stability Design; Optimal Control; Linear State-Space Control Systems; Optimal Control of Dynamic Systems Driven by Vector Measures; Applied Linear Optimal Control (Paperback with CD-ROM); Nonlinear and Optimal Control Systems; Linear Systems; Linear Control Theory; Linear Systems and Optimal Control; Optimal Control Methods for Linear Discrete-Time Economic Systems; Optimal Control Theory for Infinite Dimensional Systems; Infinite Dimensional Linear Control Systems; Stochastic Linear-Quadratic Optimal Control Theory: Open-Loop and Closed-Loop Solutions; Applications of Optimal Control Theory to Computer Controller Design; Switching and Learning in Feedback Systems; Continuous Time Dynamical Systems; New Trends in Optimal Filtering and Control for Polynomial and Time-Delay Systems; The Theory and Application of Linear Optimal Control; Turnpike Theory of Continuous-Time Linear Optimal Control Problems; Linear Optimal Control Systems; Linear Control Theory; Calculus of Variations and Optimal Control Theory; Optimal Control; Nonlinear Controllability and Optimal Control; Optimal Control Theory; Optimal Control of Singularly Perturbed Linear Systems and Applications; Optimal Control Systems; Design Criterion for Improving the Sensitivity of Linear Optimal Control Systems; Linear Stochastic Control Systems; Constrained Optimal Control of Linear and Hybrid Systems; Optimal Control of Singularly Perturbed Linear Systems and Applications; Predictive Control for Linear and Hybrid Systems; Optimal Control; Optimal Control Theory with Applications in Economics; Nonlinear Optimal Control Theory.

Successfully classroom-tested at the graduate level, Linear Control Theory: Structure, Robustness, and Optimization covers three major areas of control engineering (PID control, robust control, and optimal control). It provides balanced coverage of elegant mathematical theory and useful engineering-oriented results. The first part of the book develops results relating to the design of PID and first-order controllers for continuous and discrete-time linear systems with possible delays. The second section deals with the robust stability and performance of systems under parametric and unstructured uncertainty, presenting several elegant and sharp results such as Kharitonov's theorem and its extensions, the edge theorem, and the mapping theorem. Focusing on the optimal control of linear systems, the third part discusses the standard theories of the linear quadratic regulator, H-infinity and l1 optimal control, and associated results. Written by recognized leaders in the field, this book explains how control theory can be applied to the design of real-world systems. It shows that the techniques of three-term controllers, along with the results on robust and optimal control, are invaluable to developing and solving research problems in many areas of engineering.

An excellent introduction to feedback control system design, this book offers a theoretical approach that captures the essential issues and can be applied to a wide range of practical problems. Its explorations of recent developments in the field emphasize the relationship of new procedures to classical control theory, with a focus on single-input, single-output systems that keeps concepts accessible to students with limited backgrounds. The text is geared toward a single-semester senior course or a graduate-level class for students of electrical engineering. The opening chapters constitute a basic treatment of feedback design. Topics include a detailed formulation of the control design problem, the fundamental issue of the performance/stability-robustness tradeoff, and the graphical design technique of loopshaping. Subsequent chapters extend the discussion of the loopshaping technique and connect it with notions of optimality. Concluding chapters examine controller design via optimization, offering a mathematical approach that is useful for multivariable systems.

This upper-level undergraduate text introduces aspects of optimal control theory: dynamic programming, Pontryagin's minimum principle, and numerical techniques for trajectory optimization. Numerous figures and tables; solution guide available upon request. 1970 edition.

Infinite-dimensional systems can be used to describe many phenomena in the real world. As is well known, heat conduction, properties of elastic-plastic materials, fluid dynamics, diffusion-reaction processes, and so on all lie within this area. The object being studied (temperature, displacement, concentration, velocity, etc.) is usually referred to as the state. We are interested in the case where the state satisfies proper differential equations that are derived from certain physical laws, such as Newton's law or Fourier's law. The space in which the state exists is called the state space, and the equation that the state satisfies is called the state equation. By an infinite-dimensional system we mean one whose corresponding state space is infinite dimensional. In particular, we are interested in the case where the state equation is one of the following types: partial differential equation, functional differential equation, integro-differential equation, or abstract evolution equation. The case in which the state equation is a stochastic differential equation is also an infinite-dimensional problem, but such cases are not discussed in this book.

For more than forty years, the equation y'(t) = Ay(t) + u(t) in Banach spaces has been used as a model for optimal control processes described by partial differential equations, in particular heat and diffusion processes. Many of the outstanding open problems, however, have remained open until recently, and some have never been solved. This book is a survey of all results known to the author, with emphasis on very recent results (1999 to date). The book is restricted to linear equations and two particular problems (the time optimal problem and the norm optimal problem), which results in a more focused and concrete treatment. As experience shows, results on linear equations are the basis for the treatment of their semilinear counterparts, and techniques for the time and norm optimal problems can often be generalized to more general cost functionals. The main object of this book is to be a state-of-the-art monograph on the theory of the time and norm optimal controls for y'(t) = Ay(t) + u(t) that ends at the very latest frontier of research, with open problems and indications for future research. Key features: applications to optimal diffusion processes; applications to optimal heat propagation processes; modelling of optimal processes governed by partial differential equations; complete bibliography; includes the latest research on the subject; does not assume anything from the reader except basic functional analysis; accessible to researchers and advanced graduate students alike.

Linear Stochastic Control Systems presents a thorough description of the mathematical theory and fundamental principles of linear stochastic control systems. Both continuous-time and discrete-time systems are thoroughly covered. Reviews of modern probability and random process theory and of the Itô stochastic differential equations are provided. Discrete-time stochastic systems theory, optimal estimation and Kalman filtering, and optimal stochastic control theory are studied in detail, and a modern treatment of these same topics for continuous-time stochastic control systems is included. The text is written in an easy-to-understand style, and the reader needs only a background in elementary real analysis and linear deterministic systems theory to comprehend the subject matter. This graduate textbook is also suitable for self-study, professional training, and as a handy research reference. Linear Stochastic Control Systems is self-contained and provides a step-by-step development of the theory, with many illustrative examples, exercises, and engineering applications.

This outstanding reference presents current, state-of-the-art research on important problems of finite-dimensional nonlinear optimal control and controllability theory. It presents an overview of a broad variety of new techniques useful in solving classical control theory problems. Written and edited by renowned mathematicians at the forefront of research in this evolving field, Nonlinear Controllability and Optimal Control provides detailed coverage of the construction of solutions of differential inclusions by means of directionally continuous sections, Lie algebraic conditions for local controllability, the use of the Campbell-Hausdorff series to derive properties of optimal trajectories, the Fuller phenomenon, the theory of orbits, and more. Containing more than 1,300 display equations, this exemplary, instructive reference is an invaluable source for mathematical researchers; applied mathematicians; electrical and electronics, aerospace, mechanical, control, systems, and computer engineers; and graduate students in these disciplines.

This book is based on lectures from a one-year course at the Far Eastern Federal University (Vladivostok, Russia) as well as on workshops on optimal control offered to students at various mathematical departments at the university level. The main themes of the theory of linear and nonlinear systems are considered, including the basic problem of establishing the necessary and sufficient conditions of optimal processes.

4,294 citations
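The abstract above repeatedly invokes the linear quadratic regulator as the centrepiece of linear optimal control. As a hedged illustration (not drawn from any of the listed books), a minimal discrete-time LQR sketch for an assumed double-integrator model, with the Riccati equation solved by plain backward iteration:

```python
import numpy as np

# Assumed discrete-time double integrator x_{k+1} = A x_k + B u_k (illustrative values).
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.5 * dt**2],
              [dt]])
Q = np.diag([1.0, 0.1])   # state weighting
R = np.array([[0.01]])    # input weighting

def dlqr(A, B, Q, R, iters=500):
    """Iterate the discrete-time Riccati recursion to (near) steady state and
    return the feedback gain K for the control law u = -K x."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

K, _ = dlqr(A, B, Q, R)
print("LQR gain:", K)
# Sanity check: the eigenvalues of A - B K should lie inside the unit circle.
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```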

Book
26 Jan 2012
TL;DR: This book presents the theory and practice of model predictive control, covering commercial MPC schemes, generalized predictive control, constrained, robust, nonlinear, and hybrid MPC, and fast implementation methods, illustrated with case studies ranging from a water heater to a solar power plant, a sugar refinery, an olive oil mill, and a mobile robot.
Abstract: 1 Introduction to Model Predictive Control.- 1.1 MPC Strategy.- 1.2 Historical Perspective.- 1.3 Industrial Technology.- 1.4 Outline of the Chapters.- 2 Model Predictive Controllers.- 2.1 MPC Elements.- 2.1.1 Prediction Model.- 2.1.2 Objective Function.- 2.1.3 Obtaining the Control Law.- 2.2 Review of Some MPC Algorithms.- 2.3 State Space Formulation.- 3 Commercial Model Predictive Control Schemes.- 3.1 Dynamic Matrix Control.- 3.1.1 Prediction.- 3.1.2 Measurable Disturbances.- 3.1.3 Control Algorithm.- 3.2 Model Algorithmic Control.- 3.2.1 Process Model and Prediction.- 3.2.2 Control Law.- 3.3 Predictive Functional Control.- 3.3.1 Formulation.- 3.4 Case Study: A Water Heater.- 3.5 Exercises.- 4 Generalized Predictive Control.- 4.1 Introduction.- 4.2 Formulation of Generalized Predictive Control.- 4.3 The Coloured Noise Case.- 4.4 An Example.- 4.5 Closed-Loop Relationships.- 4.6 The Role of the T Polynomial.- 4.6.1 Selection of the T Polynomial.- 4.6.2 Relationships with Other Formulations.- 4.7 The P Polynomial.- 4.8 Consideration of Measurable Disturbances.- 4.9 Use of a Different Predictor in GPC.- 4.9.1 Equivalent Structure.- 4.9.2 A Comparative Example.- 4.10 Constrained Receding Horizon Predictive Control.- 4.10.1 Computation of the Control Law.- 4.10.2 Properties.- 4.11 Stable GPC.- 4.11.1 Formulation of the Control Law.- 4.12 Exercises.- 5 Simple Implementation of GPC for Industrial Processes.- 5.1 Plant Model.- 5.1.1 Plant Identification: The Reaction Curve Method.- 5.2 The Dead Time Multiple of the Sampling Time Case.- 5.2.1 Discrete Plant Model.- 5.2.2 Problem Formulation.- 5.2.3 Computation of the Controller Parameters.- 5.2.4 Role of the Control-weighting Factor.- 5.2.5 Implementation Algorithm.- 5.2.6 An Implementation Example.- 5.3 The Dead Time Nonmultiple of the Sampling Time Case.- 5.3.1 Discrete Model of the Plant.- 5.3.2 Controller Parameters.- 5.3.3 Example.- 5.4 Integrating Processes.- 5.4.1 Derivation of the Control Law.- 5.4.2 Controller Parameters.- 5.4.3 Example.- 5.5 Consideration of Ramp Setpoints.- 5.5.1 Example.- 5.6 Comparison with Standard GPC.- 5.7 Stability Robustness Analysis.- 5.7.1 Structured Uncertainties.- 5.7.2 Unstructured Uncertainties.- 5.7.3 General Comments.- 5.8 Composition Control in an Evaporator.- 5.8.1 Description of the Process.- 5.8.2 Obtaining the Linear Model.- 5.8.3 Controller Design.- 5.8.4 Results.- 5.9 Exercises.- 6 Multivariable Model Predictive Control.- 6.1 Derivation of Multivariable GPC.- 6.1.1 White Noise Case.- 6.1.2 Coloured Noise Case.- 6.1.3 Measurable Disturbances.- 6.2 Obtaining a Matrix Fraction Description.- 6.2.1 Transfer Matrix Representation.- 6.2.2 Parametric Identification.- 6.3 State Space Formulation.- 6.3.1 Matrix Fraction and State Space Equivalences.- 6.4 Case Study: Flight Control.- 6.5 Convolution Models Formulation.- 6.6 Case Study: Chemical Reactor.- 6.6.1 Plant Description.- 6.6.2 Obtaining the Plant Model.- 6.6.3 Control Law.- 6.6.4 Simulation Results.- 6.7 Dead Time Problems.- 6.8 Case Study: Distillation Column.- 6.9 Multivariable MPC and Transmission Zeros.- 6.9.1 Simulation Example.- 6.9.2 Tuning MPC for Processes with OUD Zeros.- 6.10 Exercises.- 7 Constrained Model Predictive Control.- 7.1 Constraints and MPC.- 7.1.1 Constraint General Form.- 7.1.2 Illustrative Examples.- 7.2 Constraints and Optimization.- 7.3 Revision of Main Quadratic Programming Algorithms.- 7.3.1 The Active Set Methods.- 7.3.2 Feasible Direction Methods.- 7.3.3 Initial Feasible Point.- 7.3.4 Pivoting Methods.- 7.4 
Constraints Handling.- 7.4.1 Slew Rate Constraints.- 7.4.2 Amplitude Constraints.- 7.4.3 Output Constraints.- 7.4.4 Constraint Reduction.- 7.5 1-norm.- 7.6 Case Study: A Compressor.- 7.7 Constraint Management.- 7.7.1 Feasibility.- 7.7.2 Techniques for Improving Feasibility.- 7.8 Constrained MPC and Stability.- 7.9 Multiobjective MPC.- 7.9.1 Prioritization of Objectives.- 7.10 Exercises.- 8 Robust Model Predictive Control.- 8.1 Process Models and Uncertainties.- 8.1.1 Truncated Impulse Response Uncertainties.- 8.1.2 Matrix Fraction Description Uncertainties.- 8.1.3 Global Uncertainties.- 8.2 Objective Functions.- 8.2.1 Quadratic Cost Function.- 8.2.2 ∞-∞ norm.- 8.2.3 1-norm.- 8.3 Robustness by Imposing Constraints.- 8.4 Constraint Handling.- 8.5 Illustrative Examples.- 8.5.1 Bounds on the Output.- 8.5.2 Uncertainties in the Gain.- 8.6 Robust MPC and Linear Matrix Inequalities.- 8.7 Closed-Loop Predictions.- 8.7.1 An Illustrative Example.- 8.7.2 Increasing the Number of Decision Variables.- 8.7.3 Dynamic Programming Approach.- 8.7.4 Linear Feedback.- 8.7.5 An Illustrative Example.- 8.8 Exercises.- 9 Nonlinear Model Predictive Control.- 9.1 Nonlinear MPC Versus Linear MPC.- 9.2 Nonlinear Models.- 9.2.1 Empirical Models.- 9.2.2 Fundamental Models.- 9.2.3 Grey-box Models.- 9.2.4 Modelling Example.- 9.3 Solution of the NMPC Problem.- 9.3.1 Problem Formulation.- 9.3.2 Solution.- 9.4 Techniques for Nonlinear Predictive Control.- 9.4.1 Extended Linear MPC.- 9.4.2 Local Models.- 9.4.3 Suboptimal NMPC.- 9.4.4 Use of Short Horizons.- 9.4.5 Decomposition of the Control Sequence.- 9.4.6 Feedback Linearization.- 9.4.7 MPC Based on Volterra Models.- 9.4.8 Neural Networks.- 9.4.9 Commercial Products.- 9.5 Stability and Nonlinear MPC.- 9.6 Case Study: pH Neutralization Process.- 9.6.1 Process Model.- 9.6.2 Results.- 9.7 Exercises.- 10 Model Predictive Control and Hybrid Systems.- 10.1 Hybrid System Modelling.- 10.2 Example: A Jacket Cooled Batch Reactor.- 10.2.1 Mixed Logical Dynamical Systems.- 10.2.2 Example.- 10.3 Model Predictive Control of MLD Systems.- 10.3.1 Branch and Bound Mixed Integer Programming.- 10.3.2 An Illustrative Example.- 10.4 Piecewise Affine Systems.- 10.4.1 Example: Tank with Different Area Sections.- 10.4.2 Reach Set, Controllable Set, and STG Algorithm.- 10.5 Exercises.- 11 Fast Methods for Implementing Model Predictive Control.- 11.1 Piecewise Affinity of MPC.- 11.2 MPC and Multiparametric Programming.- 11.3 Piecewise Implementation of MPC.- 11.3.1 Illustrative Example: The Double Integrator.- 11.3.2 Nonconstant References and Measurable Disturbances.- 11.3.3 Example.- 11.3.4 The 1-norm and ∞-norm Cases.- 11.4 Fast Implementation of MPC for Uncertain Systems.- 11.4.1 Example.- 11.4.2 The Closed-Loop Min-max MPC.- 11.5 Approximated Implementation for MPC.- 11.6 Fast Implementation of MPC and Dead Time Considerations.- 11.7 Exercises.- 12 Applications.- 12.1 Solar Power Plant.- 12.1.1 Selftuning GPC Control Strategy.- 12.1.2 Gain Scheduling Generalized Predictive Control.- 12.2 Pilot Plant.- 12.2.1 Plant Description.- 12.2.2 Plant Control.- 12.2.3 Flow Control.- 12.2.4 Temperature Control at the Exchanger Output.- 12.2.5 Temperature Control in the Tank.- 12.2.6 Level Control.- 12.2.7 Remarks.- 12.3 Model Predictive Control in a Sugar Refinery.- 12.4 Olive Oil Mill.- 12.4.1 Plant Description.- 12.4.2 Process Modelling and Validation.- 12.4.3 Controller Synthesis.- 12.4.4 Experimental Results.- 12.5 Mobile Robot.- 12.5.1 Problem Definition.- 12.5.2 Prediction Model.- 12.5.3 Parametrization of the Desired Path.- 12.5.4 Potential Function for Considering Fixed Obstacles.- 12.5.5 The Neural Network Approach.- 12.5.6 Training Phase.- 12.5.7 Results.- A Revision of the Simplex Method.- A.1 Equality Constraints.- A.2 Finding an Initial Solution.- A.3 Inequality Constraints.- B Dynamic Programming and Linear Quadratic Optimal Control.- B.1 Linear Quadratic Problem.- B.2 Infinite Horizon.- References.

3,913 citations
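The chapter outline above revolves around one idea: predict the output over a horizon with a process model, minimise a cost over the future control moves, and apply only the first move. A minimal unconstrained sketch of that receding-horizon loop, assuming an illustrative first-order state-space model and hand-picked horizon and weights (constraints, disturbance models, and the book's GPC polynomial formulation are omitted):

```python
import numpy as np

# Assumed prediction model x_{k+1} = A x_k + B u_k, y_k = C x_k (illustrative first-order plant).
A = np.array([[0.9]])
B = np.array([[0.1]])
C = np.array([[1.0]])

N = 10             # prediction horizon
Q, R = 1.0, 0.01   # output-tracking and control-effort weights

def mpc_step(x, r):
    """One receding-horizon step: stack the predictions y = F x + G u over the
    horizon, minimise sum Q*(y_k - r)^2 + R*u_k^2 in closed form (no constraints),
    and return only the first control move."""
    F = np.vstack([C @ np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()
    ref = np.full((N, 1), r)
    H = Q * (G.T @ G) + R * np.eye(N)
    g = Q * (G.T @ (F @ x - ref))
    u_seq = np.linalg.solve(H, -g)   # optimal input sequence over the horizon
    return u_seq[0, 0]               # first move only (receding horizon)

# Closed-loop simulation toward a unit setpoint.
x = np.array([[0.0]])
for _ in range(30):
    u = mpc_step(x, 1.0)
    x = A @ x + B * u
print("output after 30 steps:", round((C @ x).item(), 3))
```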

Book
01 Jan 1987
TL;DR: Discrete-time control systems, مرکز فناوری اطلاعات و اطلاع رسانی کشاورزی.
Abstract: Discrete-time control systems, مرکز فناوری اطلاعات و اطلاع رسانی کشاورزی

2,098 citations

Journal ArticleDOI
TL;DR: Current research on gain scheduling is clarifying customary practices as well as devising new approaches and methods for the design of nonlinear control systems.

1,621 citations

Book
08 Aug 2005
TL;DR: Advanced PID Control builds on the basics learned in PID Controllers and augments them with advanced control techniques, including auto-tuning, gain scheduling, and adaptation.
Abstract: The authors of the best-selling book PID Controllers: Theory, Design, and Tuning once again combine their extensive knowledge in the PID arena to bring you an in-depth look at the world of PID control. Their new book, Advanced PID Control, builds on the basics learned in PID Controllers and augments them with advanced control techniques. The design of PID controllers is brought into the mainstream of control system design by focusing on requirements that capture the effects of load disturbances, measurement noise, robustness to process variations, and set-point tracking. In this way it is possible to make a smooth transition from PID control to more advanced model-based controllers. It is also possible to gain insight into fundamental limitations and to determine the information needed to design good controllers. The book provides a solid foundation for understanding, operating, and implementing the more advanced features of PID controllers, including auto-tuning, gain scheduling, and adaptation. Particular attention is given to specific challenges such as reset windup, long process dead times, and oscillatory systems. As in their other book, modeling methods, implementation details, and problem-solving techniques are also presented.

1,533 citations
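To make the reset-windup point concrete, here is a minimal sketch of a discrete PID loop with actuator saturation and back-calculation anti-windup, closed around an assumed first-order plant with purely illustrative gains (in a gain-scheduled implementation the gains would simply be looked up from a schedule before each update, as in the earlier sketch):

```python
# Discrete PID with actuator saturation and back-calculation anti-windup,
# closed around an assumed first-order plant dy/dt = -y + u (all values illustrative).
dt = 0.1
kp, ki, kd = 2.0, 1.0, 0.1
u_min, u_max = -1.5, 1.5
k_aw = 2.0               # back-calculation (tracking) gain

integral, prev_err = 0.0, 0.0
y = 0.0                  # plant output

for step in range(200):
    r = 1.0              # setpoint
    err = r - y

    # Unsaturated PID output.
    deriv = (err - prev_err) / dt
    u_raw = kp * err + ki * integral + kd * deriv

    # Saturate, then bleed the excess back into the integrator (anti-windup).
    u = max(u_min, min(u_max, u_raw))
    integral += (err + k_aw * (u - u_raw)) * dt
    prev_err = err

    # Plant update (forward Euler).
    y += dt * (-y + u)

print("output after 20 s:", round(y, 3))
```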


Network Information
Related Topics (5)
Control theory: 299.6K papers, 3.1M citations, 93% related
Control system: 129K papers, 1.5M citations, 89% related
Linear system: 59.5K papers, 1.4M citations, 88% related
Robustness (computer science): 94.7K papers, 1.6M citations, 85% related
Optimization problem: 96.4K papers, 2.1M citations, 84% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  30
2022  71
2021  78
2020  86
2019  99
2018  117