
Showing papers on "Optimal control published in 2005"


Book
02 Dec 2005
TL;DR: In this book, the authors develop a constrained optimization and equilibrium approach to optimal control of evolution systems in Banach spaces, and apply it to the optimal control of distributed systems and to problems in economics.
Abstract: Applications.- Constrained Optimization and Equilibria.- Optimal Control of Evolution Systems in Banach Spaces.- Optimal Control of Distributed Systems.- Applications to Economics.

1,540 citations


Journal ArticleDOI
TL;DR: The application of optimal control-based pulse engineering methods to the design of pulse sequences that are robust to experimentally important parameter variations, such as chemical shift dispersion or radiofrequency (rf) field variations due to imperfections such as rf inhomogeneity, is explained.

1,516 citations


Journal ArticleDOI
TL;DR: This paper provides a novel solution to the problem of robust model predictive control of constrained, linear, discrete-time systems in the presence of bounded disturbances, based on an optimal control problem that is solved online.

1,357 citations


Book
30 Jun 2005
TL;DR: In this handbook, the contributors present fundamentals, analysis, and design methods for control systems, spanning continuous-time, digital, nonlinear, MIMO linear, adaptive, and stochastic control, together with software tools and a broad range of applications.
Abstract: FUNDAMENTALS OF CONTROL Mathematical Foundations Ordinary Linear Differential and Difference Equations, B.P. Lathi The Fourier, Laplace, and Z-Transforms, E.W. Kamen Matrices and Linear Algebra, B.W. Dickinson Complex Variables, C.W. Gray Models for Dynamical Systems Standard Mathematical Models Input-Output Models, W.S. Levine State Space, J. Gillis Graphical Models Block Diagrams, D.K. Frederick and C.M. Close Signal Flow Graphs, N.S. Nise Determining Models Modeling from Physical Principles, F.E. Cellier, H. Elmqvist, and M. Otter System Identification When Noise is Negligible, W.S. Levine Analysis and Design Methods for Continuous-Time Systems Analysis Methods Time Response of Linear Time-Invariant Systems, R.T. Stefani Controllability and Observability, W.A. Wolovich Stability Tests The Routh-Hurwitz Stability Criterion, R.H. Bishop and R.C. Dorf The Nyquist Stability Test, C.E. Rohrs Discrete-Time and Sampled-Data Stability Tests, M. Mansour Gain Margin and Phase Margin, R.T. Stefani Design Methods Specification of Control Systems, J.-S. Yang and W.S. Levine Design Using Performance Indices, R.C. Dorf and R.H. Bishop Nyquist, Bode, and Nichols Plots, J.J. D'Azzo and C.H. Houpis The Root Locus Plot, W.S. Levine PID Control, K.J. Astrom and T. Hagglund State Space - Pole Placement, K. Ogata Internal Model Control, R.D. Braatz Time-Delay Compensation - Smith Predictor and Its Modifications, Z.J. Palmor Digital Control Discrete-Time Systems, M.S. Santina and A.R. Stubberud Sampled-Data Systems, A. Feuer and G.C. Goodwin Discrete-Time Equivalents to Continuous-Time Systems, M.S. Santina and A.R. Stubberud Design Methods for Discrete-Time Linear Time-Invariant Systems, M.S. Santina and A.R. Stubberud Quantization Effects, M.S. Santina and A.R. Stubberud Sample-Rate Selection, M.S. Santina and A.R. Stubberud Real Time Software for Implementation of Digital Control, D.M. Auslander, J.R. Ridgely, and J. Jones Programmable Controllers, G. 
Olsson Analysis and Design Methods for Nonlinear Systems Analysis Methods The Describing Function Method, D.P. Atherton The Phase Plane Method, D.P. Atherton Design Methods Dealing with Actuator Saturation, R.H. Middleton Bumpless Transfer, A. Ahlen and S.F. Graebe Linearization and Gain-Scheduling, J.S. Shamma Software for Control System Analysis and Design Numerical and Computational Issues in Linear Control and System Theory, R.V. Patel, A.J. Laub, and P.M. Van Dooren Software for Modeling and Simulating Control Systems, M. Otter and F.E. Cellier Computer-Aided Control Systems Design, C.M. Rimvall and C.P Jobling ADVANCED METHODS OF CONTROL Analysis Methods for MIMO Linear Systems Multivariable Poles, Zeros, and Pole/Zero Cancellations, J. Douglas and M. Athans Fundamentals of Linear Time-Varying Systems, E.W. Kamen Geometric Theory of Linear Systems, F. Hamano Polynomial and Matrix Fraction Descriptions, D.F. Delchamps Robustness Analysis with Real Parametric Uncertainty, R. Tempo and F. Blanchini MIMO Frequency Response Analysis and the Singular Value Decomposition, S.D. Patek and M. Athans Stability Robustness to Unstructured Uncertainty for Linear Time-Invariant Systems, A. Chao and M. Athans Tradeoffs and Limitations in Feedback Systems, D.P. Looze and J.S. Freudenberg Modeling Deterministic Uncertainty, J. Raisch and B.A. Francis The Use of Multivariate Statistics in Process Control, M.J. Piovoso and K.A. Kosanovich Kalman Filter and Observers Linear Systems and White Noise, W.S. Levine Kalman Filter, M. Athans Riccati Equations and Their Solution, V. Kucera Observers, B. Friedland Design Methods for MIMO LTI Systems Eigenstructure Assignment, K.M. Sobel, E.Y. Shapiro, and A.N. Andry, Jr. Linear Quadratic Regulator Control, L. Lublin and M. Athans H2 (LQG) and H8 Control, L. Lublin, S. Grocott, and M. Athens Robust Control: Theory, Computation, and Design, M. Dahleh The Structured Singular Value (m) Framework, G.J. Balas and A. 
Packard Algebraic Design Methods, V. Kucera Quantitative Feedback Theory (QFT) Technique, C.H. Houpis The Inverse Nyquist Array and Characteristic Locus Design Methods, N. Munro and J.M. Edmunds Robust Servomechanism Problem, E.J. Davidson Numerical Optimization-Based Design, V. Balakrishnan and A.L. Tits Optimal Control, F.L. Lewis Decentralized Control, M.E. Sezer and D.D. Siljak Decoupling, T. Williams and P.J. Antsaklis Predictive Control, A.W. Pike, M.J. Grimble, M.A. Johnson, A.W. Ordys, and S. Shakoor Adaptive Control Automatic Tuning of PID Controllers, T. Hagglund and K.J. Astrom Self-Tuning Control, D.W. Clarke Model Reference Adaptive Control, P.A. Ioannou Analysis and Design of Nonlinear Systems Analysis Methods The Lie Bracket and Control, V. Jurdjevic Two Time Scale and Averaging Methods, H.K. Khalil Volterra and Fliess Series Expansion for Nonlinear Systems, F. Lamnabi-Lagarrique Stability Lyapunov Stability, H.K. Khalil Input-Output Stability, A.R. Teel, T.T. Georgiou, L. Praly, and E. Sontag Design Methods Feedback Linearization of Nonlinear Systems, A. Isidori and M.D. Di Benedetto Nonlinear Zero Dynamics, A. Isidori and C.I. Byrnes Nonlinear Output Regulation and Tracking, A. Isidori Lyapunov Design, R.A. Freeman and P.V. Kokotovic Variable Structure and Sliding Mode Controller Design, R.A. De Carlo, S.H. Zak, and S.V. Drakunov Control of Bifurcation and Chaos, E.H. Abed, H.O. Wang, and A. Tesi Open-Loop Control Using Oscillatory Inputs, J. Baillieul and B. Lehman Adaptive Nonlinear Control, M. Krstic and P.V. Kokotovic Intelligent Control, K.M. Passino Fuzzy Control, K.M. Passino and S. Yurkovich Neural Control, J.A. Farrell System Identification System Identification, L. Ljung Stochastic Control Discrete Time Markov Processes, A. Schwartz Stochastic Differential Equations, J.A. Gubner Linear Stochastic Input-Output Models, T. Soderstrom Minimum Variance Control, M.R. Katebi and A.W. Ordys Dynamic Programming, P.R. 
Kumar Stability of Stochastic Systems, K.O. Loparo and X. Feng Stochastic Adaptive Control, T.E. Duncan and B. Pasik-Duncan Control of Distributed Parameter Systems Controllability of Thin Elastic Beams and Plates, J.E. Lagnese and G. Leugering Control of the Heat Equation, T.I. Seidman Observability of Linear Distributed Parameter Systems, D.L. Russell APPLICATIONS OF CONTROL Process Control Water Level Control for the Toilet Tank: A Historical Perspective, B.G. Coury Temperature Control in Large Buildings, C.C. Federspiel and J.E. Seem Control of pH, F.G. Shinskey Control of the Pulp and Paper-Making Process, W.L. Bialkowski Control for Advanced Semiconductor Device Manufacturing: A Case History, T. Kailath, C. Schaper, Y. Cho, P. Gyugyi, S. Norman, P. Park, S. Boyd, G. Franklin, K. Saraswat, M. Modehi, and C. Davis Mechanical Control Systems Automotive Control Systems Engine Control, J.A. Cook, J.W. Grizzle, and J. Sun Adaptive Automotive Speed Control, M.K. Liubakka, D.S. Rhode, J.R. Winkelman, and P.V. Kokotovic Aerospace Controls Flight Control of Piloted Aircraft, M. Pachter and C.H. Houpis Spacecraft Attitude Control, V.T. Coppola and N.H. McClamroch Control of Flexible Space Structures, S.M. Joshi and A.G. Kelkar Line-of-Sight Pointing and Stabilization Control Systems, D.A. Haessig Control of Robots and Manipulators Motion Control of Robotic Manipulators, M.W. Spong Force Control of Robotic Manipulators, J. De Schutter and H. Bruyninckx Control of Nonholonomic Systems, J.T.-Y. Wen Miscellaneous Mechanical Control Systems Friction Compensation, B. Armstrong-Helouvry and C. Canudas de Wit Motion Control Systems, J. Tal Ultra-High Precision Control, T.R. Kurfess and H. Jenkins Robust Control of a Compact Disc Mechanism, M. Steinbuch, G. Schootstra, and O.H. Bosgra Electrical and Electronic Control Systems Power Electronic Controls Dynamic Modeling and Control in Power Electronics, G.C. 
Verghese Motion Control with Electric Motors by Input-Output Linearization, D.G. Taylor Control of Electric Generators, T. Jahns and R.W. De Doncker Control of Electrical Power Control of Electrical Power Generating Plants, H.G. Kwatny and C. Maffezzoni Control of Power Transmission, J.J. Paserba, J.J. Sanchez-Gasca, and E.V. Larsen Control Systems Including Humans Human-in-the-Loop Control, R.A. Hess Index

1,351 citations


Book
02 Feb 2005
TL;DR: This book develops the theory of Markov jump linear systems, covering stability, optimal control, filtering, quadratic optimal control with partial information, and H∞ control, together with design techniques and examples.
Abstract: Markov Jump Linear Systems.- Background Material.- On Stability.- Optimal Control.- Filtering.- Quadratic Optimal Control with Partial Information.- H∞-Control.- Design Techniques and Examples.

1,195 citations


Journal Article
TL;DR: Within this framework, several classical tree-based supervised learning methods and two newly proposed ensemble algorithms, namely extremely and totally randomized trees, are described; the ensemble methods based on regression trees are found to perform well in extracting relevant information about the optimal control policy from sets of four-tuples.
Abstract: Reinforcement learning aims to determine an optimal control policy from interaction with a system or from observations gathered from a system. In batch mode, it can be achieved by approximating the so-called Q-function based on a set of four-tuples (xt, ut , rt, xt+1) where xt denotes the system state at time t, ut the control action taken, rt the instantaneous reward obtained and xt+1 the successor state of the system, and by determining the control policy from this Q-function. The Q-function approximation may be obtained from the limit of a sequence of (batch mode) supervised learning problems. Within this framework we describe the use of several classical tree-based supervised learning methods (CART, Kd-tree, tree bagging) and two newly proposed ensemble algorithms, namely extremely and totally randomized trees. We study their performances on several examples and find that the ensemble methods based on regression trees perform well in extracting relevant information about the optimal control policy from sets of four-tuples. In particular, the totally randomized trees give good results while ensuring the convergence of the sequence, whereas by relaxing the convergence constraint even better accuracy results are provided by the extremely randomized trees.
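The batch-mode fitted Q-iteration described above can be sketched in a few lines. Here a tabular mean stands in for the paper's tree-based regressors, and the two-state toy MDP and all constants are illustrative assumptions, not the paper's benchmarks:

```python
import random

# Batch-mode fitted Q-iteration on four-tuples (x, u, r, x_next).
# A tabular mean plays the role of the tree-based regressor.

random.seed(0)
STATES, ACTIONS, GAMMA = [0, 1], [0, 1], 0.9

def step(x, u):
    """Toy system: action 1 moves to state 1, which pays reward 1."""
    x_next = 1 if u == 1 else 0
    return x_next, (1.0 if x_next == 1 else 0.0)

# gather a batch of four-tuples from random interaction with the system
batch = []
for _ in range(500):
    x, u = random.choice(STATES), random.choice(ACTIONS)
    x_next, r = step(x, u)
    batch.append((x, u, r, x_next))

# sequence of (batch mode) supervised learning problems approximating Q
Q = {(x, u): 0.0 for x in STATES for u in ACTIONS}
for _ in range(50):
    targets = {}
    for (x, u, r, x_next) in batch:
        y = r + GAMMA * max(Q[(x_next, a)] for a in ACTIONS)
        targets.setdefault((x, u), []).append(y)
    Q = {k: sum(v) / len(v) for k, v in targets.items()}  # the "fit" step

# the control policy is read off greedily from the final Q-function
policy = {x: max(ACTIONS, key=lambda a: Q[(x, a)]) for x in STATES}
print(policy)
```

Replacing the tabular averaging with an ensemble of randomized trees recovers the paper's setting while leaving the iteration itself unchanged.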

1,079 citations


Journal ArticleDOI
TL;DR: It is shown that the constrained optimal control law has the largest region of asymptotic stability (RAS) and the result is a nearly optimal constrained state feedback controller that has been tuned a priori off-line.

1,045 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a critical literature review and an up-to-date and exhaustive bibliography on the AGC of power systems, highlighting various control aspects of the AGC problem.
Abstract: An attempt is made in this work to present a critical literature review and an up-to-date and exhaustive bibliography on the AGC of power systems. Various control aspects concerning the AGC problem have been highlighted. AGC schemes based on parameters, such as linear and nonlinear power system models, classical and optimal control, and centralized, decentralized, and multilevel control, are discussed. AGC strategies based on digital, self-tuning control, adaptive, VSS systems, and intelligent/soft computing control have been included. Finally, the investigations on AGC systems incorporating BES/SMES, wind turbines, FACTS devices, and PV systems have also been discussed.

836 citations


Journal ArticleDOI
TL;DR: The notion of quadratic invariance of a constraint set with respect to a system is defined, and it is shown that if the constraint set has this property, then the constrained minimum-norm problem may be solved via convex programming.
Abstract: We consider the problem of constructing optimal decentralized controllers. We formulate this problem as one of minimizing the closed-loop norm of a feedback system subject to constraints on the controller structure. We define the notion of quadratic invariance of a constraint set with respect to a system, and show that if the constraint set has this property, then the constrained minimum-norm problem may be solved via convex programming. We also show that quadratic invariance is necessary and sufficient for the constraint set to be preserved under feedback. These results are developed in a very general framework, and are shown to hold in both continuous and discrete time, for both stable and unstable systems, and for any norm. This notion unifies many previous results identifying specific tractable decentralized control problems, and delineates the largest known class of convex problems in decentralized control. As an example, we show that optimal stabilizing controllers may be efficiently computed in the case where distributed controllers can communicate faster than their dynamics propagate. We also show that symmetric synthesis is included in this classification, and provide a test for sparsity constraints to be quadratically invariant, and thus amenable to convex synthesis.
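The paper's test for sparsity constraints admits a simple binary form: a controller sparsity pattern K is quadratically invariant under a plant pattern G iff the pattern of K G K is contained in K. A minimal sketch (the 2x2 example patterns are illustrative assumptions):

```python
# Binary sparsity test for quadratic invariance: the pattern K is
# quadratically invariant under G iff
#   K[i][j] * G[j][k] * K[k][l] * (1 - K[i][l]) == 0  for all i, j, k, l.

def is_quadratically_invariant(K, G):
    """K: n_u x n_y binary controller pattern; G: n_y x n_u plant pattern."""
    n_u, n_y = len(K), len(K[0])
    for i in range(n_u):
        for j in range(n_y):
            for k in range(n_u):
                for l in range(n_y):
                    if K[i][j] and G[j][k] and K[k][l] and not K[i][l]:
                        return False
    return True

# lower-triangular controller vs lower-triangular plant: QI holds
print(is_quadratically_invariant([[1, 0], [1, 1]], [[1, 0], [1, 1]]))  # True

# fully decentralized (diagonal) controller for a fully coupled plant: not QI
print(is_quadratically_invariant([[1, 0], [0, 1]], [[1, 1], [1, 1]]))  # False
```

When the test passes, the constrained minimum-norm synthesis can be posed as a convex program, per the result above.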

775 citations


Proceedings ArticleDOI
08 Jun 2005
TL;DR: Todorov et al. present an iterative linear-quadratic-Gaussian method for locally-optimal feedback control of nonlinear stochastic systems subject to control constraints.
Abstract: We present an iterative linear-quadratic-Gaussian method for locally-optimal feedback control of nonlinear stochastic systems subject to control constraints. Previously, similar methods have been restricted to deterministic unconstrained problems with quadratic costs. The new method constructs an affine feedback control law, obtained by minimizing a novel quadratic approximation to the optimal cost-to-go function. Global convergence is guaranteed through a Levenberg-Marquardt method; convergence in the vicinity of a local minimum is quadratic. Performance is illustrated on a limited-torque inverted pendulum problem, as well as a complex biomechanical control problem involving a stochastic model of the human arm, with 10 state dimensions and 6 muscle actuators. A Matlab implementation of the new algorithm is available at www.cogsci.ucsd.edu/~todorov.
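At the core of iterative LQG is a time-varying Riccati backward pass on a local linear-quadratic approximation of the dynamics and cost; the full method wraps this in repeated linearization, control constraints, and Levenberg-Marquardt regularization. A minimal sketch of that inner pass, with illustrative (assumed) system matrices:

```python
import numpy as np

# Riccati backward pass for x' = A x + B u, stage cost x'Qx + u'Ru:
# compute time-varying feedback gains K_t, then roll u_t = -K_t x_t forward.

A = np.array([[1.0, 0.1], [0.0, 1.0]])  # double-integrator-like dynamics
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.01]])
T = 50

S = Q.copy()                             # terminal cost-to-go Hessian
gains = []
for _ in range(T):                       # backward in time
    K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
    S = Q + A.T @ S @ (A - B @ K)        # Riccati recursion
    gains.append(K)
gains.reverse()                          # gains[t] now matches time t

# forward rollout of the affine (here linear) feedback law
x = np.array([[1.0], [0.0]])
for K in gains:
    x = A @ x - B @ (K @ x)
print(float(np.linalg.norm(x)))          # state is driven near the origin
```

In iLQG this pass runs on the linearization around the current trajectory at each iteration, with the quadratic cost-to-go approximation replacing the exact Q and R above.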

730 citations


Journal ArticleDOI
TL;DR: A new control strategy called Adaptive Equivalent Consumption Minimization Strategy (A-ECMS) is presented, adding to the ECMS framework an on-the-fly algorithm for the estimation of the equivalence factor according to the driving conditions.

Journal ArticleDOI
TL;DR: This paper provides a concise and timely survey on analysis and synthesis of switched linear control systems, and presents the basic concepts and main properties of switched linear systems in a systematic manner.

Journal ArticleDOI
TL;DR: In this paper, the optimal coordination of variable speed limits and ramp metering in a freeway traffic network is discussed, where the objective of the control is to minimize the total time that vehicles spend in the network.
Abstract: This paper discusses the optimal coordination of variable speed limits and ramp metering in a freeway traffic network, where the objective of the control is to minimize the total time that vehicles spend in the network. Coordinated freeway traffic control is a new development where the control problem is to find the combination of control measures that results in the best network performance. This problem is solved by model predictive control, where the macroscopic traffic flow model METANET is used as the prediction model. We extend this model with a model for dynamic speed limits and for main-stream origins. This approach results in a predictive coordinated control approach where variable speed limits can prevent a traffic breakdown and maintain a higher outflow even when ramp metering is unable to prevent congestion (e.g., because of an on-ramp queue constraint). The use of dynamic speed limits significantly reduces congestion and results in a lower total time spent. Since the primary effect of the speed limits is the limitation of the main-stream flow, a comparison is made with the case where the speed limits are replaced by main-stream metering. The resulting performances are comparable. Since the range of flows that main-stream metering and dynamic speed limits can control is different, the choice between the two should be primarily based on the traffic demands.

Journal ArticleDOI
TL;DR: The robustness and excellent real-time performance of the method is demonstrated in a numerical experiment, the control of an unstable system, namely, an airborne kite that shall fly loops.
Abstract: An efficient Newton-type scheme for the approximate on-line solution of optimization problems as they occur in optimal feedback control is presented. The scheme allows a fast reaction to disturbances by delivering approximations of the exact optimal feedback control which are iteratively refined during the runtime of the controlled process. The contractivity of this real-time iteration scheme is proven, and a bound on the loss of optimality---compared with the theoretical optimal solution---is given. The robustness and excellent real-time performance of the method is demonstrated in a numerical experiment, the control of an unstable system, namely, an airborne kite that shall fly loops.
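The real-time iteration idea can be illustrated on a scalar parametric problem: rather than solving each optimal control problem to convergence, one Newton-type step is taken per sampling instant on the optimality condition, so the iterate tracks the moving optimum. The scalar cost below is an illustrative assumption, not the paper's kite model:

```python
import math

# One Newton step per sampling instant on g(u; x_t) = dJ/du = 0,
# for the assumed cost J(u; x) = 0.5*(u - sin x)^2 + 0.1*u^4.

def g(u, x):        # first derivative of the cost in u
    return (u - math.sin(x)) + 0.4 * u**3

def g_prime(u):     # second derivative of the cost in u
    return 1.0 + 1.2 * u**2

u = 0.0
for t in range(200):
    x_t = 0.05 * t                  # slowly drifting process state
    u = u - g(u, x_t) / g_prime(u)  # single Newton step, then move on

residual = abs(g(u, 0.05 * 199))
print(residual)                     # small: the iterate tracks the optimum
```

Contractivity of the iteration keeps the tracking error bounded, which is the scalar analogue of the optimality-loss bound proven in the paper.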

Journal ArticleDOI
TL;DR: It is shown that the new discretization concept for optimal control problems with control constraints is numerically implementable with only slight increase in program management and an optimal error estimate is proved.
Abstract: A new discretization concept for optimal control problems with control constraints is introduced which utilizes for the discretization of the control variable the relation between adjoint state and control. Its key feature is not to discretize the space of admissible controls but to implicitly utilize the first order optimality conditions and the discretization of the state and adjoint equations for the discretization of the control. For discrete controls obtained in this way an optimal error estimate is proved. The application to control of elliptic equations is discussed. Finally it is shown that the new concept is numerically implementable with only slight increase in program management. A numerical test confirms the theoretical investigations.
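For a model problem of the kind treated here, say minimizing $\frac{1}{2}\|y - y_d\|^2 + \frac{\alpha}{2}\|u\|^2$ subject to box constraints $a \le u \le b$ (the specific functional is an illustrative assumption), the adjoint-control relation the new concept exploits is the pointwise projection

```latex
\bar{u}(x) \;=\; P_{[a,b]}\!\left(-\tfrac{1}{\alpha}\,\bar{p}(x)\right),
\qquad
P_{[a,b]}(v) := \max\{a,\,\min\{b,\,v\}\},
```

where $\bar{p}$ is the adjoint state. Discretizing the state and adjoint equations, but not $u$ itself, and evaluating this formula with the discrete adjoint yields the implicitly discretized control.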

Book
17 Jun 2005
TL;DR: This book treats linear-quadratic optimal control and differential games on linear dynamical systems, building up from linear algebra, optimization techniques, and dynamic programming to cooperative games and non-cooperative open-loop and feedback information games, including a deterministic approach to games under uncertainty.
Abstract: Preface. Notation and symbols. 1 Introduction. 1.1 Historical perspective. 1.2 How to use this book. 1.3 Outline of this book. 1.4 Notes and references. 2 Linear algebra. 2.1 Basic concepts in linear algebra. 2.2 Eigenvalues and eigenvectors. 2.3 Complex eigenvalues. 2.4 Cayley-Hamilton theorem. 2.5 Invariant subspaces and Jordan canonical form. 2.6 Semi-definite matrices. 2.7 Algebraic Riccati equations. 2.8 Notes and references. 2.9 Exercises. 2.10 Appendix. 3 Dynamical systems. 3.1 Description of linear dynamical systems. 3.2 Existence-uniqueness results for differential equations. 3.2.1 General case. 3.2.2 Control theoretic extensions. 3.3 Stability theory: general case. 3.4 Stability theory of planar systems. 3.5 Geometric concepts. 3.6 Performance specifications. 3.7 Examples of differential games. 3.8 Information, commitment and strategies. 3.9 Notes and references. 3.10 Exercises. 3.11 Appendix. 4 Optimization techniques. 4.1 Optimization of functions. 4.2 The Euler-Lagrange equation. 4.3 Pontryagin's maximum principle. 4.4 Dynamic programming principle. 4.5 Solving optimal control problems. 4.6 Notes and references. 4.7 Exercises. 4.8 Appendix. 5 Regular linear quadratic optimal control. 5.1 Problem statement. 5.2 Finite-planning horizon. 5.3 Riccati differential equations. 5.4 Infinite-planning horizon. 5.5 Convergence results. 5.6 Notes and references. 5.7 Exercises. 5.8 Appendix. 6 Cooperative games. 6.1 Pareto solutions. 6.2 Bargaining concepts. 6.3 Nash bargaining solution. 6.4 Numerical solution. 6.5 Notes and references. 6.6 Exercises. 6.7 Appendix. 7 Non-cooperative open-loop information games. 7.1 Introduction. 7.2 Finite-planning horizon. 7.3 Open-loop Nash algebraic Riccati equations. 7.4 Infinite-planning horizon. 7.5 Computational aspects and illustrative examples. 7.6 Convergence results. 7.7 Scalar case. 7.8 Economics examples. 7.8.1 A simple government debt stabilization game. 7.8.2 A game on dynamic duopolistic competition. 
7.9 Notes and references. 7.10 Exercises. 7.11 Appendix. 8 Non-cooperative feedback information games. 8.1 Introduction. 8.2 Finite-planning horizon. 8.3 Infinite-planning horizon. 8.4 Two-player scalar case. 8.5 Computational aspects. 8.5.1 Preliminaries. 8.5.2 A scalar numerical algorithm: the two-player case. 8.5.3 The N-player scalar case. 8.6 Convergence results for the two-player scalar case. 8.7 Notes and references. 8.8 Exercises. 8.9 Appendix. 9 Uncertain non-cooperative feedback information games. 9.1 Stochastic approach. 9.2 Deterministic approach: introduction. 9.3 The one-player case. 9.4 The one-player scalar case. 9.5 The two-player case. 9.6 A fishery management game. 9.7 A scalar numerical algorithm. 9.8 Stochastic interpretation. 9.9 Notes and references. 9.10 Exercises. 9.11 Appendix. References. Index.

Proceedings ArticleDOI
13 Mar 2005
TL;DR: The combined strategy is shown to yield data rates that are arbitrarily close to the optimal operating point achieved when all network controllers are coordinated and have perfect knowledge of future events.
Abstract: We consider optimal control for general networks with both wireless and wireline components and time varying channels. A dynamic strategy is developed to support all traffic whenever possible, and to make optimally fair decisions about which data to serve when inputs exceed network capacity. The strategy is decoupled into separate algorithms for flow control, routing, and resource allocation, and allows each user to make decisions independent of the actions of others. The combined strategy is shown to yield data rates that are arbitrarily close to the optimal operating point achieved when all network controllers are coordinated and have perfect knowledge of future events. The cost of approaching this fair operating point is an end-to-end delay increase for data that is served by the network. Analysis is performed at the packet level and considers the full effects of queueing.
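The flow-control piece of such a decoupled strategy is often a drift-plus-penalty rule trading queue backlog against utility. A minimal sketch for a single queue with log utility (the service process and all constants are illustrative assumptions, not the paper's network model):

```python
import random

# Drift-plus-penalty flow control: admit r_t = argmax_r [V*log(1+r) - Q_t*r],
# which for r in [0, R_MAX] gives r_t = clip(V/Q_t - 1, 0, R_MAX).

random.seed(1)
V, R_MAX = 10.0, 2.0   # V trades utility against delay (backlog)
Q = 0.0                # queue backlog
for t in range(2000):
    r = min(max(V / max(Q, 1.0) - 1.0, 0.0), R_MAX)  # flow-control decision
    mu = random.choice([0.0, 1.0])                   # time-varying service
    Q = max(Q - mu, 0.0) + r                         # queue update

print(Q)  # backlog stays bounded, roughly of order V
```

Larger V pushes the admitted rates closer to the optimally fair operating point at the cost of a larger steady backlog, mirroring the delay trade-off stated in the abstract.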

Book
01 Jan 2005
TL;DR: This book discusses the general formulation of shape optimization problems, optimal control problems over classes of domains, the problem of optimal partitions, and boundary variation for Dirichlet and Neumann problems, along with some open questions.
Abstract: * Preface * Introduction to Shape Optimization Theory and Some Classical Problems > General formulation of a shape optimization problem > The isoperimetric problem and some of its variants > The Newton problem of minimal aerodynamical resistance > Optimal interfaces between two media > The optimal shape of a thin insulating layer * Optimization Problems Over Classes of Convex Domains > A general existence result for variational integrals > Some necessary conditions of optimality > Optimization for boundary integrals > Problems governed by PDE of higher order * Optimal Control Problems: A General Scheme > A topological framework for general optimization problems > A quick survey on 'gamma'-convergence theory > The topology of 'gamma'-convergence for control variables > A general definition of relaxed controls > Optimal control problems governed by ODE > Examples of relaxed shape optimization problems * Shape Optimization Problems with Dirichlet Condition on the Free Boundary > A short survey on capacities > Nonexistence of optimal solutions > The relaxed form of a Dirichlet problem > Necessary conditions of optimality > Boundary variation > Continuity under geometric constraints > Continuity under topological constraints: Sverak's result > Nonlinear operators: necessary and sufficient conditions for the 'gamma'-convergence > Stability in the sense of Keldysh > Further remarks and generalizations * Existence of Classical Solutions > Existence of optimal domains under geometrical constraints > A general abstract result for monotone costs > The weak'gamma'-convergence for quasi-open domains > Examples of monotone costs > The problem of optimal partitions > Optimal obstacles * Optimization Problems for Functions of Eigenvalues > Stability of eigenvalues under geometric domain perturbation > Setting the optimization problem > A short survey on continuous Steiner symmetrization > The case of the first two eigenvalues of the Laplace operator > Unbounded design 
regions > Some open questions * Shape Optimization Problems with Neumann Condition on the Free Boundary > Some examples > Boundary variation for Neumann problems > General facts in RN > Topological constraints for shape stability > The optimal cutting problem > Eigenvalues of the Neumann Laplacian * Bibliography * Index

Journal ArticleDOI
TL;DR: In this article, a generalized analysis of different multiloop control approaches using alternative feedback control variables is presented for several popularly adopted system configurations, highlighting similarities and identifying a generalized optimal control-variable selection criterion that is applicable across most multiloop-controlled inverter systems.
Abstract: Multiloop control strategies have commonly been used to control power inverters of both the voltage-source and current-source topologies for power conversion applications including uninterruptible power supplies and utility interfaces for distributed power generation. However, these control strategies tend to be developed and comparatively evaluated for a particular application, with strategies for other applications presented as independent new developments. This paper presents a generalized analysis of different multiloop control approaches using alternative feedback control variables for several popularly adopted system configurations, highlighting similarities and identifying a generalized optimal control-variable selection criterion that is applicable across most multiloop-controlled inverter systems. The generality of the presented optimal variable selection criterion has been verified through the close similarities between the time-domain waveforms of the different inverter systems simulated in MATLAB Simulink and implemented experimentally in the laboratory.

Proceedings ArticleDOI
13 Mar 2005
TL;DR: This paper studies how the performance of cross-layer rate control can be impacted if the network can only use an imperfect scheduling component that is easier to implement, and designs a fully distributed cross-layered rate control and scheduling algorithm for a restrictive interference model.
Abstract: In this paper, we study cross-layer design for rate control in multihop wireless networks. In our previous work, we have developed an optimal cross-layered rate control scheme that jointly computes both the rate allocation and the stabilizing schedule that controls the resources at the underlying layers. However, the scheduling component in this optimal cross-layered rate control scheme has to solve a complex global optimization problem at each time, and hence is too computationally expensive for online implementation. In this paper, we study how the performance of cross-layer rate control can be impacted if the network can only use an imperfect (and potentially distributed) scheduling component that is easier to implement. We study both the case when the number of users in the system is fixed and the case with dynamic arrivals and departures of the users, and we establish desirable results on the performance bounds of cross-layered rate control with imperfect scheduling. Compared with a layered approach that does not design rate control and scheduling together, our cross-layered approach has provably better performance bounds, and substantially outperforms the layered approach. The insights drawn from our analyses also enable us to design a fully distributed cross-layered rate control and scheduling algorithm for a restrictive interference model.

Journal ArticleDOI
TL;DR: An approach is introduced for the efficient solution of motion-planning problems for time-invariant dynamical control systems with symmetries, such as mobile robots and autonomous vehicles, under a variety of differential and algebraic constraints on the state and on the control inputs.
Abstract: In this paper, we introduce an approach for the efficient solution of motion-planning problems for time-invariant dynamical control systems with symmetries, such as mobile robots and autonomous vehicles, under a variety of differential and algebraic constraints on the state and on the control inputs. Motion plans are described as the concatenation of a number of well-defined motion primitives, selected from a finite library. Rules for the concatenation of primitives are given in the form of a regular language, defined through a finite-state machine called a Maneuver Automaton. We analyze the reachability properties of the language, and present algorithms for the solution of a class of motion-planning problems. In particular, it is shown that the solution of steering problems for nonlinear dynamical systems with symmetries and invariant constraints can be reduced to the solution of a sequence of kinematic inversion problems. A detailed example of the application of the proposed approach to motion planning for a small aerobatic helicopter is presented.
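The Maneuver Automaton can be sketched as a finite-state machine deciding which concatenations of primitives form valid motion plans; the states, primitives, and transitions below are illustrative assumptions, not the paper's helicopter library:

```python
# A minimal Maneuver Automaton: motion plans are strings over a finite
# library of primitives, and the automaton (a regular language) accepts
# exactly the admissible concatenations.

TRANSITIONS = {
    # (trim state, primitive) -> next trim state
    ("hover", "forward_flight"): "cruise",
    ("cruise", "coordinated_turn"): "cruise",
    ("cruise", "decelerate"): "hover",
    ("hover", "pirouette"): "hover",
}

def is_admissible(plan, start="hover"):
    """Check that a concatenation of primitives is accepted by the automaton."""
    state = start
    for primitive in plan:
        if (state, primitive) not in TRANSITIONS:
            return False
        state = TRANSITIONS[(state, primitive)]
    return True

print(is_admissible(["forward_flight", "coordinated_turn", "decelerate"]))  # True
print(is_admissible(["coordinated_turn"]))  # False: cannot turn from hover
```

Planning then reduces to searching over accepted strings, with each primitive's symmetry-invariant displacement composed along the plan.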

Journal ArticleDOI
TL;DR: The adaptive control laws proposed in this paper are optimal with respect to a family of cost functionals by the inverse optimality approach, without solving the associated Hamilton-Jacobi-Isaacs partial differential equation directly.
Abstract: The attitude tracking control problem of a rigid spacecraft with external disturbances and an uncertain inertia matrix is addressed using the adaptive control method. The adaptive control laws proposed in this paper are optimal with respect to a family of cost functionals. This is achieved by the inverse optimality approach, without solving the associated Hamilton-Jacobi-Isaacs partial differential (HJIPD) equation directly. The design of the optimal adaptive controllers is separated into two stages by means of integrator backstepping, and a control Lyapunov argument is constructed to show that the inverse optimal adaptive controllers achieve H∞ disturbance attenuation with respect to external disturbances and global asymptotic convergence of tracking errors to zero for disturbances with bounded energy. The convergence of adaptive parameters is also analyzed in terms of invariant manifold. Numerical simulations illustrate the performance of the proposed control algorithms.

Journal ArticleDOI
TL;DR: The switching system is embedded into a larger family of systems and the optimization problem is formulated for the latter; it is shown that the set of trajectories of the switching system is dense in the trajectory set of this larger family, and this relationship between the two sets of trajectories motivates the shift of focus from the original problem to the more general one.

Journal ArticleDOI
TL;DR: The aim of the paper is to give basic theoretical results on the structure of the optimal state-feedback solution and of the value function and to describe how the state-feedback optimal control law can be constructed by combining multiparametric programming and dynamic programming.

Proceedings Article
01 Jan 2005
TL;DR: In this article, the authors investigate the relationship between optimal control design and control allocation when the performance indexes are quadratic in the control input and show that for a particular class of nonlinear systems, they give exactly the same design freedom in distributing the control effort among the actuators.
Abstract: This paper considers actuator redundancy management for a class of overactuated nonlinear systems. Two tools for distributing the control effort among a redundant set of actuators are optimal control design and control allocation. In this paper, we investigate the relationship between these two design tools when the performance indexes are quadratic in the control input. We show that for a particular class of nonlinear systems, they give exactly the same design freedom in distributing the control effort among the actuators. Linear quadratic optimal control is contained as a special case. A benefit of using a separate control allocator is that actuator constraints can be considered, which is illustrated with a flight control example.
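The equivalence discussed above rests on distributing a virtual command over redundant actuators by a weighted minimum-norm criterion. A minimal sketch of the closed-form solution of min uᵀWu subject to Bu = v, with a hypothetical effectiveness matrix B and weight W (the flight-control example itself is not reproduced):

```python
import numpy as np

def allocate(B, v, W):
    """Weighted minimum-norm control allocation:
    min u' W u  subject to  B u = v,
    closed form u = W^{-1} B' (B W^{-1} B')^{-1} v."""
    Winv = np.linalg.inv(W)
    return Winv @ B.T @ np.linalg.solve(B @ Winv @ B.T, v)

# Two virtual controls, four (redundant) actuators; B and W are hypothetical:
B = np.array([[1.0, 1.0, 0.0, 0.5],
              [0.0, 1.0, 1.0, 0.5]])
W = np.diag([1.0, 2.0, 1.0, 4.0])  # penalize actuator use unevenly
v = np.array([1.0, -0.5])

u = allocate(B, v, W)
print(np.allclose(B @ u, v))  # the allocation reproduces the virtual command
```

The weight W is the same degree of freedom a quadratic-in-u performance index provides in the optimal control formulation; a separate allocator additionally admits actuator constraints, which this unconstrained closed form does not handle.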

Journal ArticleDOI
TL;DR: In this article, the authors investigate the relationship between optimal control design and control allocation when the performance indexes are quadratic in the control input and show that for a particular class of nonlinear systems, they give exactly the same design freedom in distributing the control effort among the actuators.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the non-linear Hamilton-Jacobi-Bellman equation can be transformed into a linear equation, and the usual backward computation can be replaced by a forward diffusion process that can be computed by stochastic integration or by the evaluation of a path integral.
Abstract: This paper considers linear-quadratic control of a non-linear dynamical system subject to arbitrary cost. I show that for this class of stochastic control problems the non-linear Hamilton–Jacobi–Bellman equation can be transformed into a linear equation. The transformation is similar to the transformation used to relate the classical Hamilton–Jacobi equation to the Schrödinger equation. As a result of the linearity, the usual backward computation can be replaced by a forward diffusion process that can be computed by stochastic integration or by the evaluation of a path integral. It is shown how in the deterministic limit the Pontryagin minimum principle formalism is recovered. The significance of the path integral approach is that it forms the basis for a number of efficient computational methods, such as Monte Carlo sampling, the Laplace approximation and the variational approximation. We show the effectiveness of the first two methods in a number of examples. Examples are given that show the qualitative difference between stochastic and deterministic control and the occurrence of symmetry breaking as a function of the noise.
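The forward-diffusion replacement for backward computation can be sketched in the simplest scalar setting dx = u dt + σ dW with cost φ(x_T) + ∫ u²/2 dt, where λ = σ² and the value function is J(x,0) = -λ log E[exp(-φ(x_T)/λ)], the expectation taken over the uncontrolled diffusion. The quadratic terminal cost used for the check below is an illustrative choice, not an example from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def value_estimate(x0, T, sigma, phi, n_steps=100, n_samples=20000):
    """Monte Carlo estimate of J(x0, 0) = -lam * log E[exp(-phi(x_T)/lam)],
    lam = sigma**2, with the expectation over the *uncontrolled* forward
    diffusion dx = sigma dW (dynamics dx = u dt + sigma dW,
    cost phi(x_T) + integral of u**2/2 dt)."""
    lam = sigma**2
    dt = T / n_steps
    x = np.full(n_samples, x0)
    for _ in range(n_steps):
        x = x + sigma * np.sqrt(dt) * rng.standard_normal(n_samples)
    return -lam * np.log(np.mean(np.exp(-phi(x) / lam)))

# Check against the closed-form LQ value for phi(x) = x**2/2, sigma = 1:
# J(x, 0) = x**2 / (2*(1 + T)) + (sigma**2 / 2) * log(1 + T)
est = value_estimate(x0=1.0, T=1.0, sigma=1.0, phi=lambda x: 0.5 * x**2)
exact = 1.0 / (2 * (1 + 1.0)) + 0.5 * np.log(1 + 1.0)
print(abs(est - exact) < 0.05)
```

No backward sweep is needed: the forward samples alone determine the value, which is what makes Monte Carlo and Laplace approximations applicable.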

Journal ArticleDOI
TL;DR: The role of noise and the issue of efficient computation in stochastic optimal control problems are addressed and a class of nonlinear control problems that can be formulated as a path integral and where the noise plays the role of temperature is considered.
Abstract: We address the role of noise and the issue of efficient computation in stochastic optimal control problems. We consider a class of nonlinear control problems that can be formulated as a path integral and where the noise plays the role of temperature. The path integral displays symmetry breaking and there exists a critical noise value that separates regimes where optimal control yields qualitatively different solutions. The path integral can be computed efficiently by Monte Carlo integration or by a Laplace approximation, and can therefore be used to solve high dimensional stochastic control problems.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the optimal control approach for the active control and drag optimization of incompressible viscous flow past circular cylinders using a proper orthogonal decomposition (POD) reduced-order model.
Abstract: In this paper we investigate the optimal control approach for the active control and drag optimization of incompressible viscous flow past circular cylinders. The control function is the time-dependent angular velocity of the rotating cylinder. The wake flow is solved in the laminar regime Re=200 with a finite-element method. Due to the CPU and memory costs associated with optimal control theory, a proper orthogonal decomposition (POD) reduced-order model (ROM) is used as the state equation. The key enablers of an accurate and robust POD ROM are the introduction of a time-dependent eddy viscosity estimated for each POD mode as the solution of an auxiliary optimization problem and the use of a snapshot ensemble for POD based on chirp-forced transients. Since the POD basis represents only velocities, we minimize a drag-related cost functional characteristic of the wake unsteadiness. The optimization problem is solved using Lagrange multipliers to enforce the constraints. A relative drag reduction of 25% is found when the Navier-Stokes equations are controlled using a harmonic control function deduced from the optimal solution determined with the POD ROM. Earlier numerical studies concerning mean drag reduction are confirmed: it is shown, in particular, that without a sufficient penalization of the control input, our approach is energetically inefficient. The main result is that cost-reduction factors of 100 and 760 are obtained for the CPU time and the memory, respectively. Finally, limits of the performance of our approach are discussed.
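The POD step at the heart of the reduced-order model above is, computationally, a thin SVD of the snapshot matrix. A minimal sketch on synthetic snapshot data (the cylinder-wake flow field itself is not reproduced here):

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """POD modes via thin SVD of the snapshot matrix (columns = snapshots).
    Returns the leading spatial modes and the fraction of snapshot
    energy (sum of squared singular values) they capture."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.sum(s[:n_modes]**2) / np.sum(s**2)
    return U[:, :n_modes], energy

# Synthetic snapshot ensemble: two dominant spatial structures plus noise
rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 200)
t = np.linspace(0, 10, 50)
S = (np.outer(np.sin(x), np.cos(t))
     + 0.3 * np.outer(np.sin(2 * x), np.sin(3 * t))
     + 0.01 * rng.standard_normal((200, 50)))

modes, energy = pod_basis(S, n_modes=2)
print(energy > 0.95)  # two modes capture nearly all of the snapshot energy
```

Galerkin projection of the governing equations onto the retained modes then yields the low-dimensional state equation used in place of the full Navier-Stokes solver, which is the source of the CPU and memory savings the abstract reports.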

Proceedings ArticleDOI
12 Dec 2005
TL;DR: This article extends existing concepts in linear model predictive control to a unified theoretical framework for distributed MPC with guaranteed nominal stability and performance properties, and addresses state estimation using a Kalman filtering framework.
Abstract: This article extends existing concepts in linear model predictive control (MPC) to a unified, theoretical framework for distributed MPC with guaranteed nominal stability and performance properties. Centralized MPC is largely viewed as impractical, inflexible and unsuitable for control of large, networked systems. Incorporation of the proposed distributed regulator provides a means of achieving optimal systemwide control performance (centralized) while essentially operating in a decentralized manner. The distributed regulators work iteratively and cooperatively towards achieving a common, systemwide control objective. An attractive attribute of the proposed MPC algorithm is that all intermediate iterates are feasible and the resulting distributed MPC controllers stabilize the nominal closed-loop system. These two features allow the practitioner to terminate the distributed control algorithm at the end of each sampling interval, even if convergence is not attained. Distributed MPC with output feedback is addressed using the well established Kalman filtering framework for state estimation.
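The feasible-at-every-iterate property described above can be illustrated on a toy coupled quadratic cost: each agent minimizes the *systemwide* objective over its own input block with the others fixed, then blends with the previous iterate by a convex combination, so terminating after any sweep still yields a usable input. The two-agent problem below is a hypothetical illustration, not the paper's algorithm in full (no dynamics, constraints, or horizon):

```python
import numpy as np

def cooperative_step(H, q, u, idx, w):
    """One cooperative iteration for the agent owning block `idx`:
    minimize the systemwide cost 0.5*u'Hu + q'u over that block with
    the other inputs held fixed, then take a convex combination with
    the previous iterate so every intermediate iterate remains usable."""
    rest = [j for j in range(len(u)) if j not in idx]
    u_new = u.copy()
    rhs = -(q[idx] + H[np.ix_(idx, rest)] @ u[rest])
    u_new[idx] = np.linalg.solve(H[np.ix_(idx, idx)], rhs)
    return w * u_new + (1 - w) * u

# Hypothetical two-agent coupled quadratic objective:
H = np.array([[2.0, 0.5],
              [0.5, 1.0]])
q = np.array([-1.0, 1.0])
u = np.zeros(2)
for sweep in range(100):
    u = cooperative_step(H, q, u, idx=[0], w=0.5)
    u = cooperative_step(H, q, u, idx=[1], w=0.5)

u_star = np.linalg.solve(H, -q)  # centralized optimum
print(np.allclose(u, u_star, atol=1e-8))
```

Because each agent optimizes the shared objective rather than a purely local one, the iterates approach the centralized (systemwide-optimal) solution, which is the sense in which the distributed scheme recovers centralized performance.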