
Showing papers on "Optimal control published in 1991"



Book
01 Jan 1991
TL;DR: A book on differential inclusions and optimal control.

538 citations


Book
01 Jan 1991
TL;DR: In this paper, the authors explore connections between adaptive control theory and practice, and treat the techniques of linear quadratic optimal control and estimation (Kalman filtering), recursive identification, linear systems theory and robust arguments.
Abstract: Exploring connections between adaptive control theory and practice, this book treats the techniques of linear quadratic optimal control and estimation (Kalman filtering), recursive identification, linear systems theory and robust arguments.

492 citations
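The linear quadratic machinery that the book covers can be sketched in a few lines: the discrete-time LQR gain is obtained by iterating the Riccati recursion, and Kalman filtering is the dual estimation problem. The double-integrator system and the weights Q, R below are illustrative, not from the book.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via Riccati value iteration."""
    P = Q.copy()
    for _ in range(iters):
        # K minimizes the one-step cost-to-go given the current P
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

A = np.array([[1.0, 1.0], [0.0, 1.0]])  # double integrator (illustrative)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
K, P = dlqr(A, B, Q, R)
# The closed loop A - B K should be stable (spectral radius < 1)
print(max(abs(np.linalg.eigvals(A - B @ K))) < 1.0)
```

The same recursion, run on the transposed system, yields the steady-state Kalman filter gain.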


Book
Han-Fu Chen1, Lei Guo1
01 Nov 1991
TL;DR: In this book, the authors develop stochastic adaptive estimation and control for ARMAX and related models, covering Kalman filtering, adaptive tracking, and the stability and consistency of the resulting algorithms.
Abstract (contents):
1 Probability Theory Preliminaries: 1.1 Random Variables; 1.2 Expectation; 1.3 Conditional Expectation; 1.4 Independence, Characteristic Functions; 1.5 Random Processes; 1.6 Stochastic Integral; 1.7 Stochastic Differential Equations.
2 Limit Theorems on Martingales: 2.1 Martingale Convergence Theorems; 2.2 Local Convergence Theorems; 2.3 Estimation for Weighted Sums of a Martingale Difference Sequence; 2.4 Estimation for Double Array Martingales.
3 Filtering and Control for Linear Systems: 3.1 Controllability and Observability; 3.2 Kalman Filtering for Systems with Random Coefficients; 3.3 Discrete-Time Riccati Equations; 3.4 Optimal Control for Quadratic Costs; 3.5 Optimal Tracking; 3.6 Model Reference Control; 3.7 Control for CARIMA Models.
4 Coefficient Estimation for ARMAX Models: 4.1 Estimation Algorithms; 4.2 Convergence of ELS Without the PE Condition; 4.3 Local Convergence of SG; 4.4 Convergence of SG Without the PE Condition; 4.5 Convergence Rate of SG; 4.6 Removing the SPR Condition By An Overparameterization Technique; 4.7 Removing the SPR Condition By Using Increasing Lag Least Squares.
5 Stochastic Adaptive Tracking: 5.1 SG-Based Adaptive Tracker With d = 1; 5.2 SG-Based Adaptive Tracker With d ≥ 1; 5.3 Stability and Optimality of Astrom-Wittenmark Self-Tuning Tracker; 5.4 Stability and Optimality of ELS-Based Adaptive Trackers; 5.5 Model Reference Adaptive Control.
6 Coefficient Estimation in Adaptive Control Systems: 6.1 Necessity of Excitation for Consistency of Estimates; 6.2 Reference Signal With Decaying Richness; 6.3 Diminishingly Excited Control.
7 Order Estimation: 7.1 Order Estimation by Use of a Priori Information; 7.2 Order Estimation by not Using Upper Bounds for Orders; 7.3 Time-Delay Estimation; 7.4 Connections of CIC and BIC.
8 Optimal Adaptive Control with Consistent Parameter Estimate: 8.1 Simultaneously Gaining Optimality and Consistency in Tracking Systems; 8.2 Adaptive Control for Quadratic Cost; 8.3 Connection Between Adaptive Controls for Tracking and Quadratic Cost; 8.4 Model Reference Adaptive Control With Consistent Estimate; 8.5 Adaptive Control With Unknown Orders, Time-Delay and Coefficients.
9 ARX(∞) Model Approximation: 9.1 Statement of Problem; 9.2 Transfer Function Approximation; 9.3 Estimation of Noise Process.
10 Estimation for Time-Varying Parameters: 10.1 Stability of Random Time-Varying Equations; 10.2 Conditional Richness Condition; 10.3 Analysis of Kalman Filter Based Algorithms; 10.4 Analysis of LMS-Like Algorithms.
11 Adaptive Control of Time-Varying Stochastic Systems: 11.1 Preliminary Results; 11.2 Systems with Random Parameters; 11.3 Systems with Deterministic Parameters.
12 Continuous-Time Stochastic Systems: 12.1 The Model; 12.2 Parameter Estimation; 12.3 Adaptive Control.
References.

473 citations
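A minimal sketch of the kind of recursive coefficient estimation the book analyzes: plain recursive least squares on a first-order ARX model. This is a simplification of the ELS algorithm treated in Chapter 4 (ELS additionally regresses on past residuals); the model coefficients and noise level are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate a, b in the ARX model  y[t] = a*y[t-1] + b*u[t-1] + noise
a_true, b_true = 0.8, 0.5
theta = np.zeros(2)          # parameter estimate [a, b]
P = 1000.0 * np.eye(2)       # covariance of the estimate
y_prev, u_prev = 0.0, 0.0
for t in range(2000):
    u = rng.standard_normal()            # persistently exciting input
    y = a_true * y_prev + b_true * u_prev + 0.1 * rng.standard_normal()
    phi = np.array([y_prev, u_prev])     # regressor
    k = P @ phi / (1.0 + phi @ P @ phi)  # RLS gain
    theta = theta + k * (y - phi @ theta)
    P = P - np.outer(k, phi @ P)
    y_prev, u_prev = y, u
print(theta)  # close to [0.8, 0.5]
```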


Journal ArticleDOI
TL;DR: In this article, the standard H∞ optimal control problem using state feedback for smooth nonlinear control systems was studied. The main theorem obtained roughly states that the L2-induced norm (from disturbances to inputs and outputs) can be made smaller than a constant γ > 0 by state feedback if the H∞ norm of the system linearized at the equilibrium can be made smaller than γ by linear state feedback.

450 citations



Journal ArticleDOI
TL;DR: In this article, a lifting technique was developed for periodic linear systems and applied to the H∞ and H2 sampled-data control problems.

412 citations


Proceedings ArticleDOI
09 Apr 1991
TL;DR: A general strategy for solving the motion planning problem for real analytic, controllable systems without drift is proposed, and an iterative algorithm is derived that converges very quickly to a solution.
Abstract: A general strategy for solving the motion planning problem for real analytic, controllable systems without drift is proposed. The procedure starts by computing a control that steers the given initial point to the desired target point for an extended system, in which a number of Lie brackets of the system vector fields are added. Using formal calculations with a product expansion relative to a P. Hall basis, another control is produced that achieves the desired result on the formal level. This provides an exact solution of the original problem if the given system is nilpotent. For a general system, an iterative algorithm is derived that converges very quickly to a solution. For nonnilpotent systems which are feedback nilpotentizable, the algorithm, in cascade with a precompensator, produces an exact solution. Results of simulations which illustrate the effectiveness of the procedure are presented.

284 citations
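The Lie brackets that the extended system adds can be computed numerically. A small sketch for the unicycle, a standard driftless controllable example (not the paper's specific system): the bracket of the drive and steer fields produces the "sideways" direction that no single input provides.

```python
import numpy as np

def lie_bracket(f, g, x, eps=1e-6):
    """[f, g](x) = Dg(x) f(x) - Df(x) g(x), Jacobians by central differences."""
    n = len(x)
    Df = np.zeros((n, n))
    Dg = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        Df[:, j] = (f(x + e) - f(x - e)) / (2 * eps)
        Dg[:, j] = (g(x + e) - g(x - e)) / (2 * eps)
    return Dg @ f(x) - Df @ g(x)

# Unicycle: state (px, py, theta), drive field g1 and steer field g2.
g1 = lambda x: np.array([np.cos(x[2]), np.sin(x[2]), 0.0])
g2 = lambda x: np.array([0.0, 0.0, 1.0])

x0 = np.zeros(3)
print(lie_bracket(g1, g2, x0))  # ~ [0, -1, 0]: the "sideways" direction
```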


Book
01 Nov 1991
TL;DR: In this paper, the authors present a systematic account of the development of optimal control problems defined on an unbounded time interval - beginning primarily with the work of the early seventies to the present.
Abstract: This book presents a systematic account of the development of optimal control problems defined on an unbounded time interval - beginning primarily with the work of the early seventies to the present. The first five to six chapters provide an introduction to infinite horizon control theory and require only a minimal knowledge of mathematical control theory. The remainder of the book considers extensions of the previous chapters to a variety of control systems, including distributed parameter systems, stochastic control systems and hereditary systems. Throughout the book it is possible to distinguish three categories of research: the extension of the classical necessary conditions to various weaker types of optimality (e.g., overtaking optimality); the discussion of various sufficient conditions and verification theorems for the various types of optimality; and the discussion of existence theorems for the various types of optimality. The common link between these categories is the "turnpike property" and the notion of "reduction to finite costs". Once these properties are established for a given control system, it is possible to begin investigating the issues described in the above three categories. This monograph on economics, mathematics, systems engineering and operations research is intended for researchers.

268 citations


BookDOI
01 Jan 1991

263 citations


Journal ArticleDOI
TL;DR: In this paper, all solutions to the four block general distance problem which arises in $H^\infty $ optimal control are characterized and a descriptor representation of all solutions is derived.
Abstract: All solutions to the four block general distance problem which arises in $H^\infty $ optimal control are characterized. The procedure is to embed the original problem in an all-pass matrix which is constructed. It is then shown that part of this all-pass matrix acts as a generator of all solutions. Special attention is given to the characterization of all optimal solutions by invoking a new descriptor characterization of all-pass transfer functions. As an application, necessary and sufficient conditions are found for the existence of an $H^\infty $ optimal controller. Following that, a descriptor representation of all solutions is derived.

Proceedings ArticleDOI
11 Dec 1991
TL;DR: In this article, the authors consider four control-related problems, all of which involve reformulation into linear matrix inequalities (LMIs), and propose a partial theory for optimal performance in systems which depend on several independent variables.
Abstract: The authors consider four control-related problems, all of which involve reformulation into linear matrix inequalities (LMIs). The problems are: structured singular value (μ) upper bound synthesis for constant matrix problems; robust-state-feedback problem with quadratic stability criteria for uncertain systems; optimal, constant, block diagonal, similarity scaling for the full information and state feedback H∞ problem; and a partial theory for optimal performance in systems which depend on several independent variables.

Proceedings ArticleDOI
26 Jun 1991
TL;DR: An emerging deeper understanding of neural network reinforcement learning methods, obtained by viewing them as a synthesis of dynamic programming and stochastic approximation methods, is summarized.
Abstract: Control problems can be divided into two classes: 1) regulation and tracking problems, in which the objective is to follow a reference trajectory, and 2) optimal control problems, in which the objective is to extremize a functional of the controlled system's behavior that is not necessarily defined in terms of a reference trajectory. Adaptive methods for problems of the first kind are well known, and include self-tuning regulators and model-reference methods, whereas adaptive methods for optimal-control problems have received relatively little attention. Moreover, the adaptive optimal-control methods that have been studied are almost all indirect methods, in which controls are recomputed from an estimated system model at each step. This computation is inherently complex, making adaptive methods in which the optimal controls are estimated directly more attractive. Here we present reinforcement learning methods as a computationally simple, direct approach to the adaptive optimal control of nonlinear systems.
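The direct approach described above can be illustrated with tabular Q-learning, a reinforcement learning method that estimates optimal action values without a system model. The 4-state chain MDP below is a made-up toy problem, not from the paper; the update line is exactly the "dynamic programming target, stochastic approximation step" synthesis.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions, gamma, alpha = 4, 2, 0.9, 0.5

def step(s, a):
    """Action 1 moves right, action 0 moves left; reward 1 on reaching state 3."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, 1.0 if s2 == n_states - 1 else 0.0

Q = np.zeros((n_states, n_actions))
s = 0
for _ in range(5000):
    a = rng.integers(n_actions)          # explore uniformly
    s2, r = step(s, a)
    # stochastic-approximation update toward the Bellman (DP) target
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2 if s2 != n_states - 1 else 0  # restart at the goal
print(Q.argmax(axis=1))  # greedy policy: action 1 (right) in states 0-2
```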

Journal ArticleDOI
TL;DR: The actuator location selection problem is cast in the framework of a zero-one optimization problem and a genetic algorithmic approach is developed that involves three basic operations: reproduction, crossover, and mutation.
Abstract: The actuator location selection problem is cast in the framework of a zero-one optimization problem. A genetic algorithmic approach is developed. To obtain successive generations that yield the solution corresponding to the maximum fitness value, this approach involves three basic operations: reproduction, crossover, and mutation.

Journal ArticleDOI
TL;DR: This paper gives necessary and sufficient conditions for the existence of a controller that also satisfies a prescribed H∞-norm bound on some other closed-loop transfer matrix and gives state-space formulae for computing the solutions.

Journal ArticleDOI
TL;DR: In this paper, the standard problem of H∞ control theory for finite-dimensional linear time-varying continuous-time plants is considered, where the problem is: given a real number γ > 0, f...
Abstract: In this paper the standard problem of $H^\infty $ control theory for finite-dimensional linear time-varying continuous-time plants is considered. The problem is: given a real number $\gamma > 0$, f...

Journal ArticleDOI
TL;DR: In this article, the problem of forcing a non-degenerate diffusion process to a given final configuration is considered using the logarithmic transformation approach developed by Fleming, and it is shown that the perturbation of the drift suggested by Jamison solves an optimal stochastic control problem.
Abstract: The problem of forcing a nondegenerate diffusion process to a given final configuration is considered. Using the logarithmic transformation approach developed by Fleming, it is shown that the perturbation of the drift suggested by Jamison solves an optimal stochastic control problem. Such perturbation happens to have minimum energy between all controls that “bring” the diffusion to the desired final distribution. A special property of the change of measure on the path-space that corresponds to the aforesaid perturbation of the drift is also shown.

Journal ArticleDOI
TL;DR: The augmented adaptive flight control system can learn online and accommodate drastic changes in the aircraft dynamics due to surface or hardware failure, and simulations demonstrate its ability to accommodate control failures while maintaining good performance.
Abstract: Surface and hardware failure affect the flight control system of the F-16 fighter aircraft. In the absence of failures and unpredictable changes, the controller, based on gain scheduling, performs very well and exhibits a good degree of robustness, even for high angles of attack. In order to accommodate for possible failure and maintain good performance characteristics, the control system is augmented with a hybrid adaptive linear quadratic control scheme. The augmented adaptive flight control system has the online capability for learning and accommodating to drastic changes in the aircraft dynamics due to surface or hardware failure. The proposed flight control system has been tested on the nonlinear model of the F-16 aircraft, and the simulation results demonstrate its ability to accommodate control failures and maintain good performance.

Journal ArticleDOI
TL;DR: In this article, a control problem that incorporates uncertainty in initial conditions is formulated by defining a worst-case performance measure, and necessary and sufficient conditions are derived for the existence of controllers that yield a closed-loop system for which the above-mentioned performance measure is less than a prespecified value.
Abstract: In $H_\infty $ (or uniformly optimal) control problems, it is usually assumed that the system initial conditions are zero. In this paper, an $H_\infty $-like control problem that incorporates uncertainty in initial conditions is formulated. This is done by defining a worst-case performance measure. Both finite and infinite horizon problems are considered. Necessary and sufficient conditions are derived for the existence of controllers that yield a closed-loop system for which the above-mentioned performance measure is less than a prespecified value. State-space formulae for the controllers are also presented.

Journal ArticleDOI
TL;DR: The H2-optimal control of continuous-time linear time-invariant systems by sampled-data controllers is discussed, and the H2 sampled-data problem is shown to be equivalent to a certain discrete-time H2 problem.
Abstract: The H2-optimal control of continuous-time linear time-invariant systems by sampled-data controllers is discussed. Two different solutions, state space and operator theoretic, are given. In both cases, the H2 sampled-data problem is shown to be equivalent to a certain discrete-time H2 problem. Other topics discussed include input-output stability of sampled-data systems, performance recovery in digital implementation of analog controllers, and sampled-data control of systems with the possibility of multiple time delays.
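The first ingredient in relating a sampled-data problem to a discrete-time one is the exact zero-order-hold discretization of the plant. The paper's H2 equivalence involves considerably more than this, but the ZOH map can be sketched with the standard augmented-matrix exponential trick; the double-integrator plant and sampling period are illustrative.

```python
import numpy as np

def expm_taylor(M, terms=30):
    """Matrix exponential by truncated Taylor series (fine for small ||M||)."""
    E, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

def c2d_zoh(A, B, T):
    """Exact ZOH discretization: exponentiate [[A, B], [0, 0]] * T."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    E = expm_taylor(M * T)
    return E[:n, :n], E[:n, n:]

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (illustrative)
B = np.array([[0.0], [1.0]])
Ad, Bd = c2d_zoh(A, B, T=0.1)
print(Ad)  # [[1, 0.1], [0, 1]]
print(Bd)  # [[0.005], [0.1]]
```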

Journal ArticleDOI
TL;DR: In this paper, a stochastic control problem where the state variable follows a Brownian motion is considered and the flow reward is a function of the state, which can be regulated with a lump-sum and linear cost of adjustment.

Journal ArticleDOI
TL;DR: It is shown that the H2 optimal control problem for a sampled-data system is equivalent to a standard H2 optimal control problem for a related discrete-time system.

Journal ArticleDOI
TL;DR: In this paper, the authors present, in simple form, some nontrivial everyday problems solved by the finite-difference method, as treated in courses at Temple University.
Abstract: Finite-difference solution of electrodynamic problems. The introduction of the finite-difference method in student courses has so far been restricted to simple problems that had virtually nothing to do with the world in which students live. In this article we present, in simple form, some nontrivial problems from everyday life, as they were treated in courses at Temple University.
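A simple illustration in the spirit of the article: a finite-difference (Jacobi) solution of Laplace's equation for the electrostatic potential on a square grid. The boundary data below are illustrative and chosen so the exact solution is known (V(x, y) = x), which makes the sketch easy to check.

```python
import numpy as np

n = 21
x = np.linspace(0.0, 1.0, n)
V = np.zeros((n, n))
V[0, :], V[-1, :] = x, x        # top/bottom boundary: V = x
V[:, 0], V[:, -1] = 0.0, 1.0    # left/right boundary: V = 0 and V = 1
for _ in range(5000):           # Jacobi sweeps: average the 4 neighbours
    V[1:-1, 1:-1] = 0.25 * (V[:-2, 1:-1] + V[2:, 1:-1]
                            + V[1:-1, :-2] + V[1:-1, 2:])
print(V[n // 2, n // 2])  # ~0.5: the exact solution here is V(x, y) = x
```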

Proceedings ArticleDOI
09 Apr 1991
TL;DR: The authors propose the use of sum-of-squared differences optical flow for the computation of the vector of discrete displacements at each instant of time for real-time visual tracking of arbitrary 3-D objects traveling at unknown velocities in a 2-D space.
Abstract: Algorithms for robotic real-time visual tracking of arbitrary 3-D objects traveling at unknown velocities in a 2-D space are presented. The problem of visual tracking is formulated as a problem of combining control with computer vision. A mathematical formulation that is general enough to be extended to the problem of tracking 3-D objects in 3-D space is presented. The authors propose the use of sum-of-squared differences optical flow for the computation of the vector of discrete displacements at each instant of time. These displacements can be fed either directly to a PI controller, a pole assignment controller, or a discrete steady-state Kalman filter. In the latter case, the Kalman filter calculates the estimated values of the system's states and exogenous disturbances, and a discrete LQG controller computes the desired motion of the robotic system. The outputs of the controllers are sent to a Cartesian robotic controller that drives the robot.
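A hedged sketch of the sum-of-squared-differences (SSD) displacement computation described above: find the integer shift of a template between two frames by minimizing the SSD over a search window. The synthetic frames and window sizes below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

frame0 = rng.random((64, 64))
dy, dx = 3, -2                            # true displacement of the scene
frame1 = np.roll(np.roll(frame0, dy, axis=0), dx, axis=1)

def ssd_displacement(f0, f1, top, left, size=16, search=5):
    """Return the (dy, dx) in [-search, search]^2 minimizing the SSD."""
    patch = f0[top:top + size, left:left + size]
    best, best_d = np.inf, (0, 0)
    for ddy in range(-search, search + 1):
        for ddx in range(-search, search + 1):
            cand = f1[top + ddy:top + ddy + size,
                      left + ddx:left + ddx + size]
            err = np.sum((cand - patch) ** 2)
            if err < best:
                best, best_d = err, (ddy, ddx)
    return best_d

print(ssd_displacement(frame0, frame1, top=24, left=24))  # (3, -2)
```

In the paper's scheme, displacements like this would be fed to a PI, pole-assignment, or Kalman-filter/LQG loop.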

Journal ArticleDOI
TL;DR: Results from this study indicate that optimization of control actuators and error sensors provides a method for realizing adaptive ("smart") structures for active structural acoustic control (ASAC), rivaling in importance the performance increases gained when acoustic control is achieved with microphone error sensors and multiple control actuators.
Abstract: Optimization of the location of a rectangular piezoelectric actuator and both the size and location of a rectangular surface strain error sensor constructed from polyvinylidene fluoride (PVDF) for active structural acoustic control (ASAC) is studied in this work. An algorithm is proposed for choosing the optimal actuator/sensor configuration for controlling sound from a baffled simply supported plate excited harmonically, and the resulting acoustic response is predicted from analytical models. These results are compared to those measured in the lab on a test rig duplicating the appropriate boundary conditions and situated in an anechoic chamber. Results from a single optimally located control actuator are compared to those from control with a nonoptimally positioned actuator as well as multiple control actuators. In addition, either microphones are used to provide error information in the test cases or a single optimally located and dimensioned PVDF error sensor is implemented as the cost function. Results from this study indicate that optimization of control actuators and error sensors provides a method for realizing adaptive structures for active structural acoustic control (ASAC), rivaling in importance the performance increases gained when acoustic control is achieved with microphone error sensors and multiple control actuators.

Journal ArticleDOI
TL;DR: In this article, several characterizations of optimal trajectories for the classical Mayer problem in optimal control are provided, and the problem of optimal design is addressed, obtaining sufficient conditions for optimality.
Abstract: Several characterizations of optimal trajectories for the classical Mayer problem in optimal control are provided. For this purpose the regularity of directional derivatives of the value function is studied: for instance, it is shown that for smooth control systems the value function V is continuously differentiable along an optimal trajectory $x:[t_0 ,1] \to {\bf R}^n $ provided V is differentiable at the initial point $(t_0 ,x(t_0 ))$.Then the upper semicontinuity of the optimal feedback map is deduced. The problem of optimal design is addressed, obtaining sufficient conditions for optimality. Finally, it is shown that the optimal control problem may be reduced to a viability one.

Journal ArticleDOI
TL;DR: In this article, it was shown that the necessary conditions for the optimal control problem of the abort landing of a passenger aircraft in the presence of windshear result in a multipoint boundary-value problem.
Abstract: In Part 1 of the paper (Ref. 2), we have shown that the necessary conditions for the optimal control problem of the abort landing of a passenger aircraft in the presence of windshear result in a multipoint boundary-value problem. This boundary-value problem is especially well suited for numerical treatment by the multiple shooting method. Since this method is basically a Newton iteration, initial guesses of all variables are needed and assumptions about the switching structure have to be made. These are big obstacles, but both can be overcome by a so-called homotopy strategy where the problem is imbedded into a one-parameter family of subproblems in such a way that (at least) the first problem is simple to solve. The solution data to the first problem may serve as an initial guess for the next problem, thus resulting in a whole chain of problems. This process is to be continued until the objective problem is reached. Techniques are presented here on how to handle the various changes of the switching structure during the homotopy run. The windshear problem, of great interest for safety in aviation, also serves as an excellent benchmark problem: Nearly all features that can arise in optimal control appear when solving this problem. For example, the candidate for an optimal trajectory of the minimax optimal control problem shows subarcs with both bang-bang and singular control functions, boundary arcs and touch points of two state constraints, one being of first order and the other being of third order, etc. Therefore, the results of this paper may also serve as some sort of user's guide for the solution of complicated real-life optimal control problems by multiple shooting. The candidate found for an optimal trajectory is discussed and compared with an approximate solution already known (Refs. 3–4). Besides the known necessary conditions, additional sharp necessary conditions based on sign conditions of certain multipliers are also checked. This is not possible when using direct methods.

Journal ArticleDOI
TL;DR: A new method is described for the determination of optimal spacecraft trajectories in an inverse-square field using finite, fixed thrust, which employs a recently developed direct optimization technique that uses a piecewise polynomial representation for the state and controls and collocation, thus converting the optimal control problem into a nonlinear programming problem, which is solved numerically.
Abstract: A new method is described for the determination of optimal spacecraft trajectories in an inverse-square field using finite, fixed thrust. The method employs a recently developed direct optimization technique that uses a piecewise polynomial representation for the state and controls and collocation, thus converting the optimal control problem into a nonlinear programming problem, which is solved numerically. This technique has been modified to provide efficient handling of those portions of the trajectory that can be determined analytically, i.e., the coast arcs. Among the problems that have been solved using this method are optimal rendezvous and transfer (including multirevolution cases) and optimal multiburn orbit insertion from hyperbolic approach.
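The paper's transcription uses piecewise polynomials and collocation; as a much-simplified illustration of the same "optimal control problem → finite-dimensional program" idea, the sketch below discretizes a double integrator on an Euler grid, treats the controls as decision variables, and solves the resulting program. Because the dynamics here are linear and the cost quadratic, the "NLP" collapses to a minimum-norm least-squares problem; all numbers are illustrative.

```python
import numpy as np

N, T = 50, 0.02
Ad = np.array([[1.0, T], [0.0, 1.0]])    # discretized double integrator
Bd = np.array([[0.0], [T]])
x0 = np.array([0.0, 0.0])
xf = np.array([1.0, 0.0])                # rest-to-rest transfer

# Stack the linear map from the control sequence to the final state.
G = np.hstack([np.linalg.matrix_power(Ad, N - 1 - k) @ Bd for k in range(N)])
# Minimum-energy control sequence hitting xf (lstsq gives the min-norm solution).
u = np.linalg.lstsq(G, xf - np.linalg.matrix_power(Ad, N) @ x0, rcond=None)[0]

# Simulate to verify the transfer.
x = x0.copy()
for uk in u:
    x = Ad @ x + (Bd * uk).ravel()
print(x)  # ~ [1, 0]
```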

Journal ArticleDOI
TL;DR: In this paper, a solution to the problems of H∞-optimal linear state regulation and filtering is derived, based on a transfer function approach which applies standard spectral factorization.
Abstract: A solution is derived to the problems of H∞-optimal linear state regulation and filtering. The solution method for both problems is based on a transfer function approach which applies standard spectral factorization. Return difference relations are given which are extensions of the relations associated with the linear quadratic problem.

Journal ArticleDOI
TL;DR: In this article, the risk-sensitive maximum principle for optimal stochastic control derived by the author in an earlier work (Systems & Control Letters, vol. 15, 1990) is restated.
Abstract: The risk-sensitive maximum principle for optimal stochastic control derived by the author in an earlier work (System Control Letters, vol.15, 1990) is restated. This is an immediate generalization of the classic Pontryagin principle, to which it reduces in the deterministic case, and is expressed immediately in terms of observables. It is derived on the assumption that the criterion function is the exponential of an additive cost function, and is exact under linear-quadratic Gaussian assumptions, but is otherwise valid as a large deviation approximation. The principle is extended to the case of imperfect state observation after preliminary establishment of a certainty-equivalence principle. The derivation yields as byproduct a large-deviation version of the updating equation for nonlinear filtering. The development is heuristic. It is believed that the mathematical arguments given are the essential ones, and provide a self-contained treatment at this level. >