
Showing papers on "Optimal control published in 1982"


Journal ArticleDOI
TL;DR: In this article, the authors present a review of the methods of Kalman filtering in attitude estimation and their development over the last two decades, focusing on three-axis gyros and attitude sensors.
Abstract: This report reviews the methods of Kalman filtering in attitude estimation and their development over the last two decades. This review is not intended to be complete but is limited to algorithms suitable for spacecraft equipped with three-axis gyros as well as attitude sensors. These are the systems to which we feel that Kalman filtering is most applicable. The Kalman filter uses a dynamical model for the time development of the system and a model of the sensor measurements to obtain the most accurate estimate possible of the system state using a linear estimator based on present and past measurements. It is thus ideally suited to both ground-based and on-board attitude determination. However, the applicability of the Kalman filtering technique rests on the availability of an accurate dynamical model. The dynamic equations for the spacecraft attitude pose many difficulties in the filter modeling. In particular, the external torques and the distribution of momentum internally due to the use of rotating or rastering instruments lead to significant uncertainties in the modeling. For autonomous spacecraft, the use of inertial reference units as a model replacement permits the circumvention of these problems. In this representation the angular velocity of the spacecraft is obtained from the gyro data. The kinematic equations are used to obtain the attitude state, and this is augmented by means of additional state-vector components for the gyro biases. Thus, gyro data are not treated as observations, and the gyro noise appears as state noise rather than as observation noise. It is theoretically possible that a spacecraft is three-axis stabilized with such rigidity that the time development of the system can be described accurately without gyro information, or that it is one-axis stabilized so that only a single gyro is needed to provide information on the time history of the system.
The modification of the algorithms presented here in order to apply to those cases is slight. However, this is of little practical importance because a control system capable of such
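The gyro-as-model-replacement formulation described above can be sketched for a single axis (a hypothetical toy filter with invented numbers, not the paper's spacecraft model): the state is the attitude angle plus the gyro bias, the gyro reading drives the propagation so gyro noise enters as state noise, and an attitude sensor provides the measurement update.

```python
import numpy as np

# Hypothetical single-axis sketch of the gyro-as-model-replacement idea
# (invented numbers): state x = [attitude angle theta, gyro bias b]; the
# gyro reading drives the propagation, so gyro noise enters as state noise,
# and an attitude sensor updates theta.
dt = 0.1
F = np.array([[1.0, -dt],    # theta_{k+1} = theta_k + dt * (gyro_k - b_k)
              [0.0, 1.0]])   # bias modeled as a (slow) random walk
G = np.array([dt, 0.0])      # gyro reading enters as a known input
H = np.array([[1.0, 0.0]])   # attitude sensor observes theta directly
Q = np.diag([1e-4, 1e-6])    # state noise: gyro noise + bias drift
R = np.array([[1e-2]])       # attitude-sensor noise variance

rng = np.random.default_rng(0)
true_theta, true_bias, omega = 0.0, 0.05, 0.2
x, P = np.zeros(2), np.eye(2)

for _ in range(1000):
    gyro = omega + true_bias + rng.normal(0.0, 0.01)
    true_theta += dt * omega
    # propagate: kinematics driven by the gyro, no torque model needed
    x = F @ x + G * gyro
    P = F @ P @ F.T + Q
    # update with the attitude-sensor measurement
    z = true_theta + rng.normal(0.0, 0.1)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("bias estimation error:", abs(x[1] - true_bias))
```

With the bias carried in the state vector, the filter absorbs the unknown gyro drift instead of requiring a torque model, which is the point of the model-replacement scheme.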

1,266 citations


Journal ArticleDOI
TL;DR: A dynamical model of container cranes is derived by using Lagrange's equation and a new algorithm which is employed for computing the optimal control is explained in detail.

318 citations



Journal ArticleDOI
TL;DR: In this paper, the authors apply the method of optimal control theory to determine the optimal piston trajectory for successively less idealized models of the Otto cycle, and the resulting increases in efficiency are of the order of 10%.
Abstract: We apply the method of optimal control theory to determine the optimal piston trajectory for successively less idealized models of the Otto cycle. The optimal path has significantly smaller losses from friction and heat leaks than the path with conventional piston motion and the same loss parameters. The resulting increases in efficiency are of the order of 10%.

210 citations


Journal ArticleDOI
01 Jan 1982
TL;DR: In this article, a stabilizing control design for general linear time varying systems is presented and analyzed, where the control is a state-feedback law with gains determined by a standard method employed in optimal regulator problems.
Abstract: A stabilizing control design for general linear time varying systems is presented and analyzed. The control is a state-feedback law with gains determined by a standard method employed in optimal regulator problems. The considered cost function is, however, dynamically redefined over a fixed depth horizon. The method is shown to yield a stable closed loop system and computationally efficient recursions for the feedback gain are provided.
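A minimal sketch of this receding-horizon idea, with invented dynamics and weights: at every step a backward Riccati pass over a fixed-depth window redefines the cost, and only the first gain of the window is applied.

```python
import numpy as np

# Illustrative sketch (invented dynamics and weights, not the paper's exact
# recursions): the cost is dynamically redefined over a moving fixed-depth
# window, and the state-feedback gain comes from a finite-horizon Riccati pass.
def receding_horizon_gain(A_seq, B_seq, Q, R, Qf):
    """Backward finite-horizon Riccati pass; return the gain at the window start."""
    P, K = Qf, None
    for A, B in zip(reversed(A_seq), reversed(B_seq)):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

N = 10                                     # fixed horizon depth
Q = R = Qf = np.eye(1)
x = np.array([1.0])                        # scalar time-varying unstable plant
for k in range(50):
    A_seq = [np.array([[1.2 + 0.1 * np.sin(k + j)]]) for j in range(N)]
    B_seq = [np.eye(1)] * N
    K = receding_horizon_gain(A_seq, B_seq, Q, R, Qf)
    x = A_seq[0] @ x - B_seq[0] @ (K @ x)  # closed-loop step

print("final state magnitude:", abs(float(x[0])))
```

Even though the open-loop plant is unstable at every instant, the recomputed finite-horizon gains drive the state to zero, illustrating the stability result the abstract claims.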

181 citations


Journal ArticleDOI
TL;DR: In this article, the optimal power flow problem is formulated based upon the decoupling principle well recognized in bulk power transmission loadflow, which is exploited by decomposing the OPF formulation into a P-Problem (P-θ real power model) and a Q-Problem (Q-V reactive power model); this simplifies the formulation, improves computation time, and permits a certain flexibility in the types of calculations desired.
Abstract: The optimal power flow problem is formulated based upon the decoupling principle well recognized in bulk power transmission loadflow. This principle is exploited by decomposing the OPF formulation into a P-Problem (P-θ real power model) and a Q-Problem (Q-V reactive power model), which simplifies the formulation, improves computation time, and permits a certain flexibility in the types of calculations desired (i.e., P-Problem, Q-Problem, or both).
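The decoupling can be illustrated in miniature with the linearized P-half alone, i.e. a DC power flow in which real power injections determine bus angles through B'θ = P (a hypothetical 3-bus network with invented susceptances, not the paper's OPF formulation):

```python
import numpy as np

# Miniature of the P-theta half of the decoupling: a DC power flow on a
# hypothetical 3-bus network (invented susceptances), with bus 0 as slack.
lines = [(0, 1, 10.0), (1, 2, 10.0), (0, 2, 5.0)]   # (from, to, susceptance)
n = 3
B = np.zeros((n, n))
for i, j, b in lines:
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

P = np.array([0.0, 0.5, -0.5])                  # injections; bus 0 is slack
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])   # the P-problem: angles only
flows = {(i, j): b * (theta[i] - theta[j]) for i, j, b in lines}
print(flows)
```

The Q-V half would be a second, analogous linear problem in voltage magnitudes; being able to solve either half alone, or both, is the kind of flexibility the abstract describes.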

178 citations


Journal ArticleDOI
01 Dec 1982
TL;DR: The optimal structure of the motion generator for the simulator, also called a "washout filter," is derived. This stands in contrast to existing design schemes for motion generators, which generally assume a certain fixed structure for the motion generator and concentrate on optimizing its parameters.
Abstract: An abstract simulator design problem is formulated as follows: we are given a dynamic system Sa, called the actual system, and another dynamic system Ss, called a simulator for Sa. Furthermore, we are given an input signal which drives the actual system Sa. The problem is to find an operator, properly constrained, which generates the input to the simulator Ss on the basis of the input to Sa, so that the discrepancy between the outputs of Sa and Ss is as small as possible. This abstract simulator design problem is brought to the form of an optimal control problem and then solved for the linear-quadratic-Gaussian special case. Next, the solution of the abstract simulator problem is applied to the design of motion generators for flight simulators. A fairly elaborate mathematical model of the vestibular organs is used. The optimization criterion that is selected is the mean-square difference between the physiological outputs of the vestibular organs for the pilot in the airplane and for the pilot in the simulator. The dynamical equations are linearized, and the input signal is modeled as a random process with a rational power spectral density. Subject to the above assumptions, the optimal structure of the motion generator for the simulator, also called a "washout filter," is derived. This method stands in contrast to existing design schemes for motion generators, which generally assume a certain fixed structure for the motion generator and concentrate on optimizing its parameters.

171 citations


Journal ArticleDOI
TL;DR: In this article, an iterative algorithm for the inversion of the one-dimensional (1-D) wave equation, together with a stabilizing constraint on the sums of the jumps of the desired impedance, is proposed.
Abstract: The well‐known instability of Kunetz’s (1963) inversion algorithm can be explained by the progressive manner in which the calculations are done (descending from the surface) and by the fact that completely different impedances can yield indistinguishable synthetic seismograms. Those difficulties can be overcome by using an iterative algorithm for the inversion of the one‐dimensional (1-D) wave equation, together with a stabilizing constraint on the sums of the jumps of the desired impedance. For computational efficiency, the synthetic seismogram is computed by the method of characteristics, and the gradient of the error criterion is computed by optimal control techniques (adjoint state equation). The numerical results on simulated data confirm the expected stability of the algorithm in the presence of measurement noise (tests include noise levels of 50 percent). The inversion of two field sections demonstrates the practical feasibility of the method and the importance of taking into account all internal a...

156 citations


Journal ArticleDOI
TL;DR: In this article, a method of estimating the rate of convergence of approximation to convex, control-constrained optimal control problems is proposed, where conditions of optimality involving projections on the set of admissible control are exploited.
Abstract: A method of estimating the rate of convergence of approximation to convex, control-constrained optimal-control problems is proposed. In the method, conditions of optimality involving projections on the set of admissible control are exploited. General results are illustrated by examples of Galerkin-type approximations to optimal-control problems for parabolic systems.

154 citations


Journal ArticleDOI
TL;DR: The infinite horizon optimal control problem is considered in the general case of linear discrete time systems and quadratic criteria, both with stochastic parameters which are independent with respect to time.

143 citations


Journal ArticleDOI
F. Moss, Adrian Segall
TL;DR: The conceptual form of an algorithm is presented for finding a feedback solution to the optimal control problem when the inputs are assumed to be constant in time and the algorithm employs a combination of necessary conditions, dynamic programming, and linear programming to construct a set of convex polyhedral cones which cover the admissible state space with optimal controls.
Abstract: This paper explores the application of optimal control theory to the problem of dynamic routing in networks. The approach derives from a continuous state space model for dynamic routing and an associated linear optimal control problem with linear state and control variable inequality constraints. The conceptual form of an algorithm is presented for finding a feedback solution to the optimal control problem when the inputs are assumed to be constant in time. The algorithm employs a combination of necessary conditions, dynamic programming, and linear programming to construct a set of convex polyhedral cones which cover the admissible state space with optimal controls. An implementable form of the algorithm, along with a simple example, is presented for a special class of single destination networks.


Journal ArticleDOI
TL;DR: In this article, G-convergence and Γ-conversgence were applied to the study of the asymptotic limits of optimal control problems, and general conditions for convergence of the optimal control problem and the control problem were derived.
Abstract: In this paper, we give some applications ofG-convergence and Γ-convergence to the study of the asymptotic limits of optimal control problems. More precisely, given a sequence (Ph) of optimal control problems and a control problem (P∞), we determine some general conditions, involvingG-convergence and Γ-convergence, under which the sequence of the optimal pairs of the problems (Ph) converges to the optimal pair of problem (P∞).

Journal ArticleDOI
TL;DR: In this article, an advanced power flow methodology for optimally dispatching all active and reactive power in a power system is presented, where techniques are presented to improve the solution algorithm, the handling of penalty functions, and the power solution optimization methodologies.
Abstract: This paper presents an advanced power flow methodology for optimally dispatching all active and reactive power in a power system. Two major obstacles impede the success of most optimization algorithms: (1) computational inefficiencies associated with large systems and (2) the problems associated with handling functional inequality constraints. In view of these basic problems, techniques are presented to improve the solution algorithm, the handling of penalty functions, and the power solution optimization methodologies. Additionally, new algorithms are provided for the determination of an optimal step length and for scaling of control variable gradients. These improvements and innovative computing techniques have been incorporated into a computer program and demonstrated on practical-size power systems.

Book
30 Apr 1982
TL;DR: In this book, the authors present methods of optimal control and filtering, numerical solutions for linear and nonlinear two-point boundary-value problems, system identification, and the design of optimal inputs for system identification.
Abstract: I. Introduction.- 1. Introduction.- 1.1. Optimal Control.- 1.2. System Identification.- 1.3. Optimal Inputs.- 1.4. Computational Preliminaries.- Exercises.- II. Optimal Control and Methods for Numerical Solutions.- 2. Optimal Control.- 2.1. Simplest Problem in the Calculus of Variations.- 2.1.1. Euler-Lagrange Equations.- 2.1.2. Dynamic Programming.- 2.1.3. Hamilton-Jacobi Equations.- 2.2. Several Unknown Functions.- 2.3. Isoperimetric Problems.- 2.4. Differential Equation Auxiliary Conditions.- 2.5. Pontryagin's Maximum Principle.- 2.6. Equilibrium of a Perfectly Flexible Inhomogeneous Suspended Cable.- 2.7. New Approaches to Optimal Control and Filtering.- 2.8. Summary of Commonly Used Equations.- Exercises.- 3. Numerical Solutions for Linear Two-Point Boundary-Value Problems..- 3.1. Numerical Solution Methods.- 3.1.1. Matrix Riccati Equation.- 3.1.2. Method of Complementary Functions.- 3.1.3. Invariant Imbedding.- 3.1.4. Analytical Solution.- 3.2. An Optimal Control Problem for a First-Order System.- 3.2.1. The Euler-Lagrange Equations.- 3.2.2. Pontryagin's Maximum Principle.- 3.2.3. Dynamic Programming.- 3.2.4. Kalaba's Initial-Value Method.- 3.2.5. Analytical Solution.- 3.2.6. Numerical Results.- 3.3. An Optimal Control Problem for a Second-Order System.- 3.3.1. Numerical Methods.- 3.3.2. Analytical Solution.- 3.3.3. Numerical Results and Discussion.- Exercises.- 4. Numerical Solutions for Nonlinear Two-Point Boundary-Value Problems.- 4.1. Numerical Solution Methods.- 4.1.1. Quasilinearization.- 4.1.2. Newton-Raphson Method.- 4.2. Examples of Problems Yielding Nonlinear Two-Point Boundary-Value Problems.- 4.2.1. A First-Order Nonlinear Optimal Control Problem.- 4.2.2. Optimization of Functionals Subject to Integral Constraints.- 4.2.3. Design of Linear Regulators with Energy Constraints.- 4.3. Examples Using Integral Equation and Imbedding Methods.- 4.3.1. Integral Equation Method for Buckling Loads.- 4.3.2. 
An Imbedding Method for Buckling Loads.- 4.3.3. An Imbedding Method for a Nonlinear Two-Point Boundary-Value Problem.- 4.3.4. Post-Buckling Beam Configurations via an Imbedding Method.- 4.3.5. A Sequential Method for Nonlinear Filtering.- Exercises.- III. System Identification.- 5. Gauss-Newton Method for System Identification.- 5.1. Least-Squares Estimation.- 5.1.1. Scalar Least-Squares Estimation.- 5.1.2. Linear Least-Squares Estimation.- 5.2. Maximum Likelihood Estimation.- 5.3. Cramer-Rao Lower Bound.- 5.4. Gauss-Newton Method.- 5.5. Examples of the Gauss-Newton Method.- 5.5.1. First-Order System with Single Unknown Parameter.- 5.5.2. First-Order System with Unknown Initial Condition and Single Unknown Parameter.- 5.5.3. Second-Order System with Two Unknown Parameters and Vector Measurement.- 5.5.4. Second-Order System with Two Unknown Parameters and Scalar Measurement.- Exercises.- 6. Quasilinearization Method for System Identification.- 6.1. System Identification via Quasilinearization.- 6.2. Examples of the Quasilinearization Method.- 6.2.1. First-Order System with Single Unknown Parameter.- 6.2.2. First-Order System with Unknown Initial Condition and Single Unknown Parameter.- 6.2.3. Second-Order System with Two Unknown Parameters and Vector Measurement.- 6.2.4. Second-Order System with Two Unknown Parameters and Scalar Measurement.- Exercises.- 7. Applications of System Identification.- 7.1. Blood Glucose Regulation Parameter Estimation.- 7.1.1. Introduction.- 7.1.2. Physiological Experiments.- 7.1.3. Computational Methods.- 7.1.4. Numerical Results.- 7.1.5. Discussion and Conclusions.- 7.2. Fitting of Nonlinear Models of Drug Metabolism to Experimental Data.- 7.2.1. Introduction.- 7.2.2. A Model Employing Michaelis and Menten Kinetics for Metabolism.- 7.2.3. An Estimation Problem.- 7.2.4. Quasilinearization.- 7.2.5. Numerical Results.- 7.2.6. Discussion.- Exercises.- IV. Optimal Inputs for System Identification.- 8. Optimal Inputs.- 8.1. 
Historical Background.- 8.2. Linear Optimal Inputs.- 8.2.1. Optimal Inputs and Sensitivities for Parameter Estimation.- 8.2.2. Sensitivity of Parameter Estimates to Observations.- 8.2.3. Optimal Inputs for a Second-Order Linear System.- 8.2.4. Optimal Inputs Using Mehra's Method.- 8.2.5. Comparison of Optimal Inputs for Homogeneous and Nonhomogeneous Boundary Conditions.- 8.3. Nonlinear Optimal Inputs.- 8.3.1. Optimal Input System Identification for Nonlinear Dynamic Systems.- 8.3.2. General Equations for Optimal Inputs for Nonlinear Process Parameter Estimation.- Exercises.- 9. Additional Topics for Optimal Inputs.- 9.1. An Improved Method for the Numerical Determination of Optimal Inputs.- 9.1.1. Introduction.- 9.1.2. A Nonlinear Example.- 9.1.3. Solution via Newton-Raphson Method.- 9.1.4. Numerical Results and Discussion.- 9.2. Multiparameter Optimal Inputs.- 9.2.1. Optimal Inputs for Vector Parameter Estimation.- 9.2.2. Example of Optimal Inputs for Two-Parameter Estimation.- 9.2.3. Example of Optimal Inputs for a Single-Input, Two-Output System.- 9.2.4. Example of Weighted Optimal Inputs.- 9.3. Observability, Controllability, and Identifiability.- 9.4. Optimal Inputs for Systems with Process Noise.- 9.5. Eigenvalue Problems.- 9.5.1. Convergence of the Gauss-Seidel Method.- 9.5.2. Determining the Eigenvalues of Saaty's Matrices for Fuzzy Sets.- 9.5.3. Comparison of Methods for Determining the Weights of Belonging to Fuzzy Sets.- 9.5.4. Variational Equations for the Eigenvalues and Eigenvectors of Nonsymmetric Matrices.- 9.5.5. Individual Tracking of an Eigenvalue and Eigenvector of a Parametrized Matrix.- 9.5.6. A New Differential Equation Method for Finding the Perron Root of a Positive Matrix.- Exercises.- 10. Applications of Optimal Inputs.- 10.1. Optimal Inputs for Blood Glucose Regulation Parameter Estimation.- 10.1.1. Formulation Using Bolie Parameters for Solution by Linear or Dynamic Programming.- 10.1.2. 
Formulation Using Bolie Parameters for Solution by Method of Complementary Functions or Riccati Equation Method.- 10.1.3. Improved Method Using Bolie and Bergman Parameters for Numerical Determination of the Optimal Inputs.- 10.2. Optimal Inputs for Aircraft Parameter Estimation.- Exercises.- V. Computer Programs.- 11. Computer Programs for the Solution of Boundary-Value and Identification Problems.- 11.1. Two-Point Boundary-Value Problems.- 11.2. System Identification Problems.- References.- Author Index.

Journal ArticleDOI
TL;DR: In this article, a new approach for forcing the state of a linear discrete-time system to zero in a minimum number of steps is discussed, which is formulated as the solution to a steady-state optimal control problem with no cost on the control.
Abstract: A new approach for forcing the state of a linear discrete-time system to zero in a minimum number of steps is discussed. The problem is formulated as the solution to a steady-state optimal control problem with no cost on the control. This problem in turn is set up as the solution to an associated eigenvalue problem. No special assumption on the open loop system matrix and/or the ratio of the number of states to controls is required. Stable numerical techniques are presented for solving for the feedback gain. Robust deadbeat tracking is also discussed.
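The deadbeat idea itself (all closed-loop eigenvalues at zero, so the state reaches the origin in at most n steps) can be sketched with a textbook pole-placement formula on an invented example, rather than the paper's eigenvalue-problem formulation:

```python
import numpy as np

# Deadbeat sketch on an invented example (Ackermann pole placement, not the
# paper's method): place all closed-loop poles at zero, so the state is
# forced to the origin in at most n steps.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # double integrator, n = 2 states
B = np.array([[0.5],
              [1.0]])

C = np.hstack([B, A @ B])             # controllability matrix
K = np.array([[0.0, 1.0]]) @ np.linalg.inv(C) @ (A @ A)   # phi(A) = A^2

x = np.array([[3.0], [-2.0]])
for _ in range(2):                    # n steps suffice
    x = (A - B @ K) @ x               # (A - BK) is nilpotent
print("state after n steps:", x.ravel())
```

Since (A - BK) is nilpotent of index 2, any initial state is annihilated in two steps; the paper's contribution is obtaining such gains via a steady-state optimal control problem with stable numerics and without assumptions on the state-to-control ratio.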

Journal ArticleDOI
TL;DR: The study of infinite-horizon nonstationary dynamic programs using the operator approach is continued in this article, where the point of view differs slightly from that taken by others, in that Denardo's local income function is not used as a starting point.
Abstract: The study of infinite-horizon nonstationary dynamic programs using the operator approach is continued. The point of view here differs slightly from that taken by others, in that Denardo's local income function is not used as a starting point. Infinite-horizon values are defined as limits of finite-horizon values, as the horizons get long. Two important conditions of an earlier paper are weakened, yet the optimality equations, the optimality criterion, and the existence of optimal “structured” strategies are still obtained.


Journal ArticleDOI
01 Sep 1982
TL;DR: Some aspects of stability, initialization and initial condition independence are studied, and two numerical examples are considered in order to emphasize the advantages of the given procedure: the decentralized Kalman filter and the optimal power-frequency control.
Abstract: In this paper, several aspects of decentralized control theory applied to dynamic systems are studied. First, some classical definitions concerning matricial functions and new results on gradient calculations are presented. We then generalize Rosen's gradient projection method to matricial problems. Finally, some aspects of stability, initialization, and initial-condition independence are studied in detail, and two numerical examples are considered in order to emphasize the advantages of the given procedure: the decentralized Kalman filter and the optimal power-frequency control.

Journal ArticleDOI
TL;DR: In this article, the authors consider the solution of a stochastic integral control problem and study its regularity, and characterize the optimal cost as the maximum solution of \[\begin{gathered} \forall...
Abstract: We consider the solution of a stochastic integral control problem and we study its regularity. In particular, we characterize the optimal cost as the maximum solution of \[\begin{gathered} \forall ...

Journal ArticleDOI
TL;DR: In this article, a maximum principle governing solutions to an optimal control problem which involves state constraints is derived in terms of Clarke's generalized Jacobians, which apply in the absence of differentiability assumptions on the data.

Journal ArticleDOI
TL;DR: In this article, it was shown that the set of necessary conditions for an optimal control problem with state-variable inequality constraints given by Bryson, Denham, and Dreyfus is equivalent to the (different) set of conditions given by Jacobson, Lele, and Speyer.
Abstract: It is shown that, when the set of necessary conditions for an optimal control problem with state-variable inequality constraints given by Bryson, Denham, and Dreyfus is appropriately augmented, it is equivalent to the (different) set of conditions given by Jacobson, Lele, and Speyer. Relationships among the various multipliers are given.

01 Jan 1982
TL;DR: In this article, an M/M/1 queue with fixed arrival rate and controllable service rate is considered, where the objective is to minimize the expected long-run average of a cost rate which is a sum of two functions, associated with the queue length and the service rate, respectively.
Abstract: This thesis consists of three parts. In the first one, optimal policies are constructed for some single-line queueing situations. The second part deals with finite-state Markovian decision processes, and in the third part the practical modelling of a more complex problem is discussed and exemplified. The central control object of part I is an M/M/1 queue with fixed arrival rate and controllable service rate. The objective is to minimize the expected long-run average of a cost rate, which is a sum of two functions, associated with the queue length (the holding cost) and the service rate (the service cost), respectively. For the case of a finite waiting room, terminal costs are constructed such that a solution to the associated dynamic programming (Bellman) equation exists which is affine in the time parameter. The corresponding optimal control is independent of both time and the length of the control interval. It has a form which is subsequently used in generalizing to the case of an infinite waiting room. For this case, the analysis results in an efficient algorithm and in several structural results. Assuming essentially only that the holding cost is increasing, it is proved that a monotone optimal policy exists, i.e. that the optimal choice of service rate is an increasing function of the present queue length. Three variations of the central problem are also treated in part I. These are the M/M/c problem (for which the above monotonicity result holds only under a stronger condition), the problem of a controllable arrival rate (with fixed service rate), and the discounted cost problem. In part II, finite-state Markovian decision processes are discussed. A brief and heuristic introduction is given regarding continuous-time Markov chains, cost structures on these, and the problem of constructing an optimal policy. The purpose is to point out the relations to the queueing control problem with finite waiting room.
Counterexamples demonstrate that the approach of part I is not universally applicable. In part III, a simplified model is discussed for a situation where the customers may reenter the queue after a stochastic delay. It is argued that under heavy-traffic conditions, the influx of reentering customers can be approximated by the output of a linear stochastic system with state-dependent Gaussian noise, whose dynamics depend on the delay distribution. This idea is exemplified with the results from a simulated experiment on a telephone station.
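The central control object lends itself to a small numerical sketch (hypothetical costs and a coarse grid of admissible service rates, not the thesis's construction): uniformize the queue into a discrete-time MDP and run discounted value iteration; the computed service rate should be increasing in the queue length, mirroring the monotone-optimal-policy result above.

```python
import numpy as np

# Numerical sketch (invented costs): M/M/1 queue with fixed arrival rate,
# service rate chosen from a finite set, holding cost q, convex service
# cost mu**2.  Uniformization + discounted value iteration.
lam = 0.8
mus = np.array([0.0, 0.5, 1.0, 1.5, 2.0])    # admissible service rates
N = 40                                        # finite waiting room
Lam = lam + mus.max()                         # uniformization constant
beta = 0.95                                   # discount factor

V = np.zeros(N + 1)
policy = np.zeros(N + 1)
for _ in range(1500):
    Vnew = np.empty_like(V)
    for q in range(N + 1):
        best = None
        for mu in mus:
            served = mu if q > 0 else 0.0     # cannot serve an empty queue
            down = V[q - 1] if q > 0 else V[0]
            nxt = (lam * V[min(q + 1, N)] + served * down
                   + (Lam - lam - served) * V[q]) / Lam
            cost = q + mu**2 + beta * nxt
            if best is None or cost < best[0]:
                best = (cost, mu)
        Vnew[q], policy[q] = best
    V = Vnew

print("service rate vs queue length:", policy[:8])
```

Here the criterion is discounted rather than long-run average, but the monotone structure of the optimal policy is the same phenomenon the thesis proves.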

Journal ArticleDOI
TL;DR: An algorithm is derived which performs optimal symbol-by-symbol detection of a pulse amplitude modulated sequence and a salient common feature is the merge phenomenon which allows common decisions to be made before the entire sequence is received.
Abstract: An algorithm is derived which performs optimal symbol-by-symbol detection of a pulse-amplitude-modulated sequence. The algorithm is similar to the Viterbi algorithm, with the optimality criterion being optimal symbol detection rather than optimal sequence detection. A salient common feature is the merge phenomenon, which allows common decisions to be made before the entire sequence is received.
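Symbol-by-symbol MAP detection can be sketched with the generic forward-backward trellis recursion on an invented memory-1 ISI channel (this is the standard recursion, not necessarily the paper's exact algorithm):

```python
import numpy as np

# Forward-backward sketch of symbol-by-symbol MAP detection on an invented
# channel: y_k = x_k + 0.5 * x_{k-1} + noise, x_k in {-1, +1}; the trellis
# state is the current symbol, and the decision maximizes each symbol's
# posterior rather than the sequence likelihood.
rng = np.random.default_rng(1)
n, sigma = 200, 0.4
x = rng.choice([-1.0, 1.0], size=n)
xprev = np.concatenate([[-1.0], x[:-1]])      # x_{-1} = -1 assumed known
y = x + 0.5 * xprev + rng.normal(0.0, sigma, n)

sym = [-1.0, 1.0]
def lik(yk, s_prev, s_cur):
    return np.exp(-(yk - (s_cur + 0.5 * s_prev)) ** 2 / (2 * sigma ** 2))

alpha = np.zeros((n, 2))                       # forward pass
alpha[0] = [lik(y[0], -1.0, s) for s in sym]
for k in range(1, n):
    for j, s in enumerate(sym):
        alpha[k, j] = sum(alpha[k - 1, i] * lik(y[k], sym[i], s)
                          for i in range(2))
    alpha[k] /= alpha[k].sum()                 # normalize for stability

beta_b = np.ones((n, 2))                       # backward pass
for k in range(n - 2, -1, -1):
    for j, s in enumerate(sym):
        beta_b[k, j] = sum(lik(y[k + 1], s, sym[i]) * beta_b[k + 1, i]
                           for i in range(2))
    beta_b[k] /= beta_b[k].sum()

post = alpha * beta_b                          # posterior of each symbol
xhat = np.where(post[:, 1] > post[:, 0], 1.0, -1.0)
print("symbol error rate:", np.mean(xhat != x))
```

Unlike Viterbi sequence detection, each decision maximizes that symbol's own posterior; when the forward quantities merge, early decisions become possible, as the abstract notes.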

Journal ArticleDOI
TL;DR: A new time-domain method of quadratic-optimum control synthesis for systems described by linear finite-memory output predictors updated in real time is presented, leading to algorithms which are numerically robust and therefore suitable for real-time computation using microprocessors with reduced word length.

Journal ArticleDOI
TL;DR: It is shown how the results can be utilized in a closed loop feedback control system and the nature of the optimal controller is established.

Journal ArticleDOI
T. Fischer
TL;DR: The optimal closed-loop quantized control is derived for the linear-quadratic-Gaussian formulation and shown to be separable in estimation, control, and quantization.
Abstract: The optimal closed-loop quantized control is derived for the linear-quadratic-Gaussian formulation and shown to be separable in estimation, control, and quantization. The optimal quantizer is time-varying and minimizes a quadratic distortion measure with weighting matrix dependent upon the solution to the matrix Riccati equation. The optimal cost-to-go is shown to be the sum of the cost-to-go for the optimal continuous-valued control solution and a term reflecting the quantizer distortion.

Journal ArticleDOI
TL;DR: In this article, a group preventive replacement problem is formulated in continuous time for a multicomponent system having identical elements and the dynamic programming equation is obtained in the framework of the theory of optimal control of jump processes.
Abstract: A group preventive replacement problem is formulated in continuous time for a multicomponent system having identical elements. The dynamic programming equation is obtained in the framework of the theory of optimal control of jump processes. For a discrete-time version of the model, the numerical computation of optimal and suboptimal group preventive replacement strategies is carried out. A monotonicity property of the Bellman functional (or cost-to-go function) is used to reduce the size of the computational problem. Some counterintuitive properties of the optimal strategy are apparent in the numerical results obtained.
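A discrete-time toy version of the group-replacement trade-off (invented costs, and simple Bernoulli failures instead of the paper's jump-process framework) exhibits the Bellman cost-to-go and the monotonicity property the abstract exploits:

```python
import numpy as np
from math import comb

# Toy group preventive replacement (invented costs): m identical components,
# each working one failing with probability p per period; with f failed
# units, either wait (downtime cost d per failed unit) or replace all
# failed ones (setup cost K plus c per unit replaced).
m, p = 5, 0.1
K, c, d = 4.0, 1.0, 2.0
beta = 0.9                                   # discount factor

def fail_dist(working):
    """Binomial distribution of new failures among `working` components."""
    return [(j, comb(working, j) * p**j * (1 - p)**(working - j))
            for j in range(working + 1)]

def q_values(V, f):
    wait = d * f + beta * sum(q * V[f + j] for j, q in fail_dist(m - f))
    repl = K + c * f + beta * sum(q * V[j] for j, q in fail_dist(m))
    return wait, repl

V = np.zeros(m + 1)                          # cost-to-go per number failed
for _ in range(500):                         # value iteration to convergence
    V = np.array([min(q_values(V, f)) for f in range(m + 1)])

policy = ["wait" if q_values(V, f)[0] <= q_values(V, f)[1] else "replace"
          for f in range(m + 1)]
print(policy)
print("cost-to-go:", np.round(V, 3))
```

The cost-to-go V is increasing in the number of failed units; it is this kind of monotonicity that lets the paper prune the computational problem. The fixed setup cost K is what makes grouping failures before intervening attractive.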

Journal ArticleDOI
TL;DR: In this article, the Lagrangian equations of the freeze-drying process were solved for non-fat reconstituted milk and turkey, using the control strategies presented in [4]; the results were then used to evaluate the coefficients representing the necessary conditions of optimality.