
Showing papers on "Separation principle" published in 1971


Journal ArticleDOI
P. McLane
TL;DR: In this article, a review of the solution of the linear regulator problem for linear systems with state and control-dependent disturbances is presented, both the finite and infinite terminal time cases are treated.
Abstract: A review of the solution of the linear regulator problem for linear systems with state- and control-dependent disturbances is presented. Both the finite and infinite terminal time cases are treated. The solution to the complete state feedback case is given in detail and that for the output feedback case is noted. The general conclusion is that control-dependent noise calls for conservative control (small gains) while state-dependent noise calls for vigorous control (large gains). Of course it is the degree of this behavior that is important and this is given explicitly by the algorithms in this paper.

163 citations
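
The paper's qualitative conclusion (small gains under control-dependent noise, large gains under state-dependent noise) can be checked on a scalar analogue. The sketch below assumes a plant dx = (a*x + b*u) dt + (sx*x + su*u) dw with an infinite-horizon quadratic cost, for which the optimal gain solves a modified scalar Riccati equation; the plant numbers are invented, scipy is assumed, and this is an illustration rather than the algorithm given in the paper.

```python
# Hedged scalar illustration of that conclusion (NOT the paper's algorithm):
# for dx = (a*x + b*u) dt + (sx*x + su*u) dw with infinite-horizon quadratic
# cost, the optimal gain solves a modified scalar Riccati equation. Plant
# numbers are invented; scipy is assumed available.
from scipy.optimize import brentq

def scalar_gain(a, b, q, r, sx, su):
    """Gain k = b*p/(r + su^2*p), where p > 0 solves
    0 = 2*a*p + sx^2*p + q - (b*p)^2/(r + su^2*p)."""
    f = lambda p: 2*a*p + sx**2*p + q - (b*p)**2/(r + su**2*p)
    hi = 1.0
    while f(hi) > 0:                    # bracket the root; f(0) = q > 0
        hi *= 2.0
        if hi > 1e12:
            raise RuntimeError("no stabilizing solution: noise too strong")
    p = brentq(f, 0.0, hi)
    return b*p/(r + su**2*p)

a, b, q, r = 0.5, 1.0, 1.0, 1.0         # unstable scalar plant, unit weights
print("no multiplicative noise :", round(scalar_gain(a, b, q, r, 0.0, 0.0), 3))
print("state-dependent noise   :", round(scalar_gain(a, b, q, r, 0.8, 0.0), 3))  # larger gain
print("control-dependent noise :", round(scalar_gain(a, b, q, r, 0.0, 0.8), 3))  # smaller gain
```

Running it shows the gain falling as the control-dependent noise level grows (conservative control) and rising as the state-dependent noise level grows (vigorous control).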


Journal ArticleDOI
TL;DR: In this paper, the theory of optimal control for time delay systems and the quadratic performance criterion is presented from two points of view: 1) the geometric approach, which yields a maximum principle, and 2) the dynamic programming-Caratheodory approach.
Abstract: The theory of optimal control for time delay systems and the quadratic performance criterion is presented from two points of view: 1) the geometric approach, which yields a maximum principle, and 2) the dynamic programming-Caratheodory approach, which yields a feedback controller synthesis. The relationship of the two approaches is discussed, as well as extensions of the theory to nonlinear problems and nonlinear performance criteria.

133 citations
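
As a rough companion to the feedback-synthesis viewpoint, the sketch below approximates a scalar delay plant x'(t) = a*x(t) + a1*x(t-h) + b*u(t) by stacking delayed samples into an augmented state and solving an ordinary discrete-time Riccati equation. This is only a finite-dimensional approximation of the delay-system synthesis discussed in the paper; all numbers are invented and scipy is assumed.

```python
# Hedged sketch: a crude finite-dimensional route to a feedback law for the
# delay plant x'(t) = a*x(t) + a1*x(t-h) + b*u(t) with a quadratic cost --
# discretize, stack the delayed samples into an augmented state, and solve an
# ordinary discrete-time Riccati equation. This only approximates the
# delay-system synthesis discussed in the paper; all numbers are invented.
import numpy as np
from scipy.linalg import solve_discrete_are

a, a1, b, h = 0.2, -0.5, 1.0, 1.0       # assumed scalar plant and delay
N = 20                                  # delay discretized into N steps
dt = h/N
n = N + 1                               # augmented state: x(k), x(k-1), ..., x(k-N)

Ad = np.zeros((n, n))                   # Euler step for x plus a shift register
Ad[0, 0] = 1 + dt*a
Ad[0, N] = dt*a1
Ad[1:, :-1] = np.eye(N)
Bd = np.zeros((n, 1)); Bd[0, 0] = dt*b

Q = np.zeros((n, n)); Q[0, 0] = 1.0*dt  # penalize the current state only
R = np.array([[1.0*dt]])

P = solve_discrete_are(Ad, Bd, Q, R)
K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
print("gain on x(t):", round(K[0, 0], 4), "  gain on x(t-h):", round(K[0, N], 4))
```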


Journal ArticleDOI
TL;DR: It is proved that the optimal control law can be realized by the cascade of a Kalman filter and a linear feedback; the discussion of the assumptions required in the proof provides some motivation for different extension results.
Abstract: The problem of controlling stochastic linear systems with quadratic criteria is considered. It is proved that the optimal control law can be realized by the cascade of a Kalman filter and a linear feedback. The importance of different assumptions required in this proof is discussed in detail. This discussion provides some motivation for different extension results.

47 citations
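
The certainty-equivalence structure established by this result is easy to exhibit numerically. The sketch below is a discrete-time illustration with invented matrices (not the paper's derivation): it computes the LQ gain and the steady-state Kalman gain from two separate Riccati equations and closes the loop with the linear feedback acting on the filter's estimate; scipy is assumed.

```python
# Hedged sketch of the certainty-equivalence cascade the theorem describes:
# an LQ gain computed as if the state were known, driven by a Kalman filter
# estimate. Discrete time, invented matrices; not the paper's derivation.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])       # discretized double integrator
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.array([[0.1]])          # state / control weights
W, V = 0.01*np.eye(2), np.array([[0.04]])    # process / measurement noise covariances

P = solve_discrete_are(A, B, Q, R)           # control Riccati equation
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
S = solve_discrete_are(A.T, C.T, W, V)       # filter Riccati equation (a priori covariance)
L = S @ C.T @ np.linalg.inv(C @ S @ C.T + V) # steady-state Kalman gain

x, xhat = np.array([1.0, 0.0]), np.zeros(2)
for _ in range(200):
    y = C @ x + rng.normal(0.0, 0.2, 1)      # noisy measurement of the true state
    xhat = xhat + L @ (y - C @ xhat)         # Kalman measurement update
    u = -K @ xhat                            # linear feedback acting on the estimate
    x = A @ x + B @ u + rng.multivariate_normal(np.zeros(2), W)
    xhat = A @ xhat + B @ u                  # Kalman time update
print("true state:", x.round(3), " estimate:", xhat.round(3))
```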


Journal ArticleDOI
P. J. McLane
TL;DR: In this article, the problem of determining the linear feedback control of the instantaneous system output which minimizes a quadratic performance measure for a linear system with state and control-dependent noise is solved.
Abstract: The problem of determining the linear feedback control of the instantaneous system output which minimizes a quadratic performance measure for a linear system with state and control-dependent noise is solved in this paper. Both the finite and infinite terminal time versions of this problem are treated. For the latter case, a sufficient condition for the existence of an optimal control is obtained. For the finite terminal time problem, it is shown that a two-point boundary value problem must be solved to realize the optimal control. For the infinite terminal time case, two non-linear matrix equations must be solved to realize the optimal control. Some discussion on the numerical methods used by the author to solve these equations is included in the paper.

42 citations
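
For a feel of the infinite-terminal-time problem, the scalar sketch below parameterizes the constant output-feedback gain directly: with multiplicative state- and control-dependent noise the mean-square dynamics are scalar, so the cost of a given gain has a closed form and can be minimized numerically instead of solving the paper's nonlinear matrix equations. All numbers are invented and scipy is assumed.

```python
# Hedged scalar sketch of the infinite-terminal-time output-feedback problem:
# for dx = (a*x + b*u) dt + (sx*x + su*u) dw with u = -g*y and y = c*x, the
# mean-square dynamics are scalar, so the cost of a constant gain g has a
# closed form and can be minimized numerically instead of solving the paper's
# nonlinear matrix equations. All numbers are invented; scipy is assumed.
from scipy.optimize import minimize_scalar

a, b, c, q, r = 0.3, 1.0, 1.0, 1.0, 1.0
sx, su = 0.4, 0.3                  # state- / control-dependent noise intensities
x0 = 1.0                           # initial state

def cost(g):
    # closed loop: dx = (a - b*g*c)*x dt + (sx - su*g*c)*x dw
    drift = 2*(a - b*g*c) + (sx - su*g*c)**2    # growth rate of E[x^2]
    if drift >= 0:
        return 1e9                              # not mean-square stable
    return (q + r*(g*c)**2)*x0**2/(-drift)      # integral of E[q*x^2 + r*u^2]

res = minimize_scalar(cost, bounds=(0.0, 20.0), method='bounded')
print("best constant output-feedback gain:", round(res.x, 3), " cost:", round(res.fun, 3))
```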


Journal ArticleDOI
TL;DR: In this article, the optimal control of an integrated (hydro-thermal) power system taking into account all the nonlinearities, constraints and discontinuities in the system was analyzed using the generalized maximum principle.
Abstract: This paper deals with the optimal control of an integrated (hydro-thermal) power system taking into account all the non-linearities, constraints and discontinuities in the system. The mathematical solution of the deterministic problem is obtained using the generalized maximum principle. It turns out that the optimization equations for the thermal power system obtained by Carpentier and Sirioux, who used the Kuhn-Tucker theory, are a particular case of the optimization equations for the integrated power system obtained here using the generalized maximum principle.

15 citations
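
The thermal special case mentioned above reduces, in its simplest textbook form, to an economic dispatch whose Kuhn-Tucker conditions are the familiar equal-incremental-cost rule. The sketch below illustrates only that special case, with two invented quadratic cost curves and a single demand constraint; it is not the paper's hydro-thermal maximum-principle formulation, and scipy is assumed available.

```python
# Hedged illustration of the thermal special case mentioned above: a two-unit
# economic dispatch whose Kuhn-Tucker conditions reduce to equal incremental
# cost. Cost coefficients and the demand are invented; this is NOT the paper's
# hydro-thermal maximum-principle solution.
import numpy as np
from scipy.optimize import minimize

demand = 500.0                                 # MW to be supplied
a = np.array([0.004, 0.006])                   # quadratic fuel-cost coefficients
b = np.array([8.0, 7.0])                       # linear fuel-cost coefficients

cost = lambda P: np.sum(a*P**2 + b*P)
res = minimize(cost, x0=[demand/2, demand/2],
               constraints={'type': 'eq', 'fun': lambda P: np.sum(P) - demand},
               bounds=[(0.0, 400.0), (0.0, 400.0)])
P = res.x
print("dispatch [MW]    :", P.round(1))
print("incremental costs:", (2*a*P + b).round(3))   # equal at the optimum
```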


Journal ArticleDOI
TL;DR: In this article, optimal control problems for neutral systems are considered in which ∫₀ᵀ x(t) dt is minimized subject to u(t) ∈ U ⊂ R on [0, T] and a functional boundary condition on [T−h, T]: either membership in a manifold in AC([T−h, T], R) or x(t) = f(t) for a fixed absolutely continuous f.
Abstract: which have been studied extensively (as in [4]) and arise in many applications. The class of control problems considered includes problems in which one wishes to minimize ∫₀ᵀ x(t) dt while requiring that u(t) ∈ U ⊂ R for t ∈ [0, T], and that either the restriction of x to [T−h, T] lie in a manifold in AC([T−h, T], R) or x(t) = f(t) on [T−h, T], with f a fixed absolutely continuous function. These functional boundary conditions arise naturally since the "state" in neutral systems of the above type is a point in AC([−h, 0], R). Let a₀, t₀ and a be fixed in R with −∞ < a₀ ≤ t₀ < a ≤ ∞, and set I = [a₀, a), I′ = [t₀, a). For x continuous on I and t in I′, the notation F(x(·), t) will mean that F is a functional of x depending on any or all of the values x(τ), a₀ ≤ τ ≤ t. For t ∈ I′, let

15 citations


Journal ArticleDOI
TL;DR: In this article, a first-order approximation of the system is derived and a set of linear difference equations with time-varying coefficients are obtained, where the discrete-time maximum principle is applied to optimize the system performance.
Abstract: A production-inventory problem is studied in a form amenable to the discrete-time optimal control theory. First, a mathematical model as a first-order approximation of the system is derived and a set of linear difference equations with time-varying coefficients are obtained. A quadratic cost function for the system is optimized with respect to a decision variable using dynamic programming. Then, a more general mathematical model is presented for the production-inventory problem. The discrete-time maximum principle is applied to optimize the system performance. A discussion is also presented on determining the effects of possible errors occurring in the optimal performance of the system, if the optimal continuous-state results must be quantized to integer values. Examples for a production-inventory system are presented. An interpretation of the results is given in a form which can serve as a guideline for decisions by management.

12 citations
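
The first, linear-quadratic model type described above can be illustrated with a scalar inventory balance I[k+1] = I[k] + P[k] − D[k] and quadratic penalties on inventory and production deviations, solved by the usual backward dynamic-programming recursion on a quadratic value function. The numbers below are invented and the model is deterministic; it is a sketch of the approach, not the paper's model.

```python
# Hedged sketch of a linear-quadratic production-inventory model of the kind
# described above: inventory balance I[k+1] = I[k] + P[k] - D[k], quadratic
# penalties on inventory and production deviations, solved by a backward
# dynamic-programming (Riccati-type) recursion. All numbers are invented.
import numpy as np

T = 12                                   # planning periods
D = 80 + 20*np.sin(np.arange(T))         # known demand forecast
I_tgt, P_nom = 100.0, 80.0               # target inventory, nominal production
q, r, qT = 1.0, 0.2, 5.0                 # stage and terminal weights
d = D - P_nom                            # disturbance in deviation coordinates

# deviation state x = I - I_tgt, decision u = P - P_nom, so x[k+1] = x[k] + u[k] - d[k];
# quadratic value function V_k(x) = S[k]*x^2 + 2*g[k]*x + const
S, g = np.zeros(T + 1), np.zeros(T + 1)
S[T] = qT
for k in range(T - 1, -1, -1):
    a_, b_ = S[k + 1], g[k + 1]
    S[k] = q + r*a_/(r + a_)
    g[k] = r*(b_ - a_*d[k])/(r + a_)

# forward pass: apply the optimal feedback and recover the production plan
x, plan = 20.0, []                       # start 20 units above target inventory
for k in range(T):
    u = -(S[k + 1]*(x - d[k]) + g[k + 1])/(r + S[k + 1])
    plan.append(P_nom + u)
    x = x + u - d[k]
print("production plan:", np.round(plan, 1))
print("final inventory:", round(I_tgt + x, 1))
```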


Proceedings ArticleDOI
01 Dec 1971
TL;DR: In this paper, the optimal stochastic control with unknown time-invariant model parameters is shown to separate into a bank of model-conditional deterministic control gains and a corresponding bank of known nonlinear functionals of the model conditional, causal, mean-square state-vector estimates.
Abstract: For the quadratic-cost, nonlinear, adaptive stochastic control problem with linear discrete plant and measurement models excited by white Gaussian noise, and unknown time-invariant model parameters, the optimal stochastic control is obtained and shown to separate ("Nonlinear Separation Theorem") into a bank of model-conditional deterministic control gains and a corresponding bank of known nonlinear functionals of the model-conditional, causal, mean-square state-vector estimates. This separation may also be viewed as a decomposition of the optimal, nonlinear adaptive control into a bank of model-conditional optimal, non-adaptive linear controls, one for each admissible value of the unknown parameter, and a nonlinear part, namely, the bank of a posteriori model probabilities, which incorporates the adaptive nature of the optimal adaptive control. Results are given for several special cases of the above problem that exhibit drastically reduced computational requirements. These are the cases of (a) uncertainty in the measurement matrix only, and (b) completely known models but a non-Gaussian initial state vector. In both special cases, we have explicit separation between control and estimation. Moreover, in both cases only one deterministic controller is required to be used with the nonlinear, adaptive mean-square state-vector estimate. Several illustrative examples are included to demonstrate the adaptive control algorithm developed in this paper.

12 citations
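
The bank-of-models structure described above is often implemented in a suboptimal, probability-weighted form: one Kalman filter and one LQ gain per admissible parameter value, with the applied control blended by the a posteriori model probabilities. The scalar sketch below (invented numbers, numpy assumed) shows that structure; it is not the paper's exact optimal control law.

```python
# Hedged sketch of the bank-of-models structure in its common suboptimal,
# probability-weighted form: one Kalman filter and one LQ gain per candidate
# parameter value, control blended by the a posteriori model probabilities.
# Scalar plant, invented numbers -- NOT the paper's exact optimal control law.
import numpy as np

rng = np.random.default_rng(1)
A_cand = np.array([0.7, 1.0, 1.3])       # admissible values of the unknown pole
a_true = 1.3                             # value actually generating the data
W, V, q, r = 0.02, 0.1, 1.0, 0.5         # noise variances and LQ weights

def lq_gain(a):
    # scalar discrete-time Riccati recursion (input gain b = 1), run to convergence
    P = q
    for _ in range(500):
        P = q + a*a*P - (a*P)**2/(r + P)
    return a*P/(r + P)

K = np.array([lq_gain(a) for a in A_cand])   # bank of model-conditional gains
prob = np.ones(3)/3                          # prior model probabilities
xhat, Pcov = np.zeros(3), np.ones(3)         # per-model estimates and covariances
x = 1.0

for _ in range(100):
    u = -np.sum(prob*K*xhat)                 # probability-weighted control
    x = a_true*x + u + rng.normal(0, np.sqrt(W))
    y = x + rng.normal(0, np.sqrt(V))
    like = np.zeros(3)
    for i, a in enumerate(A_cand):
        xp, Pp = a*xhat[i] + u, a*a*Pcov[i] + W   # model-i time update
        Sinn = Pp + V                             # innovation variance
        like[i] = np.exp(-0.5*(y - xp)**2/Sinn)/np.sqrt(2*np.pi*Sinn)
        G = Pp/Sinn                               # model-i Kalman gain
        xhat[i], Pcov[i] = xp + G*(y - xp), (1 - G)*Pp
    prob = prob*like
    prob /= prob.sum()                       # a posteriori model probabilities

print("posterior model probabilities:", np.round(prob, 3))
```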


Journal ArticleDOI
TL;DR: In this article, a simple direct method of determining such a controller is presented, based on the fact that usually a linear combination of the set of state variables is all that is required to reconstruct the optimal control.
Abstract: Many optimal control solutions require a complete set of measurements of current state variables, which may not be fully available. It is reasonable to ask whether compensators cannot be designed in such a way that the desirable qualities of the optimal control are reproduced. One method of constructing a compensator that generates an asymptotically optimal control is to generate an estimate of the complete set of state variables by an auxiliary dynamic system, such as an observer or a Kalman filter. It can be shown, however, that a simpler design is often possible by employing the fact that usually a linear combination of the set of state variables is all that is required to reconstruct the optimal control. A simple direct method of determining such a controller is presented in this paper.

9 citations
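
The underlying observation can be checked directly: if the optimal gain K lies in the row space of the output matrix C, then a static output feedback u = -F*y with F*C = K reproduces u = -K*x exactly and no observer is needed. The sketch below uses an invented third-order system and a pseudoinverse fit (scipy assumed); it illustrates the observation, not the paper's direct design method.

```python
# Hedged sketch (not the paper's construction): when the LQ-optimal gain K can
# be written as K = F*C for the output matrix C, the static output feedback
# u = -F*y reproduces u = -K*x exactly and no observer is needed. The system
# below is invented; the printed mismatch ||K - F*C|| shows how closely a
# memoryless output feedback can reproduce the optimal control here.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -3.0, -3.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])          # two measured combinations of the state
Q, R = np.eye(3), np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal state-feedback gain (1 x 3)
F = K @ np.linalg.pinv(C)                # least-squares fit of K = F*C   (1 x 2)
print("K        :", K.round(3))
print("F C      :", (F @ C).round(3))
print("mismatch :", np.linalg.norm(K - F @ C).round(3))
```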


Journal ArticleDOI
TL;DR: A feedback estimation algorithm of a specific form is derived and is shown to be superior to one based upon the separation principle of stochastic control and inferior to one employing a different feedback signal structure.
Abstract: Estimation systems with a feedback link have a structure similar to that encountered in the study of stochastic feedback control systems. A feedback estimation algorithm of a specific form is derived in this paper. This algorithm is shown to be superior to one based upon the separation principle of stochastic control and inferior to one employing a different feedback signal structure. The observed differences in performance of these algorithms give insight into a basic limitation on control policies derived from formal application of the separation principle.

7 citations



Journal ArticleDOI
TL;DR: In this article, the authors consider the effect of modeling inaccuracies on optimal linear stochastic control systems and derive a covariance matrix composed of covariances of the estimates of the state variables, the errors in the estimates, and the correlation between these errors and the estimates.
Abstract: The deterioration of a linear optimal stochastic control scheme, designed under the assumptions of the certainty-equivalence principle (the optimal filter and controller, determined independently, combine to give a totally optimal system), is investigated when the parameters of the actual system do not coincide with the design values. This linear suboptimal stochastic system is described by a covariance matrix composed of the covariances of the estimates of the state variables, the errors in the estimates of the state variables, and the correlation between these errors and the estimates. In particular, this paper is concerned with the covariance matrix resulting from a single-state dynamical system and a scalar linear measurement function of both the state variable and the control variable (e.g., accelerometer measurements). A modeling error in the control-variable coefficient of the measurement function may induce instability in the stochastic system with either unstable or stable dynamics. Furthermore, the absolute magnitude of the error in the control-variable coefficient directly influences system stability, not the relative error. Thus, errors that are relatively small compared to the design value of this coefficient may be quite important. Although there are many studies on divergence of optimal filters, little attention has been given to the effect of modeling inaccuracies on optimal linear stochastic control systems. Here we extend Fitzgerald's investigation of Kalman filter divergence to optimal linear stochastic control systems. These systems are designed under the certainty-equivalence principle, which states that if the expected value of a quadratic function of the state and control variables is to be minimized subject to linear dynamics, the optimal system is composed of an optimal filter in cascade with an optimal controller. This separation is possible because the estimate of the state is uncorrelated with the error in this estimate. If the parameters in the assumed model of the dynamics or the measurement device deviate from the parameters of the actual system, the estimate and the error in the estimate become correlated. The behavior of the system under gains based on an inaccurate model is studied by considering the coupled matrix covariance equation composed of the covariances of the error in the estimate, the estimate, and the estimate with its error. Some of the characteristics of this linear matrix equation are studied through a scalar linear dynamic equation. The errors in the system parameters enter the 2×2 covariance equation in a dimensionless form, allowing the following general results to be obtained: 1) the stochastic control system may be unstable when the nonoptimal filter and deterministic control systems individually are stable; 2) instability occurs only when the error in the parameter exceeds a finite threshold value; 3) if the measurement is a linear function of the control variable as well as the state (e.g., accelerometer measurements) and there are errors in the coefficient of the control, then instability of the total system may occur for both stable and unstable dynamical systems; 4) the filter and control gains are not functions of the coefficient of the control variable in the measurement function.
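
A much-simplified version of this kind of mismatch analysis can be reproduced in a few lines. The sketch below uses a scalar plant, ignores the noise, and considers an error in the plant pole only rather than the control-coefficient-in-the-measurement error studied in the paper; all numbers are invented. It freezes the controller and observer gains at their design values and tracks the eigenvalues of the joint state/estimate dynamics as the modeling error grows, which exhibits the finite instability threshold noted in result 2.

```python
# Hedged, much-simplified mismatch check (scalar plant, noise ignored, error in
# the plant pole only -- NOT the control-coefficient-in-the-measurement case
# the paper analyzes). Gains are frozen at their design values.
import numpy as np

a_design, b, k, L = 1.0, 1.0, 3.0, 4.0   # design model and fixed gains
# at the design point the loop separates: eigenvalues a-b*k = -2 and a_design-L = -3

def joint_eigs(a_true):
    # x-dot    = a_true*x - b*k*xhat
    # xhat-dot = L*x + (a_design - b*k - L)*xhat   (observer built on the design model)
    M = np.array([[a_true, -b*k],
                  [L, a_design - b*k - L]])
    return np.linalg.eigvals(M)

for err in [0.0, 0.5, 1.0, 2.0, 3.0]:
    lam = joint_eigs(a_design + err)
    print(f"parameter error {err:3.1f}: max Re(eig) = {lam.real.max():6.2f}")
```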

Proceedings ArticleDOI
01 Dec 1971
TL;DR: In this article, the use of modern control theory for stabilization and control of an unstable fourth-order macroeconomic model is explored, using observer theory together with pole placement results for state and observer feedback.
Abstract: Use of some aspects of modern control theory for stabilization and control of an unstable fourth-order macroeconomic model is explored. Satisfactory control algorithms for both instantaneous state feedback and for the more realistic case of output feedback with measurement delays are derived. Application is made of observer theory together with pole placement results for state and observer feedback. A detailed numerical example illustrates the discussion.
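
The design pattern described, state feedback by pole placement plus an observer for the unmeasured states, looks roughly like the sketch below on an invented unstable fourth-order system (not the paper's macroeconomic model; scipy assumed). The block-triangular closed-loop matrix exhibits the separation of controller and observer eigenvalues.

```python
# Hedged sketch: pole placement for state feedback plus a Luenberger observer,
# on an invented unstable fourth-order system (NOT the paper's macroeconomic
# model). scipy is assumed available.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.1, 1.0, 0.0, 0.0],
              [0.0, 0.2, 1.0, 0.0],
              [0.0, 0.0, 0.3, 1.0],
              [0.0, 0.0, 0.0, 0.4]])   # all four open-loop eigenvalues unstable
B = np.array([[0.0], [0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0, 0.0]])   # single measured output

K = place_poles(A, B, [-1.0, -1.2, -1.5, -2.0]).gain_matrix        # state feedback
L = place_poles(A.T, C.T, [-3.0, -3.5, -4.0, -4.5]).gain_matrix.T  # observer gain

# closed loop in (state, estimation error) coordinates: block triangular,
# so the spectrum is the union of the controller and observer poles (separation)
Acl = np.block([[A - B @ K, B @ K],
                [np.zeros((4, 4)), A - L @ C]])
print(np.sort(np.linalg.eigvals(Acl).real))
```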

Journal ArticleDOI
TL;DR: In this article, the synthesis of a feedback controller of non-linear discrete systems is considered and the optimal gains in the feedback control law for the nonlinear systems are then determined by iteration.
Abstract: The synthesis of a feedback controller for non-linear discrete systems is considered. The control systems designed are optimal and have low sensitivity to parameter variations. The first step in the synthesis of the controller is to quasilinearize the non-linear difference equations which describe the non-linear discrete systems. Dynamic programming is then applied to find the feedback control law with respect to a quadratic performance index which includes the state variable, control variable, trajectory sensitivity function and control sensitivity function as its arguments. The optimal gains in the feedback control law for the non-linear systems are then determined by iteration. An example is studied in detail to show the superiority of this technique over the optimal control system designed without including the sensitivity functions.

Proceedings ArticleDOI
01 Dec 1971
TL;DR: In this article, the authors present a theory of compatible suboptimal controllers that use an auxiliary dynamic feedback system, such as an observer, to generate an asymptotic estimate of the desired optimal control input.
Abstract: Many optimal control problems generally lead to solutions that require complete state feedback for their implementation. For many practical applications, however, not all states are available for direct measurement. The use of an auxiliary dynamic feedback system, such as an observer, is one approach to the design of a controller that generates an asymptotic estimate of the desired optimal control input. However, a simpler design is desirable and is often possible by means of the theory of compatible suboptimal controllers presented in this paper.

Journal ArticleDOI
Bahar J. Uttam1
TL;DR: The effects upon stability are considered when an observer is incorporated in a time-varying system to estimate the immeasurable state variables in order to implement the feedback control law.
Abstract: The effects upon stability are considered when an observer is incorporated in a time-varying system to estimate the immeasurable state variables in order to implement the feedback control law.