
Showing papers in "IEEE Transactions on Automatic Control in 1970"


Journal ArticleDOI
TL;DR: In this paper, it was shown that the steady-state optimal Kalman filter gain K_{op} depends only on n \times r linear functionals of the process noise covariance matrix Q , where n is the dimension of the state vector and r is the dimension of the measurement vector.
Abstract: A Kalman filter requires an exact knowledge of the process noise covariance matrix Q and the measurement noise covariance matrix R . Here we consider the case in which the true values of Q and R are unknown. The system is assumed to be constant, and the random inputs are stationary. First, a correlation test is given which checks whether a particular Kalman filter is working optimally or not. If the filter is suboptimal, a technique is given to obtain asymptotically normal, unbiased, and consistent estimates of Q and R . This technique works only for the case in which the form of Q is known and the number of unknown elements in Q is less than n \times r where n is the dimension of the state vector and r is the dimension of the measurement vector. For other cases, the optimal steady-state gain K op is obtained directly by an iterative procedure without identifying Q . As a corollary, it is shown that the steady-state optimal Kalman filter gain K op depends only on n \times r linear functionals of Q . The results are first derived for discrete systems. They are then extended to continuous systems. A numerical example is given to show the usefulness of the approach.

1,316 citations
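The correlation (whiteness) test mentioned in the abstract can be illustrated numerically: with the optimal steady-state gain, the innovation sequence of the filter is white, so its sample autocorrelations at nonzero lags should be statistically indistinguishable from zero. The sketch below uses a made-up scalar system (a = 0.9, Q = R = 1); all numbers and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
a, Qn, Rn, N = 0.9, 1.0, 1.0, 100_000

# Steady-state prediction Riccati (scalar DARE): P = a^2 P R/(P+R) + Q,
# i.e. P^2 + (R - a^2 R - Q) P - Q R = 0; take the positive root.
P = max(np.roots([1.0, Rn - a * a * Rn - Qn, -Qn * Rn]).real)
K = P / (P + Rn)                       # optimal steady-state gain

x, xhat = 0.0, 0.0
innov = np.empty(N)
for t in range(N):
    x = a * x + np.sqrt(Qn) * rng.standard_normal()
    y = x + np.sqrt(Rn) * rng.standard_normal()
    nu_t = y - a * xhat                # innovation
    xhat = a * xhat + K * nu_t
    innov[t] = nu_t

nu = innov - innov.mean()
rho1 = (nu[:-1] * nu[1:]).mean() / (nu * nu).mean()
print(rho1)                            # near zero when the gain is optimal
```

Running the same filter with a deliberately wrong gain makes the lag-1 autocorrelation clearly nonzero, which is the diagnostic the correlation test exploits.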


Journal ArticleDOI
TL;DR: In this article, the optimal control of linear time-invariant systems with respect to a quadratic performance criterion is discussed, and an algorithm for computing the minimizing gain F\ast is presented.
Abstract: The optimal control of linear time-invariant systems with respect to a quadratic performance criterion is discussed. The problem is posed with the additional constraint that the control vector u(t) is a linear time-invariant function of the output vector y(t) (u(t) = -Fy(t)) rather than of the state vector x(t) . The performance criterion is then averaged, and algebraic necessary conditions for a minimizing F\ast are found. In addition, an algorithm for computing F\ast is presented.

906 citations


Journal ArticleDOI
F. Brasch1, J.B. Pearson1
TL;DR: In this paper, the problem of designing a compensator to obtain arbitrary pole placement in the system consisting of the plant and compensator in cascade is considered, and it is shown that for a controllable, observable plant, a compensator of order \beta = \min( \nu_{c} - 1, \nu_{o} - 1) is sufficient to achieve this result.
Abstract: The problem of designing a compensator to obtain arbitrary pole placement in the system consisting of the plant and compensator in cascade is considered. The design uses only those state variables which can be measured. It is shown that for a controllable, observable plant a compensator of order \beta = \min( \nu_{c} - 1, \nu_{o} - 1) is sufficient to achieve this result. Here \nu_{c} ( \nu_{o} ) is the controllability (observability) index of the plant. This result is obtained by first showing that any multi-input multi-output linear time-invariant system may be made controllable (observable) from a single input (output) using only output feedback. The main result is then proved in a constructive manner which explicitly relates the compensator parameters to the coefficients of the desired characteristic polynomial.

364 citations


Journal ArticleDOI
TL;DR: In this paper, a method for designing controllers for linear time-invariant systems whose states are not all available or accessible for measurement and where the structure of the controller is constrained to be a linear time invariant combination of the measurable states of the system is presented.
Abstract: A method is presented for designing controllers for linear time-invariant systems whose states are not all available or accessible for measurement and where the structure of the controller is constrained to be a linear time-invariant combination of the measurable states of the system. Two types of structure constraints are considered: 1) each control channel is constrained to be a linear, time-invariant combination of one set of measurable states; 2) each control channel is constrained to be a linear, time-invariant combination of different sets of measurable states. The control system, subject to these constraints, is selected such that the resulting closed-loop system performs as "near" to some known optimal system as is possible, i.e., suboptimal. The nearness of the optimal system to the suboptimal system is defined in two ways and thus, two types of suboptimal controllers are found.

223 citations


Journal ArticleDOI
R. Kashyap1
TL;DR: In this paper, the maximum likelihood estimation of the coefficients of multiple-output linear dynamical systems and the noise correlations from the noisy measurements of input and output is discussed, and conditions are derived under which the estimates converge to their true values as the number of measurements tends to infinity.
Abstract: The maximum likelihood estimation of the coefficients of multiple-output linear dynamical systems and the noise correlations from the noisy measurements of input and output is discussed. Conditions are derived under which the estimates converge to their true values as the number of measurements tends to infinity. The computational methods are illustrated by several numerical examples.

212 citations


Journal ArticleDOI
King-Sun Fu1
TL;DR: The basic concept of learning control is introduced, and the following five learning schemes are briefly reviewed: 1) trainable controllers using pattern classifiers, 2) reinforcement learning control systems, 3) Bayesian estimation, 4) stochastic approximation, and 5) stochastic automata models.
Abstract: The basic concept of learning control is introduced. The following five learning schemes are briefly reviewed: 1) trainable controllers using pattern classifiers, 2) reinforcement learning control systems, 3) Bayesian estimation, 4) stochastic approximation, and 5) stochastic automata models. Potential applications and problems for further research in learning control are outlined.

204 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that if rank C = l , and if (A,B) is controllable, then a linear feedback of the output variables u = K*y , where K* is a constant matrix, can always be found so that l eigenvalues of the closed-loop system matrix A + BK*C are arbitrarily close (but not necessarily equal) to l preassigned values.
Abstract: The following system is considered: \dot{x} = Ax + Bu, y = Cx , where x is an n vector describing the state of the system, u is an m vector of inputs to the system, and y is an l vector ( l \leq n ) of output variables. It is shown that if rank C = l , and if (A,B) is controllable, then a linear feedback of the output variables u = K*y , where K* is a constant matrix, can always be found so that l eigenvalues of the closed-loop system matrix A + BK*C are arbitrarily close (but not necessarily equal) to l preassigned values. (The preassigned values must be chosen so that any complex numbers appearing do so in complex conjugate pairs.) This generalizes an earlier result of Wonham [1]. An algorithm is described which enables K* to be simply found, and examples of the algorithm applied to some simple systems are included.

200 citations


Journal ArticleDOI
TL;DR: In this article, a nonrecursive algebraic solution for the Riccati equation is presented, which allows direct determination of the transient solution for any particular time without proceeding recursively from the initial conditions.
Abstract: Equations for the optimal linear control and filter gains for linear discrete systems with quadratic performance criteria are widely documented. A nonrecursive algebraic solution for the Riccati equation is presented. These relations allow the determination of the steady-state solution of the Riccati equation directly without iteration. The relations also allow the direct determination of the transient solution for any particular time without proceeding recursively from the initial conditions. The method involves finding the eigenvalues and eigenvectors of the canonical state-costate equations.

189 citations
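The eigenvector construction described in the abstract can be sketched in its continuous-time analogue: stack the stable eigenvectors of the canonical state-costate (Hamiltonian) matrix and form the Riccati solution from the two half-blocks. The example matrices below are made-up illustrative values; the paper itself develops the discrete-time version.

```python
import numpy as np

# Steady-state solution of A'P + PA - P B R^{-1} B' P + Q = 0 via the
# eigenvectors of the Hamiltonian (state-costate) matrix.
A = np.array([[0.0, 1.0], [0.0, -1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

Rinv = np.linalg.inv(R)
H = np.block([[A, -B @ Rinv @ B.T],
              [-Q, -A.T]])
w, V = np.linalg.eig(H)
stable = V[:, w.real < 0]          # eigenvectors of stable eigenvalues
X1, X2 = stable[:2, :], stable[2:, :]
P = np.real(X2 @ np.linalg.inv(X1))

# Residual of the algebraic Riccati equation should be ~0
res = A.T @ P + P @ A - P @ B @ Rinv @ B.T @ P + Q
print(np.abs(res).max())
```

The same idea with the symplectic pencil of the state-costate equations gives the discrete steady-state solution, and selecting eigenvector combinations rather than only the stable subspace yields the transient solution at a particular time, as the abstract notes.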


Journal ArticleDOI
D. Kleinman1
TL;DR: A constructive proof is given for finding constant feedback gains that stabilize a linear time-invariant controllable system.
Abstract: A constructive proof is given for finding constant feedback gains that stabilize a linear time-invariant controllable system. It is not necessary to transform variables or to specify pole locations.

163 citations
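A sketch of the Gramian-based construction associated with this note: for a controllable pair (A, B), the gain K = B^T W(T)^{-1} with W(T) = \int_0^T e^{-At} B B^T e^{-A^T t} dt stabilizes A - BK for any T > 0, with no change of variables and no pole specification. The unstable example system and the quadrature settings below are made-up assumptions for illustration.

```python
import numpy as np

def expm(M, terms=40):
    # Matrix exponential by Taylor series (adequate for the tiny steps here)
    E, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

A = np.array([[1.0, 2.0], [0.0, 0.5]])   # unstable: eigenvalues 1 and 0.5
B = np.array([[0.0], [1.0]])             # (A, B) is controllable

T, n = 2.0, 2000
dt = T / n
Estep = expm(-A * dt)                    # one-step transition e^{-A dt}
Phi = np.eye(2)
W = np.zeros((2, 2))
g_prev = B @ B.T                         # integrand at t = 0
for _ in range(n):
    Phi = Phi @ Estep                    # Phi(t) = e^{-At}
    g = Phi @ B @ B.T @ Phi.T
    W += 0.5 * dt * (g_prev + g)         # trapezoidal quadrature
    g_prev = g

K = B.T @ np.linalg.inv(W)
closed = A - B @ K
print(np.linalg.eigvals(closed).real)    # all real parts negative
```

The gain depends only on the finite-horizon Gramian W(T), which is why no eigenvalue placement or coordinate transformation is required.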


Journal ArticleDOI
TL;DR: In this article, an algorithm for constructing minimal linear finite-dimensional realizations (a minimal partial realization) of an unknown (possibly infinite-dimensional) system from an external description as given by its Markov parameters is presented.
Abstract: An algorithm for constructing minimal linear finite-dimensional realizations (a minimal partial realization) of an unknown (possibly infinite-dimensional) system from an external description as given by its Markov parameters is presented. It is shown that the resulting realization in essence models the transient response of the unknown system. If the unknown system is linear, this technique can be used to find a smaller dimensional linear system having the same transient characteristics. If the unknown system is nonlinear, the technique can be used either 1) to determine a useful nonlinear model, or 2) to determine a linear model, both of which approximate the transient response of the nonlinear system.

156 citations


Journal ArticleDOI
TL;DR: In this paper, a simple derivation of the Kalman-Bucy recursive filtering formulas (for both continuous-time and discrete-time processes) and also some minor generalizations thereof are presented.
Abstract: The innovations approach to linear least-squares approximation problems is first to "whiten" the observed data by a causal and invertible operation, and then to treat the resulting simpler white-noise observations problem. This technique was successfully used by Bode and Shannon to obtain a simple derivation of the classical Wiener filtering problem for stationary processes over a semi-infinite interval. Here we shall extend the technique to handle nonstationary continuous-time processes over finite intervals. In Part I we shall apply this method to obtain a simple derivation of the Kalman-Bucy recursive filtering formulas (for both continuous-time and discrete-time processes) and also some minor generalizations thereof.

Journal ArticleDOI
TL;DR: In this paper, the authors generalize and unify the concepts developed by Kalman and Luenberger pertaining to the design of discrete linear systems which estimate the state of a linear plant on the basis of both noise-free and noisy measurements of the output variables.
Abstract: This paper generalizes and unifies the concepts developed by Kalman and Luenberger pertaining to the design of discrete linear systems which estimate the state of a linear plant on the basis of both noise-free and noisy measurements of the output variables. Classes of minimal-order optimum "observer-estimators" are obtained which yield the conditional mean estimate of the state of the dynamical system. One explicit minimal-order optimal observer-estimator is constructed which generates one version of the conditional mean state estimate.

Journal ArticleDOI
W. Gersch1
TL;DR: In this paper, an asymptotically unbiased estimator of the autoregressive parameters is obtained as the solution of a modified set of Yule-Walker equations, which behaves like a least-squares parameter estimate of an observation set with unknown error covariances.
Abstract: The problem of estimating the autoregressive parameters of a mixed autoregressive moving-average (ARMA) time series (of known order) using the output data alone is treated. This problem is equivalent to the estimation of the denominator terms of the scalar transfer function of a stationary, linear discrete time system excited by an unobserved uncorrelated sequence input by employing only the observations of the scalar output. The solution of this problem solves the problem of the identification of the dynamics of a white-noise excited continuous-time linear stationary system using sampled data. The latter problem was suggested by Bartlett in 1946. The problem treated here has appeared before in the engineering literature. The earlier treatment yielded biased parameter estimates. An asymptotically unbiased estimator of the autoregressive parameters is obtained as the solution of a modified set of Yule-Walker equations. The asymptotic estimator covariance matrix behaves like a least-squares parameter estimate of an observation set with unknown error covariances. The estimators are also shown to be unbiased in the presence of additive independent observation noise of arbitrary finite correlation time. An example illustrates the performance of the estimating procedures.
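The modified Yule-Walker idea can be sketched as follows: for an ARMA(p, q) series, the autocovariances at lags k > q obey the pure AR recursion r(k) = a_1 r(k-1) + ... + a_p r(k-p), so the autoregressive coefficients can be recovered from equations at lags q+1, ..., q+p, skipping the MA-contaminated low lags. The model orders and coefficients below are illustrative assumptions; this sketch omits the paper's covariance analysis and observation-noise extension.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, q = 200_000, 2, 1
a_true = np.array([0.75, -0.125])     # stable AR part (made-up values)
b1 = 0.5                              # MA(1) coefficient (made-up value)

# Simulate the ARMA(2, 1) series, then discard a burn-in transient
e = rng.standard_normal(N + 100)
x = np.zeros(N + 100)
for t in range(2, N + 100):
    x[t] = a_true[0] * x[t-1] + a_true[1] * x[t-2] + e[t] + b1 * e[t-1]
x = x[100:]

def acov(x, k):
    x0 = x - x.mean()
    return (x0[:len(x0) - k] * x0[k:]).mean()

r = np.array([acov(x, k) for k in range(p + q + 1)])
# Modified Yule-Walker equations at lags k = q+1, ..., q+p
M = np.array([[r[abs(k - j)] for j in range(1, p + 1)]
              for k in range(q + 1, q + p + 1)])
rhs = np.array([r[k] for k in range(q + 1, q + p + 1)])
a_hat = np.linalg.solve(M, rhs)
print(a_hat)                          # close to a_true for long series
```

Using only lags above q is what removes the bias of the earlier treatments the abstract mentions, at the cost of a larger estimator covariance.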

Journal ArticleDOI
TL;DR: In this paper, the authors generalized the results obtained in [1] to accommodate the case of unmeasurable disturbances, which are known only to satisfy a given ρ th-degree linear differential equation.
Abstract: In a previous paper [1], the conventional optimal linear regulator theory was extended to accommodate the case of external input disturbances \omega(t) which are not directly measurable but which can be assumed to satisfy d^{m+1}\omega(t)/dt^{m+1} = 0 , i.e., represented as m th-degree polynomials in time t with unknown coefficients. In this way, the optimal controller u^{0}(t) was obtained as the sum of: 1) a linear combination of the state variables x_{i}, i = 1,2,...,n , plus 2) a linear combination of the first (m + 1) time integrals of certain other linear combinations of the state variables. In the present paper, the results obtained in [1] are generalized to accommodate the case of unmeasurable disturbances \omega(t) which are known only to satisfy a given \rho th-degree linear differential equation D: d^{\rho}\omega(t)/dt^{\rho} + \beta_{\rho}d^{\rho-1}\omega(t)/dt^{\rho-1}+...+\beta_{2}d\omega/dt + \beta_{1}\omega=0 where the coefficients \beta_{i}, i = 1,...,\rho , are known. By this means, a dynamical feedback controller is derived which will consistently maintain state regulation x(t) \approx 0 in the face of any and every external disturbance function \omega(t) which satisfies the given differential equation D -even steady-state periodic or unstable functions \omega(t) . An essentially different method of deriving this result, based on stabilization theory, is also described. In each case the results are extended to the case of vector control and vector disturbance.

Journal ArticleDOI
TL;DR: In this paper, the design of linear time-invariant dynamic compensators of fixed dimensionality s, which are to be used for the regulation of an n th-order linear time invariant plant, is dealt with.
Abstract: The design of linear time-invariant dynamic compensators of fixed dimensionality s , which are to be used for the regulation of an n th-order linear time-invariant plant, is dealt with. A modified quadratic cost criterion is employed in which a quadratic penalty on the system state as well as all compensator gains is used; the effects of the initial state are averaged out. The optimal compensator gains are specified by a set of simultaneous nonlinear matrix algebraic equations. The numerical solution of these equations would specify the gain matrices of the dynamic compensator. The proposed method may prove useful in the design of low-order compensators (dimension s ) for high-order plants (dimension n ) with few outputs (dimension r ), so that the dimension of the compensator is less than that obtained through the use of the associated Kalman-Bucy filter (dimension n ) or the Luenberger observer (dimension n - r ).

Journal ArticleDOI
TL;DR: In this article, a class of singular control problems is made nonsingular by the addition of an integral quadratic functional of the control to the cost functional; a parameter \epsilon > 0 multiplies this added functional.
Abstract: A class of singular control problems is made nonsingular by the addition of an integral quadratic functional of the control to the cost functional; a parameter \epsilon > 0 multiplies this added functional. The resulting nonsingular problem is solved for a monotonically decreasing sequence \{\epsilon; \epsilon_{1} > \epsilon_{2} > ... > \epsilon_{k} > 0\} . As k \rightarrow \infty and \epsilon_{k} \rightarrow 0 the solution of the modified problem tends to the solution of the original singular problem. A variant of the method which does not require that \epsilon \rightarrow 0 is also presented. Four illustrative numerical examples are described.


Journal ArticleDOI
W. Vetter1
TL;DR: In this paper, the structure of a matrix derivative of a matrix-valued function is defined, and matrix product and chain rules are developed which provide significant simplifications for obtaining derivatives of compound matrix structures.
Abstract: The structure of a matrix derivative of a matrix-valued function is defined. Matrix product and chain rules are developed which provide significant simplifications for obtaining derivatives of compound matrix structures, and some closed-form structures for Taylor's expansions of a matrix in terms of derivatives and elements of a second matrix are given.

Journal ArticleDOI
TL;DR: In this article, sufficient conditions for the asymptotic stability of pulse-modulated feedback systems are developed from the operator-theoretic viewpoint, with stability defined to require that the pulse-modulated feedback system be a Lipschitz continuous operator on the extended space L_{1e} .
Abstract: Sufficient conditions for the asymptotic stability in the large of pulse-modulated feedback systems are developed from the operator theoretic viewpoint. Stability here requires that the pulse-modulated feedback system be a Lipschitz continuous operator on the extended space L_{1e} . This strong definition of stability is motivated by an examination of a first-order pulsewidth-modulated system. To provide a unified format for the main development two distinct general classes of pulse modulators are defined. Type I includes the pulsewidth modulator and more general pulsewidth frequency modulators that contain a sampler. Type II includes, for example, the integral pulse frequency modulator and its generalizations. For elements of Type I conditions are derived to bound the incremental gain (on L_{1e} ) of the modulator in cascade with a linear element; a standard transformation of the feedback loop similar to that used in the derivation of the Popov criterion yields sufficient conditions for stability of the feedback system in the above strong sense. Type II modulators are discontinuous on any normed linear space and thus, only conditions for boundedness of the closed-loop system as an operator on L_{1} are given for this case.

Journal ArticleDOI
TL;DR: In this article, the measurement problem in distributed system feedback is posed as an observability question and a new definition of observability is introduced which allows the specification of the space-time domain where the system is observable.
Abstract: The measurement problem in distributed system feedback is posed as an observability question. Sensor location and the information content of the resulting signals relative to a partial differential equation model are the primary questions. A new definition of observability is introduced which allows the specification of the space-time domain where the system is observable. The definition is also applied to the problem of observing a particular set of solution modes. Observability results are given for particular classes of equations. Examples indicate a rationale for using the results in selecting measurement locations in distributed systems.


Journal ArticleDOI
TL;DR: In this paper, the estimation errors of two algorithms proposed by Koopmans and Levin for identifying linear systems described by an n th-order scalar difference equation are examined in detail.
Abstract: This paper examines in detail the estimation errors of two algorithms proposed by Koopmans [1] and Levin [2] for identifying linear systems described by an n th-order scalar difference equation. Necessary and sufficient conditions are established for the strong consistency of the estimates that these algorithms generate. A priori bounds on the estimation error are obtained to provide a quantitative basis for comparing these algorithms in relation to the maximum likelihood estimates. Computational results are also presented to supplement the theoretical discussions.


Journal ArticleDOI
TL;DR: In this paper, a numerical method is presented for evaluating the complex integral I = \frac{1}{2\pi j} \oint \frac{B(z)B(z^{-1})}{A(z)A(z^{-1})} \frac{dz}{z} along the unit circle in a positive direction, where A and B are polynomials with real coefficients.
Abstract: A numerical method for evaluating the complex integral I = \frac{1}{2\pi j} \oint \frac{B(z)B(z^{-1})}{A(z)A(z^{-1})} \frac{dz}{z} along the unit circle in a positive direction (where A and B are polynomials with real coefficients) is presented in this paper. The method is realized both as a FORTRAN program and in tabular form. The results achieved represent a significant reduction of the computations compared to other existing methods.
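For real-coefficient polynomials the integrand on |z| = 1 reduces to |B(e^{j\theta})|^2 / |A(e^{j\theta})|^2, so the integral can be checked by simple quadrature over \theta. This is only a numerical sanity check, not the paper's tabular method; the example B(z) = 1, A(z) = z - a is made up, with closed form I = 1/(1 - a^2) from residue calculus.

```python
import numpy as np

a = 0.4
B = np.array([1.0])          # B(z) = 1
A = np.array([1.0, -a])      # A(z) = z - a  (root inside the unit circle)

# On |z| = 1: dz/z = j dtheta, so I = (1/2pi) * integral of |B|^2/|A|^2 dtheta.
theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
z = np.exp(1j * theta)
num = np.abs(np.polyval(B, z)) ** 2
den = np.abs(np.polyval(A, z)) ** 2
I = np.mean(num / den)       # mean of uniform samples = (1/2pi) * integral
print(I, 1.0 / (1.0 - a * a))
```

The trapezoidal rule is spectrally accurate for smooth periodic integrands, so the numerical value matches the residue result essentially to machine precision.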

Journal ArticleDOI
TL;DR: In this paper, the concepts of specific optimal control, trajectory sensitivity design, and perfect model following are integrated to develop a general design technique for the linear servo model following problem.
Abstract: The concepts of specific optimal control, trajectory sensitivity design, and perfect model following are integrated to develop a general design technique for the linear servo model following problem. Control laws designed by this procedure are characterized by partial state feedback, perfect model following when theoretically possible, relative insensitivity of the compensated plant to mathematical modeling inaccuracies, and plant-model realignment in the presence of state disturbances. The feasibility of the design technique is demonstrated by the design of a nontrivial aircraft control problem.

Journal ArticleDOI
TL;DR: In this article, a geometric formulation of the state feedback triangular decoupling problem is given, and necessary and sufficient conditions for the existence of decoupled matrices are presented, along with a procedure for simultaneously realizing a triangular structure and assigning the poles of the closed-loop system transfer function matrix.
Abstract: A geometric formulation of the state feedback triangular decoupling problem is given. Necessary and sufficient conditions for the existence of decoupling matrices are presented. A procedure is outlined for simultaneously realizing a triangular structure and assigning the poles of the closed-loop system transfer function matrix.


Journal ArticleDOI
TL;DR: In this paper, an algorithm based on the variable gradient method for the construction of Lyapunov functions is proposed, which is applicable to the particular class of second-order systems with state variables expressible both in the form of trigonometrical functions and polynomials.
Abstract: An algorithm based on the variable gradient method for the construction of Lyapunov functions is proposed. This algorithm is applicable to the particular class of second-order systems with state variables expressible both in the form of trigonometrical functions and polynomials; and the extension to third-order systems appears to be possible.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the gradient can be replaced by a more general definition of the direction of steepest descent, and that the adjoint equation must in general be replaced with an adjoint optimal control problem.
Abstract: An important class of optimal control problems, arising frequently in an economic framework, is characterized as having a cost functional that is continuous but has discontinuous partial derivatives with respect to the state variables. Such problems are said to have kinks. Along a kink the classical adjoint equation breaks down, and it is impossible to define a gradient. In this paper it is shown that the gradient can be replaced by a more general definition of the direction of steepest descent but that the adjoint equation must in general be replaced by an adjoint optimal control problem. This yields a complete set of necessary conditions for problems of this type. The results derived are then combined with the theory of penalty functions to convert a problem having state constraints to one without such constraints.

Journal ArticleDOI
TL;DR: It becomes necessary to briefly review, in an informal manner, what a DP system is and in what sense it differs, from both a mathematical and a practical point of view, from a conventional lumped system.
Abstract: The control of distributed parameter (DP) systems represents a real challenge, both from a theoretical and a practical point of view, to the systems engineer. Distributed parameter systems arise in various application areas, such as chemical process systems, aerospace systems, magneto-hydrodynamic systems, and communications systems, to mention just a few. Thus, there is sufficient motivation for research directed toward the analysis, synthesis, and design techniques for DP systems. On the surface, it may appear that the available theory for distributed parameter systems is almost at the same level as that associated with lumped systems. However, there exists a much wider gap between the theory and its applications. In the remainder of this correspondence, we shall briefly discuss the reasons for this gap and suggest certain tentative approaches which may contribute to the development of a theory and computational algorithms which take into account some of the practical problems associated with the design of controllers for DP systems. In order to make these concepts clear it becomes necessary to briefly review, in an informal manner, what a DP system is and in what sense it differs, from both a mathematical and a practical point of view, from a conventional lumped system.