
Showing papers in "IEEE Transactions on Automatic Control in 1969"


Journal ArticleDOI
TL;DR: In this article, a new algorithm for constructing an inverse of a multivariable linear dynamical system is presented, which is considerably more efficient than previous methods, and incorporates a relatively simple criterion for determining if an inverse system exists.
Abstract: A new algorithm for constructing an inverse of a multivariable linear dynamical system is presented. This algorithm, which is considerably more efficient than previous methods, also incorporates a relatively simple criterion for determining if an inverse system exists. New insight into the structure of a system inverse is gained by consideration of the inverse system representations resulting from the algorithm. A precise bound on the number of output differentiations required is obtained as well as a bound on the total number of integrators and differentiators necessary to realize the inverse. This latter bound is equal to the order of the original system. A further advantage of the algorithm and theory developed is that it is applicable to both time-invariant systems and time-variable systems which satisfy certain regularity conditions. One application is also given: a complete description of the set of initial states necessary and sufficient for a specified function to be the output of an invertible system.

804 citations


Journal ArticleDOI
TL;DR: In this article, the problem of estimating the state x of a linear process in the presence of a constant but unknown bias vector b is considered, and it is shown that the optimum estimate \hat{x} of the state can be expressed as \hat{x} = \tilde{x} + V_{x}\hat{b} (1), where \tilde{x} is the bias-free estimate, computed as if no bias were present, and V_{x} is a matrix which can be interpreted as the ratio of the covariance of \tilde{x} and \hat{b} to the variance of \hat{b}.
Abstract: The problem of estimating the state x of a linear process in the presence of a constant but unknown bias vector b is considered. This bias vector influences the dynamics and/or the observations. It is shown that the optimum estimate \hat{x} of the state can be expressed as \hat{x} = \tilde{x} + V_{x}\hat{b} (1) where \tilde{x} is the bias-free estimate, computed as if no bias were present, \hat{b} is the optimum estimate of the bias, and V_{x} is a matrix which can be interpreted as the ratio of the covariance of \tilde{x} and \hat{b} to the variance of \hat{b} . Moreover, \hat{b} can be computed in terms of the residuals in the bias-free estimate, and the matrix V_{x} depends only on matrices which arise in the computation of the bias-free estimates. As a result, the computation of the optimum estimate \hat{x} is effectively decoupled from the estimate of the bias \hat{b} , except for the final addition indicated by (1).
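The decomposition (1) can be checked in a minimal scalar, single-measurement setting; the numbers and the static model below are hypothetical (the paper treats the full dynamic Kalman case), but the identity holds exactly:

```python
# Scalar, single-measurement illustration of the separate-bias decomposition.
# Model (assumed for illustration): z = x + b + v, with x, b, v independent
# zero-mean Gaussians of variances sx2, sb2, r.
sx2, sb2, r = 4.0, 1.0, 0.5   # prior variances of state x, bias b, noise v
z = 2.3                        # an observed measurement value

S = sx2 + sb2 + r              # innovation variance of the joint problem
x_hat_joint = (sx2 / S) * z    # optimum estimate using the full model
b_hat = (sb2 / S) * z          # optimum estimate of the bias

x_tilde = (sx2 / (sx2 + r)) * z   # bias-free estimate (model ignores b)
V_x = -sx2 / (sx2 + r)            # coupling term (a scalar here)
x_hat_decoupled = x_tilde + V_x * b_hat   # equation (1)

assert abs(x_hat_joint - x_hat_decoupled) < 1e-12
```

The bias-free filter never sees b; the correction enters only through the final addition, which is the decoupling the abstract describes.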

728 citations


Journal ArticleDOI
TL;DR: In this paper, the authors introduce the concept of inherent integration associated with a dynamical system, i.e., the number of integrations which no inverse system can remove unless ideal differentiators are introduced.
Abstract: The question of inverting linear time-invariant dynamical systems has been of interest to control engineers for many years. An example of the nonstate-variable approach and application of this concept is the very well known work of Bode and Shannon in 1950. With the integration of state-variable descriptions and techniques into systems problems, the question has recently reappeared in a somewhat more complicated guise. Basically, the problem of inverting such dynamical systems is to determine the existence of the inverse, its properties, and its construction in terms of the matrices which characterize its state description. The first general study of existence appears to be due indirectly to Brockett and Mesarovic in 1965, and the first general construction seems to have been proposed by Youla and Dorato in 1966. Neither of these works was intended to develop a substantial insight into the properties of the inverse. The present work introduces the concept of the inherent integration associated with a dynamical system, i.e., the number of integrations which no inverse dynamical system can remove unless ideal differentiators are introduced. The existence of the inverse is discussed in terms of a determination of the inherent integration, and the construction which realizes this minimum number of integrations is given. The existence tests introduced are at worst one-half as complex as that of Brockett and Mesarovic and the construction offers a substantial improvement in conceptual simplicity over that of Youla and Dorato. The results are made possible by recognizing an essential equivalence with an associated problem in real sequential circuits and appear to be applicable to related problems in sensitivity, estimation, and game theory.

491 citations


Journal ArticleDOI
TL;DR: In this article, a solution to the optimum linear smoothing problem is presented in which the smoother is interpreted as a combination of two optimum linear filters, and a form of the solution which is convenient for practical computation is developed.
Abstract: A solution to the optimum linear smoothing problem is presented in which the smoother is interpreted as a combination of two optimum linear filters. This result is obtained from the well-known equation for the maximum likelihood combination of two independent estimates and equivalence to previous formulations is demonstrated. Forms of the solution which are convenient for practical computation are developed.
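The identity underlying this two-filter interpretation is the inverse-variance (maximum-likelihood) combination of two independent estimates. A scalar sketch, with made-up forward-filter and backward-filter values:

```python
# Maximum-likelihood fusion of two independent estimates of the same state,
# the building block of the two-filter smoother interpretation.
def fuse(x1, p1, x2, p2):
    """Combine estimates x1, x2 with error variances p1, p2."""
    p = 1.0 / (1.0 / p1 + 1.0 / p2)    # fused variance
    x = p * (x1 / p1 + x2 / p2)        # inverse-variance-weighted estimate
    return x, p

# hypothetical forward-filter and backward-filter estimates at one time point
x_fwd, p_fwd = 1.0, 2.0
x_bwd, p_bwd = 2.0, 1.0
x_s, p_s = fuse(x_fwd, p_fwd, x_bwd, p_bwd)

# the smoothed variance is smaller than either filter's variance
assert p_s < min(p_fwd, p_bwd)
```

The fused estimate leans toward the lower-variance input, and smoothing always reduces variance relative to filtering alone.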

403 citations


Journal ArticleDOI
TL;DR: In this paper, a class of linear systems are studied which are subject to sudden changes in parameter values and an algorithm similar in form to Kushner's stochastic maximum principle is derived.
Abstract: A class of linear systems is studied which are subject to sudden changes in parameter values. An algorithm similar in form to Kushner's stochastic maximum principle is derived, and the relationship between these algorithms is discussed. Systems in which the performance measure is quadratic are investigated in detail, and a differential equation is derived which yields the optimal feedback gains.

347 citations



Journal ArticleDOI
TL;DR: In this article, a slack variable is used to transform an optimal control problem with scalar control and a scalar inequality constraint on the state variables into an unconstrained problem of higher dimension.
Abstract: A slack variable is used to transform an optimal control problem with a scalar control and a scalar inequality constraint on the state variables into an unconstrained problem of higher dimension. It is shown that, for a p th order constraint, the p th time derivative of the slack variable becomes the new control variable. The usual Pontryagin principle or Lagrange multiplier rule gives necessary conditions of optimality. There are no discontinuities in the adjoint variables. A feature of the transformed problem is that any nominal control function produces a feasible trajectory. The optimal trajectory of the transformed problem exhibits singular arcs which correspond, in the original constrained problem, to arcs which lie along the constraint boundary.
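The transformation can be sketched for a first-order (p = 1) constraint. The notation below (S, f, \alpha) is assumed for illustration, following the abstract's description rather than the paper's exact equations:

```latex
% state constraint S(x,t) \le 0 replaced by an equality with slack \alpha:
S(x,t) + \tfrac{1}{2}\alpha^{2} = 0
% differentiating once along trajectories of \dot{x} = f(x,u,t):
S_{x}(x,t)\,f(x,u,t) + S_{t}(x,t) + \alpha\,\dot{\alpha} = 0
% for p = 1, \dot{\alpha} becomes the new (unconstrained) control variable;
% \alpha = 0 corresponds to a singular arc on the boundary S = 0.
```

Any choice of the slack trajectory keeps S \le 0 automatically, which is why every nominal control of the transformed problem yields a feasible trajectory.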

214 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that if, for each t, the frozen system is stable, then the time-varying system is also stable, provided \dot{A}(t) is small enough.
Abstract: A limiting case of great importance in engineering is that of slowly varying parameters. For systems described by \dot{x} = A(t)x , one would intuitively expect that if, for each t , the frozen system is stable, then the time-varying system should also be stable. Provided \dot{A}(t) is small enough, Rosenbrock has shown that this is the case [1]. Rosenbrock used a continuity argument [1, p. 75]. In this correspondence explicit bounds and slightly sharper results are obtained. Finally, it is pointed out that these results are useful in the study of the exact behavior of non-linear lumped systems with slowly varying operating points.
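Frozen-time stability alone is not enough, which is why a bound on the rate of variation is needed. The classic Vinograd-type counterexample below (an assumed construction, not taken from the correspondence) has frozen eigenvalues with real part -0.25 for every t, yet the fast-varying system diverges; slowing the variation restores stability:

```python
import math

def A(t, a=1.5):
    # frozen eigenvalues are ((a-2) +/- sqrt(a*a-4))/2, real part -0.25 for a=1.5
    c, s = math.cos(t), math.sin(t)
    return [[-1 + a*c*c,  1 - a*s*c],
            [-1 - a*s*c, -1 + a*s*s]]

def simulate(eps, T=20.0, dt=1e-3):
    """Euler-integrate xdot = A(eps*t) x from x(0) = (1, 0)."""
    x, t = [1.0, 0.0], 0.0
    while t < T:
        M = A(eps * t)
        x = [x[0] + dt * (M[0][0]*x[0] + M[0][1]*x[1]),
             x[1] + dt * (M[1][0]*x[0] + M[1][1]*x[1])]
        t += dt
    return math.hypot(x[0], x[1])

# fast variation: x(t) = exp(0.5 t)(cos t, -sin t) is an exact growing solution
assert simulate(eps=1.0) > 100.0
# slow variation: behavior tracks the (stable) frozen systems and decays
assert simulate(eps=0.01) < 1.0
```

The contrast makes the correspondence's point concrete: explicit bounds on \dot{A}(t) separate the two regimes.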

189 citations


Journal ArticleDOI
TL;DR: In this paper, an optimal feedback control for a linear time-varying system with time delay is presented, where the performance criterion is quadratic with a fixed, finite upper limit and results in a set of differential equations with boundary conditions.
Abstract: A method is presented whereby an optimal control may be obtained for a linear time-varying system with time delay. The performance criterion is quadratic with a fixed, finite upper limit, and results in a set of differential equations with boundary conditions whose solution yields an optimal feedback control. A numerical technique is developed for the solution of the differential equations, and two examples are worked.

176 citations


Journal ArticleDOI
D. Kleinman1
TL;DR: In this article, the problem of controlling a linear system to minimize a quadratic cost criterion is investigated when the system output is a delayed linear combination of system states corrupted by additive observation noise.
Abstract: The problem of controlling a linear system to minimize a quadratic cost criterion is investigated when the system output is a delayed linear combination of system states corrupted by additive observation noise. It is shown that the optimal control is generated by the cascade combination of a Kalman filter and a least mean-squared predictor. Expressions are derived for the minimum cost and for the state variances.

152 citations



Journal ArticleDOI
TL;DR: The hyperstability approach presented in this paper also allows for other solutions to the adaptation mechanism and represents a general method for studying this type of adaptive system.
Abstract: This paper considers the stability problem of model reference adaptive control systems by means of the properties of hyperstable systems. A theorem concerning the hyperstability of model reference adaptive control systems is presented. This theorem directly gives a structure of the adaptation mechanism. The results presented here include all the results obtained by Butchart, Shackcloth, Parks, Winsor, Roy, and Dressler. The hyperstability approach presented in this paper also allows for other solutions to the adaptation mechanism and represents a general method for studying this type of adaptive system. The results are directly applicable to the design of model reference adaptive control systems and they were verified for some particular cases by analog simulation.

Journal ArticleDOI
D. Kleinman1
TL;DR: In this article, a convergent algorithm is developed for computing the optimal feedback gains for linear systems in which the intensity of the driving noise is proportional to the control input, and conditions are given under which an optimal control always exists.
Abstract: Optimal stochastic control is investigated for linear systems in which the intensity of the driving noise is proportional to control input. Conditions are given under which an optimal control always exists. It is shown that the optimal control is linear in the system state. A convergent algorithm is developed for computing the optimal feedback gains.

Journal ArticleDOI
TL;DR: In this paper, the state feedback matrix of a linear system optimal with respect to a quadratic performance index can be expanded in a MacLaurin series in parameters which change the order of the system.
Abstract: It is shown that the state feedback matrix of a linear system optimal with respect to a quadratic performance index can be expanded in a MacLaurin series in parameters which change the order of the system. The first two terms of this series are employed in a near-optimum design for a high-order plant. The result of the near-optimum design is superior to that achieved by a conventional low-order design, while the amount of computation is considerably less than that required for a high-order design. An example of a second-order design for a fifth-order plant is given.

Journal ArticleDOI
TL;DR: A new form is presented for the transient solution of the matrix Riccati equation associated with the linear optimal regulator and filter problems for time-invariant plants in a form such that the transient terms decay exponentially with time, leaving the steady-state terms.
Abstract: A new form is presented for the transient solution of the matrix Riccati equation associated with the linear optimal regulator and filter problems for time-invariant plants. The solution is expressed in a form such that the transient terms decay exponentially with time, leaving the steady-state terms. In contrast to the automatic synthesis program (ASP) matrix iteration method, the negative exponential solution does not code essential information in numbers of widely differing magnitudes.
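For the scalar regulator, the exponential decay of the transient toward the steady-state Riccati solution is easy to exhibit numerically. The scalar data a, b, q, r below are assumed for illustration; the paper's negative exponential form addresses the matrix case:

```python
import math

# Scalar LQR Riccati equation in time-to-go s:
#   dp/ds = 2*a*p - (b**2/r)*p**2 + q,   p(0) = 0  (zero terminal weighting)
a, b, q, r = 1.0, 1.0, 1.0, 1.0
p, ds = 0.0, 1e-3
for _ in range(int(10.0 / ds)):          # integrate out to s = 10
    p += ds * (2*a*p - (b*b / r)*p*p + q)

# steady state: positive root of 2*p - p**2 + 1 = 0
p_inf = 1.0 + math.sqrt(2.0)
# the transient term has decayed exponentially, leaving the steady state
assert abs(p - p_inf) < 1e-3
```

The transient decays at twice the closed-loop rate, so even a modest horizon leaves only the steady-state term, which is the behavior the negative exponential form exploits.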

Journal ArticleDOI
TL;DR: In this paper, a new recursive algorithm for the calculation of the weighting coefficients was proposed and compared to the original weighting coefficient algorithm of Magill, and it was shown that the memory and computational savings include 1) L memory allocations, 2) L scalar additions per iteration, and 3) scalar multiplications per iteration.
Abstract: The optimal discrete adaptive Kalman filter, as presented by Magill, necessitates the iterative calculation of a weighting coefficient for each value of the quantized parameter space. This correspondence proposes a new recursive algorithm for the calculation of the weighting coefficients and compares it to the weighting coefficient algorithm of Magill. When there are L elements in the a priori known parameter space, it is shown that the memory and computational savings include 1) L memory allocations, 2) L scalar additions per iteration, and 3) L scalar multiplications per iteration.
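The weighting-coefficient recursion is, in essence, a Bayesian update of model probabilities from the filter residuals. A scalar sketch (function and variable names are illustrative, not Magill's notation):

```python
import math

def update_weights(weights, residuals, variances):
    """One Bayesian update of model weights in a Magill-style bank of
    Kalman filters: weight_i is scaled by the Gaussian likelihood of
    filter i's (scalar) residual, then the weights are renormalized."""
    likes = [math.exp(-r*r / (2.0*s)) / math.sqrt(2.0*math.pi*s)
             for r, s in zip(residuals, variances)]
    raw = [w * l for w, l in zip(weights, likes)]
    total = sum(raw)
    return [x / total for x in raw]

w = [1/3, 1/3, 1/3]   # uniform prior over 3 quantized parameter values
# the filter matched to the true parameter produces the smallest residual
w = update_weights(w, residuals=[0.1, 1.0, 2.0], variances=[1.0, 1.0, 1.0])
assert w[0] > w[1] > w[2]
assert abs(sum(w) - 1.0) < 1e-12
```

Iterating this update concentrates the weight on the best-matched filter, and the per-iteration cost scales with the number L of parameter hypotheses, which is exactly where the savings cited above matter.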

Journal ArticleDOI
J.B. Pearson1, Chai Ding1
TL;DR: In this paper, the problem of the specification of the order and structure of a linear dynamic compensator in order to obtain arbitrary pole placement in a closed-loop linear system comprised of the compensator and a linear plant is discussed.
Abstract: The problem of the specification of the order and structure of a linear dynamic compensator in order to obtain arbitrary pole placement in a closed-loop linear system comprised of the compensator in cascade with a linear plant is discussed. A significant application of the theory is to the design of optimal systems in those cases where not all the state variables of the plant can be measured. These results permit a completely algorithmized approach to the design of compensators for linear systems.

Journal ArticleDOI
TL;DR: In this paper, necessary and sufficient conditions for stability with probability 1 are developed for the class of linear stochastic systems, and a simple technique for constructing quadratic stochastic Lyapunov functions is presented which entails the solution to an n \times n linear matrix equation.
Abstract: Necessary and sufficient conditions for stability with probability 1 are developed for the class of linear stochastic systems. A simple technique for constructing quadratic stochastic Lyapunov functions is presented which entails the solution to an n \times n linear matrix equation.
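In the scalar case the matrix equation collapses to one linear equation whose solvability for a positive coefficient is the stability test. The Itô model dx = a x dt + \sigma x dW below is assumed for illustration and is not necessarily the paper's exact system class:

```python
# Quadratic stochastic Lyapunov function V(x) = p*x^2 for the assumed
# scalar Ito model dx = a*x dt + sigma*x dW.  The generator gives
#   L V = (2*a + sigma**2) * p * x**2,
# so (2*a + sigma**2) * p = -q must have a solution p > 0 for some q > 0,
# i.e. exactly when 2*a + sigma**2 < 0 (mean-square stability).
def lyapunov_p(a, sigma, q=1.0):
    denom = 2.0*a + sigma**2
    if denom >= 0:
        return None          # no positive solution: not mean-square stable
    return -q / denom        # scalar analogue of the n x n matrix equation

assert lyapunov_p(a=-1.0, sigma=0.5) > 0        # 2a + sigma^2 = -1.75 < 0
assert lyapunov_p(a=-0.1, sigma=1.0) is None    # 2a + sigma^2 =  0.80 > 0
```

Note the noise term can destabilize a deterministically stable system (a < 0), which is what the quadratic Lyapunov condition captures.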

Journal ArticleDOI
TL;DR: In this article, the authors considered non-deterministic differential games of imperfect information, with particular emphasis on the case of a linear system, a quadratic cost functional, and independent white Gaussian noises additively corrupting the observable output measurements.
Abstract: Nondeterministic differential games of imperfect information are considered, with particular emphasis on the case of a linear system, a quadratic cost functional, and independent white Gaussian noises additively corrupting the observable output measurements. Solutions are presented for a number of particular cases of this problem, including those in which one of the two controllers has either no information or, under certain additional restrictions, perfect measurements of the state vector. In each case the optimal control for each controller is shown to be closely related to that which would result by assuming a separation theorem to hold. Furthermore, the various terms in the resulting optimal cost are shown to be readily assignable to the appropriate contributing source, such as the optimal cost that would result if the problem were instead a deterministic one with perfect information, the effect of estimation errors, or the effect of measurement errors.

Journal ArticleDOI
W. Willman1
TL;DR: In this paper, a class of differential pursuit-evasion games is examined in which the dynamics are linear and perturbed by additive white Gaussian noise, the performance index is quadratic, and both players receive measurements perturbed independently by additivewhite Gaussian noises.
Abstract: A class of differential pursuit-evasion games is examined in which the dynamics are linear and perturbed by additive white Gaussian noise, the performance index is quadratic, and both players receive measurements perturbed independently by additive white Gaussian noise. Linear minimax solutions are characterized in terms of a set of implicit integro-differential equations. A game of this type also possesses a "certainty-coincidence" property, meaning that its minimax behavior coincides with that of the corresponding deterministic game in the event that all noise values are zero. This property is used to decompose the minimax strategies into sums of a certainty-equivalent term and error terms.

Journal ArticleDOI
TL;DR: A state variable formulation of the remote manipulation problem is presented, applicable to human-supervised or autonomous computer-manipulators, and a method similar to dynamic programming is used to determine the optimal control history.
Abstract: A state variable formulation of the remote manipulation problem is presented, applicable to human-supervised or autonomous computer-manipulators. A discrete state vector, containing position variables for the manipulator and relevant objects, spans a quantized state space comprising many static configurations of objects and hand. A manipulation task is a desired new state. State transitions are assigned costs and are accomplished by commands: hand motions plus grasp, release, push, twist, etc. In control theory terms the problem is to find the cheapest control history (if any) from present to desired state. A method similar to dynamic programming is used to determine the optimal history. The system is capable of obstacle avoidance, grasp rendezvous, incorporation of new sensor data, remembering results of previous tasks, and so on.
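The "cheapest control history" search over a quantized state space can be sketched with a Dijkstra-style shortest-path algorithm; the toy states and commands below are invented for illustration:

```python
import heapq

def cheapest_history(transitions, start, goal):
    """Dijkstra search for the cheapest command sequence between discrete
    manipulator states (a stand-in for the paper's DP-like method).
    transitions: {state: [(next_state, command, cost), ...]}"""
    pq = [(0.0, start, [])]
    settled = set()
    while pq:
        cost, state, history = heapq.heappop(pq)
        if state == goal:
            return cost, history
        if state in settled:
            continue
        settled.add(state)
        for nxt, command, step_cost in transitions.get(state, []):
            if nxt not in settled:
                heapq.heappush(pq, (cost + step_cost, nxt, history + [command]))
    return None  # no control history reaches the desired state

# toy quantized state space: move to an object, grasp it, carry it away
graph = {
    "start":   [("at_obj", "move", 1.0)],
    "at_obj":  [("holding", "grasp", 0.5), ("start", "move", 1.0)],
    "holding": [("done", "move+release", 2.0)],
}
cost, history = cheapest_history(graph, "start", "done")
assert cost == 3.5
assert history == ["move", "grasp", "move+release"]
```

Returning None when the goal is unreachable mirrors the abstract's "cheapest control history (if any)".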

Journal ArticleDOI
TL;DR: This work is concerned with the optimal control of a discrete-time linear system with random parameters and the method of solution is based on the dynamic programming approach which leads to functional recurrence equations.
Abstract: This work is concerned with the optimal control of a discrete-time linear system with random parameters. It is assumed that the parameters of the system vary randomly during the process, namely, the parameters constitute sequences of random variables. These random variables are not necessarily independent. An important particular case occurs where there are unknown constant parameters in the system. The measurements of the state of the system contain additive noise. A quadratic function of the state and controller, with appropriate weighting, serves as the criterion function. The solutions for the open-loop controller and the open-loop feedback controller are presented. The method of solution is based on the dynamic programming approach which leads to functional recurrence equations.

Journal ArticleDOI
TL;DR: In this paper, the existence and form of an inverse system is analyzed together with the question of decoupling a fixed plant by state variable feedback in the setting of nonstationary systems.
Abstract: Attention is given to time-varying linear dynamic systems of a discrete or continuous time variable. For this class of systems the question of the existence and form of an inverse system is analyzed together with the question of decoupling a fixed plant by state variable feedback. It is shown that recent results (see [1], [2]) on the decoupling of stationary systems have a partial counterpart in the setting of nonstationary systems. A synopsis of the main results may be obtained by reading Theorem 1 and its corollaries.

Journal ArticleDOI
TL;DR: In this article, randomly sampled linear systems with linear or nonlinear feedback loops are studied by a stochastic Lyapunov function method, which allows the study of systems with nonlinear feedback or nonstationary holding times.
Abstract: Randomly sampled linear systems with linear or nonlinear feedback loops are studied by a stochastic Lyapunov function method. The input in this paper is assumed zero; driven systems will be treated in a later paper. Improved criteria for stability (with probability one, in s th moment, s > 1 , or in mean-square) are given when the sequence of holding times is independent. The method is relatively straightforward to apply, especially in comparison with the direct methods, and allows the study of systems with nonlinear feedback or nonstationary holding times. A randomly sampled Lur'e problem is studied. Numerical results describing some interesting phenomena, such as jitter-stabilized systems, are presented.

Journal ArticleDOI
TL;DR: In this article, the authors considered stochastic differential games in which the two controllers have available only noise-corrupted output measurements and proposed a solution to this problem under the constraint that each controller is limited to a linear dynamic system of fixed dimension for the generation of his estimate of the system state.
Abstract: Attention is given to stochastic differential games in which the two controllers have available only noise-corrupted output measurements. Consideration is restricted to the case in which the system is linear, the cost functional quadratic, and the noises corrupting the output measurements are independent, white, and Gaussian. A solution to this problem is presented under the constraint that each controller is limited to a linear dynamic system of fixed dimension for the generation of his estimate of the system state. The optimal controls are shown to satisfy a separation theorem, the optimal estimators are shown to be closely related to Kalman filters, and the various terms in the optimal cost are shown to be readily assignable to the appropriate contributing sources.

Journal ArticleDOI
G. Johnson1
TL;DR: In this article, a deterministic Kalman-Bucy filter is proposed to provide arbitrary and separable stability properties in a feedback control system for linear nonstationary process and measurement systems, subject to uniform controllability and observability.
Abstract: A feedback control system can be structured for linear nonstationary process and measurement systems comprising a deterministic filter whose output is the independent variable of a linear control law. Subject to uniform controllability and observability, the filter and control gains can be specified to provide arbitrary and separable stability properties. If the filter gain is selected to produce a stabilizing effect on the state estimate, and the control gain is selected to produce a stabilizing effect on the process, the filter and control gains are shown to satisfy matrix Riccati differential equations. This suggests the use of stochastic optimal control theory when there is no quantitative measure of optimality, but it is desirable to assure the qualitative property that feedback be stabilizing. A concise derivation of the Kalman-Bucy filter is included in an appendix to illustrate the facility of approaching optimal estimation problems with the methods of stability theory.


Journal ArticleDOI
TL;DR: In this article, simple analytical relations are given for the inverse of the Vandermonde matrix and for the inverse of its confluent form; used according to a suggested procedure, they allow the desired inverse matrices to be computed efficiently and accurately.
Abstract: Simple analytical relations are given for the inverse of the Vandermonde matrix as well as for the inverse of its confluent form. These relations, when used according to a suggested procedure, allow one to compute the desired inverse matrices efficiently and accurately.
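The abstract does not reproduce the paper's analytic relations; as a point of comparison, the standard route to the (non-confluent) Vandermonde inverse goes through Lagrange basis polynomials, since column k of the inverse holds the coefficients of the polynomial that is 1 at node k and 0 at the others:

```python
def poly_mul_linear(coeffs, root):
    """Multiply a polynomial (ascending coefficients) by (x - root)."""
    out = [0.0] * (len(coeffs) + 1)
    for j, c in enumerate(coeffs):
        out[j + 1] += c
        out[j] -= root * c
    return out

def vandermonde_inverse(nodes):
    """Inverse of V[i][j] = nodes[i]**j (distinct nodes assumed)."""
    n = len(nodes)
    inv = [[0.0] * n for _ in range(n)]
    for k in range(n):
        coeffs, denom = [1.0], 1.0
        for m, xm in enumerate(nodes):
            if m != k:
                coeffs = poly_mul_linear(coeffs, xm)
                denom *= nodes[k] - xm
        for j in range(n):
            inv[j][k] = coeffs[j] / denom   # column k = coeffs of l_k(x)
    return inv

nodes = [1.0, 2.0, 4.0]
inv = vandermonde_inverse(nodes)
# check V @ inv == I for V[i][j] = nodes[i]**j
for i in range(3):
    for k in range(3):
        entry = sum(nodes[i]**j * inv[j][k] for j in range(3))
        assert abs(entry - (1.0 if i == k else 0.0)) < 1e-9
```

This sketch handles only distinct nodes; the confluent case treated in the paper (repeated nodes with derivative rows) requires the additional relations the authors derive.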

Journal ArticleDOI
TL;DR: In this article, the companion canonic form of a single-input linear time-invariant controllable system was shown to have total symmetry and complete simultaneity properties.
Abstract: New proofs are given for the recently demonstrated total symmetry and complete simultaneity properties for the companion canonic form for single-input linear time-invariant controllable systems. These proofs result in a convenient closed-form expression for the complete simultaneity property. The use of these properties to generate by one n th-order sensitivity model all the sensitivity functions \frac{\partial x_{i}}{\partial v_{j}}|_{v^{0}}, i=1,...,n, j=1,...,r, for a single-input linear time-invariant controllable n th-order system which depends on r different parameters is reviewed. This method represents an improvement over known methods for generating the sensitivity functions, which generally require a composite dynamic system of order n(r+1) . This result is then extended to the case of multi-input normal linear systems, where, at most, 2m-1 dynamic n th-order systems are needed in addition to the system to generate all the sensitivity functions of the system state with respect to any number of parameters ( m is the dimension of u ). It is shown that the algebraic calculations that must be made in the m -input case are much less than m times the calculations needed for the single-input case. The implications of these results for the computer-aided sensitivity analysis of systems are discussed.

Book ChapterDOI
King-Sun Fu1
TL;DR: In designing an optimal control system, if all the a priori information about the controlled process (plant-environment) is known and can be described deterministically, the optimal controller is usually designed by deterministic optimization techniques.
Abstract: In designing an optimal control system, if all the a priori information about the controlled process (plant-environment) is known and can be described deterministically, the optimal controller is usually designed by deterministic optimization techniques. If all or a part of the a priori information can only be described statistically—for example, in terms of probability distribution or density functions—then stochastic or statistical design techniques will be used. However, if the a priori information required is unknown or incompletely known, in general an optimal design cannot be achieved. Two different approaches have been taken to solve this class of problems. One approach is to design a controller based only upon the amount of information available. In that case the unknown information is either ignored or is assumed to take on some known values chosen according to the designer's best guess. The second approach is to design a controller which is capable of estimating the unknown information during its operation and of determining an optimal control action on the basis of the estimated information. In the first case a rather conservative design criterion (for example, the minimax criterion) is often used; the systems designed are in general inefficient and suboptimal. In the second case, if the estimated information gradually approaches the true information as time proceeds, then the controller thus designed will approach the optimal controller.