
Showing papers in "IEEE Transactions on Automatic Control in 1972"


Journal ArticleDOI
TL;DR: Time Series Analysis: Forecasting and Control, by George E. P. Box and Gwilym M. Jenkins.

10,118 citations


Journal ArticleDOI
TL;DR: In this paper an approximation that permits the explicit calculation of the a posteriori density from the Bayesian recursion relations is discussed and applied to the solution of the nonlinear filtering problem.
Abstract: Knowledge of the probability density function of the state conditioned on all available measurement data provides the most complete possible description of the state, and from this density any of the common types of estimates (e.g., minimum variance or maximum a posteriori) can be determined. Except in the linear Gaussian case, it is extremely difficult to determine this density function. In this paper an approximation that permits the explicit calculation of the a posteriori density from the Bayesian recursion relations is discussed and applied to the solution of the nonlinear filtering problem. In particular, it is noted that a weighted sum of Gaussian probability density functions can be used to approximate another density function arbitrarily closely. This representation provides the basis for a procedure that is developed and discussed.

1,267 citations
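The mixture-update idea above can be sketched in a few lines. In this sketch the state is scalar and the measurement model z = x + v, v ~ N(0, R), as well as all the numbers, are illustrative assumptions, not the paper's examples; for a linear measurement each Gaussian component admits an exact Kalman-style update, and the component weights are re-scaled by the predictive likelihood of the measurement.

```python
import math

def gaussian_pdf(x, mean, var):
    """Evaluate a scalar Gaussian density."""
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def gaussian_sum_update(components, z, meas_var):
    """One Bayesian measurement update for a scalar Gaussian-sum prior.

    components: list of (weight, mean, variance) triples approximating p(x).
    Assumed measurement model: z = x + v with v ~ N(0, meas_var).
    Each component gets an exact linear-Gaussian (Kalman) update; its weight
    is re-scaled by the predictive likelihood of z under that component.
    """
    updated, likelihoods = [], []
    for w, m, p in components:
        s = p + meas_var                    # innovation variance
        k = p / s                           # Kalman gain for this component
        updated.append((m + k * (z - m), (1 - k) * p))
        likelihoods.append(w * gaussian_pdf(z, m, s))
    total = sum(likelihoods)
    return [(l / total, m, p) for l, (m, p) in zip(likelihoods, updated)]

# A bimodal prior approximated by two Gaussians, updated with z = 1.8
prior = [(0.5, -2.0, 1.0), (0.5, 2.0, 1.0)]
posterior = gaussian_sum_update(prior, z=1.8, meas_var=0.5)
```

After the update the weight concentrates on the component near the measurement, illustrating how the mixture tracks a non-Gaussian posterior that a single Gaussian would smear out.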


Journal ArticleDOI

1,013 citations


Journal ArticleDOI
TL;DR: In this article, different methods of adaptive filtering are divided into four categories: Bayesian, maximum likelihood (ML), correlation, and covariance matching, and the relationship between the methods and the difficulties associated with each method are described.
Abstract: The different methods of adaptive filtering are divided into four categories: Bayesian, maximum likelihood (ML), correlation, and covariance matching. The relationship between the methods and the difficulties associated with each method are described. New algorithms for the direct estimation of the optimal gain of a Kalman filter are given.

789 citations
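Of the four categories, covariance matching is the simplest to illustrate: choose the noise statistics so that the theoretical innovation covariance matches the sample covariance of the observed innovations. The scalar residual-based form and the numbers below are assumptions of this sketch, not the paper's algorithms.

```python
def covariance_match_R(innovations, hph):
    """Covariance-matching estimate of a scalar measurement-noise variance R.

    The theoretical innovation variance is S = H*P*H' + R.  Covariance
    matching picks R so that S equals the sample variance of the observed
    (zero-mean) innovations; `hph` is the predicted H*P*H' term supplied
    by the filter.
    """
    n = len(innovations)
    sample_var = sum(v * v for v in innovations) / n
    return max(sample_var - hph, 0.0)   # clamp: a variance cannot be negative

# Innovations with sample variance 1.0, and H*P*H' = 0.4 from the filter
r_hat = covariance_match_R([1.0, -1.0, 1.0, -1.0], hph=0.4)
```

In a running filter this estimate would be recomputed over a sliding window of innovations, letting R adapt when the assumed noise statistics are wrong.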


Journal ArticleDOI
TL;DR: Guaranteed cost control is a method of synthesizing a closed-loop system in which the controlled plant has large parameter uncertainty as mentioned in this paper, and it can be incorporated into an adaptive system by either online measurement and evaluation or prior knowledge on the parametric dependence of a certain easily measured situation parameter.
Abstract: Guaranteed cost control is a method of synthesizing a closed-loop system in which the controlled plant has large parameter uncertainty. This paper gives the basic theoretical development of guaranteed cost control and shows how it can be incorporated into an adaptive system. The uncertainty in system parameters is first reduced by either: 1) on-line measurement and evaluation, or 2) prior knowledge of the parametric dependence on a certain easily measured situation parameter. Guaranteed cost control is then used to take up the residual uncertainty. It is shown that the uncertainty in system parameters can be taken care of by an additional term in the Riccati equation. A Fortran program for computing the guaranteed cost matrix and control law is developed and applied to an airframe control problem with large parameter variations.

688 citations


Journal ArticleDOI
TL;DR: Equivalence relations in information and in control functions among different systems are developed; these relations aid in solving many general problems by relating their solutions to those of systems with "perfect memory".
Abstract: General dynamic team decision problems with linear information structures and quadratic payoff functions are studied. The primitive random variables are jointly Gaussian. No constraints on the information structures are imposed except causality. Equivalence relations in information and in control functions among different systems are developed. These equivalence relations aid in the solving of many general problems by relating their solutions to those of the systems with "perfect memory." The latter can be obtained by the method derived in Part I. A condition is found which enables each decision maker to infer the information available to his precedents, while at the same time the controls which will affect the information assessed can be proven optimal. When this condition fails, upper and lower bounds of the payoff function can still be obtained systematically, and suboptimal controls can be obtained.

677 citations



Journal ArticleDOI
TL;DR: In this paper, a necessary and sufficient condition for the optimal control and filtering problem to yield an optimal asymptotically stable closed-loop system is given, which involves the concepts of stabilizability and detectability.
Abstract: The well-known matrix algebraic equation of the optimal control and filtering theory is considered. A necessary and sufficient condition for its solution to yield an optimal as well as asymptotically stable closed-loop system is given. The condition involves the concepts of stabilizability and detectability.

322 citations
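When the stabilizability and detectability conditions hold, the stabilizing solution of this matrix algebraic (Riccati) equation can be computed from the stable eigenspace of the associated Hamiltonian matrix. The double-integrator example and the eigenvector method below are illustrative choices, not taken from the paper.

```python
import numpy as np

def solve_care(A, B, Q, R):
    """Stabilizing solution of A'P + P A - P B R^{-1} B' P + Q = 0,
    built from the n stable eigenvectors of the Hamiltonian matrix."""
    n = A.shape[0]
    BRB = B @ np.linalg.inv(R) @ B.T
    H = np.block([[A, -BRB], [-Q, -A.T]])
    eigvals, eigvecs = np.linalg.eig(H)
    stable = eigvecs[:, eigvals.real < 0]       # spans the stable subspace
    X1, X2 = stable[:n, :], stable[n:, :]
    P = np.real(X2 @ np.linalg.inv(X1))
    return (P + P.T) / 2                        # symmetrize numerical noise

# Double integrator: (A, B) stabilizable, (A, Q^{1/2}) detectable
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
P = solve_care(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)                 # optimal state-feedback gain
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```

For this example the exact solution is P = [[sqrt(3), 1], [1, sqrt(3)]], and the closed-loop eigenvalues of A - BK lie in the open left half-plane, consistent with the stabilizability/detectability condition.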


Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of feedback control of a time-invariant uncertain system subject to state constraints over an infinite-time interval and study the behavior of the region of n -step reachability as n tends to infinity.
Abstract: In this paper we consider some aspects of the problem of feedback control of a time-invariant uncertain system subject to state constraints over an infinite-time interval. The central question that we investigate is under what conditions can the state of the uncertain system be forced to stay in a specified region of the state space for all times by using feedback control. At the same time we study the behavior of the region of n -step reachability as n tends to infinity. It is shown that in general this region may exhibit instability as we pass to the limit, and that under a compactness assumption this region converges to a steady state. A special case involving a linear finite-dimensional system is examined in more detail. It is shown that there exist ellipsoidal regions in state space where the state can be confined by making use of a linear time-invariant control law, provided that the system is stabilizable. Such control laws can be calculated efficiently through the solution of a recursive matrix equation of the Riccati type.

289 citations



Journal ArticleDOI
TL;DR: In this article, the authors considered a linear system with a quadratic cost function, which is a weighted sum of the integral square regulation error and the integral square input, and showed that the necessary and sufficient condition for reducing the regulation error to zero is that the number of inputs be at least as large as the number of controlled variables, and that the system possess no right-half plane zeros.
Abstract: A linear system with a quadratic cost function, which is a weighted sum of the integral square regulation error and the integral square input, is considered. What happens to the integral square regulation error as the relative weight of the integral square input reduces to zero is investigated. In other words, what is the maximum accuracy one can achieve when there are no limitations on the input? It turns out that the necessary and sufficient condition for reducing the regulation error to zero is that 1) the number of inputs be at least as large as the number of controlled variables, and 2) the system possess no right-half plane zeros. These results are also "dualized" to the optimal filtering problem.

Journal ArticleDOI
TL;DR: In this article, necessary and sufficient conditions are derived for a minimal order linear time-invariant differential feedback control system to exist for a linear time invariant multivariable system with unmeasurable arbitrary disturbances of a given class occurring in it.
Abstract: Necessary and sufficient conditions are derived for a minimal order linear time-invariant differential feedback control system to exist for a linear time-invariant multivariable system with unmeasurable arbitrary disturbances of a given class occurring in it, such that the outputs of the system asymptotically become equal to preassigned functions of a given class of outputs, independent of the disturbances occurring in the system, and such that the closed-loop system is controllable. The feedback gains of the control system are obtained so that the dynamic behavior of the closed-loop system is specified by using either an integral quadratic optimal control approach or a pole assignment approach. The result may be interpreted as being a generalization of the single-input, single-output servomechanism problem to multivariable systems or as being a solution to the asymptotic decoupling problem.

Journal ArticleDOI
TL;DR: A strategy suggested by Stackelberg for static economic competition is considered and extended to the case of dynamic games with biased information pattern, and necessary conditions for open-loop Stackelberg strategies are presented.
Abstract: A strategy suggested by Stackelberg for static economic competition is considered and extended to the case of dynamic games with biased information pattern. This strategy is reasonable when one of the players knows only his own cost function but the other player knows both cost functions. As with Nash strategies for nonzero-sum dynamic games, open-loop and feedback Stackelberg strategies could lead to different solutions, a phenomenon which does not occur in optimum control problems. Necessary conditions for open-loop Stackelberg strategies are presented. Dynamic programming is used to define feedback Stackelberg strategies for discrete-time games. A simple resource allocation example illustrates the solution concept.
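The static flavor of a Stackelberg solution can be shown with made-up quadratic costs (this is not the paper's resource allocation example): the leader, who knows both cost functions, minimizes his own cost along the follower's best-response curve.

```python
def follower_best_response(u):
    """Follower minimizes (v - u)^2 given the leader's move u, so v* = u.
    (Illustrative cost, chosen for a closed-form response.)"""
    return u

def leader_cost(u):
    """Leader's cost (u - 1)^2 + v^2, evaluated along the follower's
    anticipated best response v = u (again, an illustrative cost)."""
    v = follower_best_response(u)
    return (u - 1.0) ** 2 + v ** 2

# Leader optimizes anticipating the follower; here by a simple grid search.
# Analytically: minimize 2u^2 - 2u + 1, giving u* = 0.5.
u_star = min((k / 1000.0 for k in range(-2000, 2001)), key=leader_cost)
```

The asymmetry is the point: the follower reacts to the announced u, while the leader folds that reaction into his own optimization, which in general differs from the Nash outcome of the same costs.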




Journal ArticleDOI
TL;DR: The behavior of the solution of the Riccati equation for the linear regulator problem with a parameter whose perturbation changes the order of the system is analyzed in this article, where sufficient conditions are given under which the solution of the original problem tends to that of a low-order problem.
Abstract: The behavior of the solution of the Riccati equation for the linear regulator problem with a parameter whose perturbation changes the order of the system is analyzed. Sufficient conditions are given under which the solution of the original problem tends to the solution of a low-order problem. This result can be used for the decomposition of a high-order problem into two low-order problems.

Journal ArticleDOI
TL;DR: In this paper, the problem of designing an observer to estimate a linear function of the state of a linear system, for the purpose of implementing a feedback control law is considered, and a procedure for constructing the observer and an algorithm for determining minimal order is outlined.
Abstract: This paper considers the problem of designing an observer to estimate a linear function of the state of a linear system, for the purpose of implementing a feedback control law. In the single-output case a necessary and sufficient condition is found for the existence of an observer of given order and pole configuration. A procedure is stated for constructing the observer, and an algorithm for determining minimal order is outlined. The multi-output case is reduced via a canonical form to an output-coupled set of single-output systems which can be treated as above. Observers derived using these procedures are generally of lower order than those of Luenberger, and the restriction that plant and observer have no common poles is unnecessary.
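A full-order Luenberger-style observer (simpler than the minimal-order functional observers of the paper, and with illustrative numbers) shows the basic mechanism: the estimation error e = x - x̂ obeys ė = (A - LC)e, so placing the eigenvalues of A - LC in the left half-plane drives the estimate to the state.

```python
import numpy as np

# Double-integrator plant with a position measurement (illustrative choice)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
# Gain L placing the eigenvalues of A - L C at -2 and -3: the characteristic
# polynomial of [[-l1, 1], [-l2, 0]] is s^2 + l1 s + l2 = (s + 2)(s + 3)
L = np.array([[5.0], [6.0]])

x = np.array([1.0, -1.0])        # true state
xhat = np.zeros(2)               # observer state, started cold
dt = 1e-3
for _ in range(int(3.0 / dt)):   # forward-Euler simulation of plant + observer
    y = C @ x                    # measurement from the current true state
    x = x + dt * (A @ x)
    xhat = xhat + dt * (A @ xhat + L @ (y - C @ xhat))
err = np.linalg.norm(x - xhat)   # estimation error after 3 seconds
```

With observer eigenvalues at -2 and -3 the error contracts roughly like e^(-2t), so after three seconds the cold-started estimate has essentially converged to the true state.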

Journal ArticleDOI
TL;DR: In this article, necessary and sufficient conditions for a partitioned matrix to be nonnegative definite and positive definite are derived in terms of its four submatrices.
Abstract: Necessary and sufficient conditions for a partitioned matrix to be nonnegative definite and positive definite are derived in terms of its four submatrices.
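The classical form of such a condition is the Schur-complement test: the symmetric block matrix [[A, B], [B', C]] is positive definite iff A is positive definite and the Schur complement C - B' A^{-1} B is positive definite. A numerical sketch (the test matrices are made up):

```python
import numpy as np

def is_pos_def(M):
    """Positive definiteness of a symmetric matrix, checked via Cholesky."""
    try:
        np.linalg.cholesky(M)
        return True
    except np.linalg.LinAlgError:
        return False

def block_pos_def(A, B, C):
    """Schur-complement test: [[A, B], [B', C]] > 0 iff
    A > 0 and C - B' A^{-1} B > 0."""
    return is_pos_def(A) and is_pos_def(C - B.T @ np.linalg.inv(A) @ B)

A = np.array([[2.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0], [0.0]])
C_good = np.array([[1.0]])    # Schur complement 1 - 1/2 = 1/2 > 0
C_bad = np.array([[0.4]])     # Schur complement 0.4 - 1/2 < 0
M_good = np.block([[A, B], [B.T, C_good]])   # assembled for cross-checking
```

The test on the submatrices agrees with a direct Cholesky factorization of the assembled matrix, which is the practical appeal: the blocks are often much smaller than the whole.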




Journal ArticleDOI
TL;DR: In this article, a new approach to the exact model matching problem is given based on an algorithm for characterizing the input-output structural properties of a linear system, which is solved without recourse to initial coordinate transformations.
Abstract: A new approach to the exact model matching problem is given based on an algorithm for characterizing the input-output structural properties of a linear system. In contrast to previous methods, the state feedback matching problem is solved without recourse to initial coordinate transformations. Moreover, the algorithm given here extends directly to the dynamic model matching problem and yields a set of necessary and sufficient conditions for one system to be transfer function equivalent via dynamic state feedback to a specified model system.


Journal ArticleDOI
TL;DR: In this article, the gradient of the cost function with respect to the independent variables, called the generalized gradient, is calculated by solving a set of equations similar to the Euler-Lagrange equations.
Abstract: The steepest descent methods of Bryson and Ho [1] and Kelly [6] and the conjugate gradient method of Lasdon, Mitter, and Waren [3] use control variables as the independent variables in the search procedure. The inequality constraints are often handled via penalty functions which result in poor convergence. Special difficulties are encountered in handling state variable inequality constraints and singular arcs [1]. This paper shows that these difficulties arise due to the exclusive use of control variables as the independent variables in the search procedure. An algorithm based on the generalized reduced gradient (GRG) algorithm of Abadie and Carpentier [5] and Abadie [7] for nonlinear programming is proposed to solve these problems. The choice of the independent variables in this algorithm is dictated by the constraints on the problem and could result in different combinations of state and control variables as independent variables along different parts of the trajectory. The gradient of the cost function with respect to the independent variables, called the generalized gradient, is calculated by solving a set of equations similar to the Euler-Lagrange equations. The directions of search are determined using gradient projection and the conjugate gradient method. Two numerical examples involving state variable inequality constraints are solved [2]. The method is then applied to two examples containing singular arcs and it is shown that these problems can be handled as regular problems by choosing some of the state variables as the independent variables. The relationship of the method to the reduced gradient method of Wolfe [4] and the generalized reduced method of Abadie [7] for nonlinear programming is shown.



Journal ArticleDOI
TL;DR: In this paper, a sufficient condition for asymptotic stability in the large is proposed for nonlinear systems, which is applicable if the system in question can be decomposed into subsystems, if appropriate Lyapunov functions are obtained for the subsystems and if the connections between subsystems have bounded dc gains.
Abstract: A sufficient condition for asymptotic stability in the large is proposed for nonlinear systems. It is applicable if the system in question can be decomposed into subsystems, if appropriate Lyapunov functions are obtained for the subsystems, and if the connections between subsystems have bounded dc gains. It is generally less restrictive than the condition previously presented by Bailey for similar systems. An estimate of transient behavior, together with the stability condition, is also given.
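One way to picture a condition of this type (the numbers and the exact test are illustrative, not Bailey's condition or the paper's): collect each subsystem's Lyapunov decay rate and the interconnection dc gains into a small comparison matrix, and require that matrix to be Hurwitz, i.e. each subsystem's own decay dominates the coupling it receives.

```python
import numpy as np

# Two subsystems with Lyapunov decay rates a_i, coupled through
# interconnections with dc gains g_ij (all numbers illustrative).
decay = [2.0, 3.0]
gains = [[0.0, 0.5],
         [1.0, 0.0]]
T = np.array([[-decay[0], gains[0][1]],
              [gains[1][0], -decay[1]]])
is_stable = bool(all(np.linalg.eigvals(T).real < 0))   # Hurwitz comparison matrix

# Stronger coupling violates the condition:
T_strong = np.array([[-decay[0], 10.0], [1.0, -decay[1]]])
is_stable_strong = bool(all(np.linalg.eigvals(T_strong).real < 0))
```

The appeal of such decomposition conditions is that only low-order subsystem Lyapunov functions and scalar gain bounds are needed, never a Lyapunov function for the full interconnected system.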

Journal ArticleDOI
TL;DR: In this paper, a precise definition of identifiability of a parameter is given in terms of consistency in probability for the parameter estimate, and necessary and sufficient conditions for the unknown parameter to be identifiable are established.
Abstract: A precise definition of identifiability of a parameter is given in terms of consistency in probability for the parameter estimate. Under some mild uniformity assumptions on the conditional density parameterized by the unknown parameter, necessary and sufficient conditions for the unknown parameter to be identifiable are established. The assumptions and identifiability criteria are expressed in terms of the density of individual observations, conditioned upon all past observations. The results are applied to linear system identification problems.