
Showing papers in "IEEE Transactions on Automatic Control in 1966"


Journal ArticleDOI
TL;DR: In this article, it was shown that the design of an observer for a system with m outputs can be reduced to the design of m separate observers for single-output subsystems.
Abstract: Often in control design it is necessary to construct estimates of state variables which are not available by direct measurement. If a system is linear, its state vector can be approximately reconstructed by building an observer which is itself a linear system driven by the available outputs and inputs of the original system. The state vector of an nth-order system with m independent outputs can be reconstructed with an observer of order n-m . In this paper it is shown that the design of an observer for a system with m outputs can be reduced to the design of m separate observers for single-output subsystems. This result is a consequence of a special canonical form developed in the paper for multiple-output systems. In the special case of reconstruction of a single linear functional of the unknown state vector, it is shown that a great reduction in observer complexity is often possible. Finally, the application of observers to control design is investigated. It is shown that an observer's estimate of the system state vector can be used in place of the actual state vector in linear or nonlinear feedback designs without loss of stability.

1,611 citations
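The observer idea in this entry can be sketched in a few lines. The following is an illustrative discrete-time toy example with made-up numbers (the paper treats continuous-time systems): the observer is itself a linear system driven by the plant's output, and its estimate converges to the true state.

```python
import numpy as np

# Discrete-time sketch (numbers are mine, not the paper's): observer
# xh_{k+1} = A xh_k + L (y_k - C xh_k), driven only by the measured output.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # unforced plant x_{k+1} = A x_k
C = np.array([[1.0, 0.0]])          # single measured output y_k = C x_k
L = np.array([[2.0],
              [1.0]])               # places both eigenvalues of A - L C at 0

x = np.array([[1.0], [-0.5]])       # true state (unknown to the observer)
xh = np.zeros((2, 1))               # observer's estimate
for _ in range(3):
    y = C @ x                       # only the output is measured
    xh = A @ xh + L @ (y - C @ xh)  # estimate corrected by the output error
    x = A @ x
print(np.linalg.norm(x - xh))       # deadbeat gain: error reaches zero
```

The estimation error obeys e_{k+1} = (A - LC) e_k, so the designer shapes convergence entirely through the gain L.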



Journal ArticleDOI
TL;DR: In this article, the authors outline a stability theory for input-output problems using functional methods and derive open loop conditions for the boundedness and continuity of feedback systems, without, at the beginning, placing restrictions on linearity or time invariance.
Abstract: The object of this paper is to outline a stability theory for input-output problems using functional methods. More particularly, the aim is to derive open loop conditions for the boundedness and continuity of feedback systems, without, at the beginning, placing restrictions on linearity or time invariance. It will be recalled that, in the special case of a linear time invariant feedback system, stability can be assessed using Nyquist's criterion; roughly speaking, stability depends on the amounts by which signals are amplified and delayed in flowing around the loop. An attempt is made here to show that similar considerations govern the behavior of feedback systems in general-that stability of nonlinear time-varying feedback systems can often be assessed from certain gross features of input-output behavior, which are related to amplification and delay. This paper is divided into two parts: Part I contains general theorems, free of restrictions on linearity or time invariance; Part II, which will appear in a later issue, contains applications to a loop with one nonlinear element. There are three main results in Part I, which follow the introduction of concepts of gain, conicity, positivity, and strong positivity: THEOREM 1: If the open loop gain is less than one, then the closed loop is bounded. THEOREM 2: If the open loop can be factored into two, suitably proportioned, conic relations, then the closed loop is bounded. THEOREM 3: If the open loop can be factored into two positive relations, one of which is strongly positive and has finite gain, then the closed loop is bounded. Results analogous to Theorems 1-3, but with boundedness replaced by continuity, are also obtained.

1,309 citations
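Theorem 1 of this entry (open-loop gain less than one implies a bounded closed loop) has a simple numerical illustration. The loop below is my own toy construction: when the product of the two elements' gains is below one, the loop equation is a contraction, so iterating it converges to a unique bounded solution.

```python
import numpy as np

# Toy feedback loop e = u - H2(H1(e)) (my own example, not the paper's):
# incremental gains 0.7 and 0.9 give open-loop gain 0.63 < 1, so the loop
# map is a contraction and the fixed-point iteration converges.
H1 = lambda e: 0.7 * np.tanh(e)   # nonlinear element, incremental gain <= 0.7
H2 = lambda v: 0.9 * v            # linear element, gain 0.9
u = 2.0                           # bounded input
e = 0.0
for _ in range(100):
    e = u - H2(H1(e))             # signal flowing around the loop
print(e)                          # unique bounded loop solution
```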


Journal ArticleDOI
TL;DR: In this article, an iterative method is proposed for the identification of nonlinear systems from samples of inputs and outputs in the presence of noise, which consists of a no-memory gain (of an assumed polynomial form) followed by a linear discrete system.
Abstract: An iterative method is proposed for the identification of nonlinear systems from samples of inputs and outputs in the presence of noise. The model used for the identification consists of a no-memory gain (of an assumed polynomial form) followed by a linear discrete system. The parameters of the pulse transfer function of the linear system and the coefficients of the polynomial non-linearity are alternately adjusted to minimize a mean square error criterion. Digital computer simulations are included to demonstrate the feasibility of the technique.

707 citations
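The alternating adjustment described in this entry can be sketched with a small noise-free instance. All numbers below are mine: a two-term polynomial gain followed by a two-tap FIR linear part stands in for the pulse transfer function, and the two least-squares fits are alternated until the mean square error vanishes.

```python
import numpy as np

# Hammerstein-type identification sketch (my own toy instance): a no-memory
# polynomial gain followed by a linear discrete (FIR) system, fit by
# alternating least squares on noise-free input/output samples.
rng = np.random.default_rng(0)
u = rng.standard_normal(400)
poly = lambda a, x: a[0] * x + a[1] * x**2         # no-memory polynomial gain
true_a, true_h = np.array([1.0, 0.5]), np.array([1.0, 0.6])
y = np.convolve(poly(true_a, u), true_h)[:len(u)]  # output samples

a, h = np.array([1.0, 0.0]), np.array([1.0, 0.0])
for _ in range(50):
    # hold the polynomial fixed; least-squares fit of the linear part
    v = poly(a, u)
    V = np.column_stack([v, np.r_[0.0, v[:-1]]])
    h = np.linalg.lstsq(V, y, rcond=None)[0]
    # hold the linear part fixed; least-squares fit of the polynomial
    U = np.column_stack([np.convolve(u**m, h)[:len(u)] for m in (1, 2)])
    a = np.linalg.lstsq(U, y, rcond=None)[0]
    h, a = h * a[0], a / a[0]                      # remove the gain ambiguity
print(a, h)                                        # approaches [1, 0.5], [1, 0.6]
```

The rescaling in the last line of the loop fixes the gain ambiguity inherent in the cascade (any constant can be moved between the polynomial and the linear part without changing the input-output map).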


Journal ArticleDOI
TL;DR: In this paper, the authors outline a stability theory based on functional methods and derive several stability conditions, including Popov's condition, under various restrictions on the nonlinearity N ; the following cases are treated: (i) N is instantaneously inside a sector and is memoryless and time-invariant.
Abstract: The object of this paper is to outline a stability theory based on functional methods. Part I of the paper was devoted to a general feedback configuration. Part II is devoted to a feedback system consisting of two elements, one of which is linear time-invariant, and the other nonlinear. An attempt is made to unify several stability conditions, including Popov's condition, into a single principle. This principle is based on the concepts of conicity and positivity, and provides a link with the notions of gain and phase shift of the linear theory. Part II draws on the (generalized) notion of a "sector non-linearity." A nonlinearity N is said to be INSIDE THE SECTOR \{\alpha, \beta\} if it satisfies an inequality of the type \langle (Nx-\alpha x)_{t}, (Nx-\beta x)_{t} \rangle \leq 0 . If N is memoryless and is characterized by a graph in the plane, then this simply means that the graph lies inside a sector of the plane. However, the preceding definition extends the concept to include nonlinearities with memory. There are two main results. The first result, the CIRCLE THEOREM, asserts in part that: If the nonlinearity is inside a sector \{\alpha, \beta\} , and if the frequency response of the linear element avoids a "critical region" in the complex plane, then the closed loop is bounded; if \alpha > 0 then the critical region is a disk whose center is halfway between the points -1/\alpha and -1/\beta , and whose diameter is greater than the distance between these points. The second result is a method for taking into account the detailed properties of the nonlinearity to get improved stability conditions. This method involves the removal of a "multiplier" from the linear element. The frequency response of the linear element is modified by the removal, and, in effect, the size of the critical region is reduced.
Several conditions, including Popov's condition, are derived by this method, under various restrictions on the nonlinearity N ; the following cases are treated: (i) N is instantaneously inside a sector {\alpha, \beta} . (ii) N satisfies (i) and is memoryless and time-invariant. (iii) N satisfies (ii) and has a restricted slope.

657 citations
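The geometry of the circle theorem in this entry is easy to check numerically. The example below is my own (both the linear element and the sector bounds are made-up numbers, and the critical disk is taken through -1/alpha and -1/beta): the frequency response is sampled and its clearance from the disk is verified.

```python
import numpy as np

# Circle-theorem geometry check (my own example system and sector):
# the nonlinearity is inside the sector {alpha, beta} with alpha > 0,
# so the critical region is a disk between -1/alpha and -1/beta.
alpha, beta = 0.5, 2.0
center = -0.5 * (1 / alpha + 1 / beta)      # halfway between -1/alpha, -1/beta
radius = 0.5 * (1 / alpha - 1 / beta)
w = np.logspace(-2, 2, 2000)                # frequency grid
G = 1.0 / ((1j * w)**2 + 2 * (1j * w) + 2)  # linear element's frequency response
clearance = np.abs(G - center).min() - radius
print(clearance > 0)                        # True: locus avoids the critical disk
```

Since the Nyquist locus stays clear of the disk, the theorem predicts a bounded closed loop for every nonlinearity inside this sector.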


Journal ArticleDOI
TL;DR: A method is proposed for reducing large matrices by constructing a matrix of lower order which has the same dominant eigenvalues and eigenvectors as the original system.
Abstract: Often it is possible to represent physical systems by a number of simultaneous linear differential equations with constant coefficients, \dot{x} = Ax + r but for many processes (e.g., chemical plants, nuclear reactors), the order of the matrix A may be quite large, say 50×50, 100×100, or even 500×500. It is difficult to work with these large matrices and a means of approximating the system matrix by one of lower order is needed. A method is proposed for reducing such matrices by constructing a matrix of lower order which has the same dominant eigenvalues and eigenvectors as the original system.

614 citations
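One standard way to realize the reduction described in this entry is a modal projection onto the retained eigenvectors. The numbers below are mine: a 3x3 matrix with eigenvalues -1, -2, -50 is reduced to a 2x2 matrix that keeps the two dominant (slow) modes exactly.

```python
import numpy as np

# Modal reduction sketch (my own 3x3 example standing in for a large matrix):
# project A onto the eigenvectors of its dominant eigenvalues.
T = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
A = T @ np.diag([-1., -2., -50.]) @ np.linalg.inv(T)   # "large" system matrix

vals, vecs = np.linalg.eig(A)
keep = np.argsort(np.abs(vals))[:2]        # dominant = slowest eigenvalues
V = vecs[:, keep]                          # their right eigenvectors
Ar = np.linalg.pinv(V) @ A @ V             # reduced 2x2 matrix
print(np.sort(np.linalg.eigvals(Ar).real))  # [-2., -1.]: dominant modes kept
```

Because A V = V diag(-1, -2) for the retained eigenvectors, the reduced matrix reproduces the dominant eigenvalues exactly; only the discarded fast mode at -50 is lost.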


Journal ArticleDOI
TL;DR: The theory of optimal control is used to design an optimal linear feedback system which regulates the position and velocity of every vehicle in a densely packed string of high-speed moving vehicles.
Abstract: This paper uses the theory of optimal control to design an optimal linear feedback system which regulates the position and velocity of every vehicle in a densely packed string of high-speed moving vehicles. In addition to the general theoretical formulation and solution of the optimization problem, analog computer simulation results are presented for the case of a string of three vehicles.

448 citations
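The optimal-linear-regulator machinery behind this entry can be sketched for a single vehicle. The model and weights below are my own toy setup, not the paper's vehicle-string dynamics: position/velocity error relative to the vehicle ahead is regulated by the feedback gain obtained from the discrete Riccati recursion.

```python
import numpy as np

# LQR sketch (my own toy model): one vehicle's position/velocity error,
# regulated by the gain from an iterated discrete Riccati recursion.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])    # double-integrator error dynamics
B = np.array([[0.0], [dt]])              # acceleration command input
Q = np.diag([1.0, 0.1])                  # weight on position/velocity error
R = np.array([[0.01]])                   # weight on control effort

P = Q.copy()
for _ in range(500):                     # backward Riccati iteration
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @K)
print(np.abs(np.linalg.eigvals(A - B @ K)).max())  # < 1: closed loop stable
```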


Journal ArticleDOI
TL;DR: An alternative synthesis based on Liapunov's second method is suggested here, and is applied to the redesign of adaptive loops considered by some other authors who have all used the M.I.T. rule.
Abstract: The model reference adaptive control system has proved very popular on account of a ready-made, but heuristically based, rule for synthesizing the adaptive loops-the so-called "M.I.T. rule." A theoretical analysis of loops so designed is generally very difficult, but analyses of quite simple systems do show that instability is possible for certain system inputs. An alternative synthesis based on Liapunov's second method is suggested here, and is applied to the redesign of adaptive loops considered by some other authors who have all used the M.I.T. rule. Derivatives of model-system error are sometimes required, but may be avoided in gain adjustment schemes if the system transfer function is "positive real," using a lemma due to Kalman. This paper amplifies and extends the work of Butchart and Shackcloth reported at the IFAC (Teddington) Symposium, September, 1965.

439 citations
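A scalar sketch shows how a Liapunov-based redesign of the adaptive loop works. Everything below is my own construction (plant, model, gains): the adjustment law dtheta/dt = -gamma*e*u makes V = e^2/2 + k*(theta - k0/k)^2/(2*gamma) non-increasing, since dV/dt = -e^2, so the loop cannot go unstable the way an M.I.T.-rule loop can.

```python
import numpy as np

# Lyapunov-rule adaptation sketch (my own scalar example):
# plant  dy/dt  = -y + k*theta*u,   model  dym/dt = -ym + k0*u,
# gain adjustment dtheta/dt = -gamma*e*u with e = y - ym.
k, k0, gamma, dt = 2.0, 1.0, 5.0, 1e-3
y = ym = theta = 0.0
for i in range(int(40.0 / dt)):
    u = np.sign(np.sin(0.5 * i * dt))    # persistently exciting reference
    e = y - ym
    y += dt * (-y + k * theta * u)       # forward-Euler integration
    ym += dt * (-ym + k0 * u)
    theta += dt * (-gamma * e * u)       # Lyapunov rule, not the M.I.T. rule
print(theta)                             # approaches k0 / k = 0.5
```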



Journal ArticleDOI
TL;DR: In this article, a class of continuous time systems with part continuous, part discrete state is described by differential equations combined with multistable elements, where transitions of these elements between their discrete states are triggered by the continuous part of the state and not directly by inputs.
Abstract: A class of continuous time systems with part continuous, part discrete state is described by differential equations combined with multistable elements. Transitions of these elements between their discrete states are triggered by the continuous part of the state and not directly by inputs. The dynamic behavior of such systems, in response to piecewise continuous inputs, is defined under suitable assumptions. A general Mayer-type optimization problem is formulated. Conditions are given for a solution to be well-behaved, so that variational methods can be applied. Necessary conditions for optimality are stated and the jump conditions are interpreted geometrically.

261 citations


Journal ArticleDOI
H. Heffes
TL;DR: In this paper, a recursive equation is derived for the actual covariance matrix of the estimation error when the filter design is based upon erroneous models; the derived equation can also be used to obtain the covariance matrix of the estimation error when the optimal filter gains are approximated by simple functions of time in a real-time filtering application.
Abstract: The optimal filtering equations, as derived by Kalman [1], [2], require the specification of a number of models for a given application. This paper concerns itself with the effect of errors in the assumed models on the filter response. The types of errors considered are those in the covariance of the initial state vector, the covariance of the stochastic inputs to the system, and the covariance of the uncorrelated measurement noise. Presented here is a derivation of a recursive equation for the actual covariance matrix of the estimation error when the filter design is based upon erroneous models. The derived equation can also be used to obtain the covariance matrix of the estimation error when the optimal filter gains are approximated by simple functions of time to be used in a real-time filtering application. A numerical example illustrates the use of the derived equations.
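The flavor of this entry's recursion can be shown with a scalar example (my own numbers): the filter's assumed covariance, which determines the gain, is propagated side by side with the actual error covariance under the true noise statistics, and the mismatch degrades performance.

```python
# Scalar sketch (my own numbers): propagate the filter's assumed covariance
# (which sets the gain) and the actual error covariance in parallel, for the
# system x_{k+1} = a x_k + w,  y_k = x_k + v.
def error_covariance(q_model, a=0.9, q_true=1.0, r=1.0, steps=200):
    p_model = p_actual = 1.0
    for _ in range(steps):
        p_model = a * a * p_model + q_model           # assumed prior covariance
        gain = p_model / (p_model + r)                # gain based on the model
        p_model = (1.0 - gain) * p_model              # assumed posterior
        p_actual = a * a * p_actual + q_true          # actual prior covariance
        p_actual = (1 - gain)**2 * p_actual + gain * gain * r  # actual posterior
    return p_actual

print(error_covariance(0.1), error_covariance(1.0))   # mismatched vs. matched
```

With the input-noise covariance understated (0.1 instead of the true 1.0), the gain is too small and the actual steady-state error covariance exceeds that of the correctly designed filter.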

Journal ArticleDOI
L. Silverman
TL;DR: In this article, the problem of transforming a single-input, single-output, time-variable linear system to phase-variable form is considered, and specific criteria for the existence of an equivalent canonical form are derived together with a method for obtaining the form when it exists.
Abstract: The problem of transforming a single-input, single-output, time-variable linear system to phase-variable form is considered in this paper. Specific criteria for the existence of an equivalent canonical form are derived together with a method for obtaining the form when it exists. This method, when specialized to fixed systems, is particularly simple, and results in an easily constructed explicit form for the transforming matrix. Furthermore, the calculation of eigenvalues and eigenvectors necessary in previously published techniques is avoided.
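For the fixed-system case, the usual explicit construction goes through the controllability matrix. The numbers below are mine: the last row q of the inverse controllability matrix generates the transformation T = [q; qA; qA^2], with no eigenvalue computation required.

```python
import numpy as np

# Phase-variable transformation sketch for a fixed system (my own numbers):
# build T from the last row of the inverse controllability matrix.
A = np.array([[1.0, 2.0, 0.0],
              [3.0, -1.0, 1.0],
              [0.0, 2.0, -4.0]])
b = np.array([[1.0], [0.0], [1.0]])
n = A.shape[0]
ctrl = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
q = np.linalg.inv(ctrl)[-1]                      # last row of the inverse
T = np.vstack([q @ np.linalg.matrix_power(A, i) for i in range(n)])
Ac = T @ A @ np.linalg.inv(T)                    # phase-variable (companion) form
print(np.round(Ac, 6))                           # superdiagonal ones, last row
```

Only the last row of Ac carries the characteristic-polynomial coefficients; the rows above it are the fixed companion pattern, and T b becomes the last unit vector.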

Journal ArticleDOI
TL;DR: In this paper, the definition of sets of Euler angles is discussed, a useful tool for treating the mathematics associated with Euler angles is illustrated, and the method of determining a set of orthogonal infinitesimal rotations equivalent to nonorthogonal infinitesimal increments on a set of Euler angles is presented.
Abstract: The definition of sets of Euler angles is discussed and a useful tool for treating the mathematics associated with Euler angles is illustrated. Restricting attention to right-handed coordinate systems and positive rotations, twelve distinct but equivalent sets of Euler angles are partitioned into two subsets. The method of determining a set of orthogonal infinitesimal rotations equivalent to nonorthogonal infinitesimal increments on a set of Euler angles is illustrated. It is shown that the same solution yields expressions for the angular velocities of the final coordinate system relative to the reference coordinate system in terms of derivatives of the Euler angles. The ease with which the infinitesimal increments of one Euler set in terms of the increments of another equivalent Euler set can be determined by the symbolic technique is illustrated. This technique offers a systematic approach to error analysis of sequences of rotations.
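The angular-velocity relation described in this entry can be verified numerically for one Euler set. The convention and numbers below are my own choice (roll-pitch-yaw, i.e., a ZYX sequence): the body angular velocity obtained from the nonorthogonal Euler-angle rates is checked against a finite-difference derivative of the rotation matrix.

```python
import numpy as np

# Body angular velocity from Euler-angle rates (my own ZYX example),
# checked against the skew-symmetric matrix R^T dR/dt.
Rx = lambda a: np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
Ry = lambda a: np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
Rz = lambda a: np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
R = lambda p, t, s: Rz(s) @ Ry(t) @ Rx(p)         # roll p, pitch t, yaw s

p, t, s = 0.3, -0.2, 0.7                          # arbitrary attitude
pd, td, sd = 0.11, -0.05, 0.4                     # Euler-angle rates
# orthogonal body rates equivalent to the nonorthogonal Euler increments
w = np.array([pd - sd * np.sin(t),
              td * np.cos(p) + sd * np.cos(t) * np.sin(p),
              -td * np.sin(p) + sd * np.cos(t) * np.cos(p)])

h = 1e-6                                          # finite-difference check
Rdot = (R(p + pd * h, t + td * h, s + sd * h) - R(p, t, s)) / h
W = R(p, t, s).T @ Rdot                           # skew-symmetric omega-cross
print(np.allclose(w, [W[2, 1], W[0, 2], W[1, 0]], atol=1e-4))  # True
```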


Journal ArticleDOI
TL;DR: In this article, the authors show that, if the known sufficient conditions are violated, in certain cases instability occurs in the form of oscillations; these oscillations establish a new class of oscillators and constitute counterexamples to Aizerman's conjecture.
Abstract: In the last few years, several sufficient conditions have been found for establishing the asymptotic stability in the large of the null solution of feedback systems containing a single non-linearity. The experiments reported in this paper show that, if these sufficient conditions are violated, in certain cases instability occurs in the form of oscillations. These experiments not only establish a new class of oscillators but they also constitute counterexamples to Aizerman's conjecture. An explanation of the oscillations is proposed which uses harmonic linearization.

Journal ArticleDOI
TL;DR: This paper attempts to summarize and place in perspective some of the recent contributions to stability theory with major emphasis on work which relates closely to applications in control, circuit theory, and aerospace systems.
Abstract: This paper attempts to summarize and place in perspective some of the recent contributions to stability theory. The major emphasis is on work which relates closely to applications in control, circuit theory, and aerospace systems. The frequency domain stability criteria for nonlinear and time-varying feedback loops are discussed in some detail. A large number of references describing both theoretical developments and applications are included.

Journal ArticleDOI
TL;DR: In this paper, the sensitivity analysis of a linear time-invariant multivariable (multiple input-output) system represented by a system of first-order differential equations is presented.
Abstract: This paper consists of two main parts. The first part considers the sensitivity analysis of a linear time-invariant multivariable (multiple input-output) system represented by a system of first-order differential equations. Procedures which can be easily programmed on a digital computer are presented for the determination of: (1) the system transfer-function matrix; (2) sensitivity coefficients for the transfer-function matrix in terms of the system parameters. The second part contains a simple procedure for the synthesis of a linear single-variable (single input-output) system for specified characteristic roots. This is also shown to be a general procedure for the reduction of a system to (phase-variable) canonical form. The procedures of Part I and Part II are then applied to the synthesis of a single-variable system for insensitivity to parameter changes.

Journal ArticleDOI
TL;DR: In this article, the authors present some recent developments in the field of deterministic optimal control and apply the theory to problems of engineering interest, and a selected list of references is also included.
Abstract: The purpose of this paper is to present some recent developments in the field of deterministic optimal control. Topics dealing with advances in the theory, as well as with applications of the theory to problems of engineering interest, will be discussed. A selected list of references is also included.

Journal ArticleDOI
TL;DR: In this article, a linear operator acting on functions of time and distance is described, where the operator is represented by an infinite diagonal matrix in which the entries are functions of the Laplace transform variable s.
Abstract: This paper presents a solution to the problem of the control of a class of linear distributed systems. The system is described by a linear operator acting on functions of time and distance. It is shown that if the operator separates in time and distance and the distance operator has a real discrete spectrum (self-adjoint and completely continuous), the operator can be represented by an infinite diagonal matrix in which the entries are functions of the Laplace transform variable s . In particular, if the problem stems from separable partial differential equations, the entries are ratios of polynomials in s . The system can be compensated with a series of discrete conventional filters using techniques of conventional lumped, single loop control system design. To implement the control system, the assumption is made that the distance dependent part of the output and forcing functions have negligible eigenfunction content beyond the N th one. If this assumption holds, N sensors, N filters, and N manipulators, plus 2 matrix multipliers and N subtractors provide a synthesis of the feedback control system. An illustrative numerical example is given.

Journal ArticleDOI
R. O'Shea
TL;DR: In this article, a sufficient condition for asymptotic stability is developed for a continuous nonlinearity having a monotone bound, which appears to have considerable practical application.
Abstract: An improved sufficient condition for asymptotic stability developed for the monotone nonlinearity is \mathrm{Re}[Z(j\omega)(G(j\omega) + 1/k_{2})] \geq 0 or \mathrm{Re}[Z(j\omega)\overline{(G(j\omega) + 1/k_{2})}] \geq 0 , where Z(s) = 1 + \alpha s + Y(s) and \int_{0}^{\infty}|y(t)|\,dt < \infty . Examples of allowed Z(s) functions are considered which show that the angle of G(j\omega) + 1/k_{2} is permitted to take on both positive and negative values outside the \pm 90\deg band. The approach used in deriving the above results is a general one which reduces the problem of obtaining sufficient conditions for asymptotic stability to one of obtaining a bound for a cross-correlation function R(\tau) in terms of R(0) . As an example, a sufficient condition for asymptotic stability is developed for a continuous nonlinearity having a monotone bound which appears to have considerable practical application.

Journal ArticleDOI
TL;DR: In this paper, it was shown that minimization of integrals containing quartic or hexadic terms in the state variables leads, respectively, to cubic or quintic feedback, which is adaptive to actuator saturation.
Abstract: Just as minimization of quadratic performance criteria leads to linear feedback, so it is shown here that minimization of integrals containing quartic or hexadic terms in the state variables leads, respectively, to cubic or quintic feedback. This idea is extended to the minimization of integrals of arbitrarily higher order combinations of the state variables, which is desirable in order to impose inequality constraints upon the state variables. Such laws are shown to be adaptive to actuator saturation (including even bang-bang operation). These results are proved by exhibiting a closed-form solution of the corresponding Hamilton-Jacobi equation, which also provides a globally valid Liapunov function. Prior results of Kalman, Haussler, and Rekasius for linear plants appear as special cases. A new constructive procedure for computing the coefficients of the higher-order feedback terms is also presented, together with a numerical application which illustrates remarkable effectiveness in the reduction of overshoots as compared to optimal linear control.

Journal ArticleDOI
V. Levadi
TL;DR: In this paper, the reproducing kernel Hilbert space formulation is used to obtain the parameter estimates and the error covariance matrix in terms of the input, and the performance index is minimized by a variational procedure.
Abstract: The optimum energy-constrained and time-constrained input signal is obtained for estimating the parameters of a system. The output is corrupted by nonstationary, nonwhite additive observation noise, and the observation time is finite. The reproducing kernel Hilbert space formulation is used to obtain the parameter estimates and the error covariance matrix in terms of the input. The performance index, assumed to be a function of the error covariance matrix, is minimized by a variational procedure. A necessary condition for optimality is that the input satisfy a nonlinear Fredholm equation. An example estimates the gain of a single time constant system where the observation noise has an exponential autocorrelation function. For broadband noise, the optimum input is a portion of a sinusoid. For a noise bandwidth narrower than the system bandwidth, the optimum input switches sign as rapidly as possible, but near-optimum performance can be obtained with a relatively high frequency sinusoidal input.

Journal ArticleDOI
TL;DR: In this paper, an iterative equation based on dynamic programming for finding the most likely trajectory of a dynamic system observed through a noisy measurement system is presented; the procedure can be applied to nonlinear systems with non-Gaussian noise.
Abstract: An iterative equation based on dynamic programming for finding the most likely trajectory of a dynamic system observed through a noisy measurement system is presented; the procedure can be applied to nonlinear systems with non-Gaussian noise. It differs from the recently developed Bayesian estimation procedure in that the most likely estimate of the entire trajectory up to the present time, rather than of the present state only, is generated. It is shown that the two procedures in general yield different estimates of the present state; however, in the case of linear systems with Gaussian noise, both procedures reduce to the Kalman-Bucy filter. Illustrative examples are presented, and the present procedure is compared with the Bayesian procedure and with other estimation techniques in terms of computational requirements and applicability.
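The dynamic-programming recursion in this entry is easiest to see for a finite-state system. The two-state model below uses made-up numbers: the recursion maximizes the probability of the whole trajectory given the observations, and a brute-force search confirms the result.

```python
import numpy as np
from itertools import product

# Most-likely-trajectory estimation by dynamic programming (my own toy
# two-state example): maximize p(x_0..x_T | y_0..y_T) over all trajectories.
P = np.array([[0.8, 0.2], [0.3, 0.7]])    # state transition probabilities
L = np.array([[0.9, 0.1], [0.4, 0.6]])    # p(y | x) for observations y in {0,1}
pi = np.array([0.5, 0.5])                 # initial state probabilities
ys = [0, 1, 1, 0, 1]                      # observed sequence

logd = np.log(pi) + np.log(L[:, ys[0]])   # DP value function
back = []
for y in ys[1:]:
    step = logd[:, None] + np.log(P)      # extend each partial trajectory
    back.append(step.argmax(axis=0))      # best predecessor of each state
    logd = step.max(axis=0) + np.log(L[:, y])
path = [int(logd.argmax())]
for bp in reversed(back):                 # trace the optimal trajectory back
    path.append(int(bp[path[-1]]))
path.reverse()

def logp(xs):                             # brute-force trajectory probability
    lp = np.log(pi[xs[0]]) + np.log(L[xs[0], ys[0]])
    for a, b, y in zip(xs, xs[1:], ys[1:]):
        lp += np.log(P[a, b]) + np.log(L[b, y])
    return lp
best = max(product([0, 1], repeat=len(ys)), key=logp)
print(path == list(best))                 # True: DP found the likeliest path
```

Note the paper's distinction: this recursion estimates the whole trajectory; the per-step state of the likeliest trajectory generally differs from the state maximizing the marginal posterior at that step.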

Journal ArticleDOI
TL;DR: In this paper, the discrete maximum principle is re-derived with a requirement that is weaker than convexity, which considerably extends its applicability and has been used in the development of optimal control theory.
Abstract: Halkin has given a derivation of the discrete maximum principle using a convexity requirement. An example given in this paper shows that incorrect results may be obtained when Halkin's convexity requirement is not met. There are, however, systems that do not satisfy the convexity requirement, but for which there is still a maximum principle. The discrete maximum principle is rederived with a requirement, directional convexity, that is weaker than convexity and which considerably extends its applicability. Though convexity has appeared to be basic in the development of optimal control theory, it is only the weaker property of directional convexity which is required for much of the development.

Journal ArticleDOI
J. Sklansky
TL;DR: Current developments in learning systems for automatic control are discussed from the point of view of pattern recognition, and Markov chain theory provides an approach to modelling the dynamics of learning controllers.
Abstract: Recent developments in learning systems for automatic control are discussed from the point of view of pattern recognition. The following mathematical areas are given special attention: 1) decision theory, which produces control policies from gradually adjusted estimates of pattern probabilities, 2) trainable threshold logic, which produces control policies from networks of adjustable threshold devices, 3) stochastic approximation, which produces asymptotically optimum controllers, and 4) Markov chain theory, which provides an approach to modelling the dynamics of learning controllers. Projected applications in the following areas are discussed: process control, automated design of controllers, reliability control, numerical computation, and communication systems. A selected bibliography is included.

Journal ArticleDOI
TL;DR: In this article, a Liapunov-like method is presented for obtaining upper bounds of the probability P(x) ∈ {T\geq t/geq 0} V(X_{t})\geq\lambda, where x = x and x t is a Markov process with either discrete or continuous parameter.
Abstract: A (Liapunov-like) method is presented for obtaining upper bounds of the probability P_{x}{\sup_{T\geq t\geq 0} V(X_{t})\geq\lambda} , where x_{0} = x and x t is a Markov process with either discrete or continuous parameter, and V(\cdot) is some function. Such estimates are the quantity of greatest interest in numerous tracking, control, and reliability studies. The method involves finding suitable (stochastic) Liapunov functions. The results are also results in (what may be termed) finite-time stochastic stability. The theorems are based on some theorems of Dynkin [1]. Several illustrative examples are given.

Journal ArticleDOI
TL;DR: An algorithm is proposed for the design of "on-line" learning controllers to control a discrete stochastic plant and the subjective probabilities of applying control actions from a finite set of allowable actions using random strategy are modified through the algorithm.
Abstract: An algorithm is proposed for the design of "on-line" learning controllers to control a discrete stochastic plant. The subjective probabilities of applying control actions from a finite set of allowable actions using random strategy, after any plant-environment situation (called an "event") is observed, are modified through the algorithm. The subjective probability for the optimal action is proved to approach one with probability one for any observed event. The optimized performance index is the conditional expectation of the instantaneous performance evaluations with respect to the observed events and the allowable actions. The algorithm is described through two transformations, T_1 and T_2 . After the "ordering transformation" T_1 is applied on the estimates of the performance indexes of the allowable actions, the "learning transformation" T_2 modifies the subjective probabilities. The cases of discrete and continuous features are considered. In the latter, the Potential Function Method is employed. The algorithm is compared with a linear reinforcement scheme and computer simulation results are presented.

Journal ArticleDOI
TL;DR: In this paper, it was shown that a discrete maximum principle similar to the Pontryagin maximum principle is valid for a subclass of these problems, specifically systems with linear dynamics, convex inequality constraints and convex performance criteria.
Abstract: A certain class of discrete optimization problems is investigated using the framework of nonlinear programming. It is shown that a discrete maximum principle similar to the Pontryagin maximum principle is valid for a subclass of these problems, specifically systems with linear dynamics, convex inequality constraints and convex performance criteria. This result extends the applicability of the discrete maximum principle to a class of problems not covered by the Rozonoer-Halkin formulation.

Journal ArticleDOI
TL;DR: The problem of minimizing the ensemble average of a performance index in the presence of control noise has received substantial attention in the literature as mentioned in this paper, where the index variance is minimized while its expectation is constrained.
Abstract: The problem of minimizing the ensemble average of a performance index in the presence of control noise has received substantial attention in the literature. This work considers a generalization of viewpoint, in which the index variance is minimized while its expectation is constrained. Necessary and sufficient relations are derived for linear, time-invariant systems and disturbances having rational spectra. The open-loop, optimal-feedback solution is specified by its characteristic equation and boundary conditions for Gaussian noises and plants with distinct eigenvalues. The canonic structure of a noise-free plant incorporating covariance data from the disturbance process is shown to have fundamental significance in the optimal solution. Several examples are presented.

Journal ArticleDOI
TL;DR: This paper presents the development of an algorithm for control of the sampling interval in discrete systems based on the use of a sensitivity function which reflects the influence of discrete changes in hold circuit output on system response.
Abstract: This paper presents the development of an algorithm for control of the sampling interval in discrete systems. The proposed strategy is based on the use of a sensitivity function which reflects the influence of discrete changes in hold circuit output on system response. The actual algorithm is hybrid, including both continuous computation and logical decisions. The method is illustrated by examples with a linear and a nonlinear sampled-data control system. Adaptive systems with a variable sampling interval are shown to yield significant reductions in the total number of samples required over a given interval, compared with equivalent systems using a fixed sampling rate.