
Showing papers by "Martin Morf published in 1976"


Proceedings ArticleDOI
01 Dec 1976
TL;DR: This work develops fast algorithms for a variety of presently available identification methods, as well as new ones, that require computer time and storage per measurement only proportional to the number of model parameters, compared with the square of the number of parameters for previous methods.
Abstract: Recursive identification algorithms are of great interest in control and estimation problems, and in related areas such as recursive least-squares and adaptive methods. Recently we have shown how a certain shift invariance inherent in many estimation and control problems can be exploited to obtain fast algorithms that often require orders of magnitude fewer computations than presently available methods to compute optimal gains. We have developed fast algorithms for a variety of presently available identification methods, as well as new ones, that require computer time and storage (or hardware) per measurement only proportional to the number of model parameters, compared with the square of the number of parameters for previous methods. Since parameter identification can be formulated as a state estimation problem, optimal filtering results can be applied. In particular, we would like to draw attention to alternatives, such as square-root methods and their fast versions, or ladder forms using partial correlations, that have several computational and numerical advantages over the more commonly used methods.
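As context for the O(p)-versus-O(p²) contrast drawn in the abstract, here is a minimal sketch (illustrative Python, not from the paper) of the conventional recursive least-squares update: every measurement touches the full p-by-p matrix P, which is exactly the per-measurement cost the fast shift-invariance-based algorithms avoid.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    """One conventional recursive least-squares step.
    Updating the full p-by-p matrix P costs O(p^2) per measurement;
    the fast (ladder/lattice-style) algorithms described in the
    abstract exploit shift invariance to reduce this to O(p)."""
    Px = P @ x
    g = Px / (lam + x @ Px)              # gain vector
    theta = theta + g * (y - x @ theta)  # parameter update
    P = (P - np.outer(g, Px)) / lam      # inverse-information update
    return theta, P

# Identify theta = [2, -1] from noiseless regressions y = x @ theta
rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0])
theta, P = np.zeros(2), 1e3 * np.eye(2)
for _ in range(50):
    x = rng.standard_normal(2)
    theta, P = rls_update(theta, P, x, x @ true_theta)
print(theta)   # converges close to [2, -1]
```

With noiseless data and a large initial P, the estimate converges to the true parameters; the point of the sketch is only the per-step cost, dominated by the rank-one update of P.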

73 citations


Proceedings ArticleDOI
01 Dec 1976
TL;DR: In this article, a generalized resultant matrix and a fast algorithm are presented for testing the coprimeness of two polynomial matrices, extracting their greatest common divisor, and finding the McMillan degree and the observability indices of the associated minimal realization.
Abstract: We present a generalized resultant matrix and a fast algorithm for testing the coprimeness of two polynomial matrices, extracting their greatest common divisor, and finding the McMillan degree and the observability indices of the associated minimal realization.
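The scalar analogue of this resultant-based test can be sketched in a few lines using the classical Sylvester matrix (an illustration only, not the paper's generalized matrix-polynomial construction): the rank deficiency of the Sylvester matrix equals the degree of the greatest common divisor, so coprimeness corresponds to full rank.

```python
import numpy as np

def sylvester(p, q):
    """Sylvester resultant matrix of scalar polynomials p and q,
    given as coefficient lists with the highest degree first."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                   # n shifted copies of p
        S[i, i:i + m + 1] = p
    for i in range(m):                   # m shifted copies of q
        S[n + i, i:i + n + 1] = q
    return S

# p = (s+1)(s+2) and q = (s+2)(s+3) share the common factor (s+2)
p = [1.0, 3.0, 2.0]
q = [1.0, 5.0, 6.0]
S = sylvester(p, q)
# rank deficiency of S equals the degree of the GCD
gcd_degree = S.shape[0] - np.linalg.matrix_rank(S)
print(gcd_degree)   # 1: the polynomials are not coprime
```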

57 citations


Proceedings ArticleDOI
01 Dec 1976
TL;DR: In this article, a general linear least-squares estimation problem is considered, and it is shown how the optimal filters for filtering and smoothing can be recursively and efficiently calculated under certain structural assumptions about the covariance functions involved.
Abstract: A general linear least-squares estimation problem is considered. It is shown how the optimal filters for filtering and smoothing can be recursively and efficiently calculated under certain structural assumptions about the covariance functions involved. This structure is related to an index known as the displacement rank, which is a measure of non-Toeplitzness of a covariance kernel. When a state space type structure is added, it is shown how the Chandrasekhar equations for determining the gain of the Kalman-Bucy filter can be derived directly from the covariance function information; thus we are able to imbed this class of state-space problems into a general input-output framework.
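The displacement rank mentioned here has a concrete finite-dimensional version: for a covariance matrix R and the lower shift matrix Z, form the displacement R − ZRZᵀ and count its rank. A small sketch (illustrative, not the paper's notation) showing that a Toeplitz kernel, the stationary baseline, has displacement rank 2:

```python
import numpy as np

def displacement_rank(R, tol=1e-10):
    """Rank of the displacement D = R - Z R Z^T, where Z is the
    lower shift matrix. This measures the 'non-Toeplitzness' of R:
    a symmetric Toeplitz matrix has displacement rank at most 2."""
    n = R.shape[0]
    Z = np.eye(n, k=-1)          # ones on the first subdiagonal
    D = R - Z @ R @ Z.T
    return np.linalg.matrix_rank(D, tol=tol)

# Symmetric Toeplitz covariance matrix built from a covariance sequence
c = np.array([4.0, 2.0, 1.0, 0.5])
n = len(c)
R = np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])
print(displacement_rank(R))      # 2 for a Toeplitz (stationary) kernel
```

For a Toeplitz R, the displacement is nonzero only in its first row and column, hence the rank bound; non-Toeplitz kernels give larger displacement rank, which is the "distance from stationarity" that governs the cost of the recursions.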

16 citations


Proceedings ArticleDOI
01 Dec 1976
TL;DR: In this article, a fast and potentially numerically advantageous algorithm is presented for finding canonical minimal state-space realizations from given multivariable transfer functions.
Abstract: We present a fast and potentially numerically advantageous algorithm for finding canonical minimal state-space realizations from given multivariable transfer functions.
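In the scalar special case, a canonical minimal realization can be written down directly in controller canonical form; the following sketch illustrates only that simple case (the paper's fast algorithm targets the much harder multivariable problem, and none of these names come from the paper):

```python
import numpy as np

def controller_canonical(num, den):
    """Controller-canonical-form realization (A, b, c, d) of a scalar,
    strictly proper transfer function num(s)/den(s), with den monic and
    coefficients listed highest degree first."""
    n = len(den) - 1
    a = np.asarray(den[1:], dtype=float)   # den = s^n + a1 s^{n-1} + ... + an
    b_coeffs = np.zeros(n)
    b_coeffs[n - len(num):] = num          # pad the numerator to length n
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)             # chain of integrators
    A[-1, :] = -a[::-1]                    # companion bottom row
    b = np.zeros(n); b[-1] = 1.0
    c = b_coeffs[::-1]
    return A, b, c, 0.0

# G(s) = 1 / ((s+1)(s+2)) = 1 / (s^2 + 3s + 2)
A, b, c, d = controller_canonical([1.0], [1.0, 3.0, 2.0])
print(np.linalg.eigvals(A))    # poles at -1 and -2
```

The realization is minimal because the numerator and denominator here are coprime; evaluating c(sI − A)⁻¹b + d recovers the transfer function.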

6 citations


Proceedings ArticleDOI
01 Dec 1976
TL;DR: A way of classifying stochastic processes in terms of their "distance" from stationarity that leads to a derivation of an efficient Levinson-type algorithm for arbitrary (nonstationary) processes is introduced.
Abstract: Recursive algorithms for the solution of linear least-squares estimation problems have been based mainly on state-space models. It has been known, however, that such algorithms exist for stationary time series, using input-output descriptions (e.g., covariance matrices). We introduce a way of classifying stochastic processes in terms of their "distance" from stationarity that leads to a derivation of an efficient Levinson-type algorithm for arbitrary (nonstationary) processes. By adding structure to the covariance matrix, these general results specialize to state-space type estimation algorithms. In particular, the Chandrasekhar equations are shown to be the natural descendants of the Levinson algorithm.

6 citations