
Showing papers by "Ali H. Sayed" published in 1993


Proceedings ArticleDOI
15 Dec 1993
TL;DR: In this article, a self-contained theory for linear estimation in Krein spaces is developed; it is based on simple concepts such as projections and matrix factorizations, and leads to an interesting connection between Krein space projection and the computation of the stationary points of certain second-order (or quadratic) forms.
Abstract: We develop a self-contained theory for linear estimation in Krein spaces. The theory is based on simple concepts such as projections and matrix factorizations, and leads to an interesting connection between Krein space projection and the computation of the stationary points of certain second-order (or quadratic) forms. We use the innovations process to obtain a rather general recursive linear estimation algorithm, which, when specialized to a state space model, yields a Krein space generalization of the celebrated Kalman filter with applications in several areas such as H∞ filtering and control, game problems, risk-sensitive control, and adaptive filtering.
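
As a minimal numerical illustration of the quadratic-form connection (a sketch only, with made-up values, not the paper's algorithm): the stationary point of J(z) = z'Rz - 2b'z is z = R⁻¹b regardless of the inertia of R, but only a positive definite Gramian R, as in the Hilbert space case, makes it a minimum; with an indefinite Krein-space Gramian the projection yields only a stationary point.

```python
import numpy as np

# Stationary point of the quadratic form J(z) = z'Rz - 2b'z.
# In a Hilbert space R > 0 and the stationary point is a minimum;
# in a Krein space R may be indefinite, so the projection only
# guarantees a stationary point, and an inertia check is needed
# to decide whether it is a minimum. R and b are illustrative.
R = np.array([[2.0, 0.0],
              [0.0, -1.0]])            # indefinite Gramian
b = np.array([1.0, 1.0])

z_star = np.linalg.solve(R, b)         # solves grad J = 2(Rz - b) = 0
eigs = np.linalg.eigvalsh(R)
kind = "minimum" if np.all(eigs > 0) else "saddle (stationary point only)"
print("stationary point:", z_star, "->", kind)
```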

27 citations


Proceedings ArticleDOI
15 Dec 1993
TL;DR: In this paper, it was shown that the LMS algorithm is a minimizer of the H∞ error norm: the LMS algorithm minimizes the energy gain from the disturbances to the predicted errors, while the normalized LMS minimizes the energy gain from the disturbances to the filtered errors.
Abstract: Shows that the celebrated LMS (least-mean squares) adaptive algorithm is an H∞ optimal filter. In other words, the LMS algorithm, which has long been regarded as an approximate least-mean squares solution, is in fact a minimizer of the H∞ error norm. In particular, the LMS minimizes the energy gain from the disturbances to the predicted errors, while the normalized LMS minimizes the energy gain from the disturbances to the filtered errors. Moreover, since these algorithms are central H∞ filters, they are also risk-sensitive optimal and minimize a certain exponential cost function. The authors discuss various implications of these results, and show how they provide theoretical justification for the widely observed excellent robustness properties of the LMS filter.
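
The recursions the abstract refers to can be sketched as follows (a minimal sketch; the step sizes, data, and identification setup are illustrative assumptions, not taken from the paper). LMS updates the weights along the instantaneous gradient, and the normalized variant scales the step by the regressor energy.

```python
import numpy as np

def lms(h, x, d, mu=0.01):
    """One LMS step: h <- h + mu * x * (d - x'h)."""
    e = d - x @ h                       # predicted (a priori) error
    return h + mu * x * e

def nlms(h, x, d, mu=0.5, eps=1e-8):
    """One normalized LMS step; step size scaled by the regressor energy."""
    e = d - x @ h
    return h + (mu / (eps + x @ x)) * x * e

# Illustrative identification run (model and data are made up).
rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.1])
h = np.zeros(3)
for _ in range(2000):
    x = rng.standard_normal(3)
    d = x @ w_true + 0.01 * rng.standard_normal()
    h = nlms(h, x, d)
print("estimated weights:", h)
```

Roughly speaking, the H∞ optimality of LMS holds when the step size does not exceed the reciprocal of the largest regressor energy; the normalized step in nlms enforces this scaling automatically.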

24 citations


Proceedings ArticleDOI
15 Dec 1993
TL;DR: In this paper, it was shown that several problems arising in H∞ filtering, game theory, and risk-sensitive control and estimation follow as special cases of the Krein space Kalman filter.
Abstract: We show that several applications considered in the context of H∞ filtering and game theory, risk-sensitive control and estimation, follow as special cases of the Krein space Kalman filter. We show that these problems can be cast as the problem of calculating the stationary points of certain second-order forms, and that by considering appropriate state space models and error Gramians, we can use the Krein space Kalman filter to recursively compute these stationary points and to study their properties.
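
A sketch of how the indefinite weighting enters an H∞-type recursion (a standard a priori H∞ filter setup with made-up matrices is assumed here, not the paper's exact equations): the measured output is augmented with the output whose estimation error is to be bounded, and the usual Riccati recursion is run with the indefinite Gramian diag(I, -γ²I).

```python
import numpy as np

# Illustrative a priori H-infinity filter via a Krein-space Riccati
# recursion: the measured output H x is augmented with the output L x
# whose estimation error is to be bounded, and the recursion is run
# with the indefinite Gramian Rbar = diag(I, -gamma^2 I). All values
# are made up; existence requires Re and Rbar to have the same inertia.
F = np.array([[0.9]]); Q = np.array([[0.01]])
H = np.array([[1.0]])                  # measured output
L = np.array([[1.0]])                  # output to be estimated
gamma = 2.0

Hbar = np.vstack([H, L])
Rbar = np.diag([1.0, -gamma**2])       # indefinite in the Krein-space setting

P = np.array([[1.0]])
for _ in range(50):
    Re = Rbar + Hbar @ P @ Hbar.T      # innovations Gramian (indefinite)
    K = F @ P @ Hbar.T @ np.linalg.inv(Re)
    P = F @ P @ F.T + Q - K @ Re @ K.T # Krein-space Riccati step
print("steady-state P:", P)
```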

19 citations



Proceedings ArticleDOI
27 Apr 1993
TL;DR: A unified square-root-based derivation of adaptive filtering schemes, obtained by reformulating the original problem as a state-space linear least-squares estimation problem; the approach also suggests generalizations and extensions of classical results.
Abstract: The authors describe a unified square-root-based derivation of adaptive filtering schemes that is based on reformulating the original problem as a state-space linear least-squares estimation problem. In this process one encounters rich connections with algorithms that have been long established in linear least-squares estimation theory, such as the Kalman filter, the Chandrasekhar filter, and the information forms of the Kalman and Chandrasekhar algorithms. The RLS (recursive least squares), fast RLS, QR, and lattice algorithms readily follow by proper identification with such well-known algorithms. The approach also suggests some generalizations and extensions of classical results.
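
As a sketch of the identification the abstract describes (assuming the standard setup, with made-up data): growing-memory RLS coincides with the Kalman filter applied to the constant-state model w_{i+1} = w_i, d_i = u_i'w_i + v_i with unit-variance noise.

```python
import numpy as np

# Sketch: growing-memory RLS recovered by running the Kalman filter on
# the constant-state model  w_{i+1} = w_i,  d_i = u_i' w_i + v_i  with
# unit-variance noise v_i. Model and data below are illustrative.
def rls_via_kalman(U, d, P0=1e3):
    n = U.shape[1]
    w = np.zeros(n)
    P = P0 * np.eye(n)                 # "state covariance" of the Kalman filter
    for u, di in zip(U, d):
        re = 1.0 + u @ P @ u           # innovations variance
        k = P @ u / re                 # Kalman gain = RLS gain vector
        w = w + k * (di - u @ w)       # measurement update = RLS weight update
        P = P - np.outer(k, u @ P)     # Riccati update = RLS inverse-Gramian update
    return w

rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])
U = rng.standard_normal((200, 3))
d = U @ w_true + 0.05 * rng.standard_normal(200)
print(rls_via_kalman(U, d))            # should be close to w_true
```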

16 citations


Proceedings Article
29 Nov 1993
TL;DR: The analysis is extended to the nonlinear setting often encountered in neural networks, and it is shown that the backpropagation algorithm is locally H∞ optimal, providing a theoretical justification of the widely observed excellent robustness properties of the LMS and backpropagation algorithms.
Abstract: We have recently shown that the widely known LMS algorithm is an H∞ optimal estimator. The H∞ criterion has been introduced, initially in the control theory literature, as a means to ensure robust performance in the face of model uncertainties and lack of statistical information on the exogenous signals. We extend here our analysis to the nonlinear setting often encountered in neural networks, and show that the backpropagation algorithm is locally H∞ optimal. This fact provides a theoretical justification of the widely observed excellent robustness properties of the LMS and backpropagation algorithms. We further discuss some implications of these results.
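
A minimal sketch of the instantaneous-gradient (backpropagation) update for a single sigmoid unit, to make the LMS analogy concrete; the network, step size, and data are illustrative assumptions, not the paper's setup. With a linear activation the update reduces exactly to LMS.

```python
import numpy as np

# Minimal instantaneous-gradient (backpropagation) step for a single
# sigmoid unit y = sigma(w'x); setup and step size are illustrative.
def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def backprop_step(w, x, d, mu=0.5):
    y = sigmoid(w @ x)
    e = d - y                          # output error
    grad = -e * y * (1.0 - y) * x      # chain rule through the sigmoid
    return w - mu * grad               # gradient-descent (backprop) update

rng = np.random.default_rng(2)
w_true = np.array([1.5, -1.0])
w = np.zeros(2)
for _ in range(5000):
    x = rng.standard_normal(2)
    w = backprop_step(w, x, sigmoid(w_true @ x))
print("learned weights:", w)           # approaches w_true
```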

14 citations


Journal ArticleDOI
TL;DR: In this article, the authors derive results for the more general case of sums of quasi-Toeplitz and quasi-Hankel matrices, both Hermitian and non-Hermitian.
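
For context, "quasi-Toeplitz" and "quasi-Hankel" are usually understood through displacement structure: the matrix is not exactly Toeplitz (or Hankel), but its displacement with respect to shift matrices still has low rank. A small numpy sketch of this standard notion (an illustration, not the paper's construction):

```python
import numpy as np

# With Z the lower shift matrix, a Toeplitz matrix T satisfies
# rank(T - Z T Z') <= 2; "quasi-Toeplitz" matrices are those whose
# displacement rank stays small. Illustrative example below.
n = 6
idx = np.arange(n)
T = 1.0 + np.abs(np.subtract.outer(idx, idx))   # symmetric Toeplitz, T_ij = |i-j| + 1
Z = np.diag(np.ones(n - 1), -1)                 # lower shift matrix

D = T - Z @ T @ Z.T                             # displacement of T
print("displacement rank:", np.linalg.matrix_rank(D))   # -> 2
```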

11 citations


Proceedings ArticleDOI
27 Apr 1993
TL;DR: The authors present a new derivation of exact least-squares multichannel and multidimensional adaptive algorithms, based on explicitly formulating the problem as a state-space estimation problem and then using different square-root versions of the Kalman, Chandrasekhar, and information algorithms.
Abstract: The authors present a new derivation of exact least-squares multichannel and multidimensional adaptive algorithms, based on explicitly formulating the problem as a state-space estimation problem and then using different square-root versions of the Kalman, Chandrasekhar, and information algorithms. The amount of data to be processed here is usually significantly higher than in the single-channel case, and reducing the computational complexity of the standard multichannel RLS (recursive least-squares) algorithm is thus of major importance. This reduction is usually achieved by invoking the existing shift structure in the input data. For this purpose, it is shown how to apply the extended Chandrasekhar recursions, with an appropriate choice of the initial covariance matrix, to reduce the computations by an order of magnitude. In multichannel filters, the number of weights in different channels is not necessarily the same. This is illustrated with two examples: a nonlinear Volterra-series filter and a two-dimensional filter. In the former case the number of weights varies among the channels, but in the latter case all channels have the same number of weights.
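
A sketch of the Volterra example as a multichannel, linear-in-the-weights problem (illustrative data and weights; standard exponentially weighted RLS is shown rather than the paper's fast Chandrasekhar version): the linear taps form one channel and the quadratic products another, so the channels naturally carry different numbers of weights (3 and 6 below).

```python
import numpy as np

# Second-order Volterra filter recast as a multichannel linear problem:
# channel 1 holds the linear taps, channel 2 the quadratic products,
# so the channels have different numbers of weights (3 and 6 here).
# Data, memory length, and "true" weights are illustrative.
def volterra_regressor(u_buf):
    lin = u_buf                                     # channel 1: 3 weights
    quad = np.array([u_buf[i] * u_buf[j]            # channel 2: 6 weights
                     for i in range(len(u_buf))
                     for j in range(i, len(u_buf))])
    return np.concatenate([lin, quad])

def rls(regressors, desired, n, lam=0.99, delta=1e2):
    w, P = np.zeros(n), delta * np.eye(n)
    for phi, d in zip(regressors, desired):
        k = P @ phi / (lam + phi @ P @ phi)
        w = w + k * (d - phi @ w)
        P = (P - np.outer(k, phi @ P)) / lam
    return w

rng = np.random.default_rng(3)
u = rng.standard_normal(500)
regs, ds = [], []
for i in range(2, len(u)):
    phi = volterra_regressor(u[i-2:i+1][::-1])      # taps u_i, u_{i-1}, u_{i-2}
    regs.append(phi)
    ds.append(0.4 * phi[0] - 0.2 * phi[1] + 0.1 * phi[3])   # made-up system
print(rls(regs, ds, n=len(regs[0])))
```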

9 citations