
Showing papers by "Ali H. Sayed published in 1994"


Journal ArticleDOI
TL;DR: The purpose of this article is to show how several different variants of the recursive least-squares algorithm can be directly related to the widely studied Kalman filtering problem of estimation and control.
Abstract: Adaptive filtering algorithms fall into four main groups: recursive least squares (RLS) algorithms and the corresponding fast versions; QR- and inverse QR-least squares algorithms; least squares lattice (LSL) and QR decomposition-based least squares lattice (QRD-LSL) algorithms; and gradient-based algorithms such as the least-mean square (LMS) algorithm. Our purpose in this article is to present yet another approach, for the sake of achieving two important goals. The first one is to show how several different variants of the recursive least-squares algorithm can be directly related to the widely studied Kalman filtering problem of estimation and control. Our second important goal is to present all the different versions of the RLS algorithm in computationally convenient square-root forms: a prearray of numbers has to be triangularized by a rotation, or a sequence of elementary rotations, in order to yield a postarray of numbers. The quantities needed to form the next prearray can then be read off from the entries of the postarray, and the procedure can be repeated; the explicit forms of the rotation matrices are not needed in most cases.
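The prearray/postarray mechanics described in the abstract can be illustrated with a minimal sketch (not the paper's exact arrays): propagate the Cholesky factor of the exponentially weighted coefficient matrix Phi_i = lam*Phi_{i-1} + h_i h_i^T by triangularizing a prearray with an orthogonal rotation, obtained here implicitly through a QR factorization. Variable names and the forgetting factor are illustrative.

```python
import numpy as np

def sqrt_rls_step(L_prev, h, lam=0.99):
    """One square-root array rotation (sketch): triangularize the prearray
    [sqrt(lam)*L_prev, h] so that the postarray [L_new, 0] satisfies
    L_new @ L_new.T = lam * L_prev @ L_prev.T + h h^T."""
    n = L_prev.shape[0]
    pre = np.hstack([np.sqrt(lam) * L_prev, h.reshape(n, 1)])  # n x (n+1) prearray
    # A rotation Theta with pre @ Theta = [L_new, 0] is obtained implicitly:
    # QR of pre.T gives pre = R.T @ Q.T, so the triangular block is R.T.
    R = np.linalg.qr(pre.T, mode="r")
    L_new = R.T
    s = np.sign(np.diag(L_new))   # fix the QR sign ambiguity so the
    s[s == 0] = 1.0               # diagonal of the factor stays positive
    return L_new * s

# Usage: the propagated factor matches a direct recomputation.
rng = np.random.default_rng(0)
n, lam = 4, 0.99
Phi = np.eye(n)
L = np.linalg.cholesky(Phi)
for _ in range(10):
    h = rng.standard_normal(n)
    L = sqrt_rls_step(L, h, lam)
    Phi = lam * Phi + np.outer(h, h)
print(np.allclose(L @ L.T, Phi))  # True
```

As the abstract notes, the explicit rotation matrix is never formed; only the triangular postarray block is kept from step to step.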

470 citations


Journal ArticleDOI
TL;DR: It can be shown that the much studied exponentially weighted recursive least-squares filtering problem can be reformulated as an estimation problem for a state-space model having this special time-variant structure.
Abstract: We extend the discrete-time Chandrasekhar recursions for least-squares estimation in constant parameter state-space models to a class of structured time-variant state-space models, special cases of which often arise in adaptive filtering. It can be shown that the much studied exponentially weighted recursive least-squares filtering problem can be reformulated as an estimation problem for a state-space model having this special time-variant structure. Other applications arise in the multichannel and multidimensional adaptive filtering context.

70 citations


Journal ArticleDOI
TL;DR: A novel approach to analytic rational interpolation problems of the Hermite-Fejér type, based on the fast generalized Schur algorithm for the recursive triangular factorization of structured matrices, which leads to a transmission-line cascade of first-order sections that makes evident the interpolation property.
Abstract: We describe a novel approach to analytic rational interpolation problems of the Hermite-Fejér type, based on the fast generalized Schur algorithm for the recursive triangular factorization of structured matrices. We use the interpolation data to construct a convenient so-called generator for the factorization algorithm. The recursive algorithm then leads to a transmission-line cascade of first-order sections that makes evident the interpolation property. We also give state-space descriptions for each section and for the entire cascade.

33 citations


Journal ArticleDOI
TL;DR: The authors extend the concept of displacement structure to time-variant matrices and use it to efficiently and recursively propagate the Cholesky factor of such matrices to solve the normal equations that arise in adaptive least-squares filtering.
Abstract: The authors extend the concept of displacement structure to time-variant matrices and use it to efficiently and recursively propagate the Cholesky factor of such matrices. A natural implementation of the algorithm is via a modular triangular array of processing elements. When the algorithm is applied to solve the normal equations that arise in adaptive least-squares filtering, they get the so-called QR algorithm, with the extra bonus of a parallelizable procedure for determining the weight vector. It is shown that the general algorithm can also be implemented in time-variant lattice form; a specialization of this result yields a time-variant Schur algorithm.
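A small illustration of the underlying idea of propagating a Cholesky factor under a low-rank change: the classical rank-one update (a generic textbook routine, not the paper's displacement-based array algorithm) recomputes the factor of A + x x^T from the factor of A by a sequence of elementary rotations.

```python
import numpy as np

def chol_update(L, x):
    """Classical rank-one Cholesky update: given lower-triangular L with
    A = L @ L.T, return the lower-triangular factor of A + x x^T using a
    sequence of elementary (Givens-like) rotations."""
    L, x = L.copy(), x.astype(float).copy()
    n = x.size
    for k in range(n):
        r = np.hypot(L[k, k], x[k])           # rotated diagonal entry
        c, s = r / L[k, k], x[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            L[k+1:, k] = (L[k+1:, k] + s * x[k+1:]) / c
            x[k+1:] = c * x[k+1:] - s * L[k+1:, k]
    return L

# Usage: the updated factor agrees with refactoring from scratch.
rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
x = rng.standard_normal(n)
L1 = chol_update(np.linalg.cholesky(A), x)
print(np.allclose(L1 @ L1.T, A + np.outer(x, x)))  # True
```

The update costs O(n^2) versus O(n^3) for a full refactorization, which is the kind of saving that exploiting structure under low-rank time variation makes possible.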

24 citations


Journal ArticleDOI
TL;DR: A new recursive solution for a general time-variant interpolation problem of the Hermite-Fejér type is derived, based on a fast algorithm for the recursive triangular factorization of time-variant structured matrices.
Abstract: Derives a new recursive solution for a general time-variant interpolation problem of the Hermite-Fejér type, based on a fast algorithm for the recursive triangular factorization of time-variant structured matrices. The solution follows from studying the properties of an associated cascade system and leads to a triangular array implementation of the recursive algorithm. The system can be drawn as a cascade of first-order lattice sections, where each section is composed of a rotation matrix followed by a storage element and a tapped-delay filter. Such cascades always have certain blocking properties, which can be made equivalent to the interpolation conditions. The authors also illustrate the application of the algorithm to problems in adaptive filtering, model validation, robust control, and analytic interpolation theory.

21 citations


Journal ArticleDOI
TL;DR: It is shown that when the extra structure provided by an underlying state-space model is properly incorporated into the generalized Schur algorithm, it reduces to the Chandrasekhar recursions, which are O(Nn²) recursions for estimating the n-dimensional state of a time-invariant system from N measured outputs.
Abstract: Presents a new approach to the Chandrasekhar recursions and some generalizations thereof. The derivation uses the generalized Schur recursions, which are O(N²) recursions for the triangular factorization of N×N matrices having a certain Toeplitz-like displacement structure. It is shown that when the extra structure provided by an underlying state-space model is properly incorporated into the generalized Schur algorithm, it reduces to the Chandrasekhar recursions, which are O(Nn²) recursions for estimating the n-dimensional state of a time-invariant (or constant-parameter) system from N measured outputs. It is further noted that the generalized Schur algorithm factors more general structured matrices, and this fact is readily used to extend the Chandrasekhar recursions to a class of time-variant state-space models, special cases of which often arise in adaptive filtering.
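The Toeplitz-like displacement structure mentioned in the abstract can be made concrete: with Z the lower shift matrix, any Toeplitz matrix T has displacement T − Z T Zᵀ of rank at most 2, since shifting T down and to the right cancels everything except the first row and column. A small numerical check, assuming the standard shift-based displacement definition:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
t = rng.standard_normal(2 * n - 1)   # the 2n-1 independent entries of a Toeplitz matrix
T = np.array([[t[i - j + n - 1] for j in range(n)] for i in range(n)])
Z = np.eye(n, k=-1)                  # lower shift matrix (ones on the subdiagonal)

D = T - Z @ T @ Z.T                  # displacement of T
# D is nonzero only in its first row and column, hence rank(D) <= 2 for any n.
print(np.linalg.matrix_rank(D))      # 2 (generically; always <= 2)
```

Low displacement rank is exactly what lets the generalized Schur algorithm factor such an N×N matrix in O(N²) operations rather than O(N³).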

11 citations


Journal ArticleDOI
TL;DR: A fast recursive algorithm for the solution of an unconstrained rational interpolation problem is described; it exploits the displacement structure concept and admits a transmission-line interpretation that makes evident the interpolation properties.

9 citations


Proceedings ArticleDOI
31 Oct 1994
TL;DR: Robustness, optimality, and convergence properties of the widely used class of instantaneous-gradient adaptive algorithms are established in a purely deterministic framework that assumes no a priori statistical information.
Abstract: The paper establishes several robustness, optimality, and convergence properties of the widely used class of instantaneous-gradient adaptive algorithms. The analysis is carried out in a purely deterministic framework and assumes no a priori statistical information. It starts with a simple Cauchy-Schwarz inequality for vectors in a Euclidean space and proceeds to derive local and global energy bounds that are shown here to highlight, as well as explain, several relevant aspects of this important class of algorithms.
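The kind of deterministic energy bound referred to in the abstract can be checked numerically. One known bound for LMS states that, whenever the step size satisfies μ‖h_i‖² ≤ 1 at every instant, the accumulated a priori estimation-error energy never exceeds the (μ-weighted) initial weight-error energy plus the disturbance energy. A sketch with hypothetical data (not the paper's own experiment):

```python
import numpy as np

rng = np.random.default_rng(3)
n, N = 4, 200
w_true = rng.standard_normal(n)           # unknown weight vector
H = rng.standard_normal((N, n))           # input (regressor) vectors h_i
v = 0.1 * rng.standard_normal(N)          # disturbance sequence
mu = 1.0 / np.max(np.sum(H**2, axis=1))   # ensures mu * ||h_i||^2 <= 1 for all i

w = np.zeros(n)                           # initial guess w_{-1}
apriori_energy = 0.0
disturbance_energy = np.sum((w_true - w)**2) / mu
for i in range(N):
    h, d = H[i], H[i] @ w_true + v[i]
    apriori_energy += (h @ (w_true - w))**2   # a priori error e_a(i)^2
    disturbance_energy += v[i]**2
    w = w + mu * h * (d - h @ w)              # LMS update

print(apriori_energy <= disturbance_energy)  # True: the energy bound holds
```

The bound holds pathwise for this realization, with no statistical assumptions on the inputs or the disturbance, which is the deterministic flavor of analysis the paper pursues.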

7 citations


Proceedings ArticleDOI
14 Dec 1994
TL;DR: The authors develop square-root arrays and Chandrasekhar recursions for H∞ filtering problems that allow a reduction in the computational effort per iteration from O(n³) to O(n²), where n is the number of states.
Abstract: Using their previous observation that H∞ filtering coincides with Kalman filtering in Krein space, the authors develop square-root arrays and Chandrasekhar recursions for H∞ filtering problems. The H∞ square-root algorithms involve propagating the indefinite square root of the quantities of interest and have the property that the appropriate inertia of these quantities is preserved. For systems that are constant, or whose time-variation is structured in a certain way, the Chandrasekhar recursions allow a reduction in the computational effort per iteration from O(n³) to O(n²), where n is the number of states. The H∞ square-root and Chandrasekhar recursions both have the interesting feature that one does not need to explicitly check for the positivity conditions required of the H∞ filters. These conditions are built into the algorithms themselves, so that an H∞ estimator of the desired level exists if, and only if, the algorithms can be executed.

6 citations


Book ChapterDOI
01 Jan 1994
TL;DR: The least-mean-squares (LMS) algorithm was originally conceived as an approximate solution to the adaptive least-squares problem described in this paper; it recursively updates the estimate of the weight vector along the direction of the instantaneous gradient of the squared error.
Abstract: An important problem that arises in many applications is the following adaptive problem: given a sequence of n × 1 input column vectors {h_i}, and a corresponding sequence of desired scalar responses {d_i}, find an estimate of an n × 1 column vector of weights w such that the sum of squared errors, \(\sum_{i=0}^{N} |d_i - h_i^T w|^2\), is minimized. The {h_i, d_i} are most often presented sequentially, and one is therefore required to find an adaptive scheme that recursively updates the estimate of w. The least-mean-squares (LMS) algorithm was originally conceived as an approximate solution to the above adaptive problem. It recursively updates the estimates of the weight vector along the direction of the instantaneous gradient of the sum squared error [1]. The introduction of the LMS adaptive filter in 1960 came as a significant development for a broad range of engineering applications, since the LMS adaptive linear-estimation procedure requires essentially no advance knowledge of the signal statistics. The LMS, however, has long been thought to be an approximate minimizing solution to the above squared-error criterion, and a rigorous minimization criterion has been missing.
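A minimal sketch of the LMS recursion described above, with illustrative variable names and a fixed step size μ (the classical algorithm, not this chapter's exact formulation):

```python
import numpy as np

def lms(H, d, mu):
    """Least-mean-squares: update the weight estimate along the direction of
    the instantaneous gradient of the squared error |d_i - h_i^T w|^2."""
    w = np.zeros(H.shape[1])
    for h_i, d_i in zip(H, d):
        e_i = d_i - h_i @ w        # instantaneous (a priori) error
        w = w + mu * h_i * e_i     # gradient step on |e_i|^2
    return w

# Usage: on noiseless data the estimate approaches the true weight vector.
rng = np.random.default_rng(4)
n, N = 4, 5000
w_true = rng.standard_normal(n)
H = rng.standard_normal((N, n))
w_hat = lms(H, H @ w_true, mu=0.05)
print(np.linalg.norm(w_hat - w_true) < 1e-6)  # True
```

Note that nothing about the statistics of {h_i} or {d_i} enters the recursion itself, which is the "no advance knowledge of the signal statistics" property emphasized above.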

6 citations


Journal ArticleDOI
TL;DR: A new solution to the four-block problem is described using the method of generalized Schur analysis; the unknown entry is parameterized in terms of a Schur-type matrix function, which is shown to satisfy a finite number of interpolation conditions of the Hermite-Fejér type.
Abstract: We describe a new solution to the four-block problem using the method of generalized Schur analysis. We first reduce the general problem to a simpler one by invoking a coprime factorization with a block-diagonal inner matrix. Then, using convenient spectral factorizations, we are able to parameterize the unknown entry in terms of a Schur-type matrix function, which is shown to satisfy a finite number of interpolation conditions of the Hermite-Fejér type. All possible interpolating functions are then determined via a simple recursive procedure that constructs a transmission-line (or lattice) cascade of elementary J-lossless sections. This also leads to a parameterization of all solutions of the four-block problem in terms of a linear fractional transformation.

Proceedings ArticleDOI
28 Oct 1994
TL;DR: A recursive algorithm for the time-update of the triangular factors of non-Hermitian time-variant matrices with structure is derived; an IV parameter estimation problem is considered, and it is shown how the arrays collapse to a coupled parallelizable solution of the identification problem.
Abstract: We derive a recursive algorithm for the time-update of the triangular factors of non-Hermitian time-variant matrices with structure. These are matrices that undergo low-rank modifications as time progresses, special cases of which often arise in adaptive filtering and instrumental variable (IV) methods. A natural implementation of the algorithm is via two coupled triangular arrays of processing elements. We consider, in particular, an IV parameter estimation problem and show how the arrays collapse to a coupled parallelizable solution of the identification problem.

1. INTRODUCTION

The notion of displacement structure provides a natural framework for the solution of many problems in signal processing and mathematics. It represents a powerful and unifying tool for exploiting structure in numerous applications, as detailed in several recent surveys on the topic [1,2,3]. More recently, we have extended the concept of structured matrices to the time-variant setting [4,5,6] and shown that we can, as well, study matrices that undergo low-rank modifications as time progresses. Special examples often arise in adaptive filtering [7,8,9]. In this paper, we further extend our earlier results to the non-Hermitian case and exhibit an application to instrumental variable methods.