Topic
Recursive least squares filter
About: Recursive least squares filter is a research topic. Over the lifetime, 8907 publications have been published within this topic receiving 191933 citations.
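As a minimal illustration of the topic itself, here is a sketch of the standard exponentially weighted RLS recursion (forgetting factor `lam`, inverse-correlation matrix `P`) identifying a 2-tap FIR system. The forgetting factor, initialization constant `delta`, and the toy system are illustrative choices, not taken from any paper listed below.

```python
import random

# Standard exponentially weighted RLS recursion (pure-Python sketch).
# Per sample: gain k = P x / (lam + x' P x), error e = d - w' x,
# weight update w += k e, and P = (P - k x' P) / lam.
def rls_identify(samples, n=2, lam=0.99, delta=100.0):
    """samples: list of (x, d) pairs, x a length-n regressor, d the target."""
    w = [0.0] * n
    P = [[delta if i == j else 0.0 for j in range(n)] for i in range(n)]
    for x, d in samples:
        Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
        denom = lam + sum(x[i] * Px[i] for i in range(n))
        k = [Px[i] / denom for i in range(n)]
        e = d - sum(w[i] * x[i] for i in range(n))
        w = [w[i] + k[i] * e for i in range(n)]
        xP = [sum(x[i] * P[i][j] for i in range(n)) for j in range(n)]
        P = [[(P[i][j] - k[i] * xP[j]) / lam for j in range(n)]
             for i in range(n)]
    return w

# Identify the (hypothetical) system w_true = [0.5, -0.3] from noiseless data.
random.seed(0)
w_true = [0.5, -0.3]
u = [random.uniform(-1, 1) for _ in range(200)]
samples = [([u[t], u[t - 1]], w_true[0] * u[t] + w_true[1] * u[t - 1])
           for t in range(1, 200)]
w = rls_identify(samples)
```

With noiseless data the estimate converges to the true taps up to the small bias introduced by the initial `P`.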
Papers published on a yearly basis
Papers
••
TL;DR: In this paper, a geometrical explanation and lower bounds are given for the errors of least squares solutions based on orthogonal transformations. The upper bounds of Golub and Wilkinson are shown to be realistic for some classes of problems and types of perturbations, and unrealistic for others.
Abstract: In 1966 Golub and Wilkinson gave upper bounds for the errors of least squares solutions based on orthogonal transformations, in which the square of the condition number of the matrix occurs. In the present paper a geometrical explanation and lower bounds are given. The upper bounds will be shown to be realistic for some classes of problems and types of perturbations, and to be unrealistic for others.
41 citations
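The squared-condition-number effect discussed above can be reproduced with a tiny hand-constructed example (my own, not from the paper): for a least squares problem with a nonzero residual, an O(tau) perturbation of A can move the solution by O(kappa^2 * tau). The matrix is chosen so the perturbed normal-equations solution is available in closed form.

```python
# A = [[1, 0], [0, eps], [0, 0]], b = [1, 0, 1]:
# the least squares solution is x = [1, 0] with residual r = [0, 0, 1],
# and kappa(A) ~ 1/eps. Perturbing A's last row to [0, tau] gives
# A'A' = diag(1, eps^2 + tau^2) and A'b = [1, tau], so the second
# component of the new solution is tau / (eps^2 + tau^2) ~ kappa^2 * tau.
eps = 1e-3        # small singular value; kappa = 1/eps = 1e3
tau = 1e-8        # relative size of the perturbation of A
kappa = 1.0 / eps

x2_perturbed = tau / (eps ** 2 + tau ** 2)
print(x2_perturbed, kappa ** 2 * tau)   # both approximately 1e-2
```

An O(1e-8) change to A thus moves the solution by about 1e-2, which is the kappa^2 amplification the upper bounds capture; for problems with zero residual this term vanishes, which is one way the bounds can be unrealistic.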
••
TL;DR: The authors present scalar implementations of multichannel and multiexperiment fast recursive least squares algorithms in transversal filter form, known as fast transversal filter (FTF) algorithms; the resulting algorithms inherit the numerical advantages of the triangularization techniques used in block processing.
Abstract: The authors present scalar implementations of multichannel and multiexperiment fast recursive least squares algorithms in transversal filter form, known as fast transversal filter (FTF) algorithms. By processing the different channels and/or experiments one at a time, the multichannel and/or multiexperiment algorithm decomposes into a set of intertwined single-channel single-experiment algorithms. For multichannel algorithms, the general case of possibly different filter orders in different channels is handled. Geometrically, this modular decomposition approach corresponds to a Gram-Schmidt orthogonalization of multiple error vectors. Algebraically, this technique corresponds to matrix triangularization of error covariance matrices and converts matrix operations into a regular set of scalar operations. Modular algorithm structures that are amenable to VLSI implementation on arrays of parallel processors naturally follow from the present approach. Numerically, the resulting algorithm benefits from the advantages of triangularization techniques in block processing.
41 citations
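The Gram-Schmidt view mentioned in the abstract can be illustrated generically (this is plain classical Gram-Schmidt, not the FTF algorithm itself): the per-channel error vectors are orthogonalized one at a time, each new vector having its projections onto the already-processed ones removed.

```python
# Classical Gram-Schmidt over a list of equal-length vectors,
# processed one at a time, mirroring the one-channel-at-a-time
# decomposition described in the abstract.
def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            # Remove the component of w along the already-orthogonalized q.
            coef = (sum(wi * qi for wi, qi in zip(w, q))
                    / sum(qi * qi for qi in q))
            w = [wi - coef * qi for wi, qi in zip(w, q)]
        if any(abs(wi) > 1e-12 for wi in w):  # skip linearly dependent vectors
            basis.append(w)
    return basis

# Three hypothetical per-channel error vectors.
errs = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
Q = gram_schmidt(errs)
```

Each pass touches only scalar operations, which is the same regularization of matrix work into scalar work that the triangularization argument in the abstract describes.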
••
TL;DR: In this article, it was shown that the minimum norm solution is equivalent to the total least squares (TLS) solution, and that two versions of the TLS solution exist, one based on the signal subspace and the other on the noise subspace.
Abstract: It is shown that the minimum norm solution is equivalent to the total least squares solution. It is noted that two versions of the total least squares solution exist, one based on the signal subspace and another based on the noise subspace.
41 citations
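The two subspace versions mentioned above can be illustrated on a toy 2-D total least squares line fit through the origin (a generic sketch, not the paper's derivation): the signal-subspace answer (dominant eigenvector of the scatter matrix) and the noise-subspace answer (the orthogonal minor eigenvector) determine the same line.

```python
import math

# TLS line fit y = s*x through the origin for 2-D points, computed twice:
# from the signal subspace and from the noise subspace of the 2x2 scatter
# matrix [[a, b], [b, c]]. Assumes b != 0 (non-degenerate data).
def tls_slopes(points):
    a = sum(x * x for x, _ in points)
    b = sum(x * y for x, y in points)
    c = sum(y * y for _, y in points)
    # Closed-form eigenvalues of the symmetric 2x2 scatter matrix.
    disc = math.sqrt((a - c) ** 2 + 4 * b * b)
    lam_max, lam_min = (a + c + disc) / 2, (a + c - disc) / 2
    # Signal subspace: dominant eigenvector (b, lam_max - a) gives the slope.
    slope_signal = (lam_max - a) / b
    # Noise subspace: minor eigenvector (b, lam_min - a) is the line normal.
    n1, n2 = b, lam_min - a
    slope_noise = -n1 / n2
    return slope_signal, slope_noise

# Points lying exactly on y = 2x; both versions should recover slope 2.
s_signal, s_noise = tls_slopes([(1.0, 2.0), (2.0, 4.0), (-1.0, -2.0)])
```

That the two routes agree is the geometric content of having both a signal-subspace and a noise-subspace formulation of the same TLS solution.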
••
TL;DR: This work considers a class of hybrid systems which is modeled by continuous-time linear systems with Markovian jumps in the parameters (LSMJP), and derives the best linear mean square estimator for such systems.
Abstract: We consider a class of hybrid systems which is modeled by continuous-time linear systems with Markovian jumps in the parameters (LSMJP). Our aim is to derive the best linear mean square estimator for such systems. The approach adopted here produces a filter which bears those desirable properties of the Kalman filter: A recursive scheme suitable for computer implementation which allows some offline computation that alleviates the computational burden. Apart from the intrinsic theoretical interest of the problem in its own right and the application-oriented motivation of getting more easily implementable filters, another compelling reason why the study here is pertinent has to do with the fact that the optimal nonlinear filter for our estimation problem is not computable via a finite computation (the filter is infinite dimensional). Our filter has dimension Nn, with n denoting the dimension of the state vector and N the number of states of the Markov chain.
41 citations