scispace - formally typeset
Topic

Recursive least squares filter

About: Recursive least squares filter is a research topic. Over the lifetime, 8907 publications have been published within this topic receiving 191933 citations.
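None of the entries below writes out the recursive least squares update itself, so a minimal sketch may help orient the reader. This is the standard exponentially weighted RLS recursion (forgetting factor `lam`, regularized initialization `delta`), not the method of any particular paper listed on this page; the function name and parameter values are illustrative.

```python
import numpy as np

def rls(xs, ds, n_taps, lam=0.99, delta=1e3):
    """Exponentially weighted recursive least squares adaptive filter.

    xs: input samples, ds: desired samples, lam: forgetting factor,
    delta: initial scaling of the inverse-correlation matrix estimate.
    """
    w = np.zeros(n_taps)            # filter weights
    P = delta * np.eye(n_taps)      # inverse input-correlation estimate
    errors = []
    for i in range(n_taps - 1, len(xs)):
        u = xs[i - n_taps + 1:i + 1][::-1]   # regressor, most recent first
        k = P @ u / (lam + u @ P @ u)        # gain vector
        e = ds[i] - w @ u                    # a priori prediction error
        w = w + k * e                        # weight update
        P = (P - np.outer(k, u @ P)) / lam   # inverse-correlation update
        errors.append(e)
    return w, errors
```

For a noiseless FIR system, the weights converge to the true taps after a few dozen samples, which is the fast-convergence property that distinguishes RLS from gradient-based LMS.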


Papers
Journal ArticleDOI
01 Jan 1978
TL;DR: Recursive prediction error algorithms for identification and adaptive state estimation are derived, via significant simplifications, as specializations of a class of extended Kalman filters designed for linear state space models in which the unknown parameters augment the state vector, in such a way as to yield good convergence properties.
Abstract: Convenient recursive prediction error algorithms for identification and adaptive state estimation are proposed, and the convergence of these algorithms to achieve off-line prediction error minimization solutions is studied. To set the recursive prediction error algorithms in another perspective, specializations are derived from significant simplifications to a class of extended Kalman filters. The latter are designed for linear state space models with the unknown parameters augmenting the state vector and in such a way as to yield good convergence properties. Also, specializations to approximate maximum likelihood recursions, Kalman filters with adaptive gains, and connections to the extended least squares algorithms are noted.
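The parameter-augmented extended Kalman filter mentioned in the abstract can be illustrated with a small sketch. This is the general augmented-state idea the paper's specializations simplify, not the paper's own algorithm; the model (a scalar AR(1) process with unknown coefficient `a`) and all noise settings are illustrative assumptions.

```python
import numpy as np

def ekf_param_id(ys, qx=1.0, qa=1e-6, r=0.01):
    """EKF on the augmented state z = [x, a] for the model
    x[t+1] = a*x[t] + w,  y[t] = x[t] + v, with a unknown."""
    z = np.array([0.0, 0.5])          # initial guesses for x and a
    P = np.eye(2)                     # augmented-state covariance
    Q = np.diag([qx, qa])             # process noise (tiny drift on a)
    for y in ys:
        # predict: f(z) = [a*x, a]; linearize around current estimate
        x, a = z
        F = np.array([[a, x], [0.0, 1.0]])   # Jacobian of f
        z = np.array([a * x, a])
        P = F @ P @ F.T + Q
        # update with the scalar measurement y = x + v
        H = np.array([1.0, 0.0])
        S = H @ P @ H + r                    # innovation variance
        K = P @ H / S                        # Kalman gain
        z = z + K * (y - z[0])
        P = P - np.outer(K, H @ P)
    return z  # [state estimate, parameter estimate]
```

The coupling between the state and the parameter enters only through the Jacobian `F`, which is the structure the paper's simplifications exploit.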

41 citations

Journal ArticleDOI
TL;DR: In this article, the authors extend techniques for separable nonlinear least squares, in which the linear variables are eliminated to define a reduced problem in the nonlinear variables only whose solution solves the original problem, to problems subject to separable nonlinear equality constraints.
Abstract: Recently several algorithms have been proposed for solving separable nonlinear least squares problems which use the explicit coupling between the linear and nonlinear variables to define a new nonlinear least squares problem in the nonlinear variables only whose solution is the solution to the original problem. In this paper we extend these techniques to the separable nonlinear least squares problem subject to separable nonlinear equality constraints.
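The variable-elimination idea in the unconstrained case can be sketched in a few lines. This is a deliberately simplified illustration for a one-term model y ≈ a·exp(−α·t) with a grid search over the nonlinear variable; the model, the grid, and the function name are assumptions, and the paper's constrained extension is not reproduced here.

```python
import numpy as np

def varpro_1d(t, y, alphas):
    """Variable-projection sketch for the separable model y ~ a*exp(-alpha*t).

    For each candidate nonlinear parameter alpha, the linear coefficient a
    is eliminated in closed form by linear least squares; the reduced
    residual is then minimized over alpha alone.
    """
    best = None
    for alpha in alphas:
        phi = np.exp(-alpha * t)        # basis column for this alpha
        a = (phi @ y) / (phi @ phi)     # closed-form linear LS solution
        r = np.linalg.norm(y - a * phi) # reduced (projected) residual
        if best is None or r < best[2]:
            best = (alpha, a, r)
    return best  # (alpha, a, residual)
```

In practice the reduced problem is solved with a Gauss-Newton-type method rather than a grid, but the coupling between the linear and nonlinear variables is used in exactly this way.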

41 citations

Proceedings ArticleDOI
14 Apr 1983
TL;DR: The following new results are obtained: necessary and sufficient conditions of convergence, optimal adjustment gains and optimal convergence rates, interrelationship between LMS and NLMS gains, and non-stationary algorithm design.
Abstract: The main contribution of this paper is the unified treatment of convergence analysis for both LMS and NLMS adaptive algorithms. The following new results are obtained: (i) necessary and sufficient conditions of convergence, (ii) optimal adjustment gains and optimal convergence rates, (iii) interrelationship between LMS and NLMS gains, and (iv) non-stationary algorithm design.
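A compact sketch of the two algorithms whose convergence the paper analyzes, in their standard textbook forms. The step size `mu`, the regularizer `eps`, and the function name are illustrative; the paper's exact necessary-and-sufficient conditions are not reproduced here.

```python
import numpy as np

def lms(xs, ds, n_taps, mu=0.05, normalized=False, eps=1e-8):
    """LMS / NLMS adaptive filter sketch.

    Plain LMS uses a fixed step mu, so its stable range depends on the
    input power. NLMS divides the step by the instantaneous input power
    ||u||^2, which makes convergence insensitive to the input scale.
    """
    w = np.zeros(n_taps)
    for i in range(n_taps - 1, len(xs)):
        u = xs[i - n_taps + 1:i + 1][::-1]       # regressor vector
        e = ds[i] - w @ u                        # a priori error
        step = mu / (eps + u @ u) if normalized else mu
        w = w + step * e * u                     # stochastic-gradient update
    return w
```

The single `normalized` switch makes the interrelationship between the LMS and NLMS gains explicit: NLMS is LMS with a data-dependent gain.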

41 citations

Proceedings ArticleDOI
26 May 2013
TL;DR: In this article, the authors introduce an adaptive distributed technique for node-specific parameter estimation in an adaptive network where each node is interested in a set of parameters of local interest as well as a set of network-global parameters.
Abstract: We introduce an adaptive distributed technique that is suitable for node-specific parameter estimation in an adaptive network where each node is interested in a set of parameters of local interest as well as a set of network global parameters. The estimation of each set of parameters of local interest is undertaken by a local Least Mean Squares (LMS) algorithm at each node. At the same time and coupled with the previous local estimation processes, an incremental mode of cooperation is implemented at all nodes in order to perform an LMS algorithm which estimates the parameters of global interest. In the steady state, the new distributed technique converges to the MMSE solution of a centralized processor that is able to process all the observations. To illustrate the effectiveness of the proposed technique we provide simulation results in the context of cooperative spectrum sensing in cognitive radio networks.
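The incremental mode of cooperation for the global parameters can be sketched as follows. This is an assumed simplification: it circulates a single global estimate around a ring of nodes, each refining it with LMS steps on its own observations, and it omits the paper's coupled node-specific local estimation processes entirely.

```python
import numpy as np

def incremental_lms(node_data, n_taps, mu=0.1, n_passes=10):
    """Incremental distributed LMS sketch for a shared global parameter.

    node_data: list of (U, d) pairs, one per node, where U holds that
    node's regressor rows and d its desired responses. The running
    estimate w is handed from node to node in ring order.
    """
    w = np.zeros(n_taps)
    for _ in range(n_passes):
        for U, d in node_data:            # visit nodes in ring order
            for i in range(len(d)):
                e = d[i] - w @ U[i]       # local a priori error
                w = w + mu * e * U[i]     # local LMS refinement
    return w
```

Because every node's data eventually influences the circulating estimate, the scheme approaches what a centralized processor with access to all observations would compute, which is the steady-state claim of the paper.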

41 citations

Journal ArticleDOI
TL;DR: In this article, the authors develop a simple expression for the difference between the least squares and minimum-variance linear unbiased estimators in linear models whose observation-vector covariance operator is nonsingular.
Abstract: A simple expression is developed for the difference between the least squares and minimum variance linear unbiased estimators obtained in linear models in which the covariance operator of the observation vector is nonsingular. Bounds and series expansion for this difference are obtained, and bounds for the efficiency of least squares estimates are also obtained.
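The two estimators being compared can be computed directly. This sketch evaluates both numerically rather than using the paper's closed-form expression for their difference; the function name is illustrative. When the covariance is a scalar multiple of the identity, the difference vanishes, since ordinary least squares is then itself the minimum-variance linear unbiased estimator.

```python
import numpy as np

def ols_vs_gls(X, y, V):
    """Compare ordinary least squares with the minimum-variance linear
    unbiased (generalized least squares) estimator for covariance V."""
    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)         # OLS
    Vi = np.linalg.inv(V)
    beta_gls = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)  # GLS / BLUE
    return beta_ols, beta_gls, beta_ols - beta_gls
```

With a genuinely non-spherical covariance the two estimates generally differ, and bounding that difference is what yields the efficiency bounds for least squares mentioned in the abstract.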

41 citations


Network Information
Related Topics (5)
Control theory: 299.6K papers, 3.1M citations, 88% related
Optimization problem: 96.4K papers, 2.1M citations, 88% related
Wireless sensor network: 142K papers, 2.4M citations, 85% related
Wireless: 133.4K papers, 1.9M citations, 85% related
Feature extraction: 111.8K papers, 2.1M citations, 85% related
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  56
2022  104
2021  172
2020  228
2019  234
2018  237