scispace - formally typeset
Topic

Recursive least squares filter

About: Recursive least squares filter is a research topic. Over the lifetime, 8,907 publications have been published within this topic, receiving 191,933 citations.
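As background for the papers indexed below, the standard exponentially weighted RLS recursion can be sketched in a few lines. The filter order, forgetting factor, initialization, and test channel here are illustrative choices, not drawn from any particular paper.

```python
import numpy as np

def rls_identify(x, d, order=4, lam=0.99, delta=100.0):
    """Sketch of exponentially weighted recursive least squares (RLS).

    x: input signal, d: desired signal, lam: forgetting factor,
    delta: scale of the initial inverse-correlation matrix.
    """
    w = np.zeros(order)                 # filter coefficients
    P = delta * np.eye(order)           # inverse correlation matrix estimate
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor, most recent sample first
        k = P @ u / (lam + u @ P @ u)      # gain vector
        e = d[n] - w @ u                   # a priori error
        w = w + k * e                      # coefficient update
        P = (P - np.outer(k, u @ P)) / lam # inverse-correlation update
    return w

# identify a known FIR channel from noisy observations
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])     # illustrative "unknown" channel
x = rng.standard_normal(2000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = rls_identify(x, d)                  # w converges close to h
```

RLS converges in roughly twice the filter order in samples, which is the rapid-convergence property contrasted with LMS throughout this literature.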


Papers
Journal ArticleDOI
S.U.H. Qureshi
TL;DR: In this article, the authors give an overview of the current state of the art in adaptive equalization and discuss the convergence and steady-state properties of least mean square (LMS) adaptation algorithms.
Abstract: Bandwidth-efficient data transmission over telephone and radio channels is made possible by the use of adaptive equalization to compensate for the time dispersion introduced by the channel. Spurred by practical applications, a steady research effort over the last two decades has produced a rich body of literature in adaptive equalization and the related more general fields of reception of digital signals, adaptive filtering, and system identification. This tutorial paper gives an overview of the current state of the art in adaptive equalization. In the first part of the paper, the problem of intersymbol interference (ISI) and the basic concept of transversal equalizers are introduced, followed by a simplified description of some practical adaptive equalizer structures and their properties. Related applications of adaptive filters and implementation approaches are discussed. Linear and nonlinear receiver structures, their steady-state performance and sensitivity to timing phase are presented in some depth in the next part. It is shown that a fractionally spaced equalizer can serve as the optimum receive filter for any receiver. Decision-feedback equalization, decision-aided ISI cancellation, and adaptive filtering for maximum-likelihood sequence estimation are presented in a common framework. The next two parts of the paper are devoted to a discussion of the convergence and steady-state properties of least mean-square (LMS) adaptation algorithms, including digital precision considerations, and three classes of rapidly converging adaptive equalization algorithms: namely, orthogonalized LMS, periodic or cyclic, and recursive least squares algorithms. An attempt is made throughout the paper to describe important principles and results in a heuristic manner, without formal proofs, using simple mathematical notation where possible.

1,321 citations
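The LMS adaptation this paper surveys can be illustrated with a minimal linear transversal equalizer trained on known symbols. The channel, step size, equalizer length, and decision delay below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
symbols = rng.choice([-1.0, 1.0], size=5000)         # BPSK training data
channel = np.array([1.0, 0.4, 0.2])                  # mild ISI channel (illustrative)
received = np.convolve(symbols, channel)[:len(symbols)]
received = received + 0.01 * rng.standard_normal(len(received))

taps, mu, delay = 7, 0.01, 3                         # equalizer length, step size, decision delay
w = np.zeros(taps)
for n in range(taps - 1, len(received)):
    u = received[n - taps + 1:n + 1][::-1]           # regressor of received samples
    e = symbols[n - delay] - w @ u                   # training error vs delayed symbol
    w = w + mu * e * u                               # LMS stochastic-gradient update

# after the initial transient, hard decisions track the transmitted symbols
out = np.convolve(received, w)[:len(received)]
decisions = np.sign(out[delay:])
error_rate = np.mean(decisions != symbols[:len(decisions)])
```

The step size mu trades convergence speed against steady-state misadjustment, which is exactly the trade-off that motivates the rapidly converging (RLS-type) algorithms the paper goes on to discuss.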

Book
23 Apr 2004
TL;DR: This book develops least-squares approximation, minimum-variance and maximum-likelihood estimation, sequential state estimation via the Kalman filter and its extensions, batch smoothing, and optimal control and estimation theory, with aerospace applications.
Abstract: (Table of contents)
LEAST SQUARES APPROXIMATION: A Curve Fitting Example; Linear Batch Estimation; Linear Least Squares; Weighted Least Squares; Constrained Least Squares; Linear Sequential Estimation; Nonlinear Least Squares Estimation; Basis Functions; Advanced Topics; Matrix Decompositions in Least Squares; Kronecker Factorization and Least Squares; Levenberg-Marquardt Method; Projections in Least Squares; Summary
PROBABILITY CONCEPTS IN LEAST SQUARES: Minimum Variance Estimation; Estimation without a priori State Estimates; Estimation with a priori State Estimates; Unbiased Estimates; Maximum Likelihood Estimation; Cramer-Rao Inequality; Nonuniqueness of the Weight Matrix; Bayesian Estimation; Advanced Topics; Analysis of Covariance Errors; Ridge Estimation; Total Least Squares; Summary
REVIEW OF DYNAMICAL SYSTEMS: Linear System Theory; The State Space Approach; Homogeneous Linear Dynamical Systems; Forced Linear Dynamical Systems; Linear State Variable Transformations; Nonlinear Dynamical Systems; Parametric Differentiation; Observability; Discrete-Time Systems; Stability of Linear and Nonlinear Systems; Attitude Kinematics and Rigid Body Dynamics (Attitude Kinematics; Rigid Body Dynamics); Spacecraft Dynamics and Orbital Mechanics (Spacecraft Dynamics; Orbital Mechanics); Aircraft Flight Dynamics; Vibration; Summary
PARAMETER ESTIMATION: APPLICATIONS: Global Positioning System Navigation; Attitude Determination; Vector Measurement Models; Maximum Likelihood Estimation; Optimal Quaternion Solution; Information Matrix Analysis; Orbit Determination; Aircraft Parameter Identification; Eigensystem Realization Algorithm; Summary
SEQUENTIAL STATE ESTIMATION: A Simple First-Order Filter Example; Full-Order Estimators; Discrete-Time Estimators; The Discrete-Time Kalman Filter (Kalman Filter Derivation; Stability and Joseph's Form; Information Filter and Sequential Processing; Steady-State Kalman Filter; Correlated Measurement and Process Noise; Orthogonality Principle); The Continuous-Time Kalman Filter (Kalman Filter Derivation in Continuous Time; Kalman Filter Derivation from Discrete Time; Stability; Steady-State Kalman Filter; Correlated Measurement and Process Noise); The Continuous-Discrete Kalman Filter; Extended Kalman Filter; Advanced Topics (Factorization Methods; Colored-Noise Kalman Filtering; Consistency of the Kalman Filter; Adaptive Filtering; Error Analysis; Unscented Filtering; Robust Filtering); Summary
BATCH STATE ESTIMATION: Fixed-Interval Smoothing (Discrete-Time Formulation; Continuous-Time Formulation; Nonlinear Smoothing); Fixed-Point Smoothing (Discrete-Time Formulation; Continuous-Time Formulation); Fixed-Lag Smoothing (Discrete-Time Formulation; Continuous-Time Formulation); Advanced Topics (Estimation/Control Duality; Innovations Process); Summary
ESTIMATION OF DYNAMIC SYSTEMS: APPLICATIONS: GPS Position Estimation (GPS Coordinate Transformations; Extended Kalman Filter Application to GPS); Attitude Estimation (Multiplicative Quaternion Formulation; Discrete-Time Attitude Estimation; Murrell's Version; Farrenkopf's Steady-State Analysis); Orbit Estimation; Target Tracking of Aircraft (The α-β Filter; The α-β-γ Filter); Aircraft Parameter Estimation; Smoothing with the Eigensystem Realization Algorithm; Summary
OPTIMAL CONTROL AND ESTIMATION THEORY: Calculus of Variations; Optimization with Differential Equation Constraints; Pontryagin's Optimal Control Necessary Conditions; Discrete-Time Control; Linear Regulator Problems (Continuous-Time Formulation; Discrete-Time Formulation); Linear Quadratic-Gaussian Controllers (Continuous-Time Formulation; Discrete-Time Formulation); Loop Transfer Recovery; Spacecraft Control Design; Summary
APPENDIX A: MATRIX PROPERTIES: Basic Definitions of Matrices; Vectors; Matrix Norms and Definiteness; Matrix Decompositions; Matrix Calculus
APPENDIX B: BASIC PROBABILITY CONCEPTS: Functions of a Single Discrete-Valued Random Variable; Functions of Discrete-Valued Random Variables; Functions of Continuous Random Variables; Gaussian Random Variables; Chi-Square Random Variables; Propagation of Functions through Various Models (Linear Matrix Models; Nonlinear Models)
APPENDIX C: PARAMETER OPTIMIZATION METHODS: C.1 Unconstrained Extrema; C.2 Equality Constrained Extrema; C.3 Nonlinear Unconstrained Optimization (C.3.1 Some Geometrical Insights; C.3.2 Methods of Gradients; C.3.3 Second-Order (Gauss-Newton) Algorithm)
APPENDIX D: COMPUTER SOFTWARE
Index

1,205 citations
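The discrete-time Kalman filter that anchors the book's sequential-estimation chapters can be sketched as a predict/update cycle. The constant-velocity model, noise covariances, and simulation length below are illustrative, not taken from the book.

```python
import numpy as np

# Discrete-time Kalman filter sketch: track position and velocity of a
# constant-velocity target from noisy position measurements.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # measure position only
Q = 1e-4 * np.eye(2)                     # process-noise covariance (illustrative)
R = np.array([[0.25]])                   # measurement-noise covariance

rng = np.random.default_rng(2)
truth = np.array([0.0, 1.0])             # true state: position 0, velocity 1
x = np.zeros(2)                          # filter estimate
P = 10.0 * np.eye(2)                     # large initial uncertainty

for _ in range(200):
    truth = F @ truth
    z = H @ truth + rng.normal(0.0, 0.5, size=1)  # noisy position measurement
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P          # covariance update (Joseph's form is more robust)

vel_err = abs(x[1] - truth[1])           # velocity is never measured directly
```

Note the filter recovers the unmeasured velocity from position data alone; the book's Joseph's-form and factorization chapters address the numerical fragility of the simple covariance update used here.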

S.U.H. Qureshi
01 Sep 1985
TL;DR: This tutorial paper gives an overview of the current state of the art in adaptive equalization and discusses the convergence and steady-state properties of least mean-square (LMS) adaptation algorithms, including digital precision considerations, and three classes of rapidly converging adaptive equalizer algorithms.

1,186 citations

Journal ArticleDOI
TL;DR: A nonlinear version of the recursive least squares (RLS) algorithm is presented that uses a sequential sparsification process, admitting a new input sample into the kernel representation only if its feature-space image cannot be sufficiently well approximated by combining the images of previously admitted samples.
Abstract: We present a nonlinear version of the recursive least squares (RLS) algorithm. Our algorithm performs linear regression in a high-dimensional feature space induced by a Mercer kernel and can therefore be used to recursively construct minimum mean-squared-error solutions to nonlinear least-squares problems that are frequently encountered in signal processing applications. In order to regularize solutions and keep the complexity of the algorithm bounded, we use a sequential sparsification process that admits into the kernel representation a new input sample only if its feature space image cannot be sufficiently well approximated by combining the images of previously admitted samples. This sparsification procedure allows the algorithm to operate online, often in real time. We analyze the behavior of the algorithm, compare its scaling properties to those of support vector machines, and demonstrate its utility in solving two signal processing problems-time-series prediction and channel equalization.

1,011 citations
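The admission test behind this sparsification (an approximate-linear-dependence check in feature space) can be sketched as follows. This shows only the dictionary-building step, not the full recursive coefficient update, and the RBF kernel, threshold nu, and data are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) Mercer kernel, an illustrative choice."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def build_dictionary(X, nu=0.1, gamma=1.0):
    """Admit x only if its feature-space image is poorly approximated
    by a linear combination of the current dictionary's images."""
    dictionary = [X[0]]
    for x in X[1:]:
        K = np.array([[rbf(a, b, gamma) for b in dictionary] for a in dictionary])
        k = np.array([rbf(a, x, gamma) for a in dictionary])
        coef = np.linalg.solve(K + 1e-8 * np.eye(len(dictionary)), k)
        delta = rbf(x, x, gamma) - k @ coef   # squared approximation residual
        if delta > nu:                        # not well approximated: admit
            dictionary.append(x)
    return np.array(dictionary)

X = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
D = build_dictionary(X, nu=0.05, gamma=10.0)
# D is much smaller than X, which is what keeps the online algorithm bounded
```

Keeping the dictionary small is what lets the kernel RLS recursion run online with bounded per-sample cost.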

Journal ArticleDOI
TL;DR: The weighted least squares (WLS) method is shown to be an appropriate way of fitting variogram models; the weighting scheme automatically gives most weight to early lags and down-weights lags with a small number of pairs.
Abstract: The method of weighted least squares is shown to be an appropriate way of fitting variogram models. The weighting scheme automatically gives most weight to early lags and down-weights those lags with a small number of pairs. Although the weights are derived assuming the data are Gaussian (normal), they are shown to be still appropriate in the setting where data are a (smooth) transform of the Gaussian case. The method of (iterated) generalized least squares, which takes into account correlation between variogram estimators at different lags, offers more statistical efficiency at the price of more complexity. Weighted least squares for the robust estimator, based on square-root differences, is less of a compromise.

988 citations
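The weighting scheme (weights proportional to the number of pairs at a lag divided by the squared model value there) can be sketched for a variogram model that is linear in its single parameter, so each weighted fit has a closed form. The model, lags, pair counts, and noise level are all illustrative.

```python
import numpy as np

def fit_linear_variogram(lags, gamma_hat, n_pairs, iters=10):
    """Iterated WLS fit of a no-nugget linear variogram gamma(h) = c*h.

    Weights n_pairs / (c*h)^2 favor early lags and lags with many pairs;
    since they depend on c, the fit is iterated to convergence.
    """
    c = gamma_hat[0] / lags[0]                  # initial slope guess
    for _ in range(iters):
        w = n_pairs / (c * lags) ** 2           # weights from current model fit
        c = np.sum(w * lags * gamma_hat) / np.sum(w * lags ** 2)  # closed-form WLS slope
    return c

lags = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
n_pairs = np.array([500, 400, 300, 200, 100])   # pairs per lag, decreasing as is typical
true_c = 0.8
rng = np.random.default_rng(3)
gamma_hat = true_c * lags * (1 + 0.05 * rng.standard_normal(5))  # noisy empirical variogram
c = fit_linear_variogram(lags, gamma_hat, n_pairs)
```

For the nonlinear models used in practice (spherical, exponential), the same weighted objective is minimized numerically instead of in closed form.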


Network Information
Related Topics (5)
Control theory: 299.6K papers, 3.1M citations (88% related)
Optimization problem: 96.4K papers, 2.1M citations (88% related)
Wireless sensor network: 142K papers, 2.4M citations (85% related)
Wireless: 133.4K papers, 1.9M citations (85% related)
Feature extraction: 111.8K papers, 2.1M citations (85% related)
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  56
2022  104
2021  172
2020  228
2019  234
2018  237