scispace - formally typeset
Topic

Recursive least squares filter

About: Recursive least squares filter is a research topic. Over the lifetime, 8907 publications have been published within this topic receiving 191933 citations.


Papers
Proceedings ArticleDOI
28 Nov 2005
TL;DR: The proposed state-space model and fractional order difference equation are used in an identification procedure which produces very accurate results.
Abstract: The paper is devoted to the application of fractional calculus concepts to the modeling, identification and control of discrete-time systems. Fractional difference equation (FDE) models are presented, and their use in identification, state estimation and control is discussed. The fractional difference state-space model is proposed for that purpose, and stability conditions for it are given. A fractional Kalman filter (FKF) for this model is recalled. The proposed state-space model and fractional-order difference equation are used in an identification procedure which produces very accurate results. Finally, the state-space model is used in a closed-loop state-feedback control scheme with the FKF as state estimator. The latter is also given in an adaptive form, combining the FKF with a modification of the recursive least squares (RLS) algorithm as the parameter identification procedure. All the algorithms presented were tested in simulations, and example results are given in the paper.

69 citations
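The fractional difference operator underlying such models can be sketched with the Grünwald-Letnikov definition, (Δ^α x)_k = Σ_{j=0}^{k} (-1)^j C(α, j) x_{k-j}. The function below is an illustrative sketch of that operator, not code from the paper:

```python
import numpy as np

def frac_diff(x, alpha):
    """Grünwald-Letnikov fractional difference of order alpha.

    Computes (Delta^alpha x)_k = sum_{j=0}^{k} (-1)^j C(alpha, j) x_{k-j},
    the basic operator behind fractional difference state-space models.
    alpha = 1 reduces to the ordinary first difference; alpha = 0 is the
    identity.
    """
    n = len(x)
    # generalized binomial coefficients C(alpha, j) via the recurrence
    # C(alpha, j) = C(alpha, j-1) * (alpha - j + 1) / j
    c = np.empty(n)
    c[0] = 1.0
    for j in range(1, n):
        c[j] = c[j - 1] * (alpha - j + 1) / j
    w = (-1.0) ** np.arange(n) * c
    return np.array([w[:k + 1] @ x[k::-1] for k in range(n)])
```

For non-integer α the weights never truncate, which is why fractional state-space models in practice work with a finite memory of past states.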

Journal ArticleDOI
01 Aug 1998
TL;DR: This approach combines structure and parameter identification of Takagi-Sugeno-Kang fuzzy models via a combination of modified mountain clustering algorithm, recursive least squares estimation (RLSE), and group method of data handling (GMDH).
Abstract: The paper describes an approach to generating optimal adaptive fuzzy neural models from I/O data. This approach combines structure and parameter identification of Takagi-Sugeno-Kang (TSK) fuzzy models. We propose to achieve structure determination via a combination of modified mountain clustering (MMC) algorithm, recursive least squares estimation (RLSE), and group method of data handling (GMDH). Parameter adjustment is achieved by training the initial TSK model using the algorithm of an adaptive network based fuzzy inference system (ANFIS), which employs backpropagation (BP) and RLSE. Further, a procedure for generating locally optimal model structures is suggested. The structure optimization procedure is composed of two phases: 1) locally optimal rule premise variables subsets (LOPVS) are identified using MMC, GMDH, and a search tree (ST); and 2) locally optimal numbers of model rules (LONOR) are determined using MMC/RLSE along with parallel simulation mean square error (PSMSE) as a performance index. The effectiveness of the proposed approach is verified by a variety of simulation examples. The examples include modeling of a nonlinear dynamical process from I/O data and modeling nonlinear components of dynamical plants, followed by tracking control based on a model reference adaptive scheme (MRAC). Simulation results show that this approach is fast and accurate and leads to several optimal models.

69 citations
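The least-squares step in such TSK models exploits the fact that, with the rule premises fixed, the model output is linear in the consequent parameters. The sketch below fits first-order consequents in a single batch solve; it is a simplified stand-in for the paper's recursive RLSE step, and the Gaussian premises (centers, sigma) that the paper obtains from clustering are passed in as assumptions here:

```python
import numpy as np

def tsk_fit(x, y, centers, sigma):
    """Fit the linear consequent parameters of a first-order
    Takagi-Sugeno-Kang model by least squares, given fixed Gaussian
    rule premises. Returns the fitted model outputs on x."""
    # rule firing strengths from Gaussian membership functions
    mu = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))
    g = mu / mu.sum(axis=1, keepdims=True)   # normalized firing strengths
    # design matrix: per-rule slope column (g_i * x) and offset column (g_i)
    Phi = np.hstack([g * x[:, None], g])
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ theta
```

Because the problem is linear in the consequents, the same fit can be done sample-by-sample with an RLS recursion, which is what makes the RLSE/ANFIS combination in the paper practical for online training.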

Journal ArticleDOI
TL;DR: The forgetting factor RLS algorithm exhibits a variable performance that depends on the particular combination of the initialization and noise level, and it is shown that it is preferable to initialize the algorithm with a matrix of large norm.
Abstract: We investigate the convergence properties of the forgetting factor RLS algorithm in a stationary data environment. Using the settling time as our performance measure, we show that the algorithm exhibits a variable performance that depends on the particular combination of initialization and noise level. Specifically, when the observation noise level is low (high SNR), RLS initialized with a matrix of small norm converges exceptionally fast, and convergence speed decreases as the norm of the initialization matrix increases. In a medium-SNR environment, the optimum convergence speed of the algorithm is reduced compared with the previous case; however, RLS becomes more insensitive to the initialization. Finally, in a low-SNR environment, we show that it is preferable to initialize the algorithm with a matrix of large norm.

68 citations
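The initialization the paper analyzes can be made concrete with the standard exponentially weighted RLS recursion, where the inverse correlation matrix starts as P(0) = δI and δ sets the norm in question. This is a minimal sketch of textbook RLS, not the paper's code; the system-identification setup is illustrative:

```python
import numpy as np

def rls_identify(x, d, order, lam=0.99, delta=1e3):
    """Forgetting-factor RLS for FIR system identification.

    lam is the forgetting factor; delta sets the norm of the initial
    inverse correlation matrix P(0) = delta * I, the quantity whose
    size governs convergence speed at different SNRs.
    """
    w = np.zeros(order)
    P = delta * np.eye(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor, most recent first
        k = P @ u / (lam + u @ P @ u)      # gain vector
        e = d[n] - w @ u                   # a priori error
        w = w + k * e                      # coefficient update
        P = (P - np.outer(k, u @ P)) / lam
    return w
```

Note that P(0) = δI is equivalent to an initial ridge penalty of weight 1/δ on the coefficients, which is one way to see why a small-norm initialization (large penalty) behaves so differently from a large-norm one.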

Journal ArticleDOI
TL;DR: This work proposes three initialization schemes: one using randomly generated data, a second that is ad hoc, and a third drawing on an appropriate distribution. Together they provide a computing toolbox for analysing the quantitative properties of dynamic stochastic macroeconomic models under adaptive learning.

68 citations

Journal ArticleDOI
TL;DR: It is shown that the performance of the systolic array is similar to that of a conventional LMS implementation for a wide range of practical conditions.
Abstract: A systolic array design for an adaptive filter is presented. The filter is based on the least-mean-square algorithm, but due to problems in the systolic-array implementation, a modified algorithm, a special case of the delayed LMS (DLMS), is used. The DLMS algorithm introduces a delay in the updating of the filter coefficients. The convergence and steady-state behavior of the systolic array are analyzed. It is shown that the performance of the systolic array is similar to that of a conventional LMS implementation for a wide range of practical conditions.

68 citations
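The update delay can be sketched in a few lines: ordinary LMS updates the coefficients with the current error, while DLMS applies the error computed `delay` samples earlier, mimicking the pipeline latency of a systolic array. The scalar, sequential form below is illustrative, not the paper's array mapping:

```python
import numpy as np
from collections import deque

def dlms(x, d, order, mu=0.02, delay=4):
    """Delayed LMS: the coefficient update at time n uses the error and
    regressor from time n - delay. delay = 0 recovers ordinary LMS."""
    w = np.zeros(order)
    pending = deque()
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor, most recent first
        e = d[n] - w @ u                   # a priori error at time n
        pending.append((e, u))
        if len(pending) > delay:           # this pair is now `delay` old
            e_d, u_d = pending.popleft()
            w = w + mu * e_d * u_d         # delayed gradient step
    return w
```

The delay narrows the stable step-size range somewhat, which is consistent with the paper's finding that DLMS matches conventional LMS only over a range of practical conditions.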


Network Information
Related Topics (5)
Control theory: 299.6K papers, 3.1M citations (88% related)
Optimization problem: 96.4K papers, 2.1M citations (88% related)
Wireless sensor network: 142K papers, 2.4M citations (85% related)
Wireless: 133.4K papers, 1.9M citations (85% related)
Feature extraction: 111.8K papers, 2.1M citations (85% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    56
2022    104
2021    172
2020    228
2019    234
2018    237