Topic
Recursive least squares filter
About: The recursive least squares (RLS) filter is a research topic. Over its lifetime, 8,907 publications have been published within this topic, receiving 191,933 citations.
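The core RLS recursion can be illustrated with a minimal sketch for a single unknown coefficient in y = w·x + noise. The forgetting factor `lam`, the initialization `delta`, and the synthetic data are illustrative choices, not drawn from any of the papers below.

```python
# Minimal sketch of the recursive least squares (RLS) update for one
# unknown coefficient w in y = w * x + noise. All names and data are
# illustrative assumptions, not taken from any specific paper.

def rls_scalar(samples, lam=0.99, delta=100.0):
    """Estimate w from (x, y) pairs, with exponential forgetting factor lam."""
    w = 0.0          # current coefficient estimate
    p = delta        # scalar inverse correlation, initialised large
    for x, y in samples:
        k = p * x / (lam + x * p * x)   # gain
        w += k * (y - w * x)            # correct by the a-priori error
        p = (p - k * x * p) / lam       # update inverse correlation
    return w

# Synthetic data: true coefficient 2.0, noiseless for clarity.
data = [(x, 2.0 * x) for x in (1.0, -0.5, 3.0, 0.7, -2.0)]
w_hat = rls_scalar(data)
```

With noiseless data and a large initial `delta`, the estimate approaches the true coefficient within a handful of samples, which is the fast-convergence property the papers below exploit.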
(Chart: papers published on a yearly basis)
Papers
TL;DR: The recursive least squares method (RLS) is derived for the learning of multilayer feedforward neural networks, and simulation results indicate faster learning than the classical and momentum backpropagation (BP) algorithms.
Abstract: The recursive least squares method (RLS) is derived for the learning of multilayer feedforward neural networks. Simulation results on the XOR, 4-2-4 encoder, and function approximation problems indicate a fast learning process in comparison to the classical and momentum backpropagation (BP) algorithms.
76 citations
TL;DR: The proposed algorithms can effectively estimate the parameters of Hammerstein–Wiener systems and the computational efficiency of the proposed algorithms is analyzed and compared.
Abstract: This paper considers the parameter estimation problems of Hammerstein–Wiener systems by using the data filtering technique. In order to improve the estimation accuracy, the data filtering-based recursive generalized extended least squares algorithm is derived. In order to improve the computational efficiency, the data filtering-based generalized extended stochastic gradient algorithm is derived for estimating the system parameters. Finally, the computational efficiency of the proposed algorithms is analyzed and compared. The simulation results indicate that the proposed algorithms can effectively estimate the parameters of Hammerstein–Wiener systems.
76 citations
TL;DR: This paper describes a set of block processing algorithms which contains as extremal cases the normalized least mean squares (NLMS) and the block recursive least squares (BRLS) algorithms, and shows that these algorithms require fewer arithmetic operations than the classical least mean squares (LMS) algorithm while converging much faster.
Abstract: This paper describes a set of block processing algorithms which contains as extremal cases the normalized least mean squares (NLMS) and the block recursive least squares (BRLS) algorithms. All these algorithms use small block lengths, allowing easy implementation and small input-output delay. It is shown that these algorithms require fewer arithmetic operations than the classical least mean squares (LMS) algorithm while converging much faster. A precise evaluation of the arithmetic complexity is provided, and the adaptive behavior of the algorithm is analyzed. Simulations confirm the theoretical analysis and show that the tracking characteristics of the new algorithm also improve on those of the NLMS algorithm: even when noise is added to the reference signal, the proposed algorithm achieves both faster convergence and a lower residual error than NLMS. Finally, a sample-by-sample version of this algorithm is outlined, which is the link between the NLMS and recursive least squares (RLS) algorithms.
76 citations
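The sample-by-sample NLMS endpoint mentioned in the abstract above can be sketched as follows; the step size `mu`, the regularizer `eps`, and the toy two-tap example are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch of the normalized LMS (NLMS) update, the sample-by-sample
# extremal case of the block family described above. Names and toy data
# are illustrative.

def nlms(samples, n_taps=2, mu=1.0, eps=1e-6):
    """Adapt tap weights w so that y ~= dot(w, x) for each sample."""
    w = [0.0] * n_taps
    for x, y in samples:                       # x is a list of n_taps inputs
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        e = y - y_hat                          # a-priori error
        norm = eps + sum(xi * xi for xi in x)  # regularised input energy
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]
    return w

# Toy identification problem: true taps are [1.0, -0.5].
data = [([1.0, 0.0], 1.0), ([0.0, 1.0], -0.5),
        ([1.0, 1.0], 0.5), ([2.0, -1.0], 2.5)]
w_hat = nlms(data, n_taps=2, mu=1.0)
```

Normalizing the step by the input energy is what distinguishes NLMS from plain LMS and makes the step size choice largely independent of the input scale.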
TL;DR: This work presents a robust procedure for optimally estimating a polynomial-form input forcing function, its time of occurrence, and the measurement error covariance matrix R, based on a running-window robust regression analysis.
Abstract: A method is proposed to adapt the Kalman filter to changes in the input forcing functions and the noise statistics. The resulting procedure is stable in the sense that the durations of divergences caused by external disturbances are finite and short, and it is robust with respect to impulsive noise (outliers). The input forcing functions are estimated by a running-window curve-fitting algorithm, which concurrently provides estimates of the measurement noise covariance matrix and the time instant of any significant change in the input forcing functions. In addition, an independent technique for estimating the process noise covariance matrix is suggested, which establishes a negative feedback in the overall adaptive Kalman filter. This procedure is based on the residual characteristics of the standard optimum Kalman filter and a stochastic approximation method. The performance of the proposed method is demonstrated by simulations and compared to the conventional sequential adaptive Kalman filter algorithm.
76 citations