scispace - formally typeset
Topic

Recursive least squares filter

About: Recursive least squares filter is a research topic. Over the lifetime, 8907 publications have been published within this topic receiving 191933 citations.


Papers
Journal ArticleDOI
TL;DR: In this article, the problem of adaptive regulation of linear systems with white-noise disturbances is studied, and the apparent dilemma between the control objective and the need of information for parameter estimation is resolved by occasional use of white noise probing inputs and by a reparametrization of the model.
Abstract: This paper studies the problem of adaptive regulation of linear systems with white-noise disturbances. The apparent dilemma between the control objective and the need of information for parameter estimation is resolved by occasional use of white-noise probing inputs and by a reparametrization of the model. Insights into the question concerning how often and when such probing inputs should be introduced are provided by the concept of “asymptotic efficiency,” which quantifies the asymptotically minimal cost due to parameter ignorance, or equivalently, due to the infeasibility of using the optimal regulator that assumes knowledge of the system parameters. Asymptotically efficient adaptive regulators are constructed by making use of certain basic properties of adaptive predictors involving recursive least squares for the reparametrized model.

54 citations

Journal ArticleDOI
TL;DR: A new robust recursive least-squares (RLS) adaptive filtering algorithm that uses a priori error-dependent weights is proposed; it offers improved robustness as well as better tracking compared to the conventional RLS and recursive least-M estimate adaptation algorithms.
Abstract: A new robust recursive least-squares (RLS) adaptive filtering algorithm that uses a priori error-dependent weights is proposed. Robustness against impulsive noise is achieved by choosing the weights on the basis of the L1 norms of the crosscorrelation vector and the input-signal autocorrelation matrix. The proposed algorithm also uses a variable forgetting factor that leads to fast tracking. Simulation results show that the proposed algorithm offers improved robustness as well as better tracking compared to the conventional RLS and recursive least-M estimate adaptation algorithms.
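For reference, the conventional RLS recursion that the paper above improves upon can be sketched as follows. This is a minimal illustration of the standard exponentially weighted RLS filter with a fixed forgetting factor; the paper's robust error-dependent weighting and variable forgetting factor are not reproduced here, and all names and parameter values are illustrative.

```python
import numpy as np

def rls_filter(x, d, order=4, lam=0.99, delta=100.0):
    """Exponentially weighted RLS adaptive filter (fixed forgetting factor).

    x: input signal, d: desired signal, order: number of taps,
    lam: forgetting factor in (0, 1], delta: initial scale of the
    inverse-correlation matrix P.
    """
    w = np.zeros(order)                # tap weights
    P = delta * np.eye(order)          # estimate of inverse input correlation
    e = np.zeros(len(x))               # a priori errors
    for k in range(order - 1, len(x)):
        u = x[k - order + 1 : k + 1][::-1]    # regressor: newest sample first
        e[k] = d[k] - w @ u                   # a priori error
        g = P @ u / (lam + u @ P @ u)         # gain vector
        w = w + g * e[k]                      # weight update
        P = (P - np.outer(g, u @ P)) / lam    # inverse-correlation update
    return w, e

# Example: identify an unknown 4-tap FIR system from noiseless data
rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
h = np.array([0.5, -0.3, 0.2, 0.1])
d = np.convolve(x, h)[: len(x)]
w, e = rls_filter(x, d, order=4, lam=0.999)   # w converges to h
```

A forgetting factor lam below 1 discounts old samples geometrically, which is what gives RLS its tracking ability in nonstationary environments; the robust variants discussed above additionally down-weight samples whose a priori error suggests impulsive noise.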

54 citations

Journal ArticleDOI
TL;DR: A methodology is described for estimating in advance the potential response of flexible end-consumers to price variations; the estimated response models are subsequently embedded in an optimal price-signal generator.
Abstract: Household-based demand response is expected to play an increasing role in supporting the large scale integration of renewable energy generation in existing power systems and electricity markets. While the direct control of the consumption level of households is envisaged as a possibility, a credible alternative is that of indirect control based on price signals to be sent to these end-consumers. A methodology is described here that allows estimating in advance the potential response of flexible end-consumers to price variations, subsequently embedded in an optimal price-signal generator. In contrast to some real-time pricing proposals in the literature, here prices are estimated and broadcast once a day for the following day, for households to optimally schedule their consumption. The price-response is modeled using stochastic finite impulse response (FIR) models. Parameters are estimated within a recursive least squares (RLS) framework using data measurable at the grid level, in an adaptive fashion. Optimal price signals are generated by embedding the FIR models within a chance-constrained optimization framework. The objective is to keep the price signal as unchanged as possible from the reference market price, whilst keeping consumption below a pre-defined acceptable level.

54 citations

Journal ArticleDOI
TL;DR: In this paper, the properties of generalized stochastic gradient (GSG) learning in forward-looking models are studied; the conditions for stability of standard stochastic gradient (SG) learning both differ from and are related to E-stability, which governs stability under least squares learning.
Abstract: We study the properties of generalized stochastic gradient (GSG) learning in forward-looking models. We examine how the conditions for stability of standard stochastic gradient (SG) learning both differ from and are related to E-stability, which governs stability under least squares learning. SG algorithms are sensitive to units of measurement and we show that there is a transformation of variables for which E-stability governs SG stability. GSG algorithms with constant gain have a deeper justification in terms of parameter drift, robustness and risk sensitivity.
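The sensitivity of SG learning to units of measurement, noted above, can be seen directly in the plain SG recursion, in which the raw regressor multiplies the step size. The sketch below shows the standard SG (LMS-type) update, not the generalized (GSG) algorithm studied in the paper; the function name and step size are illustrative.

```python
import numpy as np

def sg_step(w, u, d, mu=0.05):
    """One stochastic-gradient (LMS) step on the squared prediction error."""
    e = d - w @ u          # prediction error
    return w + mu * e * u  # gradient step: mu multiplies the raw regressor

# Rescaling the regressor u by a constant c rescales the effective step
# size by c**2, which is why SG stability depends on the choice of
# variables, while least squares learning is invariant to such rescaling.
w_true = np.array([1.0, -2.0])
rng = np.random.default_rng(1)
w = np.zeros(2)
for _ in range(2000):
    u = rng.standard_normal(2)
    w = sg_step(w, u, w_true @ u)   # w converges to w_true
```

Replacing the fixed step size mu with a constant gain on a transformed regressor is the essence of the GSG generalization discussed in the abstract.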

54 citations


Network Information
Related Topics (5)
- Control theory: 299.6K papers, 3.1M citations (88% related)
- Optimization problem: 96.4K papers, 2.1M citations (88% related)
- Wireless sensor network: 142K papers, 2.4M citations (85% related)
- Wireless: 133.4K papers, 1.9M citations (85% related)
- Feature extraction: 111.8K papers, 2.1M citations (85% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    56
2022    104
2021    172
2020    228
2019    234
2018    237