
Showing papers on "Recursive least squares filter published in 1997"



Journal ArticleDOI
TL;DR: In this paper, the authors define variable knot splines as minimizers of global functionals and study their asymptotic properties, showing that the estimates adapt well to spatially inhomogeneous smoothness.
Abstract: Least squares penalized regression estimates with total variation penalties are considered. It is shown that these estimators are least squares splines with locally data adaptive placed knot points. The definition of these variable knot splines as minimizers of global functionals can be used to study their asymptotic properties. In particular, these results imply that the estimates adapt well to spatially inhomogeneous smoothness. We show rates of convergence in bounded variation function classes and discuss pointwise limiting distributions. An iterative algorithm based on stepwise addition and deletion of knot points is proposed and its consistency proved.

352 citations


Journal ArticleDOI
TL;DR: This work exploits the one-to-one correspondences between the recursive least-squares (RLS) and Kalman variables to formulate extended forms of the RLS algorithm that are applicable to a system identification problem and the tracking of a chirped sinusoid in additive noise.
Abstract: We exploit the one-to-one correspondences between the recursive least-squares (RLS) and Kalman variables to formulate extended forms of the RLS algorithm. Two particular forms of the extended RLS algorithm are considered: one pertaining to a system identification problem and the other pertaining to the tracking of a chirped sinusoid in additive noise. For both of these applications, experiments are presented that demonstrate the tracking superiority of the extended RLS algorithms compared with the standard RLS and least-mean-squares (LMS) algorithms.
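The standard exponentially weighted RLS recursion that this correspondence builds on can be sketched as follows. This is a minimal illustrative implementation for FIR system identification, not the paper's extended algorithm; the parameter names (`lam` for the forgetting factor, `delta` for the scale of the initial inverse-correlation matrix) and all values are our own choices.

```python
import numpy as np

def rls_identify(x, d, order=4, lam=0.99, delta=100.0):
    """Exponentially weighted RLS for FIR system identification (sketch)."""
    w = np.zeros(order)
    P = delta * np.eye(order)              # inverse correlation estimate, P(0) = delta*I
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor, newest sample first
        k = P @ u / (lam + u @ P @ u)      # gain vector
        e = d[n] - w @ u                   # a priori error
        w = w + k * e                      # weight update
        P = (P - np.outer(k, u @ P)) / lam # Riccati-style update of P
    return w

# identify a known 4-tap FIR channel from lightly noisy data
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(2000)
d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.standard_normal(len(x))
w_hat = rls_identify(x, d)
```

With stationary data and a high SNR, `w_hat` settles close to the true channel after a few hundred updates; the extended forms in the paper modify this recursion via the Kalman correspondence to improve tracking.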

281 citations


Journal ArticleDOI
TL;DR: In this paper, a new and fast recursive, exponentially weighted PLS algorithm is presented, which provides greatly improved parameter estimates in most process situations, including adaptive control of a two by two simulated multivariable continuous stirred tank reactor and updating of a prediction model for an industrial flotation circuit.

277 citations


Journal ArticleDOI
TL;DR: This paper proposes and tests an iterative algorithm based on Lanczos bidiagonalization for computing truncated TLS solutions and expresses the results in terms of the singular value decomposition of the coefficient matrix rather than the augmented matrix, which leads to insight into the filtering properties of the truncated TLS method as compared to regularized least squares solutions.
Abstract: The total least squares (TLS) method is a successful method for noise reduction in linear least squares problems in a number of applications. The TLS method is suited to problems in which both the coefficient matrix and the right-hand side are not precisely known. This paper focuses on the use of TLS for solving problems with very ill-conditioned coefficient matrices whose singular values decay gradually (so-called discrete ill-posed problems), where some regularization is necessary to stabilize the computed solution. We filter the solution by truncating the small singular values of the TLS matrix. We express our results in terms of the singular value decomposition (SVD) of the coefficient matrix rather than the augmented matrix. This leads to insight into the filtering properties of the truncated TLS method as compared to regularized least squares solutions. In addition, we propose and test an iterative algorithm based on Lanczos bidiagonalization for computing truncated TLS solutions.
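The truncation idea can be sketched directly from the SVD of the augmented matrix: keep the `k` largest singular triplets and form the minimum-norm solution from the discarded right singular vectors. This is the direct SVD route, not the paper's Lanczos-bidiagonalization algorithm, and the test data below are our own.

```python
import numpy as np

def truncated_tls(A, b, k):
    """Truncated TLS via the SVD of the augmented matrix [A b] (sketch)."""
    n = A.shape[1]
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    V = Vt.T
    V12 = V[:n, k:]    # top block of the trailing right singular vectors
    V22 = V[n:, k:]    # last row of the same vectors, shape (1, n + 1 - k)
    # minimum-norm truncated TLS solution: x = -V12 * pinv(V22)
    return (-V12 @ V22.T @ np.linalg.inv(V22 @ V22.T)).ravel()

# consistent, noise-free data: with k = n this reduces to ordinary TLS
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
x_hat = truncated_tls(A, A @ x_true, k=3)
```

For discrete ill-posed problems one would choose `k` well below `n`, filtering out the small singular values that amplify noise.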

212 citations


Journal ArticleDOI
TL;DR: A new member of the family of mixed-norm stochastic gradient adaptive filter algorithms for system identification applications based upon a convex function of the error norms that underlie the least mean square (LMS) and least absolute difference (LAD) algorithms is proposed.
Abstract: We propose a new member of the family of mixed-norm stochastic gradient adaptive filter algorithms for system identification applications based upon a convex function of the error norms that underlie the least mean square (LMS) and least absolute difference (LAD) algorithms. A scalar parameter controls the mixture and relates, approximately, to the probability that the instantaneous desired response of the adaptive filter does not contain significant impulsive noise. The parameter is calculated with the complementary error function and a robust estimate of the standard deviation of the desired response. The performance of the proposed algorithm is demonstrated in a system identification simulation with impulsive and Gaussian measurement noise.
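The mixed-norm update can be sketched as a convex combination of the LMS gradient (proportional to the error) and the LAD gradient (the sign of the error). This is a hedged sketch of the algorithm family: in the paper the mixing parameter is derived from the complementary error function and a robust scale estimate, whereas here it is simply a constant we chose (small, i.e. LAD-dominant, since the demo contains impulses).

```python
import numpy as np

def mixed_norm_step(w, u, d, mu=0.01, lam=0.1):
    """One mixed-norm (LMS/LAD) stochastic-gradient update (sketch)."""
    e = d - w @ u
    # lam * (LMS gradient) + (1 - lam) * (LAD gradient)
    return w + mu * (lam * e + (1.0 - lam) * np.sign(e)) * u, e

# identify a 2-tap system under occasional impulsive measurement noise
rng = np.random.default_rng(2)
h = np.array([1.0, -0.5])
w = np.zeros(2)
for _ in range(20000):
    u = rng.standard_normal(2)
    noise = 20.0 * rng.standard_normal() if rng.random() < 0.01 else 0.0
    w, _ = mixed_norm_step(w, u, h @ u + noise)
```

The sign term bounds the contribution of each impulse to the weight update, which is what gives the mixture its robustness relative to plain LMS.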

201 citations


Journal ArticleDOI
TL;DR: A theory of hidden-variable dynamics for drift rejection is coupled with a recursive least squares algorithm, providing a solution to a difficult vapour recognition problem, vexed by drift, that traditional pattern recognition techniques have failed to solve.
Abstract: Sensor drift is addressed as one of the most serious impairments afflicting chemical and biochemical sensors. One possible solution to this problem is to view sensor arrays as time-varying dynamic systems, whose variations have to be tracked by adaptive estimation algorithms. A theory of hidden variable dynamics for the rejection of common mode drifting of sensors has previously been developed and is here coupled with a recursive least squares algorithm. In Section 7 a solution is provided for a difficult vapour recognition problem, vexed by drift, that traditional pattern recognition techniques have failed to solve. Among the many advantages, we highlight that model adaptation to changes in the sensor array makes lifelong calibration possible without interrupting the operation of the array.

103 citations


Proceedings ArticleDOI
10 Dec 1997
TL;DR: In this article, the asymptotic behavior of two recursive estimators, the recursive maximum likelihood estimator and the recursive conditional least squares estimator (RCLSE), as the number of observations increases to infinity is studied.
Abstract: We consider a hidden Markov model (HMM) with multidimensional observations, and where the coefficients (transition probability matrix, and observation conditional densities) depend on some unknown parameter. We study the asymptotic behaviour of two recursive estimators, the recursive maximum likelihood estimator (RMLE), and the recursive conditional least squares estimator (RCLSE), as the number of observations increases to infinity. Firstly, we exhibit the contrast functions associated with the two non-recursive estimators, and we prove that the recursive estimators converge a.s. to the set of stationary points of the corresponding contrast function. Secondly, we prove that the two recursive estimators are asymptotically normal.

99 citations


Journal ArticleDOI
TL;DR: A systems model with time-varying parameters for describing the responses of physical performance to training that would be useful for investigating the underlying mechanisms of adaptation and fatigue is assessed.
Abstract: Busso, Thierry, Christian Denis, Regis Bonnefoy, Andre Geyssant, and Jean-Rene Lacour. Modeling of adaptations to physical training by using a recursive least squares algorithm. J. Appl. Physiol. 8...

93 citations


Book
01 Jan 1997
TL;DR: This book investigates the ability of a neural network (NN) to learn how to control an unknown system, using data acquired on-line, that is during the process of attempting to exert control.
Abstract: This book investigates the ability of a neural network (NN) to learn how to control an unknown (nonlinear, in general) system, using data acquired on-line, that is, during the process of attempting to exert control. Two algorithms are developed to train the neural network for real-time control applications. The first algorithm is known as the Learning by Recursive Least Squares (LRLS) algorithm and the second is known as the Integrated Gradient and Least Squares (IGLS) algorithm. The ability of these algorithms to train the NN controller for real-time control is demonstrated on practical applications, and the local convergence and stability requirements of these algorithms are analysed. In addition, network topology, learning algorithms (particularly supervised learning) and neural network control strategies, including a new classification system for them, are presented.

85 citations


Journal ArticleDOI
TL;DR: In this paper, three kinds of adaptive filters are considered: the least mean square (LMS) algorithm, the recursive least squares (RLS) technique, and the Kalman filter.
Abstract: Chaotic modulation has recently been proposed for spread spectrum (SS) and code division multiple access (CDMA) communication. It embeds the signal of transmission in the bifurcating parameter of a chaotic dynamical system, and uses the wide-band output of the chaotic system as the transmitted signal. Here we consider the demodulation of this communication scheme using an adaptive filter. Not only can an adaptive filter reduce the effect of channel noise, but it also sequentially estimates the bifurcating parameter (i.e., the signal of transmission), as required in a communication system. Three kinds of adaptive filters are considered here: the least mean square (LMS) algorithm, the recursive least squares (RLS) technique, and the Kalman filter. It is found that the demodulators based on these adaptive filters outperform the standard inversion approach used in the literature.

Journal ArticleDOI
TL;DR: The forgetting factor RLS algorithm exhibits a variable performance that depends on the particular combination of the initialization and noise level, and it is shown that it is preferable to initialize the algorithm with a matrix of large norm.
Abstract: We investigate the convergence properties of the forgetting factor RLS algorithm in a stationary data environment. Using the settling time as our performance measure, we show that the algorithm exhibits a variable performance that depends on the particular combination of the initialization and noise level. Specifically, when the observation noise level is low (high SNR), RLS initialized with a matrix of small norm has exceptionally fast convergence, and convergence speed decreases as we increase the norm of the initialization matrix. In a medium SNR environment, the optimum convergence speed of the algorithm is reduced compared with the previous case; however, RLS becomes more insensitive to initialization. Finally, in a low SNR environment, we show that it is preferable to initialize the algorithm with a matrix of large norm.

Journal ArticleDOI
TL;DR: This paper investigates the application of a radial basis function (RBF) neural network to the prediction of field strength based on topographical and morphographical data and finds a hybrid algorithm that significantly enhances the real-time or adaptive capability of the RBF-based prediction model.
Abstract: This paper investigates the application of a radial basis function (RBF) neural network to the prediction of field strength based on topographical and morphographical data. The RBF neural network is a two-layer localized receptive field network whose output nodes form a combination of the radial activation functions computed by the hidden layer nodes. Appropriate centers and connection weights in the RBF network lead to a network that is capable of forming the best approximation to any continuous nonlinear mapping up to an arbitrary resolution. This approximation capability lets the prediction model accurately predict propagation loss over an arbitrary environment through adaptive learning from measurement data. The adaptive learning employs hybrid competitive and recursive least squares algorithms. The unsupervised competitive algorithm adjusts the centers while the recursive least squares (RLS) algorithm estimates the connection weights. Because these two learning rules are both linear, rapid convergence is guaranteed. This hybrid algorithm significantly enhances the real-time or adaptive capability of the RBF-based prediction model. Applications to Okumura's (1968) data are included to demonstrate the effectiveness of the RBF neural network approach.

Journal ArticleDOI
TL;DR: A novel adaptive multiuser CDMA detector structure is introduced; using either an extended Kalman filter (EKF) or a recursive least squares (RLS) formulation, adaptive algorithms may be derived which jointly estimate the transmitted bits of each user together with individual amplitudes and time delays.
Abstract: Existing multiuser code-division multiple-access (CDMA) detectors either have to rely on strict power control or near-perfect parameter estimation for reliable operation. A novel adaptive multiuser CDMA detector structure is introduced. Using either an extended Kalman filter (EKF) or a recursive least squares (RLS) formulation, adaptive algorithms which jointly estimate the transmitted bits of each user and individual amplitudes and time delays may be derived. The proposed detectors work in a tracking mode after initial delay acquisition is accomplished using other techniques not discussed here. Through computer simulations, we show that the algorithms perform better than a bank of single-user (SU) receivers in terms of near-far resistance. Practical issues such as the selection of adaptation parameters are also discussed.

Journal ArticleDOI
TL;DR: Comparisons with results from using standard least squares algorithms show that the ROLS algorithm can significantly improve the neural modelling accuracy and can also be applied to a large data set with much lower requirements on computer memory than the batch OLS algorithm.
Abstract: A recursive orthogonal least squares (ROLS) algorithm for multi-input, multi-output systems is developed in this paper and is applied to updating the weighting matrix of a radial basis function network. An illustrative example is given, to demonstrate the effectiveness of the algorithm for eliminating the effects of ill-conditioning in the training data, in an application of neural modelling of a multi-variable chemical process. Comparisons with results from using standard least squares algorithms, in batch and recursive form, show that the ROLS algorithm can significantly improve the neural modelling accuracy. The ROLS algorithm can also be applied to a large data set with much lower requirements on computer memory than the batch OLS algorithm.

Journal ArticleDOI
TL;DR: In this paper, a new adaptive filter algorithm called LMS/F was developed that combines the benefits of the least mean square (LMS) and least mean fourth (LMF) methods.
Abstract: A new adaptive filter algorithm has been developed that combines the benefits of the least mean square (LMS) and least mean fourth (LMF) methods. This algorithm, called LMS/F, outperforms the standard LMS algorithm judged at either constant convergence rate or constant misadjustment. While LMF outperforms LMS for certain noise profiles, its stability cannot be guaranteed for known input signals even for very small step sizes. However, both LMS and LMS/F have good stability properties, and LMS/F adds only a few more computations per iteration compared to LMS. Simulations of a non-stationary system identification problem demonstrate the performance benefits of the LMS/F algorithm.
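A common way to realize such an LMS/LMF combination is an error nonlinearity of the form e**3 / (e**2 + delta), which behaves like the LMF gradient (cubic) for small errors and like the LMS gradient (linear) for large ones. The sketch below is our hedged reading of that idea; `delta` is a design constant of our choosing, not a value from the paper.

```python
import numpy as np

def lmsf_step(w, u, d, mu=0.05, delta=0.5):
    """One LMS/F-style update with error nonlinearity e^3/(e^2 + delta) (sketch)."""
    e = d - w @ u
    return w + mu * (e**3 / (e**2 + delta)) * u, e

# noise-free identification of a 2-tap system
rng = np.random.default_rng(1)
h = np.array([1.0, -0.5])
w = np.zeros(2)
for _ in range(5000):
    u = rng.standard_normal(2)
    w, _ = lmsf_step(w, u, h @ u)
```

Because the nonlinearity saturates toward the LMS gradient for large errors, the update inherits LMS-like stability while keeping the LMF-like low misadjustment near convergence.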

Journal ArticleDOI
TL;DR: In this article, the recursive least-squares algorithm was adopted to investigate the estimation of surface heat flux in an inverse heat conduction problem from experimental data; the method is adequate for impulse heat flux estimation.

Proceedings ArticleDOI
01 Jan 1997
TL;DR: In this article, a procedure is presented to accelerate the convergence of the normalized LMS algorithm for colored inputs, where the correction is mostly in the direction of the largest eigenvector.
Abstract: A procedure is presented to accelerate the convergence of the normalized LMS algorithm for colored inputs. The usual NLMS algorithm reduces the distance between the estimated and true system weights, where the correction is in the direction of the input vector. For colored inputs the correction is mostly in the direction of the largest eigenvector. We therefore generate additional, NLMS-like, corrections of the weight vector in directions orthogonal to the input vector and orthogonal to each other. Simulated as well as measurement-based examples show a good acceleration of convergence, especially for high coherence between the input and the desired signal.
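The baseline NLMS update that this procedure accelerates corrects the weights along the input vector, scaled by its squared norm. A minimal sketch follows; the paper's additional corrections in directions orthogonal to the input vector are not shown, and the regularizer `eps` and all values are our own choices.

```python
import numpy as np

def nlms_step(w, u, d, mu=0.5, eps=1e-8):
    """Standard normalized LMS update: correction along the input vector u (sketch)."""
    e = d - w @ u
    return w + mu * e * u / (u @ u + eps), e

# noise-free identification with white input
rng = np.random.default_rng(3)
h = np.array([0.4, 0.3, -0.2, 0.1])
w = np.zeros(4)
for _ in range(1000):
    u = rng.standard_normal(4)
    w, _ = nlms_step(w, u, h @ u)
```

With white input this converges quickly; the slowdown the paper targets appears for colored inputs, where successive input vectors cluster around the dominant eigenvector of the input correlation matrix.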

Journal ArticleDOI
TL;DR: An extended least squares-based algorithm for feedforward networks is proposed; it eliminates the stalling problem experienced by pure least squares type algorithms while still maintaining fast convergence.
Abstract: An extended least squares-based algorithm for feedforward networks is proposed. The weights connecting the last hidden and output layers are first evaluated by a least squares algorithm. The weights between input and hidden layers are then evaluated using modified gradient descent algorithms. This arrangement eliminates the stalling problem experienced by pure least squares type algorithms while still maintaining fast convergence. In the investigated problems, the total number of FLOPS required for the networks to converge using the proposed training algorithm is only 0.221%-16.0% of that using the Levenberg-Marquardt algorithm. The number of floating point operations per iteration of the proposed algorithm is only 1.517-3.521 times that of the standard backpropagation algorithm.

Journal ArticleDOI
TL;DR: Simulation results indicate that for systems with poles close to the unit circle, where an (adaptive) FIR model of very high order would be required to meet a prescribed modeling error, an adaptive Laguerre-lattice model of relatively low order achieves the prescribed bound after just a few updates of the recursions in the adaptive algorithm.
Abstract: Adaptive Laguerre-based filters provide an attractive alternative to adaptive FIR filters in the sense that they require fewer parameters to model a linear time-invariant system with a long impulse response. We present an adaptive Laguerre-lattice structure that combines the desirable features of the Laguerre structure (i.e., guaranteed stability, unique global minimum, and small number of parameters M for a prescribed level of modeling error) with the numerical robustness and low computational complexity of adaptive FIR lattice structures. The proposed configuration is based on an extension to the IIR case of the FIR lattice filter; it is a cascade of identical sections but with a single-pole all-pass filter replacing the delay element used in the conventional (FIR) lattice filter. We utilize this structure to obtain computationally efficient adaptive algorithms (O(M) computations per time instant). Our adaptive Laguerre-lattice filter is an extension of the gradient adaptive lattice (GAL) technique, and it demonstrates the same desirable properties, namely, (1) excellent steady-state behavior, (2) relatively fast initial convergence (comparable with that of an RLS algorithm for the Laguerre structure), and (3) good numerical stability. Simulation results indicate that for systems with poles close to the unit circle, where an (adaptive) FIR model of very high order would be required to meet a prescribed modeling error, an adaptive Laguerre-lattice model of relatively low order achieves the prescribed bound after just a few updates of the recursions in the adaptive algorithm.
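The underlying Laguerre structure can be sketched as a first-order lowpass front end followed by a cascade of identical single-pole all-pass sections A(z) = (z**-1 - a)/(1 - a*z**-1) replacing the delays of an FIR tapped line. The direct-form sketch below shows that structure (not the lattice itself); the function name and normalization are our own.

```python
import numpy as np

def laguerre_taps(x, a, m):
    """Tap signals of a Laguerre filter bank with pole a (direct-form sketch)."""
    g = np.sqrt(1.0 - a * a)          # front-end normalization
    taps = np.zeros((m, len(x)))
    s = 0.0                           # front end: g / (1 - a*z^-1)
    for n, xn in enumerate(x):
        s = a * s + xn
        taps[0, n] = g * s
    for k in range(1, m):             # each further tap: one all-pass section
        xprev = yprev = 0.0
        for n in range(len(x)):
            y = xprev - a * taps[k - 1, n] + a * yprev
            xprev, yprev = taps[k - 1, n], y
            taps[k, n] = y
    return taps

# with a = 0 the sections degenerate to pure delays (ordinary FIR line)
t = laguerre_taps(np.arange(1.0, 6.0), a=0.0, m=3)
```

Choosing the pole `a` near the dominant pole of the system being modeled is what lets a short Laguerre expansion capture a long impulse response.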

Journal ArticleDOI
TL;DR: The methods of penalized least-squares and cross-validation balance the bias-variance tradeoff and lead to a closed form expression for the estimator, which is simultaneously optimal in a "small-sample", predictive sum of squares sense and asymptotically optimal in the mean square sense.
Abstract: This letter develops an optimal, nonlinear estimator of a deterministic signal in noise. The methods of penalized least-squares and cross-validation (CV) balance the bias-variance tradeoff and lead to a closed form expression for the estimator. The estimator is simultaneously optimal in a "small-sample", predictive sum of squares sense and asymptotically optimal in the mean square sense.

Journal ArticleDOI
TL;DR: A unified training method is proposed for the link weights of Gaussian radial basis function networks (GRBFN) and the shape parameter α of the RBF; the forgetting factor λ, which affects the speed of convergence, is also discussed.
Abstract: This paper proposes a unified training method for the link weights of Gaussian radial basis function networks (GRBFN) and the shape parameter α of the RBF. The training method for the former is a kind of recursive least squares backpropagation (RLS-BP) learning algorithm, which is an exactly recursive method; the training method for the latter is an adaptive gradient descent (AGD) search algorithm, which is an approximate method. We use one-dimensional images of radar targets to study the effect of the shape parameter α on the recognition rate, survey the changes of the shape parameters α of the radial basis functions corresponding to different hidden nodes, and present the judgement confidence curves of different radar targets. In addition, the forgetting factor λ, which affects the speed of convergence, is also discussed. The experimental results are presented.

Proceedings ArticleDOI
09 Jun 1997
TL;DR: In this article, an adaptive nonlinear RLS algorithm for robust filtering in impulse noise is presented and the analysis of the mean and mean-square behaviors is carried out and verified by simulation.
Abstract: An adaptive nonlinear RLS algorithm for robust filtering in impulse noise is presented. The analysis of the mean and mean-square behaviours is carried out and verified by simulation. It is shown that the new algorithm can provide a robust performance against impulse noise and outperform the LMS counterpart and the RLS algorithm particularly when there is impulse noise.

Journal ArticleDOI
TL;DR: The results indicate that the performance of the direct implementation of the leaky LMS adaptive filter is superior to that of the random noise implementation in all respects, however, for small leakage factors, these performance differences are negligible.
Abstract: The leaky LMS adaptive filter can be implemented either directly or by adding random white noise to the input signal of the LMS adaptive filter. In this correspondence, we analyze and compare the mean-square performances of these two adaptive filter implementations for system identification tasks with zero mean i.i.d. input signals. Our results indicate that the performance of the direct implementation is superior to that of the random noise implementation in all respects. However, for small leakage factors, these performance differences are negligible.
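The direct implementation referred to above shrinks the weights by a leakage factor every step; the alternative realizes the same mean behavior by adding white noise to the input of ordinary LMS. A sketch of the direct form, in our own notation and with our own parameter values:

```python
import numpy as np

def leaky_lms_step(w, u, d, mu=0.01, gamma=0.01):
    """Direct leaky LMS: (1 - mu*gamma) shrinks the weights each step (sketch)."""
    e = d - w @ u
    return (1.0 - mu * gamma) * w + mu * e * u, e

# with white unit-variance input the leak biases the steady-state
# solution toward h / (1 + gamma)
rng = np.random.default_rng(4)
h = np.array([0.8, -0.4])
w = np.zeros(2)
for _ in range(10000):
    u = rng.standard_normal(2)
    w, _ = leaky_lms_step(w, u, h @ u)
```

The leak regularizes the weight recursion (useful when the input correlation matrix is near-singular) at the cost of the small bias visible in the steady-state solution.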

Journal ArticleDOI
TL;DR: Under the assumption that the constrained and weighted linear least squares subproblems arising in the Gauss-Newton method are not too ill conditioned, global convergence towards a first-order KKT point is proved.
Abstract: A hybrid algorithm consisting of a Gauss-Newton method and a second-order method for solving constrained and weighted nonlinear least squares problems is developed, analyzed, and tested. One of the advantages of the algorithm is that arbitrarily large weights can be handled and that the weights in the merit function do not get unnecessarily large when the iterates diverge from a saddle point. The local convergence properties of the Gauss-Newton method are thoroughly analyzed, and simple ways of estimating and calculating its local convergence rate are given. Under the assumption that the constrained and weighted linear least squares subproblems arising in the Gauss-Newton method are not too ill conditioned, global convergence towards a first-order KKT point is proved.

Journal ArticleDOI
TL;DR: A new algorithm is developed which guarantees that the normalized bias in the weight vector due to persistent and bounded data perturbations is bounded; it is termed the robust recursive least squares (RRLS) algorithm since it resembles the RLS algorithm in structure and is robust with respect to persistent bounded data perturbations.
Abstract: A new algorithm is developed which guarantees that the normalized bias in the weight vector due to persistent and bounded data perturbations is bounded. A robustness analysis for this algorithm is presented, and an approximate recursive implementation is also proposed. It is termed the robust recursive least squares (RRLS) algorithm since it resembles the RLS algorithm in structure and is robust with respect to persistent bounded data perturbations. Simulation results are presented to illustrate the efficacy of the RRLS algorithm.

Journal ArticleDOI
TL;DR: It is recommended that the weighted least squares procedure be further studied by electric utilities which use neural networks to forecast their short-term load, and experience large variabilities in their hourly marginal energy costs during a 24-hour period.
Abstract: The use of a weighted least squares procedure when training a neural network to solve the short-term load forecasting (STLF) problem is investigated. Our results indicate that a neural network that implements the weighted least squares procedure outperforms one that implements the least squares procedure during the on-peak period for the two performance criteria specified (MAE% and COST), and during the entire period for the COST criterion. It is therefore recommended that the weighted least squares procedure be further studied by electric utilities which use neural networks to forecast their short-term load and experience large variabilities in their hourly marginal energy costs during a 24-hour period.

Journal ArticleDOI
TL;DR: This analysis of Self-Referential Linear Stochastic models under bounded rationality, in which agents update their beliefs by means of the Least Mean Squares algorithm, proves convergence of the learning mechanism.

Dissertation
01 Jul 1997
TL;DR: The design of an application-specific integrated circuit of a parallel array processor is considered for recursive least squares by QR decomposition using Givens rotations, applicable in adaptive filtering and beamforming applications, and a novel algorithm, based on the Squared Givens Rotation (SGR) algorithm, is developed.
Abstract: The design of an application-specific integrated circuit of a parallel array processor is considered for recursive least squares by QR decomposition using Givens rotations, applicable in adaptive filtering and beamforming applications. Emphasis is on high sample-rate operation, which, for this recursive algorithm, means that the time to perform arithmetic operations is critical. The algorithm, architecture and arithmetic are considered in a single integrated design procedure to achieve optimum results. A realisation approach using standard arithmetic operators (add, multiply and divide) is adopted. The design of high-throughput operators with low delay is addressed for fixed- and floating-point number formats, and the application of redundant arithmetic considered. New redundant multiplier architectures are presented enabling reductions in area of up to 25%, whilst maintaining low delay. A technique is presented enabling the use of a conventional tree multiplier in recursive applications, allowing savings in area and delay. Two new divider architectures are presented showing benefits compared with the radix-2 modified SRT algorithm. Givens rotation algorithms are examined to determine their suitability for VLSI implementation. A novel algorithm, based on the Squared Givens Rotation (SGR) algorithm, is developed enabling the sample-rate to be increased by a factor of approximately 6 and offering area reductions up to a factor of 2 over previous approaches. An estimated sample-rate of 136 MHz could be achieved using a standard cell approach and 0.35 μm CMOS technology. The enhanced SGR algorithm has been compared with a CORDIC approach and shown to benefit by a factor of 3 in area and over 11 in sample-rate. When compared with a recent implementation on a parallel array of general purpose (GP) DSP chips, it is estimated that a single application specific chip could offer up to 1,500 times the computation obtained from a single GP DSP chip.

Journal ArticleDOI
01 May 1997
TL;DR: In this paper, an adaptive active control mechanism for vibration suppression using genetic algorithms (GAs) is presented, where GAs are used to estimate the adaptive controller characteristics, where the controller is designed on the basis of optimal vibration suppression with the plant model.
Abstract: This paper presents an investigation into the development of an adaptive active control mechanism for vibration suppression using genetic algorithms (GAs). GAs are used to estimate the adaptive controller characteristics, where the controller is designed on the basis of optimal vibration suppression using the plant model. This is realized by minimizing the prediction error of the actual plant output and the model output. A MATLAB GA toolbox is used to identify the controller parameters. A comparative performance of the conventional recursive least-squares (RLS) scheme and the GA is presented. The active vibration control system is implemented with both the GA and the RLS schemes, and its performance assessed in the suppression of vibration along a flexible beam structure in each case.