
Showing papers on "Recursive least squares filter published in 1983"


Journal ArticleDOI
TL;DR: For the commonly occurring statistical problem of minimizing a least squares expression subject to side constraints, a simple iterative algorithm is presented and shown to converge to the desired solution.
Abstract: A commonly occurring problem in statistics is that of minimizing a least squares expression subject to side constraints. Here a simple iterative algorithm is presented and shown to converge to the desired solution. Several examples are presented, including finding the closest concave (convex) function to a set of points and other general quadratic programming problems. The dual problem to the basic problem is also discussed and a solution for it is given in terms of the algorithm. Finally, extensions to expressions other than least squares are given.

572 citations
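
The constrained least-squares problem described above can be illustrated with a generic projected-gradient sketch. This is not the paper's specific algorithm, only a minimal example of iteratively minimizing a least squares expression subject to a side constraint (here x >= 0); all names and values are illustrative.

```python
import numpy as np

# Illustrative sketch: minimize ||Ax - b||^2 subject to x >= 0 by
# projected gradient descent (a generic constrained-LS iteration,
# not the algorithm of the paper above).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, 2.0, 0.0, 0.5])   # feasible: satisfies x >= 0
b = A @ x_true

x = np.zeros(5)
step = 1.0 / np.linalg.norm(A.T @ A, 2)        # safe step: 1 / ||A^T A||_2
for _ in range(2000):
    x = x - step * (A.T @ (A @ x - b))         # gradient step on the LS objective
    x = np.maximum(x, 0.0)                     # project onto the constraint set

residual = np.linalg.norm(A @ x - b)
```

Because the true solution is feasible and A has full column rank, the iteration converges to it and the residual goes to zero.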


Proceedings ArticleDOI
28 Nov 1983
TL;DR: A systolic array for performing recursive least-squares minimization is described; it performs an orthogonal triangularization of the data matrix using a pipelined sequence of Givens rotations.
Abstract: A systolic array for performing recursive least-squares minimization is described. It performs an orthogonal triangularization of the data matrix using a pipelined sequence of Givens rotations. © (1983) COPYRIGHT SPIE--The International Society for Optical Engineering.

295 citations
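
The orthogonal triangularization such an array pipelines can be sketched serially in NumPy: each new data row is annihilated into the triangular factor R by a sequence of Givens rotations. This is a plain serial sketch of the computation, not the systolic implementation itself.

```python
import numpy as np

def givens_update(R, row):
    """Rotate a new data `row` into upper-triangular R, one Givens
    rotation per column, annihilating the row entry by entry."""
    R = R.copy()
    row = row.astype(float).copy()
    for i in range(R.shape[0]):
        r = np.hypot(R[i, i], row[i])
        if r == 0.0:
            continue
        c, s = R[i, i] / r, row[i] / r
        Ri, rowi = R[i, i:].copy(), row[i:].copy()
        R[i, i:] = c * Ri + s * rowi     # rotate the pair of rows so the
        row[i:] = -s * Ri + c * rowi     # leading entry of `row` becomes 0
    return R

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 3))
R = np.zeros((3, 3))
for x_row in X:                           # recursive triangularization, row by row
    R = givens_update(R, x_row)

# R'R should equal X'X, i.e. R is a valid triangular factor of the data matrix
ok = np.allclose(R.T @ R, X.T @ X)
```

In the recursive least-squares setting, the same update is applied to each incoming data row, so the triangular factor is maintained without ever forming or inverting the normal-equations matrix.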


Journal ArticleDOI
TL;DR: An efficient method is introduced for optimizing, in the least-squares response-error sense, the remaining unquantized coefficients of an FIR linear-phase digital filter when one or more of the filter coefficients take on discrete values.
Abstract: An efficient method for optimizing (in the least squares response error sense) the remaining unquantized coefficients of an FIR linear phase digital filter when one or more of the filter coefficients take on discrete values is introduced. By incorporating this optimization method into a tree search algorithm and employing a suitable branching policy, an efficient algorithm for the design of high-order discrete coefficient FIR filters is produced. This approach can also be used to design FIR filters on a minimax basis; the minimax criterion is approximated by adjusting the least squares weighting. Results show that the least squares criterion is capable of designing filters of order roughly three times higher than other approaches can handle for the same computer time. The discrete coefficient spaces discussed include the evenly distributed finite wordlength space as well as the nonuniformly distributed powers-of-two space.

240 citations


Journal ArticleDOI
TL;DR: This article showed that least squares cross-validation is asymptotically optimal for density estimation, rather than simply consistent, under tail conditions only slightly more severe than the hypothesis of finite variance.
Abstract: We prove that the method of cross-validation suggested by A. W. Bowman and M. Rudemo achieves its goal of minimising integrated squared error, in an asymptotic sense. The tail conditions we impose are only slightly more severe than the hypothesis of finite variance, and so least squares cross-validation does not exhibit the pathological behaviour which has been observed for Kullback-Leibler cross-validation. This is apparently the first time that a cross-validatory procedure for density estimation has been shown to be asymptotically optimal, rather than simply consistent.

229 citations
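
The criterion the result concerns can be sketched for a Gaussian kernel density estimate: the bandwidth minimising the least-squares cross-validation score approximately minimises integrated squared error. The data and bandwidth grid below are illustrative.

```python
import numpy as np

def lscv(h, x):
    """LSCV(h) = int fhat^2 - (2/n) * sum_i fhat_{-i}(x_i),
    for a Gaussian kernel with bandwidth h (closed forms)."""
    n = len(x)
    d = x[:, None] - x[None, :]
    phi = lambda u, s: np.exp(-u**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
    term1 = phi(d, h * np.sqrt(2)).sum() / n**2                 # int fhat^2
    loo = (phi(d, h).sum() - n * phi(0.0, h)) / (n * (n - 1))   # leave-one-out mean
    return term1 - 2 * loo

rng = np.random.default_rng(2)
x = rng.standard_normal(200)
hs = np.linspace(0.05, 1.5, 30)
scores = [lscv(h, x) for h in hs]
h_best = hs[int(np.argmin(scores))]       # cross-validated bandwidth
```

Up to a constant not depending on h, LSCV(h) is an unbiased estimate of the integrated squared error, which is why minimising it over h is asymptotically optimal in the sense proved above.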


Journal ArticleDOI
TL;DR: In this article, specific implementations of the finite impulse response (FIR) block adaptive filter in the frequency domain are presented and some of their important properties are discussed, and the time-domain block adaptive filtering is shown to be equivalent to the frequency-domain adaptive filtering, provided data sectioning is done properly.
Abstract: Specific implementations of the finite impulse response (FIR) block adaptive filter in the frequency domain are presented and some of their important properties are discussed. The time-domain block adaptive filter implemented in the frequency domain is shown to be equivalent to the frequency-domain adaptive filter (derived in the frequency domain), provided data sectioning is done properly. All of the known time- and frequency-domain adaptive filters [1]-[12], [16]-[18] are contained in the set of possible block adaptive filter structures. Thus, the block adaptive filter is generic and its formulation unifies the current theory of time- and frequency-domain FIR adaptive filter structures. A detailed analysis of overlap-save and overlap-add implementations shows that the former is to be preferred for adaptive applications because it requires less computation, a fact that is not true for fixed coefficient filters.

197 citations
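
One member of the block adaptive filter family discussed above, an overlap-save frequency-domain block LMS filter with the usual gradient constraint, can be sketched as follows. The structure (N-tap filter, block length N, FFT length 2N, per-bin normalisation) and all parameter values are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

# Overlap-save frequency-domain block LMS sketch: identify an unknown
# N-tap FIR system from its input and output.
N = 8
mu = 0.5
rng = np.random.default_rng(3)
h_true = rng.standard_normal(N)            # unknown system to identify
x = rng.standard_normal(4000)
d = np.convolve(x, h_true)[:len(x)]        # desired signal = system output

W = np.zeros(2 * N, dtype=complex)         # weights kept in the frequency domain
for k in range(N, len(x) - N, N):
    X = np.fft.fft(x[k - N:k + N])         # overlap-save: 2N-sample input segment
    y = np.fft.ifft(X * W).real[N:]        # keep only the last N (valid) outputs
    e = d[k:k + N] - y
    E = np.fft.fft(np.concatenate([np.zeros(N), e]))
    G = np.conj(X) * E / (np.abs(X)**2 + 1e-6)   # normalised gradient per bin
    g = np.fft.ifft(G).real
    g[N:] = 0.0                            # gradient constraint: circular -> linear
    W = W + mu * np.fft.fft(g)

final_error = np.mean(e**2)
```

The constraint step (zeroing the last N time-domain gradient samples) is what makes this frequency-domain update equivalent to a time-domain block adaptive filter with proper data sectioning, as the paper's equivalence result describes.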


Proceedings ArticleDOI
14 Apr 1983
TL;DR: Aspects of the dynamic convergence behavior of an adaptive filter algorithm for constant-envelope waveforms are discussed, with conclusions supported by simulation.
Abstract: An adaptive filter algorithm has been developed and introduced [1] for use with constant envelope waveforms, e.g., FM communication signals. It has proven capable of suppressing additive interferers as well as performing equalization, without the need for a priori statistical information. In this paper, aspects of dynamic convergence behavior are discussed, with conclusions supported by simulation.

80 citations


Journal ArticleDOI
TL;DR: In this paper, the authors translate these "persistency of excitation" conditions into "sufficiently rich" conditions on the plant noise and inputs, and show that with sufficiently rich inputs, guaranteed convergence rates of prediction errors improve.
Abstract: In least-squares parameter estimation schemes, "persistency of excitation" conditions on the plant states are required for consistent estimation. In the case of extended least squares, the persistency conditions are on the state estimates. Here, these "persistency of excitation" conditions are translated into "sufficiently rich" conditions on the plant noise and inputs. In the case of adaptive minimum variance control schemes, the "sufficiently rich" conditions are on the noise and specified output trajectory. With sufficiently rich input signals, guaranteed convergence rates of prediction errors improve, and it is conjectured that the algorithms are consequently more robust.

80 citations


DOI
01 Jan 1983
TL;DR: It is shown that a convergence result established by Cordero and Mayne for a self-tuning regulator with variable forgetting factor can also be proved for ordinary recursive least squares with covariance resetting.
Abstract: A recent paper by Cordero and Mayne has established convergence of a self-tuning regulator with variable forgetting factor. The short paper shows that a similar result can be proved for ordinary recursive least squares with covariance resetting.

56 citations
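
A minimal sketch of the scheme in question, recursive least squares with exponential forgetting plus periodic covariance resetting, is below. The forgetting factor, reset interval, and reset value are illustrative choices, not taken from the paper.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One RLS update with exponential forgetting factor `lam`."""
    k = P @ phi / (lam + phi @ P @ phi)        # gain vector
    theta = theta + k * (y - phi @ theta)      # prediction-error correction
    P = (P - np.outer(k, phi @ P)) / lam       # covariance update
    return theta, P

rng = np.random.default_rng(4)
theta_true = np.array([0.7, -0.2])
theta, P = np.zeros(2), 100.0 * np.eye(2)
for t in range(500):
    phi = rng.standard_normal(2)
    y = phi @ theta_true + 0.01 * rng.standard_normal()
    theta, P = rls_step(theta, P, phi, y)
    if t % 100 == 99:
        P = 100.0 * np.eye(2)                  # covariance resetting

err = np.linalg.norm(theta - theta_true)
```

Resetting P periodically keeps the gain from decaying to zero, so the estimator retains the ability to track parameter changes, which is the practical motivation for both resetting and forgetting.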


01 Jan 1983
TL;DR: This work is concerned with fast algorithms for integral equations and least squares identification problems; the presentation is divided into three parts.
Abstract: This work is concerned with fast algorithms for integral equations and least squares identification problems. The presentation is divided into three parts. In the first part a fast algorithm for sol ...

45 citations


Proceedings ArticleDOI
01 Dec 1983
TL;DR: In this paper, the equivalence between the Kalman filter and a particular least squares regression problem is established; it is suggested that the regression problem be solved robustly, and the possibility of gleaning more information from past data is discussed.
Abstract: We consider the problem of robustifying the Kalman filter. First, we review some known approaches to the problem. Then we establish the equivalence between the Kalman filter and a particular least squares regression problem. We suggest that the regression problem be solved robustly. Some well known approaches for doing this are discussed. Finally, the possibility of gleaning more information from past data is discussed.

45 citations


Proceedings ArticleDOI
14 Apr 1983
TL;DR: The following new results are obtained: necessary and sufficient conditions of convergence, optimal adjustment gains and optimal convergence rates, interrelationship between LMS and NLMS gains, and non-stationary algorithm design.
Abstract: The main contribution of this paper is the unified treatment of convergence analysis for both LMS and NLMS adaptive algorithms. The following new results are obtained: (i) necessary and sufficient conditions of convergence, (ii) optimal adjustment gains and optimal convergence rates, (iii) interrelationship between LMS and NLMS gains, and (iv) non-stationary algorithm design.
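
The two updates whose convergence is analysed can be sketched side by side. The NLMS gain is the LMS gain normalised by the instantaneous input power, which is the interrelationship between the two gains referred to in result (iii); all parameter values below are illustrative.

```python
import numpy as np

def lms(w, x, d, mu):
    e = d - w @ x
    return w + mu * e * x                       # fixed adjustment gain

def nlms(w, x, d, mu, eps=1e-8):
    e = d - w @ x
    return w + (mu / (x @ x + eps)) * e * x     # gain normalised by ||x||^2

rng = np.random.default_rng(5)
w_true = np.array([1.0, -0.5, 0.25, 0.0])
w1 = np.zeros(4)
w2 = np.zeros(4)
for _ in range(2000):
    x = rng.standard_normal(4)
    d = w_true @ x                              # noiseless system identification
    w1 = lms(w1, x, d, mu=0.05)
    w2 = nlms(w2, x, d, mu=0.5)

e1 = np.linalg.norm(w1 - w_true)
e2 = np.linalg.norm(w2 - w_true)
```

The normalisation makes the NLMS stability condition (roughly 0 < mu < 2) independent of the input power, whereas the LMS step size must be scaled to the input statistics.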

Journal ArticleDOI
TL;DR: This note supplements the paper of Sakai with the derivation of optimal recursive least squares circular lattice estimation algorithms based on the geometric method of Lee, Morf, and Friedlander.
Abstract: This note supplements the paper of Sakai [1] about circular lattice filtering with the derivation of optimal recursive least squares circular lattice estimation algorithms. The derivation is based on the geometric method of Lee, Morf, and Friedlander [2]. The same method is also applied to obtain an optimal recursive estimation algorithm for escalator structure of Ahmed and Youn [3].

Journal ArticleDOI
TL;DR: This correspondence presents a new fast-convergence algorithm for the frequency-domain adaptive filter and demonstrates its applicability to acoustic noise cancellation in speech signals.
Abstract: This correspondence presents a new fast convergence algorithm for the frequency-domain adaptive filter and its applicability to acoustic noise cancellation in speech signals.

Journal ArticleDOI
TL;DR: This paper proposes two techniques to approximate a given impulse response as a degenerate sequence that is realizable as a recursive difference equation and uses a least squares error criterion to minimize the difference between the given and the approximated impulse responses.
Abstract: In this paper, problems associated with the synthesis and implementation of recursive linear shift-variant digital filters are investigated. We propose two techniques to approximate a given impulse response as a degenerate sequence that is realizable as a recursive difference equation. Both techniques use a least squares error criterion to minimize the difference between the given and the approximated impulse responses. Numerical examples illustrating and comparing the results of these techniques are included. In addition, we present several recursive structures for the implementation of both causal and noncausal degenerate impulse responses.

Journal ArticleDOI
TL;DR: The architectures and implementations of the LMS/transversal, fast-converging FRLS filter, and lattice filter algorithms which minimize the required hardware speed are discussed.
Abstract: Adaptive filters, employing the transversal filter structure and the least mean square (LMS) adaptation algorithm, or its variations, have found wide application in data transmission equalization, echo cancellation, prediction, spectral estimation, on-line system identification, and antenna arrays. Recently, in response to requirements of fast start-up, or fast tracking of temporal variations, fast recursive least squares (FRLS) adaptation algorithms for both transversal and lattice filter structures have been proposed. These algorithms offer faster convergence than is possible with the LMS/ transversal adaptive filters, at the price of a five-to-tenfold increase in the number of multiplications, divisions, and additions. Here we discuss architectures and implementations of the LMS/transversal, fast-converging FRLS filter, and lattice filter algorithms which minimize the required hardware speed. We show how each of these algorithms can be partitioned so as to be realizable with an architecture based on multiple parallel processors.

Journal ArticleDOI
TL;DR: In this paper, the authors highlight the superiority of the Kalman filter over Ordinary Least Squares for estimating the unknown coefficients of the classical linear regression model and analyze both methods with respect to their optimality properties and their usefulness in dealing with multicollinearity.
Abstract: The purpose of this paper is to highlight the superiority of the Kalman filter over Ordinary Least Squares for estimating the unknown coefficients of the classical linear regression model. Both methods are analyzed with respect to their optimality properties and their usefulness in dealing with multicollinearity. Theoretical results are applied to two economic models.
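
The comparison can be made concrete with a minimal sketch: a Kalman filter with a constant-coefficient state model and a diffuse prior, run over the observations one at a time, recursively reproduces the Ordinary Least Squares estimate. The data and prior values here are illustrative.

```python
import numpy as np

# Kalman filter for the static regression y_t = x_t' beta + noise:
# state = beta (constant, no process noise), measurement matrix = x_t'.
rng = np.random.default_rng(6)
n, p = 50, 3
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta = np.zeros(p)
P = 1e6 * np.eye(p)                        # diffuse prior on the coefficients
R = 0.1**2                                 # measurement-noise variance
for x_t, y_t in zip(X, y):
    S = x_t @ P @ x_t + R                  # innovation variance
    K = P @ x_t / S                        # Kalman gain
    beta = beta + K * (y_t - x_t @ beta)   # measurement update
    P = P - np.outer(K, x_t @ P)

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
gap = np.linalg.norm(beta - beta_ols)      # recursive estimate vs batch OLS
```

The filter's advantage in the paper's setting comes from what this sketch deliberately omits: an informative prior (which shrinks the estimate and helps under multicollinearity) and process noise (which allows time-varying coefficients).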

Journal ArticleDOI
TL;DR: In this paper, an algorithm that combines the Prony series method with the recursive least squares algorithm is described, which eliminates the need to invert any matrices and also requires only part of the data to be available at one particular time.

Proceedings ArticleDOI
01 Apr 1983
TL;DR: The introduced method is the fastest known algorithm featuring the rapid convergence characteristics of exact Least Squares estimation schemes, together with a balanced role for forward and backward prediction.
Abstract: The present paper deals with a new, computationally efficient algorithm for Sequential Least Squares (LS) estimation. This scheme requires only O(5p) MAD (Multiplications And Divisions) per recursion to update a Kalman-type gain vector; p is the number of estimated parameters. In contrast, the well-known fast Kalman algorithm requires O(8p) MAD. The introduced method is the fastest known algorithm featuring the rapid convergence characteristics of exact Least Squares estimation schemes. Another interesting feature of the new algorithm is the balanced role that forward and backward prediction play.

Journal ArticleDOI
TL;DR: In this article, the Magill adaptive filter is used to detect known signals in the presence of Gauss-Markov noise and the various hypotheses are accounted for outside the bank of Kalman filters, and thus all filters have the same gains and error covariances.
Abstract: The Magill adaptive filter can be used to detect known signals in the presence of Gauss-Markov noise. In this application, the various hypotheses are accounted for outside the bank of Kalman filters, and thus all filters have the same gains and error covariances. This commonality makes it feasible to use the Magill scheme in large-scale multiple-hypothesis testing applications.

Journal ArticleDOI
TL;DR: A systolic array for performing recursive least-squares minimisation performs an orthogonal triangularisation of the data matrix using a sequence of Givens rotations, and generates the required residual without having to solve the associated triangular linear system by back-substitution.
Abstract: A systolic array for performing recursive least-squares minimisation is described. It performs an orthogonal triangularisation of the data matrix using a sequence of Givens rotations, and generates the required residual without having to solve the associated triangular linear system by back-substitution. Since the back-substitution process may be ill conditioned and numerically unstable, the reliability and robustness of the method is greatly improved as a result, whilst the amount of circuitry and computation is reduced.


Journal ArticleDOI
TL;DR: In this article, the Fast Kalman Algorithm (FKA) was proposed for adaptive deconvolution of seismic data, which does not require the storage and updating of a matrix to calculate the filter gain and hence is computationally very efficient.
Abstract: The application of a recently proposed fast implementation of the recursive least squares algorithm, called the Fast Kalman Algorithm (FKA) to adaptive deconvolution of seismic data is discussed. The newly proposed algorithm does not require the storage and updating of a matrix to calculate the filter gain, and hence is computationally very efficient. Furthermore, it has an interesting structure yielding both the forward and backward prediction residuals of the seismic trace and thus permits the estimation of a "smoothed residual" without any additional computations. The paper also compares the new algorithm with the conventional Kalman algorithm (CKA) proposed earlier [3] for seismic deconvolution. Results of experiments on simulated as well as real data show that while the FKA is a little more sensitive to the choice of some initial parameters which need to be selected carefully, it can yield comparable performance with greatly reduced computational effort.

Proceedings ArticleDOI
01 Apr 1983
TL;DR: Fast algorithms for solving normal equations for the filter coefficients have been outlined in this paper and an application of one of these algorithms for spectral estimation concludes the paper.
Abstract: A general finite impulse response (FIR) filter can be used as a linear prediction filter, if given only an input sample sequence, or as a system identification model, if given the input and output sequences from an unknown system. With known correlation, the coefficients of the FIR filter that minimize the mean square error in both applications are found by solution of a set of normal equations with Toeplitz structure. Using only data samples, the coefficients that yield the least squared error in both applications are found by solution of a set of normal equations with near-to-Toeplitz structure. Computationally efficient (fast) algorithms have been published to solve for the coefficients from both types of normal equation structures. If the FIR filter is constrained to have a linear phase, then the impulse response must be symmetric. This then leads to normal equations with Toeplitz-plus-Hankel or near-to-Toeplitz-plus-Hankel structure. Fast algorithms for solving these normal equations for the filter coefficients are outlined in this paper. They have computational complexity proportional to M^2 and parameter storage proportional to M, where M is the filter order. An application of one of these algorithms for spectral estimation concludes the paper.

Journal ArticleDOI
TL;DR: In this article, a weighted linear least squares (WLLS) procedure is proposed to test linear hypotheses about the parameters of the nonlinear models and the advantages and disadvantages of the WLLS procedure are discussed.
Abstract: Suppose the same nonlinear function involving k parameters is fit to each of t populations. Suppose further it is of interest to compare a specific parameter of the models across the populations. Such comparisons can be expressed as linear hypotheses about the parameters of the nonlinear models. A weighted linear least squares (WLLS) procedure is proposed to test these linear hypotheses. The advantages and disadvantages of the WLLS procedure are discussed. This procedure is also compared to a nonlinear least squares procedure for testing these hypotheses in nonlinear models.

Journal ArticleDOI
Michael L. Honig1
TL;DR: In this article, a simple model characterizing the convergence properties of an adaptive digital lattice filter using gradient algorithms has been extended to the least mean square (LMS) lattice joint process estimator.
Abstract: A simple model characterizing the convergence properties of an adaptive digital lattice filter using gradient algorithms has been reported [1]. This model is extended to the least mean square (LMS) lattice joint process estimator [5], and to the least squares (LS) lattice and "fast" Kalman algorithms [9] -[16]. The models in each case are compared with computer simulation. The single-stage LMS lattice analysis presented in [1] is also applied to the LS lattice. Results indicate that for stationary inputs, the LMS lattice and LS algorithms exhibit similar behavior.

Proceedings ArticleDOI
01 Apr 1983
TL;DR: The steady state output error of the Least Mean Square (LMS) Adaptive Algorithm due to the finite precision arithmetic of a digital processor is analyzed and the relation between the quantization error and the error that occurs when adaptation possibly ceases due to quantization is investigated.
Abstract: The steady state output error of the Least Mean Square (LMS) Adaptive Algorithm due to the finite precision arithmetic of a digital processor is analyzed. It is found to consist of three terms: (1) the error due to the input data quantization, (2) the error due to the rounding of the arithmetic operations in calculating the filter's output, and (3) the error due to the deviation of the filter's coefficients from the values they take when infinite precision arithmetic is used. The last term is inversely proportional to the adaptation step size µ. Both fixed and floating point arithmetics are examined. The relation between the quantization error and the error that occurs when adaptation possibly ceases due to quantization is also investigated.
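
The third error term above, coefficient deviation that grows as the step size shrinks, can be illustrated with an assumed simulation in which the stored weights are rounded to a fixed quantisation step: once an LMS update falls below half a quantisation step, it rounds to nothing and adaptation stalls.

```python
import numpy as np

rng = np.random.default_rng(7)
w_true = np.array([0.8, -0.3])
q = 2.0**-10                               # coefficient quantisation step

def run_lms(mu, steps=20000):
    """LMS identification with weights rounded to multiples of q."""
    w = np.zeros(2)
    for _ in range(steps):
        x = rng.standard_normal(2)
        d = w_true @ x
        # quantise the stored coefficients after each update
        w = np.round((w + mu * (d - w @ x) * x) / q) * q
    return np.linalg.norm(w - w_true)

err_small_mu = run_lms(mu=0.002)           # small step: stalls far from w_true
err_large_mu = run_lms(mu=0.05)            # larger step: stalls much closer
```

Adaptation roughly stalls when mu times the update magnitude drops below q/2, so the residual weight error scales like q/mu: inversely proportional to the step size, as the analysis above finds.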

Proceedings ArticleDOI
14 Apr 1983
TL;DR: Simulations are used to verify the performance of the proposed algorithms and the results confirm the theoretical predictions, and a comparison of performances of different algorithms applied to real speech signals is given.
Abstract: We propose the application of modified recursive least-squares algorithms and least-squares lattice algorithms to echo cancellation problems to provide fast convergence and overcome the effects of double-talking. The modified algorithms run continuously, but the gain updates become negligibly small during the double-talking intervals. Thus, detection of the double-talking and virtual freezing of the adaptive filter weights are both built into the algorithm. Simulations are used to verify the performance of the proposed algorithms, and the results confirm the theoretical predictions. A comparison of performances of different algorithms applied to real speech signals is also given in the paper.

DOI
Y.M. El-Fattah1
01 Nov 1983
TL;DR: In this article, a new recursive algorithm for adaptive Kalman filtering is proposed. The signal state-space model and its noise statistics are assumed to depend on an unknown parameter taking values in a subset of R^s, and the parameter is estimated recursively using the gradient of the innovation sequence of the Kalman filter.
Abstract: A new recursive algorithm for adaptive Kalman filtering is proposed. The signal state-space model and its noise statistics are assumed to depend on an unknown parameter taking values in a subset of R^s. The parameter is estimated recursively using the gradient of the innovation sequence of the Kalman filter. The unknown parameter is replaced by its current estimate in the Kalman-filtering algorithm. The asymptotic properties of the adaptive Kalman filter are discussed.

Proceedings ArticleDOI
01 Dec 1983
TL;DR: Determination of the optimum parameters requires O(10p) block multiplications and additions per data point, in contrast to existing schemes that apply only to single channel signals and call for O(15p) multiplications and additions.
Abstract: Fast implementation of recursive least squares algorithms is of great importance in various estimation, control and signal processing applications. Such an efficient fast Kalman type algorithm is introduced in this paper for both the single channel and multichannel case without any windowing assumption (covariance case). Determination of the optimum parameters requires O(10p) block multiplications and additions per data point, in contrast to existing schemes that apply only to single channel signals and call for O(15p) multiplications and additions.

Proceedings ArticleDOI
01 Apr 1983
TL;DR: The Kalman filter theory is used to develop an algorithm for updating the tap-weight vector of an adaptive tapped-delay line filter that operates in a nonstationary environment that is always stable.
Abstract: In this paper, the Kalman filter theory is used to develop an algorithm for updating the tap-weight vector of an adaptive tapped-delay line filter that operates in a nonstationary environment. The tracking behaviour of the algorithm is discussed in detail. Computer simulation experiments show that this algorithm, unlike the exponentially weighted recursive least-squares (deterministic) algorithm, is always stable. Simulation results are included in the paper to illustrate this phenomenon.