
Showing papers on "Recursive least squares filter published in 1984"


Journal ArticleDOI
TL;DR: Fast transversal filter (FTF) implementations of recursive-least-squares (RLS) adaptive-filtering algorithms are presented in this paper and substantial improvements in transient behavior in comparison to stochastic-gradient or LMS adaptive algorithms are efficiently achieved by the presented algorithms.
Abstract: Fast transversal filter (FTF) implementations of recursive-least-squares (RLS) adaptive-filtering algorithms are presented in this paper. Substantial improvements in transient behavior in comparison to stochastic-gradient or LMS adaptive algorithms are efficiently achieved by the presented algorithms. The true, not approximate, solution of the RLS problem is always obtained by the FTF algorithms even during the critical initialization period (first N iterations) of the adaptive filter. This true solution is recursively calculated at a relatively modest increase in computational requirements in comparison to stochastic-gradient algorithms (factor of 1.6 to 3.5, depending upon application). Additionally, the fast transversal filter algorithms are shown to offer substantial reductions in computational requirements relative to existing, fast-RLS algorithms, such as the fast Kalman algorithms of Morf, Ljung, and Falconer (1976) and the fast ladder (lattice) algorithms of Morf and Lee (1977-1981). They are further shown to attain (steady-state unnormalized), or improve upon (first N initialization steps), the very low computational requirements of the efficient RLS solutions of Carayannis, Manolakis, and Kalouptsidis (1983). Finally, several efficient procedures are presented by which to ensure the numerical stability of the transversal-filter algorithms, including the incorporation of soft-constraints into the performance criteria, internal bounding and rescuing procedures, and dynamic-range-increasing, square-root (normalized) variations of the transversal filters.
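For context, here is a minimal sketch (Python/NumPy, illustrative only) of the conventional exponentially weighted RLS update whose exact solution the FTF algorithms reproduce; the O(N^2) inverse-correlation update below is precisely the cost that the fast transversal structure reduces to O(N) per step.

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One conventional O(N^2) RLS step; FTF variants compute the same
    exact least-squares solution at O(N) cost per iteration."""
    Px = P @ x
    k = Px / (lam + x @ Px)            # gain vector
    e = d - w @ x                      # a priori error
    w = w + k * e                      # coefficient update
    P = (P - np.outer(k, Px)) / lam    # inverse-correlation update, O(N^2)
    return w, P, e

# usage: identify an unknown FIR response through a tap-delay line
rng = np.random.default_rng(1)
N = 8
h = rng.standard_normal(N)                 # unknown system
w, P = np.zeros(N), 1e3 * np.eye(N)        # large initial P ("soft" start)
buf = np.zeros(N)
for t in range(500):
    buf = np.roll(buf, 1)
    buf[0] = rng.standard_normal()
    d = h @ buf + 1e-3 * rng.standard_normal()
    w, P, e = rls_update(w, P, buf, d)
# w is now close to h
```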

724 citations


Journal ArticleDOI
TL;DR: In this paper, the eigenvalue, logarithmic least squares, and least squares methods are compared to derive estimates of ratio scales from a positive reciprocal matrix, and the criteria for comparison are the measurement of consistency, dual solutions, and rank preservation.

359 citations


Journal ArticleDOI
TL;DR: It is shown here, however, that for an important class of nonstationary problems, the misadjustment of conventional LMS is the same as that of orthogonalized LMS, which in the stationary case is shown to perform essentially as an exact least squares algorithm.
Abstract: A fundamental relationship exists between the quality of an adaptive solution and the amount of data used in obtaining it. Quality is defined here in terms of "misadjustment," the ratio of the excess mean square error (mse) in an adaptive solution to the minimum possible mse. The higher the misadjustment, the lower the quality. The quality of the exact least squares solution is compared with the quality of the solutions obtained by the orthogonalized and the conventional least mean square (LMS) algorithms with stationary and nonstationary input data. When adapting with noisy observations, a filter trained with a finite data sample using an exact least squares algorithm will have a misadjustment given by M = \frac{n}{N} = \frac{\text{number of weights}}{\text{number of training samples}}. If the same adaptive filter were trained with a steady flow of data using an ideal "orthogonalized LMS" algorithm, the misadjustment would be M = \frac{n}{4\tau_{\mathrm{mse}}} = \frac{\text{number of weights}}{4 \times \text{mse time constant}}. Thus, for a given time constant \tau_{\mathrm{mse}} of the learning process, the ideal orthogonalized LMS algorithm will have about as low a misadjustment as can be achieved, since this algorithm performs essentially as an exact least squares algorithm with exponential data weighting. It is well known that when rapid convergence with stationary data is required, exact least squares algorithms can in certain cases outperform the conventional Widrow-Hoff LMS algorithm. It is shown here, however, that for an important class of nonstationary problems, the misadjustment of conventional LMS is the same as that of orthogonalized LMS, which in the stationary case is shown to perform essentially as an exact least squares algorithm.
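A quick worked instance of the two formulas above (illustrative numbers, not from the paper): with n = 32 weights and N = 640 training samples, the block least squares solution has M = 32/640 = 5%; an ideal orthogonalized LMS filter with learning-curve time constant \tau_{\mathrm{mse}} = 160 iterations attains the same M = 32/(4 \cdot 160) = 5%, which is the sense in which exponential weighting with that time constant uses data about as efficiently as a 640-sample block solution.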

175 citations


Journal ArticleDOI
TL;DR: It is shown that a multichannel LS estimation algorithm with a different number of parameters to be estimated in each channel can be implemented by cascading lattice stages of nondescending dimension to form a generalized lattice structure.
Abstract: A generalized multichannel least squares (LS) lattice algorithm which is appropriate for multichannel adaptive filtering and estimation is presented in this paper. It is shown that a multichannel LS estimation algorithm with a different number of parameters to be estimated in each channel can be implemented by cascading lattice stages of nondescending dimension to form a generalized lattice structure. A new realization of a multichannel lattice stage is also presented. This realization employs only scalar operations and has a computational complexity of O(p^2) for each p-channel lattice stage.

148 citations


Journal ArticleDOI
TL;DR: The problems of input sensitivity, structure detection, model validation and input signal selection are discussed in the non-linear context.
Abstract: Least squares parameter estimation algorithms for non-linear systems are investigated based on a non-linear difference equation model. A modified extended least squares algorithm, an instrumental variable algorithm and a new suboptimal least squares algorithm are considered. The problems of input sensitivity, structure detection, model validation and input signal selection are also discussed in the non-linear context.
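Because such difference-equation models are linear in their parameters even when nonlinear in the signals, nonlinear regressor terms slot directly into an ordinary least squares solve. A minimal sketch (my own toy model and noise setup, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
# toy nonlinear difference equation:
#   y(t) = a*y(t-1) + b*u(t-1) + c*y(t-1)^2 + e(t)
a, b, c = 0.5, 1.0, -0.2
T = 2000
u = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = a*y[t-1] + b*u[t-1] + c*y[t-1]**2 + 0.05*rng.standard_normal()

# regression matrix mixes linear and nonlinear terms; the model is
# still linear in the parameters (a, b, c)
Phi = np.column_stack([y[:-1], u[:-1], y[:-1]**2])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)   # approximately [0.5, 1.0, -0.2]
```

With white equation noise, as here, plain least squares is consistent; the extended least squares and instrumental variable algorithms the paper considers address the colored-noise case, where plain least squares is biased.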

126 citations



Journal ArticleDOI
TL;DR: The proposed infinite impulse response filter has a special structure that guarantees the desired transfer characteristics and is derived using a general prediction error framework.
Abstract: An adaptive notch filter is derived by using a general prediction error framework. The proposed infinite impulse response filter has a special structure that guarantees the desired transfer characteristics. The filter coefficients are updated by a version of the recursive maximum likelihood algorithm. The convergence properties of the algorithm and its asymptotic behavior are discussed, and its performance is evaluated by simulation results.
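A sketch of a constrained-pole adaptive notch of this general kind, assuming the common single-notch parameterization with pole-contraction factor rho and using a simplified gradient update in place of the paper's recursive maximum likelihood step (the parameterization and step sizes here are illustrative, not the paper's):

```python
import numpy as np

def adaptive_notch(x, rho=0.95, mu=1e-3):
    """Constrained second-order IIR notch:
        H(z) = (1 + a z^-1 + z^-2) / (1 + rho*a z^-1 + rho^2 z^-2),
    with a = -2*cos(w0). Zeros on the unit circle and poles at radius
    rho share the same angle, so the structure itself guarantees the
    notch-shaped transfer characteristic; only a is adapted."""
    a = 0.0
    e1 = e2 = x1 = x2 = 0.0
    out = np.empty_like(x)
    for t, xt in enumerate(x):
        e = xt + a*x1 + x2 - rho*a*e1 - rho**2*e2   # notch output
        grad = x1 - rho*e1          # approximate de/da (recursion ignored)
        a = np.clip(a - mu*e*grad, -2.0, 2.0)
        x2, x1 = x1, xt
        e2, e1 = e1, e
        out[t] = e
    return out, np.arccos(np.clip(-a/2, -1.0, 1.0))   # estimated w0

# usage: sinusoid at 0.2*pi rad/sample in white noise
n = np.arange(20000)
x = np.sin(0.2*np.pi*n) + 0.5*np.random.default_rng(2).standard_normal(n.size)
e, w0_hat = adaptive_notch(x)   # w0_hat approaches 0.2*pi
```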

82 citations


Journal ArticleDOI
TL;DR: In this paper, an algorithm for self-tuning optimal fixed-lag smoothing or filtering for linear discrete-time multivariable processes is proposed, which involves spectral factorization of polynomial matrices and assumes knowledge of the process parameters and the noise statistics.
Abstract: An algorithm is proposed for self-tuning optimal fixed-lag smoothing or filtering for linear discrete-time multivariable processes. A z-transfer function solution to the discrete multivariable estimation problem is first presented. This solution involves spectral factorization of polynomial matrices and assumes knowledge of the process parameters and the noise statistics. The assumption is then made that the signal-generating process and noise statistics are unknown. The problem is reformulated so that the model is in an innovations signal form, and implicit self-tuning estimation algorithms are proposed. The parameters of the innovation model of the process can be estimated using an extended Kalman filter or, alternatively, extended recursive least squares. These estimated parameters are used directly in the calculation of the predicted, smoothed, or filtered estimates. The approach is an attempt to generalize the work of Hagander and Wittenmark.

62 citations



Proceedings ArticleDOI
01 Mar 1984
TL;DR: This paper provides a quantitative analysis of the tracking characteristics of least squares algorithms and a comparison is made with the tracking performance of the LMS algorithm.
Abstract: This paper provides a quantitative analysis of the tracking characteristics of least squares algorithms. A comparison is made with the tracking performance of the LMS algorithm. Other algorithms that are similar to least squares algorithms, such as the gradient lattice algorithm and the Gram-Schmidt orthogonalization algorithm are also considered. Simulation results are provided to reinforce the analytical results and conclusions.
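In the spirit of such a comparison, a small simulation of my own (not the paper's experiments): a single coefficient follows a random walk, and LMS and exponentially weighted RLS both track it.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 5000
w_true = np.cumsum(0.01 * rng.standard_normal(T))   # random-walk coefficient
x = rng.standard_normal(T)
d = w_true * x + 0.1 * rng.standard_normal(T)

mu, w = 0.05, 0.0                      # LMS
err_lms = np.empty(T)
for t in range(T):
    w += mu * x[t] * (d[t] - w * x[t])
    err_lms[t] = w - w_true[t]

lam, w, p = 0.98, 0.0, 100.0           # exponentially weighted RLS (scalar)
err_rls = np.empty(T)
for t in range(T):
    k = p * x[t] / (lam + x[t] * p * x[t])
    w += k * (d[t] - w * x[t])
    p = (p - k * x[t] * p) / lam
    err_rls[t] = w - w_true[t]

# steady-state tracking error of each algorithm
print(np.mean(err_lms[1000:]**2), np.mean(err_rls[1000:]**2))
```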

51 citations


Journal ArticleDOI
TL;DR: This paper develops an updating algorithm for linear least squares problems that are sparse except for a small subset of dense equations; the linear constraints that the solution may also be required to satisfy can likewise be divided into sparse and dense subsets.
Abstract: Linear least squares problems which are sparse except for a small subset of dense equations can be efficiently solved by an updating method. Often the least squares solution is also required to satisfy a set of linear constraints, which again can be divided into sparse and dense subsets. This paper develops an updating algorithm for the solution of such problems. The algorithm is completely general in that no restrictive assumption on the rank of any subset of equations or constraints is made.
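A minimal dense-matrix sketch of the updating idea (a generic Woodbury-style correction, not the paper's specific algorithm, and ignoring its constraint handling and its treatment of rank-deficient subsets): solve the sparse subproblem first, then fold the few dense equations in through a small low-rank update.

```python
import numpy as np

rng = np.random.default_rng(4)
m_s, m_d, n = 200, 3, 10
A_s = rng.standard_normal((m_s, n))    # stands in for the sparse block
A_d = rng.standard_normal((m_d, n))    # the few dense equations
b_s, b_d = rng.standard_normal(m_s), rng.standard_normal(m_d)

# 1) solve the sparse subproblem alone (in practice: sparse QR/Cholesky)
M = A_s.T @ A_s
x_s = np.linalg.solve(M, A_s.T @ b_s)

# 2) fold in the dense rows via the Woodbury identity:
#    (M + A_d^T A_d)^-1 = M^-1 - M^-1 A_d^T (I + A_d M^-1 A_d^T)^-1 A_d M^-1
W = np.linalg.solve(M, A_d.T)          # n x m_d, reuses the sparse factor
S = np.eye(m_d) + A_d @ W              # small m_d x m_d system
x = x_s + W @ np.linalg.solve(S, b_d - A_d @ x_s)

# check against solving everything at once
x_full, *_ = np.linalg.lstsq(np.vstack([A_s, A_d]),
                             np.hstack([b_s, b_d]), rcond=None)
print(np.allclose(x, x_full))          # True
```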

Journal ArticleDOI
TL;DR: In this article, a technique for finding the coefficients of a two-dimensional (2D) recursive digital filter having a separable denominator which gives the best approximation, in the least squares sense, to a desired 2-D impulse response over a finite interval is presented.
Abstract: This paper presents a technique for finding the coefficients of a two-dimensional (2-D) recursive digital filter having a separable denominator which gives the best approximation, in the least squares sense, to a desired 2-D impulse response over a finite interval. All the coefficients in the filter are found by iteratively solving linear equations. Since the resulting filter has a separable denominator, stability is easy to check and the implementation is simpler. Two examples are given to illustrate the utility of the proposed technique.

Journal ArticleDOI
TL;DR: The convergence properties of the gradient algorithm are analyzed under the assumption that the gain tends to zero, and a main result is that the convergence conditions for the gradient algorithms are the same as those for the recursive least squares algorithm.
Abstract: Parameter estimation problems that can be formulated as linear regressions are quite common in many applications. Recursive (on-line, sequential) estimation of such parameters can be performed using the recursive least squares (RLS) algorithm or a stochastic gradient version of this algorithm. In this paper the convergence properties of the gradient algorithm are analyzed under the assumption that the gain tends to zero. The technique is the same as the so-called ordinary differential equation approach, but the treatment here is self-contained and includes a proof of the boundedness of the estimates. A main result is that the convergence conditions for the gradient algorithm are the same as those for the recursive least squares algorithm.
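A minimal sketch of the decreasing-gain stochastic gradient scheme analyzed (my own toy linear regression; the gain gamma_t = 1/t tends to zero as the analysis assumes):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([1.5, -0.7])
theta = np.zeros(2)
for t in range(1, 50001):
    phi = rng.standard_normal(2)                    # regressor
    y = phi @ theta_true + 0.1 * rng.standard_normal()
    gamma = 1.0 / t                                 # gain tending to zero
    theta += gamma * phi * (y - phi @ theta)        # gradient step
print(theta)   # converges toward theta_true, as the ODE analysis predicts
```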

Journal ArticleDOI
TL;DR: Under mild conditions on the observation processes the almost sure convergence properties of linear stochastic approximation are summarized for least squares and for some of its applications: adaptive filtering, echo cancellation, detection of binary data in Gaussian noise, identification, and linear classification.
Abstract: Under mild conditions on the observation processes the almost sure convergence properties of linear stochastic approximation are summarized for least squares and for some of its applications: adaptive filtering, echo cancellation, detection of binary data in Gaussian noise, identification, and linear classification.

Book
01 Jan 1984
TL;DR: This book presents discrete models in systems engineering, covering representation of discrete control systems, structural properties, feedback design, observers, state and parameter estimation (including recursive least squares and the Kalman filter), adaptive control, and dynamic optimisation.
Abstract: 1 Discrete Models in Systems Engineering.- 1.1 Introduction.- 1.2 Some Illustrative Examples.- 1.2.1 Direct Digital Control of a Thermal Process.- 1.2.2 An Inventory Holding Problem.- 1.2.3 Measurement and Control of Liquid Level.- 1.2.4 An Aggregate National Econometric Model.- 1.3 Objectives and Outline of the Book.- 1.4 References.- 2 Representation of Discrete Control Systems.- 2.1 Introduction.- 2.2 Transfer Functions.- 2.2.1 Review of Z-Transforms.- 2.2.2 Effect of Pole Locations.- 2.2.3 Stability Analysis.- 2.2.4 Simplification by Continued-Fraction Expansions.- 2.2.5 Examples.- 2.3 Difference Equations.- 2.3.1 The Nature of Solutions.- 2.3.2 The Free Response.- 2.3.3 The Forced Response.- 2.3.4 Examples.- 2.3.5 Relationship to Transfer Functions.- 2.4. Discrete State Equations.- 2.4.1 Introduction.- 2.4.2 Obtaining the State Equations.- A. From Difference Equations.- B. From Transfer Functions.- 2.4.3 Solution Procedure.- 2.4.4 Examples.- 2.5 Modal Decomposition.- 2.5.1 Eigen-Structure.- 2.5.2 System Modes.- 2.5.3 Some Important Properties.- 2.5.4 Examples.- 2.6 Concluding Remarks.- 2.7 Problems.- 2.8 References.- 3 Structural Properties.- 3.1 Introduction.- 3.2 Controllability.- 3.2.1 Basic Definitions.- 3.2.2 Mode-Controllability Structure.- 3.2.3 Modal Analysis of State-Reachability.- 3.2.4 Some Geometrical Aspects.- 3.2.5 Examples.- 3.3 Observability.- 3.3.1 Basic Definitions.- 3.3.2 Principle of Duality.- 3.3.3 Mode-Observability Structure.- 3.3.4 Concept of Detectability.- 3.3.5 Examples.- 3.4. Stability.- 3.4.1 Introduction.- 3.4.2 Definitions of Stability.- 3.4.3 Linear System Stability.- 3.4.4 Lyapunov Analysis.- 3.4.5 Solution and Properties of the Lyapunov Equation.- 3.4.6 Examples.- 3.5 Remarks.- 3.6 Problems.- 3.7 References.- 4 Design of Feedback Systems.- 4.1 Introduction.- 4.2 The Concept of Linear Feedback.- 4.2.1 State Feedback.- 4.2.2 Output Feedback.- 4.2.3 Computational Algorithms.- 4.2.4 Eigen-Structure Assignment.- 4.2.5 Remarks.- 4.2.6 Example.- 4.3 Deadbeat Controllers.- 4.3.1 Preliminaries.- 4.3.2 The Multi-Input Deadbeat Controller.- 4.3.3 Basic Properties.- 4.3.4 Other Approaches.- 4.3.5 Examples.- 4.4 Development of Reduced-Order Models.- 4.4.1 Analysis.- 4.4.2 Two Simplification Schemes.- 4.4.3 Output Modelling Approach.- 4.4.4 Control Design.- 4.4.5 Examples.- 4.5 Control Systems with Slow and Fast Modes.- 4.5.1 Time-Separation Property.- 4.5.2 Fast and Slow Subsystems.- 4.5.3 A Frequency Domain Interpretation.- 4.5.4 Two-Stage Control Design.- 4.5.5 Examples.- 4.6 Concluding Remarks.- 4.7 Problems.- 4.8 References.- 5 Control of Systems with Inaccessible States.- 5.1 Introduction.- 5.2 State Reconstruction Schemes.- 5.2.1 Full-Order State Reconstructors.- 5.2.2 Reduced-Order State Reconstructors.- 5.2.3 Discussion.- 5.2.4 Deadbeat State Reconstructors.- 5.2.5 Examples.- 5.3 Observer-Based Controllers.- 5.3.1 Structure of Closed-Loop Systems.- 5.3.2 The Separation Principle.- 5.3.3 Deadbeat Type Controllers.- 5.3.4 Example.- 5.4 Two-Level Observation Structures.- 5.4.1 Full-Order Local State Reconstructors.- 5.4.2 Modifications to Ensure Overall Asymptotic Reconstruction.- 5.4.3 Examples.- 5.5 Discrete Two-Time-Scale Systems.- 5.5.1 Introduction.- 5.5.2 Two-Stage Observer Design.- 5.5.3 Dynamic State Feedback Control.- 5.5.4 Example.- 5.6 Concluding Remarks.- 5.7 Problems.- 5.8 References.- 6 State and Parameter Estimation.- 6.1 Introduction.- 6.2 Random Variables and Gauss-Markov Processes.- 6.2.1 Basic Concepts of Probability Theory.- 
6.2.2 Mathematical Properties of Random Variables.- A. Distribution Functions.- B. Mathematical Expectation.- C. Two Random Variables.- 6.2.3 Stochastic Processes.- A. Definitions and Properties.- B. Gauss and Markov Processes.- 6.3 Linear Discrete Models with Random Inputs.- 6.3.1 Model Description.- 6.3.2 Some Useful Properties.- 6.3.3 Propagation of Means and Covariances.- 6.3.4 Examples.- 6.4 The Kalman Filter.- 6.4.1 The Estimation Problem.- A. The Filtering Problem.- B. The Smoothing Problem.- C. The Prediction Problem.- 6.4.2 Principal Methods of Obtaining Estimates.- A. Minimum Variance Estimate.- B. Maximum Likelihood Estimate.- C. Maximum A Posteriori Estimate.- 6.4.3 Development of the Kalman Filter Equations.- A. The Optimal Filtering Problem.- B. Solution Procedure.- C. Some Important Properties.- 6.4.4 Examples.- 6.5 Decentralised Computation of the Kalman Filter.- 6.5.1 Linear Interconnected Dynamical Systems.- 6.5.2 The Basis of the Decentralised Filter Structure.- 6.5.3 The Recursive Equations of the Filter.- 6.5.4 A Computation Comparison.- 6.5.5 Example.- 6.6 Parameter Estimation.- 6.6.1 Least Squares Estimation.- A. Linear Static Models.- B. Standard Least Squares Method and Properties.- C. Application to Parameter Estimation of Dynamic Models.- D. Recursive Least Squares.- E. The Generalised Least Squares Method.- 6.6.2 Two-Level Computational Algorithms.- A. Linear Static Models.- B. A Two-Level Multiple Projection Algorithm.- C. The Recursive Version.- D. Linear Dynamical Models.- E. The Maximum A Posteriori Approach.- F. A Two-Level Structure.- 6.6.3 Examples.- 6.7 Problems.- 6.8 References.- 7 Adaptive Control Systems.- 7.1 Introduction.- 7.2 Basic Concepts of Model Reference Adaptive Systems.- 7.2.1 The Reference Model.- 7.2.2 The Adaptation Mechanism.- 7.2.3 Notations and Some Definitions.- 7.2.4 Design Considerations.- 7.3 Design Techniques.- 7.3.1 Techniques Based on Lyapunov Analysis.- 7.3.2 Techniques Based on Hyperstability and Positivity Concepts.- A. Popov Inequality and Related Results.- B. Systematic Procedure.- C. Parametric Adaptation Schemes.- D. Adaptive Model-Following Schemes.- 7.3.3 Examples.- 7.4 Self-Tuning Regulators.- 7.4.1 Introduction.- 7.4.2 Description of the System.- 7.4.3 Parameter Estimators.- A. The Least Squares Method.- B. The Extended Least Squares Method.- 7.4.4 Control Strategies.- A. Controllers Based on Linear Quadratic Theory.- B. Controllers Based on Minimum Variance Criteria.- 7.4.5 Other Approaches.- A. Pole/Zero Placement Approach.- B. Implicit Identification Approach.- C. State Space Approach.- D. Multivariable Approach.- 7.4.6 Discussion.- 7.4.7 Examples.- 7.5 Concluding Remarks.- 7.6 Problems.- 7.7 References.- 8 Dynamic Optimisation.- 8.1 Introduction.- 8.2 The Dynamic Optimisation Problem.- 8.2.1 Formulation of the Problem.- 8.2.2 Conditions of Optimality.- 8.2.3 The Optimal Return Function.- 8.3 Linear-Quadratic Discrete Regulators.- 8.3.1 Derivation of the Optimal Sequences.- 8.3.2 Steady-State Solution.- 8.3.3 Asymptotic Properties of Optimal Control.- 8.4 Numerical Algorithms for the Discrete Riccati Equation.- 8.4.1 Successive Approximation Methods.- 8.4.2 Hamiltonian Methods.- 8.4.3 Discussion.- 8.4.4 Examples.- 8.5 Hierarchical Optimization Methodology.- 8.5.1 Problem Decomposition.- 8.5.2 Open-Loop Computation Structures.- A. The Goal Coordination Method.- B. The Method of Tamura.- C. 
The Interaction Prediction Method.- 8.5.3 Closed-Loop Control Structures.- 8.5.4 Examples.- 8.6 Decomposition-Decentralisation Approach.- 8.6.1 Statement of the Problem.- 8.6.2 The Decoupled Subsystems.- 8.6.3 Multi-Controller Structure.- 8.6.4 Examples.- 8.7 Concluding Remarks.- 8.8 Problems.- 8.9 References.

Proceedings ArticleDOI
01 Dec 1984
TL;DR: In this article, an indirect adaptive control scheme for deterministic plants which are not necessarily minimum phase is presented, where the closed loop poles are asymptotically assigned and the system input and output remain bounded for all time.
Abstract: This paper presents an indirect adaptive control scheme for deterministic plants which are not necessarily minimum phase. Global convergence is established for the scheme in the sense that the closed loop poles are asymptotically assigned and the system input and output remain bounded for all time. A key feature of the scheme is that no persistency of excitation condition is required. The algorithm has been designed with time-varying problems in mind and uses recursive least squares with variable forgetting factor, normalized regression vectors, and a matrix gain with constant trace.
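A sketch combining the three ingredients named above, under assumptions of my own: a Fortescue-style variable forgetting factor, regressor normalization, and rescaling of the gain matrix to hold its trace constant. The paper's exact recursions may differ.

```python
import numpy as np

def rls_vff_ct(w, P, phi, y, sigma0=0.1, trace0=100.0, lam_min=0.9):
    """RLS with variable forgetting factor and constant-trace gain matrix."""
    phi = phi / max(1.0, np.linalg.norm(phi))   # normalized regressor
    Pphi = P @ phi
    e = y - w @ phi                             # a priori error
    # forget faster when the error is large (Fortescue-style rule)
    lam = float(np.clip(1.0 - e**2 / (sigma0 * (1.0 + phi @ Pphi)),
                        lam_min, 1.0))
    k = Pphi / (lam + phi @ Pphi)
    w = w + k * e
    P = (P - np.outer(k, Pphi)) / lam
    P *= trace0 / np.trace(P)                   # constant-trace rescaling
    return w, P, lam
```

The constant-trace step keeps the adaptation gain from collapsing or blowing up during long quiet stretches, which is what makes such schemes suited to the time-varying problems the paper has in mind.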

Journal ArticleDOI
TL;DR: A recursive algorithm for estimating the constant but unknown parameters of a controlled ARMA process is presented and it is shown that the estimate is (globally) p-consistent, i.e. that it converges a.s. as the number of data tends to infinity, to a vector which, in turn, converges to the true parameter vector as the degree p of the AR model tends to infinity.

Journal ArticleDOI
TL;DR: Symmetry of the covariance matrix and the positive-definite criterion are used to simplify identification by recursive least squares; simulation results are compared with ordinary recursive least squares, and the adaptive nature of the identifier is verified by varying the system parameters after convergence.
Abstract: Symmetrical behaviour of the covariance matrix and the positive-definite criterion are used to simplify identification of single-input/single-output systems using recursive least squares. Simulation results are obtained and these are compared with ordinary recursive least squares. The adaptive nature of the identifier is verified by varying the system parameters on convergence.
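A minimal illustration of the kind of saving symmetry gives (my own sketch, not the paper's algorithm): since the covariance matrix P is symmetric, only its upper triangle needs to be computed, roughly halving the multiplications of the covariance update while keeping P exactly symmetric in finite precision.

```python
import numpy as np

def rls_symmetric_update(P, phi, lam=0.98):
    """Update only the upper triangle of the symmetric matrix P,
    then mirror it into the lower triangle."""
    n = len(phi)
    Pphi = P @ phi
    denom = lam + phi @ Pphi
    for i in range(n):
        for j in range(i, n):          # upper triangle only
            P[i, j] = (P[i, j] - Pphi[i] * Pphi[j] / denom) / lam
            P[j, i] = P[i, j]          # enforce symmetry exactly
    return P, Pphi / denom             # updated P and gain vector
```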

Journal ArticleDOI
TL;DR: In this article, a state-space design for a multivariable, self-tuning regulator is presented which is based upon eigenvalue assignment, where the parameters of a MIMO stochastic process are estimated at each time interval by recursive least squares, and are used to construct an equivalent state space representation.
Abstract: A state-space design for a multivariable, self-tuning regulator is presented which is based upon eigenvalue assignment. The parameters of a MIMO stochastic process are estimated at each time interval by recursive least squares, and are used to construct an equivalent state-space representation. Eigenvalue assignment of the closed-loop system is accomplished by state feedback, where the state estimate is obtained from the output of a full-order Luenberger observer. The algorithm can be used to regulate non-minimum phase and open-loop unstable systems. Three examples are given.

Journal ArticleDOI
TL;DR: The algorithms of Levinson-Schur and Nevanlinna-Pick are briefly reviewed in this article, where they produce least squares predictive filters by minimizing the least squares error with respect to the interpolation points of the Nevanlinna-Pick algorithm.

Proceedings ArticleDOI
01 Dec 1984
TL;DR: In this article, a new approach combining maneuver detection and parameter estimation, through use of an adaptive recursive least squares algorithm in U-D form with a forgetting factor, is presented to solve the problem of maneuvering target tracking.
Abstract: A new approach is presented to solving the problem of maneuvering target tracking. This approach combines maneuver detection and parameter estimation through use of an Adaptive Recursive Least Squares algorithm in U-D form, including a forgetting factor. New self-tuning smoother-predictors are proposed to predict the target trajectory. The behavior of such adaptive predictors is illustrated using real radar measurements.
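For background, a sketch of one exponentially weighted RLS step propagated in U-D factored form, using the classic Bierman measurement update (offered as a standard reference implementation, not the paper's adaptive algorithm, which adds maneuver detection on top):

```python
import numpy as np

def ud_rls_step(U, d, w, phi, y, lam=0.98):
    """One RLS step with P = U diag(d) U^T kept in factored form.
    Forgetting scales d; the Bierman update never forms P explicitly,
    which preserves symmetry and positivity in finite precision."""
    n = len(d)
    d = d / lam                        # apply forgetting to the prior
    f = U.T @ phi
    g = d * f
    alpha = 1.0 + f[0] * g[0]
    d[0] /= alpha
    k = np.zeros(n)
    k[0] = g[0]
    for j in range(1, n):
        beta = alpha
        alpha += f[j] * g[j]
        p = -f[j] / beta
        d[j] *= beta / alpha
        for i in range(j):
            Uij = U[i, j]
            U[i, j] = Uij + k[i] * p
            k[i] += Uij * g[j]
        k[j] = g[j]
    w = w + (k / alpha) * (y - w @ phi)    # gain vector K = k / alpha
    return U, d, w
```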

Journal ArticleDOI
TL;DR: The fast least squares algorithm, sometimes known as the “fast Kalman algorithm”, is however shown to be unstable with respect to such errors, i.e. the effect of an error grows exponentially.

Proceedings ArticleDOI
J. Treichler
01 Mar 1984
TL;DR: This paper introduces the concept of designing adaptive filtering algorithms to exploit the presence of some invariant property possessed by a signal of interest which is disturbed by the interference or dispersion which degrades the signal's quality.
Abstract: This paper introduces the concept of designing adaptive filtering algorithms to exploit the presence of some invariant property possessed by a signal of interest. If this invariant property is disturbed by the interference or dispersion which degrades the signal's quality, and if the disturbance can be sensed, then it can be used to guide an adaptive algorithm. Certain analytic requirements on this type of algorithm are discussed as well as their relation to well-known least squares and MSE algorithms.
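One widely known instance of such a property-restoral scheme (given here as a generic example rather than as this paper's algorithm) is the constant modulus idea: a signal known to have a constant envelope is disturbed by interference or dispersion, and the adaptive filter descends a cost measuring the envelope's deviation from a constant.

```python
import numpy as np

def cma_step(w, x, mu=1e-3, R=1.0):
    """Constant modulus update: penalize (|y|^2 - R)^2, the invariant
    property being the constant envelope of the signal of interest."""
    y = np.vdot(w, x)                             # output y = w^H x
    return w - mu * (abs(y)**2 - R) * np.conj(y) * x
```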

Journal ArticleDOI
TL;DR: The derivation of a complex-domain recursive linear prediction algorithm with a reduced number of computations is presented; the algorithm is seen to be very similar to the real-domain fast Kalman algorithm when complex conjugation is considered in formulating the error measure.
Abstract: A derivation of a complex-domain recursive linear prediction algorithm with a reduced number of computations is presented. It is denoted the complex fast Kalman algorithm by virtue of its similarity to the real domain algorithm of that name. It is seen to be very similar to real domain fast Kalman, when complex conjugation is considered in formulating the error measure.
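The role of conjugation can be seen even in the simplest complex adaptive filter (a complex LMS sketch of my own, not the paper's fast algorithm): the quantity minimized is the error power e·e*, and the Wirtinger gradient of that product is what places conjugates in the update.

```python
import numpy as np

def complex_lms_step(w, x, d, mu=0.01):
    """Complex LMS: with e = d - w^H x, minimizing E|e|^2 gives
    w <- w + mu * x * conj(e)."""
    e = d - np.vdot(w, x)       # np.vdot conjugates w, so this is w^H x
    return w + mu * x * np.conj(e), e
```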

Proceedings ArticleDOI
01 Mar 1984
TL;DR: This paper considers implementation of a least mean square (LMS) adaptive digital filter algorithm using a logarithmic number system (LNS); analytical expressions for error performance are derived and design issues are explored.
Abstract: This paper considers implementation of a least mean square (LMS) adaptive digital filter algorithm using a logarithmic number system (LNS). Analytical expressions for error performance have been derived and design issues have been explored. Computer simulations have been performed to verify the derived expressions.
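For intuition about the arithmetic involved, a toy logarithmic-number-system sketch of my own (not the paper's hardware model): multiplications become exponent additions, while additions need a nonlinear correction that real designs store in a lookup table.

```python
import math

# toy LNS value: (sign, log2 of magnitude)
def to_lns(x):   return (1 if x >= 0 else -1, math.log2(abs(x)))
def from_lns(v): return v[0] * 2 ** v[1]

def lns_mul(a, b):
    """Multiplication in LNS is just exponent addition."""
    return (a[0] * b[0], a[1] + b[1])

def lns_add(a, b):
    """Addition needs the correction term log2(1 +/- 2^(lb - la))."""
    (sa, la), (sb, lb) = (a, b) if a[1] >= b[1] else (b, a)
    if sa == sb:
        return (sa, la + math.log2(1 + 2 ** (lb - la)))
    diff = 2 ** (lb - la)
    return (sa, la + math.log2(1 - diff)) if diff < 1 else (sa, -math.inf)

# one LMS-style multiply-accumulate, w + mu*x*e, done entirely in LNS
w, mu, x, e = map(to_lns, (0.5, 0.1, -0.3, 0.2))
print(from_lns(lns_add(w, lns_mul(lns_mul(mu, x), e))))   # ~0.494
```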

Journal ArticleDOI
Simon Haykin
01 Apr 1984
TL;DR: In this article, the least squares formulation of an adaptive antenna is presented, which, with the aid of a calibration curve, may be used to estimate the angle of arrival in the presence of multipath as encountered in a low-angle tracking radar environment.
Abstract: The least squares formulation of an adaptive antenna is presented, which (with the aid of a calibration curve) may be used to estimate the angle of arrival in the presence of multipath as encountered in a low-angle tracking radar environment. A procedure (based on the Eckart-Young theorem) is proposed for improving the noise performance of the estimator.

Journal ArticleDOI
TL;DR: In this article, a method of designing a discrete adaptive observer is proposed for a general class of linear time-invariant multivariable systems, as an extension of a procedure previously described for single-input single-output systems.
Abstract: A method of designing a discrete adaptive observer is proposed for a general class of linear time-invariant multivariable systems. This method is an extension of the procedure described by Kanai and Degawa (1979) for single-input single-output systems. A parameter updating algorithm is derived based upon a recursive least squares method and the number of iterations required for parameter convergence is examined.

Journal ArticleDOI
TL;DR: It is shown that the choice of the learning constant in adaptive filtering is quite critical, and that only in cases with substantial coefficient variability will adaptive filtering lead to forecast improvements.
Abstract: This paper compares the Makridakis and Wheelwright adaptive filtering forecast technique with the recursive least squares procedure, which assumes constant coefficients. A simulation study is performed to examine its relative forecast accuracy under several models of time-varying coefficients. It is shown that the choice of the learning constant in adaptive filtering is quite critical, and that only in cases with substantial coefficient variability will adaptive filtering lead to forecast improvements.

Proceedings ArticleDOI
01 Dec 1984
TL;DR: In this article, the authors used an input-output stability analysis approach to show that for a large class of output error identification algorithms, the usual strict positive real (SPR) conditions on the unknown plant can be replaced by a "persistent power" condition on the plant input sequence.
Abstract: This paper uses an input-output stability analysis approach to show that for a large class of output error identification algorithms, the usual strict positive real (SPR) conditions on the unknown plant can be replaced by "persistent power" conditions on the plant input sequence. The only a priori knowledge of the plant assumed is stability and knowledge of a model order upper bound. This class of algorithms is shown to include the constant direction, recursive least squares with forgetting, controlled trace, and covariance resetting variants, extending the results of [1]. Arguments for the necessity of the SPR condition in other cases, e.g., recursive least squares and stochastic approximation, are also given. Implications in identification and adaptive IIR filtering are discussed.

Proceedings ArticleDOI
01 Mar 1984
TL;DR: Based upon computational results on two noisy images and a comparison with three other adaptive image restoration algorithms, it was concluded that the proposed adaptive image estimation algorithm is highly competitive.
Abstract: A new iterative adaptive image restoration algorithm is developed. It is composed of a sub-optimal filter and a least squares based identifier. To account for nonstationarity, a moving window technique has been incorporated. Based upon computational results on two noisy images and a comparison with three other adaptive image restoration algorithms, it was concluded that the proposed adaptive image estimation algorithm is highly competitive.