
Showing papers on "Adaptive filter published in 1984"


Journal ArticleDOI
TL;DR: This paper presents a geometrical discussion of the origin of that defect and proposes a new adaptive algorithm, called the affine projection algorithm (APA), based on the result of the investigation.
Abstract: The LMS algorithm and learning identification, which presently are typical adaptive algorithms, have a problem in that the speed of convergence may decrease greatly depending on the properties of the input signal. To avoid this problem, this paper presents a geometrical discussion of the origin of that defect and proposes a new adaptive algorithm based on the result of the investigation. Comparing the convergence speeds of the proposed algorithm and learning identification in numerical computer experiments, a great improvement is verified. The algorithm is extended to a family of algorithms, called the affine projection algorithm (APA), which includes the original algorithm and learning identification. It is shown that APA has some desirable properties: the coefficient vector approaches the true value monotonically, and the convergence speed is independent of the amplitude of the input signal. Clear conclusions are also obtained for the problem of what noise is included in the output signal when an external disturbance is impressed or the order of the adaptive filter is insufficient.
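For readers who want a concrete picture of the update, a minimal sketch of one affine projection step is given below, written in Python/NumPy with illustrative variable names (the projection order K, step size mu, and regularization delta are generic choices, not values from the paper).

```python
import numpy as np

def apa_step(w, X, d, mu=1.0, delta=1e-6):
    """One affine projection update.

    w     : (N,) current filter coefficients
    X     : (N, K) columns are the K most recent input vectors
    d     : (K,) corresponding desired samples
    mu    : step size (0 < mu <= 1)
    delta : small regularization constant
    """
    e = d - X.T @ w                     # a-priori errors over the K constraints
    # Project the coefficient correction onto the subspace spanned by the columns of X.
    K = X.shape[1]
    return w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(K), e)
```

With K = 1 this collapses to the normalized LMS (learning identification) update, consistent with the claim that the family contains learning identification as a special case.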

843 citations


Journal ArticleDOI
TL;DR: Fast transversal filter (FTF) implementations of recursive-least-squares (RLS) adaptive-filtering algorithms are presented in this paper and substantial improvements in transient behavior in comparison to stochastic-gradient or LMS adaptive algorithms are efficiently achieved by the presented algorithms.
Abstract: Fast transversal filter (FTF) implementations of recursive-least-squares (RLS) adaptive-filtering algorithms are presented in this paper. Substantial improvements in transient behavior in comparison to stochastic-gradient or LMS adaptive algorithms are efficiently achieved by the presented algorithms. The true, not approximate, solution of the RLS problem is always obtained by the FTF algorithms even during the critical initialization period (first N iterations) of the adaptive filter. This true solution is recursively calculated at a relatively modest increase in computational requirements in comparison to stochastic-gradient algorithms (factor of 1.6 to 3.5, depending upon application). Additionally, the fast transversal filter algorithms are shown to offer substantial reductions in computational requirements relative to existing, fast-RLS algorithms, such as the fast Kalman algorithms of Morf, Ljung, and Falconer (1976) and the fast ladder (lattice) algorithms of Morf and Lee (1977-1981). They are further shown to attain (steady-state unnormalized), or improve upon (first N initialization steps), the very low computational requirements of the efficient RLS solutions of Carayannis, Manolakis, and Kalouptsidis (1983). Finally, several efficient procedures are presented by which to ensure the numerical stability of the transversal-filter algorithms, including the incorporation of soft constraints into the performance criteria, internal bounding and rescuing procedures, and dynamic-range-increasing, square-root (normalized) variations of the transversal filters.
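The full FTF recursions are too lengthy to reproduce here; for orientation only, the sketch below shows the conventional O(N^2) exponentially weighted RLS update whose exact solution the fast algorithms reproduce at lower cost. This is the standard textbook recursion, not the FTF algorithm of the paper.

```python
import numpy as np

def rls_step(w, P, x, d, lam=0.99):
    """One exponentially weighted RLS update (conventional O(N^2) form).

    w   : (N,) coefficient vector
    P   : (N, N) inverse of the weighted input autocorrelation matrix
    x   : (N,) current regressor vector
    d   : scalar desired sample
    lam : forgetting factor, 0 << lam <= 1
    """
    Px = P @ x
    k = Px / (lam + x @ Px)            # gain vector
    e = d - w @ x                      # a-priori error
    w = w + k * e                      # coefficient update
    P = (P - np.outer(k, Px)) / lam    # update of the inverse correlation matrix
    return w, P
```

A common initialization is w = 0 and P = I/delta for a small positive delta.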

724 citations


Journal ArticleDOI
TL;DR: It is shown that median filtering an arbitrary level signal to its root is equivalent to decomposing the signal into binary signals, filtering each binary signal to a root with a binary median filter, and then reversing the decomposition.
Abstract: Median filters are a special class of ranked order filters used for smoothing signals. Repeated application of the filter on a quantized signal of finite length ultimately results in a sequence, termed a root signal, which is invariant to further passes of the median filter. In this paper, it is shown that median filtering an arbitrary level signal to its root is equivalent to decomposing the signal into binary signals, filtering each binary signal to a root with a binary median filter, and then reversing the decomposition. This equivalence allows problems in the analysis and the implementation of median filters for arbitrary level signals to be reduced to the equivalent problems for binary signals. Since the effects of median filters on binary signals are well understood, this technique is a powerful new tool.
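The binary decomposition referred to here is threshold decomposition. The short check below illustrates the single-pass version of the equivalence on a small quantized signal (a toy demonstration written for this listing, not code from the paper; the paper's result concerns filtering all the way to the root).

```python
import numpy as np

def med3(x):
    """Length-3 median filter with the end samples replicated at the borders."""
    xp = np.concatenate(([x[0]], x, [x[-1]]))
    return np.array([np.median(xp[i:i + 3]) for i in range(len(x))])

x = np.array([2, 0, 3, 3, 1, 0, 2, 2, 1, 3])   # quantized signal with levels 0..3

direct = med3(x)                                # median filter the multilevel signal

# Threshold decomposition: filter each binary slice x >= k and stack (sum) the results.
decomposed = sum(med3((x >= k).astype(float)) for k in range(1, int(x.max()) + 1))

assert np.array_equal(direct, decomposed)       # the two outputs are identical
```

Because the median is a rank-order operation it commutes with thresholding, and iterating either form to a root preserves the same correspondence, which is the reduction exploited in the paper.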

336 citations



Journal ArticleDOI
TL;DR: In this article, an adaptive notch filter is developed for the enhancement and tracking of sinusoids in additive noise, colored or white, using a constrained infinite impulse response filter with the constraint enforced by a single parameter termed the debiasing parameter.
Abstract: In this paper, an adaptive notch filter is developed (employing a frequency domain and time domain analysis) for the enhancement and tracking of sinusoids in additive noise, colored or white. The notch filter is implemented as a constrained infinite impulse response filter with the constraint enforced by a single parameter termed the debiasing parameter. The resulting notch filter requires few parameters, facilitates the formation of the desired band rejection filter response, and also leads to various useful implementations (cascade, parallel). For the adaptation of the filter coefficients, the stochastic Gauss-Newton algorithm is used. The convergence of this updating procedure is established by studying the associated differential equation. Also, it is shown that the structure present in the problem enables truncation of the gradient, thereby reducing the complexity of adapting the filter coefficients. Simulation results are presented to substantiate the analysis, and to demonstrate the potential of the notch filtering technique.
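As a fixed-coefficient point of reference, the constrained second-order IIR notch section typically used in this setting is sketched below, with a single pole-contraction parameter rho standing in for the constraint parameter (an illustrative reading of the structure; the coefficients are held fixed here rather than adapted by the stochastic Gauss-Newton procedure of the paper).

```python
import numpy as np

def notch_filter(x, f0, rho=0.95, fs=1.0):
    """Second-order constrained IIR notch centred at frequency f0.

    The zeros sit on the unit circle at angle 2*pi*f0/fs; the poles lie at the
    same angle but contracted by rho, so a single parameter sets the notch width.
    """
    x = np.asarray(x, dtype=float)
    w0 = 2.0 * np.pi * f0 / fs
    b = np.array([1.0, -2.0 * np.cos(w0), 1.0])              # zeros on the unit circle
    a = np.array([1.0, -2.0 * rho * np.cos(w0), rho ** 2])   # contracted poles
    xp = np.concatenate(([0.0, 0.0], x))                     # zero initial conditions
    yp = np.zeros(len(x) + 2)
    for n in range(len(x)):                                  # direct-form I recursion
        yp[n + 2] = (b[0] * xp[n + 2] + b[1] * xp[n + 1] + b[2] * xp[n]
                     - a[1] * yp[n + 1] - a[2] * yp[n])
    return yp[2:]
```

The notch output suppresses a sinusoid at f0; the difference between input and output then serves as the enhanced sinusoid, and pushing rho toward 1 narrows the rejection band.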

262 citations


Journal ArticleDOI
TL;DR: A tutorial-style framework is presented for understanding the current status of adaptive infinite-impulse-response (IIR) filters, and the structures of provably convergent adaptive algorithms are derived.
Abstract: A tutorial-style framework is presented for understanding the current status of adaptive infinite-impulse-response (IIR) filters. The paper begins with a detailed discussion of the difference equation models that are useful as adaptive IIR filters. The particular form of the resulting prediction error generic to adaptive IIR filters is highlighted, and the structures of provably convergent adaptive algorithms are derived. A brief summary of particular, currently known performance properties, drawn principally from the system identification literature, is followed by the formulation of three illustrative adaptive signal processing problems to which these adaptive IIR filters are applicable. The concluding section discusses various open issues raised by the formulation of this framework.
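One of the difference-equation models such a tutorial covers is the equation-error formulation, in which the regressor contains past desired samples rather than past filter outputs, so the error stays linear in the coefficients and an LMS-style update applies. A minimal sketch of that formulation (illustrative only, not an algorithm from the paper):

```python
import numpy as np

def equation_error_iir_lms(x, d, na=2, nb=3, mu=0.01):
    """Adaptive IIR filtering in the equation-error formulation.

    Model: d(n) ~ sum_i a_i d(n-i) + sum_j b_j x(n-j), with na >= 1 and nb >= 1.
    """
    x, d = np.asarray(x, float), np.asarray(d, float)
    a, b = np.zeros(na), np.zeros(nb)          # feedback (on past d) and feedforward taps
    e = np.zeros(len(x))
    for n in range(max(na + 1, nb), len(x)):
        phi_d = d[n - 1:n - 1 - na:-1]         # d(n-1), ..., d(n-na)
        phi_x = x[n:n - nb:-1]                 # x(n), ..., x(n-nb+1)
        e[n] = d[n] - a @ phi_d - b @ phi_x    # prediction error, linear in (a, b)
        a += mu * e[n] * phi_d
        b += mu * e[n] * phi_x
    return a, b, e
```

The alternative output-error formulation feeds back the filter's own past outputs instead, which is where the convergence questions surveyed in such tutorials become delicate.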

236 citations


Journal ArticleDOI
TL;DR: A theoretical analysis of self-adaptive equalization for data transmission is carried out, starting from known convergence results for the corresponding trained adaptive filter, and it is proved that the algorithm is bounded.
Abstract: A theoretical analysis of self-adaptive equalization for data-transmission is carried out starting from known convergence results for the corresponding trained adaptive filter. The development relies on a suitable ergodicity model for the sequence of observations at the output of the transmission channel. Thanks to the boundedness of the decision function used for data recovery, it can be proved that the algorithm is bounded. Strong convergence results can be reached when a perfect (noiseless) equalizer exists: the algorithm will converge to it if the eye pattern is initially open. Otherwise convergence may take place towards certain other stationary points of the algorithm for which domains of attraction have been defined. Some of them will result in a poor error rate. The case of a noisy channel exhibits limit points for the algorithm that differ from those of the classical (trained) algorithm. The stronger the noise, the greater the difference is. One of the principal results of this study is the proof of the stability of the usual decision feedback algorithms once the learning period is over.
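For concreteness, the kind of decision-directed (self-adaptive) update being analyzed is sketched below for binary (+/-1) data with an LMS-style coefficient adjustment; this is an illustration of the setting, not the paper's analysis.

```python
import numpy as np

def decision_directed_equalizer(r, num_taps=11, mu=0.005):
    """Blind, decision-directed linear equalizer for +/-1 data.

    r : received (channel-distorted, noisy) samples
    The hard decision sign(y) replaces the unavailable training symbol in the LMS
    error, which is why convergence hinges on the eye being initially open.
    """
    r = np.asarray(r, dtype=float)
    w = np.zeros(num_taps)
    w[num_taps // 2] = 1.0                       # centre-spike initialization
    decisions = np.zeros(len(r))
    for n in range(num_taps - 1, len(r)):
        x = r[n - num_taps + 1:n + 1][::-1]      # most recent received samples
        y = w @ x                                # equalizer output
        a_hat = 1.0 if y >= 0 else -1.0          # bounded decision function
        w += mu * (a_hat - y) * x                # decision-directed LMS update
        decisions[n] = a_hat
    return w, decisions
```

Once the error rate is low, a_hat almost always equals the transmitted symbol and the update behaves like the classical trained algorithm, which is the regime the stability results of the paper address.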

190 citations


Journal ArticleDOI
TL;DR: It is shown here, however, that for an important class of nonstationary problems, the misadjustment of conventional LMS is the same as that of orthogonalized LMS, which in the stationary case is shown to perform essentially as an exact least squares algorithm.
Abstract: A fundamental relationship exists between the quality of an adaptive solution and the amount of data used in obtaining it. Quality is defined here in terms of "misadjustment," the ratio of the excess mean square error (mse) in an adaptive solution to the minimum possible mse. The higher the misadjustment, the lower the quality is. The quality of the exact least squares solution is compared with the quality of the solutions obtained by the orthogonalized and the conventional least mean square (LMS) algorithms with stationary and nonstationary input data. When adapting with noisy observations, a filter trained with a finite data sample using an exact least squares algorithm will have a misadjustment given by M=\frac{n}{N}=\frac{\text{number of weights}}{\text{number of training samples}}. If the same adaptive filter were trained with a steady flow of data using an ideal "orthogonalized LMS" algorithm, the misadjustment would be M=\frac{n}{4\tau_{\mathrm{mse}}}, where \tau_{\mathrm{mse}} is the time constant of the learning process. Thus, for a given time constant \tau_{\mathrm{mse}} of the learning process, the ideal orthogonalized LMS algorithm will have about as low a misadjustment as can be achieved, since this algorithm performs essentially as an exact least squares algorithm with exponential data weighting. It is well known that when rapid convergence with stationary data is required, exact least squares algorithms can in certain cases outperform the conventional Widrow-Hoff LMS algorithm. It is shown here, however, that for an important class of nonstationary problems, the misadjustment of conventional LMS is the same as that of orthogonalized LMS, which in the stationary case is shown to perform essentially as an exact least squares algorithm.
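As a purely numerical illustration of these two relations (made-up values, not figures from the paper):

```python
n = 16          # number of adaptive weights
N = 800         # training samples available to the exact least squares solution
tau_mse = 50    # time constant, in samples, of the LMS learning curve

M_exact_ls = n / N               # 0.02 -> 2% excess mse
M_ortho_lms = n / (4 * tau_mse)  # 0.08 -> 8% excess mse at this adaptation speed
print(M_exact_ls, M_ortho_lms)
```

Slowing the adaptation (larger tau_mse) buys a lower misadjustment, which is the data-versus-quality trade-off the abstract describes.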

175 citations


Book
01 Jan 1984

175 citations


Journal ArticleDOI
TL;DR: It is proved for the binary reinforcement algorithm that the tap weight vector converges in distribution to a random vector that is suitably concentrated about the optimal value based on a least mean-absolute error cost function.
Abstract: Recently there has been increased interest in high speed adaptive filtering where the usual stochastic gradient or least mean-square (LMS) algorithm is replaced with the simpler algorithm where adaptation is guided only by the polarity of the error signal. In this paper the convergence of this binary reinforcement (BR) algorithm is proved under the usual independence assumption, and the surprising observation is made that, unlike the LMS algorithm, convergence occurs for any positive value of the step-size parameter. While the stochastic gradient algorithm attempts to minimize a mean-square error cost function, the binary reinforcement algorithm in fact attempts to minimize a mean-absolute error cost function. It is proved for the binary reinforcement algorithm that the tap weight vector converges in distribution to a random vector that is suitably concentrated about the optimal value based on a least mean-absolute error cost function. For a sufficiently small step size, the expected cost of the asymptotic weight vector can be made as close as desired to the minimum cost attained by the optimum weight vector.
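A minimal sketch of the binary-reinforcement (sign-error) update under discussion, next to the conventional LMS update it simplifies (illustrative variable names):

```python
import numpy as np

def sign_error_lms_step(w, x, d, mu):
    """Binary reinforcement: only the polarity of the error drives the update."""
    e = d - w @ x
    return w + mu * np.sign(e) * x, e

def lms_step(w, x, d, mu):
    """Conventional LMS for comparison: the full error value scales the update."""
    e = d - w @ x
    return w + mu * e * x, e
```

Because the correction magnitude no longer grows with the error, the sign version tolerates any positive step size, at the cost of converging toward the minimum of a mean-absolute rather than mean-square error criterion, as the paper proves.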

150 citations


Patent
02 Jul 1984
TL;DR: In this paper, a two-input crosstalk-resistant adaptive noise canceller with first and second summer means was proposed, where the first summer means provides a canceller output signal which is the difference between the primary input signal and the first adaptive filter output signal.
Abstract: A two-input crosstalk-resistant adaptive noise canceller receives a primary input signal including a desired speech signal portion and an undesired noise signal portion and also receives a reference input signal having a reference noise input portion and a crosstalk speech portion. The canceller has first and second summer means and first and second adaptive filter means. The first summer means provides a canceller output signal which is the difference between the primary input signal and the first adaptive filter output signal. The canceller output signal is applied to the reference input of the second adaptive filter and to one of a pair of error-control inputs of the first adaptive filter. The second error-control input of the first adaptive filter is provided by the signal at the output of the second adaptive filter, which receives a single error-control input signal from the output of the second summer means. The second summer provides an output signal which is the difference between the reference input signal and the second adaptive filter output signal. With the correlation bias between the desired primary input (speech) signal and the crosstalk (speech) signal in the reference input substantially reduced, the canceller output signal is then related substantially only to the primary input desired signal.
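For context, the classical two-input adaptive noise canceller with a single adaptive filter is sketched below; the patented structure adds a second, cross-coupled filter and summer to suppress the bias caused by speech leaking (crosstalk) into the reference input. This sketch is the textbook canceller, not the patented arrangement.

```python
import numpy as np

def lms_noise_canceller(primary, reference, num_taps=32, mu=0.01):
    """Classical ANC: estimate the noise in `primary` from `reference` and subtract it.

    primary   : desired speech plus noise
    reference : noise correlated with the noise in `primary` (assumed crosstalk-free here)
    Returns the canceller output, i.e. the cleaned-signal estimate.
    """
    primary = np.asarray(primary, float)
    reference = np.asarray(reference, float)
    w = np.zeros(num_taps)
    out = np.zeros(len(primary))
    for n in range(num_taps - 1, len(primary)):
        x = reference[n - num_taps + 1:n + 1][::-1]   # recent reference samples
        noise_est = w @ x                             # adaptive filter output
        e = primary[n] - noise_est                    # canceller output = error signal
        w += mu * e * x                               # LMS update driven by that output
        out[n] = e
    return out
```

When the reference also contains speech, this single-filter canceller partially cancels the desired signal as well, which is exactly the defect the cross-coupled two-filter structure of the patent is designed to remove.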

Journal ArticleDOI
TL;DR: It is shown that a multichannel LS estimation algorithm with a different number of parameters to be estimated in each channel can be implemented by cascading lattice stages of nondescending dimension to form a generalized lattice structure.
Abstract: A generalized multichannel least squares (LS) lattice algorithm which is appropriate for multichannel adaptive filtering and estimation is presented in this paper. It is shown that a multichannel LS estimation algorithm with a different number of parameters to be estimated in each channel can be implemented by cascading lattice stages of nondescending dimension to form a generalized lattice structure. A new realization of a multichannel lattice stage is also presented. This realization employs only scalar operations and has a computational complexity of O(p^2) for each p-channel lattice stage.

Journal ArticleDOI
TL;DR: A unified theory is presented to characterize least-squares adaptive filters, in either lattice or transversal-filter form, for nonstationary processes, based upon a geometric formulation of least-squares estimation and on the concept of displacement rank.
Abstract: A unified theory is presented to characterize least-squares adaptive filters, in either lattice or transversal-filter form, for nonstationary processes. The derivations are based upon a geometric formulation of least-squares estimation and on the concept of displacement rank. A few basic geometric relations are shown to underlie the various algorithms. Insights into the fundamental concepts that unify lattice- and transversal-filter approaches to least-squares adaptive filters are also given. The general results are illustrated by applications to the so-called "pre-windowed" and "growing-memory covariance" formulations of the deterministic least-squares problem.

Journal ArticleDOI
TL;DR: In this paper, an adaptive filter whose main feature is to preserve edges and impulses present in the signal is analyzed by the computation of the mean-square error (MSE) of its output sequence.
Abstract: An adaptive filter whose main feature is to preserve edges and impulses present in the signal is analyzed by the computation of the mean-square error (MSE) of its output sequence. The filter in its more general form is highly nonlinear, resembling the M-type estimators used in robust statistics. A simplified form used here allows the exact computation of the MSE when the filter length is finite. This MSE can be compared to the ones obtained for a median filter and a mean filter. It is shown that for a wide range of the filter and signal parameters such as filter length, edge heights, and impulse width, the performance of the filter proposed in this paper is superior to the other filters mentioned above. An additional advantage of the simplified version of the filter is that in most cases, its computation amounts to a linear adaptive averaging. This contrasts with the amount of calculation required to implement the median filter and any other filter based on the order statistics of the measured samples.

Journal ArticleDOI
TL;DR: In this article, the authors consider the use of various digital prefilter structures and provide a quantitative sense of the level of reduction in computational complexity that can be achieved by using an appropriately designed pre-filter.
Abstract: A new approach to the design of low complexity FIR digital filters was presented in [1]. The essence of the method is to separate the design problem into two parts: the realization of an efficient prefilter and the design of the corresponding equalizer. This separation allows the filter designer to focus on the computational complexity issue within the simplified context of the prefilter network. In this paper, we consider the use of various digital prefilter structures. The number of possibilities for digital prefilters is unlimited, so that our treatment cannot be exhaustive. However, our study does provide a quantitative sense of the level of reduction in computational complexity that can be achieved by the use of an appropriately designed prefilter.

Journal ArticleDOI
TL;DR: The proposed infinite impulse response filter has a special structure that guarantees the desired transfer characteristics and is derived using a general prediction error framework.
Abstract: An adaptive notch filter is derived by using a general prediction error framework. The proposed infinite impulse response filter has a special structure that guarantees the desired transfer characteristics. The filter coefficients are updated by a version of the recursive maximum likelihood algorithm. The convergence properties of the algorithm and its asymptotic behavior are discussed, and its performance is evaluated by simulation results.

Patent
31 May 1984
TL;DR: In this article, an adaptive filter system is proposed which is capable of automatic changes in filter parameters such as width and center frequency of the teeth of a comb type bandpass envelope of the filter.
Abstract: Noise reduction on a video signal is achieved by an adaptive filter system which is capable of automatic changes in filter parameters. This inventive concept includes an automatic method of independently changing both the width and center frequency of the teeth of a comb type bandpass envelope of the filter, as well as adjusting the amplitude response of the filter, independent of the bandpass characteristics, in order to closely match the filter bandpass response to the power spectrum of the video signal being processed, thus rejecting noise in those portions of the spectrum not being used by the video signal. The filter system herein disclosed also provides an adaptive spatial processing of the video signal thus further improving said signal by enhancing detail in the image and by smoothing low amplitude noise in relatively detail free areas of the picture.

Book
01 Jan 1984
TL;DR: Stochastic convergence theory is reviewed in this text, including 33 fundamental martingale and convergence theorems; the book unifies identification theory, adaptive filtering, control and decision, and time series analysis.
Abstract: Stochastic convergence theory is reviewed in this text, including 33 fundamental martingale and convergence theorems. The book unifies identification theory, adaptive filtering, control and decision, and time series analysis. Examples of practical microcomputer-based applications are included.

Patent
02 Apr 1984
TL;DR: In this article, a technique for recovering each of an entire analog speech signal and a modulated data signal simultaneously received over a transmission channel such as a common analog telephone speech channel was proposed.
Abstract: The present invention relates to a technique for recovering each of an entire analog speech signal and a modulated data signal simultaneously received over a transmission channel such as a common analog telephone speech channel. In the received composite signal, the entire modulated data signal is multiplexed within the normal analog speech signal frequency band where the speech is present and its signal power density characteristic is at a low level. Separation of the speech and data signals at the receiver is effected by recovering the modulation carrier frequency and demodulating the received signal to recover the data signal. The data signal is then (a) remodulated with the recovered carrier, (b) modified to cancel phase jitter and frequency offset errors detected during the data demodulating process and (c) convolved with an arbitrary channel impulse response in an adaptive filter whose output signal is subtracted from the received composite data and speech signal to generate the recovered speech signal. To improve the recovered speech signal, a least mean square algorithm is used to update the arbitrary channel impulse response output signal of the adaptive filter.

Journal ArticleDOI
TL;DR: This paper considers the use of finite-tap delay line equalizers, and finds that a simple, suboptimal form of timing recovery is generally quite adequate, and that fractionally spaced equalizers are more advantageous than synchronously spaced equalizers with the same number of taps.
Abstract: Recent analysis/simulation studies have quantified the multipath outage statistics of digital radio systems using ideal adaptive equalization. In this paper, we consider the use of finite-tap delay line equalizers, with the aim of determining how many taps are needed to approximate ideal performance. To this end, we assume an M -level QAM system using cosine rolloff spectral shaping and an adaptive equalizer with either fractionally spaced or synchronously spaced taps. We invoke a widely used statistical model for the fading channel and computer-simulate thousands of responses from its ensemble. For each trial, we compute a detection signal-to-distortion measure, suitably maximized with respect to the tap gains. We can thereby obtain probability distributions of this measure for specified combinations of system parameters. These distributions, in turn, can be interpreted as outage probabilities (or outage seconds) versus the number of modulation levels. A major finding of this study is that, for the assumed multipath fading model, very few taps (the order of five) are needed to approximate the performance of an ideal infinite-tap equalizer. We also find that a simple, suboptimal form of timing recovery is generally quite adequate, and that fractionally spaced equalizers are more advantageous than synchronously spaced equalizers with the same number of taps. This advantage is minor for rolloff factors of 0.5 and larger but increases dramatically as the rolloff factor approaches zero.

Journal ArticleDOI
Abstract: Computer simulations of the scaling and rotation sensitivity of a phase-only filter were performed, showing that it is much more sensitive to such input variations than is a classical matched filter. Values of the peak correlation spot power versus both rotation angle and scale factor are presented for both filter types. Several theorems are derived for calculating the optical efficiency of any filter in both input space and frequency space.
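A phase-only filter retains only the phase of the reference spectrum. The NumPy sketch below builds both filter types and returns the correlation peak power, which is the quantity compared in the paper (an illustrative reconstruction, not the paper's simulation code).

```python
import numpy as np

def correlation_peak(scene, reference, phase_only=True):
    """Correlate `scene` with a filter made from `reference`; return the peak power.

    phase_only=True : H = conj(F) / |F|   (phase-only filter)
    phase_only=False: H = conj(F)         (classical matched filter)
    """
    F = np.fft.fft2(reference, s=scene.shape)
    H = np.conj(F) / np.maximum(np.abs(F), 1e-12) if phase_only else np.conj(F)
    corr = np.fft.ifft2(np.fft.fft2(scene) * H)
    return np.max(np.abs(corr) ** 2)
```

Evaluating this peak for rotated or rescaled copies of the reference object reproduces the kind of sensitivity curves the paper reports, with the phase-only peak falling off much faster.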

Journal ArticleDOI
N. Verhoeckx1, T. Claasen
TL;DR: A new type of design graph is introduced, which characterizes the convergence pretty well and which is called "the elephant's ear," due to its typical shape, and some effects of non-Gaussian statistics of the received signal are considered.
Abstract: In this paper we present some tools that can be helpful in the design of adaptive digital filters operating with the so-called sign algorithm. Although theoretical results with respect to the convergence behavior of such filters are available, it may be hard to derive practical design criteria from them. We introduce a new type of design graph, which characterizes the convergence pretty well and which we call "the elephant's ear," due to its typical shape. We also consider some effects of non-Gaussian statistics of the received signal and show how the preceding analysis can be extended to include the effects of dithering. Finally, we pay attention to the digital word lengths required in a finite precision implementation. The subject of this paper is restricted to adaptive digital echo cancellers (EC), but the results can be generalized to include other adaptive filters like decision feedback equalizers (DFE) or combinations of EC and DFE.

Journal ArticleDOI
TL;DR: In this paper, an algorithm for self-tuning optimal fixed-lag smoothing or filtering for linear discrete-time multivariable processes is proposed, which involves spectral factorization of polynomial matrices and assumes knowledge of the process parameters and the noise statistics.
Abstract: An algorithm is proposed for self-tuning optimal fixed-lag smoothing or filtering for linear discrete-time multivariable processes. A z-transfer function solution to the discrete multivariable estimation problem is first presented. This solution involves spectral factorization of polynomial matrices and assumes knowledge of the process parameters and the noise statistics. The assumption is then made that the signal-generating process and noise statistics are unknown. The problem is reformulated so that the model is in an innovations signal form, and implicit self-tuning estimation algorithms are proposed. The parameters of the innovation model of the process can be estimated using an extended Kalman filter or, alternatively, extended recursive least squares. These estimated parameters are used directly in the calculation of the predicted, smoothed, or filtered estimates. The approach is an attempt to generalize the work of Hagander and Wittenmark.

Proceedings ArticleDOI
01 Mar 1984
TL;DR: A new, promising algorithm for adaptive estimation of eigenvectors corresponding to the smallest eigenvalues is introduced, and a generalization of Thompson's algorithm for estimating several eigenvectors is presented.
Abstract: In several applications of signal processing recursive algorithms for estimating a few eigenvectors of correlation or covariance matrices directly from the incoming samples are desirable. In this paper such algorithms are derived by starting from an extension of the classical power method of numerical analysis, instead of the usual gradient approach. This viewpoint leads to useful and relatively simple rules for determining the gain parameters of Owsley's stochastic gradient ascent algorithm for sensor array processing and Thompson's adaptive algorithm for unbiased frequency estimation using the Pisarenko method. A new, promising algorithm for adaptive estimation of eigenvectors corresponding to the smallest eigenvalues is introduced. Preliminary numerical results and comparisons are given, and a generalization of Thompson's algorithm for estimating several eigenvectors is presented.
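The power-method viewpoint leads to very simple recursions. The sketch below tracks the dominant eigenvector of an exponentially weighted sample covariance with one power step per sample; the paper's main interest, the eigenvectors of the smallest eigenvalues, requires a different (e.g. shifted or inverse) iteration and is not covered by this toy.

```python
import numpy as np

def track_dominant_eigenvector(samples, beta=0.99):
    """Adaptive power iteration on a recursively updated covariance estimate.

    samples : sequence of (p,) data vectors
    beta    : forgetting factor for the covariance estimate
    """
    p = len(samples[0])
    R = np.eye(p)                        # crude initial covariance estimate
    w = np.ones(p) / np.sqrt(p)          # initial eigenvector estimate
    for x in samples:
        x = np.asarray(x, float)
        R = beta * R + (1.0 - beta) * np.outer(x, x)   # covariance update
        w = R @ w                                      # one power-method step
        w /= np.linalg.norm(w)                         # renormalize
    return w, R
```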

Proceedings ArticleDOI
01 Jan 1984
TL;DR: The proposed adaptive inverse modeling process is a promising new approach to the design of adaptive control systems and can be used to obtain a stable controller, whether the plant is minimum or non-minimum phase.
Abstract: A few of the well established methods of adaptive signal processing are modified and extended for application to adaptive control. An unknown plant will track an input command signal if the plant is preceded by a controller whose transfer function approximates the inverse of the plant transfer function. An adaptive inverse modeling process can be used to obtain a stable controller, whether the plant is minimum or non-minimum phase. No direct feedback is involved. However the system output is monitored and utilized in order to adjust the parameters of the controller. The proposed method is a promising new approach to the design of adaptive control systems.
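A minimal sketch of the inverse-modeling step: an FIR model is adapted by LMS so that, when driven by the plant output, it reproduces a delayed copy of the plant input. The delay is what makes a stable approximate inverse possible even for non-minimum-phase plants. Names and defaults below are illustrative, not taken from the paper.

```python
import numpy as np

def adapt_plant_inverse(u, y, num_taps=16, delay=8, mu=0.01):
    """Fit an FIR inverse model c so that c * y approximates u delayed by `delay` samples.

    u : plant input sequence
    y : measured plant output sequence
    """
    u, y = np.asarray(u, float), np.asarray(y, float)
    c = np.zeros(num_taps)
    for n in range(max(num_taps - 1, delay), len(y)):
        x = y[n - num_taps + 1:n + 1][::-1]   # recent plant outputs feed the inverse model
        d = u[n - delay]                      # delayed plant input is the training target
        e = d - c @ x
        c += mu * e * x                       # LMS update of the inverse-model taps
    return c
```

Used as a controller placed ahead of the plant, such a delayed inverse makes the controller-plant cascade approximate a pure delay, which is the tracking behaviour described in the abstract.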

Journal ArticleDOI
TL;DR: This paper concerns the development of an adaptive probabilistic data association filter (APDAF), a Bayesian method that estimates the state of a target in a cluttered environment when the noise statistics are unknown.

Proceedings ArticleDOI
19 Mar 1984
TL;DR: This paper introduces a computationally efficient technique for splitting a signal into N equally spaced sub-bands subsampled by 1/N and for near perfectly reconstructing the original signal from the sub-band signals.
Abstract: This paper introduces a computationally efficient technique for splitting a signal into N equally spaced sub-bands subsampled by 1/N and for near perfectly reconstructing the original signal from the sub-band signals. This technique is based on a multirate approach where some operations are nested to decrease the computation load. Simulation results show that the performance is comparable to that of conventional quadrature mirror filters, but with a very significant reduction in computational complexity.
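As a toy illustration of N-band splitting with 1/N subsampling and exact reconstruction, the block DFT below realizes the simplest uniform DFT filter bank (a length-N rectangular prototype). It shows only the analysis/synthesis relationship, not the nested multirate operations or the designed prototype filters of the paper.

```python
import numpy as np

def split_into_subbands(x, N):
    """Split x into N subband signals, each subsampled by N (block-DFT analysis)."""
    L = (len(x) // N) * N
    blocks = np.asarray(x[:L], dtype=float).reshape(-1, N)   # consecutive length-N blocks
    return np.fft.fft(blocks, axis=1).T                      # row k = subband k, decimated by N

def reconstruct(subbands):
    """Exactly invert split_into_subbands (perfect reconstruction for this toy bank)."""
    blocks = np.fft.ifft(subbands.T, axis=1)
    return blocks.reshape(-1).real

x = np.random.randn(1024)
assert np.allclose(reconstruct(split_into_subbands(x, 8)), x)
```

Practical designs such as the one in the paper replace the rectangular prototype with a long, well-shaped lowpass filter to obtain usable band separation while keeping the reconstruction error small.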

Proceedings ArticleDOI
01 Mar 1984
TL;DR: This paper provides a quantitative analysis of the tracking characteristics of least squares algorithms and a comparison is made with the tracking performance of the LMS algorithm.
Abstract: This paper provides a quantitative analysis of the tracking characteristics of least squares algorithms. A comparison is made with the tracking performance of the LMS algorithm. Other algorithms that are similar to least squares algorithms, such as the gradient lattice algorithm and the Gram-Schmidt orthogonalization algorithm are also considered. Simulation results are provided to reinforce the analytical results and conclusions.

Journal ArticleDOI
TL;DR: It is shown that this structure exhibits high inherent parallelism that is ideally suited for VLSI implementation or multimicroprocessor systems and enables high-speed processing with the number of multiplies and additions per output sample much less than those of the block-state or canonical filter realizations.
Abstract: A new structure for realizing IIR digital filters is introduced based on the idea of processing sequences by blocks. It is shown that this structure exhibits high inherent parallelism that is ideally suited for VLSI implementation or multimicroprocessor systems. This enables high-speed processing with the number of multiplies and additions per output sample much less than those of the block-state or canonical filter realizations. The roundoff noise level in the output of the new filter structure is derived and compared to the noise levels of the block-state and canonical forms. Further, it is shown that the new filter structure will present no scaling problems if the canonical filter is scaled. Finally, the extension of the new filter structure to the realization of periodically time-varying digital filters is also presented.

Journal ArticleDOI
TL;DR: In this paper, three types of 3D recursive digital filters are defined under different radial symmetry constraints in the 3D coordinate axes: the symmetrical filter, the nonsymmetrical filters I and II.
Abstract: Three types of three-dimensional (3-D) recursive digital filters are first defined under different radial symmetry constraints in the 3-D coordinate axes: the symmetrical filter, the nonsymmetrical filters I and II. A design technique is then outlined for these 3-D recursive digital filters whose magnitude responses can be decomposed into several cubic pass- and stopbands. The filter designed by the present approach can be realized by cascading and paralleling of the well-known one-dimensional (1-D) component transfer functions so that the stability test is simple and the filter can be implemented easily. Three examples are included to illustrate the design procedure for each type of 3-D filter.