
Showing papers on "Adaptive algorithm published in 1998"


Proceedings ArticleDOI
01 Jun 1998
TL;DR: A conservative error metric is introduced, capturing the intuition that for an approximate histogram to have low error, the error must be small in all regions of the histogram, and an optimal bound is presented on the amount of sampling required for pre-specified error bounds.
Abstract: Random sampling is a standard technique for constructing (approximate) histograms for query optimization. However, any real implementation in commercial products requires solving the hard problem of determining “How much sampling is enough?” We address this critical question in the context of equi-height histograms used in many commercial products, including Microsoft SQL Server. We introduce a conservative error metric capturing the intuition that for an approximate histogram to have low error, the error must be small in all regions of the histogram. We then present a result establishing an optimal bound on the amount of sampling required for pre-specified error bounds. We also describe an adaptive page sampling algorithm which achieves greater efficiency by using all values in a sampled page but adjusts the amount of sampling depending on clustering of values in pages. Next, we establish that the problem of estimating the number of distinct values is provably difficult, but propose a new error metric which has a reliable estimator and can still be exploited by query optimizers to influence the choice of execution plans. The algorithm for histogram construction was prototyped on Microsoft SQL Server 7.0 and we present experimental results showing that the adaptive algorithm accurately approximates the true histogram over different data distributions.

299 citations
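A minimal sketch of the core idea, building an equi-height histogram from a random sample and checking it against the paper's conservative notion of error (worst deviation over all buckets); the data, sample size, and bucket count are hypothetical, and the paper's adaptive page-sampling logic is not modeled:

```python
import bisect
import random

random.seed(4)
data = [random.gauss(0.0, 1.0) for _ in range(100_000)]  # a hypothetical numeric column

def equiheight_boundaries(values, k):
    # boundaries that split the sorted values into k (near) equal-count buckets
    v = sorted(values)
    return [v[(i * len(v)) // k] for i in range(1, k)]

k = 10
sample = random.sample(data, 2_000)        # "how much sampling is enough?"
approx = equiheight_boundaries(sample, k)  # approximate histogram from the sample

# conservative error: the worst deviation, over ALL buckets, of the true fraction
# of rows each approximate bucket captures from the ideal 1/k
sorted_data = sorted(data)
edges = [float("-inf")] + approx + [float("inf")]
fracs = [
    (bisect.bisect_right(sorted_data, hi) - bisect.bisect_right(sorted_data, lo)) / len(data)
    for lo, hi in zip(edges, edges[1:])
]
max_err = max(abs(f - 1.0 / k) for f in fracs)
```

Under this metric a histogram is only as good as its worst bucket, which is what forces the sampling bound to hold uniformly rather than on average.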


Journal ArticleDOI
TL;DR: An adaptive IF estimator with a time-varying and data-driven window length, which is able to provide quality close to what could be achieved if the smoothness of the IF were known in advance is developed.
Abstract: The estimation of the instantaneous frequency (IF) of a harmonic complex-valued signal with an additive noise using the Wigner distribution is considered. If the IF is a nonlinear function of time, the bias of the estimate depends on the window length. The optimal choice of the window length, based on the asymptotic formulae for the variance and bias, can be used in order to resolve the bias-variance tradeoff. However, the practical value of this solution is not significant because the optimal window length depends on the unknown smoothness of the IF. The goal of this paper is to develop an adaptive IF estimator with a time-varying and data-driven window length, which is able to provide quality close to what could be achieved if the smoothness of the IF were known in advance. The algorithm uses the asymptotic formula for the variance of the estimator only. Its value may be easily obtained in the case of white noise and relatively high sampling rate. Simulation shows good accuracy for the proposed adaptive algorithm.

240 citations


Journal ArticleDOI
TL;DR: The results indicated that the proposed method is very effective in adaptively finding the optimal solution in a mean square error (MSE) sense and it is shown that this method gives better MSE performance than those conventional wavelet shrinkage methods.
Abstract: A new adaptive denoising method is presented based on Stein's (1981) unbiased risk estimate (SURE) and on a new class of thresholding functions. First, we present a new class of thresholding functions that has a continuous derivative while the derivative of standard soft-thresholding function is not continuous. The new thresholding functions make it possible to construct the adaptive algorithm whenever using the wavelet shrinkage method. By using the new thresholding functions, a new adaptive denoising method is presented based on SURE. Several numerical examples are given. The results indicated that for denoising applications, the proposed method is very effective in adaptively finding the optimal solution in a mean square error (MSE) sense. It is also shown that this method gives better MSE performance than those conventional wavelet shrinkage methods.

240 citations
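A small sketch of SURE-driven threshold selection in Python. The smooth shrinkage rule below is an illustrative C^1 function with the stated property (continuous derivative, soft-threshold-like tails), not the paper's exact family, and the signal is synthetic:

```python
import numpy as np

def soft_threshold(x, t):
    # standard soft-thresholding: not differentiable at |x| = t
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def smooth_threshold(x, t):
    # an illustrative C^1 shrinkage (NOT the paper's exact family): behaves like
    # soft-thresholding for |x| >> t but has a continuous derivative everywhere
    return x - t * np.tanh(x / t)

def sure_risk(y, t, f, sigma=1.0, eps=1e-6):
    # Stein's unbiased risk estimate for coefficients y ~ N(mu, sigma^2 I);
    # the divergence of f is computed numerically here for simplicity
    div = np.sum((f(y + eps, t) - f(y - eps, t)) / (2.0 * eps))
    return np.sum((f(y, t) - y) ** 2) - sigma**2 * y.size + 2.0 * sigma**2 * div

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1.0, 480),
                    rng.normal(5.0, 1.0, 32)])   # sparse "signal" coefficients + noise
grid = np.linspace(0.2, 3.0, 29)
t_star = grid[np.argmin([sure_risk(y, t, smooth_threshold) for t in grid])]
```

The continuous derivative matters precisely because SURE contains the divergence term: with a smooth rule, the risk estimate varies smoothly in t and can be minimized by gradient-style adaptation rather than grid search.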


Journal ArticleDOI
TL;DR: A comprehensive comparison of 2D spectral estimation methods for SAR imaging shows that MVM, ASR, and SVA offer significant advantages over Fourier methods for estimating both scattering intensity and interferometric height, and allows empirical comparison of the accuracies of Fourier, MVM, ASR, and SVA interferometric height estimates.
Abstract: Discusses the use of modern 2D spectral estimation algorithms for synthetic aperture radar (SAR) imaging. The motivation for applying power spectrum estimation methods to SAR imaging is to improve resolution, remove sidelobe artifacts, and reduce speckle compared to what is possible with conventional Fourier transform SAR imaging techniques. This paper makes two principal contributions to the field of adaptive SAR imaging. First, it is a comprehensive comparison of 2D spectral estimation methods for SAR imaging. It provides a synopsis of the algorithms available, discusses their relative merits for SAR imaging, and illustrates their performance on simulated and collected SAR imagery. Some of the algorithms presented or their derivations are new, as are some of the insights into or analyses of the algorithms. Second, this work develops multichannel variants of four related algorithms, minimum variance method (MVM), reduced-rank MVM (RRMVM), adaptive sidelobe reduction (ASR) and space variant apodization (SVA) to estimate both reflectivity intensity and interferometric height from polarimetric displaced-aperture interferometric data. All of these interferometric variants are new. In the interferometric context, adaptive spectral estimation can improve the height estimates through a combination of adaptive nulling and averaging. Examples illustrate that MVM, ASR, and SVA offer significant advantages over Fourier methods for estimating both scattering intensity and interferometric height, and allow empirical comparison of the accuracies of Fourier, MVM, ASR, and SVA interferometric height estimates.

212 citations


Book ChapterDOI
27 Sep 1998
TL;DR: Compelling evidence is presented that AntNet, when measuring performance by standard measures such as network throughput and average packet delay, outperforms the current Internet routing algorithm (OSPF), some older Internet routing algorithms (SPF and distributed adaptive Bellman-Ford), and recently proposed forms of asynchronous online Bellman-Ford (Q-routing and Predictive Q-routing).
Abstract: In this paper we present AntNet, a novel adaptive approach to routing tables learning in packet-switched communications networks. AntNet is inspired by the stigmergy model of communication observed in ant colonies. We present compelling evidence that AntNet, when measuring performance by standard measures such as network throughput and average packet delay, outperforms the current Internet routing algorithm (OSPF), some old Internet routing algorithms (SPF and distributed adaptive Bellman-Ford), and recently proposed forms of asynchronous online Bellman-Ford (Q-routing and Predictive Q-routing).

184 citations
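A toy sketch of the stigmergy mechanism at one node: forward "ants" pick a next hop from a probabilistic routing table, and backward ants reinforce the table in proportion to trip quality. The delays, reinforcement rule, and constants are hypothetical, not AntNet's actual update law:

```python
import random

random.seed(5)
# one node's probabilistic routing table toward a single destination; AntNet keeps
# such a table per (neighbor, destination) pair
neighbors = ["A", "B", "C"]
p = {n: 1.0 / 3.0 for n in neighbors}
mean_delay = {"A": 5.0, "B": 1.0, "C": 3.0}   # hypothetical per-neighbor trip times

for _ in range(2_000):
    # forward ant picks a next hop stochastically from the current table
    n = random.choices(neighbors, weights=[p[m] for m in neighbors])[0]
    trip = random.expovariate(1.0 / mean_delay[n])   # observed round-trip "delay"
    r = 0.1 * min(1.0, 1.0 / trip)                   # smaller reward for slower trips
    p[n] += r * (1.0 - p[n])                         # backward ant reinforces its path...
    for m in neighbors:
        if m != n:
            p[m] -= r * p[m]                         # ...and the rest is renormalized

best = max(p, key=p.get)   # the low-delay neighbor tends to dominate the table
```

The update keeps the table a valid probability distribution by construction, which is the essential invariant of pheromone-style routing tables.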


Journal ArticleDOI
01 Jan 1998
TL;DR: An adaptive fuzzy sliding-mode control system, which combines the merits of sliding- mode control, the fuzzy inference mechanism and the adaptive algorithm, is proposed and position control of a permanent magnet synchronous servo motor drive using the proposed control strategies is illustrated.
Abstract: An adaptive fuzzy sliding-mode control system, which combines the merits of sliding-mode control, the fuzzy inference mechanism and the adaptive algorithm, is proposed. First a sliding-mode controller with an integral-operation switching surface is designed. Then a fuzzy sliding-mode controller is investigated in which a simple fuzzy inference mechanism is used to estimate the upper bound of uncertainties. The fuzzy inference mechanism with centre adaptation of membership functions is investigated to estimate the optimal bound of uncertainties. Position control of a permanent magnet synchronous servo motor drive using the proposed control strategies is illustrated.

156 citations


Journal ArticleDOI
TL;DR: A novel error concealment method called best neighborhood matching (BNM) is presented, exploiting a special kind of information redundancy: blockwise similarity within the image. It utilizes information from not only neighboring pixels but also remote regions of the image.
Abstract: Imperfect transmission of block-coded images often results in lost blocks. A novel error concealment method called best neighborhood matching (BNM) is presented, using a special kind of information redundancy: blockwise similarity within the image. The proposed algorithm can utilize the information of not only neighboring pixels, but also remote regions in the image. Very good restoration results are obtained in experiments.

156 citations
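A minimal sketch of the BNM idea on a synthetic, strongly self-similar image: the ring of pixels surrounding a lost block is matched against rings anywhere in the image, and the block under the best-matching ring fills the hole. Image contents, block size, and search details are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(6)
tile = rng.integers(0, 256, (8, 8)).astype(float)
img = np.tile(tile, (8, 8))            # 64x64 synthetic image with strong self-similarity

B = 8
r, c = 24, 32                          # top-left corner of a lost 8x8 block
lost = img.copy()
lost[r:r + B, c:c + B] = np.nan

def ring(a, i, j):
    # the 1-pixel border surrounding a BxB block at (i, j); interior zeroed out
    patch = a[i - 1:i + B + 1, j - 1:j + B + 1].copy()
    patch[1:-1, 1:-1] = 0.0
    return patch

target = ring(lost, r, c)
best_block, best_err = None, np.inf
H, W = img.shape
for i in range(1, H - B - 1):          # search the WHOLE image, not just neighbors
    for j in range(1, W - B - 1):
        if (i, j) == (r, c):
            continue
        cand = ring(lost, i, j)
        if np.isnan(cand).any():
            continue                   # skip windows touching the damaged region
        err = np.sum((cand - target) ** 2)
        if err < best_err:
            best_err = err
            best_block = lost[i:i + B, j:j + B].copy()

restored = lost.copy()
restored[r:r + B, c:c + B] = best_block   # fill the hole with the best-matching block
```

On this periodic test image an exact ring match exists elsewhere, so the hole is filled perfectly; on natural images the match is approximate but still exploits remote self-similarity.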


Journal ArticleDOI
01 Jul 1998
TL;DR: A review of adaptive detection techniques for direct-sequence code division multiple access (CDMA) signals is given; the goal is to improve CDMA system performance and capacity by reducing interference between users.
Abstract: A review of adaptive detection techniques for direct-sequence code division multiple access (CDMA) signals is given. The goal is to improve CDMA system performance and capacity by reducing interference between users. The techniques considered are implementations of multiuser receivers, for which background material is given. Adaptive algorithms improve the feasibility of such receivers. Three main forms of receivers are considered. The minimum mean square error (MMSE) receiver is described and its performance illustrated. Numerous adaptive algorithms can be used to implement the MMSE receiver, including blind techniques, which eliminate the need for training sequences. The adaptive decorrelator can be used to eliminate interference from known interferers, though it is prone to noise enhancement. Multistage and successive interference cancellation techniques reduce interference by cancellation of one detected signal from another. Practical problems and some open research topics are mentioned. These typically relate to the convergence rate and tracking performance of the adaptive algorithm.

142 citations
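A minimal sketch of one reviewed technique: training an MMSE-style multiuser receiver with the LMS stochastic-gradient update. The two-user model, spreading signatures, step size, and noise level are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 5_000, 4
# short (hypothetical) spreading signatures for the desired user and one interferer
s1 = np.array([1.0, 1.0, -1.0, 1.0]) / 2.0
s2 = np.array([1.0, -1.0, 1.0, 1.0]) / 2.0
b1 = rng.choice([-1.0, 1.0], N)          # desired user's symbols
b2 = rng.choice([-1.0, 1.0], N)          # interfering user's symbols
X = np.outer(b1, s1) + np.outer(b2, s2) + 0.1 * rng.normal(size=(N, L))

w = np.zeros(L)
mu = 0.05
for x, d in zip(X, b1):                  # training-sequence-driven adaptation
    e = d - w @ x                        # error against the known training symbol
    w += mu * e * x                      # LMS stochastic-gradient step toward the MMSE filter

ber = np.mean(np.sign(X @ w) != b1)      # raw symbol-error rate with the adapted filter
```

A blind variant would replace the training symbols d with a property of the signal itself, which is exactly what eliminates the training-sequence overhead mentioned in the abstract.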


Journal ArticleDOI
TL;DR: The NIC algorithm provides a fast on-line learning of the optimum weights for the two-layer linear NN and an adaptive algorithm based on the NIC for estimating and tracking the principal subspace of a vector sequence is developed.
Abstract: We introduce a novel information criterion (NIC) for searching for the optimum weights of a two-layer linear neural network (NN). The NIC exhibits a single global maximum attained if and only if the weights span the (desired) principal subspace of a covariance matrix. The other stationary points of the NIC are (unstable) saddle points. We develop an adaptive algorithm based on the NIC for estimating and tracking the principal subspace of a vector sequence. The NIC algorithm provides a fast on-line learning of the optimum weights for the two-layer linear NN. We establish the connections between the NIC algorithm and the conventional mean-square-error (MSE) based algorithms such as Oja's algorithm (Oja 1989), LMSER, PAST, APEX, and GHA. The NIC algorithm has several key advantages such as faster convergence, which is illustrated through analysis and simulation.

131 citations
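For context, a minimal sketch of Oja's rule, the classical MSE-based baseline the NIC algorithm is compared against, tracking a single principal direction; the data model and constants are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
u = np.array([0.8, 0.6])                             # true principal direction (unit norm)
X = (np.outer(rng.normal(scale=3.0, size=4_000), u)
     + rng.normal(scale=0.5, size=(4_000, 2)))       # strong variance along u, plus noise

w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 1e-3
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)    # Oja's rule: Hebbian growth with built-in normalization

alignment = abs(w @ u) / np.linalg.norm(w)   # |cos| between the learned weight and u
```

The NIC criterion generalizes this single-neuron picture to a two-layer linear network whose weights span the whole principal subspace, with faster convergence than Oja-type updates.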


Journal ArticleDOI
TL;DR: An adaptive algorithm compatible with the use of rectangular orthogonal transforms is proposed, thus allowing better tradeoffs between algorithm improvement, arithmetic complexity, and input/output delay, and leading to improvements in the convergence rate compared with both LMS and classical frequency domain algorithms.
Abstract: Transform-domain adaptive algorithms have been proposed to reduce the eigenvalue spread of the matrix governing their convergence, thus improving the convergence rate. However, a classical problem arises from the conflicting requirements between algorithm improvement requiring rather long transforms and the need to keep the input/output delay as small as possible, thus imposing short transforms. This dilemma has been alleviated by the so-called "short-block transform domain algorithms" but is still apparent. This paper proposes an adaptive algorithm compatible with the use of rectangular orthogonal transforms (e.g., critically subsampled, lossless, perfect reconstruction filter banks), thus allowing better tradeoffs between algorithm improvement, arithmetic complexity, and input/output delay. The method proposed makes a direct connection between the minimization of a specific weighted least squares criterion and the convergence rate of the corresponding stochastic gradient algorithm. This method leads to improvements in the convergence rate compared with both LMS and classical frequency domain algorithms.

119 citations


Journal ArticleDOI
TL;DR: An adaptive algorithm for extracting foreground objects from background in videophone or videoconference applications is presented, and is incorporated in motion-compensated discrete cosine transform (MC-DCT)-based coding schemes, allocating more bits to ROI than to non-ROI areas.
Abstract: An adaptive algorithm for extracting foreground objects from background in videophone or videoconference applications is presented. The algorithm uses a neural network architecture that classifies the video frames in regions of interest (ROI) and non-ROI areas, also being able to automatically adapt its performance to scene changes. The algorithm is incorporated in motion-compensated discrete cosine transform (MC-DCT)-based coding schemes, allocating more bits to ROI than to non-ROI areas. Simulation results are presented, using the Claire and Trevor sequences, which show reconstructed images of better quality, as well as signal-to-noise ratio improvements of about 1.4 dB, compared to those achieved by standard MC-DCT encoders.

Journal ArticleDOI
TL;DR: A posteriori error estimates for the heat equation in two space dimensions are presented and an adaptive algorithm is proposed, so that the estimated relative error is close to a preset tolerance.

Journal ArticleDOI
TL;DR: The connections of the alternative model for mixture of experts (ME) to the normalized radial basis function (NRBF) nets and extended normalized RBF (ENRBF) nets are established, and the well-known expectation-maximization (EM) algorithm for maximum likelihood learning is suggested to the two types of RBF nets.

Journal ArticleDOI
TL;DR: The gaussian mixture model of Pearson is employed in deriving a closed-form generic score function for strictly subgaussian sources to provide a computationally simple yet powerful algorithm for performing independent component analysis on arbitrary mixtures of nongaussian sources.
Abstract: This article develops an extended independent component analysis algorithm for mixtures of arbitrary subgaussian and supergaussian sources. The gaussian mixture model of Pearson is employed in deriving a closed-form generic score function for strictly subgaussian sources. This is combined with the score function for a unimodal supergaussian density to provide a computationally simple yet powerful algorithm for performing independent component analysis on arbitrary mixtures of nongaussian sources.
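A hedged sketch of the general mechanism: a natural-gradient ICA update with a ±1 switch per output selecting a subgaussian or supergaussian score (the paper's Pearson-based closed-form score is more refined than the tanh switch used here); sources, mixing matrix, and step sizes are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20_000
S = np.vstack([
    rng.laplace(size=n),                    # supergaussian source (positive kurtosis)
    rng.uniform(-1.73, 1.73, size=n),       # subgaussian source (negative kurtosis)
])
A = np.array([[1.0, 0.6], [0.4, 1.0]])      # hypothetical mixing matrix
X = A @ S

W = np.eye(2)
k = np.array([1.0, -1.0])                   # +1 for supergaussian, -1 for subgaussian outputs
mu = 0.05
for _ in range(5):                          # a few passes of small-batch natural gradient
    for t in range(0, n, 100):
        Y = W @ X[:, t:t + 100]
        phi = k[:, None] * np.tanh(Y)       # sub/supergaussian score switch
        G = np.eye(2) - (phi @ Y.T + Y @ Y.T) / Y.shape[1]
        W += mu * G @ W                     # natural-gradient update of the unmixing matrix

P = W @ A                                   # ideally a scaled permutation at convergence
```

The switch matters because using a supergaussian score on a subgaussian source (or vice versa) turns the stable separating solution into an unstable one.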

Journal ArticleDOI
TL;DR: This algorithm involves a very simple update term that is computationally comparable to the update in the classical LMS algorithm and is demonstrated through a computer simulation example involving lowpass filtering of a one-dimensional chirp-type signal in impulsive noise.
Abstract: Stochastic gradient-based adaptive algorithms are developed for the optimization of weighted myriad filters (WMyFs). WMyFs form a class of nonlinear filters, motivated by the properties of α-stable distributions, that have been proposed for robust non-Gaussian signal processing in impulsive noise environments. The weighted myriad for an N-long data window is described by a set of nonnegative weights {w_i}, i = 1, …, N, and the so-called linearity parameter K > 0. In the limit as K → ∞, the filter reduces to the familiar weighted mean filter (which is a constrained linear FIR filter). Necessary conditions are obtained for optimality of the filter weights under the mean absolute error criterion. An implicit formulation of the filter output is used to find an expression for the gradient of the cost function. Using instantaneous gradient estimates, an adaptive steepest-descent algorithm is then derived to optimize the weights. This algorithm involves a very simple update term that is computationally comparable to the update in the classical LMS algorithm. The robust performance of this adaptive algorithm is demonstrated through a computer simulation example involving lowpass filtering of a one-dimensional chirp-type signal in impulsive noise.
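A minimal sketch of the weighted myriad itself, found here by brute-force grid search rather than the paper's gradient machinery, to show the role of the linearity parameter K; window values and weights are hypothetical:

```python
import numpy as np

def weighted_myriad(x, w, K, grid_pts=4001):
    # weighted myriad: the beta minimizing sum_i w_i * log(K^2 + (x_i - beta)^2)
    betas = np.linspace(x.min(), x.max(), grid_pts)
    costs = [np.sum(w * np.log(K**2 + (x - b) ** 2)) for b in betas]
    return betas[int(np.argmin(costs))]

x = np.array([0.1, -0.2, 0.05, 50.0])    # observation window containing one impulse
w = np.ones_like(x)                      # uniform weights (hypothetical)

robust = weighted_myriad(x, w, K=0.5)    # small K: the impulse is largely ignored
linear = weighted_myriad(x, w, K=1e4)    # K -> infinity: approaches the weighted mean
```

Small K gives strong impulse rejection (the log cost saturates for distant samples), while large K recovers the weighted mean, so K interpolates between robust and linear behavior.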

Journal ArticleDOI
TL;DR: This paper applies the 'working parameter' approach to derive alternative EM-type implementations for fitting mixed effects models, which are shown empirically to be hundreds of times faster than the common EM-type implementations.
Abstract: The mixed effects model, in its various forms, is a common model in applied statistics. A useful strategy for fitting this model implements EM-type algorithms by treating the random effects as missing data. Such implementations, however, can be painfully slow when the variances of the random effects are small relative to the residual variance. In this paper, we apply the ‘working parameter’ approach to derive alternative EM-type implementations for fitting mixed effects models, which we show empirically can be hundreds of times faster than the common EM-type implementations. In our limited simulations, they also compare well with the routines in S-PLUS® and Stata® in terms of both speed and reliability. The central idea of the working parameter approach is to search for efficient data augmentation schemes for implementing the EM algorithm by minimizing the augmented information over the working parameter, and in the mixed effects setting this leads to a transfer of the mixed effects variances into the regression slope parameters. We also describe a variation for computing the restricted maximum likelihood estimate and an adaptive algorithm that takes advantage of both the standard and the alternative EM-type implementations.

Journal ArticleDOI
TL;DR: In this paper, a simple change of dependent variables that guarantees positivity of turbulence variables in numerical simulation codes is presented, which is valid for any numerical scheme, be it a finite difference, finite volume, or finite element method.
Abstract: A simple change of dependent variables that guarantees positivity of turbulence variables in numerical simulation codes is presented. The approach consists of solving for the natural logarithm of the turbulence variables, which are known to be strictly positive. The approach is valid for any numerical scheme, be it a finite difference, finite volume, or finite element method. The work focuses on the advantages of the proposed change of dependent variables within the framework of an adaptive finite element method. The turbulence equations in logarithmic variables are presented for the standard k-ε model. Error estimation and mesh adaptation procedures are described. The formulation is validated on a shear layer case for which an analytical solution is available. This provides a framework for rigorous comparison of the proposed approach with the standard solution technique, which makes use of k and ε as dependent variables. The approach is then applied to solve turbulent flow over a NACA0012 airfoil for which experimental measurements are available. The proposed procedure results in a robust adaptive algorithm. Improved predictions of turbulence variables are obtained using the proposed formulation.
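A minimal sketch of why the logarithmic change of variables guarantees positivity, using a scalar decay equation as a stand-in for the destruction terms of a turbulence model; the coefficients and step size are hypothetical and chosen so that the primitive-variable scheme overshoots:

```python
import math

# stand-in decay equation dk/dt = -c*k for the destruction terms that can drive
# turbulence variables negative in a discrete scheme
c, dt, steps, k0 = 3.0, 0.5, 20, 1.0     # note c*dt = 1.5 > 1

# explicit Euler on the primitive variable k overshoots through zero
k, k_min = k0, k0
for _ in range(steps):
    k += dt * (-c * k)                    # k is multiplied by (1 - c*dt) = -0.5 each step
    k_min = min(k_min, k)

# the same scheme on phi = ln(k) keeps k = exp(phi) strictly positive by construction
phi = math.log(k0)
for _ in range(steps):
    phi += dt * (-c)                      # d(ln k)/dt = (dk/dt)/k = -c
k_log = math.exp(phi)
```

Whatever value phi takes, exp(phi) is positive, so positivity holds for any scheme and any step size, which is the whole point of the change of variables.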

Journal ArticleDOI
TL;DR: In this paper, a sliding mode controller with an integral-operation switching surface is proposed, in which a simple adaptive algorithm is utilized to estimate the bound of uncertainties, and the position control for a permanent magnet (PM) synchronous servo motor drive using the proposed control strategies is illustrated.
Abstract: A novel sliding mode controller with an integral-operation switching surface is proposed. Furthermore, an adaptive sliding mode controller is investigated, in which a simple adaptive algorithm is utilized to estimate the bound of uncertainties. The position control for a permanent magnet (PM) synchronous servo motor drive using the proposed control strategies is illustrated. The theoretical analysis and the theorems for the proposed sliding mode controllers are described in detail. Simulation and experimental results show that the proposed controllers provide high-performance dynamic characteristics and are robust with regard to plant parameter variations and external load disturbance.
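A toy sketch of the adaptive-bound idea on a scalar plant: the switching control gain is not fixed but grows while the state is off the sliding surface, so no prior knowledge of the disturbance bound is needed. The plant, adaptation law, and all constants are hypothetical, and the switching term is smoothed with tanh to curb chattering:

```python
import math

# toy first-order plant x' = a*x + u + d(t) with an unknown disturbance bound
dt, T = 1e-3, 5.0
a, d_amp = -1.0, 0.8
x, x_ref = 2.0, 0.0
E_hat, gamma = 0.0, 5.0          # adaptive estimate of the uncertainty bound; adaptation gain
for step in range(int(T / dt)):
    t = step * dt
    s = x - x_ref                                 # switching surface (tracking error)
    E_hat += gamma * abs(s) * dt                  # adaptive law: raise the bound while off the surface
    u = -(E_hat + 0.5) * math.tanh(s / 0.05)      # switching control, smoothed to reduce chattering
    d = d_amp * math.sin(5.0 * t)                 # unknown bounded disturbance
    x += dt * (a * x + u + d)                     # Euler step of the plant
```

Once E_hat exceeds the actual disturbance bound, the error is driven into a thin boundary layer around the surface and stays there despite the disturbance.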

Journal ArticleDOI
TL;DR: A stable adaptive control method is developed for a class of first-order nonlinearly parameterized plants; the modified adaptive algorithms yield a stable system in which the output errors are guaranteed to converge to zero.
Abstract: A stable adaptive control method is developed for a class of first-order nonlinearly parameterized plants. The method is based on modified adaptive algorithms that yield a stable system in which the output errors are guaranteed to converge to zero. The stability analysis is carried out using suitably constructed Lyapunov functions.

Proceedings ArticleDOI
11 Oct 1998
TL;DR: An optimization algorithm based on an immune model is proposed and applied to the n-agent travelling salesman problem (n-TSP); simulations show that the immune algorithm performs well on combinatorial optimization problems.
Abstract: Like neural networks and genetic algorithms, adaptive algorithms have become popular, and these techniques are applied to many kinds of optimization problems. The immune system is an adaptive biological system whose functions are to identify and to eliminate foreign material. In this paper, we propose an optimization algorithm based on an immune model and apply it to the n-agent travelling salesman problem, called n-TSP. Some computer simulations are designed to investigate the performance of the immune algorithm. The simulation results show that the immune algorithm performs well on combinatorial optimization problems.

Journal ArticleDOI
D. J. Craft1
TL;DR: This paper reports on work at IBM's Austin and Burlington laboratories concerning fast hardware implementations of general-purpose lossless data compression algorithms, particularly for use in enhancing the data capacity of computer storage devices or systems, and transmission data rates for networking or telecommunications channels.
Abstract: This paper reports on work at IBM's Austin and Burlington laboratories concerning fast hardware implementations of general-purpose lossless data compression algorithms, particularly for use in enhancing the data capacity of computer storage devices or systems, and transmission data rates for networking or telecommunications channels. The distinctions between lossy and lossless compression and static and adaptive compression techniques are first reviewed. Then, two main classes of adaptive Lempel-Ziv algorithm, now known as LZ1 and LZ2, are introduced. An outline of early work comparing these two types of algorithm is presented, together with some fundamental distinctions which led to the choice and development of an IBM variant of the LZ1 algorithm, ALDC, and its implementation in hardware. The encoding format for ALDC is presented, together with details of IBM's current fast hardware CMOS compression engine designs, based on use of a content-addressable memory (CAM) array. Overall compression results are compared for ALDC and a number of other algorithms, using the CALGARY data compression benchmark file corpus. More recently, work using small hardware preprocessors to enhance the compression of ALDC on other types of data has shown promising results. Two such algorithmic extensions, BLDC and cLDC, are presented, with the results obtained on important data types for which significant improvement over ALDC alone is achieved.
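A toy LZ1/LZ77-style parser and decoder in Python, to make the sliding-window idea concrete; this is not ALDC's actual bit-level encoding format, and the linear match search below is exactly what the CAM array performs in parallel in hardware:

```python
def lz1_parse(data: bytes, window: int = 512):
    # toy LZ1 (LZ77-style) parser: emit literal bytes or (distance, length)
    # references into a sliding history window
    i, tokens = 0, []
    while i < len(data):
        best_len, best_dist = 0, 0
        for j in range(max(0, i - window), i):   # the CAM array does this search in parallel
            l = 0
            while i + l < len(data) and data[j + l] == data[i + l]:
                l += 1                           # matches may run past i (overlapping copy)
            if l > best_len:
                best_len, best_dist = l, i - j
        if best_len >= 3:                        # short matches cost more than literals
            tokens.append(("ref", best_dist, best_len))
            i += best_len
        else:
            tokens.append(("lit", data[i]))
            i += 1
    return tokens

def lz1_unparse(tokens) -> bytes:
    out = bytearray()
    for tok in tokens:
        if tok[0] == "lit":
            out.append(tok[1])
        else:
            _, dist, length = tok
            for _ in range(length):
                out.append(out[-dist])           # byte-wise copy handles overlapping matches
    return bytes(out)
```

The adaptivity is implicit: the dictionary is just the most recent window of the data itself, so the model tracks the input with no separate training pass.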

Journal ArticleDOI
TL;DR: The separation ability of this method is shown to be qualitatively superior to its original model with prefixed nonlinearity, and a heuristic way is suggested for selecting the number of densities in a learned parametric mixture.

Journal ArticleDOI
TL;DR: The performance of the constant modulus algorithm, a reference algorithm for adaptive blind equalization, is studied in terms of the excess mean square error (EMSE) due to the nonvanishing step size of the gradient descent algorithm.
Abstract: The performance of the constant modulus algorithm (CMA), a reference algorithm for adaptive blind equalization, is studied in terms of the excess mean square error (EMSE) due to the nonvanishing step size of the gradient descent algorithm. An analytical approximation of the EMSE is provided, emphasizing the effect of the constellation size and resulting in design guidelines.
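A minimal sketch of the CMA-2 update whose steady-state behavior the paper analyzes; the channel, source, step size, and equalizer length are hypothetical. Note that the dispersion never settles to zero: the residual level measured below is precisely the step-size-induced excess error the paper approximates:

```python
import numpy as np

rng = np.random.default_rng(2)
n, taps = 20_000, 3
s = rng.choice([-1.0, 1.0], n)                 # constant-modulus (BPSK) source
h = np.array([1.0, 0.4, 0.2])                  # hypothetical dispersive channel
x = np.convolve(s, h)[:n] + 0.01 * rng.normal(size=n)

w = np.zeros(taps)
w[0] = 1.0                                     # leading-spike initialization
mu, R2 = 1e-3, 1.0                             # step size; dispersion constant E|s|^4 / E|s|^2
y_hist = np.zeros(n)
for k in range(taps, n):
    xk = x[k - taps:k][::-1]                   # regressor, most recent sample first
    y = w @ xk
    y_hist[k] = y
    w += mu * y * (R2 - y * y) * xk            # CMA-2 stochastic-gradient update

dispersion = np.mean((y_hist[-2_000:] ** 2 - R2) ** 2)   # residual dispersion after adaptation
```

Shrinking mu lowers this residual but slows convergence and tracking, which is the trade-off the EMSE approximation is meant to quantify.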

Journal ArticleDOI
TL;DR: Channel estimation using a class of set-membership identification algorithms known as optimal bounding ellipsoid (OBE) algorithms and their extension to tracking time-varying channels are described and U-SHAPE is shown to reduce the hardware complexity significantly.
Abstract: This paper considers the problems of channel estimation and adaptive equalization in the novel framework of set-membership parameter estimation. Channel estimation using a class of set-membership identification algorithms known as optimal bounding ellipsoid (OBE) algorithms and their extension to tracking time-varying channels are described. Simulation results show that the OBE channel estimators outperform the least-mean-square (LMS) algorithm and perform comparably with the RLS and the Kalman filter. The concept of set-membership equalization is introduced along with the notion of a feasible equalizer. Necessary and sufficient conditions are derived for the existence of feasible equalizers in the case of linear equalization for a linear FIR additive noise channel. An adaptive OBE algorithm is shown to provide a set of estimated feasible equalizers. The selective update feature of the OBE algorithms is exploited to devise an updator-shared scheme in a multiple channel environment, referred to as updator-shared parallel adaptive equalization (U-SHAPE). U-SHAPE is shown to reduce the hardware complexity significantly. Procedures to compute the minimum number of updating processors required for a specified quality of service are presented.

Journal ArticleDOI
TL;DR: It is shown that an N-body communication graph can be partitioned into two subgraphs with equal computation load by removing only $O(\sqrt{n\log n})$ and $O(n^{2/3}(\log n)^{1/3})$ nodes, respectively, for two and three dimensions.
Abstract: We present an efficient and provably good partitioning and load balancing algorithm for parallel adaptive N-body simulation. The main ingredient of our method is a novel geometric characterization of a class of communication graphs that can be used to support hierarchical N-body methods such as the fast multipole method (FMM) and the Barnes--Hut method (BH). We show that communication graphs of these methods have a good partition that can be found efficiently sequentially and in parallel. In particular, we show that an N-body communication graph (either for BH or for FMM) can be partitioned into two subgraphs with equal computation load by removing only $O(\sqrt{n\log n})$ and $O(n^{2/3}(\log n)^{1/3})$ nodes, respectively, for two and three dimensions. These bounds on node-partition imply bounds on edge-partition of $O(\sqrt{n}(\log n)^{3/2})$ and $O(n^{2/3}(\log n)^{4/3})$, respectively, for two and three dimensions. To the best of our knowledge, this is the first theoretical result on the quality of partitioning N-body communication graphs for nonuniformly distributed particles. Our results imply that parallel adaptive N-body simulation can be made as scalable as computation on regular grids and as efficient as parallel N-body simulation on uniformly distributed particles.

Journal ArticleDOI
TL;DR: A normalized stochastic gradient adaptive filtering algorithm based on a finite impulse response (FIR) model is discussed, which identifies the system exactly, given only coarsely quantized output measurements.
Abstract: A normalized stochastic gradient adaptive filtering algorithm based on a finite impulse response (FIR) model is discussed. The algorithm identifies the system exactly, given only coarsely quantized output measurements. A description of the quantizer is included in the overall input-output model, and the scheme exploits an approximation of the derivative of the quantizer. Using an associated differential equation, global convergence is established to a zero output error (except for possible colored measurement disturbances) parameter setting or to the boundary of the model set.

Journal ArticleDOI
Marc Bodson1
TL;DR: An algorithm is presented for tuning two input shaping methods designed to prevent the excitation of oscillatory modes in resonant systems, enabling automatic adjustment of the controller parameters.

Book ChapterDOI
01 Jan 1998
TL;DR: The idea is to minimize an empirical estimate, such as the cross-validation estimate, of the generalization error with respect to regularization parameters, by employing a simple iterative gradient descent scheme with virtually no additional programming overhead compared to standard training.
Abstract: In this paper we address the important problem of optimizing regularization parameters in neural network modeling. The suggested optimization scheme is an extended version of the recently presented algorithm [25]. The idea is to minimize an empirical estimate, like the cross-validation estimate, of the generalization error with respect to regularization parameters. This is done by employing a simple iterative gradient descent scheme using virtually no additional programming overhead compared to standard training. Experiments with feed-forward neural network models for time series prediction and classification tasks showed the viability and robustness of the algorithm. Moreover, we provided some simple theoretical examples in order to illustrate the potential and limitations of the proposed regularization framework.

Journal ArticleDOI
TL;DR: It is shown that using a finite Fourier basis expansion, a TV antenna array system can be cast into a time-invariant multi-input, multi-output (MIMO) framework, thereby allowing blind equalization to be accomplished without the use of higher order statistics.
Abstract: In this paper, we study the blind equalization problem of time-varying (TV) systems where the channel variations are too rapid to be tracked with conventional adaptive equalizers. We show that using a finite Fourier basis expansion, a TV antenna array system can be cast into a time-invariant multi-input, multi-output (MIMO) framework. The multiple inputs are related through the bases, thereby allowing blind equalization to be accomplished without the use of higher order statistics. Two deterministic blind equalization approaches are presented: one determines the channels first and then the equalizers, whereas the other estimates the equalizers directly. Related issues such as order determination are addressed briefly. The proposed algorithms are illustrated using simulations.

Journal ArticleDOI
TL;DR: The merit of the nonlinear predictor structure is confirmed by yielding approximately 2 dB higher prediction gain than a linear structure predictor that employs the conventional recursive least squares (RLS) algorithm.
Abstract: New learning algorithms for an adaptive nonlinear forward predictor that is based on a pipelined recurrent neural network (PRNN) are presented. A computationally efficient gradient descent (GD) learning algorithm, together with a novel extended recursive least squares (ERLS) learning algorithm, are proposed. Simulation studies based on three speech signals that have been made public and are available on the World Wide Web (WWW) are used to test the nonlinear predictor. The gradient descent algorithm is shown to yield poor performance in terms of prediction error gain, whereas consistently improved results are achieved with the ERLS algorithm. The merit of the nonlinear predictor structure is confirmed by yielding approximately 2 dB higher prediction gain than a linear structure predictor that employs the conventional recursive least squares (RLS) algorithm.