Journal ArticleDOI

The genetic search approach. A new learning algorithm for adaptive IIR filtering

01 Nov 1996-IEEE Signal Processing Magazine (IEEE)-Vol. 13, Iss: 6, pp 38-46
TL;DR: A new hybrid search methodology is developed in which the genetic-type search is embedded into gradient-descent algorithms (such as the LMS algorithm), which has the characteristics of faster convergence, global search capability, less sensitivity to the choice of parameters, and simple implementation.
Abstract: An "evolutionary" approach called the genetic algorithm (GA) was introduced for multimodal optimization in adaptive IIR filtering. However, the disadvantages of using such an algorithm are slow convergence and high computational complexity. Motivated by the merits and shortcomings of both the gradient-based algorithms and the evolutionary algorithms, we developed a new hybrid search methodology in which the genetic-type search is embedded into gradient-descent algorithms (such as the LMS algorithm). The new algorithm has the characteristics of faster convergence, global search capability, less sensitivity to the choice of parameters, and simple implementation. The basic idea of the new algorithm is that the filter coefficients are evolved in a random manner once the filter is found to be stuck at a local minimum or to have a slow convergence rate. Only the fittest coefficient set survives and is adapted according to the gradient-descent algorithm until the next evolution. Because the random perturbation is subject to the stability constraint, the filter can always approach the minimum in a stable manner and achieve a smaller error performance at a fast rate. The article reviews adaptive IIR filtering and discusses common learning algorithms for adaptive filtering. It then presents a new learning algorithm based on the genetic search approach and shows how it can help overcome the problems associated with gradient-based and GA algorithms.
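The hybrid idea described in the abstract can be sketched in a few lines. The following is a minimal illustration only, not the authors' implementation: it uses a plain FIR/LMS model for simplicity (the paper targets IIR filters, where the random perturbation must additionally respect a stability constraint), and the function names, window size, and evolution schedule are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def tap_matrix(x, n_taps, n, K):
    """Rows are tap-delay vectors [x[m], x[m-1], ...] for the K samples before n."""
    return np.array([x[m - n_taps + 1:m + 1][::-1] for m in range(n - K, n)])

def genetic_lms(x, d, n_taps=4, mu=0.01, period=500, pop=8, sigma=0.2, K=50):
    """Sketch of the hybrid search: LMS gradient descent with a periodic
    genetic-type evolution step. At each evolution, random perturbations of
    the weights are generated and only the fittest coefficient set (judged
    by mean-squared error over a window of recent samples, with the parent
    included) survives to continue the gradient descent."""
    w = np.zeros(n_taps)
    for n in range(n_taps + K, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]       # tap-delay-line input
        e = d[n] - w @ u                        # a priori error
        w = w + mu * e * u                      # LMS update
        if n % period == 0:                     # evolution step
            cands = np.vstack([w, w + sigma * rng.standard_normal((pop, n_taps))])
            U = tap_matrix(x, n_taps, n, K)
            fits = np.mean((d[n - K:n, None] - U @ cands.T) ** 2, axis=0)
            w = cands[int(np.argmin(fits))]     # only the fittest survives
    return w

# usage: identify a known 4-tap system from noisy observations
h = np.array([1.0, 0.5, -0.3, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = genetic_lms(x, d)
```

Evaluating fitness over a window (rather than a single sample) keeps a lucky one-sample error from promoting a poor perturbation; including the parent among the candidates means a converged weight set is not discarded.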
Citations
Journal ArticleDOI
TL;DR: A new method based on ABC algorithm for designing digital IIR filters is described and its performance is compared with that of a conventional optimization algorithm (LSQ-nonlin) and particle swarm optimization (PSO) algorithm.
Abstract: Digital filters can be broadly classified into two groups: recursive (infinite impulse response (IIR)) and non-recursive (finite impulse response (FIR)). An IIR filter can provide a much better performance than the FIR filter having the same number of coefficients. However, IIR filters might have a multi-modal error surface. Therefore, a reliable design method proposed for IIR filters must be based on a global search procedure. The artificial bee colony (ABC) algorithm has been recently introduced for global optimization. The ABC algorithm, which simulates the intelligent foraging behaviour of a honey bee swarm, is a simple, robust, and very flexible algorithm. In this work, a new method based on the ABC algorithm for designing digital IIR filters is described and its performance is compared with that of a conventional optimization algorithm (LSQ-nonlin) and the particle swarm optimization (PSO) algorithm.

551 citations

Journal ArticleDOI
TL;DR: The differential evolution (DE) algorithm is a new heuristic approach with three main advantages: finding the true global minimum of a multimodal search space regardless of the initial parameter values, fast convergence, and the use of only a few control parameters.
Abstract: Any digital signal processing algorithm or processor can be reasonably described as a digital filter. The main advantage of an infinite impulse response (IIR) filter is that it can provide a much better performance than the finite impulse response (FIR) filter having the same number of coefficients. However, IIR filters might have a multimodal error surface. The differential evolution (DE) algorithm is a new heuristic approach with three main advantages: finding the true global minimum of a multimodal search space regardless of the initial parameter values, fast convergence, and the use of only a few control parameters. In this work, the DE algorithm has been applied to the design of digital IIR filters and its performance has been compared to that of a genetic algorithm.
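The DE scheme mentioned above follows a simple mutate/crossover/select loop. Below is a generic DE/rand/1/bin sketch, not the code from the cited work; the Rastrigin test function stands in for a multimodal IIR error surface, and all parameter values are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def differential_evolution(cost, bounds, pop_size=30, F=0.8, CR=0.9, iters=300):
    """DE/rand/1/bin: mutation with one scaled difference vector,
    binomial crossover, then greedy one-to-one selection."""
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    costs = np.array([cost(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True             # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])     # binomial crossover
            tc = cost(trial)
            if tc <= costs[i]:                          # greedy selection
                pop[i], costs[i] = trial, tc
    best = int(np.argmin(costs))
    return pop[best], costs[best]

# usage: 2-D Rastrigin, a standard multimodal benchmark (global minimum 0 at the origin)
def rastrigin(x):
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

best, best_cost = differential_evolution(rastrigin, [(-5.12, 5.12)] * 2)
```

The greedy one-to-one selection is what gives DE its insensitivity to initial parameter values: a population member is only ever replaced by a trial vector that is at least as good.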

208 citations

Journal ArticleDOI
TL;DR: The IIR system identification task is formulated as an optimization problem and a recently introduced cat swarm optimization (CSO) is used to develop a new population based learning rule for the model.
Abstract: Conventional derivative-based learning rules pose stability problems when used in adaptive identification of infinite impulse response (IIR) systems. In addition, the performance of these methods deteriorates substantially when reduced-order adaptive models are used for such identification. In this paper, the IIR system identification task is formulated as an optimization problem, and a recently introduced cat swarm optimization (CSO) is used to develop a new population-based learning rule for the model. Both actual and reduced-order identification of a few benchmark IIR plants is carried out through simulation study. The results demonstrate superior identification performance of the new method compared to that achieved by genetic algorithm (GA) and particle swarm optimization (PSO) based identification.

197 citations

Journal ArticleDOI
TL;DR: Three applications, namely maximum likelihood (ML) joint channel and data estimation, infinite-impulse-response (IIR) filter design, and evaluation of a minimum symbol-error-rate (MSER) decision feedback equalizer (DFE), are used to demonstrate the effectiveness of the ASA.

142 citations


Cites methods from "The genetic search approach. A new ..."

  • ...The two examples used here are well-known benchmark problems, and the GA method has been applied to them [20, 30]....


  • ...The GA has been applied to IIR filter design (e.g. [19, 20, 32]) to overcome this difficulty....


Journal ArticleDOI
TL;DR: A new method based on the ant colony optimisation algorithm with global optimisation ability is proposed for digital IIR filter design, and simulation results show that the proposed approach is accurate and has a fast convergence rate.

135 citations

References
Book ChapterDOI
01 Aug 1976
TL;DR: It is shown that for stationary inputs the LMS adaptive algorithm, based on the method of steepest descent, approaches the theoretical limit of efficiency in terms of misadjustment and speed of adaptation when the eigenvalues of the input correlation matrix are equal or close in value.
Abstract: This paper describes the performance characteristics of the LMS adaptive filter, a digital filter composed of a tapped delay line and adjustable weights, whose impulse response is controlled by an adaptive algorithm. For stationary stochastic inputs, the mean-square error, the difference between the filter output and an externally supplied input called the "desired response," is a quadratic function of the weights, a paraboloid with a single fixed minimum point that can be sought by gradient techniques. The gradient estimation process is shown to introduce noise into the weight vector that is proportional to the speed of adaptation and number of weights. The effect of this noise is expressed in terms of a dimensionless quantity "misadjustment" that is a measure of the deviation from optimal Wiener performance. Analysis of a simple nonstationary case, in which the minimum point of the error surface is moving according to an assumed first-order Markov process, shows that an additional contribution to misadjustment arises from "lag" of the adaptive process in tracking the moving minimum point. This contribution, which is additive, is proportional to the number of weights but inversely proportional to the speed of adaptation. The sum of the misadjustments can be minimized by choosing the speed of adaptation to make equal the two contributions. It is further shown, in Appendix A, that for stationary inputs the LMS adaptive algorithm, based on the method of steepest descent, approaches the theoretical limit of efficiency in terms of misadjustment and speed of adaptation when the eigenvalues of the input correlation matrix are equal or close in value. When the eigenvalues are highly disparate (λ_max/λ_min > 10), an algorithm similar to LMS but based on Newton's method would approach this theoretical limit very closely.
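The tradeoff summarized in the abstract admits a compact statement. The following is a hedged sketch in Widrow's convention for the weight update $w_{k+1} = w_k + 2\mu e_k x_k$ (the constant factors depend on this convention, and $c$ is left unspecified):

$$M_{\text{grad}} \approx \mu\,\operatorname{tr}(R), \qquad M_{\text{lag}} \approx \frac{cN}{\mu}, \qquad M_{\text{total}} = M_{\text{grad}} + M_{\text{lag}},$$

where $R$ is the input correlation matrix, $N$ the number of weights, and $c$ a constant set by the assumed first-order Markov motion of the error-surface minimum. Setting $dM_{\text{total}}/d\mu = 0$ recovers the paper's rule that the optimum speed of adaptation makes the two contributions equal:

$$\mu^\star = \sqrt{\frac{cN}{\operatorname{tr}(R)}}, \qquad M_{\text{grad}}(\mu^\star) = M_{\text{lag}}(\mu^\star) = \sqrt{cN\operatorname{tr}(R)}.$$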

1,423 citations

Proceedings Article
20 Aug 1989
TL;DR: A set of experiments performed on data from a sonar image classification problem are described to illustrate the improvements gained by using a genetic algorithm rather than backpropagation and chronicle the evolution of the performance of the genetic algorithm as it added more and more domain-specific knowledge into it.
Abstract: Multilayered feedforward neural networks possess a number of properties which make them particularly suited to complex pattern classification problems. However, their application to some real-world problems has been hampered by the lack of a training algorithm which reliably finds a nearly globally optimal set of weights in a relatively short time. Genetic algorithms are a class of optimization procedures which are good at exploring a large and complex space in an intelligent way to find values close to the global optimum. Hence, they are well suited to the problem of training feedforward networks. In this paper, we describe a set of experiments performed on data from a sonar image classification problem. These experiments both 1) illustrate the improvements gained by using a genetic algorithm rather than backpropagation and 2) chronicle the evolution of the performance of the genetic algorithm as we added more and more domain-specific knowledge into it.

1,087 citations

Journal ArticleDOI
TL;DR: In this article, an overview of several methods, filter structures, and recursive algorithms used in adaptive infinite-impulse response (IIR) filtering is presented, and several important issues associated with adaptive IIR filtering, including stability monitoring, the SPR condition, and convergence are addressed.
Abstract: An overview is presented of several methods, filter structures, and recursive algorithms used in adaptive infinite-impulse response (IIR) filtering. Both the equation-error and output-error formulations are described, although the focus is on the adaptive algorithms and properties of the output-error configuration. These parameter-update algorithms have the same generic form, and they are based on a prediction-error performance criterion. A direct-form implementation of the adaptive filters is emphasized, but alternative realizations such as the parallel and lattice forms are briefly discussed. Several important issues associated with adaptive IIR filtering, including stability monitoring, the SPR condition, and convergence, are addressed.

644 citations

Journal ArticleDOI
TL;DR: This paper provides a brief overview of how one might use genetic algorithms as a key element in learning systems.
Abstract: Genetic algorithms represent a class of adaptive search techniques that have been intensively studied in recent years. Much of the interest in genetic algorithms is due to the fact that they provide a set of efficient domain-independent search heuristics which are a significant improvement over traditional “weak methods” without the need for incorporating highly domain-specific knowledge. There is now considerable evidence that genetic algorithms are useful for global function optimization and NP-hard problems. Recently, there has been a good deal of interest in using genetic algorithms for machine learning problems. This paper provides a brief overview of how one might use genetic algorithms as a key element in learning systems.

416 citations

Journal ArticleDOI
TL;DR: A modified genetic algorithm is used to solve the parameter identification problem for linear and nonlinear IIR digital filters and the estimation error is shown to converge in probability to zero.
Abstract: A modified genetic algorithm is used to solve the parameter identification problem for linear and nonlinear IIR digital filters. Under suitable hypotheses, the estimation error is shown to converge in probability to zero. The scheme is also applied to feedforward and recurrent neural networks.

236 citations