scispace - formally typeset

Showing papers on "Premature convergence published in 2009"


Journal ArticleDOI
TL;DR: A family of improved variants of the DE/target-to-best/1/bin scheme, which utilizes the concept of the neighborhood of each population member, and is shown to be statistically significantly better than or at least comparable to several existing DE variants as well as a few other significant evolutionary computing techniques over a test suite of 24 benchmark functions.
Abstract: Differential evolution (DE) is well known as a simple and efficient scheme for global optimization over continuous spaces. It has reportedly outperformed a few evolutionary algorithms (EAs) and other search heuristics like the particle swarm optimization (PSO) when tested over both benchmark and real-world problems. DE, however, is not completely free from the problems of slow and/or premature convergence. This paper describes a family of improved variants of the DE/target-to-best/1/bin scheme, which utilizes the concept of the neighborhood of each population member. The idea of small neighborhoods, defined over the index-graph of parameter vectors, draws inspiration from the community of the PSO algorithms. The proposed schemes balance the exploration and exploitation abilities of DE without imposing serious additional burdens in terms of function evaluations. They are shown to be statistically significantly better than or at least comparable to several existing DE variants as well as a few other significant evolutionary computing techniques over a test suite of 24 benchmark functions. The paper also investigates the applications of the new DE variants to two real-life problems concerning parameter estimation for frequency modulated sound waves and spread spectrum radar poly-phase code design.
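The neighborhood idea can be sketched as a local mutation over an index-ring. This is an illustrative fragment only (the function name and default values are assumptions); the paper's full family also blends in a global donor and applies binomial crossover:

```python
import numpy as np

def neighborhood_mutation(pop, fitness, F=0.8, k=2):
    """Illustrative local mutation: each vector moves toward the best member
    of its ring neighborhood (radius k on the index graph), plus a scaled
    difference of two random neighbors. A sketch of the neighborhood idea,
    not the paper's complete DE/target-to-best/1/bin family."""
    n = len(pop)
    donors = np.empty_like(pop)
    for i in range(n):
        ring = [(i + j) % n for j in range(-k, k + 1)]   # index-ring neighborhood
        nbest = ring[int(np.argmin(fitness[ring]))]      # best neighbor
        r1, r2 = np.random.choice(ring, size=2, replace=False)
        donors[i] = pop[i] + F * (pop[nbest] - pop[i]) + F * (pop[r1] - pop[r2])
    return donors
```

Because the neighborhood is defined on the index graph rather than in parameter space, the step costs no extra function evaluations, matching the abstract's claim.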

1,086 citations


Journal ArticleDOI
TL;DR: It is demonstrated that their online optimization with the proposed methodology enhances, in an automated fashion, the online performance of the controllers, even under highly unsteady operating conditions, and it also compensates for uncertainties in the model-building and design process.
Abstract: We present a novel method for handling uncertainty in evolutionary optimization. The method entails quantification and treatment of uncertainty and relies on the rank based selection operator of evolutionary algorithms. The proposed uncertainty handling is implemented in the context of the covariance matrix adaptation evolution strategy (CMA-ES) and verified on test functions. The present method is independent of the uncertainty distribution, prevents premature convergence of the evolution strategy and is well suited for online optimization as it requires only a small number of additional function evaluations. The algorithm is applied in an experimental setup to the online optimization of feedback controllers of thermoacoustic instabilities of gas turbine combustors. In order to mitigate these instabilities, gain-delay or model-based H infin controllers sense the pressure and command secondary fuel injectors. The parameters of these controllers are usually specified via a trial and error procedure. We demonstrate that their online optimization with the proposed methodology enhances, in an automated fashion, the online performance of the controllers, even under highly unsteady operating conditions, and it also compensates for uncertainties in the model-building and design process.

307 citations


Journal ArticleDOI
TL;DR: The practical NCED problem is solved here using PSO with a novel parameter automation strategy in which time varying acceleration coefficients are employed to efficiently control the local and global search, such that premature convergence is avoided and global solutions are achieved.

236 citations


Journal ArticleDOI
TL;DR: An overview of previous and present conditions of the PSO algorithm as well as its opportunities and challenges is presented and all major PSO-based methods are comprehensively surveyed.
Abstract: The Particle Swarm Optimization (PSO) algorithm, as one of the latest algorithms inspired by nature, was introduced in the mid 1990s and since then it has been utilized as an optimization tool in various applications, ranging from biological and medical applications to computer graphics and music composition. In this paper, following a brief introduction to the PSO algorithm, the chronology of its evolution is presented and all major PSO-based methods are comprehensively surveyed. Next, these methods are studied separately and their important factors and parameters are summarized in a comparative table. In addition, a new taxonomy of PSO-based methods is presented. The purpose of this paper is to present an overview of the previous and present conditions of the PSO algorithm as well as its opportunities and challenges. Accordingly, the history, various methods, and taxonomy of this algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated. One group of nature-inspired algorithms is based on competition among agents and survival of the fittest; algorithms related to this group include Evolutionary Programming (EP), Genetic Programming (GP), and Differential Evolution (DE). The Ontogeny group is associated with algorithms in which the adaptation of a particular organism to its environment takes place. Algorithms like PSO and Genetic Algorithms (GA) are of this type and, in fact, they have a cooperative nature compared with the other types (16). The advantages of the above-mentioned categories are their ability to be developed for various applications and their lack of need for prior knowledge of the problem space. Their drawbacks include no guarantee of finding an optimal solution and the high computational cost of evaluating the Fitness Function (F.F.) over intensive iterations. Among the aforementioned paradigms, the PSO algorithm seems an attractive one to study, since it has a simple but efficient nature in addition to being novel.
It can even be a substitute for other basic and important evolutionary algorithms. The most important similarity between these paradigms and the GA is in having the same interactive population. Compared with the GA, this algorithm is faster at finding near-optimal solutions, but it also reaches premature convergence faster than the GA (4).

194 citations


Journal ArticleDOI
TL;DR: The numerical results demonstrate that the proposed SARGA method can find a solution towards the global optimum and compares favourably with other recent methods in terms of solution quality, handling constraints and computation time.

183 citations


Journal ArticleDOI
TL;DR: The results showed that the GA-based neural network model gives superior predictions and the well-trained neural network can be used as a useful tool for runoff forecasting.
Abstract: This paper investigates the effectiveness of a genetic algorithm (GA)-evolved neural network for rainfall-runoff forecasting and its application to predicting the runoff in a catchment located in a semi-arid climate in Morocco. To predict the runoff at a given moment, the input variables are the rainfall and runoff values observed over the previous time period. Our methodology adopts a real-coded GA strategy hybridized with a back-propagation (BP) algorithm. The genetic operators are carefully designed to optimize the neural network, avoiding premature convergence and permutation problems. To evaluate the performance of the genetic-algorithm-based neural network, a BP neural network is also used for comparison purposes. The results showed that the GA-based neural network model gives superior predictions and that the well-trained neural network can be used as a useful tool for runoff forecasting.
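As a hint of what carefully designed real-coded genetic operators can look like for weight vectors, here is one common real-coded crossover, BLX-alpha. The operator choice and the alpha value are illustrative assumptions, not the paper's exact design:

```python
import random

def blx_alpha(parent1, parent2, alpha=0.5):
    """BLX-alpha crossover for real-coded chromosomes (e.g. neural-network
    weight vectors): each child gene is drawn uniformly from an interval
    slightly wider than the parents' range, which helps maintain diversity
    and so works against premature convergence."""
    child = []
    for a, b in zip(parent1, parent2):
        lo, hi = min(a, b), max(a, b)
        span = hi - lo
        child.append(random.uniform(lo - alpha * span, hi + alpha * span))
    return child
```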

179 citations


Journal ArticleDOI
TL;DR: A new dominant selection operator that enhances the action of the dominant individuals is introduced, along with a cyclical mutation operator that periodically varies the mutation probability in accordance with the evolution generation, as found in biological evolutionary processes.

166 citations


Journal ArticleDOI
TL;DR: This paper presents an attempt to improve the search performance of HS by hybridizing it with the Differential Evolution (DE) algorithm; the hybrid has been compared with classical HS, the global-best HS, and a very popular variant of DE over a test suite of six well-known benchmark functions and one interesting practical optimization problem.
Abstract: Harmony Search (HS) is a recently developed stochastic algorithm which imitates the music improvisation process, in which musicians improvise their instruments' pitches searching for a perfect state of harmony. Practical experience, however, suggests that the algorithm suffers from slow and/or premature convergence over multimodal and rough fitness landscapes. This paper presents an attempt to improve the search performance of HS by hybridizing it with the Differential Evolution (DE) algorithm. The performance of the resulting hybrid algorithm has been compared with classical HS, the global-best HS, and a very popular variant of DE over a test suite of six well-known benchmark functions and one interesting practical optimization problem. The comparison is based on the following performance indices: (i) accuracy of the final result, (ii) computational speed, and (iii) frequency of hitting the optima.

160 citations


Journal ArticleDOI
TL;DR: Experiments on a number of synthetic and real-world data sets demonstrate that the proposed approach is more accurate, much faster, and can handle data sets that are hundreds of times larger than the largest data set reported in the MMC literature.
Abstract: Motivated by the success of large margin methods in supervised learning, maximum margin clustering (MMC) is a recent approach that aims at extending large margin methods to unsupervised learning. However, its optimization problem is nonconvex and existing MMC methods all rely on reformulating and relaxing the nonconvex optimization problem as semidefinite programs (SDP). Though SDP is convex and standard solvers are available, they are computationally very expensive and only small data sets can be handled. To make MMC more practical, we avoid SDP relaxations and propose in this paper an efficient approach that performs alternating optimization directly on the original nonconvex problem. A key step to avoid premature convergence in the resultant iterative procedure is to change the loss function from the hinge loss to the Laplacian/square loss so that overconfident predictions are penalized. Experiments on a number of synthetic and real-world data sets demonstrate that the proposed approach is more accurate, much faster (hundreds to tens of thousands of times faster), and can handle data sets that are hundreds of times larger than the largest data set reported in the MMC literature.

155 citations


Journal ArticleDOI
TL;DR: A non-revisiting genetic algorithm is reported: it remembers every position that it has searched before, and its archive in itself constitutes a parameter-free adaptive mutation operator that shows and maintains stable, good performance.
Abstract: A novel genetic algorithm is reported that is non-revisiting: it remembers every position that it has searched before. An archive is used to store all the solutions that have been explored. Different from other memory schemes in the literature, a novel binary space partitioning tree archive design is advocated. Not only is the design an efficient method to check for revisits; it in itself constitutes a novel adaptive mutation operator that has no parameter. To demonstrate the power of the method, the algorithm is evaluated using 19 well-known benchmark functions. The results are as follows. (1) Though it only uses finite-resolution grids, when compared with a canonical genetic algorithm, a generic real-coded genetic algorithm, a canonical genetic algorithm with a simple diversity mechanism, and three particle swarm optimization algorithms, it shows a significant improvement. (2) The new algorithm also shows superior performance compared to the covariance matrix adaptation evolution strategy (CMA-ES), a state-of-the-art method for adaptive mutation. (3) It can work with problems that have large search spaces, with dimensions as high as 40. (4) The CPU overhead of the binary space partitioning tree design is insignificant for applications with expensive or time-consuming fitness evaluations, and for such applications the memory usage due to the archive is acceptable. (5) Though the adaptive mutation is parameter-less, it shows and maintains stable, good performance, whereas the performance of the other algorithms compared is highly dependent on suitable parameter settings.
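The revisit check can be illustrated with a far simpler structure than the paper's binary space partitioning tree: a hash set over a finite-resolution grid. The resolution value is an assumption, and this sketch omits the archive's second role as an adaptive mutation operator:

```python
def make_revisit_check(resolution=0.01):
    """Returns a closure that remembers every grid cell queried so far,
    illustrating the 'remember every searched position' idea. The paper's
    actual BSP-tree archive is more memory-efficient and also drives a
    parameter-free adaptive mutation, which this sketch does not model."""
    archive = set()
    def seen_before(x):
        cell = tuple(int(round(v / resolution)) for v in x)
        if cell in archive:
            return True
        archive.add(cell)
        return False
    return seen_before
```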

154 citations


Journal ArticleDOI
TL;DR: A neural classifier based on improved particle swarm optimization (IPSO) is proposed to classify an electroencephalogram (EEG) of mental tasks for left-hand movement imagination, right-hand movement imagination, and word generation.

Journal ArticleDOI
TL;DR: From the reliability point of view, it seems that the real encoded differential algorithm, improved by the technology described in this paper, is a universal and reliable method capable of solving all proposed test problems.
Abstract: This paper presents several types of evolutionary algorithms (EAs) used for global optimization on real domains. The interest is focused on multimodal problems, where the difficulty of premature convergence usually occurs. First, the standard genetic algorithm (SGA) using binary encoding of real values and its unsatisfactory behavior on multimodal problems is briefly reviewed, together with some improvements for fighting premature convergence. Two types of real-encoded methods based on differential operators are examined in detail: differential evolution (DE), a very modern and effective method first published by R. Storn and K. Price, and the simplified real-coded differential genetic algorithm (SADE) proposed by the authors. In addition, an improvement of the SADE method, called the CERAF technology, enabling the population of solutions to escape from local extremes, is examined. All methods are tested on an identical set of objective functions and a systematic comparison based on a reliable methodology is presented. It is confirmed that real-coded methods generally exhibit better behavior on real domains than the binary algorithms, even when extended by several improvements. Furthermore, the positive influence of the differential operators, due to their capacity for self-adaptation, is demonstrated. From the reliability point of view, the real-encoded differential algorithm, improved by the technology described in this paper, appears to be a universal and reliable method capable of solving all the proposed test problems.

Book ChapterDOI
01 Jan 2009
TL;DR: This chapter aims to address some of the fundamental issues that are often encountered in optimization problems, making them difficult to solve, and to help both practitioners and fellow researchers to create more efficient optimization applications and novel algorithms.
Abstract: This chapter aims to address some of the fundamental issues that are often encountered in optimization problems, making them difficult to solve. These issues include premature convergence, ruggedness, causality, deceptiveness, neutrality, epistasis, robustness, overfitting, oversimplification, multi-objectivity, dynamic fitness, the No Free Lunch Theorem, etc. We explain why these issues make optimization problems hard to solve and present some possible countermeasures for dealing with them. By doing this, we hope to help both practitioners and fellow researchers to create more efficient optimization applications and novel algorithms.

Journal ArticleDOI
TL;DR: By using genetic operators, the premature convergence of the particles is avoided and the search region of the particles is enlarged; a GA-inspired proposal distribution is proposed and the corresponding importance weight is derived to approximate the given target distribution.
Abstract: Particle filters perform nonlinear estimation and have received much attention from many engineering fields over the past decade. Unfortunately, there are some cases in which most particles concentrate prematurely at a wrong point, thereby losing diversity and causing the estimation to fail. In this paper, genetic algorithms (GAs) are incorporated into a particle filter to overcome this drawback. By using genetic operators, the premature convergence of the particles is avoided and the search region of the particles is enlarged. A GA-inspired proposal distribution is proposed and the corresponding importance weight is derived to approximate the given target distribution. Finally, a computer simulation is performed to show the effectiveness of the proposed method.
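For a one-dimensional state, the GA-inspired step might be sketched like this. The crossover form, mutation rate, and noise scale are illustrative assumptions; the paper also derives a matching importance weight, which is omitted here:

```python
import random

def genetic_resample(particles, weights, p_mut=0.1, sigma=0.05):
    """Weight-proportional parent selection, arithmetic crossover, and
    Gaussian mutation used in place of plain resampling, so the particle
    cloud keeps its diversity instead of collapsing onto one point."""
    children = []
    for _ in particles:
        a, = random.choices(particles, weights=weights)
        b, = random.choices(particles, weights=weights)
        u = random.random()
        child = u * a + (1.0 - u) * b            # arithmetic crossover
        if random.random() < p_mut:
            child += random.gauss(0.0, sigma)    # Gaussian mutation
        children.append(child)
    return children
```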

Proceedings ArticleDOI
04 Dec 2009
TL;DR: The proposed RegPSO avoids the stagnation problem by automatically triggering swarm regrouping when premature convergence is detected and reduces each popular benchmark tested to its approximate global minimum.
Abstract: Particle swarm optimization (PSO) is known to suffer from stagnation once particles have prematurely converged to any particular region of the search space. The proposed regrouping PSO (RegPSO) avoids the stagnation problem by automatically triggering swarm regrouping when premature convergence is detected. This mechanism liberates particles from sub-optimal solutions and enables continued progress toward the true global minimum. Particles are regrouped within a range on each dimension proportional to the degree of uncertainty implied by the maximum deviation of any particle from the globally best position. This is a computationally simple yet effective addition to the computationally simple PSO algorithm. Experimental results show that the proposed RegPSO successfully reduces each popular benchmark tested to its approximate global minimum.
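The regrouping trigger described above can be sketched as follows; the convergence threshold `eps` and the regrouping-range `factor` are illustrative assumptions, not the paper's constants:

```python
import numpy as np

def regroup_if_converged(pop, gbest, lo, hi, eps=1e-4, factor=6.0, rng=None):
    """If every particle's deviation from the global best is tiny relative
    to the search range (premature convergence detected), scatter the swarm
    uniformly in a box around gbest whose size is proportional to the
    observed maximum deviation, as in the RegPSO idea."""
    rng = np.random.default_rng(rng)
    dev = np.abs(pop - gbest).max(axis=0)        # per-dimension max deviation
    if np.all(dev < eps * (hi - lo)):            # convergence detected
        span = np.minimum(hi - lo, factor * dev)
        pop = gbest + (rng.random(pop.shape) - 0.5) * span
        pop = np.clip(pop, lo, hi)
    return pop
```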

Journal ArticleDOI
TL;DR: The empirical results indicate that the SVR model with CGA (SVRCGA) results in better forecasting performance than the other methods, namely SVMG (SVM model with GAs), regression model, and ANN model.

Proceedings ArticleDOI
12 Aug 2009
TL;DR: In this paper, several selection strategies, such as the disruptive, tournament, and rank selection strategies, are compared and analyzed through simulation, and the results show that the modified algorithm outperforms the basic ABC algorithm.
Abstract: The artificial bee colony (ABC) algorithm is a new global stochastic optimization algorithm based on the particular intelligent behavior of honeybee swarms, in which there exist many issues to be improved and solved. When onlooker bees exploit food sources in the ABC algorithm, they choose a food source by proportional selection, which can result in premature convergence of the evolutionary process. In this paper, in order to improve population diversity and avoid premature convergence, several selection strategies, such as the disruptive, tournament, and rank selection strategies, are compared and analyzed through simulation, and the results show that the modified algorithm outperforms the basic ABC algorithm.

Proceedings ArticleDOI
08 Jul 2009
TL;DR: This paper introduces and compares two conceptually simple, yet efficient methods to improve exploration and avoid premature convergence when evolving both the topology and the parameters of neural networks.
Abstract: Encouraging exploration, typically by preserving diversity within the population, is one of the most common methods to improve the behavior of evolutionary algorithms with deceptive fitness functions. Most published approaches to stimulating exploration rely on a distance between genotypes or phenotypes; however, such distances are difficult to compute when evolving neural networks due to (1) the algorithmic complexity of graph similarity measures, (2) the competing conventions problem, and (3) the complexity of most neural-network encodings. In this paper, we introduce and compare two conceptually simple yet efficient methods to improve exploration and avoid premature convergence when evolving both the topology and the parameters of neural networks. The two proposed methods, respectively called behavioral novelty and behavioral diversity, are built on multiobjective evolutionary algorithms and on a user-defined distance between behaviors. They can be employed with any genotype. We benchmarked them on the evolution of a neural network to compute a Boolean function with a deceptive fitness. The results obtained with the two proposed methods are statistically similar to those of NEAT and substantially better than those of the control experiment and of a phenotype-based diversity mechanism.

Proceedings ArticleDOI
21 May 2009
TL;DR: An improved particle swarm optimization algorithm in which an exponent-decreasing inertia weight improves the convergence speed and a stochastic mutation improves the diversity of the swarm, in order to overcome the disadvantages of premature convergence and later-period oscillation.
Abstract: This paper presents an improved particle swarm optimization algorithm in which an exponent-decreasing inertia weight improves the convergence speed and a stochastic mutation improves the diversity of the swarm, in order to overcome the disadvantages of premature convergence and later-period oscillation. Tests on five representative benchmark functions show that the improved algorithm is better in global search and overall performance than both a particle swarm optimization with a linearly decreasing inertia weight and a particle swarm optimization with an exponent-decreasing inertia weight.
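An exponent-decreasing inertia weight of the kind described can take, for example, the following form. The functional form, endpoint values, and decay constant here are assumptions, not necessarily the paper's exact formula:

```python
import math

def exp_decreasing_weight(t, t_max, w_start=0.9, w_end=0.4, decay=5.0):
    """Inertia weight that decays exponentially from w_start toward w_end
    over t_max iterations: large early on (global exploration), small late
    (local refinement)."""
    return w_end + (w_start - w_end) * math.exp(-decay * t / t_max)
```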

Journal ArticleDOI
TL;DR: Simulation results for nonstationary sinusoidal signals occurring in power networks with varying amplitudes, phases, and harmonic contents corrupted with noise having a low SNR reveal significant improvements in noise rejection and speed of convergence and accuracy.
Abstract: This paper presents a hybrid approach for tracking the amplitude, phase, frequency, and harmonic content of power quality disturbance signals occurring in power networks using an unscented Kalman filter (UKF) and swarm intelligence. The UKF is a novel extension of the well-known extended Kalman filter (EKF) using an unscented transformation to overcome the difficulties of linearization and derivative calculations of signals with a low signal-to-noise ratio (SNR). Further, the model and measurement error covariance matrices Q and R, along with the UKF parameters, are selected using a modified particle swarm optimization (PSO) algorithm for accurate tracking of signal parameters. To circumvent the problem of premature convergence and local minima in conventional PSO, a dynamically varying inertia weight based on the variance of the population fitness is used. This results in a better local and global searching ability of the particles, which improves the convergence of the velocity, and in a better accuracy of the UKF parameters. Various simulation results for nonstationary sinusoidal signals occurring in power networks with varying amplitudes, phases, and harmonic contents corrupted with noise having a low SNR reveal significant improvements in noise rejection and speed of convergence and accuracy.

Journal ArticleDOI
TL;DR: A modified PSO algorithm is proposed which keeps the best particle of each iteration evolving continuously and assigns the worst particle a new value to increase its disturbance.
Abstract: Particle Swarm Optimization (PSO) is a new optimization algorithm that is widely applied in many fields. However, the original PSO is prone to local optimization with premature convergence. Using the idea of the simulated annealing algorithm, we propose a modified algorithm that keeps the best particle of each iteration evolving continuously and assigns the worst particle a new value to increase its disturbance. Testing on three classic benchmark functions shows that the modified PSO algorithm has better convergence and global search performance than the original PSO.

01 Jan 2009
TL;DR: A modified particle swarm optimization where weightage would be given, not only to the best position, but also to the worst position during an iteration cycle, to free PSO from sub-optimal solutions and enable it to progress towards the global optimum.
Abstract: Particle Swarm Optimization (PSO) was introduced by Kennedy and Eberhart (1). PSO is a very popular optimization technique, but it suffers from a major drawback: possible premature convergence, i.e. convergence to a local optimum rather than the global optimum. This paper attempts to improve the reliability of PSO by addressing this drawback. The main cause of premature convergence is that particles are highly influenced during an iteration cycle by the global best and personal best positions of the previous iteration cycle. Another cause can be a similar flow of information between particles during optimization, which can result in a swarm of similar particles (i.e. a loss of diversity). In the literature, there are some attempts to improve the reliability of PSO; for example, Liu et al. (2) proposed a multi-start technique. In the present paper, a modified particle swarm optimization is proposed: during an iteration cycle, when deciding the new positions of particles, weightage is given not only to the best position but also to the worst position. This mechanism frees PSO from sub-optimal solutions and enables it to progress towards the global optimum. Experiments on the benchmark functions are in progress and results will be reported in the paper.
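The idea of also weighting the worst position could be sketched as a velocity update with an extra repulsion term. The coefficient `c3` and the exact form of the repulsion are illustrative assumptions, since the abstract only describes the idea:

```python
import random

def velocity_update(v, x, pbest, gbest, gworst,
                    w=0.7, c1=1.5, c2=1.5, c3=0.5):
    """Standard PSO attraction toward the personal and global best, plus a
    term that pushes the particle away from the global worst position, to
    keep the swarm from collapsing onto one region."""
    new_v = []
    for vi, xi, pi, gi, wi in zip(v, x, pbest, gbest, gworst):
        r1, r2, r3 = random.random(), random.random(), random.random()
        new_v.append(w * vi
                     + c1 * r1 * (pi - xi)
                     + c2 * r2 * (gi - xi)
                     - c3 * r3 * (wi - xi))      # repelled from the worst
    return new_v
```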

Book ChapterDOI
01 Jan 2009
TL;DR: It is demonstrated that the initial population plays an important role in the convergence of genetic algorithms independently from the algorithm and the problem, and using a well-distributed sampling increases the robustness and avoids premature convergence.
Abstract: This paper aims to demonstrate that the initial population plays an important role in the convergence of genetic algorithms independently from the algorithm and the problem. Using a well-distributed sampling increases the robustness and avoids premature convergence. The observation is proved using MOGA-II and NSGA-II with different sampling methods. This result is particularly important whenever the optimization involves time-consuming functions.
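One well-distributed sampler that could seed such a population is Latin hypercube sampling; this is just an illustrative choice, since the chapter compares several sampling methods using MOGA-II and NSGA-II:

```python
import numpy as np

def latin_hypercube(n, d, seed=None):
    """Latin hypercube sample of n points in [0, 1]^d: each dimension is
    split into n equal strata and every stratum receives exactly one point,
    giving more even coverage than plain uniform random sampling."""
    rng = np.random.default_rng(seed)
    out = np.empty((n, d))
    for j in range(d):
        strata = rng.permutation(n)              # one point per stratum
        out[:, j] = (strata + rng.random(n)) / n
    return out
```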

Journal ArticleDOI
TL;DR: A spatial correlation hybrid genetic algorithm based on the characteristics of fractals and the partitioned iterated function system (PIFS), which adopts a dyadic mutation operator in place of the traditional one to avoid premature convergence.

Book ChapterDOI
01 Jan 2009
TL;DR: An Opposition-based PSO (OVCPSO) is presented which uses velocity clamping to accelerate its convergence speed and to avoid premature convergence of the algorithm.
Abstract: This paper presents an Opposition-based PSO (OVCPSO) which uses velocity clamping to accelerate its convergence speed and to avoid premature convergence. Probabilistic opposition-based learning for particles is used in the proposed method, with velocity clamping to control the speed and direction of particles. Experiments have been performed on various well-known benchmark optimization problems, and the results show that OVCPSO can deal with difficult unimodal and multimodal optimization problems efficiently and effectively. The number of function calls (NFC) is significantly lower than for other PSO variants, i.e. basic PSO with inertia weight, PSO with inertia weight and velocity clamping (VCPSO), and opposition-based PSO with Cauchy mutation (OPSOCM).
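The two ingredients, velocity clamping and opposition-based learning, can be sketched in isolation. The paper applies opposition probabilistically during the run; these helpers only illustrate the basic operations:

```python
import numpy as np

def clamp_velocity(v, v_max):
    """Velocity clamping: limit each velocity component to [-v_max, v_max]
    so particles cannot overshoot and diverge."""
    return np.clip(v, -v_max, v_max)

def opposite_point(x, lo, hi):
    """Opposition-based learning: the opposite of x within the box [lo, hi].
    Evaluating both x and its opposite can speed up convergence."""
    return lo + hi - x
```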

Journal ArticleDOI
TL;DR: A detailed comparative analysis carried out from the viewpoint of the performance and the design methodology, is provided for the fuzzy cascade controller and the conventional PD cascade controller whose design relied on the use of the serial genetic algorithms.

Journal ArticleDOI
TL;DR: Based on the analysis of visual modeling, the reason for premature convergence and diversity loss in PSO is explained, and a new modified algorithm is proposed to ensure the rational flight of every particle's dimensional component.
Abstract: In current research on particle swarm optimization (PSO), a particle is treated as a whole individual; the information in each particle's dimensional vector is not considered. A visual modeling method describing the behavior of a particle's dimensional vector is presented in this paper. Based on this visual modeling analysis, the reason for premature convergence and diversity loss in PSO is explained, and a new modified algorithm is proposed to ensure the rational flight of every particle's dimensional component. Meanwhile, two parameters, particle-distribution-degree and particle-dimension-distance, are introduced into the proposed algorithm in order to avoid premature convergence. Simulation results of the new PSO algorithm show that it has a better ability to find the global optimum while keeping the rapid convergence of the standard PSO.

Journal ArticleDOI
TL;DR: An alternate two-phase particle swarm optimization algorithm called ATPPSO is proposed to solve the flow shop scheduling problem with the objective of minimizing makespan; both its solution quality and its convergence speed surpass those of the other two algorithms.
Abstract: An alternate two-phase particle swarm optimization algorithm called ATPPSO is proposed to solve the flow shop scheduling problem with the objective of minimizing makespan. It includes two processes, the attractive process and the repulsive process, which execute alternately. To avoid the shortcoming of premature convergence, a two-point reversal crossover operator is defined, and in the repulsive process each particle is made to fly towards promising areas, which introduces new information to guide the swarm's search. To improve the algorithm's speed, a fast matrix-based makespan computation method is designed. Finally, the proposed algorithm is tested on benchmarks of different scales and compared with recently proposed efficient algorithms. The results show that both the solution quality and the convergence speed of ATPPSO surpass those of the other two algorithms. It can be used to solve large-scale flow shop scheduling problems effectively.

Proceedings ArticleDOI
18 May 2009
TL;DR: In comparisons with several PSO variants, ALPSO shows outstanding performance on most test functions, especially in its fast convergence.
Abstract: Traditional particle swarm optimization (PSO) suffers from the premature convergence problem, which usually results in PSO being trapped in local optima. This paper presents an adaptive learning PSO (ALPSO) based on a variant PSO learning strategy. In ALPSO, the learning mechanism of each particle is separated into three parts: its own historical best position, the closest neighbor, and the global best one. By using this individual-level adaptive technique, a particle can well guide its balance of exploration and exploitation. A set of 21 test functions, including un-rotated, rotated, and composition functions, was used to test the performance of ALPSO. In comparisons with several PSO variants, ALPSO shows outstanding performance on most test functions, especially in its fast convergence.

Proceedings ArticleDOI
18 May 2009
TL;DR: A new SFLA for continuous space optimization is presented, in which the population is divided based on the principle of uniform performance of memeplexes, and all the frogs participate in the evolvement by keeping the inertia learning behaviors and learning from better ones selected randomly.
Abstract: The shuffled frog leaping algorithm (SFLA) is mainly used for discrete space optimization. In SFLA, the population is divided into several memeplexes; several frogs of each memeplex are selected to compose a submemeplex for local evolvement, following the mechanism that the worst frog learns from the best frog in the submemeplex or the best frog in the population, and the memeplexes are shuffled for global evolvement after some generations. Derived from the discrete SFLA, a new SFLA for continuous space optimization is presented, in which the population is divided based on the principle of uniform performance of memeplexes, and all the frogs participate in the evolvement by keeping their inertia learning behaviors and learning from better frogs selected randomly. Simulation results of searching for the minima of several multi-peak continuous functions show that the improved SFLA can effectively overcome the problems of premature convergence and slow convergence speed, and achieve high optimization precision.
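The basic leap rule that the improved SFLA generalizes can be sketched in continuous form. The maximum step size is an illustrative assumption; the "learning from better ones selected randomly" modification simply replaces the best frog with a randomly chosen better one:

```python
import random

def leap(frog, better, step_max=2.0):
    """Basic SFLA move: a frog jumps toward a better frog by a random
    fraction of the difference, with each step component clipped to
    +/- step_max to keep moves bounded."""
    new_pos = []
    for f, b in zip(frog, better):
        step = random.random() * (b - f)
        step = max(-step_max, min(step_max, step))
        new_pos.append(f + step)
    return new_pos
```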