Journal ArticleDOI

A Distance-Based Locally Informed Particle Swarm Model for Multimodal Optimization

TL;DR: A distance-based locally informed particle swarm (LIPS) optimizer that eliminates the need to specify any niching parameter and enhances the fine search ability of PSO.
Abstract: Multimodal optimization amounts to finding multiple global and local optima of a function (as opposed to a single solution), so that the user gains better knowledge of the different optimal solutions in the search space and, when needed, can switch the current solution to a more suitable one while still maintaining optimal system performance. Niching particle swarm optimizers (PSOs) have been widely used by the evolutionary computation community for solving real-parameter multimodal optimization problems. However, most existing PSO-based niching algorithms are difficult to use in practice because of their poor local search ability and their requirement of prior knowledge to specify certain niching parameters. This paper addresses these issues by proposing a distance-based locally informed particle swarm (LIPS) optimizer, which eliminates the need to specify any niching parameter and enhances the fine search ability of PSO. Instead of using the global best particle, LIPS uses several local bests to guide the search of each particle. LIPS can operate as a stable niching algorithm by using the information provided by its neighborhoods, which are estimated in terms of Euclidean distance. The algorithm is compared with a number of state-of-the-art evolutionary multimodal optimizers on 30 commonly used multimodal benchmark functions. The experimental results suggest that the proposed technique provides statistically superior and more consistent performance than the existing niching algorithms on the test functions, without incurring any severe computational burden.
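
The mechanism described in the abstract can be sketched in a few lines of Python: each particle is steered by the personal bests of its Euclidean-nearest neighbors rather than by a single global best. This is a minimal, hedged sketch only; the FIPS-style constriction form, the function name lips_step, and the parameter values (nsize, chi, phi_max) are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Hedged sketch of a LIPS-style update: each particle is guided by the
# personal bests (pbests) of its `nsize` Euclidean-nearest neighbors
# instead of a single global best. Parameter values are illustrative.
def lips_step(pos, vel, pbest, nsize=3, chi=0.7298, phi_max=4.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n, _ = pos.shape
    new_vel = np.empty_like(vel)
    for i in range(n):
        # Neighborhoods are estimated in terms of Euclidean distance
        # among the personal bests, as the abstract states.
        dists = np.linalg.norm(pbest - pbest[i], axis=1)
        neighbors = np.argsort(dists)[1:nsize + 1]   # skip the particle itself
        # FIPS-style pull: every neighbor pbest contributes a random force.
        phi = rng.uniform(0, phi_max / nsize, size=(nsize, 1))
        pull = np.sum(phi * (pbest[neighbors] - pos[i]), axis=0)
        new_vel[i] = chi * (vel[i] + pull)
    return pos + new_vel, new_vel
```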
Citations
Journal ArticleDOI
TL;DR: It is high time to provide a critical review of the latest literature published on DE and to point out some important future avenues of research.
Abstract: Differential Evolution (DE) is arguably one of the most powerful and versatile evolutionary optimizers for continuous parameter spaces in recent times. Almost 5 years have passed since the first comprehensive survey article was published on DE by Das and Suganthan in 2011. Several developments have been reported on various aspects of the algorithm in these 5 years, and the research on and with DE has now reached an impressive state. Considering the huge progress of research with DE and its applications in diverse domains of science and technology, we find that it is high time to provide a critical review of the latest literature and to point out some important future avenues of research. The purpose of this paper is to summarize and organize the information on these current developments in DE. Beginning with a comprehensive foundation of the basic DE family of algorithms, we proceed through the recent proposals on parameter adaptation of DE, DE-based single-objective global optimizers, DE adopted for various optimization scenarios including constrained, large-scale, multi-objective, multi-modal and dynamic optimization, hybridization of DE with other optimizers, and also the multi-faceted literature on applications of DE. The paper also presents a dozen interesting open problems and future research issues on DE.

1,265 citations


Cites methods from "A Distance-Based Locally Informed Particle Swarm Model for Multimodal Optimization"

  • ...Index-based topologies [37] have been commonly used while Euclidean distance based topologies [85,153,154,176] have not been frequently used....


Journal ArticleDOI
TL;DR: Empirical results demonstrate that the proposed CSO exhibits a better overall performance than five state-of-the-art metaheuristic algorithms on a set of widely used large scale optimization problems and is able to effectively solve problems of dimensionality up to 5000.
Abstract: In this paper, a novel competitive swarm optimizer (CSO) for large scale optimization is proposed. The algorithm is fundamentally inspired by the particle swarm optimization but is conceptually very different. In the proposed CSO, neither the personal best position of each particle nor the global best position (or neighborhood best positions) is involved in updating the particles. Instead, a pairwise competition mechanism is introduced, where the particle that loses the competition will update its position by learning from the winner. To understand the search behavior of the proposed CSO, a theoretical proof of convergence is provided, together with empirical analysis of its exploration and exploitation abilities showing that the proposed CSO achieves a good balance between exploration and exploitation. Despite its algorithmic simplicity, our empirical results demonstrate that the proposed CSO exhibits a better overall performance than five state-of-the-art metaheuristic algorithms on a set of widely used large scale optimization problems and is able to effectively solve problems of dimensionality up to 5000.

644 citations


Cites background from "A Distance-Based Locally Informed Particle Swarm Model for Multimodal Optimization"

  • ...Recently, a distance-based locally informed particle swarm optimizer was proposed specifically to tackle multimodal problems in [25]....

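The pairwise competition summarized in the abstract above can be sketched as follows: the swarm is randomly paired, the fitter particle of each pair passes through unchanged, and only the loser is updated, by learning from the winner and, mildly, from the swarm mean. This is a hedged sketch; the function name cso_step and the social factor phi are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of CSO's pairwise competition (parameter names and the
# social factor `phi` are illustrative; see the cited paper for details).
def cso_step(pos, vel, fitness, f, phi=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n, d = pos.shape                      # n is assumed even
    mean_pos = pos.mean(axis=0)           # the swarm's mean position
    perm = rng.permutation(n)
    for a, b in perm.reshape(-1, 2):
        # The fitter particle wins and passes through unchanged;
        # only the loser is updated, by learning from the winner.
        w, l = (a, b) if fitness[a] < fitness[b] else (b, a)  # minimization
        r1, r2, r3 = rng.random((3, d))
        vel[l] = r1 * vel[l] + r2 * (pos[w] - pos[l]) + phi * r3 * (mean_pos - pos[l])
        pos[l] = pos[l] + vel[l]
        fitness[l] = f(pos[l])
    return pos, vel, fitness
```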

Journal ArticleDOI
TL;DR: This paper introduces social learning mechanisms into particle swarm optimization (PSO) to develop a social learning PSO (SL-PSO), which performs well on low-dimensional problems and is promising for solving large-scale problems as well.

566 citations


Cites background from "A Distance-Based Locally Informed Particle Swarm Model for Multimodal Optimization"

  • ...More recently, a distance-based locally informed particle swarm optimizer has been proposed to tackle multi-modal problems [63]....

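The social learning mechanism named in the TL;DR above can be interpreted roughly as follows; this is a hedged reading, not the authors' exact operator: the swarm is sorted by fitness, and every particle except the best imitates, dimension by dimension, a randomly chosen better-ranked particle. The names sl_pso_step and epsilon are illustrative.

```python
import numpy as np

# Hedged interpretation of a social-learning step (not the authors'
# exact operator): particles are sorted worst-to-best, and each one
# except the best imitates a better-ranked particle per dimension,
# with a mild pull toward the swarm mean. `epsilon` is illustrative.
def sl_pso_step(pos, vel, fitness, epsilon=0.01, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n, d = pos.shape
    order = np.argsort(fitness)[::-1]          # worst ... best (minimization)
    pos, vel, fitness = pos[order], vel[order], fitness[order]
    mean_pos = pos.mean(axis=0)
    for i in range(n - 1):                     # the best particle is left alone
        demo = rng.integers(i + 1, n, size=d)  # a better-ranked demonstrator per dimension
        r1, r2, r3 = rng.random((3, d))
        vel[i] = (r1 * vel[i]
                  + r2 * (pos[demo, np.arange(d)] - pos[i])
                  + epsilon * r3 * (mean_pos - pos[i]))
        pos[i] = pos[i] + vel[i]
    # Fitness of the moved particles must be re-evaluated by the caller.
    return pos, vel, fitness
```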

Journal ArticleDOI
TL;DR: The main purpose of this paper is to outline the state of the art and to identify open challenges concerning the most relevant areas within bio-inspired optimization, thereby highlighting the need for reaching a consensus and joining forces towards achieving valuable insights into the understanding of this family of optimization techniques.
Abstract: In recent years, the research community has witnessed an explosion of literature dealing with the mimicking of behavioral patterns and social phenomena observed in nature towards efficiently solving complex computational tasks. This trend has been especially dramatic in what relates to optimization problems, mainly due to the unprecedented complexity of problem instances, arising from a diverse spectrum of domains such as transportation, logistics, energy, climate, social networks, health and industry 4.0, among many others. Notwithstanding this upsurge of activity, research in this vibrant topic should be steered towards certain areas that, despite their eventual value and impact on the field of bio-inspired computation, still remain insufficiently explored to date. The main purpose of this paper is to outline the state of the art and to identify open challenges concerning the most relevant areas within bio-inspired optimization. An analysis and discussion are also carried out over the general trajectory followed in recent years by the community working in this field, thereby highlighting the need for reaching a consensus and joining forces towards achieving valuable insights into the understanding of this family of optimization techniques.

401 citations


Cites background from "A Distance-Based Locally Informed Particle Swarm Model for Multimodal Optimization"

  • ...tive proposals to multimodal problems using PSO algorithms have gravitated on the use of multi-swarms [176], the induction of Euclidean-based niching [168] or a ring topology in the neighborhoods within the swarm [177]....


Journal ArticleDOI
TL;DR: A statistical analysis of the performance of the different algorithms on the CEC2005 problems indicates that SRPSO is better than the other algorithms at a 95% confidence level.

254 citations

References
Proceedings ArticleDOI
06 Aug 2002
TL;DR: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced; the evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed.
Abstract: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described.

35,104 citations

Proceedings ArticleDOI
04 Oct 1995
TL;DR: The optimization of nonlinear functions using particle swarm methodology is described and implementations of two paradigms are discussed and compared, including a recently developed locally oriented paradigm.
Abstract: The optimization of nonlinear functions using particle swarm methodology is described. Implementations of two paradigms are discussed and compared, including a recently developed locally oriented paradigm. Benchmark testing of both paradigms is described, and applications, including neural network training and robot task learning, are proposed. Relationships between particle swarm optimization and both artificial life and evolutionary computation are reviewed.

14,477 citations
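
For reference, the canonical velocity-position update behind both PSO papers above can be written in a few lines. The inertia-weight form shown here is the now-standard refinement and does not appear in the 1995 originals; the constants w, c1, c2 are conventional later defaults. Passing the swarm-wide best as `best` gives the global paradigm; passing per-particle neighborhood bests gives the locally oriented one.

```python
import numpy as np

# Canonical PSO update in its standard inertia-weight form. `best` is
# either a single global-best vector (gbest PSO, broadcast over the
# swarm) or an array of per-particle neighborhood bests (lbest PSO).
def pso_step(pos, vel, pbest, best, w=0.729, c1=1.49445, c2=1.49445, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    r1, r2 = rng.random((2,) + pos.shape)
    # Inertia + cognitive pull (own pbest) + social pull (global or local best).
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (best - pos)
    return pos + vel, vel
```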

Book
01 Jan 1992
TL;DR: A book-length treatment of genetic algorithms and evolution programs, covering how and why GAs work, constraint handling, evolution strategies, and evolution programs for various discrete problems such as transportation and the traveling salesman problem.
Abstract: Contents: 1. GAs: What Are They? 2. GAs: How Do They Work? 3. GAs: Why Do They Work? 4. GAs: Selected Topics. 5. Binary or Float? 6. Fine Local Tuning. 7. Handling Constraints. 8. Evolution Strategies and Other Methods. 9. The Transportation Problem. 10. The Traveling Salesman Problem. 11. Evolution Programs for Various Discrete Problems. 12. Machine Learning. 13. Evolutionary Programming and Genetic Programming. 14. A Hierarchy of Evolution Programs. 15. Evolution Programs and Heuristics. 16. Conclusions. Appendices A-D. References.

12,212 citations

01 Jan 2015
TL;DR: In the second edition, the authors have reorganized the material to focus on problems, how to represent them, and then how to choose and design algorithms for different representations.
Abstract: The overall structure of this new edition is three-tier: Part I presents the basics, Part II is concerned with methodological issues, and Part III discusses advanced topics. In the second edition the authors have reorganized the material to focus on problems, how to represent them, and then how to choose and design algorithms for different representations. They also added a chapter on problems, reflecting the overall book focus on problem-solvers, a chapter on parameter tuning, which they combined with the parameter control and "how-to" chapters into a methodological part, and finally a chapter on evolutionary robotics with an outlook on possible exciting developments in this field. The book is suitable for undergraduate and graduate courses in artificial intelligence and computational intelligence, and for self-study by practitioners and researchers engaged with all aspects of bioinspired design and optimization.

4,461 citations

Journal ArticleDOI
TL;DR: A detailed review of the basic concepts of DE and a survey of its major variants, its application to multiobjective, constrained, large scale, and uncertain optimization problems, and the theoretical studies conducted on DE so far are presented.
Abstract: Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms in current use. DE operates through similar computational steps as employed by a standard evolutionary algorithm (EA). However, unlike traditional EAs, the DE-variants perturb the current-generation population members with the scaled differences of randomly selected and distinct population members. Therefore, no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of many researchers all over the world resulting in a lot of variants of the basic algorithm with improved performance. This paper presents a detailed review of the basic concepts of DE and a survey of its major variants, its application to multiobjective, constrained, large scale, and uncertain optimization problems, and the theoretical studies conducted on DE so far. Also, it provides an overview of the significant engineering applications that have benefited from the powerful nature of DE.

4,321 citations
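
The core operator the abstract describes, perturbing population members with the scaled difference of randomly selected, distinct members, is the classic DE/rand/1/bin scheme, sketched below. The values F and CR are illustrative defaults, not prescriptions from the survey.

```python
import numpy as np

# Hedged sketch of DE/rand/1/bin: each trial vector adds the scaled
# difference of two randomly chosen, distinct members to a third,
# followed by binomial crossover and greedy selection (minimization).
def de_generation(pop, fitness, f, F=0.5, CR=0.9, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    for i in range(n):
        # Three distinct members, all different from the target i.
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])   # differential mutation
        cross = rng.random(d) < CR                    # binomial crossover mask
        cross[rng.integers(d)] = True                 # guarantee one mutant gene
        trial = np.where(cross, mutant, pop[i])
        tf = f(trial)
        if tf <= fitness[i]:                          # greedy one-to-one selection
            pop[i], fitness[i] = trial, tf
    return pop, fitness
```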