Local optimum

About: Local optimum is a research topic. Over the lifetime, 9,667 publications have been published within this topic, receiving 176,414 citations.


Papers
Book
25 Nov 2014
TL;DR: The differential evolution (DE) algorithm is presented as a practical approach to global numerical optimization that is easy to understand, simple to implement, reliable, and fast; the book is a valuable resource for professionals needing a proven optimizer and for students wanting an evolutionary perspective on global numerical optimization.
Abstract: Problems demanding globally optimal solutions are ubiquitous, yet many are intractable when they involve constrained functions having many local optima and interacting, mixed-type variables. The differential evolution (DE) algorithm is a practical approach to global numerical optimization which is easy to understand, simple to implement, reliable, and fast. Packed with illustrations, computer code, new insights, and practical advice, this volume explores DE in both principle and practice. It is a valuable resource for professionals needing a proven optimizer and for students wanting an evolutionary perspective on global numerical optimization.
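To make the DE scheme concrete, here is a minimal sketch of the classic DE/rand/1/bin variant in Python, assuming box constraints and a NumPy-style objective. The function name, parameter defaults, and the Rastrigin test function are illustrative choices for this sketch, not taken from the book.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, max_gens=200, seed=0):
    """Minimize f over box bounds with the classic DE/rand/1/bin scheme."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(max_gens):
        for i in range(pop_size):
            # Pick three distinct individuals, all different from i.
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)   # differential mutation
            # Binomial crossover: at least one coordinate comes from the mutant.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fitness[i]:                   # greedy one-to-one selection
                pop[i], fitness[i] = trial, f_trial
    best = np.argmin(fitness)
    return pop[best], fitness[best]

# Example: the multimodal Rastrigin function, which is full of local optima.
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
x_best, f_best = differential_evolution(rastrigin, np.array([[-5.12, 5.12]] * 5))
```

The greedy selection step is what makes DE robust against local optima in practice: a trial vector only replaces its parent if it is at least as good, so the population never degrades while the differential mutation keeps sampling new basins.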

4,273 citations

Journal ArticleDOI
TL;DR: This chapter presents the basic schemes of variable neighborhood search (VNS) and some of its extensions, together with five families of applications in which VNS has proven very successful.
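For context, the basic VNS scheme alternates shaking in progressively larger neighborhoods with local search, restarting from the first neighborhood after any improvement. The sketch below is a generic rendering of that loop; the callback signatures (`neighborhoods`, `local_search`) are assumptions of this illustration, not an API from the paper.

```python
import random

def vns(x0, f, neighborhoods, local_search, max_iters=100, seed=0):
    """Basic VNS: shake in neighborhood k, local-search, then move or widen.

    `neighborhoods[k]` takes (solution, rng) and returns a random point in the
    k-th neighborhood of the solution (hypothetical signature for this sketch).
    `local_search(x, f)` descends from x to a local optimum of f.
    """
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(max_iters):
        k = 0
        while k < len(neighborhoods):
            x_shake = neighborhoods[k](x, rng)   # shaking: random escape attempt
            x_loc = local_search(x_shake, f)     # descend to a local optimum
            f_loc = f(x_loc)
            if f_loc < fx:
                x, fx = x_loc, f_loc             # improvement: move, restart at N_1
                k = 0
            else:
                k += 1                           # no luck: try a larger neighborhood
    return x, fx
```

The key idea is that a local optimum for one neighborhood structure need not be one for another, so systematically changing neighborhoods lets the search climb out of traps without restarting from scratch.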

3,572 citations

Journal ArticleDOI
TL;DR: This paper discusses natural biogeography and its mathematics, shows how they can be used to solve optimization problems, and observes that BBO shares features with other biology-based optimization methods, such as GAs and particle swarm optimization (PSO).
Abstract: Biogeography is the study of the geographical distribution of biological organisms. Mathematical equations that govern the distribution of organisms were first discovered and developed during the 1960s. The mindset of the engineer is that we can learn from nature. This motivates the application of biogeography to optimization problems. Just as the mathematics of biological genetics inspired the development of genetic algorithms (GAs), and the mathematics of biological neurons inspired the development of artificial neural networks, this paper considers the mathematics of biogeography as the basis for the development of a new field: biogeography-based optimization (BBO). We discuss natural biogeography and its mathematics, and then discuss how it can be used to solve optimization problems. We see that BBO has features in common with other biology-based optimization methods, such as GAs and particle swarm optimization (PSO). This makes BBO applicable to many of the same types of problems that GAs and PSO are used for, namely, high-dimension problems with multiple local optima. However, BBO also has some features that are unique among biology-based optimization methods. We demonstrate the performance of BBO on a set of 14 standard benchmarks and compare it with seven other biology-based optimization algorithms. We also demonstrate BBO on a real-world sensor selection problem for aircraft engine health estimation.
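A minimal sketch of the BBO migration idea follows, assuming a linear migration model (emigration rate proportional to fitness rank, immigration its complement) and simple uniform mutation; the operator details in the paper may differ from this simplification.

```python
import numpy as np

def bbo(f, bounds, pop_size=30, max_gens=200, p_mut=0.02, seed=0):
    """Simplified biogeography-based optimization (minimization sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([f(x) for x in pop])
    for _ in range(max_gens):
        order = np.argsort(cost)                 # best habitat first
        pop, cost = pop[order], cost[order]
        rank = np.arange(pop_size)
        mu = (pop_size - rank) / pop_size        # emigration: high for good habitats
        lam = 1.0 - mu                           # immigration: high for poor habitats
        new_pop = pop.copy()
        for i in range(pop_size):
            for d in range(dim):
                if rng.random() < lam[i]:        # immigrate this feature from a donor
                    j = rng.choice(pop_size, p=mu / mu.sum())
                    new_pop[i, d] = pop[j, d]
                if rng.random() < p_mut:         # mutation keeps diversity
                    new_pop[i, d] = rng.uniform(lo[d], hi[d])
        new_cost = np.array([f(x) for x in new_pop])
        improved = new_cost < cost               # keep the better habitat per slot
        pop[improved], cost[improved] = new_pop[improved], new_cost[improved]
    best = np.argmin(cost)
    return pop[best], cost[best]
```

Migration is what distinguishes BBO from GA crossover: good solutions tend to share features with poor ones probabilistically, feature by feature, rather than recombining whole parents.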

3,418 citations

Journal ArticleDOI
TL;DR: The SCA obtains a smooth airfoil shape with very low drag, demonstrating that the algorithm can be highly effective in solving real problems with constrained and unknown search spaces.
Abstract: This paper proposes a novel population-based optimization algorithm called the Sine Cosine Algorithm (SCA) for solving optimization problems. The SCA creates multiple initial random candidate solutions and requires them to fluctuate outwards or towards the best solution using a mathematical model based on sine and cosine functions. Several random and adaptive variables are also integrated into this algorithm to emphasize exploration and exploitation of the search space at different stages of the optimization. The performance of SCA is benchmarked in three test phases. First, a set of well-known test cases including unimodal, multi-modal, and composite functions is employed to test the exploration, exploitation, local optima avoidance, and convergence of SCA. Second, several performance metrics (search history, trajectory, average fitness of solutions, and the best solution during optimization) are used to qualitatively observe and confirm the performance of SCA on shifted two-dimensional test functions. Finally, the cross-section of an aircraft's wing is optimized by SCA as a challenging real case study to verify and demonstrate the performance of the algorithm in practice. The results of the test functions and performance metrics show that the proposed algorithm is able to explore different regions of the search space, avoid local optima, converge towards the global optimum, and exploit promising regions of the search space during optimization. The SCA obtains a smooth airfoil shape with very low drag, which demonstrates that the algorithm can be highly effective in solving real problems with constrained and unknown search spaces. Note that the source code of the SCA algorithm is publicly available at http://www.alimirjalili.com/SCA.html .
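The sine/cosine fluctuation rule described in the abstract is commonly written as X ← X + r1·sin(r2)·|r3·P − X| (or the cosine variant), where P is the best solution found so far and r1 decreases linearly over iterations to shift from exploration to exploitation. The sketch below implements that rule; population size, iteration count, and the constant a are illustrative defaults.

```python
import numpy as np

def sca(f, bounds, pop_size=30, max_iters=500, a=2.0, seed=0):
    """Sine Cosine Algorithm sketch following the published update rule."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([f(x) for x in X])
    best = X[np.argmin(cost)].copy()
    best_cost = cost.min()
    for t in range(max_iters):
        r1 = a - t * a / max_iters              # shrinks from a to 0: explore -> exploit
        r2 = rng.uniform(0, 2 * np.pi, size=(pop_size, dim))
        r3 = rng.uniform(0, 2, size=(pop_size, dim))
        r4 = rng.random((pop_size, dim))        # picks sine or cosine per coordinate
        step = np.abs(r3 * best - X)
        X = np.where(r4 < 0.5,
                     X + r1 * np.sin(r2) * step,
                     X + r1 * np.cos(r2) * step)
        X = np.clip(X, lo, hi)
        cost = np.array([f(x) for x in X])
        if cost.min() < best_cost:              # track the destination point P
            best, best_cost = X[np.argmin(cost)].copy(), cost.min()
    return best, best_cost
```

Because sin and cos range over [-1, 1], large r1 lets candidates overshoot past the best solution (exploration), while small r1 confines them to its vicinity (exploitation).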

3,088 citations

Journal ArticleDOI
01 Dec 2009
TL;DR: An adaptive particle swarm optimization (APSO) that features better search efficiency than classical particle swarm optimization (PSO) is presented; it can perform a global search over the entire search space with faster convergence speed.
Abstract: An adaptive particle swarm optimization (APSO) that features better search efficiency than classical particle swarm optimization (PSO) is presented. More importantly, it can perform a global search over the entire search space with faster convergence speed. The APSO consists of two main steps. First, by evaluating the population distribution and particle fitness, a real-time evolutionary state estimation procedure is performed to identify, in each generation, one of four defined evolutionary states: exploration, exploitation, convergence, and jumping out. This enables automatic control of the inertia weight, acceleration coefficients, and other algorithmic parameters at run time, improving search efficiency and convergence speed. Second, an elitist learning strategy is performed when the evolutionary state is classified as convergence; the strategy acts on the globally best particle to help it jump out of likely local optima. APSO has been comprehensively evaluated on 12 unimodal and multimodal benchmark functions, and the effects of parameter adaptation and elitist learning are studied. Results show that APSO substantially enhances the performance of the PSO paradigm in terms of convergence speed, global optimality, solution accuracy, and algorithm reliability. Since APSO adds only two new parameters to the PSO paradigm, it introduces no additional design or implementation complexity.
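Below is a hedged sketch of the two APSO ingredients the abstract names: run-time adaptation of the inertia weight from an estimate of population spread, and elitist learning that perturbs the global best when the swarm looks converged. The spread proxy and thresholds here are simplifications standing in for the paper's evolutionary state estimation procedure, not its actual formulas.

```python
import numpy as np

def apso_sketch(f, bounds, pop_size=30, max_iters=300, seed=0):
    """Global-best PSO with simplified stand-ins for APSO's two main ideas."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(pop_size, dim))
    V = np.zeros((pop_size, dim))
    pbest, pcost = X.copy(), np.array([f(x) for x in X])
    g = np.argmin(pcost)
    gbest, gcost = pbest[g].copy(), pcost[g]
    c1 = c2 = 2.0
    for t in range(max_iters):
        # Crude proxy for the evolutionary state: normalized swarm spread.
        spread = X.std(axis=0).mean() / (hi - lo).mean()
        w = 0.4 + 0.5 * min(spread * 4, 1.0)    # wide swarm -> w near 0.9 (explore)
        r1, r2 = rng.random((pop_size, dim)), rng.random((pop_size, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = np.clip(X + V, lo, hi)
        cost = np.array([f(x) for x in X])
        better = cost < pcost
        pbest[better], pcost[better] = X[better], cost[better]
        g = np.argmin(pcost)
        if pcost[g] < gcost:
            gbest, gcost = pbest[g].copy(), pcost[g]
        # Elitist learning: when converged, kick one dimension of the global best.
        if spread < 0.01:
            trial = gbest.copy()
            d = rng.integers(dim)
            trial[d] = np.clip(trial[d] + (hi[d] - lo[d]) * rng.normal(0, 0.1),
                               lo[d], hi[d])
            if (fc := f(trial)) < gcost:
                gbest, gcost = trial, fc
    return gbest, gcost
```

The elitist perturbation is accepted only if it improves the global best, so it can pull a converged swarm out of a local optimum without ever losing the best solution found.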

1,713 citations


Network Information
Related Topics (5)

Topic                       Papers    Citations   Related
Optimization problem        96.4K     2.1M        90%
Artificial neural network   207K      4.5M        89%
Feature extraction          111.8K    2.1M        86%
Fuzzy logic                 151.2K    2.3M        85%
Cluster analysis            146.5K    2.9M        85%
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    187
2022    427
2021    804
2020    739
2019    772
2018    689