An adaptive multimodal continuous ACO algorithm is introduced, together with an adaptive parameter adjustment that takes the differences among niches into consideration and affords a good balance between exploration and exploitation.
Abstract:
Multimodal optimization, which aims at seeking multiple optima simultaneously, has attracted increasing attention but remains challenging. Taking advantage of the ability of ant colony optimization (ACO) algorithms to preserve high diversity, this paper intends to extend ACO algorithms to deal with multimodal optimization. First, combined with current niching methods, an adaptive multimodal continuous ACO algorithm is introduced. In this algorithm, an adaptive parameter adjustment is developed that takes the differences among niches into consideration. Second, to accelerate convergence, a differential evolution mutation operator is utilized as an alternative way to build base vectors for ants to construct new solutions. Third, to enhance exploitation, a local search scheme based on the Gaussian distribution is self-adaptively performed around the seeds of niches. Together, these components afford a good balance between exploration and exploitation. Extensive experiments on 20 widely used benchmark multimodal functions are conducted to investigate the influence of each algorithmic component, and the results are compared with several state-of-the-art multimodal algorithms and with winners of competitions on multimodal optimization. These comparisons demonstrate the competitive efficiency and effectiveness of the proposed algorithm, especially on complex problems with large numbers of local optima.
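The differential evolution mutation used to build base vectors can be sketched as follows. This is a minimal sketch assuming the classic DE/rand/1 rule; the exact variant and parameter values used in the paper may differ, and all names here are illustrative:

```python
import random

def de_mutation_base_vector(population, F=0.5):
    """Build a base vector with the classic DE/rand/1 mutation:
    v = x_r1 + F * (x_r2 - x_r3), where r1, r2, r3 are distinct
    random indices into the population. (Illustrative sketch; the
    paper alternates DE mutation with the usual ACO construction.)"""
    r1, r2, r3 = random.sample(range(len(population)), 3)
    x1, x2, x3 = population[r1], population[r2], population[r3]
    return [a + F * (b - c) for a, b, c in zip(x1, x2, x3)]

# Usage: four 2-D ants; the result is a 2-D base vector
pop = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.5], [0.5, 2.0]]
v = de_mutation_base_vector(pop)
```

An ant then constructs its new solution around this base vector instead of around a solution sampled from the pheromone-weighted archive, which pulls search toward promising regions faster.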
TL;DR: The results show that the OEMACS generally outperforms conventional heuristic and other evolutionary-based approaches, especially on VMP with bottleneck resource characteristics, and offers significant savings of energy and more efficient use of different resources.
TL;DR: This paper first revisits the fundamental concepts about niching and its most representative schemes, then reviews the most recent development of niching methods, including novel and hybrid methods, performance measures, and benchmarks for their assessment, and poses challenges and research questions on niching that are yet to be appropriately addressed.
TL;DR: This work considers particles in the swarm as mixed-level students and proposes a level-based learning swarm optimizer (LLSO) to settle large-scale optimization, which is still considerably challenging in evolutionary computation.
TL;DR: The proposed ANDE algorithm acts as a parameter-free automatic niching method that does not need to predefine the number of clusters or the cluster size and is enhanced by a contour prediction approach (CPA) and a two-level local search strategy.
TL;DR: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced, the evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed.
TL;DR: It is shown how the ant system (AS) can be applied to other optimization problems such as the asymmetric traveling salesman problem, the quadratic assignment problem, and job-shop scheduling, and the salient characteristics of the AS are discussed: global data structure revision, distributed communication, and probabilistic transitions.
TL;DR: The results show that the ACS outperforms other nature-inspired algorithms such as simulated annealing and evolutionary computation, and the paper concludes by comparing ACS-3-opt, a version of the ACS augmented with a local search procedure, with some of the best-performing algorithms for symmetric and asymmetric TSPs.
TL;DR: Ant colony optimization (ACO) is a relatively new approach to problem solving that takes inspiration from the social behaviors of insects and other animals. In particular, ants have inspired a number of methods and techniques, among which the most studied and most successful is the general-purpose optimization technique known as ant colony optimization.
TL;DR: A detailed review of the basic concepts of DE and a survey of its major variants, its application to multiobjective, constrained, large scale, and uncertain optimization problems, and the theoretical studies conducted on DE so far are presented.
Q1. What are the contributions in "Adaptive multimodal continuous ant colony optimization" ?
Taking advantage of ant colony optimization (ACO) algorithms in preserving high diversity, this paper intends to extend ACO algorithms to deal with multimodal optimization. This work was supported in part by the National Natural Science Foundation of China under Project 61379061, Project 61332002, Project 61511130078, and Project 6141101191, in part by the Natural Science Foundation of Guangdong for Distinguished Young Scholars under Project 2015A030306024, in part by the Guangdong Special Support Program under Project 2014TQ01X550, and in part by the Guangzhou Pearl River New Star of Science and Technology under Project 201506010002 and Project 151700098. This paper has supplementary downloadable multimedia material available at http://ieeexplore.ieee.org, provided by the authors. Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Q2. What are the future works in "Adaptive multimodal continuous ant colony optimization" ?
Therefore, there is room to further improve the performance of the proposed algorithms on very complex problems, which forms a part of future work.
Q3. What are the two parameters that need to be set in the proposed LAM-ACOs?
In the proposed LAM-ACOs, only two parameters need to be set, namely, the ant colony size (NP) and the niche size set G.
Q4. What are the five accuracy levels used in the experiments?
In this paper, five accuracy levels, namely ε = 1.0E-01, ε = 1.0E-02, ε = 1.0E-03, ε = 1.0E-04, and ε = 1.0E-05, are adopted in the experiments.
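Under the CEC-style evaluation commonly used for these benchmarks, a solution counts as a located optimum at accuracy level ε when its fitness is within ε of the global optimum fitness, and the peak ratio (PR) is the fraction of known optima located. The following sketch assumes that convention (maximization, and deduplication of solutions on the same peak omitted for brevity); names are illustrative:

```python
def peak_ratio(found_fitnesses, global_opt_fitness, n_known_optima, eps):
    """Fraction of the known global optima located to accuracy eps.
    A candidate counts when its fitness lies within eps of the global
    optimum fitness (maximization assumed). A full implementation would
    also merge candidates that converge to the same peak, e.g. via a
    niche radius; that step is omitted here."""
    n_found = sum(1 for f in found_fitnesses if global_opt_fitness - f <= eps)
    return min(n_found, n_known_optima) / n_known_optima

# The five accuracy levels used in the experiments:
levels = [1e-1, 1e-2, 1e-3, 1e-4, 1e-5]
fits = [9.9995, 9.99, 9.2]  # hypothetical final fitnesses, optimum = 10.0
prs = [peak_ratio(fits, 10.0, 3, e) for e in levels]
```

Tighter accuracy levels can only shrink PR, which is why results are reported at all five levels.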
Q5. Why do the authors use the same scheme for ants?
For the local search method, the authors propose to utilize a scheme similar to the one used in solution construction for ants in (3), because the Gaussian distribution has a narrow sampling space, especially when the standard deviation δ is small.
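The narrow-sampling property can be illustrated with a minimal sketch: candidates are drawn coordinate-wise from a Gaussian centered on a niche seed, and the best is kept. This is an assumed simplification of the paper's scheme, with illustrative names and a fixed δ rather than the paper's self-adaptive one:

```python
import random

def gaussian_local_search(seed, delta, fitness, trials=10):
    """Sample candidates from N(seed_j, delta) around a niche seed and
    keep the best (maximization). A small delta concentrates samples
    near the seed, which suits fine-grained exploitation."""
    best, best_fit = list(seed), fitness(seed)
    for _ in range(trials):
        cand = [random.gauss(x, delta) for x in seed]
        f = fitness(cand)
        if f > best_fit:
            best, best_fit = cand, f
    return best, best_fit

# Usage: refine a seed of f(x) = -(x - 1)^2, whose maximum is at x = 1
f = lambda x: -(x[0] - 1.0) ** 2
refined, fit = gaussian_local_search([0.9], delta=0.05, fitness=f, trials=50)
```

Because the incumbent seed is never discarded, the refined fitness is always at least that of the seed.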
Q6. Why is LAMS-ACO better than the compared winner?
Please note that, due to the absence of detailed results in the associated competitions, whether LAMS-ACO is better than, equivalent to, or worse than the compared winner is determined in these tables solely by the PR values, without any statistical test validation.
Q7. What are the evaluation criteria used in the special session and the state-of-the-art?
In addition, the evaluation criteria used in both the special session and the state-of-the-art papers [44], [50]–[52], [54] are utilized to evaluate the performance of different algorithms.
Q8. What is the third technique used to refine the obtained solutions?
The third one is self-adaptively performed around the seeds of niches to refine the obtained solutions, which benefits exploitation.
Q9. What is the probability of the ith seed to do local search?
Pi is the probability that the ith seed performs local search, FSEi is the fitness of the ith seed, and FSEmax is the maximum fitness value among all seeds.
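The summary names the quantities but not the formula. One plausible fitness-proportional form consistent with the description, assumed here purely for illustration (the paper's exact expression may differ, e.g. it may normalize differently for negative fitness), is P_i = FSE_i / FSE_max:

```python
def local_search_probability(seed_fitnesses):
    """Assumed fitness-proportional rule: P_i = FSE_i / FSE_max.
    The best seed then performs local search with probability 1, and
    weaker seeds do so with proportionally lower probability.
    (Positive fitness and maximization assumed; illustrative only.)"""
    fse_max = max(seed_fitnesses)
    return [f / fse_max for f in seed_fitnesses]

probs = local_search_probability([2.0, 4.0, 8.0])  # → [0.25, 0.5, 1.0]
```

Biasing local search toward fitter seeds spends the extra function evaluations where refinement is most likely to reach a true optimum.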