Topic
Extremal optimization
About: Extremal optimization is a research topic. Over the lifetime of this topic, 1,168 publications have been published, receiving 104,943 citations.
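For orientation, extremal optimization (EO) in its τ-EO form repeatedly ranks solution components by a local fitness and mutates a component chosen by a power law over ranks, so that the worst components are updated most often while occasional bad moves keep the search from stagnating. Below is a minimal, illustrative sketch on a toy bit-string problem; the problem, parameters, and function names are ours for illustration and are not taken from any of the papers listed here.

```python
import random

def tau_eo(n=32, tau=1.5, steps=2000, seed=0):
    """Minimal tau-EO sketch: minimize the number of 1-bits in a bit string.
    Each bit is a 'component'; a bit set to 1 has the worst local fitness."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    best = list(x)
    # Rank-selection weights: rank k is chosen with probability ~ (k+1)^(-tau).
    weights = [(k + 1) ** -tau for k in range(n)]
    for _ in range(steps):
        # Local fitness per component: 0 is good (bit already 0), -1 is bad.
        fitness = [-b for b in x]
        # Order components from worst to best local fitness.
        order = sorted(range(n), key=lambda i: fitness[i])
        k = rng.choices(range(n), weights=weights)[0]
        x[order[k]] ^= 1  # unconditionally mutate the chosen component
        if sum(x) < sum(best):
            best = list(x)
    return best

best = tau_eo()
print(sum(best))  # typically close to the optimum 0 on this toy problem
```

The hallmark of EO, visible in the sketch, is that there is no acceptance test on the full objective: the chosen component is always mutated, and only the best-so-far solution is recorded.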
Papers published on a yearly basis
Papers
01 Jan 2016
TL;DR: This chapter deals with the fundamentals of optimization and presents various existing heuristic and meta-heuristic optimization techniques.
Abstract: This chapter deals with the fundamentals of optimization. The concept of stochastic optimization, and how it is advantageous over deterministic approaches, is described in Sect. 3.2. Heuristic and meta-heuristic optimization techniques are defined in Sect. 3.3, which also presents various existing techniques of each kind. The fundamentals of swarm intelligence are given in Sect. 3.4, along with applications of swarm intelligence in various fields.
6 citations
TL;DR: A gradient-based adaptive PSO with improved EO (called GAPSO-IEO) that mitigates entrapment in local optima in high-dimensional search and reduces the time complexity of the algorithm.
Abstract: Most real-world applications can be formulated as optimization problems, which commonly suffer from being trapped in local optima. In this paper, we make full use of the global search capability of particle swarm optimization (PSO) and the local search ability of extremal optimization (EO), and propose a gradient-based adaptive PSO with improved EO (called GAPSO-IEO) to overcome entrapment in local optima in high-dimensional search and to reduce the time complexity of the algorithm. In the proposed algorithm, the improved EO (IEO) is adaptively incorporated into PSO to prevent particles from being trapped in local optima, according to the evolutionary states of the swarm, which are estimated from the gradients of the particles' fitness functions. We also improve the mutation strategy of EO by performing polynomial mutation (PLM) on each particle, instead of on each component of the particle; therefore, the algorithm is not sensitive to the dimension of the swarm. The proposed algorithm is tested on several unimodal/multimodal benchmark functions and on the Berkeley Segmentation Dataset and Benchmark (BSDS300). The experimental results show the superiority and efficiency of the proposed approach compared with state-of-the-art algorithms, and it achieves better performance in high-dimensional tasks.
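The hybridization described above can be sketched as a standard PSO loop with an EO-style step that applies polynomial mutation to a whole particle (rather than to one component at a time). This is a loose, illustrative sketch under our own assumptions; the bounds, parameter values, and the rule for choosing which particle to mutate are ours, not the paper's.

```python
import random

def pso_with_eo_mutation(f, dim=10, n_particles=20, iters=300,
                         w=0.7, c1=1.5, c2=1.5, pm_eta=20, seed=1):
    """Sketch of PSO hybridized with an EO-style polynomial-mutation step.
    Minimizes f over [-5, 5]^dim; returns the best objective value found."""
    rng = random.Random(seed)
    lo, hi = -5.0, 5.0
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    def poly_mutate(x):
        # Polynomial mutation applied to every coordinate of one particle.
        y = x[:]
        for j in range(dim):
            u = rng.random()
            if u < 0.5:
                delta = (2 * u) ** (1 / (pm_eta + 1)) - 1
            else:
                delta = 1 - (2 * (1 - u)) ** (1 / (pm_eta + 1))
            y[j] = min(hi, max(lo, y[j] + delta * (hi - lo)))
        return y

    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][j] = (w * vel[i][j]
                             + c1 * r1 * (pbest[i][j] - pos[i][j])
                             + c2 * r2 * (gbest[j] - pos[i][j]))
                pos[i][j] = min(hi, max(lo, pos[i][j] + vel[i][j]))
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
        # EO-style step: mutate the worst particle as a whole and keep it
        # if it improves on that particle's personal best.
        worst = max(range(n_particles), key=lambda i: pbest_f[i])
        cand = poly_mutate(pos[worst])
        fc = f(cand)
        if fc < pbest_f[worst]:
            pos[worst], pbest[worst], pbest_f[worst] = cand[:], cand[:], fc
            if fc < gbest_f:
                gbest, gbest_f = cand[:], fc
    return gbest_f

print(pso_with_eo_mutation(lambda x: sum(v * v for v in x)))
```

Mutating whole particles keeps the cost of the EO step independent of the problem dimension, which matches the paper's stated motivation for moving away from per-component mutation.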
6 citations
01 Jan 1997
6 citations
02 May 2007
TL;DR: A multi-objective model for NP-hard spanning tree problems is presented, and it is shown that the model can help to speed up approximation algorithms for such problems.
Abstract: Randomized search heuristics have widely been applied to complex engineering problems as well as to problems from combinatorial optimization. We investigate the runtime behavior of randomized search heuristics and present runtime bounds for these heuristics on some well-known combinatorial optimization problems. Such analyses can help us better understand the working principles of these algorithms on combinatorial optimization problems, as well as help to design better algorithms for a newly given problem. Our analyses mainly consider evolutionary algorithms, which have achieved good results on a wide class of NP-hard combinatorial optimization problems. We start by analyzing some easy single-objective optimization problems, such as the minimum spanning tree problem or the problem of computing an Eulerian cycle of a given Eulerian graph, and prove bounds on the runtime of simple evolutionary algorithms. For the minimum spanning tree problem we also investigate a multi-objective model and show that randomized search heuristics find minimum spanning trees more easily in this model than in a single-objective one. Many polynomially solvable problems become NP-hard when a second objective has to be optimized at the same time. We show that evolutionary algorithms are able to compute good approximations for such problems by examining the NP-hard multi-objective minimum spanning tree problem. Another kind of randomized search heuristic is ant colony optimization. Up to now, no runtime bounds have been achieved for this kind of heuristic. We investigate a simple ant colony optimization algorithm and present a first runtime analysis. At the end we turn to classical approximation algorithms. Motivated by our investigations of randomized search heuristics for the minimum spanning tree problem, we present a multi-objective model for NP-hard spanning tree problems and show that the model can help to speed up approximation algorithms for such problems.
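The "simple evolutionary algorithms" whose runtime such analyses bound are typified by the (1+1) EA. As an illustrative sketch (not the thesis's exact algorithms or problems), here it is on OneMax, where its expected runtime is known to be Θ(n log n):

```python
import random

def one_plus_one_ea(n=50, max_steps=100000, seed=0):
    """Minimal (1+1) EA on OneMax (maximize the number of 1-bits).
    Returns the number of steps taken to reach the optimum."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for step in range(1, max_steps + 1):
        # Standard bit mutation: flip each bit independently with prob 1/n.
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        if sum(y) >= sum(x):   # accept the offspring if at least as good
            x = y
        if sum(x) == n:
            return step
    return max_steps
```

Runtime analyses of the kind described in the abstract bound the expected number of iterations of exactly this sort of loop until an optimal (or approximately optimal) solution is first sampled.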
6 citations
24 May 2009 - World Academy of Science, Engineering and Technology, International Journal of Computer, Electrical, Automation, Control and Information Engineering
TL;DR: A simple but fast heuristic algorithm for a stock-cutting problem with rotation of items and without the guillotine cutting constraint; the heuristic outperforms the latest published algorithms on large-scale problem instances.
Abstract: This paper addresses a stock-cutting problem with rotation of items and without the guillotine cutting constraint. In order to solve the large-scale problem effectively and efficiently, we propose a simple but fast heuristic algorithm. It is shown that this heuristic outperforms the latest published algorithms for large-scale problem instances. Keywords—Combinatorial optimization, heuristic, large-scale, stock-cutting.
6 citations