
Showing papers on "Extremal optimization published in 2016"


Journal ArticleDOI
TL;DR: A discrete bat-inspired algorithm is extended to solve the famous TSP and achieves significant improvements, not only over traditional algorithms but also over other metaheuristics.
Abstract: The travelling salesman problem (TSP) is a well-known NP-hard combinatorial optimization problem and one of the most extensively studied problems in discrete optimization. The bat algorithm is a nature-inspired metaheuristic optimization algorithm introduced by Yang in 2010, based on the echolocation behavior of microbats when searching for prey. Initially, this algorithm was used to solve various continuous optimization problems. In this paper, we extend a discrete bat-inspired algorithm to solve the famous TSP. Although many algorithms have been used to solve the TSP, the main objective of this research is to investigate whether this discrete version achieves significant improvements, not only over traditional algorithms but also over other metaheuristics. The study is based on a benchmark dataset of symmetric TSP instances from the TSPLIB library.
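
The paper's exact operators are not reproduced in the abstract, so the following is only a minimal Python sketch of how a bat-style loop might be discretized for the TSP: a 2-opt segment reversal stands in for the local flight, and a decaying loudness parameter controls acceptance (all names and values are illustrative assumptions, not the authors' implementation).

import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_move(tour):
    # Reverse a random segment: a simple discrete analogue of a local flight.
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def discrete_bat_tsp(dist, n_bats=20, iters=500, loudness=0.9):
    n = len(dist)
    bats = [random.sample(range(n), n) for _ in range(n_bats)]
    best = min(bats, key=lambda t: tour_length(t, dist))
    for _ in range(iters):
        for k, tour in enumerate(bats):
            candidate = two_opt_move(tour)
            # Accept improving moves, and worse ones with a loudness-controlled probability.
            if (tour_length(candidate, dist) < tour_length(tour, dist)
                    or random.random() < loudness):
                bats[k] = candidate
            if tour_length(bats[k], dist) < tour_length(best, dist):
                best = bats[k]
        loudness *= 0.99  # loudness decays, making the search greedier over time
    return best, tour_length(best, dist)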

91 citations


Journal ArticleDOI
TL;DR: The proposed IMOPEO-PLM adopts population-based iterated optimization, a more effective mutation operation called polynomial mutation, and a novel, more effective mechanism for generating the new population to solve multi-objective optimization problems (MOPs).

89 citations


Journal ArticleDOI
TL;DR: Experimental results on a large number of benchmark functions with different dimensions, analysed using non-parametric statistical tests, show that the proposed RPEO-PLM algorithm outperforms other popular population-based evolutionary algorithms, e.g., the real-coded genetic algorithm (RCGA) with adaptive directed mutation, RCGA with polynomial mutation, and an improved RPEO algorithm with random mutation, in terms of accuracy.

71 citations


Proceedings ArticleDOI
20 Jul 2016
TL;DR: This paper studies and compares different approaches for solving the Travelling Thief Problem from a metaheuristics perspective and proposes two heuristic algorithms: a Memetic Algorithm and a single-solution heuristic empowered by Hill Climbing and Simulated Annealing.
Abstract: The Travelling Thief Problem (TTP) is an optimization problem introduced in order to provide a more realistic model for real-world optimization problems. The problem combines the Travelling Salesman Problem and the Knapsack Problem and introduces the notion of interdependence between sub-problems. In this paper, we study and compare different approaches for solving the TTP from a metaheuristics perspective. Two heuristic algorithms are proposed. The first is a Memetic Algorithm, and the second is a single-solution heuristic empowered by Hill Climbing and Simulated Annealing. Two other state-of-the-art algorithms are briefly revisited, analyzed, and compared to our algorithms. The obtained results prove that our algorithms are very efficient for many TTP instances.
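
The single-solution heuristic leans on Hill Climbing and Simulated Annealing; the paper's neighbourhood and cooling schedule are not given in the abstract, so the sketch below only shows the generic SA acceptance loop such a heuristic relies on (the cost, neighbour, and schedule arguments are assumptions).

import math
import random

def simulated_annealing(initial, cost, neighbour, t0=100.0, alpha=0.95, steps=10_000):
    """Generic SA skeleton: `cost` scores a solution, `neighbour` perturbs one."""
    current, best = initial, initial
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
        if cost(current) < cost(best):
            best = current
        t *= alpha  # geometric cooling schedule (an assumption, not the paper's)
    return best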

45 citations


Journal ArticleDOI
TL;DR: The superiority of the proposed method over the Z-N empirical method, the binary-coded genetic algorithm, and binary-coded particle swarm optimization is demonstrated by both simulation and experimental results on a 20-kW three-phase inverter with nominal and variable loads.
Abstract: How to design an effective and efficient double closed-loop proportional–integral (PI) controller for a three-phase inverter so as to obtain a satisfactory output voltage waveform is of great practical significance. This paper presents a novel double closed-loop PI controller design method for a three-phase inverter based on a binary-coded extremal optimization (BCEO) algorithm. The basic idea behind the proposed method is to first formulate the optimal design of the double closed-loop PI controller for a three-phase inverter as a typical constrained optimization problem, in which the total harmonic distortion and the integral of time-weighted absolute error of the output voltage waveform are weighted into the optimization objective function, and then to design a BCEO algorithm to solve the formulated problem. The superiority of the proposed method over the Z-N empirical method, the binary-coded genetic algorithm, and binary-coded particle swarm optimization is demonstrated by both simulation and experimental results on a 20-kW three-phase inverter with nominal and variable loads.
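
The abstract says that total harmonic distortion (THD) and the integral of time-weighted absolute error (ITAE) of the output voltage are weighted into a single objective; a plausible form of that fitness function, with made-up weights and the inverter simulation left as an assumed callable, is sketched below.

def itae(t_samples, error_samples):
    """Integral of time-weighted absolute error, approximated by a Riemann sum."""
    total = 0.0
    for k in range(1, len(t_samples)):
        dt = t_samples[k] - t_samples[k - 1]
        total += t_samples[k] * abs(error_samples[k]) * dt
    return total

def pi_fitness(gains, simulate, w_thd=0.5, w_itae=0.5):
    """Weighted objective for double closed-loop PI tuning.

    `gains` is (kp_v, ki_v, kp_i, ki_i); `simulate` is assumed to run the
    inverter model and return (thd, t_samples, error_samples).
    """
    thd, t_samples, error_samples = simulate(gains)
    return w_thd * thd + w_itae * itae(t_samples, error_samples)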

44 citations


Journal ArticleDOI
TL;DR: Numerical optimization was used to find the optimal parameter values for simulated annealing (SA), threshold accepting (TA), great deluge (GD), tabu search (TS), genetic algorithm (GA) and ant colony optimization (AC) when they are used for combinatorial optimization in forest planning.
Abstract: Heuristic methods are commonly used in complicated spatial forest planning problems to find the best combination of management alternatives for stands. The performance of heuristic methods depends on the parameters that guide their search processes. This study used numerical optimization to find the optimal parameter values for simulated annealing (SA), threshold accepting (TA), great deluge (GD), tabu search (TS), genetic algorithm (GA) and ant colony optimization (AC) when they are used for combinatorial optimization in forest planning. Ant colony optimization was implemented using the Max–Min Ant System, which was applied to a forest planning problem for the first time. Solutions found by the different heuristic methods for a non-spatial and a spatial forest planning problem were compared in a situation where the search time was restricted. The comparisons revealed that SA and TA were the best methods for fast search in both non-spatial and spatial problems. GA and AC were the least satisfactory methods, and GD and TS were between the best and the worst heuristics. The main reason for the poor performance of GA and AC was their slow search process. Differences between the heuristic methods decreased when the allowed search time increased.
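
Simulated annealing and threshold accepting, two of the best-performing methods here, differ only in how a candidate move that worsens the objective is accepted; the two textbook acceptance rules (parameter names are illustrative, and the study's tuned values are not reproduced) are:

import math
import random

def accept_sa(delta, temperature):
    # Simulated annealing: a worse move (delta > 0, for minimization) is
    # accepted with probability exp(-delta / T).
    return delta <= 0 or random.random() < math.exp(-delta / temperature)

def accept_ta(delta, threshold):
    # Threshold accepting: a worse move is accepted deterministically as long
    # as the deterioration stays below the current threshold.
    return delta <= threshold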

30 citations


Journal ArticleDOI
TL;DR: A novel optimization algorithm based on the competitive behavior of various creatures, such as birds, cats, bees and ants, striving to survive in nature; the proposed method is shown to be efficient in finding solutions of optimization problems.
Abstract: This paper presents a novel optimization algorithm based on the competitive behavior of various creatures, such as birds, cats, bees and ants, striving to survive in nature. In the proposed method, a competition is designed among all the aforementioned creatures according to their performance. Every optimization algorithm can be appropriate for some objective functions and inappropriate for others. Owing to the interaction between the different optimization algorithms proposed in this paper, the algorithms modeled on the behavior of these creatures compete with each other for the best result. The rules of competition between the optimization methods are based on the imperialist competitive algorithm, which decides which of the algorithms survive and which must become extinct. In order to compare with well-known heuristic global optimization methods, simulations are carried out on benchmark test functions with different and high dimensions. The obtained results show that the proposed competition-based optimization algorithm is an efficient method for finding solutions of optimization problems.

29 citations



Journal ArticleDOI
TL;DR: An innovative method for the automatic design of optical systems is presented and verified, based on a multi-objective evolutionary memetic optimization algorithm that delivers alternative and useful insights for the compromise solutions in a lens design problem.
Abstract: An innovative method for the automatic design of optical systems is presented and verified. The proposed method is based on a multi-objective evolutionary memetic optimization algorithm. The multi-objective approach simultaneously, but separately, addresses the image quality, tolerance, and complexity of the system. The memetic technique breaks down the search for optical designs into three different parts, or phases: optical glass selection, exploration, and exploitation. The optical glass selection phase defines the most appropriate set of glasses for the system under design. The glass selection phase limits the available glasses from hundreds to just a few, drastically reducing the design space and significantly increasing the efficiency of the automatic design method. The exploration phase is based on an evolutionary algorithm (EA), more specifically, on a problem-tailored generalized extremal optimization (GEO) algorithm, named Optical GEO (O-GEO). The new EA incorporates many features customized for lens design, such as optical system codification and diversity operators. The trade-off systems found in the exploration phase are refined by a local search, based on the damped least squares method, in the exploitation phase. As a result, the method returns a set of trade-off solutions, generating a Pareto front. Our method delivers alternative and useful insights into the compromise solutions in a lens design problem. The efficiency of the proposed method is verified through real-world examples, showing excellent results for the tested problems.
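
Generalized extremal optimization (GEO), on which O-GEO builds, ranks the candidate bit flips of a binary-encoded design by their effect on the objective and flips the bit of rank k with probability proportional to k^(-tau); a minimal sketch of that selection step (not the O-GEO codification or its diversity operators) follows.

import random

def geo_step(bits, fitness, tau=1.5):
    """One GEO move on a binary string: rank flips, pick one with P(k) ~ k**-tau."""
    base = fitness(bits)
    # Effect of flipping each bit, ranked from most attractive to least (minimization).
    impacts = []
    for i in range(len(bits)):
        trial = bits[:]
        trial[i] ^= 1
        impacts.append((fitness(trial) - base, i))
    impacts.sort()  # rank 1 = flip that improves the objective the most
    weights = [(k + 1) ** -tau for k in range(len(impacts))]
    chosen = random.choices(range(len(impacts)), weights=weights, k=1)[0]
    flipped = bits[:]
    flipped[impacts[chosen][1]] ^= 1
    return flipped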

16 citations


Journal ArticleDOI
TL;DR: The proposed SPPBO scheme identifies different types of populations and their influence on the construction of new solutions and shows how SSO can be adapted for solving combinatorial optimization problems and how it is related to PACO.
Abstract: A generic scheme is proposed for designing and classifying simple probabilistic population-based optimization (SPPBO) algorithms that use principles from population-based ant colony optimization (PACO) and simplified swarm optimization (SSO) for solving combinatorial optimization problems. The scheme, called SPPBO, identifies different types of populations (or archives) and their influence on the construction of new solutions. The scheme is used to show how SSO can be adapted for solving combinatorial optimization problems and how it is related to PACO. Moreover, several new variants and combinations of these two metaheuristics are generated with the proposed scheme. An experimental study is done to evaluate and compare the influence of different population types on the optimization behavior of SPPBO algorithms, when applied to the traveling salesperson problem and the quadratic assignment problem.

15 citations


Journal ArticleDOI
Ziqiang Li1, Yuan Zeng1, Yishou Wang2, Lu Wang1, Bowen Song1 
TL;DR: This study designs a three-stage solution strategy for the payload packing of SM3P and proposes a hybrid multi-mechanism optimization approach (HMMOA) that integrates knowledge-based heuristic rules with two evolutionary algorithms, ant colony optimization (ACO) and particle swarm optimization (PSO), in different stages.

Proceedings ArticleDOI
28 May 2016
TL;DR: The simulation results show that the improved ACO is effective for solving the traveling salesman problem: it not only accelerates convergence but also inhibits premature stagnation during the convergence process.
Abstract: The traveling salesman problem (TSP) is a typical combinatorial optimization problem and an NP-hard problem in operations research. The ant colony algorithm (ACO) is a probabilistic technique used to find optimal paths in a graph. Based on an analysis of the main causes of the premature stagnation of standard ACO, the pheromone updating strategy is modified, and dynamically changing parameters and a local optimal search strategy are introduced to effectively restrain premature stagnation during the convergence process. The improved ant colony algorithm is then applied to solve typical TSP instances. The simulation results show that the improved ACO is effective for solving the traveling salesman problem: it not only accelerates convergence but also inhibits premature stagnation during the convergence process.
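
For context, the city-selection rule of standard ACO that such pheromone-update modifications act on weighs pheromone intensity against heuristic desirability; a compact version (with illustrative alpha and beta values) is:

import random

def next_city(current, unvisited, tau, dist, alpha=1.0, beta=2.0):
    """Pick the next TSP city with probability ~ pheromone**alpha * (1/d)**beta."""
    candidates = list(unvisited)
    weights = [tau[current][j] ** alpha * (1.0 / dist[current][j]) ** beta
               for j in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]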

Journal ArticleDOI
TL;DR: A novel method to solve the travelling salesman problem using a divide-and-conquer strategy: the cities are clustered, the resulting sub-problems are solved in a given order, and the sub-tours are merged by radius particle swarm optimization (RPSO).
Abstract: The travelling salesman problem (TSP) is a well-known and long-established combinatorial optimization problem. We propose a novel method to solve the TSP using a divide-and-conquer strategy. We employ K-means to cluster the cities into sub-groups, solve the resulting sub-problems in a given order, and merge the sub-tours using radius particle swarm optimization (RPSO). The RPSO incorporates adaptive mutation to reduce the impact of the solution bounds. In addition, a local search procedure is embedded into the RPSO to accelerate convergence and improve the solution. The performance of our proposed method is tested on a number of instances from the travelling salesman problem library (TSPLIB). Computational results and comparisons demonstrate the effectiveness of the method.
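
The divide-and-conquer step (cluster the cities with K-means, solve each cluster, then merge) could look roughly like the stdlib-only sketch below; the RPSO sub-solver and the merge procedure are replaced by simple placeholders, since the paper's operators are not reproduced here.

import math
import random

def kmeans(points, k, iters=50):
    """Tiny K-means on 2-D city coordinates (stdlib only)."""
    centres = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda c: math.dist(p, centres[c]))
            clusters[idx].append(p)
        centres = [(sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
                   if cl else random.choice(points) for cl in clusters]
    return clusters

def nearest_neighbour_tour(cities):
    """Placeholder sub-solver standing in for the paper's RPSO."""
    tour, rest = [cities[0]], cities[1:]
    while rest:
        nxt = min(rest, key=lambda c: math.dist(tour[-1], c))
        tour.append(nxt)
        rest.remove(nxt)
    return tour

def divide_and_conquer_tsp(cities, k=4):
    sub_tours = [nearest_neighbour_tour(cl) for cl in kmeans(cities, k) if cl]
    # Naive merge: concatenate sub-tours; the paper merges them with RPSO instead.
    return [city for sub in sub_tours for city in sub]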

Book ChapterDOI
30 Mar 2016
TL;DR: This work addresses the Quadratic Assignment Problem using a local search technique, based on Extremal Optimization, and shows that cooperative parallel versions of the solver improve performance so much that large and hard instances can be solved quickly.
Abstract: Several real-life applications can be stated in terms of the Quadratic Assignment Problem. Finding an optimal assignment is computationally very difficult, for many useful instances. We address this problem using a local search technique, based on Extremal Optimization and present experimental evidence that this approach is competitive. Moreover, cooperative parallel versions of our solver improve performance so much that large and hard instances can be solved quickly.
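
Extremal optimization repeatedly identifies the worst-contributing component of the current solution and perturbs it, choosing the component by rank with probability proportional to rank^(-tau); for a QAP permutation, one such move might look like the sketch below (the per-facility fitness and the swap move are simplified assumptions, not the authors' solver).

import random

def qap_cost(perm, flow, dist):
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]] for i in range(n) for j in range(n))

def facility_fitness(i, perm, flow, dist):
    # Contribution of facility i to the total cost (lower is better).
    return sum(flow[i][j] * dist[perm[i]][perm[j]] for j in range(len(perm)))

def eo_step(perm, flow, dist, tau=1.8):
    n = len(perm)
    ranked = sorted(range(n), key=lambda i: facility_fitness(i, perm, flow, dist),
                    reverse=True)  # rank 1 = worst-placed facility
    weights = [(k + 1) ** -tau for k in range(n)]
    victim = ranked[random.choices(range(n), weights=weights, k=1)[0]]
    other = random.choice([j for j in range(n) if j != victim])
    new_perm = perm[:]
    new_perm[victim], new_perm[other] = new_perm[other], new_perm[victim]
    return new_perm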

Book ChapterDOI
28 Jan 2016
TL;DR: This investigation shows how a supplementary level of complexity can be successfully handled in a heuristic optimization method, initially designed for the static and deterministic TSP variant when applied to an uncertain and dynamic TSP version.
Abstract: Many optimization problems have huge solution spaces, deep restrictions, highly correlated parameters, and operate with uncertain or inconsistent data. Such problems sometimes elude the usual solving methods we are familiar with, forcing us to continuously improve these methods or to even completely reconsider the solving methodologies. When decision makers need faster and better results to more difficult problems, the quality of a decision support system is crucial. To estimate the quality of a decision support system when approaching difficult problems is not easy, but is very important when designing and conducting vital industrial processes or logistic operations. This paper studies the resilience of a solving method, initially designed for the static and deterministic TSP (Traveling Salesman Problem) variant, when applied to an uncertain and dynamic TSP version. This investigation shows how a supplementary level of complexity can be successfully handled. The traditional ant-based system under investigation is infused with a technique which allows the evaluation of its performances when uncertain input data are present. Like the real ant colonies do, the system rapidly adapts to repeated environmental changes. A comparison with the performance of another heuristic optimization method is also done.

Journal ArticleDOI
TL;DR: This paper investigates the performance of two ACO algorithms, called and , on the travelling salesman problem with distance one and two (TSP(1,2)) which is an NP-complete problem.
Abstract: Ant colony optimization (ACO) is a kind of powerful and popular randomized search heuristic for solving combinatorial optimization problems. This paper investigates the performance of two ACO algorithms, called and , on the travelling salesman problem with distance one and two (TSP(1,2)), which is an NP-complete problem. It is shown that the two ACO algorithms obtain an approximation ratio of with regard to the optimal solutions in expected polynomial runtime. We also study the influence of pheromone information and heuristic information on the approximation performance. Finally, we construct an instance and demonstrate that ACO outperforms local search algorithms on this instance.

Journal ArticleDOI
TL;DR: In this paper, an optimization-simulation-optimization approach is used to solve the stochastic uncapacitated location-allocation problem with an unknown number of facilities, and an objective of minimizing the fixed and transportation costs.
Abstract: This study proposes a novel methodology towards using ant colony optimization (ACO) with stochastic demand. In particular, an optimization-simulation-optimization approach is used to solve the stochastic uncapacitated location-allocation problem with an unknown number of facilities, and an objective of minimizing the fixed and transportation costs. ACO is modeled using discrete event simulation to capture the randomness of customers' demand, and its objective is to optimize the costs. On the other hand, the simulated ACO's parameters are also optimized to guarantee superior solutions. This approach's performance is evaluated by comparing its solutions to the ones obtained using deterministic data. The results show that simulation was able to identify better facility allocations where the deterministic solutions would have been inadequate due to the real randomness of customers' demands.

Journal ArticleDOI
TL;DR: An integrated framework in which improved versions of PSO and DE are executed in an interleaved fashion to balance the exploration-exploitation dilemma in the evolution process; the performance of the proposed method is confirmed.
Abstract: Stochastic optimization algorithms have the potential to solve optimization problems in various fields of engineering and science. However, increasing non-linearity, non-convexity, multi-modality, discontinuity, and even dynamics make these problems more complex and intractable. Classical optimization techniques are not able to determine the global solution by analyzing rough non-linear surfaces. Heuristic algorithms have been used to determine global solutions for this type of problem. However, a heuristic algorithm is knowledge dependent, so finding a single heuristic optimization algorithm that obtains optimum solutions for all problems is not feasible. Hybridization is an integrated framework in which the merits of several algorithms are utilized to improve the performance of the optimizers. Particle Swarm Optimization (PSO) and Differential Evolution (DE) are two heuristic algorithms that, despite certain shortcomings, have been applied to solve global optimization problems. In this paper, we propose an integrated framework...
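
The hybridization idea, running improved PSO and DE phases in an interleaved fashion over one shared population, can be summarized by a skeleton like the one below; pso_phase and de_phase stand in for the paper's improved variants, which are not reproduced here.

def interleaved_pso_de(population, fitness, pso_phase, de_phase, cycles=100):
    """Alternate a PSO phase (exploration) and a DE phase (exploitation)
    on the same population; both phase functions are assumed callables that
    take and return a population."""
    best = min(population, key=fitness)
    for _ in range(cycles):
        population = pso_phase(population, fitness)  # swarm-style update
        population = de_phase(population, fitness)   # mutation/crossover/selection
        cycle_best = min(population, key=fitness)
        if fitness(cycle_best) < fitness(best):
            best = cycle_best
    return best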

OtherDOI
30 Sep 2016
TL;DR: In this article, the application of modern heuristic optimization techniques to power systems is discussed, including evolutionary algorithms (EAs), genetic algorithm (GA), particle swarm optimization (PSO), ant colony search algorithm, immune algorithm (IA), simulated annealing (SA), and the tabu search (TS).
Abstract: This chapter provides basic knowledge of recent intelligent optimization and control techniques and of how they are combined with knowledge elements in computational intelligence systems. It is devoted to the application of modern heuristic optimization techniques to power systems. The chapter covers various optimization techniques applied to power systems: evolutionary algorithms (EAs), genetic algorithm (GA), particle swarm optimization (PSO), ant colony search algorithm, immune algorithm (IA), simulated annealing (SA), and tabu search (TS). It shows that these heuristic techniques can solve very complex large-scale nonlinear optimization problems that cannot be handled by any analytic approach. Mutation randomly perturbs a candidate solution; recombination randomly mixes parts of candidate solutions to form a novel solution; reproduction replicates the most successful solutions found in a population; and selection purges poor solutions from a population. This process produces successive generations with candidates that are better and better suited to their environment.

Book ChapterDOI
01 Jan 2016
TL;DR: This chapter deals with the fundamentals of optimization and also presents various existing heuristic and meta-heuristic optimization techniques.
Abstract: This chapter deals with the fundamentals of optimization. The concept of stochastic optimization, and how stochastic optimization is advantageous over deterministic approaches, is described in Sect. 3.2. Heuristic and meta-heuristic optimization techniques are defined in Sect. 3.3, which also presents various existing heuristic and meta-heuristic optimization techniques. The fundamentals of swarm intelligence are given in Sect. 3.4, and the applications of swarm intelligence in various fields are also presented in this section.

Patent
03 Feb 2016
TL;DR: In this article, a dynamic adjustment method and system for a virtual machine are presented. The method comprises: obtaining server hosts in first and second running states, where in the first running state an overloaded server host cannot meet the SLA default rate, and in the second running state the load utilization rate of a no-load server host is lower than a preset threshold; and solving for a global optimal solution of the virtual machines to be migrated by using an extremal-optimization-based particle swarm algorithm.
Abstract: The invention provides a dynamic adjustment method and system for a virtual machine. The method comprises: obtaining server hosts in first and second running states, where in the first running state an overloaded server host cannot meet the SLA default rate, and in the second running state the load utilization rate of a no-load server host is lower than a preset threshold; and solving for a global optimal solution of the virtual machines to be migrated by using an extremal optimization based particle swarm algorithm. The virtual machines are adjusted according to the optimal solution, and these steps are repeated at a predetermined cycle within a predetermined time window to complete the dynamic adjustment of the virtual machines. The VMDA problem is efficiently solved by using the improved algorithm, which fuses extremal optimization with particle swarm optimization.
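
One common way to fuse extremal optimization into a particle swarm search for placement problems, shown here only as an assumption-laden illustration of the general idea rather than the patented algorithm, is to follow each swarm update with an EO-style move that re-places the worst-scoring virtual machine:

import random

def eo_mutate(assignment, hosts, vm_badness):
    """EO-style move: re-place the VM that currently contributes most to overload.

    `assignment` maps vm -> host; `vm_badness(vm, assignment)` scores how poorly
    a VM is placed (both names are assumptions for illustration). In a hybrid,
    this move would be applied to each particle after the swarm update.
    """
    worst_vm = max(assignment, key=lambda vm: vm_badness(vm, assignment))
    new_assignment = dict(assignment)
    new_assignment[worst_vm] = random.choice(hosts)
    return new_assignment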

Proceedings ArticleDOI
24 Jul 2016
TL;DR: The proposed D-OSADE algorithm is shown to be effective and efficient, achieving competitive performance when benchmarked against several state-of-the-art multi-objective evolutionary algorithms in this study.
Abstract: The multiple Traveling Salesman Problem (mTSP) is a complex combinatorial optimization problem and a generalization of the well-known Traveling Salesman Problem (TSP) in which one or more salesmen can be used in the solution. In this paper, we propose a novel differential evolution algorithm called D-OSADE to solve the multi-objective multiple salesman problem. In the algorithm, an opposition-based self-adaptive differential evolution variant is incorporated into a decomposition-based framework and then hybridized with the multipoint evolutionary gradient search (EGS) as a form of local search to enhance the search behaviour. The proposed algorithm is used to solve multi-objective mTSP instances with different numbers of objectives, salesmen and problem sizes. Experimental results, evaluated using the Inverted Generational Distance (IGD) performance indicator, demonstrate the effectiveness and efficiency of the proposed algorithm, which achieves competitive performance when benchmarked against several state-of-the-art multi-objective evolutionary algorithms.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: This paper introduces not only the modularity metric but also a Hamiltonian function (Potts model), combined with the meta-heuristic Bat algorithm and Novel Bat algorithm, with a promising outcome supporting the modified variants for community detection.
Abstract: In the present world, it is hard to overlook the omnipresence of the 'network'. Be it the study of internet structure, mobile networks, protein interactions or social networks, they all place strong emphasis on network and graph studies. Social network analysis is an emerging field that includes community detection as a key task. A community in a network is a group of nodes within which the density of links is high. To find the community structure, the modularity metric of a social network has been used in different optimization approaches such as greedy optimization, simulated annealing, extremal optimization, particle swarm optimization and genetic approaches. In this paper, we introduce not only the modularity metric but also a Hamiltonian function (Potts model), combined with the meta-heuristic Bat algorithm and Novel Bat algorithm. By utilizing these objective functions (modularity and Hamiltonian) with modified discrete versions of the Bat and Novel Bat algorithms, we devise four new variants for community detection. The results obtained across the four variants are compared with traditional approaches such as Girvan and Newman, fast greedy modularity optimization, Reichardt and Bornholdt, Ronhovde and Nussinov, and spectral clustering. After analyzing the results, we observe a promising outcome supporting the modified variants.
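
Modularity, one of the two objective functions used, compares the fraction of intra-community edges with the fraction expected in a random graph with the same degree sequence, Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j); a direct implementation of that formula is:

def modularity(adj, community):
    """adj: symmetric 0/1 adjacency matrix; community: list mapping node -> label."""
    n = len(adj)
    degrees = [sum(row) for row in adj]
    two_m = sum(degrees)  # 2m = total degree = twice the number of edges
    q = 0.0
    for i in range(n):
        for j in range(n):
            if community[i] == community[j]:
                q += adj[i][j] - degrees[i] * degrees[j] / two_m
    return q / two_m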

12 Dec 2016
TL;DR: An improved ant colony optimization algorithm is proposed with two highlights: first, a candidate set strategy is adopted to speed up convergence; second, an adaptive pheromone adjustment strategy is used to keep the pheromone distribution relatively uniform and to balance exploration and exploitation in the ants' random search.
Abstract: Ant colony optimization is an optimization technique that was introduced by Marco Dorigo in the early 1990s. The inspiring source of ant colony optimization is the foraging behaviour of real ant colonies. The ant system is a meta-heuristic for hard combinatorial optimization problems; it is a population-based approach that exploits positive feedback as well as greedy search. The TSP is one of the most famous NP-hard combinatorial optimization (CO) problems and has a wide application background. Combinatorial optimization consists of finding an optimal object from a finite set of objects. Ant colony optimization has proven to be a successful technique, has been applied to a number of combinatorial optimization problems, and is regarded as one of the high-performance computing methods for the travelling salesman problem (TSP). However, the ACO algorithm takes too long to converge and tends to become trapped in local optima when searching for optimal solutions to TSP instances. In this paper we propose an improved ant colony optimization algorithm with two highlights. First, a candidate set strategy is adopted to speed up convergence. Second, an adaptive pheromone adjustment strategy is used to keep the pheromone distribution relatively uniform and to balance exploration and exploitation in the ants' random search.

Proceedings ArticleDOI
01 Jul 2016
TL;DR: A tour construction algorithm based on a memory-based statistical learning mechanism is proposed; results show that it is an efficient learning algorithm for the travelling salesman problem.
Abstract: The travelling salesman problem is a well-known combinatorial optimization problem, and evolutionary computation methods are among the important approaches for solving it. A difficult issue for evolutionary computation methods is to identify good edges that belong to the global optimum during the search progress. To address this issue, this paper proposes a tour construction algorithm based on a memory-based statistical learning mechanism. A probability matrix is created according to the edge distribution in a memory population, which stores the best solution found by every individual. For each individual, a tour is constructed according to its local personal best solution found so far and the global probability matrix. Two variants of ant colony optimization are chosen to test the effectiveness of the proposed algorithm. The results show that the proposed algorithm is an efficient learning algorithm for the travelling salesman problem.
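
The memory-based mechanism builds an edge-probability matrix from the tours stored in the memory population and samples new tours from it; a stripped-down version of that idea (the smoothing constant and sampling rule are assumptions, and the personal-best bias is omitted) follows.

import random

def edge_probability_matrix(memory_tours, n_cities, smoothing=0.1):
    """Count how often each edge appears in the memory population of tours."""
    counts = [[smoothing] * n_cities for _ in range(n_cities)]
    for tour in memory_tours:
        for a, b in zip(tour, tour[1:] + tour[:1]):
            counts[a][b] += 1
            counts[b][a] += 1
    return counts

def construct_tour(prob, n_cities, start=0):
    """Sample a tour city by city, weighting each step by the edge counts."""
    tour, unvisited = [start], set(range(n_cities)) - {start}
    while unvisited:
        cur = tour[-1]
        cands = list(unvisited)
        nxt = random.choices(cands, weights=[prob[cur][j] for j in cands], k=1)[0]
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour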

Proceedings ArticleDOI
05 Sep 2016
TL;DR: This paper investigates and discusses the influence of the parameters of the Ant Colony Optimization algorithm when solving travelling salesman region problems.
Abstract: A variant of the well-known travelling salesman problem introduces particular dependencies among the cities. Such dependencies might describe relations between individual cities and can be used for autonomous vehicles that need to follow certain paths for various reasons. This paper deals with solving the Travelling Salesman Region Problem, in which particular connections between cities are already predefined. We investigate and discuss the influence of the parameters of the Ant Colony Optimization algorithm when solving travelling salesman region problems.

Journal ArticleDOI
TL;DR: A new optimization heuristic called AFO (Attraction Force Optimization) is presented, able to maximize discontinuous, non-differentiable and highly nonlinear functions in discrete simulation problems; it was developed specifically to overcome the limitations of traditional search algorithms in optimization problems performed on discrete-event simulation models.
Abstract: The paper presents a new optimization heuristic called AFO (Attraction Force Optimization), able to maximize discontinuous, non-differentiable and highly nonlinear functions in discrete simulation problems. The algorithm was developed specifically to overcome the limitations of traditional search algorithms in optimization problems performed on discrete-event simulation models used, for example, to study industrial systems and processes. Such applications are characterized by three particular aspects: the response surface of the objective function is not known to the experimenter, only a small number of independent variables are involved, and each single simulation experiment requires a very high computational time. In this context it is therefore essential to use an optimization algorithm that, on the one hand, tries to explore the entire domain of investigation as effectively as possible but, at the same time, does not require an excessive number of experiments. The article, after a quick overview of the best-known optimization techniques, explains the properties of AFO and its strengths and limitations compared to other search algorithms. The operating principle of the heuristic, inspired by the laws of attraction occurring in nature, is discussed in detail for 1-, 2- and N-dimensional functions from both a theoretical and an applicative point of view. The algorithm was then validated using the most common 2-dimensional and N-dimensional benchmark functions. The results are clearly positive when compared, for the same initial conditions, with traditional methods in vector spaces of up to 10 dimensions. A higher number of independent variables is generally not of interest for discrete simulation optimization problems in industrial applications (our research field).

Journal ArticleDOI
01 Sep 2016
TL;DR: An improved EO algorithm with guided state changes, which provides a parallel search for the next solution state during solution improvement based on some knowledge of the problem, is used, and several versions of parallelization methods for EO algorithms in the context of processor load balancing are proposed and evaluated.
Abstract: The paper concerns parallel methods for extremal optimization (EO) applied to processor load balancing during the execution of distributed programs. In these methods, EO algorithms detect an optimized strategy of task migration leading to a reduction of program execution time. We use an improved EO algorithm with guided state changes (EO-GS) that provides a parallel search for the next solution state during solution improvement, based on some knowledge of the problem. The search is based on two-step stochastic selection using two fitness functions, which account for the computation and communication assessment of migration targets. Based on the improved EO-GS approach, we propose and evaluate several versions of parallelization methods for EO algorithms in the context of processor load balancing. Some of them use the crossover operation known from genetic algorithms. The quality of the proposed algorithms is evaluated by experiments with simulated load balancing in the execution of distributed programs represented as macro data flow graphs. Load balancing based on the parallelized improved EO provides better convergence of the algorithm, a smaller number of task migrations, and reduced execution time of applications.
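
The guided state change (EO-GS) selects a migration in two stochastic steps, each driven by its own fitness function (one assessing computation balance, one assessing communication); the sketch below captures only that two-step, rank-based selection pattern, with both fitness functions left as assumed callables.

import random

def rank_select(items, badness, tau=1.5):
    """EO-style rank selection: rank items from worst to best by `badness`
    and pick the item of rank k with probability proportional to k**(-tau)."""
    ranked = sorted(items, key=badness, reverse=True)  # worst item first
    weights = [(k + 1) ** -tau for k in range(len(ranked))]
    return random.choices(ranked, weights=weights, k=1)[0]

def choose_migration(tasks, processors, comp_badness, comm_badness):
    # Step 1: pick a task to migrate, biased towards the worst computation balance.
    task = rank_select(tasks, comp_badness)
    # Step 2: pick its target processor, biased by the communication assessment.
    target = rank_select(processors, lambda p: comm_badness(task, p))
    return task, target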

Journal ArticleDOI
Xuepeng Huang1
TL;DR: The new algorithm improves on the basic ant colony algorithm in aspects such as ant colony initialization, the information density function, the distribution algorithm, and the direction of ant colony motion; it uses multiple optimization strategies, such as polynomial time reduction and branching factors, and improves the ant colony algorithm effectively.
Abstract: The ant colony algorithm is a heuristic algorithm suited to solving complicated combinatorial optimization problems, and it has shown great advantages in solving such problems since it was proposed. The algorithm uses distributed parallel computing and a positive feedback mechanism, and it is easy to combine with other algorithms. The ant colony algorithm has already been widely used in the field of discrete-space optimization; however, it has rarely been used for continuous-space optimization problems. On the basis of the basic ant colony algorithm's principles and mathematical model, this paper proposes an ant colony algorithm for solving continuous-space optimization problems. Compared with the basic ant colony algorithm, the new algorithm improves aspects such as ant colony initialization, the information density function, the distribution algorithm, and the direction of ant colony motion. The new algorithm uses multiple optimization strategies, such as polynomial time reduction and branching factors, and improves the ant colony algorithm effectively.