scispace - formally typeset

Showing papers on "Extremal optimization published in 2005"


Journal ArticleDOI
TL;DR: The method improves on the optimal modularity found by the existing algorithms in the literature and is feasible to use for the accurate identification of community structure in large complex networks.
Abstract: We propose a method to find the community structure in complex networks based on an extremal optimization of the value of modularity. The method improves on the optimal modularity found by the existing algorithms in the literature, giving a better understanding of the community structure. We present the results of the algorithm for computer-simulated and real networks and compare them with other approaches. The efficiency and accuracy of the method make it feasible to use for the accurate identification of community structure in large complex networks.

1,534 citations


Proceedings ArticleDOI
12 Dec 2005
TL;DR: Sequential parameter optimization, as discussed by the authors, is a heuristic that combines classical and modern statistical techniques to improve the performance of search algorithms; it can be performed algorithmically and basically requires only the specification of the relevant algorithm's parameters.
Abstract: Sequential parameter optimization is a heuristic that combines classical and modern statistical techniques to improve the performance of search algorithms. To demonstrate its flexibility, three scenarios are discussed: (1) no experience in how to choose the parameter setting of an algorithm is available, (2) a comparison with other algorithms is needed, and (3) an optimization algorithm has to be applied effectively and efficiently to a complex real-world optimization problem. Although sequential parameter optimization relies on enhanced statistical techniques such as design and analysis of computer experiments, it can be performed algorithmically and basically requires only the specification of the relevant algorithm's parameters.

283 citations


Proceedings ArticleDOI
12 Dec 2005
TL;DR: This paper investigates the use of evolutionary multi-objective optimization methods (EMOs) for solving single-objective optimization problems in dynamic environments and adopts the non-dominated sorting genetic algorithm version 2 (NSGA2).
Abstract: This paper investigates the use of evolutionary multi-objective optimization methods (EMOs) for solving single-objective optimization problems in dynamic environments. A number of authors proposed the use of EMOs for maintaining diversity in a single objective optimization task, where they transform the single objective optimization problem into a multi-objective optimization problem by adding an artificial objective function. We extend this work by looking at the dynamic single objective task and examine a number of different possibilities for the artificial objective function. We adopt the non-dominated sorting genetic algorithm version 2 (NSGA2). The results show that the resultant formulations are promising and competitive with other methods for handling dynamic environments.

156 citations


Journal ArticleDOI
TL;DR: Results based on real examination scheduling problems, including standard benchmark data, show that the final implementation is able to compete effectively with the best-known solution approaches to the problem.
Abstract: Ant colony optimization is an evolutionary search procedure based on the way that ant colonies cooperate in locating shortest routes to food sources. Early implementations focussed on the travelling salesman and other routing problems but it is now being applied to an increasingly diverse range of combinatorial optimization problems. This paper is concerned with its application to the examination scheduling problem. It builds on an existing implementation for the graph colouring problem to produce clash-free timetables and goes on to consider the introduction of a number of additional practical constraints and objectives. A number of enhancements and modifications to the original algorithm are introduced and evaluated. Results based on real examination scheduling problems, including standard benchmark data (the Carter data set), show that the final implementation is able to compete effectively with the best-known solution approaches to the problem.

102 citations


Journal ArticleDOI
TL;DR: In this article, the authors used Extremal Optimization (EO) to approximate ground states of the mean-field spin glass model and showed that EO can be applied to systems with highly connected variables.
Abstract: Extremal Optimization (EO), a new local search heuristic, is used to approximate ground states of the mean-field spin glass model introduced by Sherrington and Kirkpatrick. The implementation extends the applicability of EO to systems with highly connected variables. Approximate ground states of sufficient accuracy and with statistical significance are obtained for systems with more than N=1000 variables using ±J bonds. The data reproduce the well-known Parisi solution for the average ground state energy of the model to about 0.01%, providing a high degree of confidence in the heuristic. The results support, to less than 1% accuracy, rational values of ω=2/3 for the finite-size correction exponent and of ρ=3/4 for the fluctuation exponent of the ground state energies, neither of which has yet been obtained analytically. The probability density function for ground state energies is highly skewed and identical within numerical error to the one found for Gaussian bonds. But comparison with infinite-range models of finite connectivity shows that the skewness is connectivity-dependent.

102 citations


Proceedings Article
01 Jan 2005
TL;DR: A hybrid ACO algorithm, similar to the one independently developed in [16], which uses a genetic algorithm in the early stages to ‘breed’ a population of ants possessing near optimal behavioural parameter settings for a given problem.
Abstract: Ant Colony Optimization (ACO) is a metaheuristic introduced by Dorigo et al. [9] which uses ideas from nature to find solutions to instances of the Travelling Salesman Problem (TSP) and other combinatorial optimisation problems. In this paper we analyse the parameter settings of the ACO algorithm. These determine the behaviour of each ant and are critical for fast convergence to near optimal solutions of a given problem instance. We classify TSP instances using three measures of complexity and uniformity. We describe experimental work that attempts to correlate ‘types’ of TSP problems with parameter settings for fast convergence. We found these optimal parameter settings to be highly problem-specific and dependent on the required accuracy of the solution. This inspired us to explore techniques for automatically learning the optimal parameters for a given TSP instance. We devised and implemented a hybrid ACO algorithm, similar to the one independently developed in [16], which uses a genetic algorithm in the early stages to ‘breed’ a population of ants possessing near optimal behavioural parameter settings for a given problem. This hybrid algorithm converges rapidly for a wide range of problems when given a population of ants with diverse behavioural parameter settings.

102 citations


Journal ArticleDOI
TL;DR: This paper reviews the theory and applications of ant algorithms, new methods of discrete optimization based on the simulation of a self-organized colony of biological ants, which are especially efficient for online optimization of processes in distributed nonstationary systems.
Abstract: This paper reviews the theory and applications of ant algorithms, new methods of discrete optimization based on the simulation of a self-organized colony of biological ants. The colony can be regarded as a multi-agent system in which each agent functions independently by simple rules. In contrast to the nearly primitive behavior of the agents, the behavior of the whole system turns out to be remarkably sensible. Ant algorithms have been extensively studied by European researchers since the mid-1990s. These algorithms have been successfully applied to many complex combinatorial optimization problems, such as the traveling salesman problem, the vehicle routing problem, the graph coloring problem, the quadratic assignment problem, the network-traffic optimization problem, the job-shop scheduling problem, etc. Ant algorithms are especially efficient for online optimization of processes in distributed nonstationary systems (for example, telecommunication network routing).

85 citations


Proceedings ArticleDOI
12 Dec 2005
TL;DR: This paper introduces an algorithm that makes use of two main concepts, particle swarm optimization and fitness sharing, to tackle multi-objective optimization problems.
Abstract: The particle swarm optimization algorithm has been shown to be a competitive heuristic for solving multi-objective optimization problems. Fitness sharing concepts have also been shown to be significant when used by multi-objective optimization methods. In this paper we introduce an algorithm that makes use of these two main concepts, particle swarm optimization and fitness sharing, to tackle multi-objective optimization problems.
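The single-objective particle swarm update that such multi-objective variants build on can be sketched as follows. This is a minimal global-best PSO on a toy sphere function; the inertia and acceleration coefficients are common textbook values, not the authors' settings, and the paper's fitness-sharing mechanism is omitted.

```python
import random

def pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal global-best PSO: each particle is pulled toward its own
    best position (pbest) and the swarm's best position (gbest)."""
    rng = random.Random(seed)
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in x]                    # personal best positions
    pval = [f(p) for p in x]
    g = min(range(n), key=lambda i: pval[i])     # index of global best
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            fi = f(x[i])
            if fi < pval[i]:                     # update bests on improvement
                pbest[i], pval[i] = x[i][:], fi
                if fi < gval:
                    gbest, gval = x[i][:], fi
    return gbest, gval

# Minimize the 2-D sphere function; the optimum is 0 at the origin.
best, val = pso(lambda p: sum(q * q for q in p))
print(round(val, 6))
```

A multi-objective extension keeps an archive of non-dominated solutions instead of a single gbest, which is where fitness sharing comes in: it spreads selection pressure across the archive so the swarm does not collapse onto one region of the Pareto front.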

77 citations


Proceedings ArticleDOI
12 Dec 2005
TL;DR: The algorithm described herein is tested on a suite of 10D and 30D reference optimization problems collected for the special session on real-parameter optimization of the IEEE Congress on Evolutionary Computation 2005.
Abstract: An evolutionary algorithm for the optimization of a function with real parameters is described in this paper. It uses cooperative co-evolution to breed and reproduce successful mutation steps. The algorithm described herein is then tested on a suite of 10D and 30D reference optimization problems collected for the special session on real-parameter optimization of the IEEE Congress on Evolutionary Computation 2005. The results are of mixed quality (as expected), but reveal several interesting aspects of this simple algorithm.

61 citations


Proceedings ArticleDOI
10 Oct 2005
TL;DR: The proposed algorithm is based on Einstein's general theory of relativity, using the concept of a gravitational field to search for the globally optimal solution to a given problem.
Abstract: A new concept for the optimization of nonlinear functions is proposed. Most of the proposed evolutionary optimization algorithms, such as particle swarm optimization and ant colony optimization, search the solution space by sharing known knowledge. The proposed algorithm is based on Einstein's general theory of relativity, in that it utilizes the concept of a gravitational field to search for the globally optimal solution to a given problem. In this paper, the detailed procedure of the proposed algorithm is introduced. The proposed algorithm has been tested on an application that is known to be difficult, with promising and exciting results.

60 citations


Journal ArticleDOI
TL;DR: A general-purpose heuristic algorithm for finding high-quality solutions to continuous optimization problems that can be considered as an extension of extremal optimization and consists of two components: one which is responsible for global searching and the other which is responsible for local searching.
Abstract: We explore a general-purpose heuristic algorithm for finding high-quality solutions to continuous optimization problems. The method, called continuous extremal optimization (CEO), can be considered as an extension of extremal optimization and consists of two components, one which is responsible for global searching and the other which is responsible for local searching. The CEO's performance proves competitive with some more elaborate stochastic optimization procedures such as simulated annealing, genetic algorithms, and so on. We demonstrate it on a well-known continuous optimization problem: the Lennard-Jones cluster optimization problem.

Journal ArticleDOI
TL;DR: In this paper, a non-thermal local search, called Extremal Optimization (EO), was used to analyze a short-range spin glass and to determine which features of the landscape are algorithm dependent and which are inherently geometrical.
Abstract: Using a non-thermal local search, called Extremal Optimization (EO), in conjunction with a recently developed scheme for classifying the valley structure of complex systems, we analyze a short-range spin glass. In comparison with earlier studies using a thermal algorithm with detailed balance, we determine which features of the landscape are algorithm dependent and which are inherently geometrical. Apparently a characteristic of any local search in complex energy landscapes, the time series of successive energy records found by EO is also characterized approximately by a Poisson statistic with logarithmic time arguments. Differences in the results provide additional insights into the performance of EO. In contrast with a thermal search, the extremal search visits dramatically higher energies while returning to more widely separated low-energy configurations. Two important properties of the energy landscape are independent of either algorithm: first, to find lower energy records, progressively higher energy barriers need to be overcome. Second, the Hamming distance between two consecutive low-energy records is linearly related to the height of the intervening barrier.

Book ChapterDOI
16 Sep 2005
TL;DR: This chapter focuses on the Ant Colony Optimization (ACO) metaheuristic for solving combinatorial optimization problems, inspired by the foraging behaviour of ants.
Abstract: Ant colony algorithms are computational methods for solving problems that are inspired by the behaviour of real ant colonies. One particularly interesting aspect of the behaviour of ant colonies is that relatively simple individuals perform complicated tasks. Examples of such collective behaviour are: i) the foraging behaviour that guides ants on short paths to their food sources, ii) the collective transport of food, where a group of ants can transport food particles that are heavier than the sum of what all members of the group can transport individually, and iii) the brood sorting behaviour of ants to place larvae and eggs into brood chambers of the nest that have the best environmental conditions. In this chapter we concentrate on the Ant Colony Optimization (ACO) metaheuristic for solving combinatorial optimization problems. ACO is inspired by the foraging behaviour of ants. An essential aspect thereby is the indirect communication of the ants via pheromones, i.e., chemical substances which are released into the environment and influence the behaviour or the development of other individuals of the same species. In a famous biological experiment called the double bridge experiment ([9, 23]) it was shown how trail pheromone leads ants along short paths to their food sources. In this experiment a double bridge with two branches of different lengths connected a nest of the Argentine ant with a food source. It was found that after a few minutes nearly all ants use the shorter branch. This is interesting because Argentine ants cannot see very well. The explanation of this behaviour has to do with the fact that the ants lay pheromone along their path. It is likely that ants which randomly chose the shorter branch arrive earlier at the food source. When they go back to the nest they smell some pheromone on the shorter branch.
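The pheromone mechanism described in this chapter translates directly into the basic Ant System rules: a probabilistic transition proportional to pheromone^alpha times visibility^beta, followed by evaporation and deposit. The sketch below applies those standard rules to a toy TSP; the parameter values and the test instance are illustrative, not taken from the chapter.

```python
import math
import random

def aco_tsp(dist, ants=10, iters=100, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Basic Ant System sketch: each ant picks the next city with probability
    proportional to tau^alpha * (1/d)^beta; afterwards all trails evaporate
    by a factor (1 - rho) and each ant deposits 1/tour_length on its edges."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]       # pheromone on each edge
    best_tour, best_len = None, float("inf")

    def tour_len(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    for _ in range(iters):
        tours = []
        for _ in range(ants):
            cur = rng.randrange(n)
            tour, unvisited = [cur], set(range(n)) - {cur}
            while unvisited:
                weights = [(j, tau[cur][j] ** alpha * (1.0 / dist[cur][j]) ** beta)
                           for j in unvisited]
                r, acc = rng.random() * sum(w for _, w in weights), 0.0
                for j, w in weights:          # roulette-wheel selection
                    acc += w
                    if acc >= r:
                        break
                tour.append(j)
                unvisited.remove(j)
                cur = j
            tours.append(tour)
        for row in tau:                       # evaporation
            for j in range(n):
                row[j] *= 1.0 - rho
        for t in tours:                       # pheromone deposit
            length = tour_len(t)
            if length < best_len:
                best_tour, best_len = t[:], length
            for i in range(n):
                a, b = t[i], t[(i + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_tour, best_len

# Four cities on a unit square: the optimal tour is the perimeter, length 4.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
tour, length = aco_tsp(dist)
print(length)
```

This is the double-bridge logic in code: short edges are completed sooner, so they accumulate pheromone faster than evaporation removes it, and later ants follow them with increasing probability.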

Journal ArticleDOI
TL;DR: It is shown that the conventional transition rule used in ant algorithms is responsible for the stagnation phenomenon, and a new transition rule is developed as a remedy for the premature convergence problem; the new rule is shown to overcome stagnation, leading to high-quality solutions.
Abstract: Ant algorithms are now being used more and more to solve optimization problems other than those for which they were originally developed. The method has been shown to outperform other general-purpose optimization algorithms, including genetic algorithms, when applied to some benchmark combinatorial optimization problems. Application of these methods to real-world engineering problems should, however, await further improvements regarding the practicality of their application to these problems. The sensitivity analysis required to determine the controlling parameters of the ant method is one of the main shortcomings of ant algorithms for practical use. Premature convergence of the method, often encountered with an elitist strategy of pheromone updating, is another problem to be addressed before any industrial use of the method is expected. It is shown in this article that the conventional transition rule used in ant algorithms is responsible for the stagnation phenomenon. A new transition rule is, therefore, developed as a remedy and is shown to overcome the stagnation problem, leading to high-quality solutions.

Proceedings ArticleDOI
25 Sep 2005
TL;DR: An algorithm based on ant colony system for solving traveling salesman problem is proposed, which introduces an inner loop aiming to update the pheromone trails and generates improved tours.
Abstract: An algorithm based on the ant colony system for solving the traveling salesman problem is proposed. The new algorithm introduces into the ant colony system an inner loop that updates the pheromone trails. The update increases the pheromone on the trail followed by the ants and therefore generates improved tours.

01 Jan 2005
TL;DR: ACO/F-Race is introduced, an algorithm for tackling combinatorial optimization problems under uncertainty based on ant colony optimization and on F-Race, and some experimental results on the probabilistic traveling salesman problem are presented.
Abstract: The paper introduces ACO/F-Race, an algorithm for tackling combinatorial optimization problems under uncertainty. The algorithm is based on ant colony optimization and on F-Race. The latter is a general method for the comparison of a number of candidates and for the selection of the best one according to a given criterion. Some experimental results on the probabilistic traveling salesman problem are presented.

Book ChapterDOI
01 Jan 2005
TL;DR: A new hybrid model, based on Particle Swarm Optimization, Genetic Algorithms and Fast Local Search, is presented for the symmetric blind traveling salesman problem, with excellent results.
Abstract: This work presents a new hybrid model, based on Particle Swarm Optimization, Genetic Algorithms and Fast Local Search, for the symmetric blind traveling salesman problem. A detailed description of the model is provided. The implemented system was tested with instances from 76 to 2103 cities. For instances of up to 439 cities, results were, on average, less than or around 1% in excess of the known optima. Over all instances, results were on average 2.1498% in excess. These excellent results encourage further research on and improvement of the hybrid model.

Proceedings ArticleDOI
07 Nov 2005
TL;DR: This paper proposes a new algorithm based on the inver-over operator for combinatorial optimization problems, which shows great efficiency in solving TSP instances with fewer than 300 cities.
Abstract: In this paper we propose a new algorithm, based on the inver-over operator, for combinatorial optimization problems. Inver-over is based on simple inversion; however, knowledge taken from other individuals in the population influences its action. In the new algorithm we use several new strategies, including a selection operator, a replace operator and a new control strategy, which prove very effective in accelerating convergence. In experiments, the new algorithm shows great efficiency in solving TSP instances with fewer than 300 cities.

Proceedings ArticleDOI
25 Jun 2005
TL;DR: The purpose of the optimization algorithm changes from finding an optimal solution to being able to continuously track the movement of the optimum over time to seriously challenges traditional EAs since they cannot adapt well to the changing environment once converged.
Abstract: Evolutionary algorithms (EAs) have been widely applied to solve stationary optimization problems. However, many real-world optimization problems are actually dynamic. For example, new jobs are to be added to the schedule, the quality of the raw material may be changing, and new orders have to be included into the vehicle routing problem etc. In such cases, when the problem changes over the course of the optimization, the purpose of the optimization algorithm changes from finding an optimal solution to being able to continuously track the movement of the optimum over time. This seriously challenges traditional EAs since they cannot adapt well to the changing environment once converged.

Proceedings ArticleDOI
01 Jan 2005
TL;DR: A new effective optimization algorithm suitably developed for electromagnetic applications, called genetical swarm optimization (GSO), is presented; it is essentially a population-based heuristic search technique that can be used to solve combinatorial optimization problems.
Abstract: In this paper a new effective optimization algorithm suitably developed for electromagnetic applications, called genetical swarm optimization (GSO), is presented. It is a hybrid algorithm developed to combine in the most effective way the properties of two of the most popular evolutionary optimization approaches now in use for the optimization of electromagnetic structures: particle swarm optimization (PSO) and genetic algorithms (GA). Like PSO and GA, the algorithm is essentially a population-based heuristic search technique, which can be used to solve combinatorial optimization problems; it is modeled on the concepts of natural selection and evolution (GA) but also on cultural and social rules derived from the analysis of swarm intelligence and from the interaction among particles (PSO). The algorithm is tested against other optimization techniques on two typical problems: a purely mathematical one, the search for the global maximum of a multi-dimensional sine function, and an electromagnetic application, the optimization of a linear array.

Book ChapterDOI
27 Aug 2005
TL;DR: A hybrid algorithm is proposed, which can combine the merits of these two algorithms by running them alternately and is superior to both tabu search and ant colony optimization individually.
Abstract: Many algorithms that solve optimization problems are being developed and used. However, large and complex optimization problems still exist, and it is often difficult to obtain the desired results with any one of these algorithms alone. This paper applies the tabu search and ant colony optimization methods to the container load sequencing problem. We also propose a hybrid algorithm, which combines the merits of these two algorithms by running them alternately. Experiments have shown that the proposed hybrid algorithm is superior to both tabu search and ant colony optimization individually.

16 Jun 2005
TL;DR: In this paper, the ant colony optimization algorithm (ACO) is presented and tested on a few benchmark examples; the results compare well with those of other well-known heuristic approaches.
Abstract: Over the last decade, evolutionary and meta-heuristic algorithms have been extensively used as search and optimization tools in various problem domains, including science, commerce, and engineering. Their broad applicability, ease of use, and global perspective may be considered the primary reasons for their success. Ant colony foraging behavior may also be considered a typical swarm-based approach to optimization. In this paper, the ant colony optimization algorithm (ACO) is presented and tested on a few benchmark examples. To test the performance of the algorithm, three benchmark constrained and/or unconstrained real-valued mathematical models were selected. The first example is Ackley's function, a continuous and multimodal test function obtained by modulating an exponential function with a cosine wave of moderate amplitude. The algorithm reached the global optimum with reasonable CPU time. To show the efficiency of the algorithm in constraint handling, the model was applied to a two-variable, two-constraint, highly nonlinear problem. It was shown that the performance of the model is quite comparable with the results of a well-developed GA. The third example is a real-world water resources operation optimization problem. The developed model was applied to a single reservoir with 60 periods, with the objective of minimizing the total squared deviation from the target demand. The results obtained are quite promising and compare well with the results of some other well-known heuristic approaches.

Proceedings ArticleDOI
18 Apr 2005
TL;DR: Numerical calculations show that population based methods (genetic algorithm and particle swarm optimization) work better than local search algorithms (simulated annealing and tabu search), and the main idea is to compare the efficiency of these algorithms by using examples.
Abstract: In this study four heuristic optimization algorithms are used as solution methods for a discrete space frame sizing optimization problem. Suitable profiles for each beam have to be selected from a given standard selection, and the mass of the frame is minimized subject to stress, displacement, buckling and frequency constraints. The selected heuristic algorithms are simulated annealing, tabu search, a genetic algorithm and particle swarm optimization. The main idea is to compare the efficiency of these algorithms on example problems. The criterion for efficiency is the improvement of the objective function as a function of the number of FEM analyses needed. Numerical calculations show that the population-based methods (genetic algorithm and particle swarm optimization) work better than the local search algorithms (simulated annealing and tabu search).

Book ChapterDOI
01 Jan 2005
TL;DR: A communication scheme for a message-passing environment, tested on the well-known VRPTW optimization problem, allows speed-up without worsening solution quality; for one of Solomon's benchmark tests a new best solution was found.
Abstract: It is known that concurrent computing can be applied to heuristic methods (e.g. simulated annealing) for combinatorial optimization to shorten computation time. This paper presents a communication scheme for a message-passing environment, tested on the well-known VRPTW optimization problem. Application of the scheme allows speed-up without worsening the quality of solutions; for one of Solomon's benchmark tests the new best solution was found.

Book ChapterDOI
26 Oct 2005
TL;DR: This study addresses the PMSAT solution with a co-evolutionary stochastic local search algorithm guided by estimated backbone variables of the problem; results suggest that this approach can outperform state-of-the-art PMSAT techniques.
Abstract: The concept of backbone variables in the satisfiability problem has recently been introduced as a problem structure property and shown to influence its complexity. This suggests that the performance of stochastic local search algorithms for satisfiability problems can be improved by using backbone information. The Partial MAX-SAT Problem (PMSAT) is a variant of MAX-SAT which consists of two CNF formulas defined over the same variable set. Its solution must satisfy all clauses of the first formula and as many clauses of the second formula as possible. This study is concerned with solving PMSAT with a co-evolutionary stochastic local search algorithm guided by estimated backbone variables of the problem. The effectiveness of our algorithm is examined by computational experiments. Reported results for a number of PMSAT instances suggest that this approach can outperform state-of-the-art PMSAT techniques.

16 Jun 2005
TL;DR: A generic software framework that is able to handle different types of combinatorial optimization problems by coordinating so-called OptLets that work on a set of solutions to a problem and provides a high degree of self-organization.
Abstract: Meta-heuristics are an effective paradigm for solving large-scale combinatorial optimization problems. However, the development of such algorithms is often very time-consuming as they have to be designed for a concrete problem class with little or no opportunity for reuse. In this paper, we present a generic software framework that is able to handle different types of combinatorial optimization problems by coordinating so-called OptLets that work on a set of solutions to a problem. The framework provides a high degree of self-organization and offers a generic and concise interface to reduce the adaptation effort for new problems as well as to integrate with external systems. The performance of the OptLets framework is demonstrated by solving the well-known Traveling Salesman Problem.

Journal Article
TL;DR: Comparative tests showed that this extremal optimization procedure for MAXSAT significantly improves on previous results obtained on the same benchmark with other modern local search methods such as WSAT, simulated annealing and tabu search.
Abstract: The MAXimum propositional SATisfiability problem (MAXSAT) is a well-known NP-hard optimization problem with many theoretical and practical applications in artificial intelligence and mathematical logic. Heuristic local search algorithms are widely recognized as the most effective approaches to solving it. However, their performance depends both on their complexity and on their tuning parameters, which are controlled experimentally, and tuning remains a difficult task. Extremal Optimization (EO) is one of the simplest heuristic methods, with only one free parameter, and has proved competitive with more elaborate general-purpose methods on graph partitioning and coloring. It is inspired by the dynamics of physical systems with emergent complexity and their ability to self-organize to reach an optimally adapted state. In this paper, we propose an extremal optimization procedure for MAXSAT and assess its effectiveness by computational experiments on a benchmark of random instances. Comparative tests showed that this procedure significantly improves on previous results obtained on the same benchmark with other modern local search methods such as WSAT, simulated annealing and Tabu Search (TS).

01 Jan 2005
TL;DR: It is shown how to convert natural ant behaviour into algorithms able to escape from local minima and find the global minimum of constrained combinatorial problems.
Abstract: The ant colony optimization metaheuristic (ACO) represents a new class of algorithms particularly suited to solving real-world combinatorial optimization problems. ACO algorithms, first published in 1991 by M. Dorigo and his co-workers, have been applied, particularly from 1999 onwards, to several kinds of optimization problems, such as the traveling salesman problem, the quadratic assignment problem, vehicle routing, sequential ordering, scheduling, graph coloring, the management of communication networks, and so on. The ant colony optimization metaheuristic takes inspiration from studies of the foraging behaviour of real ant colonies. The main characteristic of such colonies is that individuals have no global knowledge of problem solving but communicate indirectly, depositing on the ground a chemical substance called pheromone, which probabilistically influences the choice of subsequent ants, which tend to follow paths where the pheromone concentration is higher. Such behaviour, called stigmergy, is the basic mechanism that controls ant activity and permits them to find the shortest path connecting their nest to a food source. In this paper it is shown how to convert natural ant behaviour into algorithms able to escape from local minima and find the global minimum of constrained combinatorial problems. Some examples on plane trusses are also presented.

Journal Article
TL;DR: The mathematical model of the assignment problem is established and the problem is solved by a mutated ant colony algorithm; experiments show that the best solution can be found rapidly.
Abstract: The assignment problem, a kind of combinatorial optimization problem, has significant practical importance. The ant system algorithm is an evolutionary algorithm that is efficient in solving combinatorial optimization problems. In this paper, we establish the mathematical model of the assignment problem and solve it with a mutated ant colony algorithm. Experiments show that, using this algorithm, the best solution can be found rapidly.

Journal ArticleDOI
TL;DR: It is found that the small-world structure stabilizes the system and it is more realistic to augment the Bak–Sneppen model by the concepts of extremal dynamics, multiobjective optimization and coherent noise.
Abstract: Small-world networks (SWN) are relevant to biological systems. We study the dynamics of the Bak–Sneppen (BS) model on a small-world network, including the concepts of extremal dynamics, multiobjective optimization and coherent noise. We find that the small-world structure stabilizes the system. Also, it is more realistic to augment the Bak–Sneppen model by these concepts.