IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 14, NO. 3, JUNE 2010 381
Chemical-Reaction-Inspired Metaheuristic
for Optimization
Albert Y.S. Lam, Student Member, IEEE, and Victor O.K. Li, Fellow, IEEE
Abstract—We encounter optimization problems in our daily
lives and in various research domains. Some of them are so hard
that we can, at best, approximate the best solutions with (meta-)
heuristic methods. However, the huge number of optimization
problems and the small number of generally acknowledged
methods mean that more metaheuristics are needed to fill the
gap. We propose a new metaheuristic, called chemical reaction
optimization (CRO), to solve optimization problems. It mimics
the interactions of molecules in a chemical reaction to reach
a low energy stable state. We tested the performance of CRO
with three nondeterministic polynomial-time hard combinatorial
optimization problems. Two of them were traditional benchmark
problems and the other was a real-world problem. Simulation
results showed that CRO is very competitive with the few existing
successful metaheuristics, having outperformed them in some
cases, and CRO achieved the best performance in the real-
world problem. Moreover, with the No-Free-Lunch theorem,
CRO must have equal performance as the others on average,
but it can outperform all other metaheuristics when matched to
the right problem type. Therefore, it provides a new approach
for solving optimization problems. CRO may potentially solve
those problems which may not be solvable with the few generally
acknowledged approaches.
Index Terms—Chemical reaction, metaheuristics, nature-
inspired algorithms, optimization methods.
I. Introduction
OPTIMIZATION is prevalent in almost every field of science and engineering, ranging from profit maximization
in economics to signal interference minimization in electrical
engineering. In our daily lives, we also encounter various
optimization problems, such as finding the quickest route from
one place to another, at minimum cost, and minimizing the
construction costs of building facilities in a city, while, at
the same time, avoiding congestion of human flow among
such facilities. Optimization refers to the study of problems in
which one seeks to optimize (either minimize or maximize) the
result by systematically choosing the values of the variables in
feasible regions. We normally define an optimization problem
with several components: an objective function f, a vector of variables X = {x₁, x₂, ..., xₙ}, and a vector of constraints C = {c₁, c₂, ..., cₘ} which limit the values assigned to X, where n and m correspond to the problem dimension and the total number of constraints, respectively. We define a solution s as the set of values assigned to X confined by C, and the solution space S as the set of all possible solutions. For minimization problems, our goal is to find the minimum solution s∗ ∈ S where f(s∗) ≤ f(s) for all s ∈ S. We can write

    min_{X ∈ ℝⁿ} f(X)  subject to  c_i(X) = 0, i ∈ E
                                   c_i(X) ≤ 0, i ∈ I        (1)

where ℝ, E, and I represent the real number set, the index set for equalities, and the index set for inequalities, respectively.

Manuscript received August 8, 2008; revised February 23, 2009, June 4, 2009, and September 3, 2009. First version published December 15, 2009; current version published May 28, 2010. This work was supported in part by the Strategic Research Theme of Information Technology of The University of Hong Kong.
The authors are with the Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong (e-mail: ayslam@eee.hku.hk; vli@eee.hku.hk).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TEVC.2009.2033580
Equation (1) represents the generic form for every type of
optimization. Without loss of generality, we consider minimization problems throughout this paper.¹ We search the solution space and pick out solution points sequentially, evaluating the objective function value of each. An optimization method (i.e., algorithm) tells us which point should be picked next from the current solution. We can obtain one, or multiple, points at a time, depending on how the algorithm operates.
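As an illustration of the generic form in (1), the sketch below evaluates candidate solutions against equality and inequality constraint sets and picks the feasible minimum by brute force. The quadratic objective and the linear constraints are hypothetical examples chosen for illustration, not taken from the paper.

```python
# Minimal sketch of the generic minimization form (1):
# minimize f(X) subject to c_i(X) = 0 (i in E) and c_i(X) <= 0 (i in I).
# The objective and constraints below are illustrative assumptions.

def f(x):
    # Hypothetical objective: sum of squares.
    return sum(v * v for v in x)

E = [lambda x: x[0] + x[1] - 1.0]  # equality: x0 + x1 = 1
I = [lambda x: -x[0]]              # inequality: x0 >= 0

def is_feasible(x, tol=1e-9):
    # A solution s assigns values to X confined by the constraints C.
    return all(abs(c(x)) <= tol for c in E) and all(c(x) <= tol for c in I)

# Brute-force scan of a small discretized solution space S.
candidates = [(a / 10, 1 - a / 10) for a in range(11)]
best = min((x for x in candidates if is_feasible(x)), key=f)  # -> (0.5, 0.5)
```

The brute-force scan works only because this toy solution space has 11 points; the exponential growth of S for NP-hard problems is exactly what rules this approach out in practice.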
We can formulate many problems into this generic form,
i.e., (1), and then apply the existing methods to obtain
the optimal solutions, with the help of a computer. However, in computational complexity theory [1], there is a class of problems, namely, nondeterministic polynomial-time hard (NP-hard) problems, for which no known algorithm finds the optimal solutions in polynomial time, unless P = NP. In other words, for such problems, the computational effort required to obtain the best solutions grows exponentially with the problem size. They are normally not solvable by any optimization algorithm in a reasonable amount of time, or we cannot guarantee that the computed results are of high quality. Most
of the time, the formulated problems are of huge dimensions and examining every possible solution (i.e., the brute-force method) becomes impossible. It may take several years of CPU time to obtain the solutions, even with the most powerful supercomputer. We often cannot tolerate such long computational times and must sacrifice optimality for near-optimal solutions when processing time is limited. Thus, we always adopt approximate algorithms, which can compute “good” solutions efficiently, to tackle NP-hard problems.
¹ Maximization problems work similarly, by simply adding a negative sign to f.
1089-778X/$26.00 © 2009 IEEE

Fig. 1. Potential energy surface of a chemical reactive system.
In quantum mechanics and statistical mechanics, we can
model chemical reactions and molecular interactions with
potential energy surface (PES) (Fig. 1), which is subject to
the Born–Oppenheimer separation of nuclear and electronic
motion [2]. Fig. 1 depicts the potential energy (PE) changes
of the atom arrangements in a chemical system. The z-axis
represents the PE while the x and y-axes capture the molecular
structures of the chemical substances, which correspond to
the atomic positions and every possible orientation of all
the involved atomic nuclei. PES can be a two, three, or
multi-dimensional (hyper)surface, depending on how com-
plicated the chemical system is. In any chemical reaction,
the initial species (i.e., reactants) change to the products by
the formation and destruction of chemical bonds. Before the
formation of products, the reactants normally change to a
series of intermediate species. These small chemical changes
are called elementary steps. During each step, the chemicals
are in the transition states. Fig. 1 shows a simple example
of a chemical reaction involving three elementary steps. The
solid line gives the reaction pathway from the reactants
to products, via several transition states and intermediate
species.
There is a rule of thumb for this natural tendency—“Every reacting system seeks to achieve a minimum of free energy”² [3]. That means chemical reactions tend to release energy, and
thus, products generally have less energy than reactants. In
terms of stability, the lower the energy of the substance, the
more stable it is. Therefore, products are always more stable
than reactants.
It is not difficult to discover the correspondence between
optimization and chemical reaction. Both of them aim to seek
the global minimum (but with respect to different objectives)
and the process evolves in a stepwise fashion. With this discov-
ery, we develop a chemical-reaction-inspired metaheuristic for
solving optimization problems by mimicking what happens to
molecules in chemical reactions. We name it chemical reaction
² Free energy is also known as Gibbs free energy, which indicates the amount
of energy needed for a system to do useful work at constant temperature and
pressure. In a chemical reaction, the reactants have higher free energy, and
thus, they can do useful work (i.e., react with the others). At equilibrium (i.e.,
final stage of the reaction), the products have a minimum of free energy so
they can no longer react.
optimization (CRO). It is a multidisciplinary design which
loosely couples computation with chemistry.
The rest of this paper is organized as follows. Section II
briefly describes some basic concepts of optimization and
related work. In Section III, we give the design framework of
CRO and show how the concept of chemical reaction is im-
plemented in our algorithm. We show the workability of CRO
with evaluations using computer simulations in Section IV.
We conclude this paper and give some potential future work
in Section V.
II. Background
Metaheuristics are collections of ideas aiming to solve
general computational problems. A metaheuristic is usually in
the form of a procedure framework which instructs computers
how to search for solutions in the solution space. Each
metaheuristic consists of several building blocks and control
parameters for fine tuning. We can replace these components
and/or change the parameter values in order to suit our
purposes. From this, we can see that metaheuristics offer a high degree of flexibility. Most metaheuristics involve
randomization in the calculation, and thus, the outputs may
vary in different runs of the computation. Since exact optimal
solutions are not guaranteed, they belong to the group of
approximate algorithms. We adopt them to solve NP-hard
optimization problems because they can locate good solutions
efficiently most of the time.
This paper is motivated by other efforts to apply natural
phenomena to metaheuristics. Among the most famous ones
are simulated annealing (SA) [4], genetic algorithm (GA)
[5]–[7], and ant colony optimization (ACO) [8], [9]. Other
proposed metaheuristics include particle swarm optimization
(inspired by the social behavior of bird flocking) [10], bees
algorithm (inspired by the behavior of honey bees in collecting
nectar) [11], harmony algorithm (inspired by the improvisation
process of musicians) [12], etc. There are non-nature-inspired
metaheuristics also, like tabu search (TS) [13]. We will briefly
introduce SA, GA, ACO, and TS in the following sections.
A. Simulated Annealing
SA is inspired by annealing in metallurgy. Annealing is the
physical process of increasing the crystal size of a material
and reducing the defects through a controllable cooling pro-
cedure. SA picks a solution in each iteration. By employing
the Metropolis algorithm [14] from statistical mechanics, SA
always allows downhill movements, while uphill movements
are allowed with a probability whose distribution is controlled
by a so-called temperature parameter. Therefore, it does not
always get stuck at local minima. As the temperature drops, the
ability to jump out of local minima decreases and the system
converges to the final solution.
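The acceptance rule just described can be sketched in a few lines. The geometric cooling schedule, parameter values, and the 1-D test function below are illustrative assumptions, not the formulation of [4].

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=10.0, alpha=0.95, steps=2000, seed=0):
    """Minimal SA sketch: downhill moves are always accepted; uphill
    moves are accepted with probability exp(-delta/T) (the Metropolis
    criterion), where the temperature T decays each iteration."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = f(y)
        delta = fy - fx
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha  # cooling: the ability to escape local minima decreases
    return best, fbest

# Usage: minimize a hypothetical 1-D function with local minima.
f = lambda x: x * x + 10 * math.cos(x)
best, fbest = simulated_annealing(f, 5.0, lambda x, rng: x + rng.uniform(-1, 1))
```

While the temperature is high the walk moves almost freely; as t shrinks, only improving moves survive and the state settles into a minimum.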
B. Genetic Algorithm
Holland [5] created GAs based on the idea of natural
selection, which is the phenomenon that organisms with
favorable characteristics have higher probability to survive

and reproduce than those with unfavorable traits. GA is a
population-based metaheuristic and simulates this biologi-
cal process through producing generations of chromosomes,
which represent possible solutions of the optimization prob-
lems. Through inheritance, selection, and crossover, those
chromosomes which are favored by the objective functions,
and which satisfy the constraints, can survive and reproduce
the next generation of chromosomes with higher quality. It can
escape from local optima through mutation.
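The selection–crossover–mutation loop above can be sketched over bit-string chromosomes. Tournament selection, one-point crossover, and the parameter values are common illustrative choices rather than Holland's original formulation.

```python
import random

def genetic_algorithm(fitness, length, pop_size=30, gens=60, p_mut=0.05, seed=1):
    """Minimal GA sketch: tournament selection (size 2), one-point
    crossover, and per-bit flip mutation. Maximizes `fitness`."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        def select():
            a, b = rng.sample(pop, 2)            # tournament of size 2
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, length)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Usage: OneMax — the fittest chromosome is all ones.
best = genetic_algorithm(sum, length=16)
```

Mutation is what lets the population escape local optima: without the random bit flips, a converged population can never reintroduce a lost allele.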
C. Ant Colony Optimization
ACO is also population-based and mimics the ecological
behavior of ants in finding food. Food paths represent solu-
tions. When ants discover paths to the food locations from their
colony, they lay down a chemical, called pheromone, along the
paths to remind other ants about the food trails. Shorter paths
have more pheromone as more ants shuttle around. It employs
the effect of evaporation of pheromone to prevent getting stuck
with local optima. We can obtain the best solution by checking
the route with the greatest amount of pheromone.
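The pheromone mechanics described above can be sketched over a fixed set of candidate paths. Reducing path construction to choosing among whole precomputed routes is a deliberate simplification, and all parameter values here are assumptions.

```python
import random

def ant_colony(paths, ants=30, iters=40, rho=0.3, seed=2):
    """Minimal ACO sketch over a dict mapping path name -> length.
    Each ant picks a path with probability proportional to its
    pheromone; shorter paths receive a larger deposit (1/length),
    and all pheromone evaporates by factor rho each iteration."""
    rng = random.Random(seed)
    tau = {p: 1.0 for p in paths}                  # initial pheromone
    for _ in range(iters):
        deposits = {p: 0.0 for p in paths}
        total = sum(tau.values())
        for _ in range(ants):
            r, acc = rng.random() * total, 0.0
            for p in paths:                        # roulette-wheel choice
                acc += tau[p]
                if r <= acc:
                    deposits[p] += 1.0 / paths[p]  # shorter => more deposit
                    break
        for p in paths:
            tau[p] = (1 - rho) * tau[p] + deposits[p]  # evaporation + deposit
    return max(tau, key=tau.get)                   # strongest trail wins

# Usage: three hypothetical routes of lengths 4, 7, and 10.
best = ant_colony({"A": 4.0, "B": 7.0, "C": 10.0})
```

Evaporation is the escape mechanism: it continually erodes all trails, so a path must keep attracting ants to stay dominant.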
D. Tabu Search
TS was introduced by Glover [13] and is a non-nature-inspired metaheuristic. The core is local search together with
a tabu mechanism. In each iteration, the algorithm searches the
neighborhood of the current solution to get a new one with
an improved functional value. At the same time, it maintains
a tabu list, which contains the solutions obtained in the recent
iterations. The purpose is to prevent looping in the recent
solutions and to diversify the search to an unexplored region
of the search space. Sometimes the tabu mechanism may be
too restrictive and forbid some attractive moves. TS allows
overriding the tabu list if the newly picked solution meets
certain aspiration criteria.
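The neighborhood search, tabu list, and aspiration criterion described above reduce to a short sketch. The integer search space and the ±1 move set in the usage line are hypothetical.

```python
from collections import deque

def tabu_search(f, x0, neighbors, iters=100, tabu_len=5):
    """Minimal TS sketch: move to the best admissible neighbor, keep
    recently visited solutions in a fixed-length tabu list, and allow
    a tabu move only if it beats the best value seen so far
    (the aspiration criterion)."""
    x = x0
    best, fbest = x0, f(x0)
    tabu = deque([x0], maxlen=tabu_len)
    for _ in range(iters):
        candidates = [y for y in neighbors(x)
                      if y not in tabu or f(y) < fbest]  # aspiration override
        if not candidates:
            break
        x = min(candidates, key=f)   # best admissible neighbor
        tabu.append(x)
        if f(x) < fbest:
            best, fbest = x, f(x)
    return best, fbest

# Usage: minimize a hypothetical integer objective, moving +/-1 per step.
f = lambda x: (x - 7) ** 2
best, fbest = tabu_search(f, 0, lambda x: [x - 1, x + 1])
```

Note that the current point may move to a worse neighbor when all improving moves are tabu; only `best` is guaranteed to be monotone, which is what diversifies the search away from recent solutions.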
E. Development of Optimization Algorithms
Generally, we can classify optimization algorithms into
heuristics and metaheuristics. Heuristics are different from
metaheuristics in that the former are tailor-made for specific
problems. They may be able to solve some problems very well
but may give poor solutions to others. On the other hand, (well-designed) metaheuristics can be applied to a broader range of problems and result in good performance. For a specific
problem, a tailor-made heuristic normally performs better than
a metaheuristic, but the heuristic may not be readily available.
If the heuristic does not exist, we may utilize a metaheuristic
to solve the problem. The relationship between heuristics and
metaheuristics is an accuracy-flexibility tradeoff.
When we encounter a new problem with no polynomial-
time algorithm available, we consider metaheuristics. If a
metaheuristic seems to work well on the problem, greedy and
heuristic components may be added in order to “maximize” its
performance. Although the resultant algorithm may have better
performance on this problem, it becomes more heuristic-like
and may not be able to solve other problems well. In this
way, we sacrifice flexibility for accuracy. We can take SA as
an example. SA was first introduced in [4] and then applied to the quadratic assignment problem (QAP) in [15]. Afterward, modified versions with better performance were proposed in [16]–[19], and they became more heuristic-like. Hybrid algorithms combining SA and TS are also possible [20].

Fig. 2. Comparison of performance for different problem types.
F. No-Free-Lunch (NFL) Theorem
We will never be satisfied with the existing metaheuristics,
such as those mentioned above, even though they enjoy great
success in solving many optimization problems, e.g., [21]–
[23]. According to the NFL theorem [24], all metaheuristics
which search for extrema are exactly the same in perfor-
mance when averaged over all possible objective functions.
All metaheuristics perform statistically identically on solving
computational problems. Ho and Pepyne [25] further elaborate
upon the idea, showing that it is theoretically impossible to
have a best general-purpose universal optimization strategy,
and the only way for one strategy to be superior to the others
is when we focus on a particular class of problems only.
With prior knowledge on the problem under consideration, a
metaheuristic can be modified and, thus, may become more
suited to the problem. However, this kind of modification
may not always be successful. For example, it is natural to
apply ACO to routing-related problems [26]. The performance
may be worse with other methods, say SA [27]. One can
deduce that a particular metaheuristic is, by nature, more easily
transformed to suit specific classes of optimization problems
than others.
G. Summary
No one algorithm can always, on average, surpass the oth-
ers in all possible optimization problems. However, superior
performance is still possible in a particular problem. Fig. 2
shows the comparison of performance for different problem
types. Every metaheuristic has equal performance on the
average. One may have superior performance for some types of
problems but becomes inferior on other problems. At point (a),
metaheuristic 1 outperforms metaheuristic 2, but at point (b),
metaheuristic 2 outperforms metaheuristic 1. Hereafter, successful metaheuristics refer to those which are governed by the NFL theorem and which are successful in solving some problems. However, the “spectrum” of problems is so huge
that we cannot find the best match for each of them. Thus, it is
worthwhile to bring forth a new optimization search strategy if
we can prove it works well in some problems. This helps open

Fig. 3. Profile of a molecule. The first column contains the properties of a
molecule used in CRO. The second column shows the corresponding meanings
in the metaheuristic.
up new territory for optimization; new optimization methods
may resolve the “unsolved” problems well. That is the reason
why we propose CRO for solving optimization problems.
III. Design Framework
CRO loosely mimics what happens to molecules in a
chemical reaction system microscopically. It tries to capture
the phenomenon that reactions give products with the lowest
energy on the PES. In the following sections, we will first
describe the major components of the design, i.e., molecules
and elementary reactions. Then we will give the basic idea
of the design. Next, we will explain how to bring the idea
into reality, i.e., how to do computation with the idea in terms
of an algorithm. Finally, we discuss CRO from the angle of
optimization and highlight its unique features.
A. Molecules
The manipulated agents are molecules and each has a profile
containing some properties (Fig. 3). A molecule is composed
of several atoms and characterized by the atom type, bond
length, angle, and torsion. One molecule is distinct from
another when they contain different atoms and/or different
number of atoms. If two molecules have exactly the same
set of atoms but with different molecular attributes (i.e., bond
length, angle, and torsion), we will consider them distinct
molecules. We utilize the term “molecular structure” to sum-
marize all these characteristics and it corresponds to a solution
in the mathematical domain. The presentation of a molecular
structure depends on the problem we are solving, provided
that it can express a feasible solution of the problem.³ For
example, if a problem defines the feasible solution set as the set of n-dimensional positive real numbers ℝⁿ₊, then any vector with n elements whose values are positive real numbers is a valid molecular structure, and no molecular structure can contain numbers with non-positive values.⁴
A change
in molecular structure is tantamount to switching to another
feasible solution. A molecule possesses two kinds of energies,
i.e., PE and kinetic energy (KE). The former quantifies the
molecular structure in terms of energy and we model it as the
³ All molecular structures corresponding to a problem are of the same solution format. They vary only with the values assigned to them and do not represent partial solutions.
⁴ In Section IV, CRO is applied to the quadratic assignment problem. Assume the problem dimension is n. The feasible solution set is the set of permutations of n numbers. Then the molecular structure of a molecule can be a permutation of n numbers.
objective function value when evaluating the corresponding
solution. Let ω and f denote a molecular structure (or a
solution) and an objective function. Then

    PE_ω = f(ω).    (2)

The latter does not have such a direct analogy. We use it as a measure of tolerance for the molecule changing to a less favorable structure (i.e., a solution with higher functional value). For example, a molecule intends to change from ω to ω′. The change is always possible if PE_ω′ ≤ PE_ω. Otherwise, we allow the change only when PE_ω + KE_ω ≥ PE_ω′.⁵ Thus, the higher the KE of the molecule, the higher the possibility it can possess a new molecular structure with higher PE.
Recall that the molecules involved in a reaction attempt to
reach the lowest possible potential state, but blindly seeking
more favorable structures (i.e., a solution with lower functional
value) will result in metastable states (i.e., getting stuck in
local minima). KE allows the molecules to move to a higher
potential state, and hence a chance of having a more favorable
structure in a future change. Therefore, KE of a molecule
symbolizes its ability of escaping from a local minimum.
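The tolerance role of KE reduces to a one-line acceptance test. The numeric values below are illustrative, not taken from the paper.

```python
def change_allowed(pe_old, ke_old, pe_new):
    """Acceptance rule for a structural change of a single molecule:
    a move to a less favorable structure (higher PE) is tolerated
    only if the molecule's KE can cover the PE increase."""
    return pe_old + ke_old >= pe_new

# Downhill moves are always allowed; uphill moves need enough KE.
assert change_allowed(10.0, 0.0, 8.0)       # PE drops: always allowed
assert change_allowed(10.0, 3.0, 12.0)      # uphill move, covered by KE
assert not change_allowed(10.0, 1.0, 12.0)  # uphill move, too little KE
```

As KE drains into the central buffer over the run, this test becomes stricter and the molecule is forced toward structures with ever lower PE.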
By the conservation of energy, energy cannot be created or destroyed, so we cannot intentionally add KE to, or remove KE from, a molecule. Nevertheless, we allow the conversion between
PE and KE, within a molecule or among molecules, through
some elementary reactions (or steps). As will be explained
in the next section, we intend to draw KE of the molecules
to a central energy buffer (buffer), and thus, the molecules retain less and less KE as the algorithm evolves. In other words,
we drive them to possess molecular structures with lower and
lower PE in the subsequent changes. This phenomenon is the
driving force in CRO to ensure convergence to lower energy
state. The rest of the properties listed in Fig. 3, i.e., number
of hits, minimum structure, minimum value and minimum hit
number, are used in an implementation of the algorithm. Their
uses will be discussed in Section IV-A.
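The molecule profile of Fig. 3 maps naturally onto a small record type. The attribute names below are assumptions, since the paper lists the properties but not identifiers.

```python
from dataclasses import dataclass, field

@dataclass
class Molecule:
    """Profile of a molecule as listed in Fig. 3 (names assumed)."""
    structure: list          # omega, the current solution
    pe: float                # objective value f(omega), see (2)
    ke: float                # tolerance for accepting worse structures
    num_hits: int = 0        # collisions this molecule has undergone
    min_structure: list = field(default_factory=list)  # best omega so far
    min_value: float = float("inf")                    # best PE so far
    min_hit: int = 0         # hit count when the best PE was recorded

def evaluate(f, m):
    """Set PE from the objective and track the molecule's best-so-far."""
    m.pe = f(m.structure)
    if m.pe < m.min_value:
        m.min_structure = list(m.structure)
        m.min_value = m.pe
        m.min_hit = m.num_hits
    return m

# Usage with a hypothetical objective (sum of the structure's entries).
m = evaluate(sum, Molecule(structure=[3, 1, 2], pe=0.0, ke=100.0))
```

The `min_*` fields let the algorithm report the best solution a molecule has ever held, even after later uphill moves.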
B. Elementary Reactions
In a chemical reaction process, a sequence of collisions
among molecules occurs. Molecules collide either with each
other or with the walls of the container. Collisions under
different conditions provoke distinct elementary reactions,
each of which may have a different way of manipulating the
energies of the involved molecule(s). There are four types of
elementary reactions implemented in CRO (Fig. 4), namely,
on-wall ineffective collision, decomposition, inter-molecular
ineffective collision, and synthesis. These elementary reac-
tions may be categorized in terms of molecularity and extent
of change of the molecular structure. By molecularity, on-
wall ineffective collision and decomposition are unimolec-
ular reactions triggered when the molecule hits a wall of
the container, while inter-molecular ineffective collision and
synthesis involve more than one molecule. They take place
when molecules collide with each other. By the extent of
⁵ Note that the change is not restricted to a single molecule. It can involve
more than one molecule simultaneously. This will be explained in the next
section.

Fig. 4. Four elementary reactions implemented in CRO. (a) On-wall inef-
fective collision. (b) Decomposition. (c) Inter-molecular ineffective collision.
(d) Synthesis.
change in the molecular structure of the resultant molecule(s),
on-wall and inter-molecular ineffective collisions react much
less vigorously than decomposition and synthesis. Ineffective
collisions correspond to those cases in which the molecules get
new molecular structures in their own neighborhoods on PES
(i.e., they pick new solutions close to the original ones). Thus,
the PE of the resultant molecules tends to be close to those
of the original ones. Conversely, decomposition and synthesis
tend to obtain new molecular structures which may be far away
from their immediate neighborhoods on PES. When compared
with ineffective collisions, the resultant molecules are apt to
have greater change in PE than the original ones.
1) On-wall Ineffective Collision: An on-wall ineffective
collision [Fig. 4(a)] occurs when a molecule hits the wall and
then bounces back. Some molecular attributes change in this
collision, and thus, the molecular structure varies accordingly.
As the collision is not so vigorous, the resultant molecular
structure should not be too different from the original one.
Suppose the current molecular structure is ω. The molecule
intends to obtain a new structure ω′ = Neighbor(ω) (Table I) in its neighborhood⁶ on the PES in this collision. The change is allowed only if

    PE_ω + KE_ω ≥ PE_ω′.    (3)

We get

    KE_ω′ = (PE_ω + KE_ω − PE_ω′) × q

where q ∈ [KELossRate, 1], and (1 − q) represents the fraction of KE lost to the environment when it hits the wall. KELossRate is a system parameter which limits the maximum percentage of KE lost at a time. The lost energy is stored in the central energy buffer.⁷ The stored energy can be used to support decomposition. If (3) does not hold, the change is prohibited and the molecule retains its original ω, PE, and KE.

⁶ The neighborhood structure is problem-dependent. Normally, ω and its neighbors have similar PE.

The pseudocode of the on-wall ineffective collision is as follows:
ineff_coll_on_wall(M, buffer)
Input: A molecule M with its profile and the central energy buffer buffer.
1. Obtain ω′ = Neighbor(ω)
2. Calculate PE_ω′
3. if PE_ω + KE_ω ≥ PE_ω′ then
4.   Get q randomly in interval [KELossRate, 1]
5.   KE_ω′ = (PE_ω + KE_ω − PE_ω′) × q
6.   Update buffer = buffer + (PE_ω + KE_ω − PE_ω′) × (1 − q)
7.   Update the profile of M by ω = ω′, PE_ω = PE_ω′, and KE_ω = KE_ω′
8. end if
9. Output M and buffer
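The pseudocode above translates directly into the following sketch. The dictionary representation of a molecule, the KELossRate value, and the toy objective and neighbor operator in the usage lines are assumptions.

```python
import random

KE_LOSS_RATE = 0.2  # system parameter; this value is an assumption

def on_wall_ineffective_collision(molecule, buffer_, f, neighbor, rng=random):
    """Sketch of ineff_coll_on_wall. `molecule` is a dict with keys
    'w' (structure), 'pe', and 'ke'; `f` is the objective and
    `neighbor` the problem-dependent neighborhood operator."""
    w_new = neighbor(molecule["w"])
    pe_new = f(w_new)
    if molecule["pe"] + molecule["ke"] >= pe_new:   # condition (3)
        q = rng.uniform(KE_LOSS_RATE, 1.0)
        excess = molecule["pe"] + molecule["ke"] - pe_new
        buffer_ += excess * (1 - q)                 # energy lost to buffer
        molecule.update(w=w_new, pe=pe_new, ke=excess * q)
    return molecule, buffer_

# Usage with a hypothetical 1-D objective f(x) = x^2 and neighbor x - 1.
m = {"w": 4.0, "pe": 16.0, "ke": 5.0}
m, buffer_ = on_wall_ineffective_collision(
    m, 0.0, lambda x: x * x, lambda x: x - 1.0, random.Random(0))
```

Note that energy is conserved: PE + KE of the molecule plus the buffer content is the same before and after the collision.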
2) Decomposition: A decomposition [Fig. 4(b)] means that
a molecule hits the wall and then decomposes into two or
more (assume two in this framework) pieces. The collision is
vigorous and leads the molecule to break into two pieces. The
resultant molecular structures should be very different from the
original one. Suppose the molecular structure of the original
molecule is ω and those of the resultant molecules are ω₁ and ω₂. If the original molecule has sufficient energy (PE and KE) to endow the PE of the resultant ones, that is

    PE_ω + KE_ω ≥ PE_ω₁ + PE_ω₂,    (4)

the change is allowed. Let temp₁ = PE_ω + KE_ω − PE_ω₁ − PE_ω₂. We get

    KE_ω₁ = temp₁ × k

and

    KE_ω₂ = temp₁ × (1 − k)

where k is a random number uniformly generated from the interval [0, 1]. However, it is rather unusual for (4) to hold. In normal cases, PE_ω, PE_ω₁, and PE_ω₂ are of similar values (but much larger than those in the same neighborhood), and (4) holds only when KE_ω is large enough. However, the KE of molecules tends to decrease in a sequence of on-wall ineffective collisions as the chemical process evolves. Thus, (4) is not likely
⁷ The conservation of energy also prevents us from intentionally adding
or removing energy from the energy buffer. The change of energy here is
governed only by the mechanisms of the relevant elementary reactions.
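Condition (4) and the KE split can be sketched as follows. The dictionary molecule representation and the `split` operator are assumptions; the paper leaves the decomposition operator problem-dependent.

```python
import random

def decomposition(molecule, f, split, rng=random):
    """Sketch of decomposition: a molecule with structure w breaks
    into two molecules w1, w2 produced by a problem-dependent
    `split` operator. The reaction is allowed only if condition (4)
    holds, i.e. the parent's PE + KE can pay for both children's PE;
    the surplus (temp1) is shared between the children's KE by a
    random k in [0, 1]."""
    w1, w2 = split(molecule["w"])
    pe1, pe2 = f(w1), f(w2)
    surplus = molecule["pe"] + molecule["ke"] - pe1 - pe2  # temp1
    if surplus < 0:
        return None                    # (4) fails: no decomposition
    k = rng.random()
    m1 = {"w": w1, "pe": pe1, "ke": surplus * k}
    m2 = {"w": w2, "pe": pe2, "ke": surplus * (1 - k)}
    return m1, m2

# Usage with a hypothetical objective (sum) and a halving split.
parent = {"w": [4, 1, 3, 2], "pe": 10.0, "ke": 8.0}
result = decomposition(parent, sum, lambda w: (w[:2], w[2:]), random.Random(1))
```

As in the on-wall collision, total energy is conserved: the two children's PE + KE together equal the parent's PE + KE.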

References
D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA: Addison-Wesley, 1989.
S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, “Optimization by simulated annealing,” Science, vol. 220, no. 4598, pp. 671–680, 1983.
M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. San Francisco, CA: W. H. Freeman, 1979.
N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, “Equation of state calculations by fast computing machines,” J. Chem. Phys., vol. 21, no. 6, pp. 1087–1092, 1953.