IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 13, NO. 5, OCTOBER 2009 1151
Memetic Algorithm with Extended Neighborhood
Search for Capacitated Arc Routing Problems
Ke Tang, Member, IEEE, Yi Mei, Student Member, IEEE, and Xin Yao, Fellow, IEEE
Abstract: The capacitated arc routing problem (CARP) has attracted much attention during the last few years due to its wide applications in real life. Since CARP is NP-hard and exact methods are only applicable to small instances, heuristic and metaheuristic methods are widely adopted when solving CARP. In this paper, we propose a memetic algorithm, namely memetic algorithm with extended neighborhood search (MAENS), for CARP. MAENS is distinct from existing approaches in the utilization of a novel local search operator, namely Merge-Split (MS). The MS operator is capable of searching using large step sizes, and thus has the potential to search the solution space more efficiently and is less likely to be trapped in local optima. Experimental results show that MAENS is superior to a number of state-of-the-art algorithms, and the advanced performance of MAENS is mainly due to the MS operator. The application of the MS operator is not limited to MAENS; it can easily be generalized to other approaches.

Index Terms: Capacitated arc routing problem (CARP), evolutionary optimization, local search, memetic algorithm, metaheuristic search.
I. INTRODUCTION
THE ARC routing problem is a classic problem with many applications in the real world, such as urban waste collection, post delivery, sanding or salting the streets [1], [2], etc. It is a combinatorial optimization problem that requires determining the least-cost routing plan for vehicles subject to some constraints [3]. The capacitated arc routing problem (CARP), which is the most typical form of the arc routing problem, is considered in this paper. It can be described as follows: a mixed graph G = (V, E, A), with a set of vertices denoted by V, a set of edges denoted by E, and a set of arcs (i.e., directed edges) denoted by A, is given. There is a central depot vertex dep ∈ V, where a set of vehicles are based. A subset E_R ⊆ E composed of all the edges required to be
Manuscript received November 18, 2008; revised February 14, 2009; accepted May 5, 2009. First version published August 11, 2009; current version published September 30, 2009. This work was supported in part by the Engineering and Physical Sciences Research Council under Grant EP/E058884/1 on “Evolutionary Algorithms for Dynamic Optimization Problems: Design, Analysis and Applications,” by the National Natural Science Foundation of China under Grants 60533020 and U0835002, and by the Fund for Foreign Scholars in University Research and Teaching Programs, Grant B07033.
K. Tang and Y. Mei are with the Nature Inspired Computation and Applications Laboratory, School of Computer Science and Technology, University of Science and Technology of China, Hefei 230027, China (e-mail: ketang@ustc.edu.cn; meiyi@mail.ustc.edu.cn).
X. Yao is with the Center of Excellence for Research in Computational Intelligence and Applications, School of Computer Science, University of Birmingham, Birmingham B15 2TT, U.K. (e-mail: x.yao@cs.bham.ac.uk).
Digital Object Identifier 10.1109/TEVC.2009.2023449
served and a subset A_R ⊆ A composed of all the arcs required to be served are also given. The elements of these two subsets
are called edge tasks and arc tasks, respectively. Each edge
or arc in the graph is associated with a demand, a serving
cost, and a deadheading cost (the cost of a vehicle traveling
along the edge/arc without serving it). Both the demand and
the serving cost are zero for the edges and arcs that do not
require service. A solution to the problem is a routing plan
that consists of a number of routes for the vehicles, and the
objective is to minimize the total cost of the routing plan
subject to the following constraints:
1) each route starts and ends at the depot;
2) each task is served in exactly one route;
3) the total demand of each route must not exceed the
vehicle’s capacity Q.
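As a minimal illustration (the encoding and names here are ours, not the paper's), these constraints can be checked mechanically on a candidate routing plan:

```python
# Hypothetical sketch: checking the CARP constraints on a routing plan.
# A plan is a list of routes; each route is a list of required-task IDs.
# demands maps task ID -> demand; Q is the vehicle capacity.

def plan_is_feasible(routes, demands, Q):
    served = [t for route in routes for t in route]
    # Constraint 2: each required task is served in exactly one route.
    if sorted(served) != sorted(demands):
        return False
    # Constraint 3: the total demand of each route must not exceed Q.
    return all(sum(demands[t] for t in route) <= Q for route in routes)

# Constraint 1 (each route starts and ends at the depot) is implicit
# in this encoding: every route is driven from and back to the depot.
routes = [[1, 2], [3]]
demands = {1: 4, 2: 3, 3: 6}
print(plan_is_feasible(routes, demands, Q=8))   # True: route loads are 7 and 6
print(plan_is_feasible(routes, demands, Q=6))   # False: first route exceeds 6
```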
Since CARP is NP-hard [4], exact methods are only applicable
to small-size instances. As a result, heuristics and metaheuristics are often considered in the literature. For example,
Augment–Merge [4], Path-Scanning [5], the “route first-cluster
second” type heuristic proposed in [6], and Ulusoy’s Heuristic [7] are typical heuristic methods. In 2000, Hertz et al.
proposed a tabu search for CARP (CARPET) [8]. In CARPET,
a solution is represented as a set of routes, each of which is an
ordered list of vertices. Every vertex is associated with a 0-1
variable indicating whether the edge between this vertex and
the successive vertex is served. Based on CARPET, a variable
neighborhood descent (VND) algorithm was later proposed
by replacing the tabu search process in CARPET with a
variable neighborhood search process [9]. In 2003, Beullens
et al. developed a guided local search (GLS) algorithm for
CARP [10]. GLS adopts an edge marking scheme, which
marks (unmarks) edges based on the information of a previous
search procedure. Local search operators are only applied to
those marked edges so as to make the search process more
effective. After that, Lacomme et al. proposed a memetic algorithm (LMA¹), which combines the genetic algorithm (GA)
with local search [11]. LMA employs a genotype encoding
scheme. That is, a solution is represented as a sequence of
tasks, and the intermediate vertices between two subsequent
tasks are omitted. To obtain the complete routing plan from
a solution, one needs to connect every two subsequent tasks
with the shortest path between them, and then apply Ulusoy’s
heuristic to separate the sequence into a number of routes.
¹In the literature, MA usually refers to a general framework rather than any specific algorithm. For the sake of clarity, we denote the MA proposed by Lacomme et al. [11] as LMA in this paper.
1089-778X/$26.00 © 2009 IEEE
In 2006, Handa et al. proposed an evolutionary algorithm
(EA) [1], [2] for a salting route optimization problem in the
U.K. This EA adopts an encoding scheme similar to that of LMA, i.e.,
a solution is represented by a sequence of tasks. The difference
is that a solution obtained by the EA can be naturally separated
into different routes, and thus it is unnecessary to utilize
Ulusoy’s heuristic any more. Besides, the EA and LMA also employ different evolutionary operators and evaluation schemes.
Recently, Brandão and Eglese proposed a deterministic tabu
search algorithm (TSA) [12] for CARP. In their work, two
variants of TSA were described, namely TSA1 and TSA2.
TSA1 is a rather standard tabu search algorithm, and TSA2 can
be viewed as applying TSA1 with a number of different initial
solutions. Experimental studies showed that TSA2 is superior
to TSA1 and outperformed both CARPET and LMA on three
sets of benchmark instances. In [13], we proposed a global
repair operator (GRO), which aims to amend low-cost infeasible solutions. It has been shown that combining GRO with
TSA1 can lead to significant improvement in terms of solution
quality, and may even accelerate convergence of the algorithm.
So far, we have introduced both heuristic and metaheuristic methods. It can be observed that the former were popular in early years, while the latter have been attracting more and more interest recently. Moreover, many existing metaheuristic methods incorporate previous heuristic methods in their frameworks. For example, CARPET, LMA, and TSA all employ one or more of the aforementioned heuristics to obtain initial solutions, and then try to further improve them. Therefore, it is unsurprising that metaheuristic approaches usually outperform heuristic methods in terms of solution quality, although they are computationally more expensive. Fortunately, powerful modern computers can easily afford the additional computational cost.
In this paper, we investigate CARP within the framework
of MA. As an emerging area of evolutionary computation,
MAs are population-based metaheuristic search methods that
combine global search strategies (e.g., crossover) with local
search heuristics, and have been studied under a number of
different names, such as Baldwinian EAs, Lamarckian EAs,
cultural algorithms, genetic local search, etc. They are reported
to not only converge to high-quality solutions, but also search
more efficiently than conventional EAs. The successes of
MAs have been revealed on a wide variety of real-world
problems [14]–[16], including CARP, as demonstrated by
LMA [11]. Compared to conventional EAs, there are two
key issues for the success of MAs. One is an appropriate
balance between global and local search, and the other is a
cost-effective coordination of local search. Hence, the local
search procedure, which is usually designed to utilize the
domain knowledge of the problem of interest, plays the most
important role in MAs. In the context of CARP, local search is
often conducted via some traditional move operators, such as
single insertion, double insertion, swap, etc. [11]. These move
operators modify only a small part of the current solution.
More intuitively, they can be said to have a small search step size and to search within a small neighborhood of the current solution.
With such characteristics, these operators can be expected to
perform well on simple problems that have a small number
of local optima and a small solution space. However, they
may no longer work when the solution space becomes large
or contains many local optima, or in the case that the solution
space consists of separated feasible regions. In such cases, a large-step-size local search may be more desirable, whether for jumping out of a local optimum, for moving from one feasible region to another, or for conducting the search more efficiently. From a
general optimization viewpoint, the benefits of large step size
have been theoretically addressed in [17], where it is proved
that simulated annealing (SA) with a larger neighborhood is
better than SA with a smaller neighborhood. Unfortunately, the
step size issue itself has rarely been addressed in the context
of CARP, let alone the design of a refined memetic approach.
Motivated by the above consideration, we propose a new
move operator for CARP, named the Merge-Split (MS) operator, in this paper. Compared to traditional move operators, the
MS operator has a larger search step size, which is variable.
Thus, it can flexibly conduct a local search within a large
neighborhood of a solution. We have incorporated the MS
operator into the MA framework, and developed the memetic
algorithm with extended neighborhood search (MAENS) for
CARP. MAENS has been evaluated on four sets of CARP
benchmark instances (a total of 181 instances) and compared
with five existing metaheuristic algorithms, i.e., CARPET [8],
VND [9], GLS [10], LMA [11], and TSA2 [12]. Experimental
results showed that MAENS outperformed all the five existing
algorithms on difficult instances with many local optima, and
performed almost the same as the existing algorithms on
simple instances since they all reach the global optimum.
The rest of this paper is organized as follows. Section II
introduces preliminary background of this paper, including the
formal problem definition of CARP, the general framework
of MA, and the traditional move operators for local search.
Section III describes the MS operator in detail. After that,
MAENS is proposed in Section IV. Section V presents the
experimental studies, which include empirical comparison
between MAENS and other algorithms and the demonstration
of the efficacy of the MS operator. Finally, conclusions and
future work will be presented in Section VI.
II. BACKGROUND
In this section, the background of the paper is presented.
We start from the notations, solution representation, and mathematical representation of CARP, and then briefly describe the
general framework of MA and traditional move operators for
CARP.
A. Notations and Problem Definition
CARP involves seeking a minimum-cost routing plan for vehicles to serve all the required edges E_R ⊆ E and required arcs A_R ⊆ A of a given graph G = (V, E, A), subject to
some constraints. Each arc task is assigned a unique ID, say
t. Each edge (i, j) is considered as a pair of arcs <i, j> and <j, i>, one for each direction. Thus, each edge task is
assigned two IDs. For the sake of convenience, the IDs are set
to positive integers. Each ID t is associated with five features,
namely tail(t), head(t), sc(t), dc(t), and dem(t), standing for
the tail and head vertices, serving cost, deadheading cost, and
[Fig. 1 depicts an example graph with tasks 1-5; IDs 6-10 in parentheses are their inversions. The ordered list of task IDs is S = (0, 3, 2, 1, 0, 4, 5, 0).]
Fig. 1. Illustration of the solution representation. IDs in parentheses represent inversions of the current directions.
1  Initialization: Generate an initial population
2  while stopping criteria are not satisfied do
3      Evaluate all individuals in the population
4      Evolve a new population using evolutionary operators
5      for each individual do
6          Perform local search around it with probability P_ls
7      end
8  end
Fig. 2. General framework of MAs.
demand of the corresponding task, respectively. If t belongs to an edge task, let inv(t) denote the inversion of task t. The serving cost, deadheading cost, and demand of task inv(t) are the same as sc(t), dc(t), and dem(t), respectively. But note that
each edge task should be served only once, in either direction
(i.e., only one of tasks t and inv(t) is served). To separate
different routes in a solution, we also define the dummy task.
Both the tail and head vertices of a dummy task are the depot
vertex dep, and its ID is set to 0. Like many other existing
metaheuristic approaches, we represent a solution to CARP as
an ordered list of tasks (IDs), denoted by S = (S_1, S_2, ...), where S_i is the ith element (task) of S. Fig. 1 presents a
simple illustration of such a solution representation. Given a
solution S, the corresponding routing plan can be obtained by
connecting every two subsequent tasks with the shortest path
between them (i.e., finding a shortest path from the tail vertex
of the former task to the head vertex of the subsequent task),
which can be easily found by Dijkstra’s algorithm [18]. In
the context of CARP, the term “shortest path” is equivalent
to the path with minimum deadheading cost. Let sp(S_i, S_{i+1}) denote the total deadheading cost of the shortest path between S_i and S_{i+1}; then the total cost of S can be written as

TC(S) = \sum_{i=1}^{length(S)-1} [sc(S_i) + sp(S_i, S_{i+1})]    (1)

where length(S) stands for the length of the sequence S.
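Equation (1) translates directly into code once the shortest-path deadheading costs have been precomputed; the sketch below assumes a lookup table sp and serving costs sc (illustrative names, not from the paper):

```python
# Sketch of (1): total cost of a solution S, where sp[(a, b)] is the
# precomputed shortest-path deadheading cost between tasks a and b,
# and sc[t] is the serving cost of task t (sc[0] = 0 for the dummy task).
def total_cost(S, sc, sp):
    serving = sum(sc[t] for t in S[:-1])
    deadheading = sum(sp[(S[i], S[i + 1])] for i in range(len(S) - 1))
    return serving + deadheading

S = [0, 1, 2, 0]                          # one route serving tasks 1 and 2
sc = {0: 0, 1: 5, 2: 4}
sp = {(0, 1): 2, (1, 2): 1, (2, 0): 3}
print(total_cost(S, sc, sp))              # 15 = (5 + 4) + (2 + 1 + 3)
```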
From Fig. 1, we may see that each solution S may consist
of multiple routes (e.g., the example in Fig. 1 has two routes),
and each starts from and ends at the depot. Hence, we can
further write S in the form
S = (R_1, R_2, ..., R_m)
  = (0, R_{1,1}, R_{1,2}, ..., 0, R_{2,1}, R_{2,2}, ..., 0, ..., R_{m,1}, R_{m,2}, ..., 0)    (2)
where m is the number of routes in S and each R_i denotes a single route. Obviously, every R_i also consists of a subsequence of tasks, and its load (i.e., total demand) is

load(R_i) = \sum_{k=1}^{length(R_i)} dem(R_{i,k}).    (3)
Given all the above notations and the aforementioned three
constraints of CARP, we now arrive at the following repre-
sentation of CARP:
min_S TC(S) = \sum_{i=1}^{length(S)-1} [sc(S_i) + sp(S_i, S_{i+1})]

s.t.: app(S_i) = 1, for all S_i ∈ A_R
      app(S_i) + app(inv(S_i)) = 1, for all S_i ∈ E_R
      m ≤ nveh
      load(R_i) ≤ Q, i = 1, ..., m    (4)

where app(S_i) counts the number of times that task S_i appears in the whole sequence, nveh is the number of vehicles available at the depot, and Q is the vehicle’s capacity.
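A direct, hedged transcription of the constraints in (4) (the encoding and helper names are ours):

```python
# Sketch of the constraints in (4). S is the task sequence with the dummy
# task 0 separating routes; arc_tasks / edge_tasks are required task IDs,
# inv maps an edge task to its reversed direction, dem gives demands.
from collections import Counter

def satisfies_constraints(S, arc_tasks, edge_tasks, inv, dem, nveh, Q):
    app = Counter(t for t in S if t != 0)
    if any(app[t] != 1 for t in arc_tasks):
        return False                  # app(S_i) = 1 for arc tasks
    if any(app[t] + app[inv[t]] != 1 for t in edge_tasks):
        return False                  # each edge served once, in one direction
    routes, route = [], []
    for t in S[1:]:                   # split the sequence at the dummy task 0
        if t == 0:
            routes.append(route)
            route = []
        else:
            route.append(t)
    if len(routes) > nveh:
        return False                  # m <= nveh
    return all(sum(dem[t] for t in r) <= Q for r in routes)

inv = {1: 3, 3: 1, 4: 5, 5: 4}        # edge-task ID pairs; task 2 is an arc
dem = {1: 2, 3: 2, 4: 3, 5: 3, 2: 4}
print(satisfies_constraints([0, 1, 4, 0, 2, 0], {2}, {1, 3, 4, 5},
                            inv, dem, nveh=2, Q=6))    # True
```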
B. General Framework of Memetic Algorithms (MAs)
First introduced by Moscato in 1989 [19], MAs were
inspired by both Darwinian principles of natural evolution
and Dawkins’ notion of memes [20]. From the evolutionary
computation perspective, MAs can be viewed as a form of
population-based EAs hybridized with individual learning procedures that are capable of performing local refinements [19].
Without loss of generality, the framework of MAs can be
summarized by Fig. 2.
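The loop in Fig. 2 can be sketched generically; every callable below is a placeholder standing in for a problem-specific component, not the operators used by MAENS:

```python
import random

def memetic_algorithm(init_pop, evaluate, evolve, local_search, P_ls, stop):
    """Generic MA skeleton following Fig. 2 (illustrative only)."""
    population = init_pop()
    while not stop(population):
        fitness = [evaluate(ind) for ind in population]          # step 3
        population = evolve(population, fitness)                 # step 4
        population = [local_search(ind) if random.random() < P_ls else ind
                      for ind in population]                     # steps 5-7
    return population

# Toy usage: minimize |x| over the integers (purely illustrative).
gens = {"n": 0}
def stop(pop):
    gens["n"] += 1
    return gens["n"] > 5
result = memetic_algorithm(
    init_pop=lambda: [7, -3, 9],
    evaluate=abs,
    evolve=lambda pop, fit: sorted(pop, key=abs)[:1] * len(pop),
    local_search=lambda x: x - 1 if x > 0 else x + 1 if x < 0 else 0,
    P_ls=1.0,                         # always apply local search here
    stop=stop,
)
print(result)                         # [0, 0, 0]
```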
From Fig. 2, we can see that one major difference between
MAs and conventional EAs is that the mutation operators
of EAs are replaced by local search in MAs. Hence, the
success of MAs is largely due to the appropriate adoption
of local search operators, and it is not surprising that much
important work in the incremental development of MAs is
centered around the local search procedure [21]–[24]. Unlike
the evolutionary operators, which are usually very general and
applicable to various problems, the local search operators are
usually expected to incorporate some domain-specific heuristics, so that the MAs can balance well between generality
and problem specificity. To name a few, local heuristics or
conventional exact enumerative methods, such as the Simplex
method, Newton/Quasi-Newton method, conjugate gradient
method, and line search, are typical local search strategies for
numerical optimization. In the context of combinatorial optimization, local search methods are often specifically designed
to serve a problem of interest well, e.g., k-gene exchange,
the k-opt algorithm for the traveling salesman problem, and
many others. In the next section, some traditional local search
operators for CARP will be briefly introduced.
C. Traditional Move Operators for Local Search
Recall that a solution to CARP is encoded as a sequence
of task IDs, and thus local search around a candidate solution
is often conducted by applying move operators to it. In the
[Fig. 3 depicts a route S = (0, 1, 9, 8, 7, 5, 0) transformed into S’ = (0, 1, 2, 3, 4, 5, 0) by reversing the subroute between tasks 9 and 7.]
Fig. 3. 2-opt for a single route.
[Fig. 4 depicts a solution S = (0, 1, 2, 3, 0, 4, 5, 6, 0) whose two routes are cut and reconnected into Plan 1, S’ = (0, 1, 2, 5, 6, 0, 4, 3, 0), or Plan 2, S’’ = (0, 1, 2, 10, 0, 9, 5, 6, 0).]
Fig. 4. 2-opt for double routes.
literature, there are four commonly used move operators for
CARP, namely single insertion, double insertion, swap, and
2-opt [11].
1) Single Insertion: In the single insertion move, a task is
removed from its current position and re-inserted into another
position of the current solution or a new empty route. If the
selected task belongs to an edge task, both its directions will be
considered when inserting the task into the “target position.”
The direction leading to a better solution will be chosen.
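A sketch of the single insertion neighborhood (names are illustrative; for brevity it omits insertion into a new empty route):

```python
# Sketch of the single insertion move. S is the task sequence with 0 as
# route separators; inv maps an edge task to its reversed direction
# (arc tasks are absent from inv, so inv.get(t, t) leaves them unchanged).
def single_insertion_neighbors(S, inv):
    neighbors = []
    for i, t in enumerate(S):
        if t == 0:                               # never move the dummy task
            continue
        rest = S[:i] + S[i + 1:]                 # remove the task...
        for j in range(1, len(rest)):            # ...and try every position
            for cand in {t, inv.get(t, t)}:      # both directions for edges
                if j != i or cand != t:          # skip the unchanged solution
                    neighbors.append(rest[:j] + [cand] + rest[j:])
    return neighbors

inv = {1: 3, 3: 1}                               # task 1 is an edge task
nbrs = single_insertion_neighbors([0, 1, 2, 0], inv)
print(len(nbrs))                                 # 4 (duplicates possible)
print([0, 2, 1, 0] in nbrs)                      # True
```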
2) Double Insertion: The double insertion move is similar
to the single insertion except that two consecutive tasks are
moved instead of a single task. Similar to the single insertion,
both directions are considered for edge tasks.
3) Swap: In the swap move, two candidate tasks are selected and their positions are exchanged. Similar to the single
insertion, both directions are considered for edge tasks.
4) 2-opt: There are two types of 2-opt move operators,
one for a single route and the other for double routes. In
the 2-opt move for a single route, a subroute (i.e., a part
of the route) is selected and its direction is reversed. When
applying the 2-opt move to double routes, each route is first
cut into two subroutes, and new solutions are generated by
reconnecting the four subroutes. Figs. 3 and 4 illustrate the two
2-opt move operators, respectively. In Fig. 3, given a solution
S = (0, 1, 9, 8, 7, 5, 0), the subroute from task 9 to 7 is
selected and its direction is reversed. In Fig. 4, given a solution
S = (0, 1, 2, 3, 0, 4, 5, 6, 0), the first route is cut between tasks
2 and 3, and the second route is cut between tasks 4 and 5.
A new solution can be obtained either by connecting task 2 with task 5, and task 4 with task 3, or by linking task 2 to the inversion of task 4, and task 5 to the inversion of task 3. In practice, one may choose the option with the smaller cost.
Unlike the previous three operators, the 2-opt operator is only
applicable to edge tasks. Although it can be easily modified
to cope with arc tasks, such work remains absent in the
literature.
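The single-route 2-opt of Fig. 3 can be sketched as follows; note that reversing a subroute also replaces each edge task in it by its inversion (helper names are ours):

```python
# Sketch of the single-route 2-opt move: reverse the subroute S[a:b]
# and invert the direction of each edge task in it.
def two_opt_single(S, a, b, inv):
    reversed_part = [inv.get(t, t) for t in reversed(S[a:b])]
    return S[:a] + reversed_part + S[b:]

# Reproducing Fig. 3, where tasks 6..10 are the inversions of tasks 1..5.
inv = {t: t + 5 for t in range(1, 6)}
inv.update({t + 5: t for t in range(1, 6)})
S = [0, 1, 9, 8, 7, 5, 0]
print(two_opt_single(S, 2, 5, inv))   # [0, 1, 2, 3, 4, 5, 0]
```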
All the above move operators were first proposed to ad-
dress the vehicle routing problem (VRP) [25], and were then
extended and widely used in CARP [1], [2], [8]–[12], [26].
They adopt rather simple schemes to generate new solutions,
and thus are likely to generate new solutions that are quite
similar to the current solutions. Intuitively speaking, we may
say that these traditional move operators have “small” step
size and thus are only capable of searching within a “small”
neighborhood. However, a small step-size local search operator
might not perform well in a case where a CARP has a large
solution space, or the capacity constraints are tight. In the
former case, it may take a much longer time for a traditional move operator to find the global optimum, i.e., the search process will become inefficient as the solution space enlarges.
In the latter case, the solution space will become more rugged
and contain more local optima as the capacity constraints
become tighter. Consequently, it is likely that the feasible
regions in the solution space are isolated by infeasible regions.
A small step-size local search might easily be trapped in local
optima, and might not be able to “jump” from one feasible
solution to another. Therefore, it may never search all the
feasible regions appropriately.
Obviously, for both the above cases, a local search with
large search step size is more desirable. At first glance, it
appears that a large search step size can be obtained with
little effort, i.e., we may simply apply the traditional move
operators for multiple times. Such an idea can be found in [27],
where Liang et al. tackled a different type of combinatorial
problem—the cutting stock problem. But taking a closer look
at the traditional move operators for CARP, we found that all of them define a neighborhood of size O(n^2), where n is the number of tasks. For example, the single insertion selects one task out of n, and there are n + m - 1 possible positions for the task to be inserted in, where m is the number of routes. Thus, the single insertion can generate at most n(n + m - 1) different solutions. The swap operator requires selecting two tasks out of n, and there exist n(n - 1)/2 different choices. Similar observations can be made for the double insertion and 2-opt, too. Therefore, consecutively applying single move operators k times defines a neighborhood of size O(n^{2k}),
which increases exponentially with k. In consequence, it is
prohibitive to enumerate all the possible solutions when k
becomes large. One simple solution to this problem is to
randomly sample a part of the huge neighborhood. However,
it is often the case that some regions in the solution space are
more promising than the others. Hence, random sampling is somewhat blind and might waste a lot of computational resources.
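The counting argument above is easy to check numerically, here for a hypothetical instance size:

```python
# Neighborhood sizes from the text, for a hypothetical instance with
# n = 100 tasks and m = 5 routes.
n, m = 100, 5
single_insertion = n * (n + m - 1)   # one task, n + m - 1 target positions
swap = n * (n - 1) // 2              # unordered pairs of tasks
print(single_insertion)              # 10400
print(swap)                          # 4950
# Composing k single moves yields a neighborhood of order (n^2)^k,
# which quickly becomes too large to enumerate:
for k in (1, 2, 3):
    print(k, (n ** 2) ** k)
```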
To summarize, although a large step-size local search can be
beneficial, it cannot be implemented by simply extending the
traditional move operators, and a more refined approach is
required. For this purpose, we developed the MS operator.
III. MERGE-SPLIT OPERATOR FOR LOCAL SEARCH
The MS operator aims to improve a given solution by modifying multiple routes of it. As indicated by its name (Merge-Split), this operator is composed of two components, i.e., Merge and Split. Given a solution, the Merge component randomly selects p (p > 1) of its routes and combines them to form an unordered list of task IDs, which contains all the tasks of the selected routes. The Split component directly operates on the unordered list generated by the Merge
[Fig. 5 depicts the flow of the operator: the current solution S is merged into an unordered list; path scanning produces five ordered lists; Ulusoy’s splitting procedure turns them into five new solutions; and the best one is selected as the new solution S’.]
Fig. 5. Merge-Split operator.
[Fig. 6 depicts an example: the two routes of S = (0, 1, 2, 3, 4, 0, 5, 6, 7, 0) are merged into an unordered list and split into S’ = (0, 5, 11, 10, 0, 9, 8, 14, 13, 0); tasks 8-14 are the inversions of tasks 1-7.]
Fig. 6. Demonstration of the Merge-Split operator.
component. First, the path scanning (PS) heuristic [5] is
applied. PS starts by initializing an empty path. At each
iteration, PS finds out the tasks that do not violate the capacity
constraints. If no task satisfies the constraints, it connects the
end of the current path to the depot with the shortest path
between them to form a route, and then initializes a new empty
path. If a unique task satisfies the constraints, PS connects that
task to the end of the current path (again, with the shortest
path between them). If multiple tasks satisfy the constraints,
the one closest to the end of the current path is chosen. If
multiple tasks not only satisfy the capacity constraints but are
also the closest to the end of the current path, five rules are
further adopted to determine which to choose: 1) maximize
the distance from the head of task to the depot; 2) minimize
the distance from the head of task to the depot; 3) maximize
the term dem(t)/sc(t), where dem(t) and sc(t) are the demand and serving cost of task t, respectively; 4) minimize the term
dem(t)/sc(t); 5) use rule 1) if the vehicle is less than half-
full, otherwise use rule 2). If multiple tasks still remain, ties
are broken arbitrarily. PS terminates when all the tasks in the
unordered list have been selected. Note that PS does not use the five rules within a single scan. Instead, it scans the unordered list of tasks five times, and in each scan only one rule is used.
Hence, PS will generate five ordered lists of tasks in total.
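One pass of PS can be sketched as below. For brevity the sketch handles single-direction tasks only (edge-task inversions are ignored) and assumes every demand fits an empty vehicle; tasks, sp, and rule are illustrative names:

```python
# One path-scanning pass. tasks: id -> (tail, head, demand, serving_cost);
# sp[(u, v)]: shortest deadheading cost between vertices u and v;
# rule: key function used to break ties among equally close tasks.
def path_scanning(unserved, tasks, sp, depot, Q, rule):
    solution, route, load, end = [0], [], 0, depot
    unserved = set(unserved)
    while unserved:
        feasible = [t for t in unserved if load + tasks[t][2] <= Q]
        if not feasible:
            solution += route + [0]              # close the route at the depot
            route, load, end = [], 0, depot
            continue
        dmin = min(sp[(end, tasks[t][0])] for t in feasible)
        nearest = [t for t in feasible if sp[(end, tasks[t][0])] == dmin]
        t = max(nearest, key=rule)               # apply the tie-breaking rule
        route.append(t)
        load += tasks[t][2]
        end = tasks[t][1]
        unserved.discard(t)
    return solution + route + [0]

tasks = {1: (1, 2, 4, 5), 2: (2, 3, 4, 5), 3: (3, 1, 4, 5)}
sp = {(u, v): abs(u - v) for u in range(4) for v in range(4)}
rule3 = lambda t: tasks[t][2] / tasks[t][3]      # rule 3: maximize dem/sc
print(path_scanning([1, 2, 3], tasks, sp, 0, 8, rule3))
# [0, 1, 2, 0, 3, 0]: tasks 1 and 2 fill one vehicle, task 3 goes alone
```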
In the Split component, PS is followed by Ulusoy’s splitting
procedure [7]. In other words, Ulusoy’s splitting method is
applied to all the five ordered lists obtained by PS to further
improve them. Given an ordered list of tasks, Ulusoy’s splitting
procedure is an exact algorithm that seeks the optimal way
to split the ordered list into different routes. Since Ulusoy’s
splitting procedure is an exact algorithm and has been well
known for years, we omit its detailed steps in this paper.
Interested readers may refer to the original publication [7].
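Although we follow the paper in omitting the details, the core of an Ulusoy-style split can be sketched as a dynamic program over the ordered task list; route_cost is a hypothetical helper that prices serving one contiguous block of tasks as a single route (depot to tasks to depot):

```python
# Sketch of Ulusoy-style splitting: find a minimum-cost partition of an
# ordered task list into capacity-feasible routes by dynamic programming.
def ulusoy_split(tasks, dem, Q, route_cost):
    n = len(tasks)
    best = [float("inf")] * (n + 1)              # best[j]: cost of tasks[:j]
    best[0], cut = 0, [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):                       # tasks[i:j] as one route
            if sum(dem[t] for t in tasks[i:j]) > Q:
                continue
            c = best[i] + route_cost(tasks[i:j])
            if c < best[j]:
                best[j], cut[j] = c, i
    routes, j = [], n                            # recover the cut points
    while j > 0:
        routes.append(tasks[cut[j]:j])
        j = cut[j]
    return best[n], routes[::-1]

# Toy usage: each route costs a fixed depot overhead of 10 plus 1 per task.
cost, routes = ulusoy_split([1, 2, 3], {1: 3, 2: 3, 3: 3}, 6,
                            lambda r: 10 + len(r))
print(cost, routes)                              # 23 [[1], [2, 3]]
```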
To summarize, the MS operator first merges multiple routes
to obtain an unordered list of tasks, and then employs PS to
sort the unordered list. After that, Ulusoy’s splitting procedure
is used to split the ordered lists into new routes in the optimal
way. Finally, we may obtain five new solutions of CARP by
embedding the new routes back into the original solution, and
the best one is chosen as the output of the MS operator. Fig. 5
demonstrates the whole process of MS operator.
One main advantage of the MS operator is its capability of
generating new solutions that are significantly different from
the current solution. As illustrated in Fig. 6, the MS operator generates a solution S’ = (0, 5, 11, 10, 0, 9, 8, 14, 13, 0) based on the solution S = (0, 1, 2, 3, 4, 0, 5, 6, 7, 0). If using traditional move operators, S’ may be reached by applying single insertion and double insertion consecutively, but it cannot be reached by applying any of the traditional move operators only once. Hence, we may say that the MS operator has a
larger search step size than the traditional move operators. In
general, the larger the p (i.e., the number of routes involved
in MS), the more distant the new solution is from the current
solution. Another appealing property of the MS operator is
that it is likely to generate high-quality new solutions. This is
due to the adoption of PS and Ulusoy’s splitting procedure,
both of which are known to be capable of generating relatively
good solutions. The major drawback of the MS operator is its