
Greedy Randomized Adaptive Search Procedures

Abstract
Today, a variety of heuristic approaches are available to the operations research practitioner. One methodology that has a strong intuitive appeal, a prominent empirical track record, and is trivial to efficiently implement on parallel processors is GRASP (Greedy Randomized Adaptive Search Procedures). GRASP is an iterative randomized sampling technique in which each iteration provides a solution to the problem at hand. The incumbent solution over all GRASP iterations is kept as the final result. There are two phases within each GRASP iteration: the first intelligently constructs an initial solution via an adaptive randomized greedy function; the second applies a local search procedure to the constructed solution in hope of finding an improvement. In this paper, we define the various components comprising a GRASP and demonstrate, step by step, how to develop such heuristics for combinatorial optimization problems. Intuitive justifications for the observed empirical behavior of the methodology are discussed. The paper concludes with a brief literature review of GRASP implementations and mentions two industrial applications.


GREEDY RANDOMIZED ADAPTIVE SEARCH PROCEDURES
MAURICIO G.C. RESENDE AND CELSO C. RIBEIRO
Abstract. GRASP is a multi-start metaheuristic for combinatorial problems,
in which each iteration consists basically of two phases: construction and local
search. The construction phase builds a feasible solution, whose neighborhood
is investigated until a local minimum is found during the local search phase.
The best overall solution is kept as the result. In this chapter, we first describe
the basic components of GRASP. Successful implementation techniques and
parameter tuning strategies are discussed and illustrated by numerical results
obtained for different applications. Enhanced or alternative solution construction mechanisms and techniques to speed up the search are also described:
Reactive GRASP, cost perturbations, bias functions, memory and learning,
local search on partially constructed solutions, hashing, and filtering. We also
discuss in detail implementation strategies of memory-based intensification and
post-optimization techniques using path-relinking. Hybridizations with other
metaheuristics, parallelization strategies, and applications are also reviewed.
1. Introduction
We consider in this chapter a combinatorial optimization problem, defined by a
finite ground set E = {1, . . . , n}, a set of feasible solutions F ⊆ 2^E, and an objective
function f : 2^E → ℝ. In the minimization version, we search an optimal solution
S* ∈ F such that f(S*) ≤ f(S), ∀S ∈ F. The ground set E, the cost function
f, and the set of feasible solutions F are defined for each specific problem. For
instance, in the case of the traveling salesman problem, the ground set E is that of
all edges connecting the cities to be visited, f(S) is the sum of the costs of all edges
e ∈ S, and F is formed by all edge subsets that determine a Hamiltonian cycle.
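This (E, F, f) formalism can be made concrete with a small sketch in code. The following Python fragment is purely illustrative and not from the chapter: it represents a solution S as a subset of the ground set and evaluates it with an additive objective, as in the TSP edge-cost example.

```python
# Illustrative sketch of the (E, F, f) formalism; all names are ours.
# A solution S is a subset of the ground set E, and f(S) sums the
# costs of the elements it contains (as in the TSP edge-cost example).

def f(S, cost):
    """Objective function f: 2^E -> R, here additive over elements."""
    return sum(cost[e] for e in S)

# Ground set E = {1, ..., 4} with a cost attached to each element.
cost = {1: 4.0, 2: 1.5, 3: 2.5, 4: 3.0}

# Two feasible solutions; the minimization version seeks S* in F
# with f(S*) <= f(S) for every S in F.
print(f({1, 2}, cost))  # 5.5
print(f({2, 3}, cost))  # 4.0
```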
The GRASP (Greedy Randomized Adaptive Search Procedure) metaheuristic
[38, 39] is a multi-start or iterative process, in which each iteration consists of two
phases: construction and local search. The construction phase builds a feasible
solution, whose neighborhood is investigated until a local minimum is found during
the local search phase. The best overall solution is kept as the result. An extensive
survey of the literature is given in [44]. The pseudo-code in Figure 1 illustrates the
main blocks of a GRASP procedure for minimization, in which Max Iterations
iterations are performed and Seed is used as the initial seed for the pseudorandom
number generator.
Figure 2 illustrates the construction phase with its pseudo-code. At each iteration of this phase, let the set of candidate elements be formed by all elements that
can be incorporated to the partial solution under construction without destroying
feasibility. The selection of the next element for incorporation is determined by
the evaluation of all candidate elements according to a greedy evaluation function.
Date: August 29, 2002.
AT&T Labs Research Technical Report TD-53RSJY, version 2. To appear in State of the Art
Handbook in Metaheuristics, F. Glover and G. Kochenberger, eds., Kluwer, 2002.

procedure GRASP(Max Iterations, Seed)
1 Read Input();
2 for k = 1, . . . , Max Iterations do
3     Solution ← Greedy Randomized Construction(Seed);
4     Solution ← Local Search(Solution);
5     Update Solution(Solution, Best Solution);
6 end;
7 return Best Solution;
end GRASP.
Figure 1. Pseudo-code of the GRASP metaheuristic.
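As a sketch of how Figure 1 might translate into runnable code, the following Python fragment implements the multi-start loop for minimization. The function names and the toy instance are ours, not the chapter's; the construction here is purely random and the local search is a no-op placeholder, just enough to exercise the loop.

```python
import random

def grasp(construct, local_search, f, max_iterations, seed):
    """Multi-start GRASP loop for minimization: construct a solution,
    improve it by local search, and keep the incumbent (best) one."""
    rng = random.Random(seed)
    best = None
    for _ in range(max_iterations):
        solution = construct(rng)
        solution = local_search(solution)
        if best is None or f(solution) < f(best):
            best = solution
    return best

# Toy instance (ours): choose exactly two of four items, minimizing
# total cost.  The cheapest pair is {"b", "c"} with cost 3.
cost = {"a": 3, "b": 1, "c": 2, "d": 5}
f = lambda s: sum(cost[e] for e in s)
construct = lambda rng: frozenset(rng.sample(sorted(cost), 2))
best = grasp(construct, lambda s: s, f, max_iterations=200, seed=1)
print(f(best))  # 3, i.e., the pair {"b", "c"}
```

With only six possible pairs, 200 random restarts find the optimum with near certainty, which illustrates the incumbent-keeping role of the outer loop.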
This greedy function usually represents the incremental increase in the cost function due to the incorporation of this element into the solution under construction.
The evaluation of the elements by this function leads to the creation of a restricted
candidate list (RCL) formed by the best elements, i.e. those whose incorporation
to the current partial solution results in the smallest incremental costs (this is the
greedy aspect of the algorithm). The element to be incorporated into the partial
solution is randomly selected from those in the RCL (this is the probabilistic aspect
of the heuristic). Once the selected element is incorporated to the partial solution,
the candidate list is updated and the incremental costs are reevaluated (this is the
adaptive aspect of the heuristic). This strategy is similar to the semi-greedy heuristic proposed by Hart and Shogan [55], which is also a multi-start approach based on greedy randomized constructions, but without local search.
procedure Greedy Randomized Construction(Seed)
1 Solution ← ∅;
2 Evaluate the incremental costs of the candidate elements;
3 while Solution is not a complete solution do
4     Build the restricted candidate list (RCL);
5     Select an element s from the RCL at random;
6     Solution ← Solution ∪ {s};
7     Reevaluate the incremental costs;
8 end;
9 return Solution;
end Greedy Randomized Construction.
Figure 2. Pseudo-code of the construction phase.
The solutions generated by a greedy randomized construction are not necessarily
optimal, even with respect to simple neighborhoods. The local search phase usually
improves the constructed solution. A local search algorithm works in an iterative
fashion by successively replacing the current solution by a better solution in the
neighborhood of the current solution. It terminates when no better solution is found
in the neighborhood. The pseudo-code of a basic local search algorithm, starting from the solution Solution constructed in the first phase and using a neighborhood N, is given in Figure 3.

procedure Local Search(Solution)
1 while Solution is not locally optimal do
2     Find s′ ∈ N(Solution) with f(s′) < f(Solution);
3     Solution ← s′;
4 end;
5 return Solution;
end Local Search.
Figure 3. Pseudo-code of the local search phase.
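In runnable form, the loop of Figure 3 might look like the following Python sketch (a first-improving variant; the names and the toy neighborhood are ours):

```python
def local_search(solution, neighbors, f):
    """Repeatedly move to a strictly better neighbor until the current
    solution is locally optimal with respect to the neighborhood N."""
    improved = True
    while improved:
        improved = False
        for candidate in neighbors(solution):
            if f(candidate) < f(solution):
                solution = candidate
                improved = True
                break  # restart the scan from the new solution
    return solution

# Toy example (ours): minimize f(x) = (x - 3)^2 over the integers,
# with neighborhood N(x) = {x - 1, x + 1}.
f = lambda x: (x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(local_search(10, neighbors, f))  # 3 (locally and globally optimal)
```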
The effectiveness of a local search procedure depends on several aspects, such
as the neighborhood structure, the neighborhood search technique, the fast evaluation of the cost function of the neighbors, and the starting solution itself. The
construction phase plays a very important role with respect to this last aspect,
building high-quality starting solutions for the local search. Simple neighborhoods
are usually used. The neighborhood search may be implemented using either a best-
improving or a first-improving strategy. In the case of the best-improving strategy,
all neighbors are investigated and the current solution is replaced by the best neighbor. In the case of a first-improving strategy, the current solution moves to the first neighbor whose cost function value is smaller than that of the current solution. In practice, we have observed in many applications that both strategies quite often lead to the same final solution, but with smaller computation times when the first-improving strategy is used. We have also observed that premature convergence to a non-global local minimum is more likely to occur with a best-improving strategy.
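The two neighborhood-scan strategies can be contrasted in a few lines of Python. This is our own sketch; the toy objective and the asymmetric neighborhood are chosen only to make the difference between the two moves visible.

```python
def best_improving_step(solution, neighbors, f):
    """Scan the entire neighborhood and move to its best member,
    if that member improves on the current solution."""
    best = min(neighbors(solution), key=f)
    return best if f(best) < f(solution) else solution

def first_improving_step(solution, neighbors, f):
    """Stop scanning at the first strictly improving neighbor,
    saving the cost of examining the rest of the neighborhood."""
    for candidate in neighbors(solution):
        if f(candidate) < f(solution):
            return candidate
    return solution

# Toy objective (ours): minimize distance to 5, with neighborhood
# N(x) = [x - 1, x - 2] so the two strategies pick different neighbors.
f = lambda x: abs(x - 5)
neighbors = lambda x: [x - 1, x - 2]
print(best_improving_step(8, neighbors, f))   # 6 (best of {7, 6})
print(first_improving_step(8, neighbors, f))  # 7 (first improvement seen)
```

Both variants converge to a local minimum when iterated; they simply trade neighborhood-scan effort against the quality of each individual move.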
2. Construction of the restricted candidate list
An especially appealing characteristic of GRASP is the ease with which it can
be implemented. Few parameters need to be set and tuned. Therefore, development can focus on implementing efficient data structures to assure quick iterations.
GRASP has two main parameters: one related to the stopping criterion and another
to the quality of the elements in the restricted candidate list.
The stopping criterion used in the pseudo-code described in Figure 1 is determined by the number Max Iterations of iterations. Although the probability of finding a new solution that improves the current best decreases with the number of iterations, the quality of the best solution found can only improve as more iterations are performed. Since the computation time does not vary much from iteration to iteration, the total computation time is predictable and increases linearly with the number of iterations. Consequently, the larger the number of iterations, the larger the computation time and the better the solution found.
For the construction of the RCL used in the first phase we consider, without loss of generality, a minimization problem as the one formulated in Section 1. We denote by c(e) the incremental cost associated with the incorporation of element e ∈ E into the solution under construction. At any GRASP iteration, let c_min and c_max be, respectively, the smallest and the largest incremental costs.
The restricted candidate list RCL is made up of elements e ∈ E with the best (i.e., the smallest) incremental costs c(e). This list can be limited either by the number of elements (cardinality-based) or by their quality (value-based). In the first case, it is made up of the p elements with the best incremental costs, where p is a parameter. In this chapter, the RCL is associated with a threshold parameter α ∈ [0, 1]. The restricted candidate list is formed by all "feasible" elements e ∈ E which can be inserted into the partial solution under construction without destroying feasibility and whose quality is superior to the threshold value, i.e., c(e) ∈ [c_min, c_min + α(c_max − c_min)]. The case α = 0 corresponds to a pure greedy algorithm, while α = 1 is equivalent to a random construction. The pseudo-code in Figure 4 is a refinement of the greedy randomized construction pseudo-code shown in Figure 2. It shows that the parameter α controls the amounts of greediness and randomness in the algorithm.
procedure Greedy Randomized Construction(α, Seed)
1 Solution ← ∅;
2 Initialize the candidate set: C ← E;
3 Evaluate the incremental cost c(e) for all e ∈ C;
4 while C ≠ ∅ do
5     c_min ← min{c(e) | e ∈ C};
6     c_max ← max{c(e) | e ∈ C};
7     RCL ← {e ∈ C | c(e) ≤ c_min + α(c_max − c_min)};
8     Select an element s from the RCL at random;
9     Solution ← Solution ∪ {s};
10    Update the candidate set C;
11    Reevaluate the incremental costs c(e) for all e ∈ C;
12 end;
13 return Solution;
end Greedy Randomized Construction.
Figure 4. Refined pseudo-code of the construction phase.
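A direct Python transcription of Figure 4 might read as follows. The helper names and the toy instance are ours; for simplicity the incremental costs here are static, whereas real applications re-evaluate them adaptively as the partial solution grows.

```python
import random

def greedy_randomized_construction(elements, incremental_cost, is_complete,
                                   alpha, rng):
    """Value-based RCL construction (a sketch of Figure 4):
    alpha = 0 is purely greedy, alpha = 1 is purely random."""
    solution = set()
    candidates = set(elements)
    while candidates and not is_complete(solution):
        costs = {e: incremental_cost(e, solution) for e in candidates}
        c_min, c_max = min(costs.values()), max(costs.values())
        # Restricted candidate list: elements within the alpha-threshold.
        rcl = [e for e in sorted(candidates)
               if costs[e] <= c_min + alpha * (c_max - c_min)]
        s = rng.choice(rcl)        # probabilistic component
        solution.add(s)            # greedy choice taken from the RCL
        candidates.remove(s)       # adaptive component: shrink C
    return solution

# Toy instance (ours): choose two of four items.  With alpha = 0 the
# construction is purely greedy and always picks the two cheapest.
cost = {"a": 3.0, "b": 1.0, "c": 2.0, "d": 5.0}
result = greedy_randomized_construction(
    cost, lambda e, s: cost[e], lambda s: len(s) == 2,
    alpha=0.0, rng=random.Random(0))
print(sorted(result))  # ['b', 'c']
```

Raising alpha widens the RCL, so repeated calls with the same instance begin to return different solutions, which is exactly the greediness/randomness trade-off the parameter controls.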
GRASP may be viewed as a repetitive sampling technique. Each iteration produces a sample solution from an unknown distribution, whose mean and variance are functions of the restrictive nature of the RCL. For example, if the RCL is restricted to a single element, then the same solution will be produced at all iterations.
The variance of the distribution will be zero and the mean will be equal to the value
of the greedy solution. If the RCL is allowed to have more elements, then many
different solutions will be produced, implying a larger variance. Since greediness
plays a smaller role in this case, the mean solution value should be worse. However,
the value of the best solution found outperforms the mean value and very often
is optimal. The histograms in Figure 5 illustrate this situation on an instance of
MAXSAT with 100 variables and 850 clauses, depicting results obtained with 1000
independent constructions using the first phase of the GRASP described in [83, 84].
Since this is a maximization problem, the purely greedy construction corresponds
to α = 1, whereas the random construction occurs with α = 0. We notice that when
the value of α increases from 0 to 1, the mean solution value increases towards the
purely greedy solution value, while the variance approaches zero.
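The repetitive-sampling view can be checked with a toy simulation, entirely our own and a minimization analogue of the MAXSAT experiment: as α moves from greedy (0) toward random (1), the mean constructed value worsens while the variance grows from zero.

```python
import random
import statistics

def construct(cost, k, alpha, rng):
    """Build a k-element solution with a value-based RCL over static
    element costs, and return the solution's total cost."""
    candidates = dict(cost)
    total = 0.0
    for _ in range(k):
        c_min, c_max = min(candidates.values()), max(candidates.values())
        rcl = [e for e, c in sorted(candidates.items())
               if c <= c_min + alpha * (c_max - c_min)]
        total += candidates.pop(rng.choice(rcl))
    return total

cost = {i: float(i) for i in range(1, 11)}  # element costs 1..10
rng = random.Random(42)
for alpha in (0.0, 1.0):
    values = [construct(cost, 3, alpha, rng) for _ in range(1000)]
    print(alpha, statistics.mean(values), statistics.pvariance(values))
# alpha = 0.0: every run picks {1, 2, 3}, so mean 6.0 and variance 0.0
# alpha = 1.0: random triples, a worse mean and strictly positive variance
```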
For each value of α, we present in Figure 6 histograms with the results obtained
by applying local search to each of the 1000 constructed solutions. Figure 7 summarizes the overall results of this experiment in terms of solution diversity, solution
quality, and computation time. We first observe that the larger the variance of the

[Figure 5 appears here: six histograms, panels (a)–(f), for α = 0.0 (random construction), 0.2, 0.4, 0.6, 0.8, and 1.0 (greedy construction), each plotting occurrences against the cost of the constructed solution.]
Figure 5. Distribution of construction phase solution values as a function of the RCL parameter α (1000 repetitions were recorded for each value of α).

The reactive approach leads to improvements over the basic GRASP in terms of robustness and solution quality, due to greater diversification and less reliance on parameter tuning.