A WOA-Based Optimization Approach for Task
Scheduling in Cloud Computing Systems
Xuan Chen, Long Cheng, Member, IEEE, Cong Liu, Qingzhi Liu, Jinwei Liu, Member, IEEE, Ying Mao, Member, IEEE, and John Murphy, Senior Member, IEEE
Abstract—Task scheduling in cloud computing can directly affect the resource usage and operational cost of a system. To improve the efficiency of task executions in a cloud, various metaheuristic algorithms, as well as their variations, have been proposed to optimize the scheduling. In this work, for the first time, we apply the latest metaheuristic WOA (the whale optimization algorithm) to cloud task scheduling with a multi-objective optimization model, aiming at improving the performance of a cloud system with given computing resources. On that basis, we propose an advanced approach called IWC (Improved WOA for Cloud task scheduling) to further improve the optimal solution search capability of the WOA-based method. We present the detailed implementation of IWC, and our simulation-based experiments show that the proposed IWC has better convergence speed and accuracy in searching for the optimal task scheduling plans, compared to the current metaheuristic algorithms. Moreover, it can also achieve better performance on system resource utilization, in the presence of both small and large-scale tasks.

Index Terms—Cloud computing; task scheduling; whale optimization algorithm; metaheuristics; multi-objective optimization
I. INTRODUCTION
With the ubiquitous growth of Internet access and big data, cloud computing has become increasingly popular in today's business world [1]. Compared to other distributed
computing techniques (e.g., cluster and grid computing), cloud
computing has provided an elastic and scalable way of delivering services to consumers. Namely, consumers do not
need to possess the underlying technology and they can make
use of computing resources and platforms in a pay-per-use
fashion [2], [3].
X. Chen is with Zhejiang Industry Polytechnic College, Zhejiang, China. E-mail: cxuan762@gmail.com
L. Cheng is with the School of Computing, Dublin City University, Ireland. E-mail: long.cheng@dcu.ie (Corresponding Author)
C. Liu is with the School of Computer Science and Technology, Shandong University of Technology, China. E-mail: liucongchina@sdust.edu.cn
Q. Liu is with the Information Technology Group, Wageningen University, Netherlands. E-mail: qingzhi.liu@wur.nl
J. Liu is with the Department of Computer and Information Sciences at Florida A&M University, USA. E-mail: jinwei.liu@famu.edu
Y. Mao is with the Department of Computer and Information Science at Fordham University in New York City. E-mail: ymao41@fordham.edu
J. Murphy is with the School of Computer Science, University College Dublin, Ireland. E-mail: j.murphy@ucd.ie
Part of this work was supported by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 799066, the National Science Foundation of China (61902222), and the Taishan Scholar Youth Program of Shandong Province (tsqn201909109).

The basic mechanism of cloud computing is to dispatch computing tasks to a resource pool consisting of a
large number of heterogeneous virtualized servers or virtual
machines (VMs) [4], [5]. As cloud computing is a market-
oriented utility, to allow cloud providers and users to maximize
their profit and return on investment [6], advanced strategies
on resource scheduling, which can support software and user
applications, tasks and workflows, etc., are always required.
In fact, scheduling can directly affect the performance of a
system such as resource usage efficiency and operational cost,
and it has been seen as of paramount importance to cloud
computing [7].
As VMs can be dynamically provisioned, allocated and
managed [8], the scheduling problems in cloud computing
can be generally divided into two main layers: the first is the
scheduling of the tasks submitted by a user and mapping them
to a set of available VM resources; and the second is the VM-to-host mapping, which decides on which suitable host a VM should be created or to which it should be migrated [9]. We focus on optimizing the former problem in this work, because it directly affects the processing capability of a cloud computing system, and an optimized task scheduling will greatly improve the efficiency of the whole system, such as its time and price cost [10]. However, the optimization problem is NP-hard [11]. This means that the time needed to solve it exactly grows exponentially, and an algorithm will suffer from a dimensionality breakdown as the size of the problem grows.
To solve complex optimization problems in an acceptable time, using metaheuristic algorithms has received increasing
attention in recent years [12]. The reason is that they are
shown to be highly effective and can find approximately
optimal solutions in polynomial time rather than exponential
time, compared to conventional methods [3], [13]. In fact,
various metaheuristics as well as their variations have been
used to solve scheduling problems in many fields [14], [15],
[16], [17], [18], [19], [20], which also include the cloud
computing. As summarized by the latest survey [21], currently
metaheuristics used in cloud task scheduling mainly include
the genetic algorithm (GA) [22] and swarm intelligence algo-
rithms, such as the ant colony optimization (ACO) [23] and the
particle swarm optimization (PSO) [24]. These optimization
algorithms are derived from the simulations of biological
population evolutions, and they can solve complex global
optimization problems through cooperation and competition
among individuals [25].
The whale optimization algorithm (WOA) is one of the latest metaheuristics [26], inspired by the hunting behavior of humpback whales (i.e., bubble-net predation). Because of this
unique optimization mechanism, WOA can provide a good global search capability, which makes it popular in
various engineering problems. In this work, we will try to
explore the application of the WOA approach to a multi-
objective task scheduling optimization problem in cloud com-
puting. Specifically, we focus on optimizing the task execution
time, load and price cost of a cloud computing system for
given tasks, and these measures will be essential to ensure that
the entire configuration of the VMs is as optimal as possible.
In general, we first map our task scheduling scheme to the
whale foraging model, and thus we can get an approximately
optimal solution using the WOA algorithm. On that basis, we
propose an advanced approach called IWC (Improved WOA
for Cloud task scheduling), which aims to further improve the
optimal solution search capability of WOA. We provide the
detailed implementation of IWC and conduct a performance
evaluation using a large number of simulations with up to
10000 tasks. We summarize the contributions of this work as
follows:
- To improve the efficiency of task executions in a cloud computing system, we introduce a multi-objective optimization model for task scheduling and apply the WOA approach to solve the problem.
- We propose a new approach called IWC for more efficient task scheduling by incorporating advanced optimization strategies to improve both the convergence speed and accuracy of the WOA-based approach.
- We present the detailed design and implementation of IWC and compare it with some existing metaheuristics including ACO and PSO. Our experimental results demonstrate that IWC can achieve better performance on system resource utilization for both small and large-scale tasks in cloud computing.
The rest of this paper is organized as follows. In Section II,
we report the related work. In Section III, we introduce our
task scheduling optimization model. We present the proposed
IWC approach and its implementation details in Section IV.
We carry out extensive evaluation of our approach in Section V
and conclude this paper in Section VI.
II. RELATED WORK
Task scheduling strategies which can efficiently allocate resources to required tasks under constraints remain a challenge for current cloud computing techniques. This is because the
requirements such as bandwidth, storage, resource expenses,
and response time may differ for each task, which makes
the optimization problem very complex, and the heterogeneity
and dynamicity of the cloud computing environment will also
further complicate the problem [4].
In order to efficiently use cloud resources, a lot of mathe-
matical task scheduling solutions have been proposed. For ex-
ample, Malawski et al. [27] modeled the relationship between
the deadline and cost on hybrid clouds as a mixed integer
nonlinear programming problem with an implementation in
AMPL (a mathematical programming language). To optimize
the makespan, the total average waiting time and the used hosts
on homogeneous cloud computing environments, Grandinetti
et al. [28] solved their optimization problem based on the ε-constraint method. These approaches have been shown to be
efficient. However, their implementation could be complex.
The AMPL-based implementation requires specifying input data sets and variables to define the search space, and the ε-constraint method requires choosing suitable values. Compared
to these, we will apply heuristic techniques to our optimization
problem, which would make our approach simpler and easier
to implement and deploy in a cloud computing system.
A large number of heuristics have been devised for cloud
task scheduling in the past years. For instance, Su et al. [29]
employed a cost-efficient task-scheduling algorithm by means
of two heuristic strategies based on the idea of Pareto dom-
inance. Besides that, some typical heuristic techniques, such as clustering scheduling algorithms (e.g., DSC [30]) and list scheduling algorithms (e.g., DSL [31]), have also been used to optimize resource allocation in the cloud. In contrast to
these schemes, we focus on using metaheuristics for cloud task scheduling, which are designed to find, generate, or select
a heuristic that may provide a sufficiently good solution,
especially with incomplete or imperfect information [32].
In fact, a trend of using metaheuristic algorithms is
emerging rapidly in cloud computing [12], [33]. Various
metaheuristic-based methods such as GA-based, ACO-based,
PSO-based task scheduling algorithms have been proposed.
Examples include, but are not limited to, the following. Aziza et al. [34] proposed a time-shared and a space-shared genetic
algorithm which are demonstrated to be able to outperform
competing scheduling methods in terms of makespan and
processing cost. Based on the ACO algorithm, Li et al. [35]
introduced a load balancing algorithm for task scheduling
in cloud computing. For PSO, Wang et al. [36] used an
improved PSO algorithm to develop an optimal VM placement
approach involving a tradeoff between energy consumption
and global QoS guarantee for data-intensive services. To
further improve the accuracy and efficiency of the above
described metaheuristics in cloud computing, some works have
tried to propose hybrid methods to leverage the strengths
of the existing ones. Chen et al. [37] proposed a PSO-
ACO method for task scheduling, showing it performs better
than a standalone algorithm on makespan. To minimize task
execution time, Liu et al. [38] presented an algorithm that makes use of the global search capability of the genetic algorithm, and
then converts the achieved results into the initial pheromone
of ACO for further optimization. Moreover, Tsai et al. [39]
proposed a hyper-heuristic scheduling algorithm by integrating
the GA, ACO and PSO, etc. into a single framework to reduce
the makespan in the cloud. Although all these approaches have demonstrated their advantages, in contrast to them we focus on exploring the application of the latest metaheuristic, the whale optimization algorithm [26], to cloud task scheduling.
Moreover, we will try to use it on a multi-objective model to
improve the performance of underlying computing systems.
Multi-objective optimization (MOO) is the process of si-
multaneously optimizing two or more conflicting objectives
subject to a number of constraints [40]. In the context of
cloud computing, the multi-objective optimization mainly in-
cludes the completion time, the constraints of QoS, energy
consumption, economic cost, and the system performance [41].
Sheikhalishahi et al. [42] presented a scheduling system based on treating multi-resource optimization as multi-capacity bin
packing. The solution is able to minimize the waiting time and
the slowdown metrics. Ramezani et al. [43] tried to minimize
task execution time, task transferring time, task execution cost
and increase the QoS, using a multi-objective particle swarm
optimization (MOPSO). Zuo et al. [41] introduced a model to
optimize the makespan and resource cost on the basis of the
ACO algorithm. In comparison, we will try to minimize the
task execution time, system load and price cost using WOA.
To date, a lot of efforts have also been put on the designs
of cloud scheduling systems. For example, Mao et al. [44]
proposed an advanced scheduling strategy which could effec-
tively shorten the time and maintain the stability of a system.
Liu et al. [45] proposed a dependency-aware and resource-
efficient scheduling which can achieve low response time and
high resource utilization. In contrast to these, we focus on
an algorithm design rather than system designs. On the other hand, our approach can be applied to all the above designs to process tasks in cloud computing.
Generally, with the significant advantages on implementa-
tion, deployment as well as performance, metaheuristic algo-
rithms have been widely studied on the optimization of cloud
task scheduling in the past years. Although some research
works have used the techniques on MOO in cloud computing,
few of them focus on improving the performance of underlying
computing systems. Moreover, none of them have ever applied
the latest WOA on the MOO problem yet. In this work, we
will try to minimize the task execution time, system load
and price cost with a WOA-based method for cloud task
scheduling. Moreover, to further improve the optimal solution search capability, we also propose several specific optimizations for the proposed approach. To the best of our knowledge, this is the first work to apply WOA to the multi-objective task scheduling problem in cloud computing.
III. MULTI-OBJECTIVE TASK SCHEDULING MODEL
In cloud computing, task scheduling policy will directly
affect the efficiency of resource usage for underlying systems.
Therefore, the allocation of input tasks to computing resources
(e.g., VMs) becomes the key issue for cloud task scheduling.
The logical view of a typical task scheduling process in
a cloud computing system is illustrated in Fig. 1. There, the jobs submitted by users will first be decomposed into a set of computing tasks. We focus on the performance of different scheduling approaches in this paper, therefore we assume that all the tasks are logically independent of each other. Based on this, the process of task scheduling in a cloud environment can be summarized as the following three steps.
Firstly, based on the detailed information of the input tasks
and the underlying available computing resources (e.g., VMs),
tasks and resources will be mapped in accordance with a
certain strategy. Then, following the mapping, the task sched-
uler at the schedule/control layer will generate an optimized
task execution plan to meet the assigned requirements (i.e.,
the optimization objectives). Finally, the optimized plan is
delivered to the underlying task processing layer (e.g., a cloud
computing system) for execution, and the output results will
be sent to the users.
TABLE I
MAIN NOTATIONS IN TASK SCHEDULING MODEL

Notation     Meaning
N            number of tasks to be processed
M            number of VMs
a_{ij}       decision variable to indicate whether the i-th task is assigned to the j-th VM
E_n (E_t)    processing capability vector for VMs (tasks)
S_n (S_t)    load capability vector for VMs (tasks)
C_n (C_t)    resource bandwidth vector for VMs (tasks)
P            price unit
w_i          weight of each cost function, i \in {1, 2, 3}
A. Task and Computing Resource Models
To describe the detailed optimization process of the scheduler, we use the following model under a cloud computing setting. There are a set of M computing nodes (VMs) \{N_1, N_2, ..., N_m\} and a set of N computing tasks \{T_1, T_2, ..., T_n\} with N > M, and the final scheduling result can be represented by a matrix A as follows:

A_{nm} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \cdots & \cdots & \cdots & \cdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{pmatrix}

where a_{ij} is a decision variable: a_{ij} = 1 means that the i-th task is performed on the j-th VM, otherwise a_{ij} = 0, and \sum_{j=1}^{M} a_{ij} = 1 holds for each i \in [1, N].
To characterize the general processing capability and re-
source consumption of a cloud computing system in a task
scheduling scenario, we represent each resource node using
three attributes. The first two are processing capability and
load capability, which can be indicated by the CPU computing
power and the memory size of a node respectively [41].
We employ the concept of resource bandwidth as the third
attribute, to abstract the general resource that a node can
provide. The resource bandwidth of a node can be described
by a function of its first two attributes, i.e., the larger the CPU
power and memory size are, the larger the bandwidth will be.
In terms of the values of the three attributes, memory
resources can be represented using megabytes. For the quan-
tification of CPU resources, we specify the amount of CPU
resources with a point-based system [46], e.g., setting the full capacity of a single core to 100 points. Similarly,
each computing task can be characterized by three attributes
as well, i.e., the required CPU power, memory and resource
bandwidth. On this basis, we can model the underlying computing system as three vectors, i.e., the processing capability vector E_n, the load capability vector S_n and the resource bandwidth vector C_n. Similarly, three vectors are used for the tasks, i.e., E_t, S_t and C_t. For our presentation in the following, we use the notations listed in Table I.
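To make the notation concrete, the following minimal sketch (our own NumPy-based illustration with made-up sizes and values, not from the paper) shows one way to hold the VM and task attribute vectors of Table I and a random assignment matrix A that satisfies the one-VM-per-task constraint.

```python
import numpy as np

rng = np.random.default_rng(0)

M, N = 5, 20                        # number of VMs and tasks (illustrative sizes)

# VM attribute vectors (see Table I): processing capability E_n,
# load capability S_n, and resource bandwidth C_n.
E_n = rng.uniform(100, 400, M)      # CPU points per VM
S_n = rng.uniform(1024, 8192, M)    # memory (MB) per VM
C_n = E_n * S_n / S_n.max()         # bandwidth as a simple function of CPU and memory

# Task attribute vectors: required CPU points, memory, and bandwidth.
E_t = rng.uniform(10, 50, N)
S_t = rng.uniform(64, 512, N)
C_t = E_t * S_t / S_t.max()

def random_assignment(n_tasks: int, n_vms: int) -> np.ndarray:
    """Return a random N x M 0/1 assignment matrix A with exactly one VM per task."""
    A = np.zeros((n_tasks, n_vms), dtype=int)
    A[np.arange(n_tasks), rng.integers(0, n_vms, n_tasks)] = 1
    return A

A = random_assignment(N, M)
assert (A.sum(axis=1) == 1).all()   # constraint: sum_j a_ij = 1 for every task i
```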

[Figure omitted: jobs submitted by users → tasks → task scheduler (schedule/control layer) → resource nodes (task processing layer)]
Fig. 1. A logical view of the task scheduling process in a cloud computing system.
B. Objective Functions
For a given set of tasks, it is expected that the underlying
computing system can process the tasks in a highly efficient
way, in terms of performance and resource consumption.
Namely, the CPU power and memory of the system can be
effectively used while the whole resource utilization cost can
be minimized. Similar to the models with constraints on CPU and memory [47], the time cost function f_1 and the load cost function f_2 in our objectives are represented by Eq. (1) and Eq. (2), respectively. Moreover, the resource cost can be represented by some metrics such as energy consumption and economical cost [41]. Since they can be computed from resource bandwidth (possibly with a very complex function), we just choose the price cost and use a price unit P in this work. Then, the price cost function f_3 can be represented by Eq. (3), where E_{t,i} means the E_t value of the i-th task and E_{n,j} is the E_n value of the j-th VM. This representation is similarly applied to the symbols S and C.
f_1 = \sum_{i=1}^{N} \sum_{j=1}^{M} a_{ij} \frac{E_{t,i}}{E_{n,j}}    (1)

f_2 = \sum_{i=1}^{N} \sum_{j=1}^{M} a_{ij} \frac{S_{t,i}}{S_{n,j}}    (2)

f_3 = \sum_{i=1}^{N} \sum_{j=1}^{M} a_{ij} \frac{E_{t,i}}{E_{n,j}} \times \frac{C_{t,i}}{C_{n,j}} \times P    (3)
In f_1, the time cost is calculated by summing the execution time of each task, which depends on the CPU power. We use the whole execution time rather than the makespan here, because we are more interested in task processing capability from a system angle rather than a service angle, and we assume that our system is efficient enough that a VM will be put to sleep once its assigned tasks have been completed. f_2 is computed on the basis of the required memory over the provided memory on each VM, a value commonly used in simulation software to represent the load capability of a system, where a larger value indicates worse system load performance [47]. For a computing system, the price cost depends not only on the task execution time, but also on the ratio of resource utilization at each time point. Therefore, in f_3, we add the factor C_{t,i}/C_{n,j} for each task on each VM when we calculate the whole cost. Namely, the price cost per time unit of a lightweight task (with a small value of resource bandwidth) will be lower than that of a heavyweight task.
Obviously, our target to minimize the values of the above
three functions is a MOO problem. The reason is that each of
the functions has a different objective that can be conflicting.
For example, we can speed up the processing of a task by
using a powerful CPU, but the price cost would be increased.
Also, a VM with a huge memory will be able to load a large number of tasks, but the whole task execution time could be long if its CPU computing power is low.
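As a worked illustration of Eqs. (1)-(3), the sketch below evaluates the three cost functions for a given assignment matrix. The function name and array layout are our own choices, and the price unit P is an assumed constant rather than a value taken from the paper.

```python
import numpy as np

def cost_functions(A, E_t, E_n, S_t, S_n, C_t, C_n, P=1.0):
    """Evaluate f1 (time), f2 (load) and f3 (price) of Eqs. (1)-(3)
    for an N x M 0/1 assignment matrix A."""
    time_ratio = np.outer(E_t, 1.0 / E_n)        # E_{t,i} / E_{n,j}
    load_ratio = np.outer(S_t, 1.0 / S_n)        # S_{t,i} / S_{n,j}
    bw_ratio   = np.outer(C_t, 1.0 / C_n)        # C_{t,i} / C_{n,j}

    f1 = np.sum(A * time_ratio)                  # Eq. (1): total execution time
    f2 = np.sum(A * load_ratio)                  # Eq. (2): total load cost
    f3 = np.sum(A * time_ratio * bw_ratio) * P   # Eq. (3): total price cost
    return f1, f2, f3
```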
C. Optimization Model
To solve our MOO problem, we first normalize the matrices using the min-max normalization approach, and then represent the above three objective functions as F_1, F_2 and F_3 respectively, as shown below. The reason for this normalization is that the values in E_n, S_n and C_n (and also in E_t, S_t and C_t) are on different scales, so the search path towards an optimal solution would otherwise be skewed, i.e., large values in an f_i would dominate the optimization process and the small ones would be totally ignored.
F_1 = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{M} a_{ij} \frac{E_{t,i}/E_{n,j}}{\max_{i,j}\{E_{t,i}/E_{n,j}\}}    (4)

F_2 = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{M} a_{ij} \frac{S_{t,i}/S_{n,j}}{\max_{i,j}\{S_{t,i}/S_{n,j}\}}    (5)

F_3 = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{M} a_{ij} \frac{(P E_{t,i} C_{t,i}) / (E_{n,j} C_{n,j})}{\max_{i,j}\{(P E_{t,i} C_{t,i}) / (E_{n,j} C_{n,j})\}}    (6)
Different cloud computing systems (or computing resource providers) could have different requirements on the performance of task executions. Therefore, similar to some recent works [34], [41], we employ weight values (i.e., w_i) for the above three functions to make our target function tunable, which leads to the final optimization objective function:

F_{opt} = \min \{w_1 F_1 + w_2 F_2 + w_3 F_3\}    (7)
The value of the weight w_i (i \in {1, 2, 3}) in Eq. (7) can be adjusted based on the requirements in practice. For example, in scenarios with a lightweight workload, we could be more interested in reducing the price cost of a computing system rather than the time and load cost. Then, we can set w_1 = 0.25, w_2 = 0.25 and w_3 = 0.5. In this condition, from a scheduling point of view, it is highly possible that a large number of input tasks will be allocated to economical VMs rather than to VMs with powerful CPUs and large memory, since the improvement in time cost would be very limited in the latter case.
From the perspective of an optimization algorithm, to minimize the value of (w_1 F_1 + w_2 F_2 + w_3 F_3), the larger the value of w_i is, the higher the priority the algorithm gives to reducing the value of F_i. Specifically, when w_1 is much larger than w_2 and w_3, to reduce the value of F_1, it is more likely that all the tasks will be assigned to the VMs with more powerful CPUs. Similarly, if w_2 or w_3 is obviously larger, an optimization algorithm would assign the input tasks to the VMs with larger memory or resource bandwidth, respectively. This kind of configuration could speed up the convergence of the search process of an optimization algorithm, especially in its beginning phase, since the algorithm already has knowledge of the priorities for task assignment. For an extreme case such as the setting with w_1 = 1, w_2 = 0 and w_3 = 0, the search for an optimal solution will be much simpler than in other settings, since the scheduling problem is simplified to a single-objective optimization problem. In this paper, for a general case, we simply set w_1 = w_2 = w_3 = 1/3. With this configuration, our optimization of the task scheduling problem in a cloud computing system can be represented as Eq. (8) below:

F_{opt} = \min \left\{ \frac{1}{3} (F_1 + F_2 + F_3) \right\}    (8)
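A possible implementation of the normalized, weighted objective of Eqs. (4)-(8) is sketched below. The max terms are taken over all task/VM pairs as in the equations, the equal weights of Eq. (8) are used as defaults, and the function and variable names are illustrative rather than the paper's.

```python
import numpy as np

def fitness(A, E_t, E_n, S_t, S_n, C_t, C_n, P=1.0, w=(1/3, 1/3, 1/3)):
    """Weighted, min-max normalized objective F_opt of Eqs. (4)-(8);
    lower values indicate better scheduling plans."""
    N = A.shape[0]
    time_ratio  = np.outer(E_t, 1.0 / E_n)                    # E_{t,i}/E_{n,j}
    load_ratio  = np.outer(S_t, 1.0 / S_n)                    # S_{t,i}/S_{n,j}
    price_ratio = np.outer(P * E_t * C_t, 1.0 / (E_n * C_n))  # (P E_{t,i} C_{t,i})/(E_{n,j} C_{n,j})

    F1 = np.sum(A * time_ratio  / time_ratio.max())  / N      # Eq. (4)
    F2 = np.sum(A * load_ratio  / load_ratio.max())  / N      # Eq. (5)
    F3 = np.sum(A * price_ratio / price_ratio.max()) / N      # Eq. (6)
    return w[0] * F1 + w[1] * F2 + w[2] * F3                  # Eqs. (7)/(8)
```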
IV. THE PROPOSED APPROACH - IWC
In this section, we introduce how to apply the WOA
algorithm to solve the optimization problem. Then, we propose
the IWC with two optimization strategies to strengthen the
searching capability of the WOA-based method.
A. The Whale Optimization Algorithm
In the WOA algorithm, a humpback whale in the search space is a candidate solution to the optimization problem, also called a search agent, and WOA utilizes a set of search agents to determine the possible or approximately global optimal solution. The search process for a given problem begins with a set of random solutions, and the candidate solutions are updated by the optimization rules until the end condition is met. The WOA algorithm can be divided into three main stages: encircling prey, bubble-net attack, and search for prey. Their mathematical representations are given below.
1) Encircling Prey: In the initial stage, humpback
whales do not know the optimal location in the search space
when the prey is surrounded. In WOA, the current best solution
is considered as the target prey and the whale closest to
the prey is considered as the best search agent. Then, other
individual whales may approach the target prey and gradually
update their locations. This behavior is represented in the two
functions below.
\vec{D} = |\vec{C} \times \vec{X}^{*}(t) - \vec{X}(t)|    (9)

and

\vec{X}(t+1) = \vec{X}^{*}(t) - \vec{A} \times \vec{D}    (10)

Here, \vec{D} indicates the distance vector from the search agent to the target prey, t is the current iteration number, \vec{X}^{*} is the local optimal solution and \vec{X} is the position vector. \vec{C} and \vec{A} are the coefficient vectors and their calculations are defined as:

\vec{C} = 2 \times \vec{r}    (11)

and

\vec{A} = 2\vec{a} \times \vec{r} - \vec{a}    (12)

where r is a random number between 0 and 1, and a represents a value that decreases linearly from 2 to 0 with the current iteration number t over the maximum number of iterations t_{max}, as shown below:

a = 2 - \frac{2t}{t_{max}}    (13)
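For illustration, the following sketch computes the coefficient a and the vectors A and C, and performs one shrinking-encircling move of Eqs. (9)-(13) for a search agent encoded as a real-valued NumPy vector. This is our own minimal rendering of the standard WOA update, not the paper's IWC implementation.

```python
import numpy as np

rng = np.random.default_rng()

def encircling_update(X, X_best, t, t_max):
    """One shrinking-encircling move (Eqs. (9)-(13)) of a search agent X
    towards the current best agent X_best."""
    dim = X.shape[0]
    a = 2.0 - 2.0 * t / t_max          # Eq. (13): decreases linearly from 2 to 0
    r = rng.random(dim)
    A = 2.0 * a * r - a                # Eq. (12)
    C = 2.0 * rng.random(dim)          # Eq. (11)
    D = np.abs(C * X_best - X)         # Eq. (9)
    return X_best - A * D              # Eq. (10)
```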
2) Bubble-net Attack (exploitation phase): The behavior
of whales’ bubble-net attack is modeled based on the ideas
of shrinking encircling and spiral position updating. We briefly introduce their principles below.
Shrinking encircling. From Eq. (10), we can see that the
whales will shrink their encircling when |A| < 1. This means
that the individual whales will approach the whale in the
current best position, i.e., swim around the prey in a gradual
contraction of a circle. The larger the value of |A| is, the bigger the steps the whales will take, and vice versa.
Spiral position updating. Each individual humpback whale
first calculates its distance from the current optimal whale and
then moves in a spiral shaped path. The mathematical model
of the position update process is described as:
\vec{X}(t+1) = \vec{D}' \times e^{lb} \times \cos(2\pi l) + \vec{X}^{*}(t)    (14)

where \vec{D}' = |\vec{X}^{*}(t) - \vec{X}(t)| is a vector indicating the distance from the individual whale to the best whale (current best found), b is a constant and l is a random number between -1 and 1.
In order to mimic the two behaviors simultaneously, it is assumed that the probability of a whale updating its location based on the contraction path and on the spiral path is 0.5 each, which can be described as

\vec{X}(t+1) = \begin{cases} \vec{X}^{*}(t) - \vec{A} \times \vec{D}, & p < 0.5 \\ \vec{D}' \times e^{lb} \times \cos(2\pi l) + \vec{X}^{*}(t), & p \geq 0.5 \end{cases}    (15)
where p is a randomly generated number between 0 and 1.
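The exploitation phase of Eq. (15) could then be sketched as below, where the spiral constant b is assumed to be 1 (the text above only states that b is a constant) and the variable names follow the equations.

```python
import numpy as np

rng = np.random.default_rng()

def bubble_net_update(X, X_best, t, t_max, b=1.0):
    """Exploitation-phase position update of Eq. (15):
    shrinking encircling with probability 0.5, spiral move otherwise."""
    dim = X.shape[0]
    a = 2.0 - 2.0 * t / t_max
    A = 2.0 * a * rng.random(dim) - a
    C = 2.0 * rng.random(dim)

    if rng.random() < 0.5:                       # shrinking encircling, Eqs. (9)-(10)
        D = np.abs(C * X_best - X)
        return X_best - A * D
    # spiral position updating, Eq. (14)
    D_prime = np.abs(X_best - X)
    l = rng.uniform(-1.0, 1.0)
    return D_prime * np.exp(b * l) * np.cos(2 * np.pi * l) + X_best
```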
3) Search for Prey (exploration phase): To ensure that an approximately global optimal solution can be achieved, the search agents are pushed away from each other when |A| > 1. In this case, the position of the current optimal search agent will be replaced by a randomly selected search agent, and the corresponding mathematical model is expressed as

\vec{D} = |\vec{C} \times \vec{X}_{rand} - \vec{X}(t)|

\vec{X}(t+1) = \vec{X}_{rand} - \vec{A} \times \vec{D}

where \vec{X}_{rand} is the position vector of a search agent randomly selected from the current population.
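Putting the phases together, one full WOA position update and a simple decoding of a continuous whale position into a scheduling plan might look like the sketch below. The rounding-based decoding and the use of the norm of A for the |A| test are our own assumptions for illustration; the paper's actual whale-to-schedule mapping and the IWC-specific improvements are introduced later in the article and may differ.

```python
import numpy as np

rng = np.random.default_rng()

def woa_step(X, X_best, population, t, t_max, b=1.0):
    """One WOA position update for a single agent, combining exploration
    (|A| >= 1, random reference agent) and exploitation (Eqs. (9)-(15))."""
    dim = X.shape[0]
    a = 2.0 - 2.0 * t / t_max
    A = 2.0 * a * rng.random(dim) - a
    C = 2.0 * rng.random(dim)

    if rng.random() < 0.5:
        # Treat |A| as the norm of the coefficient vector (one common convention).
        X_ref = (population[rng.integers(len(population))]
                 if np.linalg.norm(A) >= 1.0 else X_best)
        D = np.abs(C * X_ref - X)
        return X_ref - A * D
    l = rng.uniform(-1.0, 1.0)
    return np.abs(X_best - X) * np.exp(b * l) * np.cos(2 * np.pi * l) + X_best

def decode(X, n_vms):
    """Illustrative decoding (our assumption, not the paper's scheme): round and
    clamp each continuous position entry to a VM index, one entry per task."""
    idx = np.clip(np.rint(X), 0, n_vms - 1).astype(int)
    A = np.zeros((X.shape[0], n_vms), dtype=int)
    A[np.arange(X.shape[0]), idx] = 1
    return A
```

In a straightforward WOA loop, this step would be repeated for every agent over t_max iterations, evaluating each decoded plan with the fitness function of Section III-C and keeping the best plan found.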
