
Showing papers on "Scheduling (computing)" published in 2021


Journal ArticleDOI
Wenqi Shi1, Sheng Zhou1, Zhisheng Niu1, Miao Jiang2, Lu Geng2 
TL;DR: In this paper, a joint device scheduling and resource allocation policy is proposed to maximize the model accuracy within a given total training time budget for latency constrained wireless FL, where a lower bound on the reciprocal of the training performance loss is derived.
Abstract: In federated learning (FL), devices contribute to the global training by uploading their local model updates via wireless channels. Due to limited computation and communication resources, device scheduling is crucial to the convergence rate of FL. In this paper, we propose a joint device scheduling and resource allocation policy to maximize the model accuracy within a given total training time budget for latency constrained wireless FL. A lower bound on the reciprocal of the training performance loss, in terms of the number of training rounds and the number of scheduled devices per round, is derived. Based on the bound, the accuracy maximization problem is solved by decoupling it into two sub-problems. First, given the scheduled devices, the optimal bandwidth allocation suggests allocating more bandwidth to the devices with worse channel conditions or weaker computation capabilities. Then, a greedy device scheduling algorithm is introduced, which selects the device consuming the least updating time obtained by the optimal bandwidth allocation in each step, until the lower bound begins to increase, meaning that scheduling more devices will degrade the model accuracy. Experiments show that the proposed policy outperforms state-of-the-art scheduling policies under extensive settings of data distributions and cell radius.
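
The greedy scheduling step can be illustrated with the following minimal Python sketch. The surrogate bound, the update-time values, and all function names are illustrative assumptions rather than the paper's actual formulas.

"""Illustrative greedy device-scheduling loop (not the authors' implementation).
The surrogate bound below merely mimics the trade-off in the abstract: more
devices per round help accuracy, but a slower round eats into the time budget."""

def surrogate_bound(num_devices, round_time, time_budget, c1=10.0, c2=1.0):
    # Hypothetical stand-in for the paper's bound; smaller is better here.
    rounds = time_budget / round_time
    return c1 / (num_devices * rounds) + c2 / rounds

def greedy_schedule(update_times, time_budget):
    """update_times: device -> update time under the optimal bandwidth allocation."""
    remaining = dict(update_times)
    scheduled, best = [], float("inf")
    while remaining:
        dev = min(remaining, key=remaining.get)          # least updating time first
        candidate = scheduled + [dev]
        round_time = max(update_times[d] for d in candidate)
        bound = surrogate_bound(len(candidate), round_time, time_budget)
        if bound >= best:                                # bound begins to increase:
            break                                        # more devices would hurt accuracy
        best, scheduled = bound, candidate
        del remaining[dev]
    return scheduled

if __name__ == "__main__":
    times = {"d1": 0.8, "d2": 1.1, "d3": 2.5, "d4": 4.0}  # seconds per round
    print(greedy_schedule(times, time_budget=100.0))      # -> ['d1', 'd2']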

228 citations


Journal ArticleDOI
TL;DR: This work presents a novel hybrid antlion optimization algorithm with elite-based differential evolution for solving multi-objective task scheduling problems in cloud computing environments and reveals that MALO outperformed other well-known optimization algorithms.
Abstract: Efficient task scheduling is considered as one of the main critical challenges in cloud computing. Task scheduling is an NP-complete problem, so finding the best solution is challenging, particularly for large task sizes. In the cloud computing environment, several tasks may need to be efficiently scheduled on various virtual machines by minimizing makespan and simultaneously maximizing resource utilization. We present a novel hybrid antlion optimization algorithm with elite-based differential evolution for solving multi-objective task scheduling problems in cloud computing environments. In the proposed method, which we refer to as MALO, the multi-objective nature of the problem derives from the need to simultaneously minimize makespan while maximizing resource utilization. The antlion optimization algorithm was enhanced by utilizing elite-based differential evolution as a local search technique to improve its exploitation ability and to avoid getting trapped in local optima. Two experimental series were conducted on synthetic and real trace datasets using the CloudSim tool kit. The results revealed that MALO outperformed other well-known optimization algorithms. MALO converged faster than the other approaches for larger search spaces, making it suitable for large scheduling problems. Finally, the results were analyzed using statistical t-tests, which showed that MALO obtained a significant improvement in the results.

223 citations


Journal ArticleDOI
TL;DR: A self-adaptive differential evolution algorithm is developed for addressing a single BPM scheduling problem with unequal release times and job sizes, and results demonstrate that the proposed self-adaptive algorithm is more effective than other algorithms for this scheduling problem.
Abstract: Batch-processing machines (BPMs) can process a number of jobs at a time, which can be found in many industrial systems. This article considers a single BPM scheduling problem with unequal release times and job sizes. The goal is to assign jobs into batches without breaking the machine capacity constraint and then sort the batches to minimize the makespan. A self-adaptive differential evolution algorithm is developed for addressing the problem. In our proposed algorithm, mutation operators are adaptively chosen based on their historical performances. Also, control parameter values are adaptively determined based on their historical performances. Our proposed algorithm is compared to CPLEX, existing metaheuristics for this problem and conventional differential evolution algorithms through comprehensive experiments. The experimental results demonstrate that our proposed self-adaptive algorithm is more effective than other algorithms for this scheduling problem.
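
The adaptive choice of mutation operators based on historical performance can be sketched as follows. The operator pool, the credit rule, and the toy continuous objective are assumptions for illustration; the actual algorithm works on discrete batch schedules.

"""Minimal sketch of adaptive mutation-operator selection in differential
evolution. Operators, credit rule, and the sphere objective are illustrative."""
import numpy as np

rng = np.random.default_rng(0)

def rand_1(pop, fit, F=0.5):
    a, b, c = rng.choice(len(pop), 3, replace=False)
    return pop[a] + F * (pop[b] - pop[c])

def best_1(pop, fit, F=0.5):
    best = pop[int(np.argmin(fit))]
    a, b = rng.choice(len(pop), 2, replace=False)
    return best + F * (pop[a] - pop[b])

def evolve(objective, dim=10, pop_size=30, generations=200):
    operators = [rand_1, best_1]
    success = np.ones(len(operators))                    # historical success counts
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fit = np.array([objective(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            k = rng.choice(len(operators), p=success / success.sum())
            mutant = operators[k](pop, fit)
            cross = rng.random(dim) < 0.9                # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            f = objective(trial)
            if f < fit[i]:                               # selection + credit assignment
                pop[i], fit[i] = trial, f
                success[k] += 1
    return fit.min()

if __name__ == "__main__":
    print("best sphere value:", evolve(lambda x: float(np.sum(x ** 2))))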

137 citations


Journal ArticleDOI
TL;DR: This work designs novel scheduling and resource allocation policies that decide on the subset of the devices to transmit at each round, and how the resources should be allocated among the participating devices, not only based on their channel conditions, but also on the significance of their local model updates.
Abstract: We study federated learning (FL) at the wireless edge, where power-limited devices with local datasets collaboratively train a joint model with the help of a remote parameter server (PS). We assume that the devices are connected to the PS through a bandwidth-limited shared wireless channel. At each iteration of FL, a subset of the devices are scheduled to transmit their local model updates to the PS over orthogonal channel resources, while each participating device must compress its model update to accommodate its link capacity. We design novel scheduling and resource allocation policies that decide on the subset of the devices to transmit at each round, and how the resources should be allocated among the participating devices, not only based on their channel conditions, but also on the significance of their local model updates. We then establish convergence of a wireless FL algorithm with device scheduling, where devices have limited capacity to convey their messages. The results of numerical experiments show that the proposed scheduling policy, based on both the channel conditions and the significance of the local model updates, provides better long-term performance than scheduling policies based only on either of the two metrics individually. Furthermore, we observe that when the data is independent and identically distributed (i.i.d.) across devices, selecting a single device at each round provides the best performance, while when the data distribution is non-i.i.d., scheduling multiple devices at each round improves the performance. This observation is verified by the convergence result, which shows that the number of scheduled devices should increase for a less diverse and more biased data distribution.
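
A minimal sketch of a selection rule that weighs both channel conditions and update significance is given below; the scoring formula, the normalization, and the weight alpha are assumptions, not the policy derived in the paper.

"""Illustrative device selection combining channel quality with the significance
(here, the l2 norm) of each local update. Weights and normalization are assumed."""
import numpy as np

def select_devices(channel_gains, update_norms, k, alpha=0.5):
    """Return indices of the k devices to schedule this round."""
    g = np.asarray(channel_gains, dtype=float)
    u = np.asarray(update_norms, dtype=float)
    g = (g - g.min()) / (np.ptp(g) + 1e-12)      # scale both metrics to [0, 1]
    u = (u - u.min()) / (np.ptp(u) + 1e-12)
    score = alpha * g + (1.0 - alpha) * u
    return np.argsort(score)[::-1][:k]

if __name__ == "__main__":
    gains = [0.2, 0.9, 0.5, 0.7]                 # better channel -> cheaper upload
    norms = [3.0, 0.1, 2.0, 1.5]                 # larger update -> more informative
    print(select_devices(gains, norms, k=2))

Consistent with the abstract's observation, k would be kept small (even 1) when data are i.i.d. and increased as the data distribution becomes more biased.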

135 citations


Journal ArticleDOI
TL;DR: A many-objective intelligent algorithm with a sine function is presented to implement the model, based on the observation that the variation tendency of the diversity strategy in the population is similar to the sine function; the algorithm demonstrates excellent scheduling efficiency and hence enhances security.
Abstract: Internet of Things (IoT) is a huge network that establishes ubiquitous connections between smart devices and objects. The flourishing of IoT leads to an unprecedented data explosion; traditional data storing and processing techniques suffer from low efficiency, and if the data are used maliciously, further security losses may result. Multicloud is a high-performance secure computing platform that combines multiple cloud providers for data processing, and the distributed multicloud platform ensures the security of data to some extent. Based on multicloud and task scheduling in IoT, this article constructs a many-objective distributed scheduling model, which includes six objectives: total time, cost, cloud throughput, energy consumption, resource utilization, and load balancing. Furthermore, this article presents a many-objective intelligent algorithm with a sine function to implement the model, based on the observation that the variation tendency of the diversity strategy in the population is similar to the sine function. The experimental results demonstrate excellent scheduling efficiency, hence enhancing security. This work provides a new idea for addressing the difficult problem of data processing in IoT.

132 citations


Journal ArticleDOI
TL;DR: In this paper, the analysis and synthesis issues have gained widespread attention for complex dynamical networks (CDNs) over the past few years, and some challenges including protocol-based scheduling, s...
Abstract: The analysis and synthesis issues have gained widespread attention for complex dynamical networks (CDNs) over the past few years. Accordingly, some challenges including protocol-based scheduling, s...

125 citations


Journal ArticleDOI
TL;DR: A two-stage cooperative evolutionary algorithm with problem-specific knowledge called TS-CEA is proposed to address energy-efficient scheduling of the no-wait flow-shop problem (EENWFSP) with the criteria of minimizing both makespan and total energy consumption.
Abstract: Green scheduling in the manufacturing industry has attracted increasing attention in academic research and industrial applications with a focus on energy saving. As a typical scheduling problem, the no-wait flow-shop scheduling has been extensively studied due to its wide industrial applications. However, energy consumption is usually ignored in the study of typical scheduling problems. In this article, a two-stage cooperative evolutionary algorithm with problem-specific knowledge called TS-CEA is proposed to address energy-efficient scheduling of the no-wait flow-shop problem (EENWFSP) with the criteria of minimizing both makespan and total energy consumption. In TS-CEA, two constructive heuristics are designed to generate a desirable initial solution after analyzing the properties of the problem. In the first stage of TS-CEA, an iterative local search strategy (ILS) is employed to explore potential extreme solutions. Moreover, a hybrid neighborhood structure is designed to improve the quality of the solution. In the second stage of TS-CEA, a mutation strategy based on critical path knowledge is proposed to extend the extreme solutions to the Pareto front. Moreover, a co-evolutionary closed-loop system is generated with ILS and mutation strategies in the iteration process. Numerical results demonstrate the effectiveness and efficiency of TS-CEA in solving the EENWFSP.

123 citations


Journal ArticleDOI
TL;DR: This study develops an unceRtainty-aware Online Scheduling Algorithm (ROSA) to schedule dynamic and multiple workflows with deadlines; ROSA performs better than the five compared algorithms with respect to costs, deviation, resource utilization, and fairness.
Abstract: Scheduling workflows in cloud service environments has attracted great enthusiasm, and various approaches have been reported up to now. However, these approaches often ignore the uncertainties in the scheduling environment, such as the uncertain task start/execution/finish times, the uncertain data transfer time among tasks, and the sudden arrival of new workflows. Ignoring these uncertain factors often leads to the violation of workflow deadlines and increases the service renting costs of executing workflows. This study is devoted to improving the performance of cloud service platforms by minimizing uncertainty propagation when scheduling workflow applications that have both uncertain task execution time and data transfer time. To be specific, a novel scheduling architecture is designed to control the count of workflow tasks directly waiting on each service instance (e.g., virtual machine and container). Once a task is completed, its start/execution/finish times are available, which means its uncertainties disappear and will not affect the subsequent waiting tasks on the same service instance. Thus, controlling the count of waiting tasks on service instances can prevent the propagation of uncertainties. Based on this architecture, we develop an unceRtainty-aware Online Scheduling Algorithm (ROSA) to schedule dynamic and multiple workflows with deadlines. The proposed ROSA skillfully integrates both proactive and reactive strategies. During the execution of the generated baseline schedules, the reactive strategy in ROSA is dynamically invoked to produce new proactive baseline schedules for dealing with uncertainties. Then, on the basis of real-world workflow traces, five groups of simulation experiments are carried out to compare ROSA with five typical algorithms. The comparison results reveal that ROSA performs better than the five compared algorithms with respect to costs (up to 56 percent), deviation (up to 70 percent), resource utilization (up to 37 percent), and fairness (up to 37 percent).
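
The architectural idea of capping the number of tasks waiting on each service instance can be illustrated with the toy dispatcher below; the cap value, the instance model, and the estimated-finish-time rule are assumptions, not ROSA itself.

"""Toy dispatcher: cap the number of tasks directly waiting on each service
instance so that uncertainty in one task's finish time cannot propagate far
down the queue. Instance names and numbers are illustrative."""
from collections import deque

MAX_WAITING = 2  # at most this many tasks may wait on one instance

class Instance:
    def __init__(self, name, speed):
        self.name, self.speed = name, speed
        self.waiting = deque()

    def estimated_finish(self, task_size):
        backlog = sum(self.waiting) + task_size
        return backlog / self.speed

def dispatch(task_size, instances):
    """Send the task to the fastest-finishing eligible instance, or defer it."""
    eligible = [i for i in instances if len(i.waiting) < MAX_WAITING]
    if not eligible:
        return None                      # hold the task until a slot frees up
    target = min(eligible, key=lambda i: i.estimated_finish(task_size))
    target.waiting.append(task_size)
    return target.name

if __name__ == "__main__":
    vms = [Instance("vm-1", speed=2.0), Instance("vm-2", speed=1.0)]
    for size in [4, 2, 6, 3, 1]:
        print(size, "->", dispatch(size, vms))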

116 citations


Journal ArticleDOI
TL;DR: An energy-aware model based on the marine predators algorithm (MPA) is proposed for tackling task scheduling in fog computing (TSFC) to improve the quality of service (QoS) required by users.
Abstract: To improve the quality of service (QoS) needed by several application areas, Internet of Things (IoT) tasks are offloaded to fog computing instead of the cloud. However, the availability of ongoing energy for fog computing servers is one of the constraints for IoT applications, because transmitting the huge quantity of data generated by IoT devices produces network bandwidth overhead and slows down the response time of the analysis. In this article, an energy-aware model based on the marine predators algorithm (MPA) is proposed for tackling task scheduling in fog computing (TSFC) to improve the QoS required by users. In addition to the standard MPA, we propose two other versions. The first version, called modified MPA (MMPA), improves the exploitation capability of MPA by using the last updated positions instead of the last best ones. The second version improves MMPA with a ranking-strategy-based reinitialization and mutation toward the best: after a predefined number of iterations, half of the population is reinitialized randomly to escape local optima, and the other half is mutated toward the best-so-far solution. Because MPA was designed for continuous problems whereas TSFC is a discrete one, a normalization and scaling phase is used to convert the standard MPA into a discrete algorithm. The three versions are compared with several other metaheuristic algorithms and genetic algorithms on various performance metrics such as energy consumption, makespan, flow time, and carbon dioxide emission rate. The improved MMPA outperforms all the other algorithms as well as the other two versions.
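
One common way to realize the normalization-and-scaling step that turns a continuous search-agent position into a discrete task-to-node assignment is sketched below; the exact mapping used in the paper may differ.

"""Sketch of a normalization-and-scaling mapping from a continuous position
vector (one coordinate per task) to fog node indices. Illustrative only."""
import numpy as np

def position_to_assignment(position, num_nodes):
    """Map each continuous coordinate (one per task) to a fog node index."""
    p = np.asarray(position, dtype=float)
    normalized = (p - p.min()) / (np.ptp(p) + 1e-12)       # into [0, 1]
    scaled = normalized * (num_nodes - 1)                   # into [0, n-1]
    return np.rint(scaled).astype(int)                      # discrete node ids

if __name__ == "__main__":
    position = np.array([0.13, -1.7, 2.4, 0.02, 1.1])       # 5 tasks
    print(position_to_assignment(position, num_nodes=3))    # -> [1 0 2 1 1]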

110 citations


Journal ArticleDOI
TL;DR: A novel two-stage GPHH framework with feature selection is designed to evolve scheduling heuristics only with the selected features for DFJSS automatically, and the proposed algorithm can reach comparable scheduling heuristic quality with much shorter training time.
Abstract: Dynamic flexible job-shop scheduling (DFJSS) is a challenging combinatorial optimization problem that takes the dynamic environment into account. Genetic programming hyperheuristics (GPHH) have been widely used to evolve scheduling heuristics for job-shop scheduling. A proper selection of the terminal set is a critical factor for the success of GPHH. However, there is a wide range of features that can capture different characteristics of the job-shop state. Moreover, the importance of a feature is unclear from one scenario to another. The irrelevant and redundant features may lead to performance limitations. Feature selection is an important task to select relevant and complementary features. However, little work has considered feature selection in GPHH for DFJSS. In this article, a novel two-stage GPHH framework with feature selection is designed to evolve scheduling heuristics only with the selected features for DFJSS automatically. Meanwhile, individual adaptation strategies are proposed to utilize the information of both the selected features and the investigated individuals during the feature selection process. The results show that the proposed algorithm can successfully achieve more interpretable scheduling heuristics with fewer unique features and smaller sizes. In addition, the proposed algorithm can reach comparable scheduling heuristic quality with much shorter training time.

105 citations


Journal ArticleDOI
TL;DR: This research proposes a collaborative computation method to alleviate the heavy computing pressure that the single mobile edge server computing mode faces as the amount of data increases.

Journal ArticleDOI
TL;DR: An edge caching and computation management problem is formulated that jointly optimizes the service caching, request scheduling, and resource allocation strategies; the resulting approach achieves close-to-optimal delay performance without relying on any prior knowledge of future network information.
Abstract: Vehicular Edge Computing (VEC) is expected to be an effective solution to meet the ultra-low delay requirements of many emerging Internet of Vehicles (IoV) services by shifting the service caching and the computation capacities to the network edge. However, due to the constraints of the multidimensional (storage-computing-communication) resource capacities and the cost budgets of vehicles, two main issues need to be addressed: 1) How to collaboratively optimize the service caching decision among edge nodes to better reap the benefits of the storage resource and save the time-correlated service reconfiguration cost? 2) How to allocate resources among various vehicles and decide where vehicular requests are scheduled, so as to improve the efficiency of computing and communication resource utilization? In this paper, we formulate an edge caching and computation management problem that jointly optimizes the service caching, the request scheduling, and the resource allocation strategies. Our focus is to minimize the time-average service response delay of the randomly arriving service requests in a cost-efficient way. To cope with the dynamic and unpredictable challenges of IoVs, we leverage the combined power of Lyapunov optimization, matching theory, and the consensus alternating direction method of multipliers to solve the problem in an online and distributed manner. Theoretical analysis shows that the developed approach achieves a close-to-optimal delay performance without relying on any prior knowledge of the future network information. Moreover, simulation results validate the theoretical analysis and demonstrate that our algorithm outperforms the baselines substantially.

Journal ArticleDOI
TL;DR: A container scheduling system that enables serverless platforms to make efficient use of edge infrastructures and a method to automatically fine-tune the weights of scheduling constraints to optimize high-level operational objectives such as minimizing task execution time, uplink usage, or cloud execution cost is presented.

Journal ArticleDOI
TL;DR: An outage management strategy is proposed to enhance distribution system resilience through network reconfiguration and distributed energy resources (DERs) scheduling; it has advantages when applied to distribution systems with several normally-open tie lines and low DER penetration.

Journal ArticleDOI
TL;DR: A multiobjective optimization method for DGDCs is proposed to maximize the profit of DGDC providers and minimize the average task loss possibility of all applications by jointly determining the split of tasks among multiple ISPs and the task service rates of each GDC.
Abstract: The industry of data centers is the fifth largest energy consumer in the world. Distributed green data centers (DGDCs) consume 300 billion kWh per year to provide different types of heterogeneous services to global users. Users around the world bring revenue to DGDC providers according to actual quality of service (QoS) of their tasks. Their tasks are delivered to DGDCs through multiple Internet service providers (ISPs) with different bandwidth capacities and unit bandwidth price. In addition, prices of power grid, wind, and solar energy in different GDCs vary with their geographical locations. Therefore, it is highly challenging to schedule tasks among DGDCs in a high-profit and high-QoS way. This work designs a multiobjective optimization method for DGDCs to maximize the profit of DGDC providers and minimize the average task loss possibility of all applications by jointly determining the split of tasks among multiple ISPs and task service rates of each GDC. A problem is formulated and solved with a simulated-annealing-based biobjective differential evolution (SBDE) algorithm to obtain an approximate Pareto-optimal set. The method of minimum Manhattan distance is adopted to select a knee solution that specifies the Pareto-optimal task service rates and task split among ISPs for DGDCs in each time slot. Real-life data-based experiments demonstrate that the proposed method achieves lower task loss of all applications and larger profit than several existing scheduling algorithms. Note to Practitioners —This work aims to maximize the profit and minimize the task loss for DGDCs powered by renewable energy and smart grid by jointly determining the split of tasks among multiple ISPs. Existing task scheduling algorithms fail to jointly consider and optimize the profit of DGDC providers and QoS of tasks. Therefore, they fail to intelligently schedule tasks of heterogeneous applications and allocate infrastructure resources within their response time bounds. In this work, a new method that tackles drawbacks of existing algorithms is proposed. It is achieved by adopting the proposed SBDE algorithm that solves a multiobjective optimization problem. Simulation experiments demonstrate that compared with three typical task scheduling approaches, it increases profit and decreases task loss. It can be readily and easily integrated and implemented in real-life industrial DGDCs. The future work needs to investigate the real-time green energy prediction with historical data and further combine prediction and task scheduling together to achieve greener and even net-zero-energy data centers.
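
The knee-selection step via minimum Manhattan distance can be sketched as follows; the normalization and the toy objective values are illustrative assumptions, not the paper's data.

"""Minimal sketch of picking a knee solution from an approximate Pareto set
(maximize profit, minimize task loss) by minimum Manhattan distance."""
import numpy as np

def knee_solution(profits, losses):
    p = np.asarray(profits, dtype=float)
    loss = np.asarray(losses, dtype=float)
    # Convert both objectives to "smaller is better" and normalize to [0, 1].
    obj = np.column_stack([p.max() - p, loss])
    obj = (obj - obj.min(axis=0)) / (np.ptp(obj, axis=0) + 1e-12)
    distances = obj.sum(axis=1)          # Manhattan distance to the ideal point (0, 0)
    return int(np.argmin(distances))

if __name__ == "__main__":
    profits = [100, 140, 180, 200]       # higher is better
    losses = [0.01, 0.02, 0.05, 0.12]    # lower is better
    print("knee index:", knee_solution(profits, losses))   # -> 2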

Journal ArticleDOI
TL;DR: This article considers joint charging scheduling, order dispatching, and vehicle rebalancing for large-scale shared EV fleet operator and model the joint decision making as a partially observable Markov decision process (POMDP) and applies deep reinforcement learning (DRL) combined with binary linear programming (BLP) to develop a near-optimal solution.
Abstract: With the emerging concept of the sharing economy, shared electric vehicles (EVs) are playing a more and more important role in future mobility-on-demand traffic systems. This article considers joint charging scheduling, order dispatching, and vehicle rebalancing for a large-scale shared EV fleet operator. To maximize the welfare of the fleet operator, we model the joint decision making as a partially observable Markov decision process (POMDP) and apply deep reinforcement learning (DRL) combined with binary linear programming (BLP) to develop a near-optimal solution. The neural network is used to evaluate the state value of EVs at different times, locations, and states of charge. Based on the state value, dynamic electricity prices, and order information, the online scheduling is modeled as a BLP problem where the decision variables represent whether an EV will 1) take an order, 2) rebalance to a position, or 3) charge. We also propose a constrained rebalancing method to improve the exploration efficiency of training. Moreover, we provide a tabular method with proved convergence as a fallback option to demonstrate the near-optimal characteristics of the proposed approach. Simulation experiments with real-world data from Haikou City verify the effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: An improved artificial immune system (IAIS) algorithm is proposed to solve a special case of the flexible job shop scheduling problem (FJSP), where the processing time of each job is a nonsymmetric triangular interval T2FS (IT2FS) value.
Abstract: In practical applications, particularly in flexible manufacturing systems, there is a high level of uncertainty. A type-2 fuzzy logic system (T2FS) has several parameters and an enhanced ability to handle high levels of uncertainty. This article proposes an improved artificial immune system (IAIS) algorithm to solve a special case of the flexible job shop scheduling problem (FJSP), where the processing time of each job is a nonsymmetric triangular interval T2FS (IT2FS) value. First, a novel affinity calculation method considering the IT2FS values is developed. Then, four problem-specific initialization heuristics are designed to enhance both quality and diversity. To enhance the exploitation abilities, six local search approaches are conducted for the routing and scheduling vectors, respectively. Next, a simulated annealing method is embedded to accept antibodies with low affinity, which can enhance the exploration abilities of the algorithm. Moreover, a novel population diversity heuristic is presented to eliminate antibodies with high crowding values. Five efficient algorithms are selected for a detailed comparison, and the simulation results demonstrate that the proposed IAIS algorithm is effective for IT2FS FJSPs.

Journal ArticleDOI
TL;DR: An asynchronous federated learning (AFL) framework for multi-UAV-enabled networks is developed, which can provide asynchronous distributed computing by enabling model training locally without transmitting raw sensitive data to UAV servers.
Abstract: Unmanned aerial vehicles (UAVs) are capable of serving as flying base stations (BSs) for supporting data collection, machine learning (ML) model training, and wireless communications. However, due to the privacy concerns of devices and limited computation or communication resource of UAVs, it is impractical to send raw data of devices to UAV servers for model training. Moreover, due to the dynamic channel condition and heterogeneous computing capacity of devices in UAV-enabled networks, the reliability and efficiency of data sharing require to be further improved. In this paper, we develop an asynchronous federated learning (AFL) framework for multi-UAV-enabled networks, which can provide asynchronous distributed computing by enabling model training locally without transmitting raw sensitive data to UAV servers. The device selection strategy is also introduced into the AFL framework to keep the low-quality devices from affecting the learning efficiency and accuracy. Moreover, we propose an asynchronous advantage actor-critic (A3C) based joint device selection, UAVs placement, and resource management algorithm to enhance the federated convergence speed and accuracy. Simulation results demonstrate that our proposed framework and algorithm achieve higher learning accuracy and faster federated execution time compared to other existing solutions.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated a computing task scheduling problem in space-air-ground integrated network (SAGIN) for delay-oriented Internet of Things (IoT) services.
Abstract: In this article, we investigate a computing task scheduling problem in space-air-ground integrated network (SAGIN) for delay-oriented Internet of Things (IoT) services. In the considered scenario, an unmanned aerial vehicle (UAV) collects computing tasks from IoT devices and then makes online offloading decisions, in which the tasks can be processed at the UAV or offloaded to the nearby base station or the remote satellite. Our objective is to design a task scheduling policy that minimizes offloading and computing delay of all tasks given the UAV energy capacity constraint. To this end, we first formulate the online scheduling problem as an energy-constrained Markov decision process (MDP). Then, considering the task arrival dynamics, we develop a novel deep risk-sensitive reinforcement learning algorithm. Specifically, the algorithm evaluates the risk, which measures the energy consumption that exceeds the constraint, for each state and searches the optimal parameter weighing the minimization of delay and risk while learning the optimal policy. Extensive simulation results demonstrate that the proposed algorithm can reduce the task processing delay by up to 30% compared to probabilistic configuration methods while satisfying the UAV energy capacity constraint.

Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed algorithm can achieve the stability and efficiency of task scheduling and effectively improve the throughput of the cloud computing system.

Journal ArticleDOI
TL;DR: This work proposes device-to-device (D2D) cooperation based MEC to expedite the task execution of mobile users by leveraging proximity-aware task offloading, and proposes a heuristic named mobility-aware task scheduling (MATS) to obtain effective task assignment with low complexity.
Abstract: Mobile edge computing (MEC) has emerged as a new paradigm to assist low-latency services by enabling computation offloading at the network edge. Nevertheless, human mobility can significantly impact the offloading decision and performance in MEC networks. In this context, we propose device-to-device (D2D) cooperation based MEC to expedite the task execution of mobile users by leveraging proximity-aware task offloading. However, user mobility in such a distributed architecture results in dynamic offloading decisions, which motivates mobility-aware task scheduling in our proposed framework. We jointly formulate task assignment and power allocation to minimize the total task execution latency by taking into account user mobility, distributed resources, task properties, and the energy constraint of the user device. We first propose a Genetic Algorithm (GA)-based evolutionary scheme to solve our formulated mixed-integer non-linear programming (MINLP) problem. Then we propose a heuristic named mobility-aware task scheduling (MATS) to obtain effective task assignment with low complexity. The extensive evaluation under realistic human mobility trajectories provides useful insights into the performance of our schemes and demonstrates that both GA and MATS achieve better latency than other baseline schemes while satisfying the energy constraint of the mobile device.
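
A greedy sketch in the spirit of mobility-aware task scheduling is shown below; the latency/energy models, site names, and numbers are assumptions, not the MATS heuristic itself.

"""Greedy sketch: each task goes to the execution site (local, D2D helper, or
edge server) with the lowest estimated latency that still fits the user
device's remaining energy budget. All models and numbers are illustrative."""

SITES = {
    # site: (latency per unit of work, energy cost per unit on the user device)
    "local": (1.0, 1.0),
    "d2d_helper": (0.6, 0.4),   # offload over a short D2D link
    "edge_server": (0.4, 0.6),  # faster CPU, but costlier uplink transmission
}

def schedule(tasks, energy_budget):
    plan, total_latency, energy_left = [], 0.0, energy_budget
    for work in tasks:
        options = sorted(SITES.items(), key=lambda kv: kv[1][0] * work)
        for site, (lat, energy) in options:
            if energy * work <= energy_left:
                plan.append((work, site))
                total_latency += lat * work
                energy_left -= energy * work
                break
        else:
            plan.append((work, "deferred"))    # nothing fits the energy budget
    return plan, total_latency

if __name__ == "__main__":
    print(schedule(tasks=[3.0, 1.0, 2.0, 4.0], energy_budget=4.0))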

Journal ArticleDOI
TL;DR: A multi-start iterated greedy (MSIG) algorithm is proposed to minimize the makespan and has many promising advantages in solving the PM/DPFSP under consideration.
Abstract: In recent years, distributed scheduling problems have been well studied for their close connection with multi-factory production networks. However, the maintenance operations that are commonly carried out on a system to restore it to a specific state are seldom taken into consideration. In this paper, we study a distributed permutation flowshop scheduling problem with preventive maintenance operations (PM/DPFSP). A multi-start iterated greedy (MSIG) algorithm is proposed to minimize the makespan. An improved heuristic is presented for the initialization and re-initialization by adding a dropout operation to NEH2 to generate solutions with a high level of quality and dispersiveness. A destruction phase with tournament selection and a construction phase with an enhanced strategy are introduced to avoid local optima. A local search based on three effective operators is integrated into the MSIG to reinforce local neighborhood solution exploitation. In addition, a restart strategy is adopted if a solution has not been improved in a certain number of consecutive iterations. We conducted extensive experiments to test the performance of the presented MSIG. The computational results indicate that the presented MSIG has many promising advantages in solving the PM/DPFSP under consideration.
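
The destruction-construction skeleton of an iterated greedy method can be sketched for a single (non-distributed) permutation flowshop as follows; maintenance operations, tournament selection, and the local search of the actual MSIG are omitted, and all data are random.

"""Bare-bones iterated greedy (destruction + greedy reinsertion) for one
permutation flowshop, minimizing makespan. Illustrative skeleton only."""
import random

def makespan(seq, p):
    """p[j][m]: processing time of job j on machine m."""
    machines = len(p[0])
    finish = [0.0] * machines
    for j in seq:
        for m in range(machines):
            start = max(finish[m], finish[m - 1] if m else 0.0)
            finish[m] = start + p[j][m]
    return finish[-1]

def iterated_greedy(p, d=2, iters=200, seed=1):
    rnd = random.Random(seed)
    best = list(range(len(p)))
    for _ in range(iters):
        partial = best[:]
        removed = [partial.pop(rnd.randrange(len(partial))) for _ in range(d)]
        for job in removed:                        # greedy best-position reinsertion
            candidates = [partial[:i] + [job] + partial[i:]
                          for i in range(len(partial) + 1)]
            partial = min(candidates, key=lambda s: makespan(s, p))
        if makespan(partial, p) <= makespan(best, p):
            best = partial
    return best, makespan(best, p)

if __name__ == "__main__":
    rnd = random.Random(0)
    jobs, machines = 8, 4
    p = [[rnd.randint(1, 9) for _ in range(machines)] for _ in range(jobs)]
    print(iterated_greedy(p))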

Journal ArticleDOI
TL;DR: A discrete variant of the Distributed Grey Wolf Optimizer (DGWO) is proposed for scheduling dependent tasks to virtual machines (VMs) in order to maximize VM utilization in cloud computing environments.

Journal ArticleDOI
TL;DR: In this article, a distributed flow shop group scheduling problem is considered and a cooperative co-evolutionary algorithm (CCEA) with a novel collaboration model and a reinitialization scheme is proposed.
Abstract: This article addresses a novel scheduling problem, a distributed flowshop group scheduling problem, which has important applications in modern manufacturing systems. The problem considers how to arrange a variety of jobs subject to group constraints at a number of identical manufacturing cells, each one with a flowshop structure, with the objective of minimizing makespan. We explore the problem-specific knowledge and present a mixed-integer linear programming model, a counterintuitive paradox, and two suites of accelerations to save computational effort. Due to the complexity of the problem, we consider a decomposition strategy and propose a cooperative co-evolutionary algorithm (CCEA) with a novel collaboration model and a reinitialization scheme. A comprehensive and thorough computational and statistical campaign is carried out. The results show that the proposed collaboration model and reinitialization scheme are very effective. The proposed CCEA outperforms a number of metaheuristics adapted from closely related scheduling problems in the literature by a considerable margin.

Journal ArticleDOI
TL;DR: The study devises a Mobility-Aware Blockchain-Enabled Offloading Scheme (MABOS), which extends blockchain-enabled multi-side offloading with proof of work (PoW), proof of creditability (PoC), and fault-tolerant techniques to offload all tasks over a secure network without any violation.
Abstract: The development of vehicular Internet of Things (IoT) applications, such as E-Transport, Augmented Reality, and Virtual Reality, is growing progressively. Mobility-aware services and network-based security are fundamental requirements of these applications. However, blockchain-enabled multi-side offloading and cost-efficient scheduling in a heterogeneous network of vehicular fog cloud nodes is a challenging task. The study formulates this problem as a convex optimization problem, where all constraints form convex sets. The goal of the study is to minimize the communication cost and computation cost of applications under mobility, security, deadline, and resource constraints. Initially, we propose a novel vehicular fog cloud network (VFCN) which consists of different components and heterogeneous computing nodes. To ensure mobility privacy, the study devises a Mobility-Aware Blockchain-Enabled Offloading Scheme (MABOS). It extends blockchain-enabled multi-side offloading (e.g., offline offloading and online offloading) with proof of work (PoW), proof of creditability (PoC), and fault-tolerant techniques. The purpose is to offload all tasks over the secure network without any violation. Furthermore, to ensure the quality of service (QoS) of applications, this work suggests a linear search based task scheduling (LSBTS) method, which maps all tasks onto appropriate computing nodes. The experimental results show that the devised schemes outperform all existing baseline approaches to the considered problem.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a cloud workflow scheduling approach which combines particle swarm optimization and idle time slot-aware rules, to minimize the execution cost of a workflow application under a deadline constraint.
Abstract: Workflow scheduling is a key issue and remains a challenging problem in cloud computing. Faced with the large number of virtual machine (VM) types offered by cloud providers, cloud users need to choose the most appropriate VM type for each task. Multiple task scheduling sequences exist in a workflow application. Different task scheduling sequences have a significant impact on the scheduling performance. It is not easy to determine the most appropriate set of VM types for tasks and the best task scheduling sequence. Besides, the idle time slots on VM instances should be used fully to increase resource utilization and save the execution cost of a workflow. This paper considers these three aspects simultaneously and proposes a cloud workflow scheduling approach which combines particle swarm optimization (PSO) and idle time slot-aware rules to minimize the execution cost of a workflow application under a deadline constraint. A new particle encoding is devised to represent the VM type required by each task and the scheduling sequence of tasks. An idle time slot-aware decoding procedure is proposed to decode a particle into a scheduling solution. To handle tasks' invalid priorities caused by the randomness of PSO, a repair method is used to repair those priorities to produce valid task scheduling sequences. The proposed approach is compared with state-of-the-art cloud workflow scheduling algorithms. Experiments show that the proposed approach outperforms the comparative algorithms in terms of both execution cost and the success rate in meeting the deadline.
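
The idle-time-slot-aware placement idea can be illustrated with the small routine below; the interval bookkeeping is an assumption rather than the paper's exact decoding procedure.

"""Place a task into the earliest idle gap on a VM that is long enough and not
before the task's ready time; otherwise append after the last busy interval."""

def place_task(busy, ready_time, duration):
    """busy: sorted list of (start, end) intervals already scheduled on the VM."""
    cursor = ready_time
    for start, end in busy:
        if cursor + duration <= start:          # gap before this interval fits
            return cursor, cursor + duration
        cursor = max(cursor, end)               # skip past the busy interval
    return cursor, cursor + duration            # append at the tail

if __name__ == "__main__":
    busy = [(0, 4), (6, 9), (15, 20)]
    print(place_task(busy, ready_time=2, duration=3))   # -> (9, 12)
    print(place_task(busy, ready_time=0, duration=2))   # -> (4, 6), fits in the 4-6 gap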

Journal ArticleDOI
TL;DR: A cloud-assisted joint charging scheduling and energy management framework is proposed for unmanned aerial vehicle (UAV) networks; cooperative energy sharing among towers is designed and implemented with multi-agent deep reinforcement learning, so that intelligent energy sharing can be realized.
Abstract: This paper proposes a cloud-assisted joint charging scheduling and energy management framework for unmanned aerial vehicle (UAV) networks. For charging the UAVs, which are extremely power-hungry, charging towers are considered for plug-and-play charging during run-time operations. The charging towers should be cost-effective, so each is equipped with photovoltaic power generation and energy storage system functionalities. Furthermore, the towers should cooperate for more cost-effectiveness through intelligent energy sharing. Based on these needs and settings, this paper proposes 1) charging scheduling between UAVs and towers and 2) cooperative energy management among towers. For charging scheduling, the UAVs and towers should be scheduled to maximize charging energy amounts, and the scheduled pairs should determine charging energy allocation amounts. These two decisions are correlated, i.e., the problem is non-convex. We reformulate the non-convex problem as a convex one to guarantee optimal solutions. Lastly, the cooperative energy sharing among towers is designed and implemented with multi-agent deep reinforcement learning, so that intelligent energy sharing can be realized. We observe that the two methods are related and should be managed, coordinated, and harmonized by a centralized orchestration manager under the consideration of fairness, energy efficiency, and cost-effectiveness. Our data-intensive performance evaluation verifies that the proposed framework achieves the desired performance.

Journal ArticleDOI
TL;DR: In this article, a matching and multi-round allocation (MMA) algorithm is proposed to optimize the makespan and total cost for all submitted tasks subject to security and reliability constraints in multi-cloud systems.
Abstract: The rise of multi-cloud systems has been spurred in recent years. For safety-critical missions, it is important to guarantee their security and reliability. To address trust constraints in a heterogeneous multi-cloud environment, this work proposes a novel scheduling method called matching and multi-round allocation (MMA) to optimize the makespan and total cost for all submitted tasks subject to security and reliability constraints. The method is divided into two phases for task scheduling. The first phase finds the best matching candidate resources for the tasks to meet their preferential demands, including performance, security, and reliability, in a multi-cloud environment; the second iteratively performs multiple rounds of re-allocation to optimize task execution time and cost by minimizing the variance of the estimated completion times. The proposed algorithm, together with the modified cuckoo search (MCS), hybrid chaotic particle search (HCPS), modified artificial bee colony (MABC), max-min, and min-min algorithms, is implemented in CloudSim to create simulations. The simulations and experimental results show that our proposed method achieves shorter makespan, lower cost, higher resource utilization, and a better trade-off between time and economic cost. It is also more stable and efficient.
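
A toy version of the multi-round re-allocation phase is sketched below; the security/reliability matching and cost terms of the actual MMA method are omitted, and cloud names and task sizes are illustrative.

"""Repeatedly move one task off the most-loaded cloud whenever the move reduces
the variance of the estimated completion times; stop when no move helps."""
import statistics

def completion_times(assignment, clouds):
    return [sum(t for t, c in assignment if c == cloud) for cloud in clouds]

def reallocate(assignment, clouds, rounds=50):
    assignment = list(assignment)                    # [(task_size, cloud), ...]
    for _ in range(rounds):
        times = completion_times(assignment, clouds)
        current_var = statistics.pvariance(times)
        src = clouds[times.index(max(times))]
        dst = clouds[times.index(min(times))]
        movable = [i for i, (_, c) in enumerate(assignment) if c == src]
        best_move, best_var = None, current_var
        for i in movable:                            # try each task on the busiest cloud
            trial = assignment[:]
            trial[i] = (trial[i][0], dst)
            var = statistics.pvariance(completion_times(trial, clouds))
            if var < best_var:
                best_move, best_var = i, var
        if best_move is None:                        # no improving move left
            break
        assignment[best_move] = (assignment[best_move][0], dst)
    return assignment

if __name__ == "__main__":
    clouds = ["cloud-a", "cloud-b", "cloud-c"]
    tasks = [(5, "cloud-a"), (3, "cloud-a"), (4, "cloud-a"), (2, "cloud-b"), (1, "cloud-c")]
    print(reallocate(tasks, clouds))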

Journal ArticleDOI
TL;DR: An alternating optimization algorithm with guaranteed convergence is developed to minimize the maximum computation delay among IoT devices with the joint scheduling for association control, computation task allocation, transmission power and bandwidth allocation, UAV computation resource, and deployment position optimization.
Abstract: Space-aerial-assisted computation offloading has been recognized as a promising technique to provide ubiquitous computing services for remote Internet of Things (IoT) applications, such as forest fire monitoring and disaster rescue. This article considers a space-aerial-assisted mixed cloud-edge computing framework, where flying unmanned aerial vehicles (UAVs) provide IoT devices with low-delay edge computing service and satellites provide ubiquitous access to cloud computing. We aim to minimize the maximum computation delay among IoT devices with the joint scheduling of association control, computation task allocation, transmission power and bandwidth allocation, UAV computation resources, and deployment position optimization. Through exploiting block coordinate descent and successive convex approximation, we develop an alternating optimization algorithm with guaranteed convergence to solve the formulated problem. Extensive simulation results are provided to demonstrate the remarkable delay reduction of the proposed scheme compared with existing benchmark methods.
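
The alternating (block coordinate descent) structure can be illustrated on a toy coupled quadratic; the two scalar blocks below stand in for the association, resource, and placement blocks of the actual scheme, and the objective is purely illustrative.

"""Minimal block-coordinate-descent loop: alternate exact minimization over two
variable blocks of a coupled convex quadratic until the decrease stalls."""

def objective(x, y):
    return (x - 2.0) ** 2 + (y + 1.0) ** 2 + 0.5 * x * y

def alternating_minimization(x=0.0, y=0.0, tol=1e-9, max_iters=100):
    prev = objective(x, y)
    for it in range(max_iters):
        x = 2.0 - 0.25 * y        # exact minimizer of the x-block with y fixed
        y = -1.0 - 0.25 * x       # exact minimizer of the y-block with x fixed
        cur = objective(x, y)
        if prev - cur < tol:      # monotone decrease -> convergence
            break
        prev = cur
    return x, y, cur, it + 1

if __name__ == "__main__":
    x, y, value, iters = alternating_minimization()
    print(f"x={x:.4f}, y={y:.4f}, f={value:.6f} after {iters} iterations")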

Journal ArticleDOI
Shuran Sheng1, Peng Chen1, Zhimin Chen, Lenan Wu1, Yuxuan Yao 
28 Feb 2021-Sensors
TL;DR: Simulation results show that the proposed DRL-based task scheduling algorithm outperforms the existing methods in the literature in terms of the average task satisfaction degree and success ratio.
Abstract: Edge computing (EC) has recently emerged as a promising paradigm that supports resource-hungry Internet of Things (IoT) applications with low latency services at the network edge. However, the limited capacity of computing resources at the edge server poses great challenges for scheduling application tasks. In this paper, a task scheduling problem is studied in the EC scenario, and multiple tasks are scheduled to virtual machines (VMs) configured at the edge server by maximizing the long-term task satisfaction degree (LTSD). The problem is formulated as a Markov decision process (MDP) for which the state, action, state transition, and reward are designed. We leverage deep reinforcement learning (DRL) to solve both time scheduling (i.e., the task execution order) and resource allocation (i.e., which VM the task is assigned to), considering the diversity of the tasks and the heterogeneity of available resources. A policy-based REINFORCE algorithm is proposed for the task scheduling problem, and a fully-connected neural network (FCN) is utilized to extract the features. Simulation results show that the proposed DRL-based task scheduling algorithm outperforms the existing methods in the literature in terms of the average task satisfaction degree and success ratio.
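
A tiny policy-gradient sketch of the scheduling idea is given below; it replaces the paper's fully-connected network and satisfaction-degree reward with a linear softmax policy and a simplified backlog/delay model, all of which are assumptions.

"""REINFORCE sketch for assigning arriving tasks to VMs with a linear softmax
policy over (VM backlogs, task size); reward is the negative estimated delay."""
import numpy as np

rng = np.random.default_rng(0)
NUM_VMS, EPISODES, TASKS, LR, GAMMA = 3, 300, 10, 0.05, 0.99
VM_SPEED = np.array([1.0, 2.0, 4.0])
theta = np.zeros((NUM_VMS + 1, NUM_VMS))            # state dim -> action logits

def policy(state):
    logits = state @ theta
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

for episode in range(EPISODES):
    backlog = np.zeros(NUM_VMS)
    trajectory = []                                  # (state, action, probs, reward)
    for _ in range(TASKS):
        size = rng.uniform(1.0, 5.0)
        state = np.concatenate([backlog / 10.0, [size / 5.0]])
        probs = policy(state)
        action = rng.choice(NUM_VMS, p=probs)
        backlog[action] += size
        delay = backlog[action] / VM_SPEED[action]   # crude waiting + service time
        trajectory.append((state, action, probs, -delay))
    # REINFORCE update with discounted returns.
    G = 0.0
    for state, action, probs, reward in reversed(trajectory):
        G = reward + GAMMA * G
        grad_logits = -probs
        grad_logits[action] += 1.0                   # d log pi(a|s) / d logits
        theta += LR * G * np.outer(state, grad_logits)

# Greedy rollout with the learned policy.
backlog = np.zeros(NUM_VMS)
for size in [2.0, 4.0, 1.0, 3.0]:
    state = np.concatenate([backlog / 10.0, [size / 5.0]])
    vm = int(np.argmax(policy(state)))
    backlog[vm] += size
    print(f"task of size {size} -> vm {vm}")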