
Showing papers on "Scheduling (computing)" published in 2022


Journal ArticleDOI
TL;DR: Simulation results are provided to substantiate the effectiveness and merits of the proposed co-design approach for guaranteeing a trade-off between robust platooning control performance and communication efficiency.
Abstract: This paper deals with the co-design problem of event-triggered communication scheduling and platooning control over vehicular ad-hoc networks (VANETs) subject to finite communication resources. First, a unified model is presented to describe the coordinated platoon behavior of leader-follower vehicles in the simultaneous presence of unknown external disturbances and an unknown leader control input. Under such a platoon model, the central aim is to achieve robust platoon formation tracking with the desired inter-vehicle spacing and the same velocities and accelerations guided by the leader, while attaining improved communication efficiency. Toward this aim, a novel bandwidth-aware dynamic event-triggered scheduling mechanism is developed. One salient feature of the scheduling mechanism is that the threshold parameter in the triggering law is dynamically adjusted over time based on both vehicular state variations and bandwidth status. Then, a sufficient condition for platoon control system stability and performance analysis, as well as a co-design criterion for the admissible event-triggered platooning control law and the desired scheduling mechanism, are derived. Finally, simulation results are provided to substantiate the effectiveness and merits of the proposed co-design approach for guaranteeing a trade-off between robust platooning control performance and communication efficiency.
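
The paper's triggering law is not reproduced here, but a minimal sketch conveys the mechanism: transmit only when the state error exceeds a state-dependent threshold, and adapt that threshold to bandwidth status. The update rule and all parameter values below are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of a bandwidth-aware dynamic event-triggered law
# (update rule and parameter values are illustrative assumptions).

def should_transmit(x, x_last_sent, sigma):
    """Trigger a transmission when the deviation from the last transmitted
    state exceeds a state-dependent threshold scaled by sigma."""
    error = sum((a - b) ** 2 for a, b in zip(x, x_last_sent)) ** 0.5
    norm = sum(a ** 2 for a in x) ** 0.5
    return error > sigma * norm

def update_threshold(sigma, bandwidth_utilization,
                     sigma_min=0.05, sigma_max=0.5, gain=0.1):
    """Raise the threshold (fewer transmissions) when the channel is busy;
    lower it (tighter tracking) when bandwidth is available."""
    sigma += gain * (bandwidth_utilization - 0.5)
    return min(max(sigma, sigma_min), sigma_max)
```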

124 citations


Journal ArticleDOI
TL;DR: In this article, a Pareto-based collaborative multi-objective optimization algorithm (CMOA) is proposed to solve the distributed permutation flow shop problem with limited buffers (DPFSP-LB).
Abstract: Energy-efficient scheduling of distributed production systems has become common practice among large companies with the advancement of economic globalization and green manufacturing. Nevertheless, energy-efficient scheduling of the distributed permutation flow-shop problem with limited buffers (DPFSP-LB) has not received adequate attention in the relevant literature. This paper is therefore the first attempt to study this DPFSP-LB with the objectives of minimizing makespan and total energy consumption (TEC). To solve this energy-efficient DPFSP-LB, a Pareto-based collaborative multi-objective optimization algorithm (CMOA) is proposed. In the proposed CMOA, first, a speed scaling strategy based on problem properties is designed to reduce TEC. Second, a collaborative initialization strategy is presented to generate a high-quality initial population. Third, three properties of the DPFSP-LB are utilized to develop a collaborative search operator and a knowledge-based local search operator. Finally, we verify the effectiveness of each improvement component of CMOA and compare it against other well-known multi-objective optimization algorithms on a set of instances. Experimental results demonstrate the effectiveness of CMOA in solving this energy-efficient DPFSP-LB. In particular, CMOA obtains excellent results on all problems regarding the comprehensive metric, and is also competitive with its rivals regarding the convergence metric.
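
The speed scaling idea can be sketched compactly: operations with slack relative to the critical path are slowed just enough to consume that slack, cutting energy without extending the makespan. The field names and speed levels below are illustrative assumptions; the paper's strategy is based on problem-specific properties.

```python
# Illustrative one-pass speed-scaling sketch (not the paper's exact strategy).

def scale_speeds(operations):
    """operations: list of dicts with 'proc_time', 'slack', and 'speed'.
    A slower speed stretches processing time but reduces energy."""
    speeds = [1.0, 0.8, 0.6]  # normalized speed levels, fastest first
    for op in operations:
        for s in speeds[1:]:  # try slower levels; keep the slowest feasible
            stretch = op["proc_time"] / s - op["proc_time"]
            if stretch <= op["slack"]:  # must not extend the makespan
                op["speed"] = s
    # Note: this sketch ignores slack propagation between operations on the
    # same machine or job; a real strategy must account for it.
    return operations
```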

33 citations


Journal ArticleDOI
TL;DR: In this article, an interference-aware and prediction-based resource manager for DL systems is proposed, which proactively predicts GPU utilization of heterogeneous DL jobs extrapolated from the DL model's computation graph features, removing the need for online profiling and isolated reserved GPUs.
Abstract: To accelerate the training of Deep Learning (DL) models, clusters of machines equipped with hardware accelerators such as GPUs are leveraged to reduce execution time. State-of-the-art resource managers are needed to increase GPU utilization and maximize throughput. While co-locating DL jobs on the same GPU has been shown to be effective, this can incur interference causing slowdown. In this article we propose Horus: an interference-aware and prediction-based resource manager for DL systems. Horus proactively predicts GPU utilization of heterogeneous DL jobs extrapolated from the DL model’s computation graph features, removing the need for online profiling and isolated reserved GPUs. Through micro-benchmarks and job co-location combinations across heterogeneous GPU hardware, we identify GPU utilization as a general proxy metric to determine good placement decisions, in contrast to current approaches which reserve isolated GPUs to perform online profiling and directly measure GPU utilization for each unique submitted job. Our approach promotes high resource utilization and makespan reduction; via real-world experimentation and large-scale trace driven simulation, we demonstrate that Horus outperforms other DL resource managers by up to 61.5 percent for GPU resource utilization, 23.7–30.7 percent for makespan reduction and 68.3 percent in job wait time reduction.
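
A sketch makes the proxy-metric idea concrete: predict a job's GPU utilization from model features, then co-locate it on the GPU where the combined predicted utilization stays under capacity. The linear predictor and the placement rule below are assumptions for illustration; Horus trains its predictor on computation-graph features.

```python
# Illustrative placement using predicted GPU utilization as the proxy metric.

def predict_utilization(job_features, weights, bias=0.0):
    """Toy proxy model: predicted GPU utilization as a weighted sum of
    features such as FLOPs, parameter count, and batch size."""
    u = bias + sum(w * f for w, f in zip(weights, job_features))
    return min(max(u, 0.0), 1.0)

def place(job_util, gpus):
    """Co-locate the job on the feasible GPU with the lowest current
    predicted utilization; return None to queue if all would oversubscribe."""
    feasible = [g for g in gpus if g["util"] + job_util <= 1.0]
    if not feasible:
        return None
    best = min(feasible, key=lambda g: g["util"])
    best["util"] += job_util
    return best["id"]
```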

32 citations


Journal ArticleDOI
TL;DR: An efficient algorithm is developed that does not use any third-party solver; it searches the space of partial schedules, guided by the discovered conflicts, and computes schedules for problem instances consisting of 2000 network nodes and more than 10,000 flows.

23 citations


Journal ArticleDOI
TL;DR: A new task scheduling policy is presented that uses the notion of a "virtual real-time task" and two-phase scheduling; the proposed policy reduces energy consumption by 66.8% on average without deadline misses and keeps the waiting time of interactive tasks below 3 s.
Abstract: With recent advances in Internet of Things and cyber-physical systems technologies, smart industrial systems support configurable processes consisting of human interactions as well as hard real-time functions. This implies that irregularly arriving interactive tasks and traditional hard real-time tasks coexist. As the characteristics of these tasks are heterogeneous, it is not an easy matter to schedule them all at once. To cope with this situation, this article presents a new task scheduling policy that uses the notion of a "virtual real-time task" and two-phase scheduling. As hard real-time tasks must keep their deadlines, we perform offline scheduling based on genetic algorithms beforehand. This determines the processor's voltage level and the memory location of each task, and also reserves virtual real-time tasks for interactive tasks. When interactive tasks arrive during execution, online scheduling is performed on the time slots of the virtual real-time tasks. As interactive workloads evolve over time, we monitor them and periodically update the offline schedule. Experimental results show that the proposed policy reduces energy consumption by 66.8% on average without deadline misses and also keeps the waiting time of interactive tasks below 3 s.
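
The online phase can be sketched as slot filling: interactive tasks arriving at run time are served from the budget of the time slots that the offline genetic algorithm reserved as virtual real-time tasks. The slot structure and field names below are illustrative assumptions.

```python
# Sketch of the online phase only; the offline GA that reserves the
# virtual real-time slots is not shown.

def admit_interactive(task, virtual_slots, now):
    """Place an arriving interactive task into the earliest reserved slot
    with enough remaining budget; return its start time or None."""
    for slot in sorted(virtual_slots, key=lambda s: s["start"]):
        if slot["start"] >= now and slot["budget"] >= task["demand"]:
            slot["budget"] -= task["demand"]
            return slot["start"]
    return None  # no reserved capacity; wait for the next scheduling update
```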

20 citations


Journal ArticleDOI
TL;DR: A Genetic Programming approach for solving flexible shop scheduling problems that shows a consistent advantage over existing advanced priority rules from the literature, with considerably increased performance in the presence of unrelated parallel machines and on larger instances in general.

16 citations


Journal ArticleDOI
TL;DR: Wang et al. propose a reinforcement learning algorithm called SchedRL with a delta reward scheme and an episodic guided sampling strategy to solve the problem efficiently; it outperforms FirstFit and BestFit on the fulfill number and allocation rate.

15 citations


Journal ArticleDOI
TL;DR: In this article, a data-driven algorithmic approach is proposed to solve single-leader single-follower bi-level optimization problems with guaranteed feasibility, although the results of this approach are limited to a single-product process scheduling problem.

10 citations


Journal ArticleDOI
Anne Bouillard
TL;DR: In this article, the authors propose a new algorithm based on linear programming that presents a trade-off between accuracy and tractability for computing deterministic performance bounds in FIFO networks.

10 citations


Journal ArticleDOI
TL;DR: In this paper, an effective stochastic bag-of-tasks (BoT) scheduling algorithm based on the distribution of task duration variations is proposed to maximize the profit of the private cloud provider while guaranteeing the QoS provided by the cloud platform.

10 citations


Journal ArticleDOI
TL;DR: This work presents an improved channel-access granting mechanism for data routing and scheduling via software-defined vehicular networks (SDVN), together with two computational improvement strategies, based on incremental optimization and maximum independent sets (MIS), that shift the computational complexity to a level realizable for real-time communication in the two application scenarios.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a novel integrated approach to crew re-planning, i.e., the construction of new duties and rosters for the employees given changes in the timetable and rolling stock schedule.
Abstract: Planned maintenance and construction activities are crucial for heavily used railway networks to cope with the ever-increasing demand. These activities lead to changes in the timetable and rolling stock schedule (often for multiple days) and can have a major impact on the crew schedule, as the changes can render many planned duties infeasible. In this paper, we propose a novel integrated approach to crew re-planning, i.e., the construction of new duties and rosters for the employees given changes in the timetable and rolling stock schedule. In current practice, the feasibility of the new rosters is ‘assured’ by allowing the new duties to deviate only slightly from the original ones. This allows the problem to be solved on a day-by-day basis. In the Integrated Crew Re-Planning Problem (ICRPP), we loosen this requirement and allow for more flexibility: The ICRPP considers the re-scheduling of crew for multiple days simultaneously, thereby explicitly taking the feasibility of the rosters into account. By integrating the scheduling and rostering decisions, we can allow for larger deviations from the original duties. We propose a mathematical formulation for the ICRPP, strengthen it using a family of valid cover inequalities, and develop a column generation approach to solve the problem. We apply our solution approach to practical instances from Netherlands Railways, and show the benefits of integrating the re-planning process.

Journal ArticleDOI
TL;DR: In this article, a novel hybrid collaborative multi-objective fruit fly optimization algorithm (HCMFOA) is developed to optimize both the execution time and cost of workflows in cloud environments.
Abstract: Scheduling complex workflows in cloud environments has drawn enormous attention because of the distinct features of cloud resources. Most previous approaches ignored the multiple conflicting objectives of workflow scheduling and resource provisioning. In this paper, a novel hybrid collaborative multi-objective fruit fly optimization algorithm (HCMFOA) is developed to optimize both execution time and cost. In the proposed HCMFOA, a reference-point-based clustering strategy is introduced to dynamically divide the swarm into multiple sub-swarms. Moreover, a hybrid initialization strategy, based on a non-linear weight vector and two task-assignment rules, is designed to initialize the locations of all fruit flies in the problem space. In the collaborative smell-based foraging, three effective problem-specific neighborhood operators are employed to collaboratively explore the global scope. In the multi-objective vision-based foraging, a sub-swarm-based crossover operator is designed to perform exploitation in local regions. Finally, an extensive computational experiment is conducted to validate the performance of HCMFOA. The statistical results reveal that HCMFOA significantly outperforms existing state-of-the-art approaches.

Journal ArticleDOI
TL;DR: A novel single-stage scheduling problem is studied in which jobs are delivered in batches on flexibly definable, customer-dependent delivery dates, and a randomized adaptive search procedure is proposed that attains promising heuristic solutions under tight time restrictions.

Journal ArticleDOI
TL;DR: An efficient online control algorithm named ODGWS (Online Delay-Guaranteed Workload Scheduling) is presented, which makes online scheduling decisions with a bounded guarantee on the worst-case scheduling delay for delay-tolerant workloads.

Journal ArticleDOI
TL;DR: In this paper, an optimal multi-path routing and scheduling strategy is proposed to achieve the best possible network performance for all concurrent jobs, based on the formulation of an optimization problem that can be transformed into an equivalent linear programming (LP) problem to be efficiently solved.
Abstract: It has become a recent trend that large volumes of data are generated, stored, and processed across geographically distributed datacenters. When popular data parallel frameworks, such as MapReduce and Spark, are employed to process such geo-distributed data, optimizing the network transfer in communication stages becomes increasingly crucial to application performance, as the inter-datacenter links have much lower bandwidth than intra-datacenter links. In this article, we focus on exploiting the flexibility of multi-path routing for inter-datacenter flows of data analytic jobs, with the hope of better utilizing inter-datacenter links and thus improving job performance. We design an optimal multi-path routing and scheduling strategy to achieve the best possible network performance for all concurrent jobs, based on our formulation of an optimization problem that can be transformed into an equivalent linear programming (LP) problem to be efficiently solved. As a highlight of this article, we have implemented our proposed algorithm in the controller of an application-layer software-defined inter-datacenter overlay testbed, designed to provide transfer optimization service for Spark jobs. With extensive evaluations of our real-world implementation on Google Cloud, we have shown convincing evidence that our optimal multi-path routing and scheduling strategies have achieved significant improvements in terms of job performance.
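
A toy version of the LP shows the structure: one rate variable per (job, path) pair, link-capacity constraints summing over the paths that cross each link, and a throughput objective. The instance and the objective below are illustrative; the paper's formulation covers scheduling across concurrent jobs as well.

```python
# Toy multi-path rate-allocation LP (a sketch, not the paper's exact
# formulation). Requires scipy.
from scipy.optimize import linprog

# Two jobs, each with two candidate paths; three inter-DC links.
# paths[j][p] = set of link indices used by path p of job j.
paths = [[{0}, {1, 2}], [{1}, {0, 2}]]
capacity = [10.0, 8.0, 6.0]  # link capacities (e.g., Gb/s)

variables = [(j, p) for j, jp in enumerate(paths) for p in range(len(jp))]
c = [-1.0] * len(variables)  # maximize total rate == minimize its negation

A_ub, b_ub = [], []
for link, cap in enumerate(capacity):
    A_ub.append([1.0 if link in paths[j][p] else 0.0 for (j, p) in variables])
    b_ub.append(cap)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(variables))
for (j, p), rate in zip(variables, res.x):
    print(f"job {j} path {p}: {rate:.1f}")
```

Working at the level of per-path rates is what keeps the problem linear, matching the abstract's point that the optimization reduces to an efficiently solvable LP.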

Journal ArticleDOI
TL;DR: In this article, a BIM4D-based Intelligent Assembly Scheduler (BIAS) is designed in conjunction with the Computer-Aided Lifting Planner developed at Nanyang Technological University, Singapore.

Journal ArticleDOI
TL;DR: In this paper, a task scheduling scheme for large-scale factory access under a cloud-edge collaborative computing architecture is proposed; the experimental results demonstrate the effectiveness and correctness of the worst-case execution time analysis method and of the proposed big-data-driven CPS design.

Journal ArticleDOI
Yi-wen Zhang
TL;DR: A novel algorithm called EAU is presented, which applies the actual execution time to re-compute the utilization of a task when a job completes early or is released, and can save up to 46.84% of energy compared with existing algorithms.

Journal ArticleDOI
TL;DR: In this article, a speed-up procedure for makespan minimization, which can be incorporated in insertion-based neighbourhoods using a complete representation of the solutions, has been proposed.
Abstract: During the last decades, hundreds of approximate algorithms have been proposed in the literature addressing flow-shop-based scheduling problems. In the race to find the best proposals to solve these problems, speed-up procedures to compute objective functions represent a key factor in the efficiency of the algorithms. This is the case for the well-known Taillard accelerations proposed for the traditional flow shop with makespan minimisation, and for several other accelerations proposed for related scheduling problems. Despite the interest in proposing such methods to improve the efficiency of approximate algorithms, to the best of our knowledge, no speed-up procedure has been proposed so far in the hybrid flow shop literature. To tackle this challenge, we propose in this paper a speed-up procedure for makespan minimisation, which can be incorporated in insertion-based neighbourhoods using a complete representation of the solutions. This procedure is embedded in the traditional iterated greedy algorithm. The computational experience shows that even incorporating the proposed speed-up procedure in this simple metaheuristic results in outperforming the best metaheuristic for the problem under consideration.
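
Although the paper's contribution is a hybrid flow shop acceleration, the classical permutation flow shop version of Taillard's idea illustrates the principle: precompute heads and tails once, then evaluate all insertion positions of a job in O(nm) total rather than O(n²m). The sketch below is that classical variant, not the paper's hybrid extension.

```python
# Taillard-style acceleration for insertion neighbourhoods in the
# permutation flow shop (classical variant).

def insertion_makespans(seq, p, job):
    """seq: current partial sequence (job indices); p[j][i]: processing time
    of job j on machine i; job: job to insert. Returns the makespan for
    every insertion position 0..len(seq)."""
    n, m = len(seq), len(p[job])
    # e[i][k]: completion time ("head") of seq[i] on machine k
    e = [[0] * m for _ in range(n)]
    for i in range(n):
        for k in range(m):
            prev_job = e[i - 1][k] if i > 0 else 0
            prev_mach = e[i][k - 1] if k > 0 else 0
            e[i][k] = max(prev_job, prev_mach) + p[seq[i]][k]
    # q[i][k]: "tail" of seq[i] on machine k
    q = [[0] * m for _ in range(n)]
    for i in range(n - 1, -1, -1):
        for k in range(m - 1, -1, -1):
            nxt_job = q[i + 1][k] if i < n - 1 else 0
            nxt_mach = q[i][k + 1] if k < m - 1 else 0
            q[i][k] = max(nxt_job, nxt_mach) + p[seq[i]][k]
    # f[k]: completion time of the inserted job on machine k, per position
    makespans = []
    for pos in range(n + 1):
        f = [0] * m
        for k in range(m):
            head = e[pos - 1][k] if pos > 0 else 0
            left = f[k - 1] if k > 0 else 0
            f[k] = max(head, left) + p[job][k]
        if pos < n:
            cmax = max(f[k] + q[pos][k] for k in range(m))
        else:
            cmax = f[m - 1]
        makespans.append(cmax)
    return makespans
```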


Book ChapterDOI
01 Jan 2022
TL;DR: A comparative investigation of meta-heuristic task scheduling algorithms, such as ant colony optimization (ACO), particle swarm optimization (PSO), gray wolf optimization (GWO), the whale optimization algorithm (WOA), and the flower pollination algorithm (FPA), which have been used by many researchers to develop new techniques over the last decade.
Abstract: Massive advances have been made in web, mobile, and computer technologies, and their user bases are growing exponentially. An era has arrived in which every user is conscious of the term "cloud computing". Users are enrolling in cloud services (storage space, computational power, and standalone applications). Hence, cloud service providers (CSPs) are concerned about the quality of service (QoS) delivered to their clients, and task scheduling was introduced to put this into action. The principal goal of task scheduling is to successfully meet the objectives of both the server and its clients. Since traditional task scheduling is not enough to attain the best performance, meta-heuristic techniques, which can produce solutions close to optimal, are required. Such an optimal solution decides the mapping of tasks onto resources and yields results that match the desired objectives. This paper presents a comparative investigation of meta-heuristic task scheduling algorithms, such as ant colony optimization (ACO), particle swarm optimization (PSO), gray wolf optimization (GWO), the whale optimization algorithm (WOA), and the flower pollination algorithm (FPA), which have been used by many researchers to develop new techniques over the last decade.
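
To make the comparison concrete, the sketch below shows how one of the surveyed methods, PSO, is typically mapped onto cloud task scheduling: a particle encodes a task-to-VM assignment and fitness is the resulting makespan. The encoding, decoding rule, and parameter values are common textbook choices, not those of any specific surveyed paper.

```python
# Textbook-style PSO for task-to-VM scheduling (illustrative parameters).
import random

def decode(position, n_vms):
    return [int(x) % n_vms for x in position]  # continuous -> assignment

def makespan(assign, task_len, vm_speed):
    load = [0.0] * len(vm_speed)
    for t, vm in enumerate(assign):
        load[vm] += task_len[t] / vm_speed[vm]
    return max(load)

def pso(task_len, vm_speed, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    n, m = len(task_len), len(vm_speed)
    X = [[random.uniform(0, m) for _ in range(n)] for _ in range(n_particles)]
    V = [[0.0] * n for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_fit = [makespan(decode(x, m), task_len, vm_speed) for x in X]
    g = pbest_fit.index(min(pbest_fit))
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            fit = makespan(decode(X[i], m), task_len, vm_speed)
            if fit < pbest_fit[i]:
                pbest[i], pbest_fit[i] = X[i][:], fit
                if fit < gbest_fit:
                    gbest, gbest_fit = X[i][:], fit
    return decode(gbest, m), gbest_fit
```

ACO, GWO, WOA, and FPA differ in how candidate positions are generated and updated, but all plug into the same decode-and-evaluate loop.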

Journal ArticleDOI
TL;DR: LB4OMP is an open-source dynamic load balancing library that implements successful scheduling algorithms from the literature; it is designed to spur and support present and future scheduling research, for the benefit of multithreaded application performance.
Abstract: Exascale computing systems will exhibit high degrees of hierarchical parallelism, with thousands of computing nodes and hundreds of cores per node. Efficiently exploiting hierarchical parallelism is challenging due to load imbalance that arises at multiple levels. OpenMP is the most widely-used standard for expressing and exploiting the ever-increasing node-level parallelism. The scheduling options in OpenMP are insufficient to address the load imbalance that arises during the execution of multithreaded applications. The limited scheduling options in OpenMP hinder research on novel scheduling techniques which require comparison with others from the literature. This work introduces LB4OMP, an open-source dynamic load balancing library that implements successful scheduling algorithms from the literature. LB4OMP is a research infrastructure designed to spur and support present and future scheduling research, for the benefit of multithreaded applications performance. Through an extensive performance analysis campaign, we assess the effectiveness and demystify the performance of all loop scheduling techniques in the library. We show that, for numerous applications-systems pairs, the scheduling techniques in LB4OMP outperform the scheduling options in OpenMP. Node-level load balancing using LB4OMP leads to reduced cross-node load imbalance and to improved MPI+OpenMP applications performance, which is critical for Exascale computing.

Journal ArticleDOI
TL;DR: In this paper, a two-stage hybrid energy allocation (HEA) strategy was proposed for minimizing the schedule length of energy-constrained parallel applications on heterogeneous computing systems.

Journal ArticleDOI
TL;DR: Computational results reveal that variations and fluctuations in service durations, as well as customer unpunctuality, have significant impacts on system performance, while optimizing the appointment sequencing decision can help reduce operational cost.
Abstract: In this paper, we consider the problem of sequencing and scheduling appointments on multiple servers with stochastic service durations and customer arrivals. The objective is to minimize the weighted sum of the server staffing cost and the total expected cost of customer waiting, server idleness, and overtime. To solve the problem, we first formulate it as a two-stage integer program, where the second stage involves multiple stochastic linear programs. Based on this, we then derive a deterministic mixed-integer linear program for the problem via sample average approximation and further strengthen the formulation by exploiting problem properties. Due to the high complexity of the problem, we also propose an efficient heuristic based on the integer L-shaped method, which is further enhanced by variable neighborhood descent. Our computational experiments show that the proposed integer L-shaped heuristic dominates the strengthened deterministic program and the integer L-shaped method, especially for large-scale problems, while the incorporation of variable neighborhood descent can significantly improve the performance of the heuristic. Our computational results also reveal that variations and fluctuations in service durations, as well as customer unpunctuality, have significant impacts on system performance, while optimizing the appointment sequencing decision can help reduce operational cost.
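
The sample average approximation step can be sketched for a single server: draw service durations, simulate the day for a candidate appointment schedule, and average the waiting, idleness, and overtime costs. The exponential durations and cost weights below are illustrative; the paper's model is multi-server with customer unpunctuality.

```python
# Single-server sketch of evaluating a schedule by sample averaging.
import random

def expected_cost(appt_times, n_samples=1000, cw=1.0, ci=0.5, co=2.0,
                  mean_dur=20.0, day_end=480.0):
    total = 0.0
    for _ in range(n_samples):
        free_at, cost = 0.0, 0.0
        for t in appt_times:
            dur = random.expovariate(1.0 / mean_dur)  # sampled duration
            start = max(t, free_at)
            cost += cw * (start - t)          # customer waiting
            cost += ci * max(t - free_at, 0)  # server idleness
            free_at = start + dur
        cost += co * max(free_at - day_end, 0)  # overtime
        total += cost
    return total / n_samples
```

The paper embeds such samples directly in a mixed-integer program rather than evaluating candidate schedules one by one.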

Journal ArticleDOI
Jian Zhu, Qian Li, Shi Ying
TL;DR: In this paper, a real-time task scheduling method based on deep reinforcement learning is proposed, which automatically and intelligently allocates user task requests that continually reach SaaS applications to appropriate resources for execution.

Journal ArticleDOI
TL;DR: Zhang et al. propose a multi-level collaborative framework (called Mc-Stream) for elastic stream computing systems, which is optimized at multiple levels (user level, instance level, scheduling level, and resource level).

DOI
01 Jan 2022
TL;DR: In this article, two scheduling strategies based on the Genetic Algorithm and the Tabu Search algorithm are proposed to solve the multi-traveling-salesman problem arising from multi-machine, multi-task UAV power inspection.
Abstract: Transmission line inspection and maintenance are the foundation of normal power grid operation. However, they are expensive in remote regions, especially unmanned ones. Unmanned Aerial Vehicle (UAV) power inspection is a significant development toward the space-ground collaborative smart grid: UAVs patrol the power system, and security problems and fault types can be detected in time via remote video transmission. Yet how to schedule UAVs in a multi-task scenario to complete tasks at the least cost (energy consumption, distance, or time) is rarely studied. This paper studies the scheduling of multiple machines and multiple tasks in UAV power inspection and formulates UAV scheduling as a multi-traveling-salesman problem in this scenario. Two scheduling strategies based on the Genetic Algorithm and the Tabu Search algorithm are proposed to solve this problem. In particular, the Genetic Algorithm has a faster convergence speed and a better objective function value, and is therefore better suited to the rapid emergency requirements of multi-task UAV scenarios.
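
A compact sketch of the Genetic Algorithm encoding for this multi-traveling-salesman view: a chromosome is a permutation of inspection points split into k consecutive routes, one per UAV, and fitness is the total route cost. The operators (swap mutation only, crossover omitted for brevity) and rates below are simplifications for illustration.

```python
# Illustrative GA encoding for the multi-TSP view of UAV inspection.
import random

def route_cost(route, dist, depot=0):
    stops = [depot] + route + [depot]
    return sum(dist[a][b] for a, b in zip(stops, stops[1:]))

def fitness(perm, cuts, dist):
    parts = [perm[i:j] for i, j in zip([0] + cuts, cuts + [len(perm)])]
    return sum(route_cost(p, dist) for p in parts)  # cost over all UAVs

def genetic_mtsp(n_points, k_uavs, dist, pop=50, gens=200):
    def rand_ind():
        perm = random.sample(range(1, n_points), n_points - 1)
        cuts = sorted(random.sample(range(1, len(perm)), k_uavs - 1))
        return perm, cuts
    population = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda ind: fitness(*ind, dist))
        survivors = population[: pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            perm, cuts = random.choice(survivors)
            perm = perm[:]
            i, j = random.sample(range(len(perm)), 2)
            perm[i], perm[j] = perm[j], perm[i]  # swap mutation
            children.append((perm, cuts))
        population = survivors + children
    best = min(population, key=lambda ind: fitness(*ind, dist))
    return best, fitness(*best, dist)
```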

Book ChapterDOI
01 Jan 2022
TL;DR: In this article, the authors propose an algorithm, SLA-GTMax-Min, which schedules tasks efficiently in a heterogeneous multi-cloud environment while satisfying the SLA and balancing makespan, gain cost, and penalty/violation cost.
Abstract: The cloud is a distributed heterogeneous computing paradigm that facilitates on-demand delivery of heterogeneous IT resources to customers based on their needs, over the Internet, on a pay-per-use basis. A service level agreement (SLA) specifies the service levels the customer expects from the cloud service provider (CSP) and the remedies or penalties if the CSP does not meet the agreed-on service levels. Before providing the requested services, the CSP and the customer negotiate and sign an SLA. The CSP earns money for the service provided when the agreed-on service levels are satisfied; otherwise, the CSP pays a penalty cost to the customer for the SLA violation. Task scheduling minimizes task execution time and maximizes the resource usage rate; its objective is to improve quality of service (QoS) parameters such as resource usage at minimum execution time and cost, without violating the SLA. The proposed algorithm SLA-GTMax-Min schedules tasks efficiently in a heterogeneous multi-cloud environment while satisfying the SLA and balancing makespan, gain cost, and penalty/violation cost. SLA-GTMax-Min supports three SLA levels corresponding to three types of service expected by customers: minimum task execution time, minimum gain cost, and a percentage combination of both. Makespan denotes the minimum execution time of the tasks, and gain cost denotes the minimum execution cost of completing them. SLA-GTMax-Min incorporates the SLA gain cost for successfully provided service and the SLA violation cost for unsuccessfully provided service. The performance of SLA-GTMax-Min and existing algorithms is measured on benchmark dataset values. The experimental results of SLA-GTMax-Min are compared with those of the existing scheduling algorithms SLA-MCT, Execution-MCT, Profit-MCT, SLA-Min-Min, Execution-Min-Min, and Profit-Min-Min using several evaluation metrics: makespan, cloud utilization ratio, gain cost (the cost earned by the CSP for successful completion of the tasks), and penalty cost (what the CSP pays to the customer for violation of the SLA). The results clearly illustrate that SLA-GTMax-Min achieves a better balance among makespan, gain cost, and penalty cost than the existing algorithms.
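
The Max-Min skeleton that SLA-aware variants such as SLA-GTMax-Min build on can be sketched briefly: each round assigns the task whose best (minimum) completion time is largest, so long tasks are placed early and the load stays balanced. The SLA gain/penalty bookkeeping of the actual algorithm is omitted here.

```python
# Classical Max-Min scheduling skeleton (SLA accounting omitted).

def max_min(exec_time, ready=None):
    """exec_time[t][m]: time of task t on machine m. Returns the
    task->machine assignment and the makespan."""
    n_tasks, n_mach = len(exec_time), len(exec_time[0])
    ready = ready or [0.0] * n_mach
    unassigned, assign = set(range(n_tasks)), {}
    while unassigned:
        best = {t: min(range(n_mach), key=lambda m: ready[m] + exec_time[t][m])
                for t in unassigned}
        # Max-Min rule: take the task with the largest minimum completion time
        t = max(unassigned,
                key=lambda t: ready[best[t]] + exec_time[t][best[t]])
        m = best[t]
        ready[m] += exec_time[t][m]
        assign[t] = m
        unassigned.remove(t)
    return assign, max(ready)
```

For example, max_min([[3, 5], [8, 2], [4, 4]]) places the long tasks first and returns a balanced assignment with makespan 7.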

Book ChapterDOI
01 Jan 2022
TL;DR: An implementation comparison of PSO and ACO on the cloud and fog is carried out using the iFogSim toolkit, showing enhancement of quality of service (QoS) parameters over cloud computing.
Abstract: In the cloud computing paradigm, data owners have to put their data in the cloud. Due to the long distance between devices and the cloud, problems of delay, bandwidth, and jitter arise. Fog computing was introduced at the edge of the network to overcome these problems. During the transfer of data between Internet of Things (IoT) devices and fog nodes, scheduling of resources and tasks is necessary to enrich quality of service (QoS) parameters. Various optimization and scheduling algorithms have been implemented in fog environments, yet the fog environment still faces problems of efficiency, latency, cost, computation time, and total execution time. Earlier, PSO (particle swarm optimization) and ACO (ant colony optimization) techniques provided solutions to NP-hard problems. Beyond these, various optimization algorithms have been proposed, such as Dolphin Partner Optimization, Grey Wolf, Moth-Flame, Firefly, Crow, etc.; priority-queue and round-robin scheduling algorithms have also been applied to the problem. In this paper, an implementation comparison of PSO and ACO on the cloud and fog is carried out using the iFogSim toolkit. The results for the QoS parameters makespan and cost show that fog computing enhances QoS over cloud computing.