
Showing papers on "Fair-share scheduling published in 2010"


Proceedings ArticleDOI
13 Apr 2010
TL;DR: This work proposes a simple algorithm called delay scheduling, which achieves nearly optimal data locality in a variety of workloads and can increase throughput by up to 2x while preserving fairness.
Abstract: As organizations start to use data-intensive cluster computing systems like Hadoop and Dryad for more applications, there is a growing need to share clusters between users. However, there is a conflict between fairness in scheduling and data locality (placing tasks on nodes that contain their input data). We illustrate this problem through our experience designing a fair scheduler for a 600-node Hadoop cluster at Facebook. To address the conflict between locality and fairness, we propose a simple algorithm called delay scheduling: when the job that should be scheduled next according to fairness cannot launch a local task, it waits for a small amount of time, letting other jobs launch tasks instead. We find that delay scheduling achieves nearly optimal data locality in a variety of workloads and can increase throughput by up to 2x while preserving fairness. In addition, the simplicity of delay scheduling makes it applicable under a wide variety of scheduling policies beyond fair sharing.
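
As a rough illustration of the waiting rule described above, the sketch below uses hypothetical Job objects and a made-up max_skip threshold; it is not the authors' Hadoop fair-scheduler code, only a minimal rendering of the delay-scheduling idea.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Job:
    # node -> ids of pending tasks whose input block lives on that node (illustrative)
    pending: Dict[str, List[str]]
    skip_since: Optional[float] = None  # when this job first had to skip a free slot

def pick_task(sorted_jobs, free_node, max_skip, now):
    """Delay-scheduling sketch: prefer a data-local task; if the head-of-line
    job has none, let it wait up to max_skip seconds before going non-local."""
    for job in sorted_jobs:                 # jobs in fair-share order
        if job.pending.get(free_node):
            job.skip_since = None           # got locality, reset the clock
            return job.pending[free_node].pop()
        if job.skip_since is None:
            job.skip_since = now            # start waiting instead of launching non-locally
        elif now - job.skip_since > max_skip:
            job.skip_since = None
            for tasks in job.pending.values():   # give up on locality after the delay
                if tasks:
                    return tasks.pop()
    return None                             # leave the slot idle this heartbeat
```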

1,514 citations


Journal ArticleDOI
TL;DR: This work improves the basic formulation of cooperative PSO by introducing stochastic repulsion among the particles, and uses schedules obtained by simultaneously scheduling all DER to investigate the potential consumer value added by coordinated DER scheduling.
Abstract: We describe algorithmic enhancements to a decision-support tool that residential consumers can utilize to optimize their acquisition of electrical energy services. The decision-support tool optimizes energy services provision by enabling end users to first assign values to desired energy services, and then scheduling their available distributed energy resources (DER) to maximize net benefits. We chose particle swarm optimization (PSO) to solve the corresponding optimization problem because of its straightforward implementation and demonstrated ability to generate near-optimal schedules within manageable computation times. We improve the basic formulation of cooperative PSO by introducing stochastic repulsion among the particles. The improved DER schedules are then used to investigate the potential consumer value added by coordinated DER scheduling. This is computed by comparing the end-user costs obtained with the enhanced algorithm simultaneously scheduling all DER, against the costs when each DER schedule is solved separately. This comparison enables the end users to determine whether their mix of energy service needs, available DER and electricity tariff arrangements might warrant solving the more complex coordinated scheduling problem, or instead, decomposing the problem into multiple simpler optimizations.
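
The abstract does not give the PSO formulation, so the following is only a generic, self-contained sketch of particle swarm optimization with an occasional repulsion term that pushes particles away from the swarm best to preserve diversity; the objective, parameters, and repulse probability are illustrative assumptions, not the paper's cooperative-PSO or DER model.

```python
import numpy as np

def pso_repulsion(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5,
                  repulse=0.1, bounds=(-1.0, 1.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # with probability `repulse`, push a particle away from the swarm best
        sign = np.where(rng.random((n_particles, 1)) < repulse, -1.0, 1.0)
        v = w * v + c1 * r1 * (pbest - x) + sign * c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Example: a toy quadratic standing in for the DER cost function.
best, cost = pso_repulsion(lambda z: float(np.sum(z ** 2)), dim=4)
print(best, cost)
```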

824 citations


Journal ArticleDOI
01 Jun 2010
TL;DR: An adaptive carrier sense multiple access (CSMA) scheduling algorithm that can achieve the maximal throughput distributively and is combined with congestion control to achieve the optimal utility and fairness of competing flows.
Abstract: In multihop wireless networks, designing distributed scheduling algorithms to achieve the maximal throughput is a challenging problem because of the complex interference constraints among different links. Traditional maximal-weight scheduling (MWS), although throughput-optimal, is difficult to implement in distributed networks. On the other hand, a distributed greedy protocol similar to IEEE 802.11 does not guarantee the maximal throughput. In this paper, we introduce an adaptive carrier sense multiple access (CSMA) scheduling algorithm that can achieve the maximal throughput distributively. Some of the major advantages of the algorithm are that it applies to a very general interference model and that it is simple, distributed, and asynchronous. Furthermore, the algorithm is combined with congestion control to achieve the optimal utility and fairness of competing flows. Simulations verify the effectiveness of the algorithm. Also, the adaptive CSMA scheduling is a modular MAC-layer algorithm that can be combined with various protocols in the transport layer and network layer. Finally, the paper explores some implementation issues in the setting of 802.11 networks.

697 citations


Journal ArticleDOI
13 Mar 2010
TL;DR: This study is the first to provide a comprehensive analysis of contention-mitigating techniques that use only scheduling, and finds a classification scheme that addresses not only contention for cache space, but contention for other shared resources, such as the memory controller, memory bus and prefetching hardware.
Abstract: Contention for shared resources on multicore processors remains an unsolved problem in existing systems despite significant research efforts dedicated to this problem in the past. Previous solutions focused primarily on hardware techniques and software page coloring to mitigate this problem. Our goal is to investigate how and to what extent contention for shared resources can be mitigated via thread scheduling. Scheduling is an attractive tool, because it does not require extra hardware and is relatively easy to integrate into the system. Our study is the first to provide a comprehensive analysis of contention-mitigating techniques that use only scheduling. The most difficult part of the problem is to find a classification scheme for threads, which would determine how they affect each other when competing for shared resources. We provide a comprehensive analysis of such classification schemes using a newly proposed methodology that enables these schemes to be evaluated separately from the scheduling algorithm itself and compared to the optimal. As a result of this analysis we discovered a classification scheme that addresses not only contention for cache space, but contention for other shared resources, such as the memory controller, memory bus and prefetching hardware. To show the applicability of our analysis we design a new scheduling algorithm, which we prototype at user level, and demonstrate that it performs within 2% of the optimal. We also conclude that the highest impact of contention-aware scheduling techniques is not in improving performance of a workload as a whole but in improving quality of service or performance isolation for individual applications.
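
As a hedged illustration of the kind of scheduling-only mitigation discussed above, the sketch below classifies threads by a single made-up metric (last-level-cache misses per 1,000 instructions) and pairs the most memory-intensive thread with the least intensive one on each shared cache; the paper's actual classification scheme and scheduler are not reproduced here.

```python
def pair_threads_by_miss_rate(threads):
    """threads: list of (name, llc_misses_per_1k_instructions).
    Sort by memory intensity and co-schedule the most intensive remaining
    thread with the least intensive one, so light threads absorb the
    contention caused by heavy ones."""
    ranked = sorted(threads, key=lambda t: t[1])
    pairs = []
    while len(ranked) > 1:
        light = ranked.pop(0)    # least memory-intensive
        heavy = ranked.pop(-1)   # most memory-intensive
        pairs.append((light, heavy))
    if ranked:                   # odd thread count: last thread runs alone
        pairs.append((ranked.pop(),))
    return pairs

print(pair_threads_by_miss_rate([("A", 35.0), ("B", 2.1), ("C", 18.4), ("D", 0.7)]))
```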

532 citations


Journal ArticleDOI
TL;DR: A distributed algorithm based on distributed coloring of the nodes that increases the delay by a factor of 10-70 over centralized algorithms for 1000 nodes, together with upper bounds for these schedules as a function of the total number of packets generated in the network.
Abstract: Algorithms for scheduling TDMA transmissions in multi-hop networks usually determine the smallest length conflict-free assignment of slots in which each link or node is activated at least once. This is based on the assumption that there are many independent point-to-point flows in the network. In sensor networks, however, data are often transferred from the sensor nodes to a few central data collectors. The scheduling problem is therefore to determine the smallest length conflict-free assignment of slots during which the packets generated at each node reach their destination. The conflicting node transmissions are determined based on an interference graph, which may be different from the connectivity graph due to the broadcast nature of wireless transmissions. We show that this problem is NP-complete. We first propose two centralized heuristic algorithms: one based on direct scheduling of the nodes, or node-based scheduling, which is adapted from classical multi-hop scheduling algorithms for general ad hoc networks, and the other based on scheduling the levels in the routing tree before scheduling the nodes, or level-based scheduling, which is a novel scheduling algorithm for many-to-one communication in sensor networks. The performance of these algorithms depends on the distribution of the nodes across the levels. We then propose a distributed algorithm based on distributed coloring of the nodes, which increases the delay by a factor of 10-70 over centralized algorithms for 1000 nodes. We also obtain upper bounds for these schedules as a function of the total number of packets generated in the network.
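
A minimal way to see the coloring view of slot assignment mentioned above is a greedy coloring of the interference graph, where conflicting nodes must receive different TDMA slots; this is only an illustrative sketch, not the authors' node-based, level-based, or distributed algorithm.

```python
def greedy_slots(interference):
    """interference: dict node -> set of conflicting nodes.
    Returns node -> TDMA slot such that conflicting nodes get different slots."""
    slot = {}
    # visit high-degree (most constrained) nodes first
    for node in sorted(interference, key=lambda n: len(interference[n]), reverse=True):
        used = {slot[n] for n in interference[node] if n in slot}
        s = 0
        while s in used:
            s += 1
        slot[node] = s
    return slot

graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(greedy_slots(graph))   # {3: 0, 1: 1, 2: 2, 4: 1} -- conflicting nodes never share a slot
```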

381 citations


Journal ArticleDOI
TL;DR: In this paper, a survey of deterministic scheduling problems with availability constraints motivated by preventive maintenance is presented; complexity results, exact algorithms and approximation algorithms in single machine, parallel machine, flow shop, open shop and job shop scheduling environments with different criteria are surveyed briefly.

376 citations


Journal ArticleDOI
TL;DR: The paper reveals the complexity of the scheduling problem in Computational Grids when compared to scheduling in classical parallel and distributed systems and shows the usefulness of heuristic and meta-heuristic approaches for the design of efficient Grid schedulers.

364 citations


Proceedings ArticleDOI
30 Nov 2010
TL;DR: Extensive simulations based on both random topologies and real network topologies of a physical testbed demonstrate that C-LLF is highly effective in meeting end-to-end deadlines in WirelessHART networks, and significantly outperforms common real-time scheduling policies.
Abstract: WirelessHART is an open wireless sensor-actuator network standard for industrial process monitoring and control that requires real-time data communication between sensor and actuator devices. Salient features of a WirelessHART network include a centralized network management architecture, multi-channel TDMA transmission, redundant routes, and avoidance of spatial reuse of channels for enhanced reliability and real-time performance. This paper makes several key contributions to real-time transmission scheduling in WirelessHART networks: (1) formulation of the end-to-end real-time transmission scheduling problem based on the characteristics of WirelessHART, (2) proof of NP-hardness of the problem, (3) an optimal branch-and-bound scheduling algorithm based on a necessary condition for schedulability, and (4) an efficient and practical heuristic-based scheduling algorithm called Conflict-aware Least Laxity First (C-LLF). Extensive simulations based on both random topologies and real network topologies of a physical testbed demonstrate that C-LLF is highly effective in meeting end-to-end deadlines in WirelessHART networks, and significantly outperforms common real-time scheduling policies.
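
The exact C-LLF laxity definition is not given in the abstract; the sketch below shows a plain least-laxity-style transmission pick with an assumed conflict penalty (the number of pending transmissions sharing a node or channel) standing in for conflict awareness. It is a simplification, not the paper's algorithm.

```python
def pick_transmission(ready, now, conflicts):
    """ready: list of dicts with 'id', 'deadline', 'remaining_hops' (slots still needed).
    conflicts: dict id -> pending transmissions that share a node or channel with this
    one (an illustrative proxy for conflict-awareness)."""
    def laxity(tx):
        base = tx["deadline"] - now - tx["remaining_hops"]
        return base - conflicts.get(tx["id"], 0)   # treat heavily conflicted flows as tighter
    return min(ready, key=laxity) if ready else None

flows = [{"id": "f1", "deadline": 20, "remaining_hops": 3},
         {"id": "f2", "deadline": 15, "remaining_hops": 2}]
print(pick_transmission(flows, now=5, conflicts={"f1": 2}))
```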

276 citations


Proceedings ArticleDOI
13 Apr 2010
TL;DR: This paper implemented bias scheduling over the Linux scheduler on a real system that models microarchitectural differences accurately and found that it can improve system performance significantly, and in proportion to the application bias diversity present in the workload.
Abstract: Heterogeneous architectures that integrate a mix of big and small cores are very attractive because they can achieve high single-threaded performance while enabling high performance thread-level parallelism with lower energy costs. Despite their benefits, they pose significant challenges to the operating system software. Thread scheduling is one of the most critical challenges. In this paper we propose bias scheduling for heterogeneous systems with cores that have different microarchitectures and performance. We identify key metrics that characterize an application bias, namely the core type that best suits its resource needs. By dynamically monitoring application bias, the operating system is able to match threads to the core type that can maximize system throughput. Bias scheduling takes advantage of this by influencing the existing scheduler to select the core type that best suits the application when performing load balancing operations. Bias scheduling can be implemented on top of most existing schedulers since its impact is limited to changes in the load balancing code. In particular, we implemented it over the Linux scheduler on a real system that models microarchitectural differences accurately and found that it can improve system performance significantly, and in proportion to the application bias diversity present in the workload. Unlike previous work, bias scheduling does not require sampling of CPI on all core types or offline profiling. We also expose the limits of dynamic voltage/frequency scaling as an evaluation vehicle for heterogeneous systems.
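
As a toy illustration of the bias idea, the function below classifies a thread by the fraction of cycles it spends stalled outside the core; the metric and the 0.4 threshold are assumptions for illustration, not the paper's definitions.

```python
def core_bias(external_stall_cycles, total_cycles, threshold=0.4):
    """Return which core type a thread is biased toward: threads that spend most
    cycles stalled outside the core (memory, I/O) gain little from a big core and
    are matched to small cores; compute-bound threads are matched to big cores."""
    stall_fraction = external_stall_cycles / max(total_cycles, 1)
    return "small" if stall_fraction > threshold else "big"

print(core_bias(external_stall_cycles=600, total_cycles=1000))  # memory-bound -> "small"
print(core_bias(external_stall_cycles=100, total_cycles=1000))  # compute-bound -> "big"
```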

273 citations


Book
01 Dec 2010
TL;DR: This updated edition offers an indispensable exposition on real-time computing, with particular emphasis on predictable scheduling algorithms.
Abstract: This updated edition offers an indispensable exposition on real-time computing, with particular emphasis on predictable scheduling algorithms. It introduces the fundamental concepts of real-time computing, demonstrates...

263 citations


Journal ArticleDOI
TL;DR: A weekly surgery schedule is designed for an operating theatre where time blocks are reserved for surgeons rather than specialities; it has less idle time between surgical cases, much higher utilisation of operating rooms, and produces less overtime.

Journal ArticleDOI
TL;DR: An Improved Genetic Algorithm to solve the Distributed and Flexible Job-shop Scheduling problem is proposed and has been compared with other algorithms for distributed scheduling and evaluated with satisfactory results on a large set of distributed-and-flexible scheduling problems derived from classical job-shop scheduling benchmarks.

Journal ArticleDOI
TL;DR: The paper shows the application of the proposed approach to a medium-voltage 120-bus network with five wind plants, one photovoltaic field, ten dispatchable generators, and two transformers equipped with on-load tap changers.
Abstract: Among the innovative contributions to electric distribution systems, one of the most promising and qualified is the possibility to manage and control distributed generation. Therefore, the latest distribution management systems tend to incorporate optimization functions for the short-term scheduling of the various energy and control resources available in the network (e.g., embedded generators, reactive power compensators and transformers equipped with on-load tap changers). The short-term scheduling procedure adopted in the paper is composed of two stages: a day-ahead scheduler for the optimization of distributed resources production during the following day, and an intra-day scheduler that every 15 min adjusts the scheduling in order to take into account the operation requirements and constraints of the distribution network. The intra-day scheduler solves a non-linear multi-objective optimization problem by iteratively applying a mixed-integer linear programming (MILP) algorithm. The linearization of the optimization function and the constraints is achieved by the use of sensitivity coefficients obtained from the results of a three-phase power flow calculation. The paper shows the application of the proposed approach to a medium-voltage 120-bus network with five wind plants, one photovoltaic field, ten dispatchable generators, and two transformers equipped with on-load tap changers.

Proceedings ArticleDOI
01 Dec 2010
TL;DR: The proposed scheduling approach in cloud employs an improved cost-based scheduling algorithm for making efficient mapping of tasks to available resources in cloud that measures both resource cost and computation performance and improves the computation/communication ratio.
Abstract: Cloud computing has been built upon the development of distributed computing, grid computing and virtualization. Since the cost of each task differs across cloud resources, scheduling user tasks in the cloud is not the same as in traditional scheduling methods. The objective of this paper is to schedule task groups in a cloud computing platform, where resources have different resource costs and computation performance. By grouping user tasks into coarse-grained jobs matched to resources, the computation/communication ratio is optimized. For this purpose, an algorithm based on both costs with user task grouping is proposed. The proposed scheduling approach in the cloud employs an improved cost-based scheduling algorithm for making an efficient mapping of tasks to the available resources in the cloud. This scheduling algorithm measures both resource cost and computation performance; it also improves the computation/communication ratio by grouping the user tasks according to a particular cloud resource's processing capability and sending the grouped jobs to the resource.
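
The grouping step can be pictured with the following sketch, which bundles fine-grained tasks until a group matches roughly what a resource can process within one scheduling granularity; the MI/MIPS units and the granularity parameter are illustrative assumptions, not the paper's exact formulation.

```python
def group_tasks(task_lengths_mi, resource_mips, granularity_s):
    """task_lengths_mi: task sizes in million instructions (MI).
    resource_mips: processing capability of the target cloud resource.
    granularity_s: desired seconds of work per grouped job."""
    capacity = resource_mips * granularity_s   # MI one grouped job should carry
    groups, current, load = [], [], 0.0
    for t in task_lengths_mi:
        if load + t > capacity and current:    # close the group before it overflows
            groups.append(current)
            current, load = [], 0.0
        current.append(t)
        load += t
    if current:
        groups.append(current)
    return groups

# Three grouped jobs, each sized for ~2 s on a 500 MIPS resource.
print(group_tasks([200, 500, 300, 800, 100, 400], resource_mips=500, granularity_s=2))
```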

Book ChapterDOI
23 Oct 2010
TL;DR: This paper discusses a two-level task scheduling mechanism based on load balancing in cloud computing that can not only meet users' requirements, but also achieve high resource utilization, as shown by simulation results in the CloudSim toolkit.
Abstract: An efficient task scheduling mechanism can meet users' requirements and improve resource utilization, thereby enhancing the overall performance of the cloud computing environment. Task scheduling in grid computing, however, is often based on static task requirements, and its resource utilization rate is also low. According to the new features of cloud computing, such as flexibility and virtualization, this paper discusses a two-level task scheduling mechanism based on load balancing in cloud computing. This task scheduling mechanism can not only meet users' requirements, but also achieve high resource utilization, which was proved by the simulation results in the CloudSim toolkit.

Journal ArticleDOI
TL;DR: A simple cross-CC packet scheduling algorithm is proposed that improves the coverage performance and the resource allocation fairness among users, as compared to independent scheduling per CC.
Abstract: In this paper we focus on resource allocation for next generation wireless communication systems with aggregation of multiple Component Carriers (CCs), i.e., how to assign the CCs to each user, and how to multiplex multiple users in each CC. We first investigate two carrier load balancing methods for allocating the CCs to the users: Round Robin (RR) and Mobile Hashing (MH) balancing, by means of a simple theoretical formulation, as well as system level simulations. At Layer-2 we propose a simple cross-CC packet scheduling algorithm that improves the coverage performance and the resource allocation fairness among users, as compared to independent scheduling per CC. The Long Term Evolution (LTE)-Advanced is selected for the case study of a multi-carrier system. In such a system, RR provides better performance than MH balancing, and the proposed simple scheduling algorithm is shown to be effective in providing up to 90% coverage gain with no loss of the overall cell throughput, as compared to independent scheduling per CC.
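
A minimal sketch of the carrier load balancing step: under a round-robin-style policy, each arriving user is placed on the component carrier currently serving the fewest users (mobile hashing would instead hash the user ID onto a carrier). The code below is an illustration under those assumptions, not the simulator used in the paper.

```python
def assign_cc_least_loaded(user_ids, n_ccs):
    """Assign each arriving user to the CC with the fewest users; when users only
    arrive (never depart), this behaves like cycling through the CCs in turn."""
    load = [0] * n_ccs
    assignment = {}
    for uid in user_ids:
        cc = load.index(min(load))   # least-loaded CC; ties go to the lowest index
        assignment[uid] = cc
        load[cc] += 1
    return assignment

print(assign_cc_least_loaded(["u1", "u2", "u3", "u4", "u5"], n_ccs=2))
```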

Journal ArticleDOI
TL;DR: A parallel variable neighborhood search (PVNS) algorithm that solves the FJSP to minimize makespan time and uses various neighborhood structures which carry the responsibility of making changes in assignment and sequencing of operations for generating neighboring solutions.
Abstract: Flexible job-shop scheduling problem (FJSP) is an extension of the classical job-shop scheduling problem. FJSP is NP-hard and mainly presents two difficulties. The first one is to assign each operation to a machine out of a set of capable machines, and the second one deals with sequencing the assigned operations on the machines. This paper proposes a parallel variable neighborhood search (PVNS) algorithm that solves the FJSP to minimize makespan time. Parallelization in this algorithm is based on the application of multiple independent searches increasing the exploration in the search space. The proposed PVNS uses various neighborhood structures which carry the responsibility of making changes in assignment and sequencing of operations for generating neighboring solutions. The results obtained from the computational study have shown that the proposed algorithm is a viable and effective approach for the FJSP.

Journal ArticleDOI
TL;DR: This survey reviews the current complexity status of basic cyclic scheduling models, paying special attention to recent results on the unsolvability (NP-hardness) of various cyclic problems arising from the scheduling of robotic cells.

Journal ArticleDOI
TL;DR: The framework for the two-agent scheduling problem is enlarged by including the total tardiness objective, allowing for preemptions, and considering jobs with different release dates; the relationships between two-agent scheduling problems and other areas within the scheduling field, namely rescheduling and scheduling subject to availability constraints, are established.
Abstract: We consider a scheduling environment with m (m ≥ 1) identical machines in parallel and two agents. Agent A is responsible for n_1 jobs and has a given objective function with regard to these jobs; agent B is responsible for n_2 jobs and has an objective function that may be either the same as or different from the one of agent A. The problem is to find a schedule for the n_1 + n_2 jobs that minimizes the objective of agent A (with regard to his n_1 jobs) while keeping the objective of agent B (with regard to his n_2 jobs) below or at a fixed level Q. The special case with a single machine has recently been considered in the literature, and a variety of results have been obtained for two-agent models with objectives such as f_max, Σ w_j C_j, and Σ U_j. In this paper, we generalize these results and solve one of the problems that had remained open. Furthermore, we enlarge the framework for the two-agent scheduling problem by including the total tardiness objective, allowing for preemptions, and considering jobs with different release dates; we also consider identical machines in parallel. We furthermore establish the relationships between two-agent scheduling problems and other areas within the scheduling field, namely rescheduling and scheduling subject to availability constraints.
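
The constrained structure of the two-agent problem can be seen on a toy single-machine instance: minimize agent A's total completion time subject to agent B's maximum completion time staying at or below Q. The brute-force sketch below is purely illustrative and does not reflect the paper's algorithms.

```python
from itertools import permutations

def best_schedule(jobs_a, jobs_b, Q):
    """jobs_a, jobs_b: lists of processing times on a single machine."""
    jobs = [(p, "A") for p in jobs_a] + [(p, "B") for p in jobs_b]
    best, best_val = None, float("inf")
    for order in permutations(jobs):
        t, sum_c_a, cmax_b = 0, 0, 0
        for p, agent in order:
            t += p
            if agent == "A":
                sum_c_a += t        # agent A: total completion time
            else:
                cmax_b = max(cmax_b, t)  # agent B: makespan of B's jobs
        if cmax_b <= Q and sum_c_a < best_val:
            best, best_val = order, sum_c_a
    return best, best_val

# Best order keeps B's job done by time 7 while A's total completion time is 11.
print(best_schedule([2, 3], [4], Q=7))
```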

Proceedings ArticleDOI
06 Jul 2010
TL;DR: This paper develops a unifying theory with the DP-FAIR scheduling policy and examines how it overcomes problems faced by greedy scheduling algorithms, and presents a simple DP-FAIR scheduling algorithm, DP-WRAP, which serves as a least common ancestor to many recent algorithms.
Abstract: We consider the problem of optimal real-time scheduling of periodic and sporadic tasks for identical multiprocessors. A number of recent papers have used the notions of fluid scheduling and deadline partitioning to guarantee optimality and improve performance. In this paper, we develop a unifying theory with the DP-FAIR scheduling policy and examine how it overcomes problems faced by greedy scheduling algorithms. We then present a simple DP-FAIR scheduling algorithm, DP-WRAP, which serves as a least common ancestor to many recent algorithms. We also show how to extend DP-FAIR to the scheduling of sporadic tasks with arbitrary deadlines.
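
Within one deadline-partition slice, DP-WRAP lays each task's share of work (slice length times utilization) end to end and wraps it across processors, McNaughton style. The sketch below illustrates that wrap-around step under the stated assumptions (total utilization at most the processor count, each utilization at most one); it is a simplification of the published algorithm, and the variable names are illustrative.

```python
def dp_wrap_slice(utilizations, slice_len, n_procs):
    """Return, per processor, a list of (task_id, start, end) within the slice.
    Assumes sum(utilizations) <= n_procs and each utilization <= 1, so the two
    pieces of a wrapped task never overlap in time."""
    alloc = [[] for _ in range(n_procs)]
    proc, cursor = 0, 0.0
    for tid, u in enumerate(utilizations):
        remaining = u * slice_len           # work this task must receive in the slice
        while remaining > 1e-12:
            run = min(slice_len - cursor, remaining)
            alloc[proc].append((tid, cursor, cursor + run))
            cursor += run
            remaining -= run
            if cursor >= slice_len - 1e-12:  # processor full: wrap to the next one
                proc, cursor = proc + 1, 0.0
    return alloc

# Task 1 wraps from processor 0 onto processor 1 without overlapping itself.
print(dp_wrap_slice([0.6, 0.6, 0.8], slice_len=10.0, n_procs=2))
```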

Proceedings ArticleDOI
15 Mar 2010
TL;DR: This work proposes GGen -- a unified and standard implementation of classical task graph generation methods used in the scheduling domain, and provides an in-depth analysis of each generation method, emphasizing important graph properties that may influence scheduling algorithms.
Abstract: In parallel and distributed systems, validation of scheduling heuristics is usually done by simulation on randomly generated synthetic workloads, typically represented by task graphs. Since there is no single generation method that models all possible workloads for scheduling problems, researchers often re-implement the classical generation algorithms or even implement ad hoc ones. A bad choice of generation method can mislead the validation of the algorithm due to biases it can induce. Moreover, different implementations of the same randomized generation method may produce slightly different graphs. These problems can harm the experimental comparison of scheduling algorithms. In order to provide a comparison basis we propose GGen -- a unified and standard implementation of classical task graph generation methods used in the scheduling domain. We also provide an in-depth analysis of each generation method, emphasizing important graph properties that may influence scheduling algorithms.
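
One of the classical generation methods alluded to above is a layer-by-layer random DAG; the sketch below illustrates that general idea only and is not GGen's implementation (GGen bundles several such generators behind a common interface).

```python
import random

def layered_dag(n_layers, width, edge_prob, seed=0):
    """Generate a layered random task graph: nodes are arranged in layers and
    each node is connected to each node of the next layer with probability edge_prob."""
    rng = random.Random(seed)
    layers = [[f"t{l}_{i}" for i in range(width)] for l in range(n_layers)]
    edges = []
    for l in range(n_layers - 1):
        for src in layers[l]:
            for dst in layers[l + 1]:
                if rng.random() < edge_prob:
                    edges.append((src, dst))
    return [t for layer in layers for t in layer], edges

tasks, deps = layered_dag(n_layers=3, width=4, edge_prob=0.3)
print(len(tasks), "tasks,", len(deps), "dependencies")
```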

Journal ArticleDOI
TL;DR: In this article, a multi-objective genetic algorithm (MOGA) based on immune and entropy principle was proposed to solve the flexible job-shop scheduling problem (FJSP).
Abstract: The flexible job-shop scheduling problem (FJSP) is an extension of the traditional job-shop scheduling problem that better approximates practical scheduling problems. This paper presents a multi-objective genetic algorithm (MOGA) based on immune and entropy principles to solve the multi-objective FJSP. In this improved MOGA, a fitness scheme based on Pareto-optimality is applied, and the immune and entropy principles are used to keep the diversity of individuals and overcome the problem of premature convergence. Efficient crossover and mutation operators are proposed to adapt to the special chromosome structure. The proposed algorithm is evaluated on some representative instances, and the comparison with other approaches in the latest papers validates the effectiveness of the proposed algorithm.

Proceedings ArticleDOI
16 Jul 2010
TL;DR: This paper investigates the possibility to allocate the Virtual Machines (VMs) in a flexible way to permit the maximum usage of physical resources and uses an Improved Genetic Algorithm (IGA) for the automated scheduling policy.
Abstract: Based on deep research into open-source Infrastructure as a Service (IaaS) cloud systems, we propose an optimized scheduling algorithm to achieve optimal or sub-optimal solutions for cloud scheduling problems. In this paper, we investigate the possibility of allocating the Virtual Machines (VMs) in a flexible way to permit the maximum usage of physical resources. We use an Improved Genetic Algorithm (IGA) for the automated scheduling policy. The IGA uses the shortest genes and introduces the idea of the Dividend Policy in Economics to select an optimal or suboptimal allocation for the VM requests. The simulation experiments indicate that our dynamic scheduling policy performs much better than that of the Eucalyptus, OpenNebula and Nimbus IaaS clouds. The tests illustrate that the speed of the IGA is almost twice that of the traditional GA scheduling method in a Grid environment, and that the utilization rate of resources is consistently higher than in the open-source IaaS cloud systems.

Journal ArticleDOI
TL;DR: This study is the first to provide a comprehensive analysis of contention-mitigating techniques that use only scheduling, and finds a classification scheme that addresses not only contention for cache space, but contention for other shared resources, such as the memory controller, memory bus and prefetching hardware.
Abstract: Contention for shared resources on multicore processors remains an unsolved problem in existing systems despite significant research efforts dedicated to this problem in the past. Previous solutions focused primarily on hardware techniques and software page coloring to mitigate this problem. Our goal is to investigate how and to what extent contention for shared resources can be mitigated via thread scheduling. Scheduling is an attractive tool, because it does not require extra hardware and is relatively easy to integrate into the system. Our study is the first to provide a comprehensive analysis of contention-mitigating techniques that use only scheduling. The most difficult part of the problem is to find a classification scheme for threads, which would determine how they affect each other when competing for shared resources. We provide a comprehensive analysis of such classification schemes using a newly proposed methodology that enables these schemes to be evaluated separately from the scheduling algorithm itself and compared to the optimal. As a result of this analysis we discovered a classification scheme that addresses not only contention for cache space, but contention for other shared resources, such as the memory controller, memory bus and prefetching hardware. To show the applicability of our analysis we design a new scheduling algorithm, which we prototype at user level, and demonstrate that it performs within 2% of the optimal. We also conclude that the highest impact of contention-aware scheduling techniques is not in improving performance of a workload as a whole but in improving quality of service or performance isolation for individual applications and in optimizing system energy consumption.

Journal ArticleDOI
TL;DR: Three novel heuristics for scheduling parallel applications on Utility Grids that manage and optimize the trade-off between time and cost constraints are presented.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a heuristic solution procedure based on genetic algorithms to counter the large running times inherent in tackling this kind of hard optimization problem and integrated the human factor into the optimization procedure by stressing the human resources' availabilities in the design of the schedules.

Journal ArticleDOI
TL;DR: In this article, a construction heuristic and an adaptive large neighborhood search heuristic for the technician and task scheduling problem arising in a large telecommunications company were proposed, which tied for second place in the 2007 ROADEF challenge.
Abstract: This paper proposes a construction heuristic and an adaptive large neighborhood search heuristic for the technician and task scheduling problem arising in a large telecommunications company. This problem was solved within the framework of the 2007 challenge set up by the French Operational Research Society (ROADEF). The paper describes the authors' entry in the competition which tied for second place.

Proceedings ArticleDOI
11 Sep 2010
TL;DR: This paper presents a range of scheduling and power management algorithms and performs a detailed evaluation of their effectiveness and scalability on heterogeneous many-core architectures with up to 256 cores and proposes a Hierarchical Hungarian Scheduling Algorithm that dramatically reduces the scheduling overhead without loss of accuracy.
Abstract: Future many-core microprocessors are likely to be heterogeneous, by design or due to variability and defects. The latter type of heterogeneity is especially challenging due to its unpredictability. To minimize the performance and power impact of these hardware imperfections, the runtime thread scheduler and global power manager must be nimble enough to handle such random heterogeneity. With hundreds of cores expected on a single die in the future, these algorithms must provide high power-performance efficiency, yet remain scalable with low runtime overhead. This paper presents a range of scheduling and power management algorithms and performs a detailed evaluation of their effectiveness and scalability on heterogeneous many-core architectures with up to 256 cores. We also conduct a limit study on the potential benefits of coordinating scheduling and power management and demonstrate that coordination yields little benefit. We highlight the scalability limitations of previously proposed thread scheduling algorithms that were designed for small-scale chip multiprocessors and propose a Hierarchical Hungarian Scheduling Algorithm that dramatically reduces the scheduling overhead without loss of accuracy. Finally, we show that the high computational requirements of prior global power management algorithms based on linear programming make them infeasible for many-core chips, and that an algorithm that we call Steepest Drop achieves orders of magnitude lower execution time without sacrificing power-performance efficiency.
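
The core assignment step that a Hungarian-algorithm-based scheduler solves can be illustrated with SciPy's linear_sum_assignment on a small, made-up thread-to-core cost matrix; the paper's hierarchical variant partitions this problem to keep the step tractable at hundreds of cores, which is not shown here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows: threads, columns: cores; entries are illustrative predicted costs
# (e.g., estimated execution time of that thread on that core), lower is better.
cost = np.array([
    [4.0, 2.5, 7.0],
    [3.0, 6.0, 1.5],
    [5.5, 4.0, 4.5],
])

# Hungarian algorithm: one thread per core, minimizing total predicted cost.
threads, cores = linear_sum_assignment(cost)
print(list(zip(threads, cores)), "total cost:", cost[threads, cores].sum())
```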

Journal ArticleDOI
TL;DR: This study addresses a multi-objective flexible flowshop scheduling problem with sequence-dependent setup times using Pareto archive concepts, and investigates the performance of the proposed algorithm by comparing its last two phases with a well-known benchmark, the multi-objective genetic algorithm (MOGA).
Abstract: This study addresses a multi-objective flexible flowshop scheduling problem with sequence-dependent setup times. The flowshop scheduling problem is made up of n jobs that have to be processed on m machines; a flexible flowshop scheduling problem has more than one machine in at least one stage. As this problem is proven to be NP-hard, a multi-phase approach is developed to solve it. Phases two and three each improve the solutions of the previous phase, and to handle the complexity of multi-objective optimization, Pareto archive concepts are implemented. The parameters of the proposed algorithm are calibrated using a design of experiments (DOE) method. We investigate the performance of our algorithm by comparing its last two phases with a distinguished benchmark, the multi-objective genetic algorithm (MOGA). The computational results support the high performance of our innovative algorithm.

Proceedings ArticleDOI
03 Dec 2010
TL;DR: An optimized task scheduling algorithm for cloud computing based on a genetic simulated annealing algorithm, together with its implementation, which efficiently completes task scheduling in the cloud computing environment.
Abstract: Scheduling is a very important part of the cloud computing system. This paper introduces an optimized task scheduling algorithm based on a genetic simulated annealing algorithm in cloud computing and its implementation. The algorithm considers the QoS requirements of different task types, and the QoS parameters are made dimensionless. The algorithm efficiently completes task scheduling in the cloud computing environment.