
Showing papers on "Scheduling (computing)" published in 2009


Proceedings ArticleDOI
11 Oct 2009
TL;DR: It is argued that data-intensive computation benefits from a fine-grain resource sharing model that differs from the coarser semi-static resource allocations implemented by most existing cluster computing architectures.
Abstract: This paper addresses the problem of scheduling concurrent jobs on clusters where application data is stored on the computing nodes. This setting, in which scheduling computations close to their data is crucial for performance, is increasingly common and arises in systems such as MapReduce, Hadoop, and Dryad as well as many grid-computing environments. We argue that data-intensive computation benefits from a fine-grain resource sharing model that differs from the coarser semi-static resource allocations implemented by most existing cluster computing architectures. The problem of scheduling with locality and fairness constraints has not previously been extensively studied under this resource-sharing model. We introduce a powerful and flexible new framework for scheduling concurrent distributed jobs with fine-grain resource sharing. The scheduling problem is mapped to a graph data structure, where edge weights and capacities encode the competing demands of data locality, fairness, and starvation-freedom, and a standard solver computes the optimal online schedule according to a global cost model. We evaluate our implementation of this framework, which we call Quincy, on a cluster of a few hundred computers using a varied workload of data- and CPU-intensive jobs. We evaluate Quincy against an existing queue-based algorithm and implement several policies for each scheduler, with and without fairness constraints. Quincy achieves better fairness when fairness is requested, while substantially improving data locality. The volume of data transferred across the cluster is reduced by up to a factor of 3.9 in our experiments, leading to a throughput increase of up to 40%.
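
The core of Quincy's approach, encoding the scheduling decision as a min-cost flow problem, can be sketched on a toy instance. The graph shape, the costs, and the use of networkx as a stand-in solver below are illustrative assumptions for this summary, not Quincy's actual cost model or implementation:

```python
# Toy illustration of scheduling as min-cost flow, in the spirit of Quincy.
# Costs and topology are illustrative assumptions, not Quincy's exact model.
import networkx as nx

G = nx.DiGraph()
# Two tasks must each be placed on one of two machines; a sink absorbs them.
G.add_node("task1", demand=-1)   # each task supplies one unit of flow
G.add_node("task2", demand=-1)
G.add_node("sink", demand=2)     # all tasks must be scheduled

# Edge weight encodes data-transfer cost: running a task away from its
# data is more expensive; capacity is one slot per machine here.
G.add_edge("task1", "m1", capacity=1, weight=0)   # task1's data lives on m1
G.add_edge("task1", "m2", capacity=1, weight=5)
G.add_edge("task2", "m1", capacity=1, weight=5)
G.add_edge("task2", "m2", capacity=1, weight=0)   # task2's data lives on m2
G.add_edge("m1", "sink", capacity=1, weight=0)
G.add_edge("m2", "sink", capacity=1, weight=0)

flow = nx.min_cost_flow(G)  # optimal assignment under the global cost model
for task in ("task1", "task2"):
    machine = next(m for m, f in flow[task].items() if f > 0)
    print(task, "->", machine)
```

Fairness and starvation-freedom enter the same way: by adjusting edge weights and capacities, every policy trade-off is expressed inside one global optimization.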

949 citations


Posted Content
TL;DR: This paper proposes CloudSim: a new generalized and extensible simulation framework that enables seamless modelling, simulation, and experimentation of emerging Cloud computing infrastructures and management services.
Abstract: Cloud computing focuses on delivery of reliable, secure, fault-tolerant, sustainable, and scalable infrastructures for hosting Internet-based application services. These applications have different composition, configuration, and deployment requirements. Quantifying the performance of scheduling and allocation policies on a Cloud infrastructure (hardware, software, services) for different application and service models under varying load, energy performance (power consumption, heat dissipation), and system size is an extremely challenging problem to tackle. To simplify this process, in this paper we propose CloudSim: a new generalized and extensible simulation framework that enables seamless modelling, simulation, and experimentation of emerging Cloud computing infrastructures and management services. The simulation framework has the following novel features: (i) support for modelling and instantiation of large-scale Cloud computing infrastructure, including data centers, on a single physical computing node and Java virtual machine; (ii) a self-contained platform for modelling data centers, service brokers, scheduling, and allocation policies; (iii) availability of a virtualization engine, which aids in creation and management of multiple, independent, and co-hosted virtualized services on a data center node; and (iv) flexibility to switch between space-shared and time-shared allocation of processing cores to virtualized services.
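
The space-shared versus time-shared distinction in feature (iv) can be shown with a minimal single-core model; this sketch is written for this summary and is not CloudSim's actual API:

```python
# Minimal sketch (not CloudSim's API) contrasting the two core-allocation
# policies the framework supports: space-shared vs. time-shared.
def space_shared(lengths):
    """One core, FCFS: each task runs alone to completion."""
    t, finish = 0.0, []
    for length in lengths:
        t += length
        finish.append(t)
    return finish

def time_shared(lengths):
    """One core, processor sharing: all ready tasks split the core equally."""
    remaining = sorted(lengths)
    t, finish = 0.0, []
    while remaining:
        n = len(remaining)
        # the shortest remaining task finishes first under equal sharing
        t += remaining[0] * n
        done = remaining.pop(0)
        remaining = [r - done for r in remaining]
        finish.append(t)  # finish times in shortest-task-first order
    return finish

print(space_shared([10, 10]))  # [10.0, 20.0]: first task finishes early
print(time_shared([10, 10]))   # [20.0, 20.0]: both finish together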

537 citations


Journal ArticleDOI
TL;DR: The goals of scheduling are to achieve the optimal usage of resources, to assure the QoS guarantees, to maximize goodput and to minimize power consumption while ensuring feasible algorithm complexity and system scalability.
Abstract: Interest in broadband wireless access (BWA) has been growing due to increased user mobility and the need for data access at all times. IEEE 802.16e based WiMAX networks promise the best available quality of experience for mobile data service users. Unlike wireless LANs, WiMAX networks incorporate several quality of service (QoS) mechanisms at the Media Access Control (MAC) level to guarantee services for data, voice and video. The problem of assuring QoS is basically that of how to allocate available resources among users in order to meet QoS criteria such as delay, delay jitter and throughput requirements. The IEEE standard does not mandate a scheduling mechanism, leaving it open for implementer differentiation. Scheduling is, therefore, of special interest to all WiMAX equipment makers and service providers. This paper discusses the key issues and design factors that scheduler designers should consider. In addition, we present an extensive survey of recent scheduling research. We classify the proposed mechanisms based on their use of channel conditions. The goals of scheduling are to achieve optimal usage of resources, to assure QoS guarantees, to maximize goodput and to minimize power consumption, while ensuring feasible algorithm complexity and system scalability.

393 citations


Proceedings ArticleDOI
11 Jun 2009
TL;DR: This work evaluates the cost of six scheduling strategies used by an organisation that operates a cluster managed by virtual machine technology and seeks to utilise resources from a remote Infrastructure as a Service (IaaS) provider to reduce the response time of its user requests.
Abstract: In this paper, we investigate the benefits that organisations can reap by using "Cloud Computing" providers to augment the computing capacity of their local infrastructure. We evaluate the cost of six scheduling strategies used by an organisation that operates a cluster managed by virtual machine technology and seeks to utilise resources from a remote Infrastructure as a Service (IaaS) provider to reduce the response time of its user requests. Requests for virtual machines are submitted to the organisation's cluster, but additional virtual machines are instantiated in the remote provider and added to the local cluster when there are insufficient resources to serve the users' requests. Naive scheduling strategies can have a great impact on the amount paid by the organisation for using the remote resources, potentially increasing the overall cost with the use of IaaS. Therefore, in this work we investigate six scheduling strategies that consider the use of resources from the "Cloud", to understand how these strategies achieve a balance between performance and usage cost, and how much they improve the requests' response times.
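
As a hedged sketch of the kind of decision such strategies make, the function below implements a single invented deadline-driven rule (not one of the paper's six strategies), with hourly rounded billing as an assumption typical of IaaS providers:

```python
# Hedged sketch of one "deadline-aware" redirection rule: burst to the IaaS
# provider only when the local wait would violate the request's acceptable
# response time. Names and the cost model are assumptions, not the paper's.
import math

def schedule(request, local_queue_wait, vm_startup, price_per_hour):
    """Return (where_to_run, estimated_cost) for one VM request (seconds)."""
    local_response = local_queue_wait + request["runtime"]
    if local_response <= request["deadline"]:
        return "local", 0.0
    # renting: pay for whole hours, assuming billing rounds up as is typical
    hours = math.ceil((vm_startup + request["runtime"]) / 3600)
    return "cloud", hours * price_per_hour

req = {"runtime": 5400, "deadline": 7200}
print(schedule(req, local_queue_wait=3600, vm_startup=120,
               price_per_hour=0.10))  # ('cloud', 0.2)
```

The paper's observation is precisely that naive versions of this decision (for example, always bursting) can inflate the bill with little response-time benefit.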

374 citations


01 Jan 2009
TL;DR: Two simple techniques, delay scheduling and copy-compute splitting, are developed which improve throughput and response times in multi-user MapReduce workloads by factors of 2 to 10 and can also raise throughput in a single-user, FIFO workload by a factor of 2.
Abstract: Sharing a MapReduce cluster between users is attractive because it enables statistical multiplexing (lowering costs) and allows users to share a common large data set. However, we find that traditional scheduling algorithms can perform very poorly in MapReduce due to two aspects of the MapReduce setting: the need for data locality (running computation where the data is) and the dependence between map and reduce tasks. We illustrate these problems through our experience designing a fair scheduler for MapReduce at Facebook, which runs a 600-node multiuser data warehouse on Hadoop. We developed two simple techniques, delay scheduling and copy-compute splitting, which improve throughput and response times by factors of 2 to 10. Although we focus on multi-user workloads, our techniques can also raise throughput in a single-user, FIFO workload by a factor of 2.
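
The delay scheduling idea itself is compact enough to sketch. The structure below is a simplification under assumed data types, omitting the fairness pools and the copy-compute splitting of the full Facebook scheduler:

```python
# Simplified delay scheduling: a job at the head of the fair-share order may
# skip up to D scheduling opportunities while waiting for a data-local slot.
from dataclasses import dataclass

@dataclass
class Task:
    data_nodes: set        # nodes holding this task's input data

@dataclass
class Job:
    pending: list          # unlaunched tasks
    skip_count: int = 0

D = 3  # max consecutive opportunities a job may skip waiting for locality

def assign_task(free_node, jobs):
    """Called when a slot frees on free_node; jobs arrive in fair-share order."""
    for job in jobs:
        local = [t for t in job.pending if free_node in t.data_nodes]
        if local:
            job.skip_count = 0
            job.pending.remove(local[0])
            return local[0]            # launch a data-local task
        if job.skip_count < D:
            job.skip_count += 1        # skip this turn, hoping for locality
            continue
        job.skip_count = 0
        return job.pending.pop(0)      # relax locality after D skips

jobs = [Job(pending=[Task({"n1"}), Task({"n2"})])]
print(assign_task("n2", jobs))  # picks the n2-local task first
```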

373 citations


Journal ArticleDOI
01 Jan 2009
TL;DR: This paper proposes an ant colony optimization (ACO) algorithm to schedule large-scale workflows with various QoS parameters, designs seven new heuristics for the ACO approach, and proposes an adaptive scheme that allows artificial ants to select heuristics based on pheromone values.
Abstract: Grid computing is increasingly considered as a promising next-generation computational platform that supports wide-area parallel and distributed computing. In grid environments, applications are typically modeled as workflows. The problem of scheduling workflows in terms of certain quality of service (QoS) requirements is challenging, and it significantly influences the performance of grids. To date, several algorithms for grid workflow scheduling have been proposed, but most of them can only tackle problems with a single QoS parameter or with small-scale workflows. Against this background, this paper proposes an ant colony optimization (ACO) algorithm to schedule large-scale workflows with various QoS parameters. This algorithm enables users to specify their QoS preferences as well as define minimum QoS thresholds for a certain application. The objective of this algorithm is to find a solution that meets all QoS constraints and optimizes the user-preferred QoS parameter. Based on the characteristics of workflow scheduling, we design seven new heuristics for the ACO approach and propose an adaptive scheme that allows artificial ants to select heuristics based on pheromone values. Experiments are conducted on ten workflow applications with up to 120 tasks, and the results demonstrate the effectiveness of the proposed algorithm.
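
The adaptive heuristic-selection step can be sketched as standard pheromone-proportional (roulette-wheel) selection. The heuristic names and the update rule below are generic ACO conventions assumed for illustration, not the paper's exact seven heuristics:

```python
# Sketch of adaptive heuristic selection: each artificial ant picks one of
# several heuristics with probability proportional to its pheromone value.
import random

pheromone = {"h1": 1.0, "h2": 1.0, "h3": 1.0}  # one trail per heuristic

def pick_heuristic():
    total = sum(pheromone.values())
    r = random.uniform(0, total)
    acc = 0.0
    for name, tau in pheromone.items():
        acc += tau
        if r <= acc:
            return name
    return name  # guard against floating-point round-off

def reinforce(name, quality, rho=0.1):
    """Evaporate all trails, then deposit on the heuristic that did well."""
    for k in pheromone:
        pheromone[k] *= (1 - rho)
    pheromone[name] += rho * quality

h = pick_heuristic()
reinforce(h, quality=1.0)   # quality would come from the schedule's QoS
print(h, pheromone)
```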

355 citations


Journal ArticleDOI
TL;DR: The algorithm is implemented in TinyOS and shown to be effective in adapting to local topology changes without incurring global overhead in the scheduling, and the effect of the time-varying nature of wireless links on the conflict-free property of DRAND-assigned time slots is evaluated.
Abstract: This paper presents a distributed implementation of RAND, a randomized time slot scheduling algorithm, called DRAND. DRAND runs in O(δ) time and message complexity, where δ is the maximum size of a two-hop neighborhood in a wireless network, assuming that message delays can be bounded by an unknown constant. DRAND is the first fully distributed version of RAND. The algorithm is suitable for a wireless network where most nodes do not move, such as wireless mesh networks and wireless sensor networks. We implement the algorithm in TinyOS and demonstrate its performance in a real testbed of Mica2 nodes. The algorithm does not require any time synchronization and is shown to be effective in adapting to local topology changes without incurring global overhead in the scheduling. Because of these features, it can also be used for other scheduling problems such as frequency or code scheduling (for FDMA or CDMA) or local identifier assignment for wireless networks where time synchronization is not enforced. We further evaluate the effect of the time-varying nature of wireless links on the conflict-free property of DRAND-assigned time slots. This experiment is conducted on a 55-node testbed consisting of the more recent MicaZ sensor nodes.

339 citations


Journal ArticleDOI
TL;DR: This work considers a multicell orthogonal frequency-division multiple-access wireless network with universal frequency reuse and treats the problem of cochannel interference mitigation via base station coordination in the downlink as a nonconvex combinatorial problem.
Abstract: We consider a multicell orthogonal frequency-division multiple-access (OFDMA) wireless network with universal frequency reuse and treat the problem of cochannel interference mitigation via base station coordination in the downlink. Assuming that coordinated access points only share channel quality measurements but not user data symbols, we propose to select the set of cochannel users and the power allocation across tones to maximize the weighted system sum rate subject to per-base-station power constraints. Since this is a nonconvex combinatorial problem, efficient suboptimal algorithms are presented and discussed, each requiring a different level of coordination among base stations and a different feedback signaling overhead. Simulation results are provided to assess the performances of the proposed strategies.

334 citations


Proceedings ArticleDOI
19 Apr 2009
TL;DR: This paper explores the fundamental problem of LTE SC-FDMA uplink scheduling by adopting the conventional time-domain proportional fair algorithm to maximize its objective (i.e. proportional fair criteria) in the frequency-domain setting and presents a set of practical algorithms fine tuned to this problem.
Abstract: With the power consumption issue of mobile handsets taken into account, single-carrier FDMA (SC-FDMA) has been selected as the 3GPP long-term evolution (LTE) uplink multiple access scheme. As in the OFDMA downlink, it enables multiple users to be served simultaneously in the uplink. However, its single-carrier property requires that all the subcarriers allocated to a single user be contiguous in frequency within each time slot. This contiguous allocation constraint limits scheduling flexibility, and frequency-domain packet scheduling algorithms in such a system need to incorporate the constraint while trying to maximize their own scheduling objectives. In this paper we explore this fundamental problem of LTE SC-FDMA uplink scheduling by adopting the conventional time-domain proportional fair algorithm to maximize its objective (the proportional fair criterion) in the frequency-domain setting. We show the NP-hardness of the frequency-domain scheduling problem under this contiguous allocation constraint and present a set of practical algorithms fine-tuned to this problem. We demonstrate that competitive performance can be achieved in terms of both system throughput and fairness, evaluated using 3GPP LTE system model simulations.
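
A hedged sketch of one greedy heuristic in the spirit of the paper's algorithms (not a reproduction of any specific one): repeatedly grant the best remaining (user, resource block) pair, but only when the user's allocation stays contiguous in frequency.

```python
# Greedy PF scheduling under the contiguous-allocation constraint (sketch).
def contiguous_pf(metric, n_users, n_rbs):
    """metric[u][rb] = instantaneous rate / long-term average rate."""
    owner = [None] * n_rbs
    alloc = {u: [] for u in range(n_users)}
    pairs = sorted(((metric[u][rb], u, rb)
                    for u in range(n_users) for rb in range(n_rbs)),
                   reverse=True)                 # best PF metrics first
    for _, u, rb in pairs:
        if owner[rb] is not None:
            continue
        blocks = alloc[u]
        # grant only if the user's RB set remains contiguous in frequency
        if not blocks or rb == min(blocks) - 1 or rb == max(blocks) + 1:
            owner[rb] = u
            blocks.append(rb)
    return owner

m = [[3, 1, 1, 2],   # user 0's PF metric per resource block
     [1, 2, 2, 1]]   # user 1's
print(contiguous_pf(m, 2, 4))  # [0, 1, 1, 1]: user 0 forgoes RB 3
```

The example shows exactly the effect the paper studies: user 0 would like RB 3, but taking it would break contiguity, so the block goes to user 1.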

318 citations


Journal ArticleDOI
TL;DR: This work proposes a gradient-based scheduling framework for OFDM; the optimal solution has prohibitively high computational complexity but reveals guiding principles that are used to generate lower-complexity sub-optimal algorithms.
Abstract: Orthogonal frequency division multiplexing (OFDM) with dynamic scheduling and resource allocation is a key component of most emerging broadband wireless access networks, such as WiMAX and LTE (Long Term Evolution) for 3GPP. However, scheduling and resource allocation in an OFDM system are complicated, especially in the uplink, for two reasons: (i) the discrete nature of subchannel assignments, and (ii) the heterogeneity of the users' subchannel conditions, individual resource constraints and application requirements. We approach this problem using a gradient-based scheduling framework. Physical layer resources (bandwidth and power) are allocated to maximize the projection onto the gradient of a total system utility function which models application-layer Quality of Service (QoS). This is formulated as a convex optimization problem and solved using a dual decomposition approach. The optimal solution has prohibitively high computational complexity but reveals guiding principles that we use to generate lower-complexity sub-optimal algorithms. We analyze the complexity and compare the performance of these algorithms via extensive simulations.
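
The gradient-based framework is easiest to see in its best-known special case: with a logarithmic utility over long-term average rates, the gradient weights reduce to the proportional-fair rule. This is a standard textbook instance, not the paper's full uplink model:

```latex
U(\bar{R}) = \sum_i \log \bar{R}_i
\quad\Longrightarrow\quad
\frac{\partial U}{\partial \bar{R}_i} = \frac{1}{\bar{R}_i},
\qquad
r^\ast(t) = \arg\max_{r \in \mathcal{R}(e(t))} \sum_i \frac{r_i(t)}{\bar{R}_i(t)},
```

where \(\mathcal{R}(e(t))\) is the rate region permitted by the current channel state and the per-user resource constraints. The hard part the paper addresses is solving this per-slot maximization when \(\mathcal{R}\) involves discrete subchannel assignments.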

311 citations


Proceedings ArticleDOI
16 Oct 2009
TL;DR: This paper presents the design and implementation of an efficient scheduling algorithm to allocate virtual machines in a DVFS-enabled cluster by dynamically scaling the supplied voltages via the technique of Dynamic Voltage Frequency Scaling (DVFS).
Abstract: With the advent of Cloud computing, large-scale virtualized compute and data centers are becoming common in the computing industry. These distributed systems leverage commodity server hardware in mass quantity, similar in theory to many of the fastest supercomputers in existence today. However, these systems can consume a city's worth of power just to run idle, and require equally massive cooling systems to keep the servers within normal operating temperatures. This produces CO2 emissions and significantly contributes to the growing environmental issue of global warming. Green computing, a new trend for high-end computing, attempts to alleviate this problem by delivering both high performance and reduced power consumption, effectively maximizing total system efficiency. This paper focuses on scheduling virtual machines in a compute cluster to reduce power consumption via the technique of Dynamic Voltage Frequency Scaling (DVFS). Specifically, we present the design and implementation of an efficient scheduling algorithm to allocate virtual machines in a DVFS-enabled cluster by dynamically scaling the supplied voltages. The algorithm is studied via simulation and implementation in a multi-core cluster. Test results and performance discussion justify the design and implementation of the scheduling algorithm.
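
The scheduling intuition can be sketched as follows. The operating points and the f * V^2 power proportionality are illustrative assumptions (the classic CMOS dynamic-power relation), not the paper's measured cluster values:

```python
# Sketch: run a node's assigned VMs at the lowest (frequency, voltage)
# operating point that still covers their aggregate CPU demand, since
# dynamic power scales roughly with f * V^2.
OPERATING_POINTS = [      # (GHz, volts), illustrative values only
    (1.0, 0.9), (1.8, 1.1), (2.6, 1.3),
]

def pick_operating_point(cpu_demand_ghz):
    """Lowest frequency (hence voltage) covering the VMs' demand."""
    for freq, volt in OPERATING_POINTS:
        if freq >= cpu_demand_ghz:
            return freq, volt
    return OPERATING_POINTS[-1]   # saturated: run at the highest point

def relative_power(freq, volt):
    return freq * volt ** 2       # CMOS dynamic-power proportionality

f, v = pick_operating_point(1.5)
print(f, v, relative_power(f, v))  # 1.8 1.1 ~2.18 (vs ~4.39 at full speed)
```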

Proceedings ArticleDOI
19 Apr 2009
TL;DR: This work proposes the first scheduling algorithm with approximation guarantee independent of the topology of the network, and proves that the analysis of the algorithm is extendable to higher-dimensional Euclidean spaces, and to more realistic bounded-distortion spaces, induced by non-isotropic signal distortions.
Abstract: In this work we study the problem of determining the throughput capacity of a wireless network. We propose a scheduling algorithm to achieve this capacity within an approximation factor. Our analysis is performed in the physical interference model, where nodes are arbitrarily distributed in Euclidean space. We consider the problem separately from the routing problem and the power control problem, i.e., all requests are single-hop, and all nodes transmit at a fixed power level. Existing solutions to this problem have either concentrated on special-case topologies or presented optimality guarantees that become arbitrarily bad (linear in the number of nodes) depending on the network's topology. We propose the first scheduling algorithm with an approximation guarantee independent of the topology of the network. The algorithm has a constant approximation guarantee for the problem of maximizing the number of links scheduled in one time slot. Furthermore, we obtain an O(log n) approximation for the problem of minimizing the number of time slots needed to schedule a given set of requests. Simulation results indicate that our algorithm not only has an exponentially better approximation ratio in theory, but also achieves superior performance in various practical network scenarios. Furthermore, we prove that the analysis of the algorithm is extendable to higher-dimensional Euclidean spaces, and to more realistic bounded-distortion spaces, induced by non-isotropic signal distortions. Finally, we show that it is NP-hard to approximate the scheduling problem to within a factor of n^(1-ε), for any constant ε > 0, in the non-geometric SINR model, in which path loss is independent of the Euclidean coordinates of the nodes.
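
The physical interference (SINR) model underlying the analysis can be made concrete with a small feasibility check; the parameter values below are illustrative assumptions:

```python
# Physical interference model: a set of simultaneously scheduled links is
# feasible only if every receiver's SINR clears the threshold beta.
ALPHA, BETA, NOISE, P = 3.0, 2.0, 1e-9, 1.0  # path-loss exp., threshold,
                                             # noise power, fixed tx power

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def feasible(links):
    """links: list of (sender_xy, receiver_xy), all at fixed power P."""
    for i, (s_i, r_i) in enumerate(links):
        signal = P / dist(s_i, r_i) ** ALPHA
        interference = sum(P / dist(s_j, r_i) ** ALPHA
                           for j, (s_j, _) in enumerate(links) if j != i)
        if signal / (NOISE + interference) < BETA:
            return False
    return True

print(feasible([((0, 0), (1, 0)), ((50, 0), (51, 0))]))  # far apart: True
```

A one-slot scheduler in this model must pick a maximum-cardinality subset of links passing this check, which is what makes the problem combinatorially hard.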

Journal ArticleDOI
TL;DR: The limitations of the traditional CPU-oriented batch schedulers in handling the challenging data management problem of large-scale distributed applications are discussed; the vision for the new paradigm in data-intensive scheduling is given; and a case study is detailed on the Stork data placement scheduler.

Journal ArticleDOI
TL;DR: This work considers scheduling and resource allocation for the downlink of an OFDM-based wireless network, and gives optimal and sub-optimal algorithms for its solution.
Abstract: We consider scheduling and resource allocation for the downlink of a cellular OFDM system, with various practical considerations including integer tone allocations, different sub-channelization schemes, maximum SNR constraint per tone, and "self-noise" due to channel estimation errors and phase noise. During each time-slot a subset of users must be scheduled, and the available tones and transmission power must be allocated among them. Employing a gradient-based scheduling scheme presented in earlier papers reduces this to an optimization problem to be solved in each time-slot. Using a dual formulation, we give an optimal algorithm for this problem when multiple users can time-share each tone. We then give several low complexity heuristics that enforce integer tone allocations. Simulations are used to compare the performance of different algorithms.

Journal ArticleDOI
TL;DR: A nonlinear mathematical model to consider production scheduling and vehicle routing with time windows for perishable food products in the same framework to maximize the expected total profit of the supplier is proposed.

Journal ArticleDOI
TL;DR: Numerical results show that the proposed PF scheduler provides a superior fairness performance with a modest loss in throughput, as long as the user average SINRs are fairly uniform.
Abstract: The challenge of scheduling user transmissions on the downlink of a long term evolution (LTE) cellular communication system is addressed. A maximum-rate algorithm that does not consider fairness among users was proposed in earlier work. Here, a multiuser scheduler with proportional fairness (PF) is proposed. Numerical results show that the proposed PF scheduler provides superior fairness performance with a modest loss in throughput, as long as the users' average SINRs are fairly uniform. A suboptimal PF scheduler is also proposed, which has much lower complexity at the cost of some throughput degradation.
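
The proportional-fair rule at the heart of such schedulers is standard and compact enough to sketch; the exponential-window average update is the textbook form, and T_c and the rates below are illustrative:

```python
# Classic proportional fairness: serve the user maximizing instantaneous
# rate over average throughput, then update averages over a window T_c.
def pf_select(inst_rate, avg_rate):
    return max(range(len(inst_rate)),
               key=lambda i: inst_rate[i] / avg_rate[i])

def update_averages(avg_rate, inst_rate, served, t_c=1000.0):
    for i in range(len(avg_rate)):
        r = inst_rate[i] if i == served else 0.0
        avg_rate[i] = (1 - 1 / t_c) * avg_rate[i] + r / t_c

avg = [1.0, 1.0]
u = pf_select([2.0, 1.0], avg)   # user 0 wins this slot
update_averages(avg, [2.0, 1.0], u)
print(u, avg)  # user 0's rising average lowers its future priority
```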

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the use of binary particle swarm optimization (BPSO) to schedule a significant number of varied interruptible loads over 16 hours and achieved near-optimal solutions in manageable computational time-frames for this relatively complex, nonlinear and noncontinuous problem.
Abstract: Interruptible loads represent highly valuable demand side resources within the electricity industry. However, maximizing their potential value in terms of system security and scheduling is a considerable challenge because of their widely varying and potentially complex operational characteristics. This paper investigates the use of binary particle swarm optimization (BPSO) to schedule a significant number of varied interruptible loads over 16 h. The scheduling objective is to achieve a system requirement of total hourly curtailments while satisfying the operational constraints of the available interruptible loads, minimizing the total payment to them and minimizing the frequency of interruptions imposed upon them. This multiobjective optimization problem was simplified by using a single aggregate objective function. The BPSO algorithm proved capable of achieving near-optimal solutions in manageable computational time-frames for this relatively complex, nonlinear and noncontinuous problem. The effectiveness of the approach was further improved by dividing the swarm into several subswarms. The proposed scheduling technique demonstrated useful performance for a relatively challenging scheduling task, and would seem to offer some potential advantages in scheduling significant numbers of widely varied and technically complex interruptible loads.
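
The binary PSO update used for such 0/1 curtailment decisions follows the standard Kennedy-Eberhart form, sketched below; the fitness terms from the paper's aggregate objective (payments, interruption frequency, hourly curtailment targets) are elided:

```python
# Standard binary PSO step: each bit could mean "curtail this load in this
# hour"; velocities pass through a sigmoid to become bit probabilities.
import math, random

def bpso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, vmax=4.0):
    for j in range(len(x)):
        v[j] = (w * v[j]
                + c1 * random.random() * (pbest[j] - x[j])
                + c2 * random.random() * (gbest[j] - x[j]))
        v[j] = max(-vmax, min(vmax, v[j]))        # clamp velocity
        sig = 1.0 / (1.0 + math.exp(-v[j]))       # sigmoid -> probability
        x[j] = 1 if random.random() < sig else 0  # resample the bit

x, v = [0, 1, 0, 1], [0.0] * 4
bpso_step(x, v, pbest=[1, 1, 0, 0], gbest=[1, 0, 0, 1])
print(x, v)
```

The subswarm refinement the paper reports simply runs several such populations with their own pbest/gbest state, which reduces premature convergence on this nonlinear, noncontinuous objective.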

Proceedings ArticleDOI
19 Apr 2009
TL;DR: To the best of the authors' knowledge, the proposed algorithm is the first distributed algorithm for data aggregation scheduling; an adaptive strategy for updating the schedule when nodes fail or new nodes join a network is also proposed.
Abstract: Data aggregation is an essential operation in wireless sensor network applications. This paper focuses on the data aggregation scheduling problem. Based on maximal independent sets, a distributed algorithm to generate a collision-free schedule for data aggregation in wireless sensor networks is proposed. The time latency of the aggregation schedule generated by the proposed algorithm is minimized using a greedy strategy. The latency bound of the schedule is 24D + 6Δ + 16, where D is the network diameter and Δ is the maximum node degree. The previous data aggregation algorithm with least latency has the latency bound (Δ - 1)R, where R is the network radius. Thus in our algorithm Δ contributes to an additive factor instead of a multiplicative factor, which is a significant improvement. To the best of our knowledge, the proposed algorithm is the first distributed algorithm for data aggregation scheduling. This paper also proposes an adaptive strategy for updating the schedule when nodes fail or new nodes join a network. The analysis and simulation results show that the proposed algorithm outperforms other aggregation scheduling algorithms.
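
The maximal-independent-set building block can be sketched with a simple greedy routine, shown here as a centralized stand-in for the paper's distributed construction; MIS nodes act as the aggregation "dominators" whose transmissions the schedule then serializes:

```python
# Greedy maximal independent set over an adjacency-set graph (sketch).
def maximal_independent_set(adj):
    """adj: dict node -> set of neighbours."""
    mis, blocked = set(), set()
    for node in sorted(adj):          # any deterministic order works
        if node not in blocked:
            mis.add(node)
            blocked |= adj[node]      # neighbours can no longer join
            blocked.add(node)
    return mis

adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(maximal_independent_set(adj))   # {0, 2} on this path graph
```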

Journal ArticleDOI
TL;DR: This work proposes a Heterogeneity-Aware Signature-Supported scheduling algorithm that does the matching using per-thread architectural signatures, which are compact summaries of threads' architectural properties collected offline, and is comparatively simple and scalable.
Abstract: Future heterogeneous single-ISA multicore processors will have an edge in potential performance per watt over comparable homogeneous processors. To fully tap into that potential, the OS scheduler needs to be heterogeneity-aware, so it can match jobs to cores according to characteristics of both. We propose a Heterogeneity-Aware Signature-Supported (HASS) scheduling algorithm that does the matching using per-thread architectural signatures, which are compact summaries of threads' architectural properties collected offline. The resulting algorithm does not rely on dynamic profiling, and is comparatively simple and scalable. We implemented HASS in OpenSolaris, and achieved average workload speedups of up to 13%, matching the best static assignment, achievable only by an oracle. We have also implemented a dynamic IPC-driven algorithm proposed earlier that relies on online profiling. We found that the complexity, load imbalance and associated performance degradation resulting from dynamic profiling are significant challenges to using this algorithm successfully. As a result it failed to deliver the expected performance gains and to outperform HASS.

Proceedings ArticleDOI
18 Aug 2009
TL;DR: This paper introduces a Multiple QoS Constrained Scheduling Strategy of Multi-Workflows (MQMW), which can schedule multiple workflows that may start at any time while taking their QoS requirements into account.
Abstract: Cloud computing has gained popularity in recent times. As a cloud must provide services to many users at the same time and different users have different QoS requirements, the scheduling strategy should be developed for multiple workflows with different QoS requirements. In this paper, we introduce a Multiple QoS Constrained Scheduling Strategy of Multi-Workflows (MQMW) to address this problem. The strategy can schedule multiple workflows which are started at any time and the QoS requirements are taken into account. Experimentation shows that our strategy is able to increase the scheduling success rate significantly.

Journal ArticleDOI
TL;DR: This paper presents a genetic algorithm for the Resource Constrained Project Scheduling Problem (RCPSP) using a heuristic priority rule in which the priorities of the activities are defined by the genetic algorithm.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed adaptive multisensor scheduling scheme can achieve superior energy efficiency and tracking reliability while satisfying the tracking accuracy requirement, and is also robust to the uncertainty of the process noise.
Abstract: Due to uncertainties in target motion and limited sensing regions of sensors, single-sensor-based collaborative target tracking in wireless sensor networks (WSNs), as addressed in many previous approaches, suffers from low tracking accuracy and lack of reliability when a target cannot be detected by a scheduled sensor. Generally, actuating multiple sensors can achieve better tracking performance but with high energy consumption. Tracking accuracy, reliability, and energy consumed are affected by the sampling interval between two successive time steps. In this paper, an adaptive energy-efficient multisensor scheduling scheme is proposed for collaborative target tracking in WSNs. It calculates the optimal sampling interval to satisfy a specification on predicted tracking accuracy, selects the cluster of tasking sensors according to their joint detection probability, and designates one of the tasking sensors as the cluster head for estimation update and sensor scheduling according to a cluster head energy measure (CHEM) function. Simulation results show that, compared with existing single-sensor scheduling and multisensor scheduling with a uniform sampling interval, the proposed adaptive multisensor scheduling scheme can achieve superior energy efficiency and tracking reliability while satisfying the tracking accuracy requirement. It is also robust to the uncertainty of the process noise.

Journal ArticleDOI
TL;DR: A general method to derive schedulability conditions for multiprocessor real-time systems will be presented and the analysis will be applied to two typical scheduling algorithms: earliest deadline first (EDF) and fixed priority (FP).
Abstract: This paper addresses the schedulability problem of periodic and sporadic real-time task sets with constrained deadlines preemptively scheduled on a multiprocessor platform composed of identical processors. We assume that a global work-conserving scheduler is used and that migration from one processor to another is allowed during a task's lifetime. First, a general method to derive schedulability conditions for multiprocessor real-time systems is presented. The analysis is applied to two typical scheduling algorithms: earliest deadline first (EDF) and fixed priority (FP). Then, the derived schedulability conditions are tightened, refining the analysis with a simple and effective technique that significantly improves the percentage of accepted task sets. The effectiveness of the proposed test is shown through an extensive set of synthetic experiments.

Journal ArticleDOI
TL;DR: This work uses the technique of Lyapunov optimization to design an online flow control, scheduling, and resource allocation algorithm that meets the desired objectives and provides explicit performance guarantees.
Abstract: We develop opportunistic scheduling policies for cognitive radio networks that maximize the throughput utility of the secondary (unlicensed) users subject to maximum collision constraints with the primary (licensed) users. We consider a cognitive network with static primary users and potentially mobile secondary users. We use the technique of Lyapunov optimization to design an online flow control, scheduling, and resource allocation algorithm that meets the desired objectives and provides explicit performance guarantees.

Patent
19 Feb 2009
TL;DR: In this paper, a method for providing network access to a shared access communications medium for a plurality of users includes the steps of conducting predictive admission control by arbitrating user requests for access to the shared medium based on predicted aggregate demands.
Abstract: A method for providing network access to a shared-access communications medium for a plurality of users includes the steps of conducting predictive admission control by arbitrating user requests for access to the shared medium based on predicted aggregate demands, conducting lookahead scheduling for use in making user channel assignments by forecasting scheduled transmission opportunities on one or more channels of the shared medium, and balancing load by making channel assignments such that a plurality of users are each assigned a respective channel of the shared medium based upon predicted need. Congestion parameters can be predicted for each channel of the shared medium and mapped to a congestion measure using a mathematical function that takes into account packet loss rate, packet delay, packet delay jitter, and available capacity.

Journal ArticleDOI
TL;DR: This paper proposes new results on necessary and sufficient schedulability analysis for EDF scheduling; the new results reduce the calculation times exponentially, in all situations for schedulable task sets and in most situations for unschedulable task sets.
Abstract: Real-time scheduling is the theoretical basis of real-time systems engineering. Earliest deadline first (EDF) is an optimal scheduling algorithm for uniprocessor real-time systems. Existing results on an exact schedulability test for EDF task systems with arbitrary relative deadlines need to calculate the processor demand of the task set at every absolute deadline to check if there is an overflow in a specified time interval. The resulting large number of calculations severely restricts the use of EDF in practice. In this paper, we propose new results on necessary and sufficient schedulability analysis for EDF scheduling; the new results reduce, exponentially, the calculation times, in all situations, for schedulable task sets, and in most situations, for unschedulable task sets. For example, a 16-task system that in the previous analysis had to check 858,331 points (deadlines) can, with the new analysis, be checked at just 12 points. There are no restrictions on the new results: each task can be periodic or sporadic, with relative deadline, which can be less than, equal to, or greater than its period, and task parameters can range over many orders of magnitude.
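
The processor-demand function h(t) that both the old and new analyses build on can be stated directly. The brute-force check below is the classical exact test whose point count the paper's new results reduce; it is a sketch of the baseline, not the accelerated analysis:

```python
# Classical exact EDF test: schedulable iff h(t) <= t at every absolute
# deadline t up to a bounded horizon (horizon choice elided here).
from math import floor

def h(t, tasks):
    """tasks: list of (C, D, T) = (WCET, relative deadline, period)."""
    return sum(max(0, floor((t - d) / p) + 1) * c
               for c, d, p in tasks if t >= d)

def edf_schedulable_upto(tasks, horizon):
    deadlines = sorted({d + k * p for c, d, p in tasks
                        for k in range(int(horizon // p) + 1)
                        if d + k * p <= horizon})
    return all(h(t, tasks) <= t for t in deadlines)

tasks = [(1, 4, 4), (2, 6, 6)]   # utilization = 1/4 + 2/6 < 1
print(edf_schedulable_upto(tasks, horizon=24))  # True
```

The paper's contribution is precisely that this deadline-by-deadline enumeration (858,331 points in its 16-task example) can be collapsed to a handful of points.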

Journal ArticleDOI
TL;DR: This work considers the problem of designing distributed scheduling algorithms for wireless networks and presents two algorithms, both of which achieve throughput arbitrarily close to that of maximal schedules, but whose complexity is low due to the fact that they do not necessarily attempt to find maximal schedules.
Abstract: We consider the problem of designing distributed scheduling algorithms for wireless networks. We present two algorithms, both of which achieve throughput arbitrarily close to that of maximal schedules, but whose complexity is low due to the fact that they do not necessarily attempt to find maximal schedules. The first algorithm requires each link to collect local queue-length information in its neighborhood, and its complexity is otherwise independent of the size and topology of the network. The second algorithm, presented for the node-exclusive interference model, does not require nodes to collect queue-length information even in their local neighborhoods, and its complexity depends only on the maximum node degree in the network.

Journal ArticleDOI
TL;DR: The numerical results demonstrate that queue- and channel-aware QoS schedulers can and should be used in an LTE downlink to offer QoS to a diverse mix of traffic, including delay-sensitive flows.
Abstract: We present a design of a complete and practical scheduler for the 3GPP Long Term Evolution (LTE) downlink by integrating recent results on resource allocation, fast computational algorithms, and scheduling. Our scheduler has low computational complexity. We define the computational architecture and describe the exact computations that need to be done at each time step (1 millisecond). Our computational framework is very general, and can be used to implement a wide variety of scheduling rules. For LTE, we provide quantitative performance results for our scheduler for full-buffer traffic, streaming video (with loose delay constraints), and live video (with tight delay constraints). Simulations are performed by selectively abstracting the PHY layer, accurately modeling the MAC layer, and following established network evaluation methods. The numerical results demonstrate that queue- and channel-aware QoS schedulers can and should be used in an LTE downlink to offer QoS to a diverse mix of traffic, including delay-sensitive flows. Through these results and via theoretical analysis, we illustrate the various design tradeoffs that need to be made in the selection of a specific queue- and channel-aware scheduling policy. Moreover, the numerical results show that in many scenarios strict prioritization across traffic classes is suboptimal.
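
One widely cited queue- and channel-aware rule of the kind such a framework can implement is M-LWDF, sketched here as an example chosen by us for illustration (the paper's framework admits many scheduling rules):

```python
# M-LWDF-style selection: weight the PF metric by head-of-line delay and a
# per-class QoS factor, so delay-sensitive flows win when their queues age.
def mlwdf_select(users):
    """users: dicts with inst_rate, avg_rate, hol_delay, a (QoS weight)."""
    def metric(u):
        return u["a"] * u["hol_delay"] * u["inst_rate"] / u["avg_rate"]
    return max(range(len(users)), key=lambda i: metric(users[i]))

users = [
    {"inst_rate": 2.0, "avg_rate": 1.0, "hol_delay": 5.0, "a": 1.0},  # video
    {"inst_rate": 3.0, "avg_rate": 1.0, "hol_delay": 1.0, "a": 0.5},  # data
]
print(mlwdf_select(users))  # 0: the delay-sensitive flow wins this slot
```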

Proceedings ArticleDOI
01 Dec 2009
TL;DR: The criticality inversion problem is characterized and a new scheduling scheme called zero-slack scheduling is presented that implements an alternative protection scheme the authors refer to as asymmetric protection, which only prevents interference from lower-criticality to higher-criticality tasks and improves the schedulable utilization.
Abstract: The functional consolidation induced by cost-reduction trends in embedded systems can force tasks of different criticality (e.g. ABS brakes with a DVD player) to share a processor and interfere with each other. These systems are known as mixed-criticality systems. While traditional temporal isolation techniques prevent all inter-task interference, they waste utilization because they need to reserve for the absolute worst-case execution time (WCET) of all tasks. In many mixed-criticality systems the WCET is not only rare but at times difficult to calculate, such as the time to localize all possible objects in an obstacle avoidance algorithm. In this situation it is more appropriate to allow the execution time to grow by stealing cycles from lower-criticality tasks. Even more crucial is the fact that temporal isolation techniques can stop a high-criticality task (that was overrunning its nominal WCET) to allow a low-criticality task to run, making the former miss its deadline. We identify this as the criticality inversion problem. In this paper, we characterize the criticality inversion problem and present a new scheduling scheme called zero-slack scheduling that implements an alternative protection scheme we refer to as asymmetric protection. This protection only prevents interference from lower-criticality to higher-criticality tasks and improves the schedulable utilization. We use an offline algorithm with two parts: a zero-slack calculation algorithm and a slack analysis algorithm. The zero-slack calculation algorithm minimizes the utilization needed by a task set by reducing the time low-criticality tasks are preempted by high-criticality ones. This algorithm can be used with priority-based preemptive schedulers (e.g. RMS, EDF). The slack analysis algorithm is specific to each priority-based preemptive scheduler, and we develop and evaluate the one for RMS. We prove that this algorithm provides the same level of protection against criticality inversion as the best known priority assignment for this purpose, criticality as priority assignment (CAPA). We also prove that zero-slack RM provides the same level of schedulable utilization as RMS when all tasks have equal criticality levels. Finally, we present our implementation of the runtime enforcement mechanisms in Linux/RK to demonstrate its practicality.

Proceedings ArticleDOI
Chenhong Zhao, Shanshan Zhang, Qingfeng Liu, Jian Xie, Jicheng Hu
24 Sep 2009
TL;DR: Though GA is designed to solve combinatorial optimization problems, it is inefficient for global optimization, so this paper concludes with directions for further research on the optimized genetic algorithm.
Abstract: Task scheduling, which is an NP-complete problem, plays a key role in cloud computing systems. In this paper, we propose an optimized algorithm based on a genetic algorithm to schedule independent and divisible tasks adapting to different computation and memory requirements. We apply the algorithm in heterogeneous systems, where resources (including CPUs) are heterogeneous in computation and communication. Dynamic scheduling is also taken into consideration. Though GA is designed to solve combinatorial optimization problems, it is inefficient for global optimization, so we conclude with directions for further research on the optimized genetic algorithm.
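
A generic sketch of such a GA-based task scheduler follows (chromosome = task-to-VM assignment, fitness = makespan). The operators and rates are textbook defaults assumed for illustration, not the paper's tuned design:

```python
# Generic GA for scheduling independent tasks on heterogeneous VMs (sketch).
import random

TASK_LEN = [4, 8, 2, 6, 3]      # abstract task lengths
VM_SPEED = [1.0, 2.0]           # heterogeneous CPU speeds

def makespan(chrom):
    """chrom[i] = VM index for task i; fitness is the slowest VM's load."""
    load = [0.0] * len(VM_SPEED)
    for task, vm in enumerate(chrom):
        load[vm] += TASK_LEN[task] / VM_SPEED[vm]
    return max(load)

def evolve(pop_size=20, gens=50, pmut=0.1):
    pop = [[random.randrange(len(VM_SPEED)) for _ in TASK_LEN]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=makespan)
        elite = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(TASK_LEN))
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < pmut:          # random-reset mutation
                child[random.randrange(len(child))] = \
                    random.randrange(len(VM_SPEED))
            children.append(child)
        pop = elite + children
    return min(pop, key=makespan)

best = evolve()
print(best, makespan(best))
```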