Topic

Job scheduler

About: Job scheduler is a research topic. Over its lifetime, 5675 publications have been published on this topic, receiving 85876 citations in total. The topic is also known as: process scheduler & batch scheduler.


Papers
Proceedings ArticleDOI
23 Oct 1995
TL;DR: This paper proposes a simple model of job scheduling aimed at capturing some key aspects of energy minimization, and gives an off-line algorithm that computes, for any set of jobs, a minimum-energy schedule.
Abstract: The energy usage of computer systems is becoming an important consideration, especially for battery-operated systems. Various methods for reducing energy consumption have been investigated, both at the circuit level and at the operating systems level. In this paper, we propose a simple model of job scheduling aimed at capturing some key aspects of energy minimization. In this model, each job is to be executed between its arrival time and deadline by a single processor with variable speed, under the assumption that energy usage per unit time, P, is a convex function of the processor speed s. We give an off-line algorithm that computes, for any set of jobs, a minimum-energy schedule. We then consider some on-line algorithms and their competitive performance for the power function P(s) = s^p where p ≥ 2. It is shown that one natural heuristic, called the Average Rate heuristic, uses at most a constant times the minimum energy required. The analysis involves bounding the largest eigenvalue in matrices of a special type.
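
The Average Rate heuristic named in the abstract has a compact description: each job's required work divided by the length of its arrival-to-deadline window gives a density, and at any instant the processor runs at the sum of the densities of all jobs that are currently live (executing them, for example, in earliest-deadline-first order). The sketch below is a minimal illustration of that speed rule and of evaluating the energy cost P(s) = s^p over the resulting profile; the job encoding and function names are assumptions for the example, not the paper's code.

```python
# Minimal sketch of the Average Rate (AVR) speed rule, assuming jobs are given
# as (arrival, deadline, work) tuples. Illustrative only.

def avr_speed_profile(jobs):
    """Return a piecewise-constant speed profile [(t_start, t_end, speed), ...].

    Each job contributes the density work / (deadline - arrival) over its whole
    window; the speed at any time is the sum of the densities of live jobs.
    """
    times = sorted({t for a, d, _ in jobs for t in (a, d)})  # speed can only change here
    profile = []
    for t0, t1 in zip(times, times[1:]):
        speed = sum(w / (d - a) for a, d, w in jobs if a <= t0 and d >= t1)
        profile.append((t0, t1, speed))
    return profile

def energy(profile, p=2):
    """Energy consumed when power is P(s) = s**p with p >= 2."""
    return sum((t1 - t0) * s ** p for t0, t1, s in profile)

# Example: two overlapping jobs; AVR runs at the summed densities while both are live.
jobs = [(0.0, 10.0, 5.0), (2.0, 6.0, 4.0)]   # (arrival, deadline, work)
prof = avr_speed_profile(jobs)
print(prof)          # [(0.0, 2.0, 0.5), (2.0, 6.0, 1.5), (6.0, 10.0, 0.5)]
print(energy(prof))  # 10.5 for P(s) = s^2
```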

1,525 citations

Journal ArticleDOI
TL;DR: The main goal of this paper is to provide an up-to-date review of the state-of-the-art in this challenging area of short-term batch scheduling, with a general classification for scheduling problems of batch processes as well as for the corresponding optimization models.

781 citations

Journal ArticleDOI
Onur Mutlu, Thomas Moscibroda
01 Jun 2008
TL;DR: A parallelism-aware batch scheduler that seamlessly incorporates support for system-level thread priorities and can provide different service levels, including purely opportunistic service, to threads with different priorities, and is also simpler to implement than STFM.
Abstract: In a chip-multiprocessor (CMP) system, the DRAM system is shared among cores. In a shared DRAM system, requests from a thread can not only delay requests from other threads by causing bank/bus/row-buffer conflicts but they can also destroy other threads' DRAM-bank-level parallelism. Requests whose latencies would otherwise have been overlapped could effectively become serialized. As a result both fairness and system throughput degrade, and some threads can starve for long time periods. This paper proposes a fundamentally new approach to designing a shared DRAM controller that provides quality of service to threads, while also improving system throughput. Our parallelism-aware batch scheduler (PAR-BS) design is based on two key ideas. First, PAR-BS processes DRAM requests in batches to provide fairness and to avoid starvation of requests. Second, to optimize system throughput, PAR-BS employs a parallelism-aware DRAM scheduling policy that aims to process requests from a thread in parallel in the DRAM banks, thereby reducing the memory-related stall-time experienced by the thread. PAR-BS seamlessly incorporates support for system-level thread priorities and can provide different service levels, including purely opportunistic service, to threads with different priorities. We evaluate the design trade-offs involved in PAR-BS and compare it to four previously proposed DRAM scheduler designs on 4-, 8-, and 16-core systems. Our evaluations show that, averaged over 100 4-core workloads, PAR-BS improves fairness by 1.11X and system throughput by 8.3% compared to the best previous scheduling technique, Stall-Time Fair Memory (STFM) scheduling. Based on simple request prioritization rules, PAR-BS is also simpler to implement than STFM.
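
The two ideas in the abstract, batching to bound starvation and thread ranking to preserve a thread's bank-level parallelism, can be illustrated with a toy model. In the sketch below, a batch is the set of currently pending requests, and threads inside a batch are ranked by how few requests they have (a shortest-job-first-style rule chosen here for illustration; it is not necessarily the paper's exact ranking). The request encoding and all names are assumptions.

```python
# Toy illustration of batch scheduling with parallelism-aware thread ranking.
from collections import defaultdict

def rank_threads(batch):
    """Rank threads so the thread with the fewest requests in the batch goes first."""
    load = defaultdict(int)
    for thread, _bank in batch:
        load[thread] += 1
    return sorted(load, key=lambda t: load[t])

def schedule_batch(batch):
    """Service a batch cycle by cycle: each bank takes at most one request per cycle,
    and requests are picked in thread-rank order so one thread's requests to different
    banks are serviced in parallel, overlapping their latencies."""
    order = rank_threads(batch)
    remaining = list(batch)          # requests are (thread_id, bank_id) tuples
    timeline = []
    while remaining:
        busy_banks, issued = set(), []
        for thread in order:
            for req in remaining:
                if req[0] == thread and req[1] not in busy_banks:
                    issued.append(req)
                    busy_banks.add(req[1])
        for req in issued:
            remaining.remove(req)
        timeline.append(issued)      # the whole batch finishes before newer requests
    return timeline

# Example: thread A spreads requests over banks 0-2, thread B hammers bank 0.
pending = [("A", 0), ("A", 1), ("A", 2), ("B", 0), ("B", 0)]
for cycle, issued in enumerate(schedule_batch(pending)):
    print(cycle, issued)
```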

575 citations

Book ChapterDOI
05 Apr 1997
TL;DR: The scheduling of jobs on parallel supercomputers is becoming the subject of much research; however, there is concern about the divergence of theory and practice, so a set of standard interfaces among the components of a scheduling system is proposed.
Abstract: The scheduling of jobs on parallel supercomputers is becoming the subject of much research. However, there is concern about the divergence of theory and practice. We review theoretical research in this area, and recommendations based on recent results. This is contrasted with a proposal for standard interfaces among the components of a scheduling system that has grown from requirements in the field.

514 citations

Proceedings ArticleDOI
24 Jul 2002
TL;DR: This work develops a family of algorithms and uses simulation studies to evaluate various combinations of them; the results suggest that while it is necessary to consider the impact of replication, it is not always necessary to couple data movement and computation scheduling.
Abstract: In high-energy physics, bioinformatics, and other disciplines, we encounter applications involving numerous, loosely coupled jobs that both access and generate large data sets. So-called Data Grids seek to harness geographically distributed resources for such large-scale data-intensive problems. Yet effective scheduling in such environments is challenging, due to a need to address a variety of metrics and constraints while dealing with multiple, potentially independent sources of jobs and a large number of storage, compute, and network resources. We describe a scheduling framework that addresses these problems. Within this framework, data movement operations may be either tightly bound to job scheduling decisions or, alternatively, performed by a decoupled, asynchronous process on the basis of observed data access patterns and load. We develop a family of algorithms and use simulation studies to evaluate various combinations. Our results suggest that while it is necessary to consider the impact of replication, it is not always necessary to couple data movement and computation scheduling. Instead, these two activities can be addressed separately, thus significantly simplifying the design and implementation.
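
The decoupled alternative described above can be made concrete with a toy model: an external scheduler sends each job to a site that already holds its input data (falling back to the least-loaded site), while a separate, asynchronous process watches observed access counts and replicates popular datasets elsewhere. The sketch below is an illustration under those assumptions; the class, function names, and placement policies are invented for the example and are not the paper's specific algorithms.

```python
# Toy sketch: job scheduling decoupled from asynchronous, popularity-driven replication.
from collections import Counter

class Site:
    def __init__(self, name):
        self.name = name
        self.datasets = set()
        self.queue = 0                     # pending jobs, used as the load metric

def schedule_job(dataset, sites, access_log):
    """External scheduler: prefer a site that already holds the data, else least loaded."""
    access_log[dataset] += 1
    holders = [s for s in sites if dataset in s.datasets]
    target = min(holders or sites, key=lambda s: s.queue)
    target.queue += 1
    return target.name

def replicate(sites, access_log, top_k=1):
    """Asynchronous data placement: copy the hottest datasets to the least-loaded site."""
    for dataset, _count in access_log.most_common(top_k):
        min(sites, key=lambda s: s.queue).datasets.add(dataset)

# Example: one popular dataset initially held by a single site.
sites = [Site("siteA"), Site("siteB")]
sites[0].datasets.add("ds1")
log = Counter()
print([schedule_job("ds1", sites, log) for _ in range(3)])  # all jobs land on siteA
replicate(sites, log)                                       # ds1 copied to siteB
print(schedule_job("ds1", sites, log))                      # later jobs can use siteB
```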

504 citations


Network Information
Related Topics (5)
Scheduling (computing): 78.6K papers, 1.3M citations, 89% related
Server: 79.5K papers, 1.4M citations, 85% related
Optimization problem: 96.4K papers, 2.1M citations, 82% related
Genetic algorithm: 67.5K papers, 1.2M citations, 80% related
Network packet: 159.7K papers, 2.2M citations, 80% related
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    31
2022    89
2021    176
2020    254
2019    259
2018    281