Book ChapterDOI

Towards Convergence in Job Schedulers for Parallel Supercomputers

TLDR
It is argued that by identifying the assumptions underlying different job schedulers explicitly, it is possible to reach a level of convergence in the space of job schedulers for parallel supercomputers, for example by associating a suitable cost function with the execution of each job.
Abstract
The space of job schedulers for parallel supercomputers is rather fragmented, because different researchers tend to make different assumptions about the goals of the scheduler, the information that is available about the workload, and the operations that the scheduler may perform. We argue that by identifying these assumptions explicitly, it is possible to reach a level of convergence. For example, it is possible to unite most of the different assumptions into a common framework by associating a suitable cost function with the execution of each job. The cost function reflects knowledge about the job and the degree to which it fits the goals of the system. Given such cost functions, scheduling is done to maximize the system's profit.
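The cost-function idea from the abstract can be illustrated with a short sketch. This is a hypothetical greedy scheduler, not the framework from the paper: each `Job` carries an assumed `value` function mapping completion time to the profit the system earns, and the scheduler repeatedly starts whichever waiting job currently offers the highest value per node-second.

```python
from typing import Callable, List
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes: int                       # processors requested
    runtime: float                   # estimated runtime
    value: Callable[[float], float]  # hypothetical cost function: value of finishing at time t

def schedule_by_profit(jobs: List[Job], total_nodes: int) -> List[str]:
    """Greedy sketch: at each step, start the waiting job that fits
    and yields the highest value per node-second; otherwise advance
    time to the next completion. Jobs requesting more than
    total_nodes are assumed not to occur."""
    time, free, order = 0.0, total_nodes, []
    waiting = list(jobs)
    running = []  # list of (finish_time, nodes)
    while waiting or running:
        fits = [j for j in waiting if j.nodes <= free]
        if fits:
            best = max(fits, key=lambda j: j.value(time + j.runtime) / (j.nodes * j.runtime))
            waiting.remove(best)
            running.append((time + best.runtime, best.nodes))
            free -= best.nodes
            order.append(best.name)
        else:
            # nothing fits: jump to the earliest job completion
            running.sort()
            t, n = running.pop(0)
            time, free = t, free + n
    return order
```

With equal values, the density criterion favors small, short jobs; a site could instead encode response-time goals or user priorities in the `value` functions without changing the scheduler itself.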


Citations

Book ChapterDOI

Theory and Practice in Parallel Job Scheduling

TL;DR: The scheduling of jobs on parallel supercomputers is becoming the subject of much research; however, there is concern about the divergence of theory and practice. Standard interfaces among the components of a scheduling system are proposed.
Journal ArticleDOI

The workload on parallel supercomputers: modeling the characteristics of rigid jobs

TL;DR: This work analyzes and models job-level workloads with an emphasis on those aspects that are universal to all sites, and creates a synthetic workload based on the results of the analysis.
Book

Design and Evaluation of Job Scheduling Strategies for Grid Computing

TL;DR: Simulations were used to evaluate typical scheduling structures that occur in computational grids and FCFS proves to perform better than Backfill when using a central job-pool.
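The FCFS-versus-backfill comparison mentioned above can be sketched in simulation. The following is a minimal, assumed EASY-style backfilling variant (job names, tuple layout, and the conservative "must finish before the head job's reservation" rule are illustrative choices, not the cited work's evaluation): the head of the FCFS queue reserves nodes, and a later job may jump ahead only if it fits now and ends by that reservation time.

```python
import heapq

def easy_backfill(jobs, total_nodes):
    """jobs: list of (name, nodes, runtime) in FCFS arrival order.
    Returns a list of (name, start_time) in start order."""
    queue = list(jobs)
    running = []           # min-heap of (end_time, nodes)
    now, free, started = 0.0, total_nodes, []
    while queue:
        name, nodes, rt = queue[0]
        if nodes <= free:                      # head job starts immediately
            queue.pop(0)
            heapq.heappush(running, (now + rt, nodes))
            free -= nodes
            started.append((name, now))
            continue
        # compute the head job's reservation ("shadow") time:
        # the earliest completion after which enough nodes are free
        f, shadow = free, now
        for end, n in sorted(running):
            f += n
            shadow = end
            if f >= nodes:
                break
        # backfill: a later job may start now if it ends by the shadow time
        for i, (bn, bnodes, brt) in enumerate(queue[1:], 1):
            if bnodes <= free and now + brt <= shadow:
                queue.pop(i)
                heapq.heappush(running, (now + brt, bnodes))
                free -= bnodes
                started.append((bn, now))
                break
        else:
            # nothing can backfill: advance time to the next completion
            end, n = heapq.heappop(running)
            now, free = end, free + n
    return started
```

Under pure FCFS the one-node job below would wait behind the blocked four-node job; with backfilling it starts at time 0 because it finishes before the reservation.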
Book

Workload Modeling for Computer Systems Performance Evaluation

TL;DR: Using this book, readers will be able to analyze collected workload data and clean it if necessary, derive statistical models that include skewed marginal distributions and correlations, and consider the need for generative models and feedback from the system.
References

Operating System Concepts

TL;DR: In this article, Abraham Silberschatz and Peter Galvin discuss key concepts that are applicable to a variety of operating systems and present a large number of examples taken from common operating systems, including WindowsNT and Solaris 2.
Book

Operating System Concepts

TL;DR: This best-selling book provides a solid theoretical foundation for understanding operating systems while giving the teacher and students the flexibility to choose the implementation system.
Proceedings Article

Scheduling Techniques for Concurrent Systems.

Proceedings ArticleDOI

Scheduler activations: effective kernel support for the user-level management of parallelism

TL;DR: It is argued that the performance of kernel threads is inherently worse than that of user-level threads, rather than this being an artifact of existing implementations, and that managing parallelism at the user level is essential to high-performance parallel computing.