Journal ArticleDOI

Scheduling multithreaded computations by work stealing

TLDR
This paper gives the first provably good work-stealing scheduler for multithreaded computations with dependencies, and shows that the expected time to execute a fully strict computation on P processors using this scheduler is T1/P + O(T∞).
Abstract
This paper studies the problem of efficiently scheduling fully strict (i.e., well-structured) multithreaded computations on parallel computers. A popular and practical method of scheduling this kind of dynamic MIMD-style computation is “work stealing,” in which processors needing work steal computational threads from other processors. In this paper, we give the first provably good work-stealing scheduler for multithreaded computations with dependencies. Specifically, our analysis shows that the expected time to execute a fully strict computation on P processors using our work-stealing scheduler is T1/P + O(T∞), where T1 is the minimum serial execution time of the multithreaded computation and T∞ is the minimum execution time with an infinite number of processors. Moreover, the space required by the execution is at most S1·P, where S1 is the minimum serial space requirement. We also show that the expected total communication of the algorithm is at most O(PT∞(1 + nd)Smax), where Smax is the size of the largest activation record of any thread and nd is the maximum number of times that any thread synchronizes with its parent. This communication bound justifies the folk wisdom that work-stealing schedulers are more communication efficient than their work-sharing counterparts. All three of these bounds are existentially optimal to within a constant factor.
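
To make the scheduling discipline described above concrete, the sketch below shows one minimal way to realize it: each processor owns a deque of ready tasks, works from the bottom of its own deque, and, when that deque is empty, steals from the top of a randomly chosen victim's deque. This is an illustrative sketch only, not the paper's algorithm or analysis; all names (Worker, pop_bottom, steal_top) are invented here, and a coarse per-deque mutex stands in for the more careful synchronization a real runtime would use.

```cpp
// Illustrative work-stealing sketch: per-worker deques, random victim selection.
#include <atomic>
#include <cstdio>
#include <deque>
#include <functional>
#include <mutex>
#include <random>
#include <thread>
#include <vector>

using Task = std::function<void()>;

struct Worker {
    std::deque<Task> dq;  // ready deque: owner works at the bottom, thieves steal at the top
    std::mutex m;         // coarse lock for simplicity (a real scheduler avoids this cost)
};

static std::vector<Worker> workers;
static std::atomic<long> tasks_left{0};

static void push_bottom(int id, Task t) {
    std::lock_guard<std::mutex> g(workers[id].m);
    workers[id].dq.push_back(std::move(t));
}

static bool pop_bottom(int id, Task& t) {
    std::lock_guard<std::mutex> g(workers[id].m);
    if (workers[id].dq.empty()) return false;
    t = std::move(workers[id].dq.back());
    workers[id].dq.pop_back();
    return true;
}

static bool steal_top(int victim, Task& t) {
    std::lock_guard<std::mutex> g(workers[victim].m);
    if (workers[victim].dq.empty()) return false;
    t = std::move(workers[victim].dq.front());
    workers[victim].dq.pop_front();
    return true;
}

static void worker_loop(int id, int P) {
    std::mt19937 rng(id + 1);
    std::uniform_int_distribution<int> pick(0, P - 1);
    while (tasks_left.load() > 0) {
        Task t;
        if (pop_bottom(id, t) || steal_top(pick(rng), t)) {  // local work first, then steal
            t();
            tasks_left.fetch_sub(1);
        } else {
            std::this_thread::yield();  // nothing found; back off and try again
        }
    }
}

int main() {
    const int P = 4, N = 64;
    workers = std::vector<Worker>(P);
    tasks_left = N;
    for (int i = 0; i < N; ++i)  // seed all work on worker 0; the others obtain work only by stealing
        push_bottom(0, [i] { std::printf("task %d\n", i); });
    std::vector<std::thread> threads;
    for (int id = 0; id < P; ++id) threads.emplace_back(worker_loop, id, P);
    for (auto& th : threads) th.join();
}
```

The random choice of victim is the essential ingredient: idle processors spread their steal attempts across the machine instead of contending on a shared queue, which is what the paper's time and communication bounds formalize.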



Citations
Proceedings ArticleDOI

Cooperation vs. coordination for lifeline-based global load balancing in APGAS

TL;DR: This study compared the two approaches to lifeline-based global load balancing, the algorithm used by X10's Global Load Balancing framework GLB, using the APGAS library for Java, to which GLB was ported as a first step.
Proceedings ArticleDOI

Dag-calculus: a calculus for parallel computation

TL;DR: This paper proposes a calculus, called dag calculus, that can encode fork-join, async-finish, futures, and possibly other paradigms; it describes dag calculus and its semantics, and establishes translations from these paradigms into dag calculus.
Proceedings ArticleDOI

A fault tolerant self-scheduling scheme for parallel loops on shared memory systems

TL;DR: FTSS, a fault-tolerant self-scheduling scheme, is presented; it aims to execute parallel loops efficiently in the presence of hardware faults on shared-memory systems and greatly outperforms existing self-scheduling schemes in terms of performance and stability in heavily loaded runtime environments.
Proceedings ArticleDOI

Balancing Graph Processing Workloads Using Work Stealing on Heterogeneous CPU-FPGA Systems

TL;DR: The use of HWS results in better graph processing performance than both static scheduling and a representative existing adaptive partitioning technique, called HAP: the improvement can be up to 100% over static scheduling and up to 17% over HAP.
Journal ArticleDOI

Provably space-efficient parallel functional programming

TL;DR: In this paper, the authors present space-efficient memory management techniques for determinacy-race-free functional parallel programs, covering both pure and imperative programs in which memory may be destructively updated, and prove that for a program with sequential live memory of R*, any P-processor garbage-collected parallel run requires at most O(R* · P) memory.
References
Journal ArticleDOI

Cilk: An Efficient Multithreaded Runtime System

TL;DR: It is shown that on real and synthetic applications, the “work” and “critical-path length” of a Cilk computation can be used to model performance accurately, and it is proved that for the class of “fully strict” (well-structured) programs, the Cilk scheduler achieves space, time, and communication bounds all within a constant factor of optimal.
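
As a rough illustration of the kind of model referred to here (the constant below is fit empirically in the Cilk work; its value is not quoted from the paper), running time on P processors can be approximated from the work T1 and critical-path length T∞ as T_P ≈ T1/P + c∞ · T∞. A program with ample parallelism (T1/T∞ much larger than P) is therefore predicted to speed up nearly linearly, while the critical-path term dominates once parallelism is scarce.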
Journal ArticleDOI

Bounds for certain multiprocessing anomalies

TL;DR: In this paper, precise bounds are derived for several scheduling anomalies in a multiprocessing system composed of many identical processing units operating in parallel; notably, an increase in the number of processing units can increase the total length of time needed to process a fixed set of tasks.
Proceedings ArticleDOI

The implementation of the Cilk-5 multithreaded language

TL;DR: Cilk-5's novel "two-clone" compilation strategy and its Dijkstra-like mutual-exclusion protocol for implementing the ready deque in the work-stealing scheduler are presented.
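
For intuition about the protocol named here, the sketch below is a simplified, illustrative variant in the spirit of that design rather than the Cilk-5 implementation: the owner manipulates the bottom of its ready deque without taking the lock in the common case, thieves always take the lock, and the owner falls back to the lock only when the deque may be nearly empty. Cilk-5's exception mechanism, deque resizing, and exact fence placement are omitted; sequentially consistent atomics stand in for explicit fences, and all identifiers are invented for this sketch.

```cpp
// Simplified work-stealing ready deque in the spirit of a THE-style protocol
// (illustrative only; the real Cilk-5 deque also uses an "exception" pointer,
// resizes, and places memory fences explicitly).
#include <atomic>
#include <cassert>
#include <cstdio>
#include <mutex>

struct ReadyDeque {
    static const int CAP = 1024;
    int tasks[CAP];                 // task payloads; ints stand in for closures
    std::atomic<long> head{0};      // thieves steal here (top)
    std::atomic<long> tail{0};      // the owner pushes/pops here (bottom)
    std::mutex lock;                // taken by thieves, and by the owner only on conflict

    void push_bottom(int t) {       // owner only
        long b = tail.load();
        assert(b < CAP);            // resizing omitted in this sketch
        tasks[b] = t;
        tail.store(b + 1);          // publish the task (seq_cst acts as the fence)
    }

    bool pop_bottom(int* out) {     // owner only
        tail.fetch_sub(1);
        if (head.load() > tail.load()) {      // deque may have been emptied by a thief
            tail.fetch_add(1);
            std::lock_guard<std::mutex> g(lock);
            tail.fetch_sub(1);
            if (head.load() > tail.load()) {  // definitely empty: restore and fail
                tail.fetch_add(1);
                return false;
            }
        }
        *out = tasks[tail.load()];
        return true;
    }

    bool steal_top(int* out) {      // any thief
        std::lock_guard<std::mutex> g(lock);
        head.fetch_add(1);
        if (head.load() > tail.load()) {      // nothing to steal: undo and fail
            head.fetch_sub(1);
            return false;
        }
        *out = tasks[head.load() - 1];
        return true;
    }
};

int main() {                        // single-threaded smoke test of the operations
    ReadyDeque d;
    d.push_bottom(1);
    d.push_bottom(2);
    int t;
    if (d.steal_top(&t))  std::printf("stolen: %d\n", t);   // takes task 1 from the top
    if (d.pop_bottom(&t)) std::printf("popped: %d\n", t);    // takes task 2 from the bottom
    std::printf("empty pop fails: %d\n", d.pop_bottom(&t) ? 0 : 1);
}
```

The point of such a design is that the common case, the owner pushing and popping its own work, pays no locking cost; synchronization is needed only when an owner and a thief might contend for the last remaining task.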
Journal ArticleDOI

The Parallel Evaluation of General Arithmetic Expressions

TL;DR: It is shown that arithmetic expressions with n ≥ 1 variables and constants; operations of addition, multiplication, and division; and any depth of parenthesis nesting can be evaluated in time 4 log2 n + 10(n − 1)/p using p processors which can independently perform arithmetic operations in unit time.
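
For a sense of scale (the numbers are chosen here for illustration and are not from the paper): with n = 1024 operands and p = 64 processors, the bound gives roughly 4 log2 1024 + 10(1024 − 1)/64 = 40 + 159.8… ≈ 200 parallel steps, compared with the at least n − 1 = 1023 operations that a purely serial evaluation must perform.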