Journal ArticleDOI
Scheduling multithreaded computations by work stealing
TL;DR: This paper gives the first provably good work-stealing scheduler for multithreaded computations with dependencies, and shows that the expected time to execute a fully strict computation on P processors using this scheduler is T1/P + O(T∞).
Abstract: This paper studies the problem of efficiently scheduling fully strict (i.e., well-structured) multithreaded computations on parallel computers. A popular and practical method of scheduling this kind of dynamic MIMD-style computation is “work stealing,” in which processors needing work steal computational threads from other processors. In this paper, we give the first provably good work-stealing scheduler for multithreaded computations with dependencies. Specifically, our analysis shows that the expected time to execute a fully strict computation on P processors using our work-stealing scheduler is T1/P + O(T∞), where T1 is the minimum serial execution time of the multithreaded computation and T∞ is the minimum execution time with an infinite number of processors. Moreover, the space required by the execution is at most S1P, where S1 is the minimum serial space requirement. We also show that the expected total communication of the algorithm is at most O(PT∞(1 + nd)Smax), where Smax is the size of the largest activation record of any thread and nd is the maximum number of times that any thread synchronizes with its parent. This communication bound justifies the folk wisdom that work-stealing schedulers are more communication-efficient than their work-sharing counterparts. All three of these bounds are existentially optimal to within a constant factor.
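The scheduling discipline the abstract describes can be sketched in a few lines: each processor keeps its ready threads in a double-ended queue, works on its own deque LIFO, and, when it runs out, steals FIFO from a randomly chosen victim. The following single-threaded Python sketch illustrates only that control structure; the names `Worker` and `run`, the lock-per-deque design, and the give-up-after-one-failed-steal termination are illustrative assumptions, not details from the paper.

```python
import collections
import random
import threading

class Worker:
    """One processor with its own double-ended ready queue (illustrative)."""
    def __init__(self):
        self.deque = collections.deque()
        self.lock = threading.Lock()   # simplification: the paper's deque is lock-free in spirit

    def push(self, task):
        # Owner pushes newly spawned work onto the bottom of its own deque.
        with self.lock:
            self.deque.append(task)

    def pop(self):
        # Owner takes its next task from the bottom (LIFO order).
        with self.lock:
            return self.deque.pop() if self.deque else None

    def steal(self):
        # A thief removes work from the opposite end, the top (FIFO order).
        with self.lock:
            return self.deque.popleft() if self.deque else None

def run(workers, me):
    """Work loop for worker `me`: run own tasks; when empty, try one random steal.

    A task is modeled as a callable that returns the list of subtasks it spawns.
    Real schedulers keep retrying steals; giving up after one failure is a
    simplification so the sketch terminates.
    """
    while True:
        task = workers[me].pop()
        if task is None:
            victim = random.choice([w for i, w in enumerate(workers) if i != me])
            task = victim.steal()
        if task is None:
            return
        for child in task():
            workers[me].push(child)
```

The LIFO-own/FIFO-steal asymmetry is what drives the communication bound quoted above: thieves take the oldest (typically largest) pieces of work, so steals are rare relative to local pops.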
Citations
Proceedings ArticleDOI
An Extended Work-Stealing Framework for Mixed-Mode Parallel Applications
TL;DR: This paper presents a shared-memory programming framework that allows tasks to dynamically spawn subtasks with a given degree of parallelism for implementing tightly coupled parallel parts of the algorithm, and presents a new algorithm for work-stealing with deterministic team-building.
Posted Content
Extending the Nested Parallel Model to the Nested Dataflow Model with Provably Efficient Schedulers
TL;DR: It is shown that the algorithms in this paper have increased "parallelizability" in the ND model, and that SB schedulers can use the extra parallelizability to achieve asymptotically optimal bounds on cache misses and running time on a greater number of processors than in the NP model.
Proceedings ArticleDOI
An efficient hybrid synchronization technique for scalable multi-core instruction set simulations
TL;DR: An effective hybrid technique is proposed that combines the advantage of the two approaches of conventional polling and collaborative timing synchronization and effectively resolves the scalability issue.
Journal ArticleDOI
Concurrency Analysis in Dynamic Dataflow Graphs
TL;DR: This paper presents techniques for performing concurrency analysis on generic dynamic dataflow graphs, even in the presence of cycles. It provides a set of theoretical tools for obtaining bounds, and illustrates an implementation of a parallel dataflow runtime on a set of representative graphs from important classes of benchmarks, comparing measured performance against the derived bounds.
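The bounds that concurrency analysis produces are, at their simplest, the work T1 (total cost of all nodes) and the span T∞ (cost of the longest dependency chain) of the task graph. A minimal sketch of that computation, restricted to acyclic graphs (the cyclic case handled by the paper is harder), using hypothetical names `work_and_span`, `graph`, and `cost`:

```python
import functools

def work_and_span(graph, cost):
    """Compute (T1, Tinf) for an acyclic task graph.

    graph: node -> list of successor nodes (edges are dependencies)
    cost:  node -> execution cost of that node
    T1 is the total work; Tinf is the critical-path (longest-chain) cost.
    """
    @functools.lru_cache(maxsize=None)
    def span_from(node):
        # Longest-path cost starting at `node`, memoized over the DAG.
        succs = graph.get(node, [])
        return cost[node] + (max(map(span_from, succs)) if succs else 0)

    work = sum(cost.values())
    # Roots are nodes that no edge points to; the span starts from one of them.
    successors = {s for succs in graph.values() for s in succs}
    roots = set(cost) - successors
    span = max(span_from(r) for r in roots)
    return work, span
```

For a diamond-shaped graph of four unit-cost tasks, this yields work 4 and span 3, so no schedule on any number of processors can finish in fewer than 3 steps.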
Book ChapterDOI
A Taxonomy of Task-Based Technologies for High-Performance Computing
Peter Thoman, Khalid Hasanov, Kiril Dichev, Roman Iakymchuk, Xavier Aguilar, Philipp Gschwandtner, Pierre Lemarinier, Stefano Markidis, Herbert Jordan, Erwin Laure, Kostas Katrinis, Dimitrios S. Nikolopoulos, Thomas Fahringer +12 more
TL;DR: Despite the fact that dozens of different task-based systems exist today and are actively used for parallel and high-performance computing, no comprehensive overview or classification of task-based technologies for HPC exists.
References
Journal ArticleDOI
Cilk: An Efficient Multithreaded Runtime System
Robert D. Blumofe, Christopher F. Joerg, Bradley C. Kuszmaul, Charles E. Leiserson, Keith H. Randall, Yuli Zhou +5 more
TL;DR: It is shown that on real and synthetic applications, the “work” and “critical-path length” of a Cilk computation can be used to model performance accurately, and it is proved that for the class of “fully strict” (well-structured) programs, the Cilk scheduler achieves space, time, and communication bounds all within a constant factor of optimal.
Journal ArticleDOI
Bounds for certain multiprocessing anomalies
TL;DR: In this paper, precise bounds are derived for several scheduling anomalies in a multiprocessing system composed of many identical processing units operating in parallel; the bounds show that an increase in the number of processing units can cause an increase in the total time needed to process a fixed set of tasks.
Proceedings ArticleDOI
The implementation of the Cilk-5 multithreaded language
TL;DR: Cilk-5's novel "two-clone" compilation strategy and its Dijkstra-like mutual-exclusion protocol for implementing the ready deque in the work-stealing scheduler are presented.
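The "ready deque" that the Cilk-5 paper's mutual-exclusion protocol protects can be pictured as two indices into an array: a tail T where the owning worker pushes and pops, and a head H where thieves steal. The sketch below is a simplified, lock-assisted rendering of that idea, not the actual protocol: in real Cilk-5 the owner avoids the lock entirely in the common case, and the class name `ReadyDeque` and the fixed array size are illustrative assumptions.

```python
import threading

class ReadyDeque:
    """Simplified sketch of a Cilk-style ready deque (illustrative only).

    The owner pushes/pops at the tail T without locking; thieves advance the
    head H under a lock. The lock is also taken by the owner, but only when a
    pop detects a possible conflict with a concurrent steal.
    """
    def __init__(self, capacity=1024):
        self.tasks = [None] * capacity
        self.H = 0                       # head: thieves steal here
        self.T = 0                       # tail: owner pushes/pops here
        self.lock = threading.Lock()

    def push(self, task):                # owner only, no lock needed
        self.tasks[self.T] = task
        self.T += 1

    def pop(self):                       # owner only
        self.T -= 1
        if self.H > self.T:              # a thief may have taken the last task
            self.T += 1
            with self.lock:              # resolve the conflict under the lock
                self.T -= 1
                if self.H > self.T:      # the thief won: deque is empty
                    self.T = self.H
                    return None
        return self.tasks[self.T]

    def steal(self):                     # thief, always under the lock
        with self.lock:
            self.H += 1
            if self.H > self.T:          # deque was empty, or the owner won
                self.H -= 1
                return None
            return self.tasks[self.H - 1]
```

The design choice this illustrates is the asymmetry of the protocol: the frequent operations (the owner's push and pop) stay cheap, while the expensive synchronization is pushed onto the rare steal path.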
Journal ArticleDOI
The Parallel Evaluation of General Arithmetic Expressions
TL;DR: It is shown that arithmetic expressions with n ≥ 1 variables and constants; operations of addition, multiplication, and division; and any depth of parenthesis nesting can be evaluated in time 4 log₂ n + 10(n − 1)/p using p processors that can independently perform arithmetic operations in unit time.