
Showing papers by "Charles E. Leiserson published in 2015"


Journal ArticleDOI
08 Sep 2015
TL;DR: Describes a provably efficient scheduling algorithm, the Piper algorithm, which integrates pipeline parallelism into a work-stealing scheduler, allows pipeline and fork-join parallelism to be arbitrarily nested, and automatically throttles the parallelism, precluding “runaway” pipelines.
Abstract: Pipeline parallelism organizes a parallel program as a linear sequence of stages. Each stage processes elements of a data stream, passing each processed data element to the next stage, and then taking on a new element before the subsequent stages have necessarily completed their processing. Pipeline parallelism is used especially in streaming applications that perform video, audio, and digital signal processing. Three out of 13 benchmarks in PARSEC, a popular software benchmark suite designed for shared-memory multiprocessors, can be expressed as pipeline parallelism. Whereas most concurrency platforms that support pipeline parallelism use a “construct-and-run” approach, this article investigates “on-the-fly” pipeline parallelism, where the structure of the pipeline emerges as the program executes rather than being specified a priori. On-the-fly pipeline parallelism allows the number of stages to vary from iteration to iteration and dependencies to be data dependent. We propose simple linguistics for specifying on-the-fly pipeline parallelism and describe a provably efficient scheduling algorithm, the Piper algorithm, which integrates pipeline parallelism into a work-stealing scheduler, allowing pipeline and fork-join parallelism to be arbitrarily nested. The Piper algorithm automatically throttles the parallelism, precluding “runaway” pipelines. Given a pipeline computation with T1 work and T∞ span (critical-path length), Piper executes the computation on P processors in TP ≤ T1/P + O(T∞ + lg P) expected time. Piper also limits stack space, ensuring that it does not grow unboundedly with running time. We have incorporated on-the-fly pipeline parallelism into a Cilk-based work-stealing runtime system. Our prototype Cilk-P implementation exploits optimizations such as “lazy enabling” and “dependency folding.” We have ported the three PARSEC benchmarks that exhibit pipeline parallelism to run on Cilk-P. One of these, x264, cannot readily be executed by systems that support only construct-and-run pipeline parallelism. Benchmark results indicate that Cilk-P has low serial overhead and good scalability. On x264, for example, Cilk-P exhibits a speedup of 13.87 over its serial counterpart when running on 16 processors.
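
The on-the-fly model is easiest to see in miniature. The sketch below is plain C++ with threads, not Cilk-P's linguistics or runtime: it illustrates the two ideas the abstract highlights, stage-to-stage handoff of stream elements (stage 2 starts on early elements while stage 1 is still producing later ones) and a throttle that precludes a runaway pipeline. The BoundedQueue type and the two-stage structure are illustrative assumptions, not the paper's constructs.

```cpp
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

// A bounded queue: the capacity is what throttles the pipeline, so the
// producer stage cannot race arbitrarily far ahead of the consumer
// (the "runaway pipeline" problem that Piper precludes automatically).
template <typename T>
class BoundedQueue {
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable not_full_, not_empty_;
    size_t cap_;
public:
    explicit BoundedQueue(size_t cap) : cap_(cap) {}
    void push(T v) {
        std::unique_lock<std::mutex> lk(m_);
        not_full_.wait(lk, [&] { return q_.size() < cap_; });
        q_.push(std::move(v));
        not_empty_.notify_one();
    }
    T pop() {
        std::unique_lock<std::mutex> lk(m_);
        not_empty_.wait(lk, [&] { return !q_.empty(); });
        T v = std::move(q_.front());
        q_.pop();
        not_full_.notify_one();
        return v;
    }
};

int main() {
    BoundedQueue<int> stage1_out(4);  // throttle window of 4 iterations

    // Stage 1: processes each element and hands it off before the
    // later stage has necessarily finished with earlier elements.
    std::thread stage1([&] {
        for (int i = 0; i < 16; i++)
            stage1_out.push(i * i);  // stand-in for real stage work
        stage1_out.push(-1);         // end-of-stream sentinel
    });

    // Stage 2: consumes elements in order, overlapping with stage 1.
    for (int v; (v = stage1_out.pop()) != -1; )
        std::printf("%d\n", v);

    stage1.join();
    return 0;
}
```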

56 citations


Proceedings ArticleDOI
13 Jun 2015
TL;DR: Cilkprof is a scalability profiler for multithreaded Cilk computations that collects work (serial running time) and span (critical-path length) data for each call site in the computation to assess how much each call site contributes to the overall work and span.
Abstract: Cilkprof is a scalability profiler for multithreaded Cilk computations. Unlike its predecessor Cilkview, which analyzes only the whole-program scalability of a Cilk computation, Cilkprof collects work (serial running time) and span (critical-path length) data for each call site in the computation to assess how much each call site contributes to the overall work and span. Profiling work and span in this way enables a programmer to quickly diagnose scalability bottlenecks in a Cilk program. Despite the detail and quantity of information required to collect these measurements, Cilkprof runs with only constant asymptotic slowdown over the serial running time of the parallel computation. As an example of Cilkprof's usefulness, we used Cilkprof to diagnose a scalability bottleneck in an 1800-line parallel breadth-first search (PBFS) code. By examining Cilkprof's output in tandem with the source code, we were able to zero in on a call site within the PBFS routine that imposed a scalability bottleneck. A minor code modification then improved the parallelism of PBFS by a factor of 5. Using Cilkprof, it took us less than two hours to find and fix a scalability bug which had, until then, eluded us for months. This paper describes the Cilkprof algorithm and proves theoretically using an amortization argument that Cilkprof incurs only constant overhead compared with the application's native serial running time. Cilkprof was implemented by compiler instrumentation, that is, by modifying the LLVM compiler to insert instrumentation into user programs. On a suite of 16 application benchmarks, Cilkprof incurs a geometric-mean multiplicative overhead of only 1.9 and a maximum multiplicative overhead of only 7.4 compared with running the benchmarks without instrumentation.
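
As a concrete illustration of why per-call-site work and span are useful, the sketch below computes the parallelism (work divided by span) that such a profile exposes for each call site: a site with low parallelism is a scalability bottleneck regardless of how small its work is. The CallSite record, the names, and the numbers are made-up illustrations, not Cilkprof's actual output format.

```cpp
#include <cstdio>

// Per-call-site work/span record, as a Cilkprof-style profile would
// attribute it. The numbers below are invented for illustration.
struct CallSite {
    const char* name;
    double work;  // serial running time attributed to the call site
    double span;  // critical-path length attributed to the call site
};

int main() {
    CallSite sites[] = {
        {"pbfs_layer",  12.0, 6.0},   // parallelism 2: a bottleneck
        {"relax_edges", 40.0, 0.5},   // parallelism 80: scales well
    };
    for (const CallSite& s : sites) {
        // Parallelism = work / span bounds the speedup attainable on
        // the portion of the computation attributed to this site.
        std::printf("%-12s work=%5.1f span=%4.1f parallelism=%5.1f\n",
                    s.name, s.work, s.span, s.work / s.span);
    }
    return 0;
}
```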

34 citations


Journal ArticleDOI
16 Jul 2015 - Nature
TL;DR: To drive discovery, scientists heading up research teams large and small need to learn how people operate, argue Charles E. Leiserson and Chuck McVinney.
Abstract: To drive discovery, scientists heading up research teams large and small need to learn how people operate, argue Charles E. Leiserson and Chuck McVinney.

19 citations


01 Dec 2015
TL;DR: In this paper, a deterministic contention-management algorithm for guaranteeing the forward progress of transactions is presented, which is suitable for both hardware and software transactional-memory systems and can be used as a locking protocol for implementing transactions by hand.
Abstract: This paper describes a remarkably simple deterministic (not probabilistic) contention-management algorithm for guaranteeing the forward progress of transactions, avoiding deadlocks, livelocks, and other anomalies. The transactions must be finite (no infinite loops), but on each restart, a transaction may access different shared-memory locations. The algorithm supports irrevocable transactions as long as the transaction satisfies a simple ordering constraint. In particular, a transaction that accesses only one shared-memory location is never aborted. The algorithm is suitable for both hardware and software transactional-memory systems. It can also be used in some contexts as a locking protocol for implementing transactions “by hand.”

Highlights:
- A remarkably simple algorithm can guarantee the forward progress of transactions.
- The algorithm supports irrevocable transactions.
- The algorithm is suitable for hardware or software transactional-memory systems.
- The algorithm can be used as a locking protocol.
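
The paper's algorithm handles transactions whose memory footprint may change across restarts; as a simpler point of comparison, the sketch below shows the classic ordered-locking idea behind deadlock-free “by hand” transactions, assuming (unlike the paper's setting) that a transaction's footprint is known up front. All names here are illustrative.

```cpp
#include <algorithm>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

// A minimal hand-rolled "transaction": acquire the locks guarding all
// locations the transaction touches, in a fixed global order (here, by
// array index). One global order rules out deadlock: a cycle of
// waiting transactions would require some transaction to wait for a
// lower-ordered lock while holding a higher-ordered one.
constexpr int NLOCKS = 64;
std::mutex locks[NLOCKS];
int shared_data[NLOCKS];

void transaction(std::vector<int> indices) {
    std::sort(indices.begin(), indices.end());           // global order
    indices.erase(std::unique(indices.begin(), indices.end()),
                  indices.end());                         // no self-deadlock
    for (int i : indices) locks[i].lock();                // growing phase
    for (int i : indices) shared_data[i]++;               // critical work
    for (auto it = indices.rbegin(); it != indices.rend(); ++it)
        locks[*it].unlock();                              // release all
}

int main() {
    // Two transactions with overlapping footprints run concurrently.
    std::thread t1(transaction, std::vector<int>{3, 7, 1});
    std::thread t2(transaction, std::vector<int>{7, 1, 9});
    t1.join(); t2.join();
    std::printf("data[1]=%d data[7]=%d\n", shared_data[1], shared_data[7]);
    return 0;
}
```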

3 citations


01 Oct 2015
TL;DR: This paper investigates a variant of the work-stealing algorithm, the localized work-stealing algorithm, and shows that under the “even distribution of free agents assumption” the expected running time of the algorithm is T1/P + O(T∞ lg P); it also obtains another running-time bound based on ratios between the sizes of serial tasks in the computation.
Abstract: This paper investigates a variant of the work-stealing algorithm that we call the localized work-stealing algorithm. The intuition behind this variant is that, because of locality, processors can benefit from working on their own work. Consequently, when a processor is free, it makes a steal attempt to get back its own work. We call this type of steal a steal-back. We show that the expected running time of the algorithm is T1/P + O(T∞P), and that under the “even distribution of free agents assumption”, the expected running time of the algorithm is T1/P + O(T∞ lg P). In addition, we obtain another running-time bound based on ratios between the sizes of serial tasks in the computation. If M denotes the maximum ratio between the largest and the smallest serial tasks of a processor after removing a total of O(P) serial tasks across all processors from consideration, then the expected running time of the algorithm is T1/P + O(T∞M).
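
To make the steal-back idea concrete, here is a single-threaded C++ sketch of just the steal decision, under illustrative assumptions (per-task owner tags and a global view of all deques): a free worker first tries to steal back a task it owns, and only then falls back to an ordinary random steal. The real algorithm runs concurrently and has subtleties, such as which end of a deque a thief takes from, that this sketch ignores.

```cpp
#include <cstdio>
#include <deque>
#include <random>

// Single-threaded simulation of the steal decision in localized work
// stealing. Each task remembers its original owner.
struct Task { int owner; };

constexpr int P = 4;
std::deque<Task> deques[P];
std::mt19937 rng(42);

// Steal-back: scan other workers' deques for a task we own.
bool steal_back(int self) {
    for (int v = 0; v < P; v++) {
        if (v == self) continue;
        for (auto it = deques[v].begin(); it != deques[v].end(); ++it) {
            if (it->owner == self) {        // found our own work
                deques[self].push_back(*it);
                deques[v].erase(it);
                return true;
            }
        }
    }
    return false;
}

// Ordinary work stealing: pick a random victim; the attempt may fail.
bool random_steal(int self) {
    int v = std::uniform_int_distribution<int>(0, P - 1)(rng);
    if (v == self || deques[v].empty()) return false;
    deques[self].push_back(deques[v].front());
    deques[v].pop_front();
    return true;
}

int main() {
    // Worker 1 holds a task owned by worker 0, plus one of its own.
    deques[1] = {{0}, {1}};
    int self = 0;                           // worker 0 is free
    if (!steal_back(self)) random_steal(self);
    std::printf("worker 0 now holds %zu task(s)\n", deques[0].size());
    return 0;
}
```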

Journal Article
TL;DR: In this article, the authors analyze the setting of work stealing in multithreaded computations and obtain tight upper bounds on the number of steals when the computation can be modeled by rooted trees. In particular, they show that if the computation with n processors starts with one processor having a complete k-ary tree of height h (and the remaining n − 1 processors having nothing), the maximum possible number of steals is $\sum_{i=1}^{n}(k-1)^{i}\binom{h}{i}$.
Abstract: Inspired by applications in parallel computing, we analyze the setting of work stealing in multithreaded computations. We obtain tight upper bounds on the number of steals when the computation can be modeled by rooted trees. In particular, we show that if the computation with n processors starts with one processor having a complete k-ary tree of height h (and the remaining n − 1 processors having nothing), the maximum possible number of steals is $\sum_{i=1}^{n}(k-1)^{i}\binom{h}{i}$.
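
The closed form is easy to evaluate directly. The small C++ sketch below computes the bound $\sum_{i=1}^{n}(k-1)^{i}\binom{h}{i}$; the parameter values in main are arbitrary examples, not taken from the paper.

```cpp
#include <cstdio>

// Binomial coefficient C(h, i) via the multiplicative formula; each
// intermediate product divides evenly, so the arithmetic stays exact.
unsigned long long choose(unsigned h, unsigned i) {
    if (i > h) return 0;
    unsigned long long r = 1;
    for (unsigned j = 1; j <= i; j++)
        r = r * (h - j + 1) / j;
    return r;
}

// Upper bound on steals: sum_{i=1}^{n} (k-1)^i * C(h, i) for a complete
// k-ary tree of height h initially held by one of n processors.
// (Overflows silently for large parameters; fine for a sketch.)
unsigned long long max_steals(unsigned n, unsigned k, unsigned h) {
    unsigned long long total = 0, pw = 1;
    for (unsigned i = 1; i <= n; i++) {
        pw *= (k - 1);                  // (k-1)^i
        total += pw * choose(h, i);
    }
    return total;
}

int main() {
    // e.g., n = 4 processors, binary tree (k = 2) of height 10:
    // 1*C(10,1) + 1*C(10,2) + 1*C(10,3) + 1*C(10,4) = 385
    std::printf("%llu\n", max_steals(4, 2, 10));
    return 0;
}
```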