
Showing papers by "Charles E. Leiserson published in 2004"


Proceedings ArticleDOI
27 Jun 2004
TL;DR: Two algorithms are provided, one serial and one parallel, to maintain series-parallel (SP) relationships "on the fly" for fork-join multithreaded programs; SP-order employs an order-maintenance data structure that enables a more efficient "English-Hebrew" labeling scheme than was used in earlier race detectors.
Abstract: A key capability of data-race detectors is to determine whether one thread executes logically in parallel with another or whether the threads must operate in series. This paper provides two algorithms, one serial and one parallel, to maintain series-parallel (SP) relationships "on the fly" for fork-join multithreaded programs. The serial SP-order algorithm runs in O(1) amortized time per operation. In contrast, the previously best algorithm requires a time per operation that is proportional to Tarjan's functional inverse of Ackermann's function. SP-order employs an order-maintenance data structure that allows us to implement a more efficient "English-Hebrew" labeling scheme than was used in earlier race detectors, which immediately yields an improved determinacy-race detector. In particular, any fork-join program running in T1 time on a single processor can be checked on the fly for determinacy races in O(T1) time. Corresponding improved bounds can also be obtained for more sophisticated data-race detectors, for example, those that use locks. By combining SP-order with Feng and Leiserson's serial SP-bags algorithm, we obtain a parallel SP-maintenance algorithm, called SP-hybrid. Suppose that a fork-join program has n threads, T1 work, and a critical-path length of T∞. When executed on P processors, we prove that SP-hybrid runs in O((T1/P + PT∞) lg n) expected time.

70 citations


Proceedings ArticleDOI
26 Apr 2004
TL;DR: This work analyzes randomized backoff strategies under worst-case assumptions on the inputs, addressing which backoff algorithms perform best in the worst case or on inputs, such as bursty inputs, that are not covered by statistical models.
Abstract: Summary form only given. Backoff strategies have typically been analyzed by making statistical assumptions on the distribution of problem inputs. Although these analyses have provided valuable insights into the efficacy of various backoff strategies, they leave open the question as to which backoff algorithms perform best in the worst case or on inputs, such as bursty inputs, that are not covered by the statistical models. We analyze randomized backoff strategies using worst-case assumptions on the inputs.
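For background (an illustration, not code from the paper), the binary exponential backoff strategy that such analyses typically examine can be sketched as follows; the max_rounds cutoff and the toy channel model are assumptions made for the example:

```python
import random

def exponential_backoff(try_send, max_rounds=16, rng=random):
    """Binary exponential backoff: after the i-th failed attempt, delay a
    uniformly random number of slots drawn from [0, 2**(i+1) - 1], so the
    contention window doubles on every collision."""
    slots_waited = 0
    for i in range(max_rounds):
        if try_send():
            return slots_waited          # success: report total delay
        slots_waited += rng.randrange(2 ** (i + 1))
    return None                          # gave up after max_rounds attempts

# Toy usage: a channel that is busy for the first three attempts.
attempts = iter([False, False, False, True])
delay = exponential_backoff(lambda: next(attempts), rng=random.Random(0))
```

A worst-case analysis, in contrast to the statistical one, asks how such a randomized strategy behaves when an adversary chooses the pattern of arrivals, e.g. bursty inputs.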

6 citations


01 Jan 2004
TL;DR: The Dagstuhl Seminar 04301 ``Cache-Oblivious and Cache-Aware Algorithms'' was held; several participants presented their current research, and ongoing work and open problems were discussed.
Abstract: The Dagstuhl Seminar 04301 ``Cache-Oblivious and Cache-Aware Algorithms'' was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl, from 18.07.2004 to 23.07.2004. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

5 citations


Book ChapterDOI
08 Jul 2004
TL;DR: Cilk minimally extends the C programming language to allow interactions among computational threads to be specified in a simple and high-level fashion, and dynamically maps a user’s program onto available physical resources using a randomized “work-stealing” scheduler.
Abstract: Dynamic multithreaded languages provide low-overhead fork-join primitives to express parallelism. One such language is Cilk [3, 5], which was developed in the MIT Laboratory for Computer Science (now the MIT Computer Science and Artificial Intelligence Laboratory). Cilk minimally extends the C programming language to allow interactions among computational threads to be specified in a simple and high-level fashion. Cilk’s provably efficient runtime system dynamically maps a user’s program onto available physical resources using a randomized “work-stealing” scheduler, freeing the programmer from concerns of communication protocols and load balancing.
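The fork-join style that Cilk expresses with its spawn and sync keywords can be mimicked in plain Python (this analogue is illustrative only, not Cilk, and provides none of Cilk's work-stealing guarantees); the classic example is recursive Fibonacci, here with a hypothetical depth cutoff so the bounded thread pool cannot deadlock on nested blocking calls:

```python
from concurrent.futures import ThreadPoolExecutor

def fib(n, pool=None, depth=0):
    """Fork-join Fibonacci: 'spawn' the first recursive call, compute the
    second in the current thread, then 'sync' before combining."""
    if n < 2:
        return n
    if pool is not None and depth < 2:   # spawn only near the top (the
        child = pool.submit(fib, n - 1, pool, depth + 1)  # cutoff is an
        y = fib(n - 2, pool, depth + 1)                   # assumption)
        return child.result() + y        # "sync": wait for the spawned child
    return fib(n - 1) + fib(n - 2)       # serial below the cutoff

with ThreadPoolExecutor(max_workers=4) as pool:
    result = fib(10, pool)               # fib(10) == 55
```

In Cilk itself the same logic is written with the spawn and sync keywords in C, and the runtime's randomized work-stealing scheduler, rather than a fixed thread pool, maps the spawned work onto available processors.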