
Showing papers by "Charles E. Leiserson published in 2005"


Proceedings ArticleDOI
12 Feb 2005
TL;DR: A hardware implementation of unbounded transactional memory, called UTM, is described, which exploits the common case for performance without sacrificing correctness on transactions whose footprint can be nearly as large as virtual memory.
Abstract: Hardware transactional memory should support unbounded transactions: transactions of arbitrary size and duration. We describe a hardware implementation of unbounded transactional memory, called UTM, which exploits the common case for performance without sacrificing correctness on transactions whose footprint can be nearly as large as virtual memory. We performed a cycle-accurate simulation of a simplified architecture, called LTM. LTM is based on UTM but is easier to implement, because it does not change the memory subsystem outside of the processor. LTM allows nearly unbounded transactions, whose footprint is limited only by physical memory size and whose duration by the length of a timeslice. We assess UTM and LTM through microbenchmarking and by automatically converting the SPECjvm98 Java benchmarks and the Linux 2.4.19 kernel to use transactions instead of locks. We use both cycle-accurate simulation and instrumentation to understand benchmark behavior. Our studies show that the common case is small transactions that commit, even when contention is high, but that some applications contain very large transactions. For example, although 99.9% of transactions in the Linux study touch 54 cache lines or fewer, some transactions touch over 8000 cache lines. Our studies also indicate that hardware support is required, because some applications spend over half their time in critical regions. Finally, they suggest that hardware support for transactions can make Java programs run faster than when run using locks and can increase the concurrency of the Linux kernel by as much as a factor of 4 with no additional programming work.
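To make the locks-versus-transactions contrast concrete: with locks the programmer must name and manage a lock for every critical region, while a transactional system lets the programmer mark a region atomic and leaves conflict handling to the runtime. The sketch below is purely illustrative and assumes nothing from UTM/LTM; the `atomic` context manager is a hypothetical stand-in, emulated with a single global lock, where real hardware transactional memory would instead execute the body speculatively and roll back on conflict.

```python
import threading

counter = 0

# Lock-based style: the programmer names and manages the lock explicitly.
counter_lock = threading.Lock()

def deposit_with_lock(amount):
    global counter
    with counter_lock:          # critical region guarded by an explicit lock
        counter += amount

# Transactional style: the programmer only marks the region atomic.
# Emulated here with one global lock; a real HTM system would run the
# body speculatively and abort/retry on a memory conflict.
_global_tm_lock = threading.RLock()

class atomic:
    """Hypothetical 'atomic' block standing in for a hardware transaction."""
    def __enter__(self):
        _global_tm_lock.acquire()
    def __exit__(self, exc_type, exc, tb):
        _global_tm_lock.release()
        return False

def deposit_transactional(amount):
    global counter
    with atomic():              # conceptually: begin ... commit a transaction
        counter += amount

threads = [threading.Thread(target=deposit_transactional, args=(1,))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 8
```

The point of the paper's design is that the second style scales when conflicts are rare (the common case of small transactions that commit), whereas the single global lock used in this emulation serializes everything.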

447 citations


Proceedings ArticleDOI
18 Jul 2005
TL;DR: The instability results show that bursty input is close to being worst-case for exponential backoff and variants and that even small bursts can create instabilities in the channel.
Abstract: This paper analyzes the worst-case performance of randomized backoff on simple multiple-access channels. Most previous analysis of backoff has assumed a statistical arrival model. For batched arrivals, in which all n packets arrive at time 0, we show the following tight high-probability bounds. Randomized binary exponential backoff has makespan Θ(n lg n), and more generally, for any constant r, r-exponential backoff has makespan Θ(n log_{lg r} n). Quadratic backoff has makespan Θ((n/lg n)^{3/2}), and more generally, for r > 1, r-polynomial backoff has makespan Θ((n/lg n)^{1+1/r}). Thus, for batched inputs, both exponential and polynomial backoff are highly sensitive to backoff constants. We exhibit a monotone superpolynomial subexponential backoff algorithm, called loglog-iterated backoff, that achieves makespan Θ(n lg lg n / lg lg lg n). We provide a matching lower bound showing that this strategy is optimal among all monotone backoff algorithms. Of independent interest is that this lower bound was proved with a delay-sequence argument. In the adversarial-queuing model, we present the following stability and instability results for exponential backoff and loglog-iterated backoff. Given a (λ, T)-stream, in which at most n = λT packets arrive in any interval of size T, exponential backoff is stable for arrival rates of λ = O(1/lg n) and unstable for arrival rates of λ = Ω(lg lg n / lg n); loglog-iterated backoff is stable for arrival rates of λ = O(1/(lg lg n · lg n)) and unstable for arrival rates of λ = Ω(1/lg n). Our instability results show that bursty input is close to being worst-case for exponential backoff and variants and that even small bursts can create instabilities in the channel.
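The batched-arrival setting is easy to simulate. The toy model below (my sketch, not the paper's analysis) uses one common formulation of binary exponential backoff: after its i-th collision, a packet waits a uniformly random number of slots in [1, 2^i] before retrying; a slot succeeds only if exactly one packet transmits. The makespan is the slot by which all n packets have gotten through.

```python
import random

def bexp_makespan(n, seed=0):
    """Simulate batched binary exponential backoff: all n packets arrive
    at slot 0; after its i-th collision a packet waits a uniform number
    of slots in [1, 2**i] before retrying.  Returns the slot count until
    every packet has been transmitted successfully."""
    rng = random.Random(seed)
    next_try = {p: 0 for p in range(n)}   # slot of each packet's next attempt
    attempts = {p: 0 for p in range(n)}   # collisions suffered so far
    slot, done = 0, 0
    while done < n:
        senders = [p for p, t in next_try.items() if t == slot]
        if len(senders) == 1:             # exactly one sender: success
            del next_try[senders[0]]
            done += 1
        elif len(senders) > 1:            # collision: everyone backs off
            for p in senders:
                attempts[p] += 1
                next_try[p] = slot + rng.randint(1, 2 ** attempts[p])
        slot += 1
    return slot

print(bexp_makespan(64))
```

Since each slot carries at most one success, any strategy needs at least n slots; the paper's Θ(n lg n) bound says binary exponential backoff pays a logarithmic factor over that on batched inputs, and runs of this simulation for growing n show the super-linear trend.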

149 citations


07 Oct 2005
TL;DR: JCilk integrates exception handling with multithreading by defining semantics consistent with the existing semantics of Java’s try and catch constructs while handling concurrency in spawned methods; its linguistic mechanisms are used to program a solution to the “queens” problem with speculative computations.
Abstract: JCilk extends the Java language to provide call-return semantics for multithreading, much as Cilk does for C. Java’s built-in thread model does not support the passing of exceptions or return values from one thread back to the “parent” thread that created it. JCilk imports Cilk’s fork-join primitives spawn and sync into Java to provide procedure-call semantics for concurrent subcomputations. This paper shows how JCilk integrates exception handling with multithreading by defining semantics consistent with the existing semantics of Java’s try and catch constructs, but which handle concurrency in spawned methods. JCilk’s strategy of integrating multithreading with Java’s exception semantics yields some surprising semantic synergies. In particular, JCilk extends Java’s exception semantics to allow exceptions to be passed from a spawned method to its parent in a natural way that obviates the need for Cilk’s inlet and abort constructs. This extension is “faithful” in that it obeys Java’s ordinary serial semantics when executed on a single processor. When executed in parallel, however, an exception thrown by a JCilk computation signals its sibling computations to abort, which yields a clean semantics in which only a single exception from the enclosing try block is handled. The decision to implicitly abort side computations opens a Pandora’s box of subsidiary linguistic problems to be resolved, however. For instance, aborting might cause a computation to be interrupted asynchronously, causing havoc in programmer understanding of code behavior. To minimize the complexity of reasoning about aborts, JCilk signals them “semisynchronously” so that abort signals do not interrupt ordinary serial code. In addition, JCilk propagates an abort signal throughout a subcomputation naturally with a built-in CilkAbort exception, thereby allowing programmers to handle clean-up by simply catching the CilkAbort exception. 
The semantics of JCilk allow programs with speculative computations to be programmed easily. Speculation is essential for parallelizing programs such as branch-and-bound or heuristic search. We show how JCilk’s linguistic mechanisms can be used to program a solution to the “queens” problem with speculative computations. (This research was supported in part by the Singapore-MIT Alliance and by NSF Grant ACI-0324974. I-Ting Angelina Lee was supported in part by a Sun Microsystems Fellowship. Copyright © 2005 by John S. Danaher, I-Ting Angelina Lee, and Charles E. Leiserson.)
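JCilk's call-return semantics — an exception thrown by a spawned child is re-raised in the parent at the sync point, and siblings are aborted — can be roughly approximated with ordinary futures. The sketch below is an emulation, not JCilk: `submit` stands in for spawn, waiting on the futures stands in for sync, and `cancel()` is a weak stand-in for CilkAbort (it only stops tasks that have not yet started, whereas JCilk semisynchronously aborts running computations).

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_EXCEPTION

def child(i):
    """A spawned subcomputation; child 2 fails, modeling a thrown exception."""
    if i == 2:
        raise ValueError(f"failure in child {i}")
    return i * i

def parent():
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(child, i) for i in range(4)]    # "spawn"
        # "sync": stop waiting as soon as any child throws
        done, pending = wait(futures, return_when=FIRST_EXCEPTION)
        for f in pending:
            f.cancel()              # crude "abort" of unstarted siblings
        # re-raise the child's exception in the parent, as JCilk would
        for f in done:
            if f.exception() is not None:
                raise f.exception()
        return [f.result() for f in done]

try:
    parent()
except ValueError as e:
    print("caught at sync:", e)
```

The key property being modeled is that the parent's ordinary try/catch handles the child's exception, preserving serial Java semantics on one processor; what this emulation cannot reproduce is JCilk's semisynchronous abort of already-running side computations.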

20 citations



Proceedings Article
01 Jan 2005
TL;DR: A report on Dagstuhl Seminar 04301, “Cache-Oblivious and Cache-Aware Algorithms,” held from 18.07.2004 to 23.07.2004, collecting abstracts of the presentations and of seminar results and ideas.
Abstract: The Dagstuhl Seminar 04301 ``Cache-Oblivious and Cache-Aware Algorithms'' was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl, from 18.07.2004 to 23.07.2004. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.
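For readers unfamiliar with the seminar's subject, a classic illustration (my own sketch, not taken from the seminar report) is a cache-oblivious, divide-and-conquer matrix transpose: it recursively halves the longer dimension, so at some recursion depth every subproblem fits in cache, without the code ever knowing the cache size.

```python
def transpose(src, dst, r0, r1, c0, c1):
    """Cache-obliviously write the transpose of src[r0:r1][c0:c1] into dst."""
    rows, cols = r1 - r0, c1 - c0
    if rows <= 2 and cols <= 2:          # base case: a tiny block
        for i in range(r0, r1):
            for j in range(c0, c1):
                dst[j][i] = src[i][j]
    elif rows >= cols:                   # split the longer dimension
        mid = (r0 + r1) // 2
        transpose(src, dst, r0, mid, c0, c1)
        transpose(src, dst, mid, r1, c0, c1)
    else:
        mid = (c0 + c1) // 2
        transpose(src, dst, r0, r1, c0, mid)
        transpose(src, dst, r0, r1, mid, c1)

m = [[1, 2, 3], [4, 5, 6]]               # 2 x 3 matrix
t = [[0] * 2 for _ in range(3)]          # 3 x 2 result
transpose(m, t, 0, 2, 0, 3)
print(t)  # [[1, 4], [2, 5], [3, 6]]
```

In the ideal-cache model this recursion incurs O(1 + rc/B) cache misses for an r x c matrix with line size B, matching the cache-aware blocked algorithm while remaining tuning-free — the trade-off the seminar's title names.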

1 citation