Proceedings ArticleDOI

Low depth cache-oblivious algorithms

TLDR
This paper describes several cache-oblivious algorithms with optimal work, polylogarithmic depth, and sequential cache complexities that match the best sequential algorithms, including the first such algorithms for sorting and for sparse-matrix vector multiply on matrices with good vertex separators.
Abstract
In this paper we explore a simple and general approach for developing parallel algorithms that lead to good cache complexity on parallel machines with private or shared caches. The approach is to design nested-parallel algorithms that have low depth (span, critical path length) and for which the natural sequential evaluation order has low cache complexity in the cache-oblivious model. We describe several cache-oblivious algorithms with optimal work, polylogarithmic depth, and sequential cache complexities that match the best sequential algorithms, including the first such algorithms for sorting and for sparse-matrix vector multiply on matrices with good vertex separators.

Using known mappings, our results lead to low cache complexities on shared-memory multiprocessors with a single level of private caches or a single shared cache. We generalize these mappings to multi-level cache hierarchies of private or shared caches, implying that our algorithms also have low cache complexities on such hierarchies. The key factor in obtaining these low parallel cache complexities is the low depth of the algorithms we propose.
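To make the approach concrete, the sketch below (illustrative C++, not code from the paper) shows the flavor of a nested-parallel, cache-oblivious algorithm: an out-of-place matrix transpose that recursively splits the longer dimension. The two recursive calls are independent, so a fork-join evaluation has polylogarithmic depth, while the natural sequential (depth-first) order incurs O(nm/B + n + m) cache misses in the ideal-cache model. The function name, the 64-element cutoff, and the use of std::async (standing in for a work-stealing scheduler such as Cilk) are assumptions made for this example.

#include <cstddef>
#include <future>

// Illustrative sketch: nested-parallel, cache-oblivious transpose of an
// n x m row-major matrix a into the m x n row-major matrix b.
// Work O(nm), depth O(log n + log m); the sequential (depth-first) order
// touches O(nm/B + n + m) cache lines without knowing the cache parameters.
static void transpose(const double* a, double* b,
                      std::size_t i0, std::size_t i1,   // row range of a
                      std::size_t j0, std::size_t j1,   // column range of a
                      std::size_t n, std::size_t m)     // a is n x m, b is m x n
{
    std::size_t di = i1 - i0, dj = j1 - j0;
    if (di * dj <= 64) {                  // base case: copy a small block
        for (std::size_t i = i0; i < i1; ++i)
            for (std::size_t j = j0; j < j1; ++j)
                b[j * n + i] = a[i * m + j];
        return;
    }
    if (di >= dj) {                       // split the longer dimension in half
        std::size_t mid = i0 + di / 2;
        auto upper = std::async(std::launch::async, transpose,
                                a, b, i0, mid, j0, j1, n, m);
        transpose(a, b, mid, i1, j0, j1, n, m);
        upper.get();                      // join the forked half
    } else {
        std::size_t mid = j0 + dj / 2;
        auto left = std::async(std::launch::async, transpose,
                               a, b, i0, i1, j0, mid, n, m);
        transpose(a, b, i0, i1, mid, j1, n, m);
        left.get();
    }
}

A production version would replace std::async with a proper nested-parallel runtime (e.g., Cilk or a task scheduler) so that spawning stays cheap; the divide-and-conquer structure, not the scheduling primitive, is what gives both the low depth and the cache-oblivious sequential locality the abstract relies on.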

Citations
Journal ArticleDOI

Fairness in responsive parallelism

TL;DR: An algorithm designed to approximate the fairly prompt scheduling principle on multicore computers is presented, implemented as an extension of the Standard ML language, together with an empirical evaluation.
Journal ArticleDOI

FFT, FMM, and multigrid on the road to exascale: Performance challenges and opportunities

TL;DR: A model-based comparison of the FFT, FMM, and multigrid methods under projected exascale constraints is performed, and performance models are used to predict their expected performance on upcoming exascale system configurations based on current technology trends.
Proceedings ArticleDOI

Parallel circuit simulation using the direct method on a heterogeneous cloud

TL;DR: The partitioning approach using heterogeneous resources achieves an order-of-magnitude speedup over optimized multithreaded implementations of SPICE that use the state-of-the-art KLU and NICSLU packages for matrix solution.
Journal ArticleDOI

Provably space-efficient parallel functional programming

TL;DR: In this paper, the authors present space-efficient memory management techniques for determinacy-race-free functional parallel programs, allowing both pure and imperative programs in which memory may be destructively updated, and prove that for a program with sequential live memory R*, any P-processor garbage-collected parallel run requires at most O(R* · P) memory.

Coupling Memory and Computation for Locality Management

TL;DR: This paper proposes techniques for coupling tightly the computation (including the thread scheduler) and the memory manager so that data and computation can be positioned closely in hardware.
References
Journal ArticleDOI

A bridging model for parallel computation

TL;DR: The bulk-synchronous parallel (BSP) model is introduced as a candidate bridging model between parallel software and hardware, and results are presented quantifying its efficiency both in implementing high-level language features and algorithms and in being implemented in hardware.
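As a reminder of what the model charges for (standard BSP accounting, stated here for context rather than taken from the summary above), a superstep with maximum local work w, at most h words sent or received by any processor, per-word gap g (inverse bandwidth), and barrier latency l is costed as w + g·h + l. A minimal sketch:

// Illustrative only: the standard per-superstep cost in the BSP model.
// w: maximum local computation, h: maximum words communicated by any
// processor, g: gap (inverse bandwidth), l: barrier-synchronization cost.
double bsp_superstep_cost(double w, double h, double g, double l) {
    return w + g * h + l;
}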
Journal ArticleDOI

Amortized efficiency of list update and paging rules

TL;DR: This article shows that move-to-front is within a constant factor of optimum among a wide class of list maintenance rules, and analyzes the amortized complexity of LRU, showing that its efficiency differs from that of the off-line paging rule by a factor that depends on the size of fast memory.
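For context, the move-to-front rule analyzed in that article is simple to state: on every access, move the requested item to the head of the list. A minimal C++ sketch (the function name and list representation are choices made for this example, not from the article):

#include <algorithm>
#include <list>

// Move-to-front list update: on a successful access, splice the element
// to the head of the list. Sleator and Tarjan show this rule is within a
// constant factor of the optimal offline list-maintenance rule.
template <typename T>
bool access_move_to_front(std::list<T>& items, const T& key) {
    auto it = std::find(items.begin(), items.end(), key);
    if (it == items.end()) return false;
    items.splice(items.begin(), items, it);   // move accessed element to front
    return true;
}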
Journal ArticleDOI

Cilk: An Efficient Multithreaded Runtime System

TL;DR: It is shown that on real and synthetic applications, the “work” and “critical-path length” of a Cilk computation can be used to model performance accurately, and it is proved that for the class of “fully strict” (well-structured) programs, the Cilk scheduler achieves space, time, and communication bounds all within a constant factor of optimal.
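The performance model referred to in that summary can be stated in one line: with work T1 and critical-path length (span) Tinf, a greedy or work-stealing scheduler on P processors runs in roughly T1/P + Tinf time. A minimal sketch (the function name is an assumption for illustration):

// Illustrative only: work/span prediction of P-processor running time,
// T_P ~ T1 / P + Tinf, the bound that Cilk's work-stealing scheduler
// matches to within constant factors for fully strict programs.
double predicted_running_time(double t1, double tinf, double p) {
    return t1 / p + tinf;
}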
Book

An introduction to parallel algorithms

TL;DR: This book provides an introduction to the design and analysis of parallel algorithms, with emphasis on the application of the PRAM model of parallel computation and its variants to algorithm analysis.
Proceedings ArticleDOI

LogP: towards a realistic model of parallel computation

TL;DR: A new parallel machine model, called LogP, is offered that reflects the critical technology trends underlying parallel computers and is intended to serve as a basis for developing fast, portable parallel algorithms and to offer guidelines to machine designers.