Proceedings ArticleDOI

Low depth cache-oblivious algorithms

TL;DR
This paper describes several cache-oblivious algorithms with optimal work, polylogarithmic depth, and sequential cache complexities that match the best sequential algorithms, including the first such algorithms for sorting and for sparse-matrix vector multiply on matrices with good vertex separators.
Abstract
In this paper we explore a simple and general approach for developing parallel algorithms that lead to good cache complexity on parallel machines with private or shared caches. The approach is to design nested-parallel algorithms that have low depth (span, critical path length) and for which the natural sequential evaluation order has low cache complexity in the cache-oblivious model. We describe several cache-oblivious algorithms with optimal work, polylogarithmic depth, and sequential cache complexities that match the best sequential algorithms, including the first such algorithms for sorting and for sparse-matrix vector multiply on matrices with good vertex separators. Using known mappings, our results lead to low cache complexities on shared-memory multiprocessors with a single level of private caches or a single shared cache. We generalize these mappings to multi-level cache hierarchies of private or shared caches, implying that our algorithms also have low cache complexities on such hierarchies. The key factor in obtaining these low parallel cache complexities is the low depth of the algorithms we propose.
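As a concrete illustration of the nested-parallel, divide-and-conquer style the abstract describes (not one of the paper's own algorithms, which cover sorting and sparse-matrix vector multiply), here is a minimal C++ sketch of a cache-oblivious matrix transpose. The row-major layout, the CUTOFF constant, and the use of std::async as a stand-in for a work-stealing fork-join runtime are assumptions of this sketch; its natural sequential evaluation order is cache-oblivious, and the parallel recursion keeps the depth polylogarithmic.

    // Illustrative sketch only: recursive cache-oblivious transpose of an
    // n x n row-major matrix, splitting the longer dimension in half so the
    // sequential order touches cache-sized blocks without knowing the cache
    // parameters.  std::async stands in for a fork-join scheduler.
    #include <cstddef>
    #include <future>
    #include <vector>

    void transpose_rec(const std::vector<double>& A, std::vector<double>& B,
                       std::size_t n,                    // matrix dimension (assumed square)
                       std::size_t r0, std::size_t c0,   // top-left corner of current block
                       std::size_t r, std::size_t c) {   // block extent
        const std::size_t CUTOFF = 64;                   // base-case size, a tuning assumption
        if (r * c <= CUTOFF) {
            for (std::size_t i = 0; i < r; ++i)
                for (std::size_t j = 0; j < c; ++j)
                    B[(c0 + j) * n + (r0 + i)] = A[(r0 + i) * n + (c0 + j)];
            return;
        }
        if (r >= c) {                                    // split the longer dimension, recurse in parallel
            auto top = std::async(std::launch::async, transpose_rec,
                                  std::cref(A), std::ref(B), n, r0, c0, r / 2, c);
            transpose_rec(A, B, n, r0 + r / 2, c0, r - r / 2, c);
            top.wait();
        } else {
            auto left = std::async(std::launch::async, transpose_rec,
                                   std::cref(A), std::ref(B), n, r0, c0, r, c / 2);
            transpose_rec(A, B, n, r0, c0 + c / 2, r, c - c / 2);
            left.wait();
        }
    }

Calling transpose_rec(A, B, n, 0, 0, n, n) transposes the whole matrix; the same recursive structure underlies the paper's divide-and-conquer algorithms, whose base cases and combining steps are more involved.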



Citations
Proceedings ArticleDOI

Brief announcement: paging for multicore processors

TL;DR: This paper derives bounds on the competitive ratios of natural strategies to manage the cache, and shows that the offline problem is NP-complete, but that it admits an algorithm that runs in polynomial time in the length of the request sequences.
Proceedings ArticleDOI

Histogram Sort with Sampling

TL;DR: In this paper, the authors proposed Histogram Sort with Sampling (HSS), which combines sampling and iterative histogramming to find high quality partitions with minimal data movement and high practical performance.
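To make the idea concrete, here is a simplified, sequential, single-round C++ sketch of the sampling-plus-histogramming step (this is not the authors' distributed implementation; the function name histogram_round and the parameters p and sample_size are illustrative assumptions). The real algorithm repeats the histogram pass, refining splitter candidates, until every partition boundary meets a balance threshold.

    // Illustrative sketch only: one round of splitter selection by sampling,
    // followed by a histogram pass that measures bucket balance.
    #include <algorithm>
    #include <cstddef>
    #include <random>
    #include <vector>

    std::vector<std::size_t> histogram_round(const std::vector<int>& data,
                                             std::size_t p,            // number of buckets/processors
                                             std::size_t sample_size,  // assumed >= p
                                             std::vector<int>& splitters_out) {
        std::mt19937 gen(std::random_device{}());
        std::uniform_int_distribution<std::size_t> pick(0, data.size() - 1);

        // 1. Draw a random sample of keys and sort it.
        std::vector<int> sample(sample_size);
        for (auto& s : sample) s = data[pick(gen)];
        std::sort(sample.begin(), sample.end());

        // 2. Propose p - 1 evenly spaced splitters from the sample.
        splitters_out.clear();
        for (std::size_t i = 1; i < p; ++i)
            splitters_out.push_back(sample[i * sample_size / p]);

        // 3. Histogram pass: count how many keys fall into each bucket.
        std::vector<std::size_t> counts(p, 0);
        for (int x : data) {
            std::size_t b = std::upper_bound(splitters_out.begin(),
                                             splitters_out.end(), x) -
                            splitters_out.begin();
            ++counts[b];
        }
        return counts;  // the caller checks balance and re-histograms if needed
    }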
Proceedings ArticleDOI

Program-centric cost models for locality

TL;DR: In this position paper, it is argued that cost models for locality in parallel machines should be program-centric, not machine-centric.
Proceedings ArticleDOI

Low-Span Parallel Algorithms for the Binary-Forking Model

TL;DR: In this paper, a randomized comparison-based sorting algorithm with optimal O(log n) span and O(n log n) work was proposed for the binary-forking model.
Posted Content

Histogram Sort with Sampling

TL;DR: This work introduces Histogram Sort with Sampling (HSS), which combines sampling and iterative histogramming to find high quality partitions with minimal data movement and high practical performance.
References
Journal ArticleDOI

A bridging model for parallel computation

TL;DR: The bulk-synchronous parallel (BSP) model is introduced as a candidate bridging model between parallel software and hardware, together with results quantifying its efficiency both in implementing high-level language features and algorithms and in being implemented in hardware.
Journal ArticleDOI

Amortized efficiency of list update and paging rules

TL;DR: This article shows that move-to-front is within a constant factor of optimum among a wide class of list maintenance rules, and analyzes the amortized complexity of LRU, showing that its efficiency differs from that of the off-line paging rule by a factor that depends on the size of fast memory.
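For reference, a minimal C++ sketch of the move-to-front rule the article analyzes (the helper mtf_access and its return convention are assumptions of this sketch): each successful lookup splices the accessed element to the head of the list in constant time, and the article's amortized argument shows the resulting total access cost is within a constant factor of optimum over a wide class of list maintenance rules.

    // Illustrative sketch only: move-to-front list access.
    #include <cstddef>
    #include <list>

    // Returns the 1-based position at which 'key' was found (the access cost),
    // or 0 if the key is absent; a found element is moved to the front.
    std::size_t mtf_access(std::list<int>& l, int key) {
        std::size_t pos = 1;
        for (auto it = l.begin(); it != l.end(); ++it, ++pos) {
            if (*it == key) {
                l.splice(l.begin(), l, it);  // O(1) move-to-front
                return pos;
            }
        }
        return 0;
    }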
Journal ArticleDOI

Cilk: An Efficient Multithreaded Runtime System

TL;DR: It is shown that on real and synthetic applications, the “work” and “critical-path length” of a Cilk computation can be used to model performance accurately, and it is proved that for the class of “fully strict” (well-structured) programs, the Cilk scheduler achieves space, time, and communication bounds all within a constant factor of optimal.
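The two quantities this TL;DR refers to can be illustrated with a toy fork-join computation (a sketch only: std::async stands in for Cilk's spawn/sync, and the par_depth cutoff is an assumption). The work T_1 is the total number of operations, the critical-path length T_inf is the longest chain of dependences, and the greedy-scheduling bound underlying Cilk's analysis is roughly T_P <= T_1/P + O(T_inf).

    // Illustrative sketch only: a recursive fork-join Fibonacci whose work is
    // exponential in n but whose critical-path length is only linear in n,
    // so it parallelizes well under a work-stealing scheduler.
    #include <future>

    long long fib(int n, int par_depth = 4) {
        if (n < 2) return n;
        if (par_depth == 0)                       // run sequentially below a depth cutoff
            return fib(n - 1, 0) + fib(n - 2, 0);
        auto x = std::async(std::launch::async,   // "spawn" the first subproblem
                            fib, n - 1, par_depth - 1);
        long long y = fib(n - 2, par_depth - 1);  // compute the second in this task
        return x.get() + y;                       // "sync"
    }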
Book

An introduction to parallel algorithms

TL;DR: This book provides an introduction to the design and analysis of parallel algorithms, with the emphasis on the application of the PRAM model of parallel computation, with all its variants, to algorithm analysis.
Proceedings ArticleDOI

LogP: towards a realistic model of parallel computation

TL;DR: A new parallel machine model, called LogP, is offered that reflects the critical technology trends underlying parallel computers and is intended to serve as a basis for developing fast, portable parallel algorithms and to offer guidelines to machine designers.