Open Access Book

An introduction to parallel algorithms

TLDR
This book provides an introduction to the design and analysis of parallel algorithms, with emphasis on applying the PRAM model of parallel computation, in all its variants, to algorithm analysis.
Abstract
Written by an authority in the field, this book provides an introduction to the design and analysis of parallel algorithms. The emphasis is on the application of the PRAM (parallel random access machine) model of parallel computation, with all its variants, to algorithm analysis. Special attention is given to the selection of relevant data structures and to algorithm design principles that have proved to be useful.

Features:
* Uses the PRAM (parallel random access machine) as the model for parallel computation.
* Covers all essential classes of parallel algorithms.
* Rich exercise sets.
* Written by a highly respected author within the field.
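To give a flavor of the PRAM-style analysis the book emphasizes, here is a hedged sketch (not taken from the book): a tree reduction on an EREW PRAM sums n values in O(log n) synchronous rounds, and the rounds can be simulated sequentially in plain Python.

```python
# Sketch: simulate an EREW PRAM parallel sum (illustrative, not from the book).
# Each loop iteration represents one synchronous PRAM step: "processor" i
# combines cells 2i and 2i+1, halving the active data, so n elements need
# ceil(log2 n) rounds.

def pram_sum(values):
    """Simulate an O(log n)-round EREW PRAM tree reduction; return (sum, rounds)."""
    data = list(values)
    rounds = 0
    while len(data) > 1:
        # One synchronous step: all processors read/write disjoint cells (EREW).
        nxt = [data[i] + data[i + 1] for i in range(0, len(data) - 1, 2)]
        if len(data) % 2:          # an odd leftover carries into the next round
            nxt.append(data[-1])
        data = nxt
        rounds += 1
    return data[0], rounds

total, rounds = pram_sum(range(16))
print(total, rounds)  # 120 in ceil(log2 16) = 4 rounds
```

The point of the simulation is the round count: the work per round is done "in parallel" conceptually, so the parallel time is the number of rounds, O(log n), rather than the O(n) total additions.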



Citations
Proceedings Article

Ligra: a lightweight graph processing framework for shared memory

TL;DR: This paper presents Ligra, a lightweight graph processing framework specific to shared-memory parallel/multicore machines, which makes graph traversal algorithms easy to write and significantly more efficient than previously reported results using graph frameworks on machines with many more cores.
Monograph

Introduction to Parallel Computing

TL;DR: In this article, a comprehensive introduction to parallel computing is provided, discussing theoretical issues such as the fundamentals of concurrent processes, models of parallel and distributed computing, and metrics for evaluating and comparing parallel algorithms, as well as practical issues, including methods of designing and implementing shared-and distributed-memory programs, and standards for parallel program implementation.
Proceedings Article

A lightweight infrastructure for graph analytics

TL;DR: This paper argues that existing DSLs can be implemented on top of a general-purpose infrastructure that supports very fine-grain tasks, implements autonomous, speculative execution of these tasks, and allows application-specific control of task scheduling policies.
Book

Data-Intensive Text Processing with MapReduce

TL;DR: This half-day tutorial introduces participants to data-intensive text processing with the MapReduce programming model using the open-source Hadoop implementation, with a focus on scalability and the tradeoffs associated with distributed processing of large datasets.
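The MapReduce model the tutorial above covers can be sketched in plain Python (an illustrative toy, not the Hadoop API: a real job distributes the map, shuffle, and reduce phases across a cluster):

```python
# Sketch of the MapReduce programming model: word count in plain Python.
from collections import defaultdict

def map_phase(doc):
    # Mapper: emit an intermediate (word, 1) pair for every word in a document.
    for word in doc.split():
        yield word.lower(), 1

def reduce_phase(word, counts):
    # Reducer: sum all counts that arrived for one key.
    return word, sum(counts)

def word_count(docs):
    # Shuffle: group intermediate pairs by key, as the framework would do
    # between the map and reduce phases.
    groups = defaultdict(list)
    for doc in docs:
        for word, n in map_phase(doc):
            groups[word].append(n)
    return dict(reduce_phase(w, c) for w, c in groups.items())

print(word_count(["the cat", "the dog"]))  # {'the': 2, 'cat': 1, 'dog': 1}
```

Because each mapper sees only one document and each reducer only one key, both phases parallelize trivially; the shuffle is where the distributed framework earns its keep.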
Book

Limits to Parallel Computation: P-Completeness Theory

TL;DR: In providing an up-to-date survey of parallel computing research as of 1994, this book will prove invaluable to researchers and professionals with an interest in the supercomputers of the future.
References
Proceedings Article

Fast parallel matrix inversion algorithms

L. Csanky
TL;DR: It is shown that the parallel arithmetic complexity of all four problems is upper bounded by O(log^2 n), and that the algorithms establishing this bound use a number of processors polynomial in n, disproving I. Munro's conjecture.
Journal Article

New Parallel-Sorting Schemes

TL;DR: A family of parallel-sorting algorithms for a multiprocessor system is presented; these are enumeration sortings and include the use of parallel merging to implement count acquisition, matching the performance of Hirschberg's algorithm, which, however, is not free of fetch conflicts.
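The enumeration-sorting idea above can be sketched as follows (a minimal sequential simulation; on a parallel machine each element's rank would be computed by its own processor, with "count acquisition" being the rank computation):

```python
def enumeration_sort(a):
    # Enumeration (rank) sort: each element's final position is the number of
    # elements that must precede it. The outer loop simulates the independent
    # per-element processors of the parallel version.
    n = len(a)
    out = [None] * n
    for i in range(n):
        # Count acquisition: count strictly smaller elements, breaking ties
        # by original index so equal keys get distinct ranks (stable).
        rank = sum(1 for j in range(n)
                   if a[j] < a[i] or (a[j] == a[i] and j < i))
        out[rank] = a[i]
    return out

print(enumeration_sort([3, 1, 2, 1]))  # [1, 1, 2, 3]
```

The total work is O(n^2) comparisons, but since the n rank computations are independent, the parallel time with n processors is the time for one rank computation.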
Proceedings Article

A unified approach to models of synchronous parallel machines

TL;DR: Strong evidence for the general applicability and truth of the parallel computation thesis is given in this paper by introducing the notion of "conglomerates": a very large class of parallel machines, including all those which could feasibly be built.
Journal Article

Fast parallel sorting algorithms

TL;DR: A parallel bucket-sort algorithm is presented that requires O(log log n) time and n processors, and makes use of a technique that requires more space than the product of processors and time.
Journal Article

Parallel hashing: an efficient implementation of shared memory

TL;DR: A probabilistic scheme for implementing shared memory on a bounded-degree network of processors that enables n processors to store and retrieve an arbitrary set of n data items in O(logn) parallel steps is presented.