Open Access Book

An introduction to parallel algorithms

TL;DR
This book provides an introduction to the design and analysis of parallel algorithms, with the emphasis on the application of the PRAM model of parallel computation, with all its variants, to algorithm analysis.
Abstract
Written by an authority in the field, this book provides an introduction to the design and analysis of parallel algorithms. The emphasis is on the application of the PRAM (parallel random access machine) model of parallel computation, with all its variants, to algorithm analysis. Special attention is given to the selection of relevant data structures and to algorithm design principles that have proved to be useful.

Features
* Uses the PRAM (parallel random access machine) as the model for parallel computation.
* Covers all essential classes of parallel algorithms.
* Rich exercise sets.
* Written by a highly respected author within the field.
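To make the PRAM setting concrete, here is a small sketch (ours, not taken from the book) of a PRAM-style parallel sum: in each synchronous round every active processor combines one pair of values, so n inputs are reduced in O(log n) rounds with O(n) total work.

```python
# A minimal sketch (ours, not from the book) of a PRAM-style parallel sum.
# Each pass of the while loop corresponds to one synchronous PRAM step in
# which all active processors combine one pair of values; the sequential
# for loop merely simulates those simultaneous processors.
def pram_style_sum(values):
    a = list(values)
    n = len(a)
    stride = 1
    while stride < n:                      # O(log n) synchronous rounds
        for i in range(0, n - stride, 2 * stride):
            a[i] += a[i + stride]          # processor i acts in this round
        stride *= 2
    return a[0] if a else 0

if __name__ == "__main__":
    print(pram_style_sum(range(1, 9)))     # prints 36
```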



Citations

Looking to Parallel Algorithms for ILP and Decentralization

TL;DR: Examining explicit multi-threading (XMT), an architecture that exploits fine-grained SPMD-style programming, shows that implementations of such an architecture tend towards decentralization and that overall performance is relatively insensitive to large on-chip delays.
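For readers unfamiliar with the SPMD style mentioned here, the sketch below shows the generic pattern (plain Python threads, not XMT or XMTC code): every thread runs the same function and uses its own id to decide which part of the data it handles.

```python
# A generic SPMD sketch (plain Python threads, not XMT/XMTC code): every
# thread runs the same function and uses its thread id to pick the indices
# it is responsible for.
import threading

def spmd_square(data, tid, nthreads):
    # Thread tid handles indices tid, tid + nthreads, tid + 2*nthreads, ...
    for i in range(tid, len(data), nthreads):
        data[i] = data[i] * data[i]

def run_spmd(data, nthreads=4):
    threads = [threading.Thread(target=spmd_square, args=(data, tid, nthreads))
               for tid in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return data

if __name__ == "__main__":
    print(run_spmd(list(range(8))))        # [0, 1, 4, 9, 16, 25, 36, 49]
```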
Book Chapter

Towards Work-Efficient Parallel Parameterized Algorithms

TL;DR: Parallel parameterized complexity theory studies how fixed-parameter tractable (fpt) problems can be solved in parallel, but existing work does not take into account that, when only a small number of processors is available, it is more important that the parallel algorithms be work-efficient; this paper works towards such work-efficient parallel parameterized algorithms.
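As a point of reference, work-efficiency is usually defined as follows (a standard definition, not a statement from this paper): a parallel algorithm with running time T_p(n, k) on p processors is work-efficient when its processor-time product matches the running time of the best sequential fpt algorithm up to constant factors,

\[
  p \cdot T_p(n, k) \;=\; O\bigl(T_{\mathrm{seq}}(n, k)\bigr),
  \qquad T_{\mathrm{seq}}(n, k) = f(k) \cdot n^{O(1)}.
\]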
Posted Content

Nested Dataflow Algorithms for Dynamic Programming Recurrences with more than O(1) Dependency

TL;DR: This paper develops the first work-efficient and sublinear-time GAP algorithm based on the closure method and the Nested Dataflow method, and improves the time bounds of classic work-efficient, cache-oblivious and cache-efficient algorithms for the 1D problem and the GAP problem, respectively.
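For orientation, a recurrence with more than O(1) dependency has the following shape (an illustrative form in our notation, not necessarily the exact recurrence used in the paper): each entry depends on all earlier entries instead of on a constant number of them,

\[
  D[j] \;=\; \min_{0 \le i < j} \bigl( D[i] + w(i, j) \bigr), \qquad 1 \le j \le n.
\]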
Journal Article

Optimal Sequential and Parallel Algorithms for Cut Vertices and Bridges on Trapezoid Graphs

TL;DR: These algorithms can be easily parallelized on the EREW PRAM computational model so that cut vertices and bridges can be found in O(log n) time by using O(n / log n) processors.
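These bounds give linear total work, since the processor-time product is

\[
  O\!\left(\frac{n}{\log n}\right) \cdot O(\log n) \;=\; O(n).
\]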

SMASim Manual, version 1.0

TL;DR: SMASim is a software-based simulator, motivated by an experimental moving-threads architecture, that attempts to lower the cost of rapidly designing new architectures; it is based on a general-purpose, precise, latency-centric message-passing framework between the described hardware architecture elements.
References
Book

Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes

TL;DR: This chapter discusses sorting on a linear array and the systolic and semisystolic models of computation.
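A concrete example of sorting on a linear array is odd-even transposition sort; the sketch below is a sequential simulation of that classic algorithm (not text from the chapter), sorting n values in n parallel phases of neighbour compare-exchanges.

```python
# A sketch of odd-even transposition sort, the classic algorithm for sorting
# n values on a linear array of n cells (a sequential simulation of the
# parallel phases, not text from the chapter).
def odd_even_transposition_sort(values):
    a = list(values)
    n = len(a)
    for phase in range(n):
        # In each phase every even-indexed (or odd-indexed) cell compares
        # with its right neighbour and swaps if out of order; on the array
        # all these compare-exchanges happen at the same time.
        start = 0 if phase % 2 == 0 else 1
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

if __name__ == "__main__":
    print(odd_even_transposition_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]
```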
Book

Computer Architecture and Parallel Processing

Kai Hwang, +1 more
TL;DR: The authors have divided the use of computers into the following four levels of sophistication: data processing, information processing, knowledge processing, and intelligence processing.
Journal Article

Data parallel algorithms

TL;DR: The success of data parallel algorithms—even on problems that at first glance seem inherently serial—suggests that this style of programming has much wider applicability than was previously thought.
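The flavour of data parallel programming shows up clearly in a prefix sum ("scan") written in the doubling style associated with this line of work; the sketch below is ours, with a list comprehension simulating what every element would do simultaneously in one data parallel step.

```python
# A sketch of a data parallel prefix sum ("scan") in the doubling style
# associated with this line of work; the list comprehension simulates what
# every element would do simultaneously in one data parallel step.
def data_parallel_prefix_sum(values):
    a = list(values)
    n = len(a)
    step = 1
    while step < n:
        # Each element i >= step adds the value step positions to its left.
        a = [a[i] + a[i - step] if i >= step else a[i] for i in range(n)]
        step *= 2
    return a  # a[i] now holds values[0] + ... + values[i]

if __name__ == "__main__":
    print(data_parallel_prefix_sum([1, 2, 3, 4, 5]))  # [1, 3, 6, 10, 15]
```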
Proceedings Article

Parallelism in random access machines

TL;DR: A model of computation based on random access machines operating in parallel and sharing a common memory is presented; it can accept in polynomial time exactly the sets accepted by nondeterministic exponential-time bounded Turing machines.
Journal Article

The Parallel Evaluation of General Arithmetic Expressions

TL;DR: It is shown that arithmetic expressions with n ≥ 1 variables and constants; operations of addition, multiplication, and division; and any depth of parenthesis nesting can be evaluated in time 4 log₂ n + 10(n − 1)/p using p ≥ 1 processors which can independently perform arithmetic operations in unit time.
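To read the bound (our rephrasing, not additional text from the paper): with a single processor it stays within a constant factor of sequential evaluation, while with p proportional to n the evaluation time drops to O(log n),

\[
  T_p(n) \;\le\; 4 \log_2 n + \frac{10(n - 1)}{p},
  \qquad T_1(n) = O(n), \quad T_{\Theta(n)}(n) = O(\log n).
\]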