Book

An introduction to parallel algorithms

01 Oct 1992
TL;DR: This book provides an introduction to the design and analysis of parallel algorithms, emphasizing the application of the PRAM (parallel random access machine) model of parallel computation, in all its variants, to algorithm analysis.
Abstract: Written by an authority in the field, this book provides an introduction to the design and analysis of parallel algorithms. The emphasis is on the application of the PRAM (parallel random access machine) model of parallel computation, with all its variants, to algorithm analysis. Special attention is given to the selection of relevant data structures and to algorithm design principles that have proved to be useful. Features:
  • Uses the PRAM (parallel random access machine) as the model for parallel computation.
  • Covers all essential classes of parallel algorithms.
  • Rich exercise sets.
  • Written by a highly respected author within the field.
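
The book presents its algorithms in the PRAM/Work-Time framework rather than in any concrete programming language. As a rough illustration of that style (a sketch of our own, not taken from the book), here is the classic balanced-tree summation, with each synchronous PRAM round approximated by an OpenMP parallel loop; n is assumed to be a power of two:

    /* EREW PRAM-style balanced-tree summation: O(log n) rounds,
       O(n) total work; one parallel-for approximates one round. */
    #include <stdio.h>

    long tree_sum(long *a, int n) {        /* n must be a power of two */
        for (int stride = 1; stride < n; stride *= 2) {
            #pragma omp parallel for       /* all processors in lockstep */
            for (int i = 0; i + stride < n; i += 2 * stride)
                a[i] += a[i + stride];
        }
        return a[0];
    }

    int main(void) {
        long a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        printf("%ld\n", tree_sum(a, 8));   /* prints 36 */
        return 0;
    }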

Citations
Posted Content
22 Apr 2019
TL;DR: A restricted version of the Diffusion-Limited Aggregation model, in which particles may move only in a subset of the possible directions, is introduced, and it is shown that the prediction problem for particles allowed two or more movement directions is P-Complete, as the model can simulate arbitrary Boolean circuits.
Abstract: Diffusion-Limited Aggregation (DLA) is a cluster-growth model that consists of a set of particles that are sequentially aggregated over a two-dimensional grid. In this paper, we introduce a biased version of the DLA model, in which particles are limited to move in a subset of possible directions. We denote by $k$-DLA the model where the particles move only in $k$ possible directions. We study the biased DLA model from the perspective of Computational Complexity, defining two decision problems. The first problem is Prediction, whose input is a site $c$ of the grid and a sequence $S$ of walks representing the trajectories of a set of particles. The question is whether a particle stops at site $c$ when sequence $S$ is realized. The second problem is Realization, where the input is a set $P$ of positions of the grid. The question is whether there exists a sequence $S$ that realizes $P$, i.e., all particles of $S$ exactly occupy the positions in $P$. Our aim is to classify the Prediction and Realization problems for the different versions of DLA. We first show that Prediction is P-Complete for 2-DLA (and thus for 3-DLA). We then show that Prediction can be solved much more efficiently for 1-DLA: in that case the problem is NL-Complete. With respect to Realization, we show that restricted to 2-DLA the problem is in P, while in the 1-DLA case it is in L.
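
To make Prediction concrete, the following sketch (our own reading of the definitions above, not the authors' code) realizes a sequence of walks over a small grid and reports whether some particle aggregates exactly at the target site. It assumes a single seed at the grid center, 4-neighbor adjacency, a common starting site for all particles, and walks that stay on the grid; the paper's exact conventions may differ.

    #include <stdio.h>
    #include <string.h>

    #define G 64                         /* grid side length */
    static int grid[G][G];               /* 1 = aggregated site */

    static int stuck(int x, int y) {     /* adjacent to the cluster? */
        return (x > 0 && grid[x-1][y]) || (x+1 < G && grid[x+1][y]) ||
               (y > 0 && grid[x][y-1]) || (y+1 < G && grid[x][y+1]);
    }

    /* Each walk is a string over "NSEW"; particles are released
       sequentially and freeze as soon as they touch the cluster.
       Returns 1 iff some particle stops exactly at (cx, cy). */
    int predict(const char **walks, int nwalks, int sx, int sy,
                int cx, int cy) {
        memset(grid, 0, sizeof grid);
        grid[G/2][G/2] = 1;              /* the seed */
        for (int i = 0; i < nwalks; i++) {
            int x = sx, y = sy;
            for (const char *m = walks[i]; *m && !stuck(x, y); m++) {
                if      (*m == 'N' && y+1 < G) y++;
                else if (*m == 'S' && y > 0)   y--;
                else if (*m == 'E' && x+1 < G) x++;
                else if (*m == 'W' && x > 0)   x--;
            }
            if (stuck(x, y)) {           /* particle aggregates here */
                grid[x][y] = 1;
                if (x == cx && y == cy) return 1;
            }
        }
        return 0;
    }

    int main(void) {
        const char *S[] = {"EEEE", "EEE"};
        /* the second particle should stop two sites west of the seed */
        printf("%d\n", predict(S, 2, G/2 - 5, G/2, G/2 - 2, G/2));
        return 0;
    }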

Cites background from "An introduction to parallel algorit..."

  • ...In [11] it is shown that computing the all-pairs-shortest-path matrix can be done by a fast-parallel algorithm....

  • ...All of them are explained in greater detail in [11]....

Book ChapterDOI
01 Jan 2012
TL;DR: This chapter introduces the basic concepts of parallel programming and supplies the basic tools needed to write parallel programs, covering topics from practical aspects of threads, the sequential "atoms" of parallel programs, to the semantics of parallelism.
Abstract: This chapter introduces the basic concepts of parallel programming. It is based on the ParC language, which is an extension of the C programming language with block-oriented parallel constructs that allow the programmer to express fine-grain parallelism in a shared memory model. It can be used to express parallel algorithms, and it is also conducive to the parallelization of sequential C programs. The chapter covers several topics in shared memory programming, each presented with simple examples demonstrating its utility, and supplies the basic tools and concepts needed to write parallel programs:
  • Practical aspects of threads, the sequential "atoms" of parallel programs.
  • Closed constructs to create parallelism.
  • Possible bugs.
  • The structure of the software environment that surrounds parallel programs.
  • The extension of C scoping rules to support private variables and local memory accesses.
  • The semantics of parallelism.
  • The discrepancy between the limited number of physical processors and the much larger number of parallel threads used in a program.
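
ParC's closed constructs are not reproduced here, but the chapter's main themes (threads as the sequential atoms, shared variables, many more threads than processors, and the locking that prevents the data-race bugs it warns about) can be sketched in plain C with POSIX threads:

    #include <stdio.h>
    #include <pthread.h>

    #define NTHREADS 64              /* typically more than physical cores */

    static long counter = 0;         /* shared memory */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *work(void *arg) {   /* one fine-grain thread */
        (void)arg;
        pthread_mutex_lock(&lock);   /* without this: a data race */
        counter++;
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_t t[NTHREADS];
        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, work, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        printf("%ld\n", counter);    /* always NTHREADS, thanks to the lock */
        return 0;
    }
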
Proceedings ArticleDOI
19 May 2014
TL;DR: Simulation algorithms are introduced between WECPAR and well-known models such as PRAM and RMBM, and it is shown that ranking one element in a list of N elements can be solved on W(N,N,N) WECPAR in O(1) time.
Abstract: Reconfigurable models were shown to be very powerful in solving many problems faster than non-reconfigurable models. WECPAR W(M,N,k) is an M × N reconfigurable model that has point-to-point reconfigurable interconnection with k wires between neighboring processors. This paper studies several aspects of WECPAR. We first solve the list ranking problem on WECPAR. Some of the results obtained show that ranking one element in a list of N elements can be solved on W(N,N,N) WECPAR in O(1) time. Also, on W(N,N,k), ranking a list L(N) of N elements can be done in O((log N)⌈log_{k+1} N⌉) time. To transfer a large body of algorithms to work on WECPAR and to assess its relative computational power, several simulation algorithms are introduced between WECPAR and well-known models such as PRAM and RMBM. These simulations show that a PRIORITY CRCW PRAM of N processors and S shared memory locations can be simulated by a W(S,N,k) WECPAR in O(⌈log_{k+1} N⌉ + ⌈log_{k+1} S⌉) time. Also, we show that a PRIORITY CRCW Basic-RMBM(P,B), of P processors and B buses, can be simulated by a W(B,P+B,k) WECPAR in O(⌈log_{k+1}(P+B)⌉) time. This has the effect of migrating a large number of algorithms to work directly on WECPAR, incurring only the simulation overhead.

Cites background or methods from "An introduction to parallel algorit..."

  • ...Concurrent writes are resolved using one of the resolution rules, such as Common, Collision, Collision+, Arbitrary, and Priority [5]....

  • ...We first modify the famous pointer jumping technique [5] and call the new procedure MPJ_procedure....

  • ...Also, each step of a CRCW SRMBM(P,B) can be simulated in O(1) time on a Priority CRCW PRAM(O(P²B), O(PB))....

  • ...• Modified Pointer Jumping (MPJ_procedure): The pointer jumping technique [5] originally, in each iteration, doubles the path between a node and its parent in a list....

  • ...On a concurrent-read exclusive-write (CREW) PRAM, the list ranking problem can be solved using the pointer jumping technique in O(log N) time [5]....
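
The pointer jumping these excerpts refer to can be made concrete with a small sketch (our own illustration of the technique in [5], not code from either paper): list ranking in ⌈log₂ N⌉ synchronous rounds, each round approximated here by an OpenMP loop over snapshot copies so that reads and writes stay separate.

    /* Pointer-jumping list ranking, CREW PRAM style: after round t,
       each node has counted min(2^t, distance to the tail) links. */
    #include <stdio.h>
    #include <string.h>

    #define N 8

    /* next[i] is i's successor; the tail points to itself.
       On return, rank[i] = number of links from i to the tail. */
    void list_rank(int next[N], int rank[N]) {
        int nn[N], nr[N];
        for (int i = 0; i < N; i++) rank[i] = (next[i] == i) ? 0 : 1;
        for (int round = 0; round < 3; round++) {   /* ceil(log2 N) = 3 */
            #pragma omp parallel for                /* read phase */
            for (int i = 0; i < N; i++) {
                nr[i] = rank[i] + rank[next[i]];
                nn[i] = next[next[i]];              /* jump the pointer */
            }
            memcpy(rank, nr, sizeof nr);            /* write phase */
            memcpy(next, nn, sizeof nn);
        }
    }

    int main(void) {
        int next[N] = {1, 2, 3, 4, 5, 6, 7, 7};    /* list 0 -> 1 -> ... -> 7 */
        int rank[N];
        list_rank(next, rank);
        for (int i = 0; i < N; i++) printf("%d ", rank[i]);
        printf("\n");                               /* prints 7 6 5 4 3 2 1 0 */
        return 0;
    }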

01 Jan 2013
TL;DR: An automatic view access management scheme is proposed, where a view is automatically acquired upon its first access and automatically released when no longer needed, relieving programmers from arranging locks to protect critical sections.
Abstract: Shared-memory multicore architectures have become pervasive, and there is a pressing need for parallel programming models that facilitate both performance and convenience. However, most existing shared-memory programming models are tedious to program with and are prone to errors such as data races, which are difficult to debug. To solve this problem, this thesis proposes a data race prevention scheme in the View-Oriented Parallel Programming (VOPP) paradigm. VOPP was proposed for distributed shared memory systems; it is adapted to shared-memory multicore architectures in this thesis. VOPP is a shared-memory, data-centric parallel programming model which uses views to bundle mutual exclusion with data access. In VOPP, programmers partition the shared memory into "views", which are non-overlapping sets of shared data objects. The data race prevention scheme proposed for VOPP can prevent data races through the memory protection mechanism while keeping the extra overhead low. To improve the programmability of VOPP, this thesis proposes an automatic view access management scheme in which a view is automatically acquired upon its first access and automatically released when no longer needed, thus relieving programmers from arranging locks to protect critical sections. To further improve performance and programmability, this thesis proposes the View-Oriented Transactional Memory (VOTM) system, which uses Restricted Admission Control (RAC) to manage the number of processes holding each view according to its contention; RAC can restrict the number of processes holding the view when its contention is high.
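
The view discipline is easy to sketch. The API below (view_t, acquire_view, release_view) is invented here purely for illustration and is not the thesis's actual interface; in particular, the automatic scheme described above would acquire and release views implicitly rather than through explicit calls.

    #include <stdio.h>
    #include <pthread.h>

    typedef struct {                     /* a view: data plus its exclusion */
        pthread_mutex_t m;
        long balance;                    /* the shared objects in this view */
    } view_t;

    static view_t account = { PTHREAD_MUTEX_INITIALIZER, 0 };

    static view_t *acquire_view(view_t *v) { pthread_mutex_lock(&v->m); return v; }
    static void release_view(view_t *v)    { pthread_mutex_unlock(&v->m); }

    static void *deposit(void *arg) {
        (void)arg;
        view_t *v = acquire_view(&account);
        v->balance += 10;                /* no data race: the view is held */
        release_view(v);
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, deposit, NULL);
        for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
        printf("%ld\n", account.balance); /* always 40 */
        return 0;
    }

Bundling the data with its lock in one structure makes it harder to touch the data without holding its view, which is the intuition behind the data race prevention scheme.
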
Posted Content
TL;DR: In this paper, the authors present a set of parallel algorithms for computing exact k-nearest neighbors in low dimensions using a data structure called the zd-tree.
Abstract: We present a set of parallel algorithms for computing exact k-nearest neighbors in low dimensions. Many k-nearest neighbor algorithms use either a kd-tree or the Morton ordering of the point set; our algorithms combine these approaches using a data structure we call the zd-tree. We show that this combination is both theoretically efficient under common assumptions, and fast in practice. For point sets of size $n$ with bounded expansion constant and bounded ratio, the zd-tree can be built in $O(n)$ work with $O(n^{\epsilon})$ span for constant $\epsilon<1$, and searching for the $k$-nearest neighbors of a point takes expected $O(k\log k)$ time. We benchmark our k-nearest neighbor algorithms against existing parallel k-nearest neighbor algorithms, showing that our implementations are generally faster than the state of the art as well as achieving 75x speedup on 144 hyperthreads. Furthermore, the zd-tree supports parallel batch-dynamic insertions and deletions; to our knowledge, it is the first k-nearest neighbor data structure to support such updates. On point sets with bounded expansion constant and bounded ratio, a batch-dynamic update of size $k$ requires $O(k \log n/k)$ work with $O(k^{\epsilon} + \text{polylog}(n))$ span.
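
As background on the Morton-ordering half of that combination (a generic sketch, not the paper's implementation): the 2-D Morton, or z-order, key interleaves the bits of the two coordinates, so sorting points by key lays them out along a space-filling curve that keeps nearby points close together.

    #include <stdio.h>
    #include <stdint.h>

    /* Spread the low 16 bits of x so that bit i moves to bit 2i. */
    static uint32_t spread_bits(uint32_t x) {
        x &= 0xFFFF;
        x = (x | (x << 8)) & 0x00FF00FF;
        x = (x | (x << 4)) & 0x0F0F0F0F;
        x = (x | (x << 2)) & 0x33333333;
        x = (x | (x << 1)) & 0x55555555;
        return x;
    }

    /* Interleave coordinate bits: ... y1 x1 y0 x0. */
    static uint32_t morton2d(uint16_t x, uint16_t y) {
        return spread_bits(x) | (spread_bits(y) << 1);
    }

    int main(void) {
        /* sort points by morton2d(x, y) to get the z-order layout */
        printf("%u %u %u\n", morton2d(0, 0), morton2d(3, 5), morton2d(4, 4));
        return 0;
    }
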
References
Book
01 Sep 1991
TL;DR: This book covers parallel algorithms and architectures for arrays, trees, meshes of trees, and hypercubic networks, beginning with sorting on a linear array and the systolic and semisystolic models of computation.
Abstract: Contents: Preface. Acknowledgments. Notation.

1 Arrays and Trees
  1.1 Elementary Sorting and Counting: 1.1.1 Sorting on a Linear Array (Assessing the Performance of the Algorithm; Sorting N Numbers with Fewer Than N Processors); 1.1.2 Sorting in the Bit Model; 1.1.3 Lower Bounds; 1.1.4 A Counterexample: Counting; 1.1.5 Properties of the Fixed-Connection Network Model
  1.2 Integer Arithmetic: 1.2.1 Carry-Lookahead Addition; 1.2.2 Prefix Computations (Segmented Prefix Computations); 1.2.3 Carry-Save Addition; 1.2.4 Multiplication and Convolution; 1.2.5 Division and Newton Iteration
  1.3 Matrix Algorithms: 1.3.1 Elementary Matrix Products; 1.3.2 Algorithms for Triangular Matrices; 1.3.3 Algorithms for Tridiagonal Matrices (Odd-Even Reduction; Parallel Prefix Algorithms); 1.3.4 Gaussian Elimination; 1.3.5 Iterative Methods (Jacobi Relaxation; Gauss-Seidel Relaxation; Finite Difference Methods; Multigrid Methods)
  1.4 Retiming and Systolic Conversion: 1.4.1 A Motivating Example: Palindrome Recognition; 1.4.2 The Systolic and Semisystolic Model of Computation; 1.4.3 Retiming Semisystolic Networks; 1.4.4 Conversion of a Semisystolic Network into a Systolic Network; 1.4.5 The Special Case of Broadcasting; 1.4.6 Retiming the Host; 1.4.7 Design by Systolic Conversion: A Summary
  1.5 Graph Algorithms: 1.5.1 Transitive Closure; 1.5.2 Connected Components; 1.5.3 Shortest Paths; 1.5.4 Breadth-First Spanning Trees; 1.5.5 Minimum Weight Spanning Trees
  1.6 Sorting Revisited: 1.6.1 Odd-Even Transposition Sort on a Linear Array; 1.6.2 A Simple √N(log N + 1)-Step Sorting Algorithm; 1.6.3 A (3√N + o(√N))-Step Sorting Algorithm; 1.6.4 A Matching Lower Bound
  1.7 Packet Routing: 1.7.1 Greedy Algorithms; 1.7.2 Average-Case Analysis of Greedy Algorithms (Routing N Packets to Random Destinations; Analysis of Dynamic Routing Problems); 1.7.3 Randomized Routing Algorithms; 1.7.4 Deterministic Algorithms with Small Queues; 1.7.5 An Off-line Algorithm; 1.7.6 Other Routing Models and Algorithms
  1.8 Image Analysis and Computational Geometry: 1.8.1 Component-Labelling Algorithms (Levialdi's Algorithm; An O(√N)-Step Recursive Algorithm); 1.8.2 Computing Hough Transforms; 1.8.3 Nearest-Neighbor Algorithms; 1.8.4 Finding Convex Hulls
  1.9 Higher-Dimensional Arrays: 1.9.1 Definitions and Properties; 1.9.2 Matrix Multiplication; 1.9.3 Sorting; 1.9.4 Packet Routing; 1.9.5 Simulating High-Dimensional Arrays on Low-Dimensional Arrays
  1.10 Problems
  1.11 Bibliographic Notes

2 Meshes of Trees
  2.1 The Two-Dimensional Mesh of Trees: 2.1.1 Definition and Properties; 2.1.2 Recursive Decomposition; 2.1.3 Derivation from K_{N,N}; 2.1.4 Variations; 2.1.5 Comparison with the Pyramid and Multigrid
  2.2 Elementary O(log N)-Step Algorithms: 2.2.1 Routing; 2.2.2 Sorting; 2.2.3 Matrix-Vector Multiplication; 2.2.4 Jacobi Relaxation; 2.2.5 Pivoting; 2.2.6 Convolution; 2.2.7 Convex Hull
  2.3 Integer Arithmetic: 2.3.1 Multiplication; 2.3.2 Division and Chinese Remaindering; 2.3.3 Related Problems (Iterated Products; Root Finding)
  2.4 Matrix Algorithms: 2.4.1 The Three-Dimensional Mesh of Trees; 2.4.2 Matrix Multiplication; 2.4.3 Inverting Lower Triangular Matrices; 2.4.4 Inverting Arbitrary Matrices (Csanky's Algorithm; Inversion by Newton Iteration); 2.4.5 Related Problems
  2.5 Graph Algorithms: 2.5.1 Minimum-Weight Spanning Trees; 2.5.2 Connected Components; 2.5.3 Transitive Closure; 2.5.4 Shortest Paths; 2.5.5 Matching Problems
  2.6 Fast Evaluation of Straight-Line Code: 2.6.1 Addition and Multiplication over a Semiring; 2.6.2 Extension to Codes with Subtraction and Division; 2.6.3 Applications
  2.7 Higher-Dimensional Meshes of Trees: 2.7.1 Definitions and Properties; 2.7.2 The Shuffle-Tree Graph
  2.8 Problems
  2.9 Bibliographic Notes

3 Hypercubes and Related Networks
  3.1 The Hypercube: 3.1.1 Definitions and Properties; 3.1.2 Containment of Arrays (Higher-Dimensional Arrays; Non-Power-of-2 Arrays); 3.1.3 Containment of Complete Binary Trees; 3.1.4 Embeddings of Arbitrary Binary Trees (Embeddings with Dilation 1 and Load O(M/N + log N); Embeddings with Dilation O(1) and Load O(M/N + 1); A Review of One-Error-Correcting Codes; Embedding P_{log N} into H_{log N}); 3.1.5 Containment of Meshes of Trees; 3.1.6 Other Containment Results
  3.2 The Butterfly, Cube-Connected-Cycles, and Benes Network: 3.2.1 Definitions and Properties; 3.2.2 Simulation of Arbitrary Networks; 3.2.3 Simulation of Normal Hypercube Algorithms; 3.2.4 Some Containment and Simulation Results
  3.3 The Shuffle-Exchange and de Bruijn Graphs: 3.3.1 Definitions and Properties; 3.3.2 The Diaconis Card Tricks; 3.3.3 Simulation of Normal Hypercube Algorithms; 3.3.4 Similarities with the Butterfly; 3.3.5 Some Containment and Simulation Results
  3.4 Packet-Routing Algorithms: 3.4.1 Definitions and Routing Models; 3.4.2 Greedy Routing Algorithms and Worst-Case Problems; 3.4.3 Packing, Spreading, and Monotone Routing Problems (Reducing a Many-to-Many Routing Problem to a Many-to-One Routing Problem; Reducing a Routing Problem to a Sorting Problem); 3.4.4 The Average-Case Behavior of the Greedy Algorithm (Bounds on Congestion; Bounds on Running Time; Analyzing Non-Predictive Contention-Resolution Protocols); 3.4.5 Converting Worst-Case Routing Problems into Average-Case Routing Problems (Hashing; Randomized Routing); 3.4.6 Bounding Queue Sizes (Routing on Arbitrary Levelled Networks); 3.4.7 Routing with Combining; 3.4.8 The Information Dispersal Approach to Routing (Using Information Dispersal to Attain Fault-Tolerance; Finite Fields and Coding Theory); 3.4.9 Circuit-Switching Algorithms
  3.5 Sorting: 3.5.1 Odd-Even Merge Sort (Constructing a Sorting Circuit with Depth log N(log N + 1)/2); 3.5.2 Sorting Small Sets; 3.5.3 A Deterministic O(log N log log N)-Step Sorting Algorithm; 3.5.4 Randomized O(log N)-Step Sorting Algorithms (A Circuit with Depth 7.45 log N that Usually Sorts)
  3.6 Simulating a Parallel Random Access Machine: 3.6.1 PRAM Models and Shared Memories; 3.6.2 Randomized Simulations Based on Hashing; 3.6.3 Deterministic Simulations Using Replicated Data; 3.6.4 Using Information Dispersal to Improve Performance
  3.7 The Fast Fourier Transform: 3.7.1 The Algorithm; 3.7.2 Implementation on the Butterfly and Shuffle-Exchange Graph; 3.7.3 Application to Convolution and Polynomial Arithmetic; 3.7.4 Application to Integer Multiplication
  3.8 Other Hypercubic Networks: 3.8.1 Butterflylike Networks (The Omega Network; The Flip Network; The Baseline and Reverse Baseline Networks; Banyan and Delta Networks; k-ary Butterflies); 3.8.2 De Bruijn-Type Networks (The k-ary de Bruijn Graph; The Generalized Shuffle-Exchange Graph)
  3.9 Problems
  3.10 Bibliographic Notes

Bibliography. Index of Lemmas, Theorems, and Corollaries. Author Index. Subject Index.
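
As one concrete example from the contents above (Section 1.6.1), odd-even transposition sort lets a linear array of N processors sort N keys in N synchronous compare-exchange phases. A minimal sketch, with each phase approximated by an OpenMP loop:

    #include <stdio.h>

    #define N 8

    void odd_even_sort(int a[N]) {
        for (int phase = 0; phase < N; phase++) {
            /* phase p: cells (i, i+1) with i = p mod 2 swap in parallel */
            #pragma omp parallel for
            for (int i = phase % 2; i + 1 < N; i += 2)
                if (a[i] > a[i + 1]) {
                    int t = a[i]; a[i] = a[i + 1]; a[i + 1] = t;
                }
        }
    }

    int main(void) {
        int a[N] = {7, 3, 5, 1, 8, 2, 6, 4};
        odd_even_sort(a);
        for (int i = 0; i < N; i++) printf("%d ", a[i]); /* 1 2 ... 8 */
        printf("\n");
        return 0;
    }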

2,895 citations


"An introduction to parallel algorit..." refers background in this paper

  • ...Multiprocessor-based computers have been around for decades and various types of computer architectures [2] have been implemented in hardware throughout the years, with different types of advantages/performance gains depending on the application....

  • ...Every location in the array represents a node of the tree: T[1] is the root, with children at T[2] and T[3]....

  • ...The text by [2] is a good start as it contains a comprehensive description of algorithms and different architecture topologies for the network model (tree, hypercube, mesh, and butterfly)....
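
The array-as-tree layout in the second excerpt is worth spelling out (a generic illustration, not the citing paper's program): with 1-based indexing, node i has children 2i and 2i + 1 and parent i/2, so tree edges are index arithmetic rather than pointers.

    #include <stdio.h>

    #define NODES 7                      /* a complete binary tree, 1-indexed */

    static int left(int i)   { return 2 * i; }
    static int right(int i)  { return 2 * i + 1; }
    static int parent(int i) { return i / 2; }  /* integer division floors */

    int main(void) {
        int T[NODES + 1] = {0, 10, 20, 30, 40, 50, 60, 70}; /* T[0] unused */
        printf("root=%d left=%d right=%d\n", T[1], T[left(1)], T[right(1)]);
        printf("parent of T[5]=%d is T[%d]=%d\n", T[5], parent(5), T[parent(5)]);
        return 0;
    }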

Book
01 Jan 1984
TL;DR: The authors have divided the use of computers into the following four levels of sophistication: data processing, information processing, knowledge processing, and intelligence processing.
Abstract: The book is intended as a text to support two semesters of courses in computer architecture at the college senior and graduate levels. There are excellent problems for students at the end of each chapter. The authors have divided the use of computers into the following four levels of sophistication: data processing, information processing, knowledge processing, and intelligence processing.

1,410 citations


"An introduction to parallel algorit..." refers background in this paper

  • ...Parallel architectures have been described in several books (see, for example, [18, 29])....

Journal ArticleDOI
TL;DR: The success of data parallel algorithms—even on problems that at first glance seem inherently serial—suggests that this style of programming has much wider applicability than was previously thought.
Abstract: Parallel computers with tens of thousands of processors are typically programmed in a data parallel style, as opposed to the control parallel style used in multiprocessing. The success of data parallel algorithms—even on problems that at first glance seem inherently serial—suggests that this style of programming has much wider applicability than was previously thought.
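
The flagship example of this data-parallel style is the prefix sum (scan), in which every element is active in each of ⌈log₂ n⌉ steps. A minimal sketch (our illustration, double-buffered so each step's reads and writes stay separate):

    #include <stdio.h>
    #include <string.h>

    #define N 8

    /* Inclusive prefix sum in ceil(log2 N) data-parallel steps. */
    void scan(long a[N]) {
        long b[N];
        for (int stride = 1; stride < N; stride *= 2) {
            #pragma omp parallel for      /* one data-parallel step */
            for (int i = 0; i < N; i++)
                b[i] = (i >= stride) ? a[i] + a[i - stride] : a[i];
            memcpy(a, b, sizeof b);
        }
    }

    int main(void) {
        long a[N] = {1, 1, 1, 1, 1, 1, 1, 1};
        scan(a);
        for (int i = 0; i < N; i++) printf("%ld ", a[i]); /* 1 2 3 ... 8 */
        printf("\n");
        return 0;
    }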

1,000 citations


"An introduction to parallel algorit..." refers background in this paper

  • ...Recent work on the mapping of PRAM algorithms on bounded-degree networks is described in [3, 13, 14, 20, 25]. Our presentation on the communication complexity of the matrix-multiplication problem in the shared-memory model is taken from [1]. Data-parallel algorithms are described in [15]....

Proceedings ArticleDOI
01 May 1978
TL;DR: A model of computation based on random access machines operating in parallel and sharing a common memory is presented and can accept in polynomial time exactly the sets accepted by nondeterministic exponential time bounded Turing machines.
Abstract: A model of computation based on random access machines operating in parallel and sharing a common memory is presented. The computational power of this model is related to that of traditional models. In particular, deterministic parallel RAM's can accept in polynomial time exactly the sets accepted by polynomial tape bounded Turing machines; nondeterministic RAM's can accept in polynomial time exactly the sets accepted by nondeterministic exponential time bounded Turing machines. Similar results hold for other classes. The effect of limiting the size of the common memory is also considered.

951 citations


"An introduction to parallel algorit..." refers background in this paper

  • ...Rigorous descriptions of shared-memory models were introduced later in [11,12]....

Journal ArticleDOI
TL;DR: It is shown that arithmetic expressions with n ≥ 1 variables and constants; operations of addition, multiplication, and division; and any depth of parenthesis nesting can be evaluated in time 4 log₂ n + 10(n - 1)/p using p ≥ 1 processors which can independently perform arithmetic operations in unit time.
Abstract: It is shown that arithmetic expressions with n ≥ 1 variables and constants; operations of addition, multiplication, and division; and any depth of parenthesis nesting can be evaluated in time 4 log₂ n + 10(n - 1)/p using p ≥ 1 processors which can independently perform arithmetic operations in unit time. This bound is within a constant factor of the best possible. A sharper result is given for expressions without the division operation, and the question of numerical stability is discussed.

864 citations


"An introduction to parallel algorit..." refers methods in this paper

  • ...The WT scheduling principle is derived from a theorem in [7]. In the literature, this principle is commonly referred to as Brent's theorem or Brent's scheduling principle....
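
For reference, the bound usually quoted as Brent's theorem (a standard statement, not a quotation from [7]): if an algorithm performs $W(n)$ operations in $T(n)$ parallel steps, with $W_i(n)$ operations at step $i$, then $p$ processors can execute it in

    $$ T_p(n) \;\le\; \sum_{i=1}^{T(n)} \left\lceil \frac{W_i(n)}{p} \right\rceil \;\le\; \frac{W(n)}{p} + T(n) $$

steps, since the $W_i(n)$ operations of step $i$ take $\lceil W_i(n)/p \rceil \le W_i(n)/p + 1$ rounds on $p$ processors.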
