
Showing papers by "Robert E. Tarjan published in 1996"


Journal ArticleDOI
TL;DR: It is proved that 1/4 ≤ ε ≤ 1/3.
Abstract: Motivated by an application to unstructured multigrid calculations, we consider the problem of asymptotically minimizing the size of dominating sets in triangulated planar graphs. Specifically, we wish to find the smallest ε such that, for n sufficiently large, every n-vertex planar graph contains a dominating set of size at most εn. We prove that 1/4 ≤ ε ≤ 1/3.
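A dominating set (every vertex is either in the set or adjacent to it) can be illustrated with a short sketch. This is not from the paper: the graph (an octahedron, a small planar triangulation) and the greedy heuristic are illustrative assumptions; the paper's n/3 bound comes from a structural argument, not from greedy selection.

```python
# Sketch: greedy dominating-set heuristic on a small triangulated planar
# graph (the octahedron). Illustrative only -- the paper's n/3 bound is
# proved structurally, not via this greedy procedure.

def greedy_dominating_set(adj):
    """Repeatedly pick the vertex covering the most undominated vertices."""
    undominated = set(adj)
    dom = set()
    while undominated:
        # coverage of v = v itself plus its neighbors, restricted to undominated
        v = max(adj, key=lambda u: len(({u} | adj[u]) & undominated))
        dom.add(v)
        undominated -= {v} | adj[v]
    return dom

def is_dominating(adj, dom):
    return all(v in dom or adj[v] & dom for v in adj)

# Octahedron: a 6-vertex planar triangulation; each vertex is adjacent to
# every other vertex except its "antipode" (paired via v ^ 1).
octahedron = {v: {u for u in range(6) if u != v and u != v ^ 1}
              for v in range(6)}

dom = greedy_dominating_set(octahedron)
assert is_dominating(octahedron, dom)
print(len(dom))  # a dominating set of size 2 = n/3 here
```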

78 citations


Proceedings ArticleDOI
24 Jun 1996
TL;DR: A randomized CRCW PRAM algorithm that finds a minimum spanning forest of an n-vertex graph in O(log n) time and linear work is described, which shaves a factor of 2^(log* n) off the best previous running time for a linear-work algorithm.
Abstract: Finding minimum spanning forests in logarithmic time and linear work using random sampling. Richard Cole (New York University), Philip N. Klein (Brown University), Robert E. Tarjan (Princeton University and NEC Research Institute). We describe a randomized CRCW PRAM algorithm that finds a minimum spanning forest of an n-vertex graph in O(log n) time and linear work. This shaves a factor of 2^(log* n) off the best previous running time for a linear-work algorithm. The novelty in our approach is to divide the computation into two phases, the first of which finds only a partial solution. This idea has been used previously in parallel connected components algorithms.
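One classical ingredient behind parallel minimum-spanning-forest algorithms of this family is the Borůvka pass: every component simultaneously selects its cheapest outgoing edge, and the selected edges are contracted. The sketch below is a plain sequential Borůvka's algorithm, included only to illustrate the "partial solution per pass" idea; the paper's contribution is combining such passes with random sampling and filtering on a CRCW PRAM, which this sketch does not attempt.

```python
# Sequential Boruvka's algorithm: each pass adds the cheapest edge leaving
# every component, then contracts. Illustrative of the repeated
# partial-solution structure only; the paper's algorithm is parallel and
# adds random sampling.

def minimum_spanning_forest(n, edges):
    """edges are (weight, u, v) tuples with distinct weights."""
    parent = list(range(n))

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    forest = []
    while True:
        cheapest = {}                 # component root -> best outgoing edge
        for e in edges:
            w, u, v = e
            ru, rv = find(u), find(v)
            if ru != rv:
                for r in (ru, rv):
                    if r not in cheapest or cheapest[r][0] > w:
                        cheapest[r] = e
        if not cheapest:              # no inter-component edges remain
            break
        for w, u, v in set(cheapest.values()):
            ru, rv = find(u), find(v)
            if ru != rv:              # contract: merge the two components
                parent[ru] = rv
                forest.append((w, u, v))
    return forest

edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3), (5, 3, 4)]
mst = minimum_spanning_forest(5, edges)
print(sorted(mst))  # [(1, 0, 1), (2, 1, 2), (4, 2, 3), (5, 3, 4)]
```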

63 citations


Proceedings ArticleDOI
01 Jul 1996
TL;DR: The representations presented are the first to address the issues of persistence and pure functionality for catenable sorted lists, the first for which fast implementations of catenation and split are given, and could be efficient in applications that require worst-case time bounds or persistence.
Abstract: Purely Functional Representations of Catenable Sorted Lists. Haim Kaplan [1], Robert E. Tarjan [2]. The power of purely functional programming in the construction of data structures has received much attention, not only because functional languages have many desirable properties, but because structures built purely functionally are automatically fully persistent: any and all versions of a structure can coexist indefinitely. Recent results illustrate the surprising power of pure functionality. One such result was the development of a representation of double-ended queues with catenation that supports all operations, including catenation, in worst-case constant time [19]. This paper is a continuation of our study of pure functionality, especially as it relates to persistence. For our purposes, a purely functional data structure is one built only with the LISP functions car, cons, and cdr. We explore purely functional representations of sorted lists, implemented as finger search trees. We describe three implementations. The most efficient of these achieves logarithmic access, insertion, and deletion time, and double-logarithmic catenation time. It uses one level of structural bootstrapping to obtain its efficiency. The bounds for access, insert, and delete are the same as the best known bounds for an ephemeral implementation of these operations using finger search trees. The representations we present are the first that address the issues of persistence and pure functionality, and the first for which fast implementations of catenation and split are presented. They are simple to implement and could be efficient in practice, especially for applications that require worst-case time bounds or persistence.
[1] Department of Computer Science, Princeton University, Princeton, NJ 08544 USA. Research supported by the NSF, Grant No. CCR-8920505, the Office of Naval Research, Contract No. N00014-91-J-1463, and a United States-Israel Educational Foundation (USIEF) Fulbright Grant. hkl@cs.princeton.edu.
[2] Department of Computer Science, Princeton University, Princeton, NJ 08544 USA, and NEC Research Institute, Princeton, NJ. Research at Princeton University partially supported by the NSF, Grant No. CCR-8920505, and the Office of Naval Research, Contract No. N00014-91-J-1463. Some of the writing of this paper was done during a visit to M.I.T., partially supported by ARPA Contract No. N00014-95-1-1246. ret@cs.princeton.edu.
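The car/cons/cdr discipline and the persistence it buys can be illustrated with a deliberately simple structure: a sorted list built from immutable pairs, where insertion copies the prefix and shares the suffix. This is an illustrative sketch only; it gives linear-time operations, in contrast to the paper's finger-search-tree representation with logarithmic access and double-logarithmic catenation.

```python
# Sketch of pure functionality: a sorted list built only from immutable
# pairs (the LISP cons/car/cdr discipline). Insertion copies the prefix
# and shares the suffix, so every old version remains intact. Linear-time
# and illustrative only -- not the paper's finger-search-tree structure.

def cons(head, tail): return (head, tail)
def car(cell): return cell[0]
def cdr(cell): return cell[1]

def insert(cell, x):
    """Return a new sorted list with x inserted; the input is unchanged."""
    if cell is None or x <= car(cell):
        return cons(x, cell)
    return cons(car(cell), insert(cdr(cell), x))

def to_python(cell):
    out = []
    while cell is not None:
        out.append(car(cell))
        cell = cdr(cell)
    return out

v1 = None
for x in (2, 5, 9):
    v1 = insert(v1, x)
v2 = insert(v1, 4)               # a new version; v1 is still usable
print(to_python(v1))             # [2, 5, 9]
print(to_python(v2))             # [2, 4, 5, 9]
assert cdr(cdr(v2)) is cdr(v1)   # the suffix (5, 9) is physically shared
```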

46 citations


Journal ArticleDOI
TL;DR: An elliptic model problem on simple domains, discretized with finite difference techniques on block-structured meshes in two or three dimensions with up to 10^6 or 10^9 points, respectively, is studied, and performance is analyzed using three models of parallel computation: the PRAM and two bridging models.
Abstract: Multigrid methods are powerful techniques for accelerating the solution of computationally intensive problems arising in a broad range of applications. Used in conjunction with iterative processes for solving partial differential equations, multigrid methods speed up iterative methods by moving the computation from the original mesh covering the problem domain through a series of coarser meshes. But this hierarchical structure leaves domain-parallel versions of the standard multigrid algorithms with a deficiency of parallelism on coarser grids. To compensate, several parallel multigrid strategies with more parallelism, but also more work, have been designed. We examine these parallel strategies and compare them to simpler standard algorithms to determine which techniques are more efficient and practical. We consider three parallel multigrid strategies: (1) domain-parallel versions of the standard V-cycle and F-cycle algorithms; (2) a multiple coarse grid algorithm, proposed by Frederickson and McBryan, which generates several coarse grids for each fine grid; and (3) two algorithms due to Van Rosendale, which allow computation on all grids simultaneously. We study an elliptic model problem on simple domains, discretized with finite difference techniques on block-structured meshes in two or three dimensions with up to 10^6 or 10^9 points, respectively. We analyze performance using three models of parallel computation: the PRAM and two bridging models. The bridging models reflect the salient characteristics of two kinds of parallel computers: SIMD fine-grain computers, which contain a large number of small (bit-serial) processors, and SPMD medium-grain computers, which have a more modest number of powerful (single-chip) processors. Our analysis suggests that the standard algorithms are substantially more efficient than algorithms utilizing either parallel strategy. Both parallel strategies need too much extra work to compensate for their extra parallelism. They require a highly impractical number of processors to be competitive with simpler, standard algorithms. The analysis also suggests that the F-cycle, with appropriate optimization techniques, is more efficient than the V-cycle under a broad range of problem, implementation, and machine characteristics, despite the fact that it exhibits even less parallelism than the V-cycle.
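The standard V-cycle that serves as the paper's baseline can be illustrated with a small sequential sketch. The 1-D Poisson problem, weighted-Jacobi smoother, and all parameter choices below are illustrative assumptions, not details from the paper: smooth on the fine grid, restrict the residual, recursively correct on the coarser grid, interpolate the correction back, and smooth again.

```python
# Sketch of a recursive V-cycle for the 1-D Poisson problem -u'' = f with
# zero boundary values: weighted-Jacobi smoothing, full-weighting
# restriction, and linear interpolation. Illustrative of the standard
# domain-parallel V-cycle structure only.

def smooth(u, f, h, sweeps=3, w=2/3):
    n = len(u)
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n - 1):
            new[i] = (1 - w) * u[i] + w * 0.5 * (u[i-1] + u[i+1] + h*h*f[i])
        u = new
    return u

def residual(u, f, h):
    n = len(u)
    r = [0.0] * n
    for i in range(1, n - 1):
        r[i] = f[i] - (2*u[i] - u[i-1] - u[i+1]) / (h*h)
    return r

def restrict(r):                       # fine -> coarse (full weighting)
    return [0.0] + [0.25*r[2*i-1] + 0.5*r[2*i] + 0.25*r[2*i+1]
                    for i in range(1, (len(r) - 1)//2)] + [0.0]

def interpolate(e, n_fine):            # coarse -> fine (linear)
    u = [0.0] * n_fine
    for i in range(1, len(e) - 1):
        u[2*i] = e[i]
    for i in range(1, n_fine - 1, 2):
        u[i] = 0.5 * (u[i-1] + u[i+1])
    return u

def v_cycle(u, f, h):
    n = len(u)
    u = smooth(u, f, h)                # pre-smoothing
    if n <= 3:                         # coarsest grid: smoothing suffices here
        return u
    rc = restrict(residual(u, f, h))   # coarse-grid correction
    ec = v_cycle([0.0] * len(rc), rc, 2*h)
    u = [ui + ei for ui, ei in zip(u, interpolate(ec, n))]
    return smooth(u, f, h)             # post-smoothing

n = 65                                 # 64 intervals, so h = 1/64
h = 1.0 / (n - 1)
f = [1.0] * n                          # -u'' = 1  =>  u(x) = x(1 - x)/2
u = [0.0] * n
for _ in range(12):
    u = v_cycle(u, f, h)
exact = [x * (1 - x) / 2 for x in (i * h for i in range(n))]
err = max(abs(a - b) for a, b in zip(u, exact))
print(err < 1e-4)
```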

9 citations


Journal ArticleDOI
TL;DR: A set of models of parallel computation which reflect the computing characteristics of the current generation of massively parallel multicomputers is developed, based on an interconnection network of 256 to 16,384 message-passing, "workstation-size" processors executing in SPMD mode.

6 citations


Book ChapterDOI
19 Aug 1996
TL;DR: The large computational requirements of realistic PDEs, accurately discretized on unstructured meshes, make such computations candidates for parallel or distributed processing, adding problem partitioning as a preprocessing task.
Abstract: The multigrid method is a general and powerful means of accelerating the convergence of discrete iterative methods for solving partial differential equations (PDEs) and similar problems. The adaptation of the multigrid method to unstructured meshes is important in the solution of problems with complex geometries. Unfortunately, multigrid schemes on unstructured meshes require significantly more preprocessing than on structured meshes. In fact, preprocessing can be a major part of the solution task, and for many applications, must be done repeatedly. In addition, the large computational requirements of realistic PDEs, accurately discretized on unstructured meshes, make such computations candidates for parallel or distributed processing, adding problem partitioning as a preprocessing task.
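The partitioning preprocessing step mentioned above can be illustrated with a toy sketch: growing one half of a bisection by breadth-first search over a mesh's vertex-connectivity graph. The grid mesh and BFS heuristic are illustrative assumptions; production solvers use more sophisticated partitioners (e.g. multilevel or spectral schemes).

```python
# Sketch of mesh partitioning as preprocessing: BFS-based bisection of an
# (here, structured stand-in) mesh connectivity graph. Illustrative only.

from collections import deque

def bfs_bisect(adj, source):
    """Grow a region from `source` until it holds half the vertices."""
    half = len(adj) // 2
    part_a, queue = set(), deque([source])
    while queue and len(part_a) < half:
        v = queue.popleft()
        if v in part_a:
            continue
        part_a.add(v)
        queue.extend(u for u in adj[v] if u not in part_a)
    return part_a, set(adj) - part_a

def cut_size(adj, part_a):
    """Number of edges crossing the partition (communication volume)."""
    return sum(1 for v in part_a for u in adj[v] if u not in part_a)

def grid(n):
    """A 4-connected n x n grid graph standing in for an unstructured mesh."""
    adj = {(i, j): set() for i in range(n) for j in range(n)}
    for (i, j) in adj:
        for di, dj in ((1, 0), (0, 1)):
            if (i + di, j + dj) in adj:
                adj[(i, j)].add((i + di, j + dj))
                adj[(i + di, j + dj)].add((i, j))
    return adj

mesh = grid(4)
a, b = bfs_bisect(mesh, (0, 0))
print(len(a), len(b), cut_size(mesh, a))  # a balanced 8/8 split and its cut
```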

2 citations