Showing papers by "Robert E. Tarjan published in 1995"


Journal ArticleDOI
TL;DR: A randomized linear-time algorithm for finding a minimum spanning tree in a connected graph with edge weights is presented; the computational model is a unit-cost random-access machine in which the only operations allowed on edge weights are binary comparisons.
Abstract: We present a randomized linear-time algorithm to find a minimum spanning tree in a connected graph with edge weights. The algorithm uses random sampling in combination with a recently discovered linear-time algorithm for verifying a minimum spanning tree. Our computational model is a unit-cost random-access machine with the restriction that the only operations allowed on edge weights are binary comparisons.
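
The heart of the algorithm is a random-sampling filter. The sketch below is a simplified illustration, not the paper's linear-time implementation, and all helper names are ours: sample each edge with probability 1/2, build a minimum spanning forest F of the sample, and discard every F-heavy edge, i.e., every edge heavier than the maximum-weight edge on the F-path between its endpoints; such edges cannot belong to the minimum spanning tree. The paper performs the F-heavy test with the linear-time verification algorithm and interleaves sampling with contraction phases, whereas this sketch uses a brute-force path search.

```python
import random
from collections import defaultdict

def kruskal_forest(n, edges):
    """Minimum spanning forest via Kruskal's algorithm; edges are (w, u, v)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    forest = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            forest.append((w, u, v))
    return forest

def path_max(adj, u, v):
    """Max edge weight on the forest path u -> v, or None if disconnected."""
    stack = [(u, -1, float("-inf"))]
    while stack:
        x, px, best = stack.pop()
        if x == v:
            return best
        for y, w in adj[x]:
            if y != px:
                stack.append((y, x, max(best, w)))
    return None

def sample_and_filter(n, edges):
    """One sampling round: keep only the F-light edges."""
    sample = [e for e in edges if random.random() < 0.5]
    f = kruskal_forest(n, sample)
    adj = defaultdict(list)
    for w, u, v in f:
        adj[u].append((v, w))
        adj[v].append((u, w))
    # F-heavy edges (heavier than the F-path maximum between their
    # endpoints) cannot be in the minimum spanning tree; drop them.
    return [(w, u, v) for (w, u, v) in edges
            if (m := path_max(adj, u, v)) is None or w <= m]
```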

450 citations


Proceedings ArticleDOI
04 Jan 1995
TL;DR: This paper surveys a broad range of models of parallel computation and the different roles they serve in algorithm, language, and machine design, with the aim of understanding which model characteristics matter to each design community and thereby elucidating the requirements of a unifying paradigm.
Abstract: In the realm of sequential computing, the random access machine has successfully provided an underlying model of computation that has promoted consistency and coordination among algorithm developers, computer architects and language experts. In the realm of parallel computing, however, there has been no similar success. The need for such a unifying parallel model or set of models is heightened by the greater demand for performance and the greater diversity among machines. Yet the modeling of parallel computing still seems to be mired in controversy and chaos. This paper presents a broad range of models of parallel computation and the different roles they serve in algorithm, language and machine design. The objective is to better understand which model characteristics are important to each design community, in order to elucidate the requirements of a unifying paradigm. As an impetus for discussion, we conclude by suggesting a model of parallel computation which is consistent with a model design philosophy that balances simplicity and descriptivity with prescriptivity. We present only the survey of abstract computational models. This introduction should provide insights into the rich array of relevant issues in other disciplines.

113 citations


Proceedings ArticleDOI
29 May 1995
TL;DR: Within $O(\Delta/\alpha)$ steps, the algorithm reduces the maximum difference in tokens between any two nodes to at most $O((d^2 \log n)/\alpha)$, where $\Delta$ is the global imbalance in tokens, $n$ is the number of nodes, and $\alpha$ is the edge expansion of the network.
Abstract: This paper presents an analysis of the following load balancing algorithm. At each step, each node in a network examines the number of tokens at each of its neighbors and sends a token to each neighbor with at least $2d+1$ fewer tokens, where $d$ is the maximum degree of any node in the network. We show that within $O(\Delta/\alpha)$ steps, the algorithm reduces the maximum difference in tokens between any two nodes to at most $O((d^2 \log n)/\alpha)$, where $\Delta$ is the global imbalance in tokens (i.e., the maximum difference between the number of tokens at any node initially and the average number of tokens), $n$ is the number of nodes in the network, and $\alpha$ is the edge expansion of the network. The time bound is tight in the sense that for any graph with edge expansion $\alpha$, and for any value $\Delta$, there exists an initial distribution of tokens with imbalance $\Delta$ for which the time to reduce the imbalance to even $\Delta/2$ is at least $\Omega(\Delta/\alpha)$. The bound on the final imbalance is tight in the sense that there exists a class of networks that can be locally balanced everywhere (i.e., the maximum difference in tokens between any two neighbors is at most $2d$), while the global imbalance remains $\Omega((d^2 \log n)/\alpha)$. Furthermore, we show that upon reaching a state with a global imbalance of $O((d^2 \log n)/\alpha)$, the time for this algorithm to locally balance the network can be as large as $\Omega(n^{1/2})$. We extend our analysis to a variant of this algorithm for dynamic and asynchronous networks. We also present tight bounds for a randomized algorithm in which each node sends at most one token in each step.
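
The rule is simple enough to state directly in code. Below is a minimal synchronous simulation of it; the example graph, initial token counts, and step budget are illustrative choices of ours, not from the paper.

```python
def balance_step(adj, tokens, d):
    """One synchronous step: each node sends one token to every neighbor
    holding at least 2d + 1 fewer tokens, judged by start-of-step counts."""
    delta = {v: 0 for v in adj}
    for v in adj:
        for u in adj[v]:
            if tokens[v] - tokens[u] >= 2 * d + 1:
                delta[v] -= 1
                delta[u] += 1
    return {v: tokens[v] + delta[v] for v in adj}

def balance(adj, tokens, max_steps=10_000):
    """Iterate until no node wants to send, or the step budget runs out."""
    d = max(len(nbrs) for nbrs in adj.values())
    for step in range(max_steps):
        new = balance_step(adj, tokens, d)
        if new == tokens:   # locally balanced: neighbors differ by <= 2d
            return tokens, step
        tokens = new
    return tokens, max_steps

if __name__ == "__main__":
    # A 6-cycle (d = 2) with all tokens initially piled on one node.
    adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    tokens = {i: 0 for i in range(6)}
    tokens[0] = 60
    final, steps = balance(adj, tokens)
    print(f"after {steps} steps: {final}")
    # Termination means every pair of neighbors differs by at most 2d,
    # the "locally balanced" state whose global imbalance the paper bounds.
```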

80 citations


Proceedings ArticleDOI
29 May 1995
TL;DR: An efficient purely functional implementation of stacks with catenation is described, with a worst-case running time of O(1) for each push, pop, and catenation; the solution is not only faster than the best previously known but simpler, and it may well be practical.
Abstract: We describe an efficient purely functional implementation of stacks with catenation. In addition to being an intriguing problem in its own right, a functional implementation of catenable stacks is the tool required to add certain sophisticated programming constructs to functional programming languages. Our solution has a worst-case running time of O(1) for each push, pop, and catenation. The best previously known solution has an O(log* k) time bound for the kth stack operation. Our solution is not only faster but simpler, and indeed we hope it may be practical. The major new ingredient in our result is a general technique that we call recursive slow-down, an algorithmic design principle that can give constant worst-case time bounds for operations on data structures. We expect this technique to have additional applications; indeed, we have recently been able to extend the result described here to obtain a purely functional implementation of double-ended queues with catenation that takes constant time per operation.
A persistent data structure is one in which a change to the structure can be made without destroying the old version, so that all versions of the structure persist and can be accessed or (possibly) modified. In the functional programming literature, persistent structures are often called immutable. Purely functional programming, without side effects, has the property that every structure created is automatically persistent. Persistent data structures arise not only in functional programming but also in text, program, and file editing and maintenance; computational geometry; and other algorithmic application areas (see [5, 8, 9, 10, 11, 12, 13, 14, 21, 28, 29, 30, 31, 32, 33, 35]). Several papers have dealt with the problem of adding persistence to general data structures in a way that is more efficient than the obvious solution of copying the entire structure whenever a change is made. In particular, Driscoll, Sarnak, Sleator, and Tarjan [11] described how to make pointer-based structures persistent using a technique called node-splitting, which is related to fractional cascading [6] in a way that is not yet fully understood. Dietz [10] described a method for making array-based structures persistent. Additional references on persistence can be found in those papers. The general techniques in [10] and [11], however, fail to work on data structures that can be combined with each other rather than just changed locally. (For the purposes of this paper, a "purely functional" data structure is one built using only the LISP functions car, cons, and cdr; though the constructions are not stated explicitly in terms of these functions, it is routine to verify that they are purely functional.)
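
That definition suggests a concrete picture of persistence. Here is a minimal sketch in Python, with an immutable tuple standing in for a LISP cons cell, of a purely functional stack in which every version persists; it illustrates only the persistence property, not the paper's O(1) catenation via recursive slow-down.

```python
from typing import NamedTuple, Optional

class Cell(NamedTuple):
    """An immutable cons cell: car holds the element, cdr the rest."""
    car: object
    cdr: Optional["Cell"]

def push(stack, x):
    return Cell(x, stack)          # cons: O(1), shares the old version

def pop(stack):
    return stack.car, stack.cdr    # car/cdr: the old version is untouched

s1 = push(push(None, 1), 2)        # version [2, 1]
s2 = push(s1, 3)                   # version [3, 2, 1]
top, rest = pop(s1)                # neither s1 nor s2 is affected
assert top == 2 and rest.car == 1 and s2.car == 3 and s1.car == 2
```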

47 citations


Journal ArticleDOI
TL;DR: This paper provides an efficient implementation of catenable heap-ordered deques, yielding constant amortized time per operation, based on data-structural bootstrapping and a special case of path compression that is proved to take linear time.
Abstract: A deque with heap order is a linear list of elements with real-valued keys that allows insertions and deletions of elements at both ends of the list. It also allows the findmin (alternatively findmax) operation, which returns the element of least (greatest) key, but it does not allow a general deletemin (deletemax) operation. Such a data structure is also called a mindeque (maxdeque). Whereas implementing heap-ordered deques in constant time per operation is a solved problem, catenating heap-ordered deques in sublogarithmic time has until now remained open. This paper provides an efficient implementation of catenable heap-ordered deques, yielding constant amortized time per operation. The important algorithmic technique employed is an idea that we call data-structural bootstrapping: we abstract heap-ordered deques by representing them by their minimum elements, thereby reducing catenation to simple insertion. The efficiency of the resulting data structure depends upon the complexity of a special case of path compression that we prove takes linear time.
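
As background for what heap order buys, here is the standard constant-time trick for the one-ended case, a stack with findmin. This is a minimal sketch of the already-solved building block, not the paper's data structure: each entry stores the minimum of everything at or below it, so findmin never searches. The hard part the paper solves, and this sketch omits, is supporting both ends and catenation in constant amortized time.

```python
class MinStack:
    """A stack with O(1) push, pop, and findmin (but no deletemin)."""

    def __init__(self):
        self._items = []  # entries are (key, min of the stack up to here)

    def push(self, key):
        m = min(key, self._items[-1][1]) if self._items else key
        self._items.append((key, m))

    def pop(self):
        return self._items.pop()[0]

    def findmin(self):
        return self._items[-1][1]

s = MinStack()
for k in [5, 2, 7]:
    s.push(k)
assert s.findmin() == 2
s.pop(); s.pop()          # remove 7, then 2
assert s.findmin() == 5
```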

24 citations


Journal ArticleDOI
TL;DR: General algorithms for finding a minimal spanning subgraph with a given property are described, with worst-case running time $\Theta(m+n \log n)$ for 2-edge-connectivity and biconnectivity; refinements yield the first linear-time algorithms for these two properties.
Abstract: Let $P$ be a property of undirected graphs. We consider the following problem: given a graph $G$ that has property $P$, find a minimal spanning subgraph of $G$ with property $P$. We describe general algorithms for this problem and prove their correctness under fairly weak assumptions about $P$. We establish that the worst-case running time of these algorithms is $\Theta(m+n \log n)$ for 2-edge-connectivity and biconnectivity where $n$ and $m$ denote the number of vertices and edges, respectively, in the input graph. By refining the basic algorithms we obtain the first linear time algorithms for computing a minimal 2-edge-connected spanning subgraph and for computing a minimal biconnected spanning subgraph. We also devise general algorithms for computing a minimal spanning subgraph in directed graphs. These algorithms allow us to simplify an earlier algorithm of Gibbons, Karp, Ramachandran, Soroker, and Tarjan for computing a minimal strongly connected spanning subgraph. We also provide the first tight analysis of the latter algorithm, showing that its worst-case time complexity is $\Theta(m+n \log n).$
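
To make the setup concrete, the sketch below shows the obvious deletion scheme, a natural baseline of ours rather than any algorithm from the paper: scan the edges in some order and delete an edge whenever the remaining subgraph still has property $P$. For a property preserved by adding edges, the result is minimal, but each test of $P$ may be expensive; organizing this work efficiently is what the paper's algorithms address. The property test and example are illustrative.

```python
def minimal_spanning_subgraph(vertices, edges, has_property):
    """has_property(vertices, edges) tests P; P must hold for the input."""
    kept = list(edges)
    for e in list(edges):
        trial = [f for f in kept if f != e]
        if has_property(vertices, trial):
            kept = trial          # e is redundant for P; drop it
    return kept

# Example with P = "connected and spanning", so the result is a spanning tree.
def connected(vertices, edges):
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    start = next(iter(vertices))
    seen, stack = {start}, [start]
    while stack:
        x = stack.pop()
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return len(seen) == len(vertices)

V = {1, 2, 3, 4}
E = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
print(minimal_spanning_subgraph(V, E, connected))  # 3 edges: a spanning tree
```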

14 citations


Journal ArticleDOI
TL;DR: A number of strategies for implementing lazy structure sharing are investigated, with upper and lower bounds on their performance; only one strategy, which employs path compression, seems promising for the most general case of the problem.
Abstract: We study lazy structure sharing as a tool for optimizing equivalence testing on complex data types. We investigate a number of strategies for implementing lazy structure sharing and provide upper and lower bounds on their performance (how quickly they effect ideal configurations of our data structure). In most cases, when the strategies are applied to a restricted case of the problem, the bounds provide nontrivial improvements over the naive linear-time equivalence-testing strategy that employs no optimization. Only one strategy, however, which employs path compression, seems promising for the most general case of the problem.
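
To make the setting concrete, here is one illustrative sketch of ours, not any specific strategy from the paper: values are nodes of a DAG, equality is tested recursively, and every discovered equivalence is recorded in a union-find structure with path compression, so later tests over shared structure skip work done earlier.

```python
class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)
        self.parent = self            # union-find parent (self = class root)

def find(x):
    while x.parent is not x:
        x.parent = x.parent.parent    # path compression (halving)
        x = x.parent
    return x

def equivalent(x, y):
    x, y = find(x), find(y)
    if x is y:                        # equivalence already recorded: cheap
        return True
    if x.label != y.label or len(x.children) != len(y.children):
        return False
    x.parent = y                      # tentatively share the two structures
    for cx, cy in zip(x.children, y.children):
        if not equivalent(cx, cy):
            x.parent = x              # undo the tentative merge on failure
            return False
    return True                       # merge stands: future tests are fast

# The first test does the full comparison; the second is answered from
# the merged equivalence classes without revisiting the structure.
a = Node("f", [Node("a"), Node("b")])
b = Node("f", [Node("a"), Node("b")])
assert equivalent(a, b)
assert equivalent(a, b)
```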