Author

Robert E. Tarjan

Bio: Robert E. Tarjan is an academic researcher from Princeton University. The author has contributed to research topics including time complexity and spanning trees. The author has an h-index of 114 and has co-authored 400 publications receiving 67,305 citations. Previous affiliations of Robert E. Tarjan include AT&T and the Massachusetts Institute of Technology.


Papers
Patent
Yunhong Zhou, Anshul Kothari, Ram Swaminathan, Robert E. Tarjan, Alex Zhang
16 Jun 2005
TL;DR: In this article, fractional, non-integer numbers of servers are calculated for the respective tiers of a multi-tiered system using a server allocation algorithm, then rounded up to integers to obtain the allocated number of servers for each tier.
Abstract: Fractional, non-integer numbers of servers are calculated for respective tiers of a multi-tiered system using a server allocation algorithm. The fractional, non-integer numbers of servers are rounded up to integer numbers to compute allocated numbers of servers for respective tiers.
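
The rounding step the abstract describes reduces to a ceiling operation per tier. A minimal sketch in Python, assuming a hypothetical list of fractional per-tier counts produced by some allocation algorithm (the patent's actual algorithm is not reproduced here):

```python
import math

def allocate_servers(fractional_counts):
    """Round fractional per-tier server counts up to whole servers.

    fractional_counts: list of floats, one per tier, as produced by
    some server-allocation algorithm (hypothetical here).
    """
    return [math.ceil(x) for x in fractional_counts]

# Example: a three-tier system (web, application, database).
print(allocate_servers([2.3, 4.0, 1.7]))  # -> [3, 4, 2]
```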

17 citations

01 Mar 1976
TL;DR: A hierarchy of global flow problem classes, each solvable by an appropriate generalization of the "node listing" method of Kennedy, are defined, and it is shown that each of these generalized methods is optimum, among all iterative algorithms, for solving problems within its class.
Abstract: This paper studies iterative methods for the global flow analysis of computer programs. We define a hierarchy of global flow problem classes, each solvable by an appropriate generalization of the "node listing" method of Kennedy. We show that each of these generalized methods is optimum, among all iterative algorithms, for solving problems within its class. We give lower bounds on the time required by iterative algorithms for each of the problem classes.
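
To make the iterative style concrete, here is a minimal round-robin solver for reaching definitions, a classic global flow problem. This is an illustrative sketch of generic iterative analysis, not Kennedy's node-listing method itself, though the visiting order in the inner loop is exactly what a node listing optimizes:

```python
def reaching_definitions(cfg, gen, kill):
    """Iteratively solve reaching definitions to a fixed point.

    cfg:  dict mapping node -> list of predecessor nodes
    gen:  dict mapping node -> set of definitions generated there
    kill: dict mapping node -> set of definitions killed there
    """
    nodes = list(cfg)
    in_sets = {n: set() for n in nodes}
    out_sets = {n: gen[n] - kill[n] for n in nodes}
    changed = True
    while changed:                      # iterate until nothing changes
        changed = False
        for n in nodes:                 # visiting order affects speed
            in_n = set().union(*(out_sets[p] for p in cfg[n]))
            out_n = gen[n] | (in_n - kill[n])
            if in_n != in_sets[n] or out_n != out_sets[n]:
                in_sets[n], out_sets[n] = in_n, out_n
                changed = True
    return in_sets, out_sets

# Tiny example: entry -> a -> b, with b looping back to a.
cfg  = {"entry": [], "a": ["entry", "b"], "b": ["a"]}
gen  = {"entry": {"d1"}, "a": {"d2"}, "b": set()}
kill = {"entry": set(), "a": {"d1"}, "b": set()}
print(reaching_definitions(cfg, gen, kill)[1])
```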

17 citations

Posted Content
TL;DR: In this article, the authors take the field back to the drawing board, with straightforward implementations of both classic and novel heap structures using only standard, well-known optimizations, and provide observations about which characteristics are most correlated to performance.
Abstract: The theory community has proposed several new heap variants in the recent past which have remained largely untested experimentally. We take the field back to the drawing board, with straightforward implementations of both classic and novel structures using only standard, well-known optimizations. We study the behavior of each structure on a variety of inputs, including artificial workloads, workloads generated by running algorithms on real map data, and workloads from a discrete event simulator used in recent systems networking research. We provide observations about which characteristics are most correlated to performance. For example, we find that the L1 cache miss rate appears to be strongly correlated with wallclock time. We also provide observations about how the input sequence affects the relative performance of the different heap variants. For example, we show (both theoretically and in practice) that certain random insertion-deletion sequences are degenerate and can lead to misleading results. Overall, our findings suggest that while the conventional wisdom holds in some cases, it is sorely mistaken in others.
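
For context, the implicit array-based binary heap below is the kind of classic baseline such experimental studies include, and its contiguous memory layout is one reason L1 cache behavior matters. This is a sketch using only standard operations, not the paper's benchmark code:

```python
class BinaryHeap:
    """Array-backed min-heap: the usual cache-friendly baseline."""

    def __init__(self):
        self.a = []

    def push(self, key):
        self.a.append(key)
        i = len(self.a) - 1
        while i > 0 and self.a[(i - 1) // 2] > self.a[i]:    # sift up
            self.a[i], self.a[(i - 1) // 2] = self.a[(i - 1) // 2], self.a[i]
            i = (i - 1) // 2

    def pop(self):
        a = self.a
        a[0], a[-1] = a[-1], a[0]       # move min to the end, extract it
        smallest = a.pop()
        i, n = 0, len(a)
        while True:                      # sift down
            l, r = 2 * i + 1, 2 * i + 2
            m = i
            if l < n and a[l] < a[m]:
                m = l
            if r < n and a[r] < a[m]:
                m = r
            if m == i:
                return smallest
            a[i], a[m] = a[m], a[i]
            i = m

h = BinaryHeap()
for k in [5, 1, 4, 2, 3]:
    h.push(k)
print([h.pop() for _ in range(5)])  # -> [1, 2, 3, 4, 5]
```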

17 citations

Proceedings ArticleDOI
06 Jan 2019
TL;DR: In this paper, the authors offer the first systematic proposal for settling the dynamic optimality conjecture, introducing simulation embeddings for Splay and showing that a subsequence property of splaying costs is both a sufficient and a necessary condition for Splay to be dynamically optimal.
Abstract: Consider the task of performing a sequence of searches in a binary search tree. After each search, an algorithm is allowed to arbitrarily restructure the tree, at a cost proportional to the amount of restructuring performed. The cost of an execution is the sum of the time spent searching and the time spent optimizing those searches with restructuring operations. This notion was introduced by Sleator and Tarjan in 1985 [27], along with an algorithm and a conjecture. The algorithm, Splay, is an elegant procedure for performing adjustments while moving searched items to the top of the tree. The conjecture, called dynamic optimality, is that the cost of splaying is always within a constant factor of the optimal algorithm for performing searches. The conjecture stands to this day. We offer the first systematic proposal for settling the dynamic optimality conjecture. At the heart of our methods is what we term a simulation embedding: a mapping from executions to lists of keys that induces a target algorithm to simulate the execution. We build a simulation embedding for Splay by inducing it to perform arbitrary subtree transformations, and use this to show that if the cost of splaying a sequence of items is an upper bound on the cost of splaying every subsequence thereof, then Splay is dynamically optimal. We call this the subsequence property. Building on this machinery, we show that if Splay is dynamically optimal, then with respect to optimal costs, its additive overhead is at most linear in the sum of initial tree size and number of requests. As a corollary, the subsequence property is also a necessary condition for dynamic optimality. The subsequence property also implies both the traversal [27] and deque [30] conjectures. The notions of simulation embeddings and bounding additive overheads should be of general interest in competitive analysis. For readers especially interested in dynamic optimality, we provide an outline of a proof that a lower bound on search costs by Wilber [32] has the subsequence property, and extensive suggestions for adapting this proof to Splay.
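
For readers unfamiliar with the restructuring primitive, here is a minimal recursive sketch of the splay operation on a plain binary search tree, illustrating the zig, zig-zig, and zig-zag rotations that move a searched key to the root. This is a textbook-style illustration, not the paper's simulation-embedding machinery:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(x):
    y = x.left
    x.left, y.right = y.right, x
    return y

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def splay(root, key):
    """Bring the node with `key` (or the last node on its search path)
    to the root via zig / zig-zig / zig-zag rotations."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:                      # zig-zig (left-left)
            root.left.left = splay(root.left.left, key)
            root = rotate_right(root)
        elif key > root.left.key:                    # zig-zag (left-right)
            root.left.right = splay(root.left.right, key)
            if root.left.right is not None:
                root.left = rotate_left(root.left)
        return root if root.left is None else rotate_right(root)
    else:
        if root.right is None:
            return root
        if key > root.right.key:                     # zig-zig (right-right)
            root.right.right = splay(root.right.right, key)
            root = rotate_left(root)
        elif key < root.right.key:                   # zig-zag (right-left)
            root.right.left = splay(root.right.left, key)
            if root.right.left is not None:
                root.right = rotate_right(root.right)
        return root if root.right is None else rotate_left(root)

# Build a small left-leaning tree and splay key 1 to the root.
t = Node(3, Node(2, Node(1)))
t = splay(t, 1)
print(t.key)  # -> 1
```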

17 citations

Journal ArticleDOI
TL;DR: In this paper, the authors describe randomized parallel algorithms, designed for a CRCW PRAM, for building trapezoidal diagrams of line segments in the plane; for general segments the algorithm requires optimal O(A+n log n) expected work and optimal O(log n) time, where A is the number of intersecting pairs of segments.
Abstract: We describe randomized parallel algorithms for building trapezoidal diagrams of line segments in the plane. The algorithms are designed for a CRCW PRAM. For general segments, we give an algorithm requiring optimal O(A+n log n) expected work and optimal O(log n) time, where A is the number of intersecting pairs of segments. If the segments form a simple chain, we give an algorithm requiring optimal O(n) expected work and O(log n log log n log* n) expected time, and a simpler algorithm requiring O(n log* n) expected work. The serial algorithm corresponding to the latter is among the simplest known algorithms requiring O(n log* n) expected operations. For a set of segments forming K chains, we give an algorithm requiring O(A+n log* n+K log n) expected work and O(log n log log n log* n) expected time. The parallel time bounds require the assumption that enough processors are available, with processor allocations every log n steps.
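
The work bounds above are parameterized by A, the number of intersecting pairs of segments. A brute-force serial sketch of computing A, purely for illustration (the paper's algorithms are randomized and parallel, and this check assumes general position, ignoring collinear overlaps and shared endpoints):

```python
def segments_intersect(p1, p2, p3, p4):
    """True if segments p1p2 and p3p4 properly cross (general position)."""
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def count_intersecting_pairs(segments):
    """The parameter A, by O(n^2) brute force over all pairs."""
    n = len(segments)
    return sum(
        segments_intersect(*segments[i], *segments[j])
        for i in range(n) for j in range(i + 1, n)
    )

segs = [((0, 0), (2, 2)), ((0, 2), (2, 0)), ((3, 0), (4, 0))]
print(count_intersecting_pairs(segs))  # -> 1
```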

17 citations


Cited by
Book
01 Jan 1988
TL;DR: Probabilistic Reasoning in Intelligent Systems as mentioned in this paper is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty, and provides a coherent explication of probability as a language for reasoning with partial belief.
Abstract: From the Publisher: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty. The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic. The author distinguishes syntactic and semantic approaches to uncertainty, and offers techniques, based on belief networks, that provide a mechanism for making semantics-based systems operational. Specifically, network-propagation techniques serve as a mechanism for combining the theoretical coherence of probability theory with modern demands of reasoning-systems technology: modular declarative inputs, conceptually meaningful inferences, and parallel distributed computation. Application areas include diagnosis, forecasting, image interpretation, multi-sensor fusion, decision support systems, plan recognition, planning, and speech recognition; in short, almost every task requiring that conclusions be drawn from uncertain clues and incomplete information. Probabilistic Reasoning in Intelligent Systems will be of special interest to scholars and researchers in AI, decision theory, statistics, logic, philosophy, cognitive psychology, and the management sciences. Professionals in the areas of knowledge-based systems, operations research, engineering, and statistics will find theoretical and computational tools of immediate practical use. The book can also be used as an excellent text for graduate-level courses in AI, operations research, or applied probability.
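
As a minimal illustration of the belief-network style of reasoning the blurb describes, here is exact inference by Bayes' rule in a toy two-node network (Disease -> Test). The numbers are hypothetical, and this is a sketch of the underlying probability calculus, not Pearl's propagation algorithms:

```python
# Toy belief network: Disease -> Test. Query P(Disease | Test = positive).
p_disease = 0.01                      # prior P(D = true)
p_pos_given_d = {True: 0.95,          # sensitivity P(T=pos | D=true)
                 False: 0.05}         # false-positive rate P(T=pos | D=false)

# Bayes' rule: P(D | T=pos) = P(T=pos | D) * P(D) / P(T=pos)
joint_true = p_pos_given_d[True] * p_disease
joint_false = p_pos_given_d[False] * (1 - p_disease)
posterior = joint_true / (joint_true + joint_false)
print(f"P(disease | positive test) = {posterior:.3f}")  # ~0.161
```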

15,671 citations

Journal ArticleDOI
22 Dec 2000-Science
TL;DR: Locally linear embedding (LLE) is introduced, an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs that learns the global structure of nonlinear manifolds.
Abstract: Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text.
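
The algorithm's three steps (find neighbors, solve for local reconstruction weights, embed via an eigenproblem) translate directly into a compact numpy sketch. This is a reference illustration under simplifying assumptions (dense distance matrix, simple regularization), not the authors' code:

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Locally linear embedding of X, shape (n_samples, n_features)."""
    n = X.shape[0]
    # Step 1: k nearest neighbors by Euclidean distance.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    nbrs = np.argsort(d, axis=1)[:, 1:n_neighbors + 1]
    # Step 2: weights that best reconstruct each point from its neighbors.
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                         # centered neighbors
        G = Z @ Z.T                                   # local Gram matrix
        G += reg * np.trace(G) * np.eye(n_neighbors)  # regularize
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()                   # weights sum to one
    # Step 3: bottom eigenvectors of (I - W)^T (I - W), skipping the
    # constant eigenvector, give the embedding coordinates.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]

# Example: unroll a noisy 1-D curve embedded in 3-D.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3 * np.pi, 200))
X = np.c_[np.cos(t), np.sin(t), t] + 0.01 * rng.normal(size=(200, 3))
Y = lle(X, n_neighbors=8, n_components=1)
print(Y.shape)  # -> (200, 1)
```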

15,106 citations

Book
01 Jan 1974
TL;DR: This text introduces the basic data structures and programming techniques often used in efficient algorithms, and covers use of lists, push-down stacks, queues, trees, and graphs.
Abstract: From the Publisher: With this text, you gain an understanding of the fundamental concepts of algorithms, the very heart of computer science. It introduces the basic data structures and programming techniques often used in efficient algorithms. Covers use of lists, push-down stacks, queues, trees, and graphs. Later chapters go into sorting, searching, and graph algorithms, string-matching algorithms, and the Schonhage-Strassen integer-multiplication algorithm. Provides numerous graded exercises at the end of each chapter.

9,262 citations

Journal ArticleDOI
TL;DR: In this paper, the authors attempt a thorough exposition of community structure, or clustering, from the definition of the main elements of the problem to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists.
Abstract: The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a role similar to that of, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.
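
As one concrete quality measure such surveys treat at length, here is a sketch computing Newman-Girvan modularity of a given partition: a toy implementation for small undirected graphs, not code from the survey itself:

```python
import itertools

def modularity(adj, communities):
    """Newman-Girvan modularity of a partition of an undirected graph.

    adj: dict mapping node -> set of neighboring nodes
    communities: list of sets of nodes partitioning the graph
    """
    m = sum(len(nbrs) for nbrs in adj.values()) / 2   # number of edges
    degree = {v: len(adj[v]) for v in adj}
    label = {v: i for i, c in enumerate(communities) for v in c}
    q = 0.0
    for u, v in itertools.product(adj, adj):          # all ordered pairs
        if label[u] == label[v]:
            a_uv = 1 if v in adj[u] else 0
            q += a_uv - degree[u] * degree[v] / (2 * m)
    return q / (2 * m)

# Two triangles joined by a single bridge edge.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4},
       4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
print(round(modularity(adj, [{1, 2, 3}, {4, 5, 6}]), 3))  # -> 0.357
```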

9,057 citations

Journal ArticleDOI
TL;DR: This paper offers a thorough exposition of the main elements of the clustering problem, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues such as the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.

8,432 citations