Author

Robert E. Tarjan

Bio: Robert E. Tarjan is an academic researcher from Princeton University. The author has contributed to research in topics including time complexity and spanning trees, has an h-index of 114, and has co-authored 400 publications receiving 67,305 citations. Previous affiliations of Robert E. Tarjan include AT&T and the Massachusetts Institute of Technology.


Papers
Journal ArticleDOI
TL;DR: In this paper, the authors presented an O(m(log n)^2)-time algorithm for hierarchically decomposing a directed n-vertex, m-edge graph with weighted edges into strong components, which is useful in cluster analysis of data with an asymmetric similarity measure.
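The paper's O(m(log n)^2) algorithm is far more sophisticated, but the underlying problem is easy to state in code. Below is a naive, hedged sketch of the task (my own illustration, not the paper's method, and all names are illustrative): sweep the distinct edge weights in increasing order and recompute the strong components of the subgraph containing only edges of weight at most the current threshold. Components can only merge as the threshold grows, which is what yields the hierarchy.

```python
# Naive hierarchical strong-component decomposition: one full SCC pass per
# distinct edge weight, rather than the paper's O(m (log n)^2) total work.
from collections import defaultdict

def strong_components(n, edges):
    """Tarjan's SCC algorithm; returns a component id for each vertex 0..n-1."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    index, low = {}, {}          # discovery order / low-link of each vertex
    comp = [-1] * n
    stack, on_stack = [], set()
    counter, n_comps = [0], [0]

    def dfs(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in adj[v]:
            if w not in index:
                dfs(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:   # v is the root of a strong component
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp[w] = n_comps[0]
                if w == v:
                    break
            n_comps[0] += 1

    for v in range(n):
        if v not in index:
            dfs(v)
    return comp

def hierarchy_by_threshold(n, weighted_edges):
    """Yield (threshold, component labels) as the weight threshold grows."""
    for t in sorted({w for _, _, w in weighted_edges}):
        sub = [(u, v) for u, v, w in weighted_edges if w <= t]
        yield t, strong_components(n, sub)
```

For asymmetric similarity data, as the TL;DR notes, each threshold level then groups items that are mutually reachable through sufficiently similar pairs.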

11 citations

Journal ArticleDOI
TL;DR: It is shown that for infinitely many n, there is a (non-reducible) flow graph H with n nodes (all with in-degree and out-degree bounded by two), such that any weak node sequence for H has length at least cn^2, where c is a positive constant.

11 citations

Journal ArticleDOI
TL;DR: The hollow heap is introduced, a very simple data structure with the same amortized efficiency as the classical Fibonacci heap, using a dag instead of a tree or set of trees to represent a heap.
Abstract: We introduce the hollow heap, a very simple data structure with the same amortized efficiency as the classical Fibonacci heap. All heap operations except delete and delete-min take O(1) time, worst case as well as amortized; delete and delete-min take O(log n) amortized time on a heap of n items. Hollow heaps are the simplest structure to achieve these bounds. Hollow heaps combine two novel ideas: the use of lazy deletion and re-insertion to do decrease-key operations and the use of a dag (directed acyclic graph) instead of a tree or set of trees to represent a heap. Lazy deletion produces hollow nodes (nodes without items), giving the data structure its name.
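As a concrete, heavily simplified illustration of the lazy-deletion idea described above (my own sketch, not the paper's hollow heap: it omits the ranked links and the dag representation, so it does not achieve the stated amortized bounds), decrease-key can be done exactly as lazy deletion plus re-insertion:

```python
# A minimal sketch of lazy deletion and re-insertion. Deleting or decreasing
# an item's key marks its node hollow; hollow roots are only pruned when
# delete_min next scans the roots. Class and method names are illustrative.
class Node:
    __slots__ = ("key", "item", "children")
    def __init__(self, key, item):
        self.key, self.item, self.children = key, item, []

class LazyHeap:
    def __init__(self):
        self.roots = []       # heap-ordered trees; some nodes may be hollow
        self.locator = {}     # item -> node currently holding it

    def insert(self, key, item):
        node = Node(key, item)
        self.roots.append(node)
        self.locator[item] = node

    def delete(self, item):
        self.locator.pop(item).item = None   # lazy: just mark the node hollow

    def decrease_key(self, item, new_key):
        # Lazy deletion + re-insertion: the hollow node's old key still
        # bounds its children, so heap order is untouched; the item comes
        # back as a fresh root with the smaller key.
        self.delete(item)
        self.insert(new_key, item)

    def _link(self, a, b):
        # Make the larger-key node a child of the smaller-key one.
        if b.key < a.key:
            a, b = b, a
        a.children.append(b)
        return a

    def delete_min(self):
        # Prune hollow roots (promoting their children), remove the minimum
        # live root, then naively link the rest pairwise to keep the root
        # list short. Assumes the heap is non-empty.
        live, stack = [], list(self.roots)
        while stack:
            node = stack.pop()
            if node.item is None:
                stack.extend(node.children)   # hollow: expose its children
            else:
                live.append(node)
        i = min(range(len(live)), key=lambda j: live[j].key)
        node = live.pop(i)
        rest = live + node.children
        merged = []                # hollow heaps use ranked links instead
        while len(rest) >= 2:
            merged.append(self._link(rest.pop(), rest.pop()))
        self.roots = merged + rest
        del self.locator[node.item]
        return node.key, node.item
```

In the paper's actual structure, links are governed by node ranks and a node created by decrease-key can acquire a second parent (hence the dag), which is what brings delete and delete-min down to O(log n) amortized.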

11 citations

Journal ArticleDOI
TL;DR: This paper studies the computational complexity of SAPTS and designs efficient algorithms to solve it; it shows that SAPTS is polynomial-time solvable when the number of tiers is constant, and designs a fast polynomial-time exact algorithm for the important two-tier case.
Abstract: Many web-based systems have a tiered application architecture, in which a request needs to traverse all the tiers before finishing its processing. One of the most important QoS metrics for these applications is the expected response time for the user. Since the expected response time in any tier depends upon the number of servers allocated to this tier, and a request's total response time is the sum of the response times over all the tiers, many different configurations (number of servers allocated to each tier) can satisfy the expected response-time requirement. Naturally, one would like to find the configuration that minimizes the total system cost while satisfying the total response-time requirement. This is modeled as a non-linear optimization problem using an open queueing-network model of response time, which we call the server allocation problem for tiered systems (SAPTS). In this paper we study the computational complexity of SAPTS and design efficient algorithms to solve it. For a variable number of tiers, we show that the decision version of SAPTS is NP-complete. Then we design a simple 2-approximation algorithm and a fully polynomial-time approximation scheme (FPTAS). If the number of tiers is a constant, we show that SAPTS is polynomial-time solvable. Furthermore, we design a fast polynomial-time exact algorithm to solve the important two-tier case. Most of our results extend to the general case in which each tier has an arbitrary response-time function.
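To make the trade-off concrete, here is a hedged brute-force sketch of the two-tier case under one common simplifying assumption that is mine, not necessarily the paper's: tier i with k servers of rate mu_i behaves like a single fast M/M/1 server, so its expected response time at arrival rate lam is 1/(k*mu_i - lam). The paper's fast exact algorithm is more refined; this scan only illustrates the cost/response-time trade-off.

```python
# Brute-force server allocation for two tiers (illustrative, not the
# paper's algorithm). mu and cost are per-tier service rates and
# per-server costs; T is the end-to-end response-time target.
import math

def two_tier_min_cost(lam, mu, cost, T):
    best = None
    # Beyond this many tier-1 servers, tier 1 alone already meets the target.
    k1_max = math.ceil((lam + 1.0 / T) / mu[0]) + 1
    for k1 in range(math.floor(lam / mu[0]) + 1, k1_max + 1):
        r1 = 1.0 / (k1 * mu[0] - lam)        # tier-1 response time
        if r1 >= T:
            continue                          # no budget left for tier 2
        # Smallest k2 with 1/(k2*mu2 - lam) <= T - r1.
        k2 = math.ceil((lam + 1.0 / (T - r1)) / mu[1])
        total = k1 * cost[0] + k2 * cost[1]
        if best is None or total < best[0]:
            best = (total, k1, k2)
    return best   # (minimum cost, k1, k2)

# Example: 40 req/s arriving; tier rates 10 and 5 req/s per server,
# per-server costs 3 and 1, response-time target 0.5 s.
print(two_tier_min_cost(40.0, [10.0, 5.0], [3.0, 1.0], 0.5))  # (24.0, 5, 9)
```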

11 citations

Proceedings ArticleDOI
06 Jan 1998
TL;DR: In software markets that operate at breakneck pace and have short development cycles, effective information flow is a first-order priority, and information flow can be blocked by the clash between US software-market characteristics and Japanese business culture.
Abstract: The impact of effective information flow in software ventures is analyzed through a recent case in which a hot, lucrative technology was lost on its way to the marketplace. The failure occurred despite the fact that the venture had many components crucial to success, including a proprietary intellectual property position, enormous market demand, a well-qualified, committed team and sufficient funding. One reason for this failure is the lack of information flow among several parties critical to the success of the venture. This case suggests that in software markets which operate at breakneck pace and have short development cycles, effective information flow is a first-order priority. Such blockages in information flow can stem from the nature of the cultures that are created to produce software ideas, especially proprietary technologies. The case also suggests that information flow can be blocked by the clash between US software market characteristics and Japanese business culture. Fortunately, there are inexpensive solutions that can substantially improve the return on investment, especially foreign investment, in new software technologies.

11 citations


Cited by
Book
01 Jan 1988
TL;DR: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty, providing a coherent explication of probability as a language for reasoning with partial belief.
Abstract: From the Publisher: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty. The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic. The author distinguishes syntactic and semantic approaches to uncertainty, and offers techniques, based on belief networks, that provide a mechanism for making semantics-based systems operational. Specifically, network-propagation techniques serve as a mechanism for combining the theoretical coherence of probability theory with modern demands of reasoning-systems technology: modular declarative inputs, conceptually meaningful inferences, and parallel distributed computation. Application areas include diagnosis, forecasting, image interpretation, multi-sensor fusion, decision support systems, plan recognition, planning, speech recognition, and, in short, almost every task requiring that conclusions be drawn from uncertain clues and incomplete information. Probabilistic Reasoning in Intelligent Systems will be of special interest to scholars and researchers in AI, decision theory, statistics, logic, philosophy, cognitive psychology, and the management sciences. Professionals in the areas of knowledge-based systems, operations research, engineering, and statistics will find theoretical and computational tools of immediate practical use. The book can also be used as an excellent text for graduate-level courses in AI, operations research, or applied probability.
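As a toy illustration of the kind of reasoning the book formalizes (a minimal sketch with made-up numbers, using plain enumeration rather than Pearl's belief-propagation algorithms on tree-structured networks):

```python
# Two-variable belief network Rain -> WetGrass, queried by enumeration.
# All probabilities here are invented for illustration.
p_rain = 0.2
p_wet_given_rain = {True: 0.9, False: 0.1}   # P(WetGrass=1 | Rain)

# Posterior P(Rain | WetGrass=1) via Bayes' rule.
num = p_rain * p_wet_given_rain[True]
den = num + (1 - p_rain) * p_wet_given_rain[False]
print(num / den)   # ~0.692: observing wet grass raises belief in rain
```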

15,671 citations

Journal ArticleDOI
22 Dec 2000 - Science
TL;DR: Locally linear embedding (LLE) is introduced, an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs that learns the global structure of nonlinear manifolds.
Abstract: Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text.
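The abstract's three steps translate almost directly into NumPy. The sketch below is a hedged, minimal implementation (the neighbor count and regularization constant are illustrative choices, and everything is dense O(n^2) for clarity; a production choice would be scikit-learn's LocallyLinearEmbedding):

```python
# Locally linear embedding in three steps: neighbors, reconstruction
# weights, bottom eigenvectors.
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    n = X.shape[0]
    # Step 1: k nearest neighbors by Euclidean distance (excluding self).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]

    # Step 2: weights that best reconstruct each point from its neighbors.
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                          # center on x_i
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(n_neighbors)   # regularize
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()                    # weights sum to one

    # Step 3: bottom eigenvectors of (I - W)^T (I - W), discarding the
    # constant eigenvector, give the global low-dimensional coordinates.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]

# Example: unroll a noisy 3-D spiral into 2 dimensions.
t = np.linspace(0, 4 * np.pi, 200)
X = np.c_[np.cos(t), np.sin(t), 0.1 * t] + 0.01 * np.random.randn(200, 3)
Y = lle(X)
```

Because step 3 is a single global eigenproblem rather than an iterative fit, there are no local minima to worry about, which is the point the abstract emphasizes.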

15,106 citations

Book
01 Jan 1974
TL;DR: This text introduces the basic data structures and programming techniques often used in efficient algorithms, and covers use of lists, push-down stacks, queues, trees, and graphs.
Abstract: From the Publisher: With this text, you gain an understanding of the fundamental concepts of algorithms, the very heart of computer science. It introduces the basic data structures and programming techniques often used in efficient algorithms. Covers use of lists, push-down stacks, queues, trees, and graphs. Later chapters go into sorting, searching, and graph algorithms, string-matching algorithms, and the Schonhage-Strassen integer-multiplication algorithm. Provides numerous graded exercises at the end of each chapter.

9,262 citations

Journal ArticleDOI
TL;DR: A thorough exposition of community structure, or clustering, is attempted, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists.
Abstract: The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e., the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a role similar to that of, e.g., the tissues or organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.
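The "many edges inside, few between" intuition can be quantified with Newman-Girvan modularity, Q = (1/2m) * sum over same-community pairs (i, j) of (A_ij - k_i*k_j/2m), one of the central quality measures such reviews discuss. A minimal sketch (naive O(n^2) loop; function and variable names are illustrative):

```python
# Modularity of a given partition: how much denser the communities are
# than in a random graph with the same degree sequence.
def modularity(adj, communities):
    """adj: node -> set of neighbors (undirected); communities: node -> label."""
    m2 = sum(len(v) for v in adj.values())      # 2m: each edge counted twice
    deg = {u: len(v) for u, v in adj.items()}
    q = 0.0
    for u in adj:
        for v in adj:
            if communities[u] == communities[v]:
                a_uv = 1.0 if v in adj[u] else 0.0
                q += a_uv - deg[u] * deg[v] / m2
    return q / m2

# Two triangles joined by one edge: a natural two-community split.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
labels = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
print(modularity(adj, labels))   # ~0.357, well above 0 for this split
```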

9,057 citations

Journal ArticleDOI
TL;DR: This paper offers a thorough exposition of the main elements of the clustering problem, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.

8,432 citations