
Showing papers by "Michael T. Goodrich published in 2000"


Proceedings ArticleDOI
01 Mar 2000
TL;DR: The strengths of PILOT are its universal access and platform independence, its use as an algorithm visualization tool, its ability to test algorithmic concepts, its support for graph generation and layout, its automated grading mechanism, and the ability to award partial credit to proposed solutions.
Abstract: We describe a Web-based interactive system, called PILOT, for testing computer science concepts. The strengths of PILOT are its universal access and platform independence, its use as an algorithm visualization tool, its ability to test algorithmic concepts, its support for graph generation and layout, its automated grading mechanism, and its ability to award partial credit to proposed solutions.
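As an illustration of the partial-credit idea only (PILOT's actual grading rules are not described in this listing), a minimal sketch might score a proposed answer by the fraction of constraints it satisfies; the function name and scoring scheme below are hypothetical.

```python
# Hypothetical sketch of partial-credit grading for a topological-ordering
# question; PILOT's actual grading logic is not specified in the abstract.

def grade_topological_order(edges, proposed_order):
    """Award credit proportional to the fraction of precedence
    constraints (u before v for each edge (u, v)) that the student's
    proposed ordering satisfies."""
    position = {v: i for i, v in enumerate(proposed_order)}
    satisfied = sum(1 for u, v in edges
                    if u in position and v in position and position[u] < position[v])
    return satisfied / len(edges) if edges else 1.0

# Example: one violated constraint out of three yields 2/3 credit.
edges = [("a", "b"), ("a", "c"), ("b", "d")]
print(grade_topological_order(edges, ["a", "c", "d", "b"]))  # 0.666...
```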

104 citations


Book ChapterDOI
20 Sep 2000
TL;DR: A novel hierarchical force-directed method for drawing large graphs that can draw graphs with tens of thousands of vertices using a negligible amount of memory in less than one minute on a mid-range PC.
Abstract: We present a novel hierarchical force-directed method for drawing large graphs. The algorithm produces a graph embedding in a Euclidean space E of any dimension. A two or three dimensional drawing of the graph is then obtained by projecting a higher-dimensional embedding into a two or three dimensional subspace of E. Projecting high-dimensional drawings onto two or three dimensions often results in drawings that are "smoother" and more symmetric. Among the other notable features of our approach are the utilization of a maximal independent set filtration of the set of vertices of a graph, a fast energy function minimization strategy, efficient memory management, and an intelligent initial placement of vertices. Our implementation of the algorithm can draw graphs with tens of thousands of vertices using a negligible amount of memory in less than one minute on a mid-range PC.
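A minimal sketch of the embed-then-project step, assuming a basic O(n^2) spring model in place of the paper's multilevel (maximal-independent-set) filtration and fast minimization strategy; only the idea of laying out in higher dimensions and projecting to the plane is illustrated.

```python
# Sketch: run a simple force-directed layout in d > 2 dimensions, then
# project the embedding onto its two principal directions.
import numpy as np

def force_layout(n, edges, dim=4, iters=200, step=0.05):
    pos = np.random.default_rng(0).normal(size=(n, dim))
    for _ in range(iters):
        disp = np.zeros_like(pos)
        # Repulsive forces between all pairs (O(n^2); the paper is far faster).
        diff = pos[:, None, :] - pos[None, :, :]
        dist2 = (diff ** 2).sum(-1) + 1e-9
        disp += (diff / dist2[..., None]).sum(axis=1)
        # Attractive spring forces along edges.
        for u, v in edges:
            d = pos[u] - pos[v]
            disp[u] -= d
            disp[v] += d
        pos += step * disp
    return pos

def project_to_2d(pos):
    # Project onto the two principal directions of the high-dimensional drawing.
    centered = pos - pos.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(project_to_2d(force_layout(4, edges)).shape)  # (4, 2)
```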

74 citations


01 Jan 2000
TL;DR: Two deterministic algorithms for constructing the arrangement determined by a set of (algebraic) curve segments in the plane use a divide-and-conquer approach based on derandomized geometric sampling and achieve the optimal running time O(n log n + k), where n is the number of segments and k is the number of intersections.

47 citations


Book ChapterDOI
05 Sep 2000
TL;DR: This work introduces the tree cross-product problem, which abstracts a data structure common to applications in graph visualization, string matching, and software analysis, and design solutions with a variety of tradeoffs, yielding improvements and new results for these applications.
Abstract: We introduce the tree cross-product problem, which abstracts a data structure common to applications in graph visualization, string matching, and software analysis. We design solutions with a variety of tradeoffs, yielding improvements and new results for these applications.
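The abstract does not define the problem formally; the sketch below assumes the common formulation (a set of pairs between nodes of two rooted trees, queried by a pair of subtrees) and answers queries by a naive linear scan, only to make the setting concrete.

```python
# Hedged sketch of a naive tree cross-product query. Assumed formulation
# (not spelled out in the abstract): given trees T1, T2 and a set of pairs
# (u, v) with u in T1 and v in T2, decide whether some pair joins a
# descendant of x with a descendant of y. The paper's data structures
# answer such queries far faster than this linear scan.

def descendants(tree, root):
    """tree maps a node to its list of children; returns all nodes in the
    subtree rooted at `root` (including root itself)."""
    out, stack = set(), [root]
    while stack:
        node = stack.pop()
        out.add(node)
        stack.extend(tree.get(node, []))
    return out

def cross_product_query(t1, t2, pairs, x, y):
    dx, dy = descendants(t1, x), descendants(t2, y)
    return any(u in dx and v in dy for u, v in pairs)

t1 = {"r": ["a", "b"], "a": ["c"]}
t2 = {"s": ["d", "e"]}
print(cross_product_query(t1, t2, [("c", "e")], "a", "s"))  # True
```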

46 citations


Proceedings ArticleDOI
01 Feb 2000
TL;DR: Goodrich et al. present two deterministic algorithms for constructing the arrangement determined by a set of (algebraic) curve segments in the plane, using a divide-and-conquer approach based on derandomized geometric sampling, and achieve the optimal running time O(n log n + k).
Abstract: We describe two deterministic algorithms for constructing the arrangement determined by a set of (algebraic) curve segments in the plane. They both use a divide-and-conquer approach based on derandomized geometric sampling and achieve the optimal running time O(n log n + k), where n is the number of segments and k is the number of intersections. The first algorithm, a simplified version of one presented in [1], generates a structure of size O(n log log n + k), and its parallel implementation runs in time O(log^2 n). The second algorithm is better in that the decomposition of the arrangement it constructs has optimal size O(n + k), and it has a parallel implementation in the EREW PRAM model that runs in time O(log^{3/2} n). The improvements in the second algorithm are achieved by means of an approach that adds some degree of globality to the divide-and-conquer approach based on random sampling. The approach extends previous work by Dehne et al. [7], Deng and Zhu [8], and Kühn [9], which uses small separators for planar graphs in the design of randomized geometric algorithms for coarse-grained multicomputers. The approach simplifies other previous geometric algorithms [1, 2], and also has the potential of providing efficient deterministic algorithms for the external-memory model.

1 Problem and Previous Work. We consider a classical problem in computational geometry: computing the arrangement determined by a set of curve segments in the plane. There has been a considerable amount of work on this problem in the computational geometry community, particularly for line segments. Starting with a first efficient algorithm by Bentley and Ottmann [4], optimal output-sensitive algorithms were obtained using a deterministic approach by Chazelle and Edelsbrunner [5] and using randomized approaches by Clarkson and Shor [6] and by Mulmuley [10]. These optimal algorithms perform O(n log n + k) work, where n is the number of segments and k is the number of pairwise intersections. They can be adapted so that they are output-sensitive even when multiple intersection points are allowed (a point where many segments intersect is counted only once). On the other hand, unlike its randomized counterparts in [6, 10], the deterministic algorithm in [5] can only handle line segments. An alternative deterministic algorithm by Amato et al. [1], which follows a divide-and-conquer approach based on derandomization of geometric sampling, has the advantage of being parallelizable. However, it can only handle line segments and pairwise intersection points, and the decomposition of the arrangement that it constructs has size O(n log log n + k), as opposed to the optimal O(n + k). One more variation on the problem is to report all the intersections while using only a linear amount of work space. The solutions in [6] and [1] can be adapted to achieve this.

Alternatively, Balaban [3] proposed an elementary deterministic algorithm to achieve this; however, it does not construct the arrangement, it does not seem to parallelize, and it cannot handle multiple intersection points. The algorithm in [1] uses an approach based on random sampling that refines iteratively by using small samples to divide the problem. This divide-and-conquer approach, and also the well-known randomized incremental construction (RIC) approach, date from work by Clarkson and Shor [6]. Unfortunately, unlike the RIC approach, divide-and-conquer most often leads to non-optimal algorithms, at least as far as the most basic analysis can tell, because the dividing step creates spurious boundaries that increase the complexity of the constructed decomposition of the arrangement. In fact, the literature is plagued with running times that are a factor of n^ε or log^c n away from optimal. Some techniques have been used to correct this and obtain optimal algorithms: sparse cuttings, pruning, and biased sampling. In particular, the algorithm in [1] achieves optimality through
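For concreteness, the output-sensitive parameter k can be illustrated with a naive quadratic intersection count over line segments; this baseline is far from the paper's algorithms, builds no arrangement, and ignores degenerate cases.

```python
# Naive baseline for the quantity k in the O(n log n + k) bound: count
# pairwise intersections of line segments by brute force.
from itertools import combinations

def orient(p, q, r):
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_intersect(s, t):
    (p1, p2), (p3, p4) = s, t
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    # Proper crossing test only; collinear/touching cases are ignored here.
    return d1 * d2 < 0 and d3 * d4 < 0

def count_intersections(segments):
    return sum(1 for s, t in combinations(segments, 2)
               if segments_intersect(s, t))

segs = [((0, 0), (2, 2)), ((0, 2), (2, 0)), ((3, 0), (4, 0))]
print(count_intersections(segs))  # 1
```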

32 citations


Proceedings ArticleDOI
01 Mar 2000
TL;DR: Together, the SAIL package and the searchable database of problems offer a powerful tool for generating, archiving, and retrieving homework assignments (as well as tests and quizzes).
Abstract: In this paper we present a package for the creation of Specialized Assignments In LaTeX (SAIL). We describe several features that allow an instructor to create sufficiently different instances of the "same" problem so as to encourage student cooperation without fear of plagiarism. The SAIL package also provides support for grading aids and grading automation. In addition, we describe an on-line system for archiving homework problems in a database that can be easily searched and to which new parametrized problems can be easily added. Together, the SAIL package and the searchable database of problems offer a powerful tool for generating, archiving, and retrieving homework assignments (as well as tests and quizzes).
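The per-student parametrization idea can be sketched outside LaTeX as well; the template and helper below are hypothetical and do not use SAIL's actual macros, only the notion of deriving a distinct but reproducible instance of the "same" problem from a student identifier.

```python
# Illustration of the parametrization idea only; SAIL itself is a LaTeX
# package, and its real macro interface is not shown in this listing.
import random

TEMPLATE = (r"\item Run Dijkstra's algorithm from vertex $v_{%d}$ on the "
            r"weighted graph generated with seed %d, and list the vertices "
            r"in the order their distances are finalized.")

def make_instance(student_id):
    """Derive a distinct but reproducible problem instance from a student id."""
    rng = random.Random(student_id)
    start_vertex = rng.randint(1, 8)
    graph_seed = rng.randint(1000, 9999)
    return TEMPLATE % (start_vertex, graph_seed)

print(make_instance(42))
print(make_instance(43))  # a different, but reproducible, instance
```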

26 citations



Book ChapterDOI
05 Sep 2000
TL;DR: This result is the first one proving a worst-case polylogarithmic time bound for approximate geometric queries using the simple k-d tree data structure.
Abstract: We show that a popular variant of the well known k-d tree data structure satisfies an important packing lemma. This variant is a binary spatial partitioning tree T defined on a set of n points in R^d, for fixed d ≥ 1, using the simple rule of splitting each node's hyper-rectangular region with a hyperplane that cuts the longest side. An interesting consequence of the packing lemma is that standard algorithms for performing approximate nearest-neighbor searching or range searching queries visit at most O(log^{d-1} n) nodes of such a tree T in the worst case. Traditionally, many variants of k-d trees have been empirically shown to exhibit polylogarithmic performance, and under certain restrictions in the data distribution some theoretical expected case results have been proven. This result, however, is the first one proving a worst-case polylogarithmic time bound for approximate geometric queries using the simple k-d tree data structure.
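A minimal sketch of the tree variant being analyzed, assuming midpoint cuts of the longest side and a simple leaf-size stopping rule; the paper's packing lemma and query analysis are not reproduced here.

```python
# Sketch of the k-d tree variant: each node's hyper-rectangular cell is
# split by an axis-parallel hyperplane through the midpoint of its longest side.

def build_longest_side_kdtree(points, box, leaf_size=1):
    """points: list of d-dimensional tuples; box: (lo, hi) corner tuples."""
    if len(points) <= leaf_size:
        return {"leaf": True, "points": points, "box": box}
    lo, hi = box
    axis = max(range(len(lo)), key=lambda i: hi[i] - lo[i])  # longest side
    cut = (lo[axis] + hi[axis]) / 2.0
    left_pts = [p for p in points if p[axis] <= cut]
    right_pts = [p for p in points if p[axis] > cut]
    left_box = (lo, tuple(cut if i == axis else hi[i] for i in range(len(hi))))
    right_box = (tuple(cut if i == axis else lo[i] for i in range(len(lo))), hi)
    return {"leaf": False, "axis": axis, "cut": cut, "box": box,
            "left": build_longest_side_kdtree(left_pts, left_box, leaf_size),
            "right": build_longest_side_kdtree(right_pts, right_box, leaf_size)}

pts = [(0.1, 0.9), (0.4, 0.2), (0.8, 0.7), (0.6, 0.1)]
tree = build_longest_side_kdtree(pts, ((0.0, 0.0), (1.0, 1.0)))
print(tree["axis"], tree["cut"])  # first cut halves the unit square's longest side
```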

24 citations


Journal ArticleDOI
TL;DR: A unified framework of aesthetic criteria and complexity measures for drawing planar graphs with polylines and curves is described, including aspect ratio, vertex resolution, edge length, edge separation, and edge curvature.

21 citations


Journal ArticleDOI
TL;DR: A new approach for cluster-based drawing of large graphs, which obtains clusters by using binary space partition (BSP) trees and a novel BSP-type decomposition, called the balanced aspect ratio (BAR) tree, which guarantees that the cells produced are convex and have bounded aspect ratios.
Abstract: We describe a new approach for cluster-based drawing of large graphs, which obtains clusters by using binary space partition (BSP) trees. We also introduce a novel BSP-type decomposition, called the balanced aspect ratio (BAR) tree, which guarantees that the cells produced are convex and have bounded aspect ratios. In addition, the tree depth is O(log n), and its construction takes O(n log n) time, where n is the number of points. We show that the BAR tree can be used to recursively divide a graph embedded in the plane into subgraphs of roughly equal size, such that the drawing of each subgraph has a balanced aspect ratio. As a result, we obtain a representation of a graph as a collection of O(log n) layers, where each succeeding layer represents the graph in an increasing level of detail. The overall running time of the algorithm is O(n log n + m + D_0(G)), where n and m are the number of vertices and edges of the graph G, and D_0(G) is the time it takes to obtain an initial embedding of G in the plane. In particular, if the graph is planar, each layer is a graph drawn with straight lines and without crossings on the n×n grid, and the running time reduces to O(n log n).
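A hedged sketch of the layering idea only: given any hierarchical partition of the vertices (here an arbitrary nested list standing in for the BAR tree), layer i contracts each depth-i cluster to a meta-vertex, producing coarser or finer views of the same graph.

```python
# Sketch: derive level-of-detail layers from a hierarchical vertex clustering.
def layers_from_partition_tree(tree, edges, depth_limit):
    """tree: nested lists of vertices, e.g. [[1, 2], [3, [4, 5]]].
    Returns, per depth, the clusters and the contracted edge set between them."""
    def flatten(node):
        if isinstance(node, list):
            for child in node:
                yield from flatten(child)
        else:
            yield node

    def clusters_at_depth(node, d):
        if d == 0 or not isinstance(node, list):
            yield set(flatten(node))
        else:
            for child in node:
                yield from clusters_at_depth(child, d - 1)

    result = []
    for d in range(depth_limit + 1):
        clusters = list(clusters_at_depth(tree, d))
        owner = {v: i for i, c in enumerate(clusters) for v in c}
        contracted = {(owner[u], owner[v]) for u, v in edges
                      if owner[u] != owner[v]}
        result.append((clusters, contracted))
    return result

tree = [[1, 2], [3, [4, 5]]]
edges = [(1, 2), (2, 3), (3, 4), (4, 5)]
for clusters, contracted in layers_from_partition_tree(tree, edges, 2):
    print(clusters, contracted)  # coarse to fine views of the graph
```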

19 citations



Proceedings ArticleDOI
01 Feb 2000
TL;DR: This paper presents a simple adaptive tree-based dictionary structure that is balanced and competitive, and shows that, in spite of its conceptual simplicity, such a scheme is constant-ratio competitive with a static oracle using a priori knowledge of the operation distribution.
Abstract: In this note we describe a general technique for making tree-structured dynamic dictionaries adapt to be competitive with the most efficient implementation, by using potential energy parameters and a simple partial rebuilding scheme.

Introduction. On-line algorithms deal with optimizing the performance of operation sequences (e.g., see [4, 9]). Such algorithms are desired to be c-competitive [4], for some parameter c, where c is an upper bound on the ratio of the costs of the solution defined by the on-line algorithm and an oracle's algorithm. In this paper we are interested in dictionary data structures that are competitive in this same sense.

Our Results. We present a simple adaptive tree-based dictionary structure that is balanced and competitive. Our approach is based on a potential energy parameter stored at each node in the tree. As updates and queries are performed, the potential energy of each tree node is increased or decreased. Whenever the potential energy of a node reaches a threshold level, we rebuild the subtree rooted at that node. We show that, in spite of its conceptual simplicity, such a scheme is constant-ratio competitive with a static oracle using a priori knowledge of the operation distribution.

Related Prior Work. Besides general work on on-line algorithms (e.g., see [4, 9]) and data structures that use partial rebuilding [7], there has been some prior work on methods for adapting data structures to the way in which they are being used. Most previous data structure competitive analyses have been directed at simple linked-list structures, with "move-to-front" heuristics applied [9]. Work on other adaptive data structures includes splay trees [10], which perform a sophisticated move-to-root heuristic, but perform many rotations with each access. There is also the randomized binary search tree of Seidel and Aragon [8], which performs random structural changes with each access and can adapt in an expected, probabilistic sense based on data structure usage.

Energy-Balanced Binary Search Trees. A dictionary holds pairs of ordered keys and elements, subject to update and query operations. A common way of implementing the dictionary ADT is to use a binary search tree, which maintains balance by local rotation operations. Typically, such rotations are fast, but if the tree has auxiliary structures, rotations are often slow. Standard binary search trees, such as AVL trees [1], red-black trees [3], scapegoat trees [2], or weight-balanced trees [6], maintain balance, but do not adapt themselves based on the distribution of accesses and updates. Splay trees [10], on the other hand, adapt (in an asymptotic sense), but perform a large number of rotations with each access. Finally, randomized binary search trees [8] have good expected behavior but offer no worst-case guarantees on per-
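A simplified sketch of the energy/partial-rebuilding scheme described above, with an arbitrary threshold in place of the paper's actual potential function and competitive analysis: each node accumulates energy as operations pass through it, and a subtree is rebuilt once its root's energy crosses the threshold.

```python
# Sketch: a BST whose nodes carry energy counters; a subtree is rebuilt
# (perfectly balanced) when its root's energy exceeds threshold * size.
class Node:
    __slots__ = ("key", "left", "right", "energy", "size")
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None
        self.energy, self.size = 0, 1

def inorder(node, out):
    if node:
        inorder(node.left, out)
        out.append(node.key)
        inorder(node.right, out)
    return out

def build_balanced(keys):
    if not keys:
        return None
    mid = len(keys) // 2
    node = Node(keys[mid])
    node.left = build_balanced(keys[:mid])
    node.right = build_balanced(keys[mid + 1:])
    node.size = len(keys)
    return node

def search(root, key, threshold=4):
    """Search for key, charging one unit of energy to every node on the
    path; rebuild any subtree whose root's energy exceeds threshold * size."""
    node = root
    while node is not None:
        node.energy += 1
        if node.energy > threshold * node.size:
            rebuilt = build_balanced(inorder(node, []))
            node.key, node.left, node.right = rebuilt.key, rebuilt.left, rebuilt.right
            node.energy = 0
        if key == node.key:
            return True
        node = node.left if key < node.key else node.right
    return False

root = build_balanced(list(range(15)))
print(search(root, 7), search(root, 99))  # True False
```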

01 Jan 2000
TL;DR: The new algorithm can be viewed as a combination of Chazelle's algorithm and of non-optimal randomized algorithms due to Clarkson et al. (1991) and to Seidel (1991), with the essential innovation that sampling is performed on subchains of the initial polygonal chain, rather than on its edges.
Abstract: We describe a randomized algorithm for computing the trapezoidal decomposition of a simple polygon. Its expected running time is linear in the size of the polygon. By a well-known and simple linear time reduction, this implies a linear time algorithm for triangulating a simple polygon. Our algorithm is considerably simpler than Chazelle's (1991) celebrated optimal deterministic algorithm and, hence, positively answers his question of whether a simpler randomized algorithm for the problem exists. The new algorithm can be viewed as a combination of Chazelle's algorithm and of non-optimal randomized algorithms due to Clarkson et al. (1991) and to Seidel (1991), with the essential innovation that sampling is performed on subchains of the initial polygonal chain, rather than on its edges. It is also essential, as in Chazelle's algorithm, to include a bottom-up preprocessing phase previous to the top-down construction phase.