
Showing papers by "Michael T. Goodrich published in 2005"


Book ChapterDOI
01 Jan 2005
TL;DR: In this paper, a technique called confluent drawing is used for visualizing non-planar graphs in a planar way by allowing groups of edges to be merged together and drawn as "tracks".
Abstract: We introduce a new approach for drawing diagrams. Our approach is to use a technique we call confluent drawing for visualizing non-planar graphs in a planar way. This approach allows us to draw, in a crossing-free manner, graphs—such as software interaction diagrams—that would normally have many crossings. The main idea of this approach is quite simple: we allow groups of edges to be merged together and drawn as “tracks” (similar to train tracks). Producing such confluent diagrams automatically from a graph with many crossings is quite challenging, however, so we offer two heuristic algorithms to test if a non-planar graph can be drawn efficiently in a confluent way. In addition, we identify several large classes of graphs that can be completely categorized as being either confluently drawable or confluently non-drawable.

117 citations


Proceedings ArticleDOI
06 Jun 2005
TL;DR: The skip quadtree is a multi-dimensional data structure that combines the best features of region quadtrees and skip lists: the well-defined "box"-shaped regions of the former and the logarithmic-height search and update hierarchy of the latter.
Abstract: We present a new multi-dimensional data structure, which we call the skip quadtree (for point data in R2) or the skip octree (for point data in Rd, with constant d > 2). Our data structure combines the best features of two well-known data structures, in that it has the well-defined "box"-shaped regions of region quadtrees and the logarithmic-height search and update hierarchical structure of skip lists. Indeed, the bottom level of our structure is exactly a region quadtree (or octree for higher dimensional data). We describe efficient algorithms for inserting and deleting points in a skip quadtree, as well as fast methods for performing point location, approximate range, and approximate nearest neighbor queries.

106 citations
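Since the bottom level of the skip quadtree is exactly a region quadtree, that bottom level can be sketched on its own; a minimal sketch for 2-D points, with leaves holding at most one point (the skip-list-style sampled levels of the full structure are omitted):

```python
class QuadTree:
    """Minimal region quadtree: each node covers an axis-aligned square,
    leaves hold at most one point, internal nodes split into 4 quadrants."""
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size   # lower-left corner, side length
        self.point = None            # point stored at a leaf
        self.kids = None             # four children once the node splits

    def _quadrant(self, p):
        half = self.size / 2
        qx = 1 if p[0] >= self.x + half else 0
        qy = 1 if p[1] >= self.y + half else 0
        return qx + 2 * qy

    def _split(self):
        half = self.size / 2
        self.kids = [QuadTree(self.x + (i % 2) * half,
                              self.y + (i // 2) * half, half)
                     for i in range(4)]

    def insert(self, p):
        if self.kids is None:
            if self.point is None or self.point == p:
                self.point = p
                return
            old, self.point = self.point, None   # push old point down
            self._split()
            self.kids[self._quadrant(old)].insert(old)
        self.kids[self._quadrant(p)].insert(p)

    def contains(self, p):
        node = self
        while node.kids is not None:             # descend box by box
            node = node.kids[node._quadrant(p)]
        return node.point == p
```

The full skip quadtree layers coarser random samples of this tree above it, giving logarithmic-height search in the same way skip lists do for sorted lists.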


Book ChapterDOI
07 Jun 2005
TL;DR: This paper introduces novel techniques for organizing data indexing structures so that alterations from an original version can be detected and the changed values specifically identified.
Abstract: We introduce novel techniques for organizing the indexing structures of how data is stored so that alterations from an original version can be detected and the changed values specifically identified. We give forensic constructions for several fundamental data structures, including arrays, linked lists, binary search trees, skip lists, and hash tables. Some of our constructions are based on a new reduced-randomness construction for nonadaptive combinatorial group testing.

105 citations


Journal ArticleDOI
TL;DR: In this article, non-adaptive and two-stage combinatorial group testing algorithms are proposed that identify the at most d defective items out of a given set of n items, using fewer tests for all practical set sizes.
Abstract: We study practically efficient methods for performing combinatorial group testing. We present efficient non-adaptive and two-stage combinatorial group testing algorithms, which identify the at most d items out of a given set of n items that are defective, using fewer tests for all practical set sizes. For example, our two-stage algorithm matches the information theoretic lower bound for the number of tests in a combinatorial group testing regimen.

63 citations
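For intuition about the non-adaptive setting, a minimal sketch of the classic bit-test design for a single defective (d = 1; this is a standard textbook scheme, not the paper's construction): test j pools every item whose j-th bit is 1, so ⌈log₂ n⌉ pooled tests suffice and the positive outcomes spell out the defective item's index.

```python
import math

def make_tests(n):
    """Non-adaptive design for one defective among n items:
    test j pools every item whose j-th bit is 1."""
    t = max(1, math.ceil(math.log2(n)))
    return [[i for i in range(n) if (i >> j) & 1] for j in range(t)]

def decode(outcomes):
    """The positive tests are exactly the set bits of the defective index."""
    return sum(1 << j for j, positive in enumerate(outcomes) if positive)

def run(n, defective):
    tests = make_tests(n)
    outcomes = [defective in pool for pool in tests]   # run all tests at once
    return decode(outcomes)
```

Handling up to d > 1 defectives, as in the paper, requires stronger designs (e.g., d-disjunct matrices); this sketch only shows why pooled tests can beat testing items one by one.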


Journal ArticleDOI
TL;DR: This work designs a variation of skip lists that performs well for generally biased access sequences and presents two instantiations of biased skip lists, one of which achieves this bound in the worst case, the other in the expected case.
Abstract: We design a variation of skip lists that performs well for generally biased access sequences. Given n items, each with a positive weight wi, 1 ≤ i ≤ n, the time to access item i is O(1 + log (W/wi)), where W=∑i=1 nwi; the data structure is dynamic. We present two instantiations of biased skip lists, one of which achieves this bound in the worst case, the other in the expected case. The structures are nearly identical; the deterministic one simply ensures the balance condition that the randomized one achieves probabilistically. We use the same method to analyze both.

46 citations


Proceedings ArticleDOI
17 Jul 2005
TL;DR: In this paper, the skip-web distributed data structure is proposed, supporting queries over linear (one-dimensional) data, such as sorted sets, as well as multi-dimensional data, such as d-dimensional octrees and digital tries of character strings defined over a fixed alphabet.
Abstract: We present a framework for designing efficient distributed data structures for multi-dimensional data. Our structures, which we call skip-webs, extend and improve previous randomized distributed data structures, including skipnets and skip graphs. Our framework applies to a general class of data querying scenarios, which include linear (one-dimensional) data, such as sorted sets, as well as multi-dimensional data, such as d-dimensional octrees and digital tries of character strings defined over a fixed alphabet. We show how to perform a query over such a set of n items spread among n hosts using O(log n/log log n) messages for one-dimensional data, or O(log n) messages for fixed-dimensional data, while using only O(log n) space per host. We also show how to make such structures dynamic so as to allow for insertions and deletions in O(log n) messages for quadtrees, octrees, and digital tries, and O(log n/log log n) messages for one-dimensional data. Finally, we show how to apply a blocking strategy to skip-webs to further improve message complexity for one-dimensional data when hosts can store more data.

43 citations


Book ChapterDOI
12 Sep 2005
TL;DR: In this paper, the authors show how to solve the c-planarity problem in polynomial time for a new class of clustered graphs, which they call extrovert clustered graphs.
Abstract: A clustered graph has its vertices grouped into clusters in a hierarchical way via subset inclusion, thereby imposing a tree structure on the clustering relationship. The c-planarity problem is to determine if such a graph can be drawn in a planar way, with clusters drawn as nested regions and with each edge (drawn as a curve between vertex points) crossing the boundary of each region at most once. Unfortunately, as with the graph isomorphism problem, it is open as to whether the c-planarity problem is NP-complete or in P. In this paper, we show how to solve the c-planarity problem in polynomial time for a new class of clustered graphs, which we call extrovert clustered graphs. This class is quite natural (we argue that it captures many clustering relationships that are likely to arise in practice) and includes the clustered graphs tested in previous work by Dahlhaus, as well as Feng, Eades, and Cohen. Interestingly, this class of graphs does not include, nor is it included by, a class studied recently by Gutwenger et al.; therefore, this paper offers an alternative advancement in our understanding of the efficient drawability of clustered graphs in a planar way. Our testing algorithm runs in O(n3) time and implies an embedding algorithm with the same time complexity.

42 citations


Book ChapterDOI
28 Feb 2005
TL;DR: This paper presents computationally “lightweight” schemes for performing biometric authentication that carry out the comparison stage without revealing any information that can later be used to impersonate the user (or reveal personal biometric information).
Abstract: This paper presents computationally “lightweight” schemes for performing biometric authentication that carry out the comparison stage without revealing any information that can later be used to impersonate the user (or reveal personal biometric information). Unlike some previous computationally expensive schemes — which make use of slower cryptographic primitives — this paper presents methods that are particularly suited to financial institutions that authenticate users with biometric smartcards, sensors, and other computationally limited devices. In our schemes, the client and server need only perform cryptographic hash computations on the feature vectors, and do not perform any expensive digital signatures or public-key encryption operations. In fact, the schemes we present have properties that make them appealing even in a framework of powerful devices capable of public-key signatures and encryptions. Our schemes make it computationally infeasible for an attacker to impersonate a user even if the attacker completely compromises the information stored at the server, including all the server’s secret keys. Likewise, our schemes make it computationally infeasible for an attacker to impersonate a user even if the attacker completely compromises the information stored at the client device (but not the biometric itself, which is assumed to remain attached to the user and is not stored on the client device in any form).

35 citations
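The hash-only flavor of such schemes can be illustrated with a generic Lamport-style hash chain, a standard one-time-password construction shown here purely for intuition (this is an assumption for illustration, not the paper's actual protocol): the server stores only the top of the chain, so compromising the server reveals nothing usable for impersonation.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(seed: bytes, n: int) -> bytes:
    """Apply h n times; the server stores only the final (top) value."""
    v = seed
    for _ in range(n):
        v = h(v)
    return v

def authenticate(server_state: bytes, client_value: bytes):
    """Client reveals the predecessor of the stored value; the server
    checks with a single hash, then rolls its state back one step."""
    if h(client_value) == server_state:
        return True, client_value
    return False, server_state
```

Each authentication consumes one chain value, and an attacker holding the server's state cannot invert h to produce the next expected value.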


Book ChapterDOI
07 Jun 2005
TL;DR: This work provides schemes, based on a technique the authors call chaff injection, for efficiently performing uncheatable grid computing in the context of searching for high-value rare events in the presence of coalitions of lazy and hoarding clients.
Abstract: High-value rare-event searching is arguably the most natural application of grid computing, where computational tasks are distributed to a large collection of clients (which comprise the computation grid) in such a way that clients are rewarded for performing tasks assigned to them. Although natural, rare-event searching presents significant challenges for a computation supervisor, who partitions and distributes the search space out to clients while contending with “lazy” clients, who don't do all their tasks, and “hoarding” clients, who don't report rare events back to the supervisor. We provide schemes, based on a technique we call chaff injection, for efficiently performing uncheatable grid computing in the context of searching for high-value rare events in the presence of coalitions of lazy and hoarding clients.

34 citations
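One way to picture chaff injection is as an audit against planted decoys; a hypothetical toy sketch (not the paper's scheme, and real chaff must be computationally indistinguishable from true rare events):

```python
import random

def make_assignment(real_space, chaff, seed=0):
    """Supervisor mixes planted decoy 'rare events' (chaff) into the
    search space handed to a client; the client cannot tell them apart."""
    tasks = list(real_space) + list(chaff)
    random.Random(seed).shuffle(tasks)
    return tasks

def audit(reported, chaff):
    """Flag a lazy or hoarding client: every planted chaff event must
    appear in its report, since skipping work or withholding hits
    also drops some of the chaff."""
    reported = set(reported)
    return all(c in reported for c in chaff)
```

A client coalition that hoards a real rare event still has to return all chaff it was planted, which is what makes withholding detectable.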


Book ChapterDOI
12 Sep 2005
TL;DR: The tree-confluent graphs are generalized to a broader class called Δ-confluent graphs, which coincides with the well-known class of distance-hereditary graphs; some results about their visualization are also given.
Abstract: We generalize the tree-confluent graphs to a broader class of graphs called Δ-confluent graphs. This class of graphs and distance-hereditary graphs, a well-known class of graphs, coincide. Some results about the visualization of Δ-confluent graphs are also given.

31 citations


Book ChapterDOI
15 Aug 2005
TL;DR: Efficient non-adaptive and two-stage combinatorial group testing algorithms are presented, which identify the at most d items out of a given set of n items that are defective, using fewer tests for all practical set sizes.
Abstract: We study practically efficient methods for performing combinatorial group testing. We present efficient non-adaptive and two-stage combinatorial group testing algorithms, which identify the at most d items out of a given set of n items that are defective, using fewer tests for all practical set sizes. For example, our two-stage algorithm matches the information theoretic lower bound for the number of tests in a combinatorial group testing regimen.

Proceedings ArticleDOI
08 May 2005
TL;DR: Two new approaches to improving the integrity of network broadcasts and multicasts with low storage and computation overhead are presented, including a leapfrog linking protocol and a novel key predistribution scheme that allows end-to-end integrity checking as well as improved hop-by-hop integrity checking.
Abstract: We present two new approaches to improving the integrity of network broadcasts and multicasts with low storage and computation overhead. The first approach is a leapfrog linking protocol for securing the integrity of packets as they traverse a network during a broadcast, such as in the setup phase for link-state routing. This technique allows each router to gain confidence about the integrity of a packet before passing it on to the next router; hence, allows many integrity violations to be stopped immediately in their tracks. The second approach is a novel key predistribution scheme that we use in conjunction with a small number of hashed message authentication codes (HMAC), which allows end-to-end integrity checking as well as improved hop-by-hop integrity checking. Our schemes are suited to environments, such as in ad hoc and overlay networks, where routers can share only a small number of symmetric keys. Moreover, our protocols do not use encryption (which, of course, can be added as an optional security enhancement). Instead, security is based strictly on the use of one-way hash functions; hence, our algorithms are considerably faster than those based on traditional public-key signature schemes. This improvement in speed comes with only modest reductions in the security for broadcasting, as our schemes can tolerate small numbers of malicious routers, provided they do not form significant cooperating coalitions.
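The HMAC building block itself is standard; a minimal sketch using Python's `hmac` module, with a hypothetical pre-distributed symmetric key (this illustrates generic hop-by-hop tag checking, not the leapfrog linking protocol or the key predistribution scheme themselves):

```python
import hmac
import hashlib

def tag(key: bytes, packet: bytes) -> bytes:
    """HMAC-SHA256 tag a router attaches before forwarding a packet."""
    return hmac.new(key, packet, hashlib.sha256).digest()

def verify(key: bytes, packet: bytes, mac: bytes) -> bool:
    """Constant-time check performed by the next hop; any modification
    of the packet in transit invalidates the tag."""
    return hmac.compare_digest(tag(key, packet), mac)
```

Because only hash computations are involved, each hop's check is far cheaper than verifying a public-key signature, which is the speed advantage the paper builds on.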

Posted Content
TL;DR: This paper investigates the use of audio for human-assisted authentication of previously un-associated devices and develops and evaluates a system called Loud-and-Clear (L&C), which is suitable for secure device pairing and similar tasks.
Abstract: Secure pairing of electronic devices that lack any previous association is a challenging problem which has been considered in many contexts and in various flavors. In this paper, we investigate the use of audio for human-assisted authentication of previously un-associated devices. We develop and evaluate a system we call Loud-and-Clear (L&C) which places very little demand on the human user. L&C involves the use of a text-to-speech (TTS) engine for vocalizing a robust-sounding and syntactically-correct (English-like) sentence derived from the hash of a device’s public key. By coupling vocalization on one device with the display of the same information on another device, we demonstrate that L&C is suitable for secure device pairing (e.g., key exchange) and similar tasks. We also describe several common use cases, provide some performance data for our prototype implementation and discuss the security properties of L&C.
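The core idea, hashing a public key into something speakable, can be sketched with a hypothetical tiny word list (L&C itself renders a robust-sounding, syntactically correct English-like sentence via a TTS engine, which this sketch does not reproduce):

```python
import hashlib

# Hypothetical word list for illustration only; a real scheme needs a
# much larger list of phonetically distinct words.
WORDS = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot",
         "golf", "hotel", "india", "juliet", "kilo", "lima",
         "mike", "november", "oscar", "papa"]

def vocalizable_digest(public_key: bytes, n_words: int = 6) -> str:
    """Map the hash of a device's public key to a short word sequence
    that one device can speak and the other can display, letting the
    user compare them."""
    digest = hashlib.sha256(public_key).digest()
    return " ".join(WORDS[b % len(WORDS)] for b in digest[:n_words])
```

Both devices derive the phrase from the same key material, so a mismatch heard by the user signals a man-in-the-middle during pairing.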

Proceedings Article
01 Jan 2005
TL;DR: The specification of Accredited DomainKeys provides a mechanism for historical non-repudiation of email messages sent from a given domain, which is useful for the enforcement of acceptable usage policies.
Abstract: We present an architecture called Accredited DomainKeys, which builds on the DomainKeys email authentication infrastructure to address the following questions: • “Did the sender actually send this email?” • “Is the sender of this email trustworthy?” The proposed DomainKeys architecture already addresses the first question but not the second. Accredited DomainKeys strengthens the reliability of a positive answer to the first question and provides a mechanism to answer the second. In terms of infrastructure requirements, Accredited DomainKeys involves a modest additional use of DNS over the existing DomainKeys proposal. In addition, the specification of Accredited DomainKeys provides a mechanism for historical non-repudiation of email messages sent from a given domain, which is useful for the enforcement of acceptable usage policies. Several compliant implementations of Accredited DomainKeys are possible. This paper describes two implementations, one based on time-stamped signatures, and the other based on authenticated dictionaries and the secure transaction management system (STMS) architecture.

Posted Content
TL;DR: This work presents a new multi-dimensional data structure, called the skip quadtree or the skip octree, which has the well-defined "box"-shaped regions of region quadtrees and the logarithmic-height search and update hierarchical structure of skip lists.
Abstract: We present a new multi-dimensional data structure, which we call the skip quadtree (for point data in R^2) or the skip octree (for point data in R^d, with constant d>2). Our data structure combines the best features of two well-known data structures, in that it has the well-defined "box"-shaped regions of region quadtrees and the logarithmic-height search and update hierarchical structure of skip lists. Indeed, the bottom level of our structure is exactly a region quadtree (or octree for higher dimensional data). We describe efficient algorithms for inserting and deleting points in a skip quadtree, as well as fast methods for performing point location and approximate range queries.

Journal ArticleDOI
TL;DR: Solutions to several constrained polygon annulus placement problems for offset and scaled polygons are given, providing new efficient primitive operations for computational metrology and dimensional tolerancing.

01 Jan 2005
TL;DR: This paper gives a provably optimal-cost dynamic programming algorithm for gerrymandering on a single range-query attribute and proposes a family of heuristics for multiple range-query attributes, in a setting with range queries and point updates.
Abstract: Client-server databases that require query results to be up-to-date despite storing data that changes dynamically suffer from heavy communication costs. Client-side caching can help mitigate these costs, particularly when individual PUSH-PULL decisions are made for the different semantic regions in the data space. In the PUSH regions the server notifies the client about updates, and in the PULL regions the client sends queries to the server. We call the problem of partitioning the data space into PUSH-PULL regions to achieve the minimum possible communication cost for a given workload the problem of data gerrymandering. In this paper we present solutions under different communication cost models for a frequently encountered scenario: with range queries and point updates. Specifically, we give a provably optimal-cost dynamic programming algorithm for gerrymandering on a single range query attribute. We propose a family of heuristics for gerrymandering on multiple range query attributes. We also handle the dynamic case in which the workload evolves over time. We validate our methods through extensive experiments on real and synthetic data sets.
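To make the one-dimensional dynamic program concrete, here is a toy sketch under a hypothetical cost model (per-cell PUSH and PULL costs plus a penalty whenever adjacent cells change mode, so assignments naturally group into contiguous regions); the paper's actual communication cost models differ:

```python
def gerrymander_1d(push_cost, pull_cost, switch_cost):
    """Toy DP: assign each of n domain cells to PUSH or PULL, paying
    that cell's cost plus switch_cost whenever adjacent cells differ.
    Returns the minimum total cost over all assignments."""
    n = len(push_cost)
    # best[k] = cheapest way to handle cells 0..i with cell i in mode k
    best = [push_cost[0], pull_cost[0]]
    for i in range(1, n):
        stay_push, stay_pull = best
        best = [push_cost[i] + min(stay_push, stay_pull + switch_cost),
                pull_cost[i] + min(stay_pull, stay_push + switch_cost)]
    return min(best)
```

With a large switch cost the optimum collapses to a single region in one mode; with a zero switch cost each cell independently picks its cheaper mode, which bounds the optimum from below.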

Journal ArticleDOI
TL;DR: In this paper, the authors combine the idea of confluent drawings with Sugiyama style drawings, in order to reduce the edge crossings in the resultant drawings, and it is easier to understand the structures of graphs from the mixed style drawings.
Abstract: We combine the idea of confluent drawings with Sugiyama style drawings, in order to reduce the edge crossings in the resultant drawings. Furthermore, it is easier to understand the structures of graphs from the mixed style drawings. The basic idea is to cover a layered graph by complete bipartite subgraphs (bicliques), then replace bicliques with tree-like structures. The biclique cover problem is reduced to a special edge coloring problem and solved by heuristic coloring algorithms. Our method can be extended to obtain multi-depth confluent layered drawings.
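The replace-bicliques-with-tree-like-structures step can be sketched as follows (a hypothetical helper over directed A→B edge pairs, not the paper's coloring-based algorithm): if every pair across (A, B) is an edge, those |A|·|B| crossing-prone edges are replaced by |A|+|B| curves through a single junction.

```python
def merge_biclique(edges, A, B):
    """If (A, B) induces a complete bipartite subgraph in `edges`
    (a set of (u, v) pairs), replace its |A|*|B| edges with a 'track':
    one junction node linked to every vertex of A and of B.
    Returns (remaining_edges, track_edges), or None if not a biclique."""
    if not all((a, b) in edges for a in A for b in B):
        return None
    junction = ("track", tuple(sorted(A)), tuple(sorted(B)))
    remaining = {e for e in edges if not (e[0] in A and e[1] in B)}
    tracks = [(a, junction) for a in A] + [(junction, b) for b in B]
    return remaining, tracks
```

Each merged biclique removes all of its internal crossings between layers, which is the source of the crossing reduction in the mixed-style drawings.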

Posted Content
TL;DR: This work presents skip-webs, a framework for designing efficient distributed data structures for multi-dimensional data, applicable to a general class of data querying scenarios including linear (one-dimensional) data, such as sorted sets, as well as multi-dimensional data, such as d-dimensional octrees and digital tries of character strings defined over a fixed alphabet.
Abstract: We present a framework for designing efficient distributed data structures for multi-dimensional data. Our structures, which we call skip-webs, extend and improve previous randomized distributed data structures, including skipnets and skip graphs. Our framework applies to a general class of data querying scenarios, which include linear (one-dimensional) data, such as sorted sets, as well as multi-dimensional data, such as d-dimensional octrees and digital tries of character strings defined over a fixed alphabet. We show how to perform a query over such a set of n items spread among n hosts using O(log n / log log n) messages for one-dimensional data, or O(log n) messages for fixed-dimensional data, while using only O(log n) space per host. We also show how to make such structures dynamic so as to allow for insertions and deletions in O(log n) messages for quadtrees, octrees, and digital tries, and O(log n / log log n) messages for one-dimensional data. Finally, we show how to apply a blocking strategy to skip-webs to further improve message complexity for one-dimensional data when hosts can store more data.

Posted Content
TL;DR: In this paper, the authors generalize tree-confluent graphs to a broader class called Delta-confluent graphs, which coincides with the well-known class of distance-hereditary graphs.
Abstract: We generalize the tree-confluent graphs to a broader class of graphs called Delta-confluent graphs. This class of graphs and distance-hereditary graphs, a well-known class of graphs, coincide. Some results about the visualization of Delta-confluent graphs are also given.

Book ChapterDOI
15 Aug 2005
TL;DR: Two-dimensional spaces are revisited, and it is shown that, for any given set of 3 partitioning planes, it is not only possible to construct such trees but also to derive a simple closed-form upper bound on the aspect ratio.
Abstract: Spatial databases support a variety of geometric queries on point data such as range searches, nearest neighbor searches, etc. Balanced Aspect Ratio (BAR) trees are hierarchical space decomposition structures that are general-purpose and space-efficient, and, in addition, enjoy a worst case performance poly-logarithmic in the number of points for approximate queries. They maintain limits on their depth, as well as on the aspect ratio (intuitively, how skinny the regions can be). BAR trees were initially developed for 2 dimensional spaces and a fixed set of partitioning planes, and then extended to d dimensional spaces and more general partitioning planes. Here we revisit 2 dimensional spaces and show that, for any given set of 3 partitioning planes, it is not only possible to construct such trees, it is also possible to derive a simple closed-form upper bound on the aspect ratio. This bound, and the resulting algorithm, are much simpler than what is known for general BAR trees. We call the resulting BAR trees Parameterized BAR trees and empirically evaluate them for different partitioning planes. Our experiments show that our theoretical bound converges to the empirically obtained values in the lower ranges, and also make a case for using evenly oriented partitioning planes.