
Showing papers by "Michael T. Goodrich published in 2008"


Journal ArticleDOI
TL;DR: An approach to IP traceback based on the probabilistic packet marking paradigm, which is called randomize-and-link, uses large checksum cords to "link" message fragments in a way that is highly scalable, for the checksums serve both as associative addresses and data integrity verifiers.
Abstract: This paper presents an approach to IP traceback based on the probabilistic packet marking paradigm. Our approach, which we call randomize-and-link, uses large checksum cords to "link" message fragments in a way that is highly scalable, for the checksums serve both as associative addresses and data integrity verifiers. The main advantage of these checksum cords is that they spread the addresses of possible router messages across a spectrum that is too large for the attacker to easily create messages that collide with legitimate messages.
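For illustration, here is a minimal Python sketch of the randomize-and-link idea under stated assumptions: the fragment count, checksum width, and marking probability below are illustrative choices, not the paper's exact encoding.

    import hashlib
    import random

    FRAGS = 4  # fragments per router message (illustrative)

    def make_marks(router_msg: bytes):
        # The checksum cord acts as both an associative address and an integrity check.
        cord = hashlib.sha256(router_msg).digest()[:4]
        size = (len(router_msg) + FRAGS - 1) // FRAGS
        return [(i, router_msg[i * size:(i + 1) * size], cord) for i in range(FRAGS)]

    def mark_packet(packet: dict, router_msg: bytes):
        # Probabilistic packet marking: with small probability, overwrite the marking field.
        if random.random() < 1 / 25:
            packet["mark"] = random.choice(make_marks(router_msg))

    def reassemble(marks):
        # Victim side: group fragments by cord, accept only checksum-verified reassemblies.
        by_cord = {}
        for idx, frag, cord in marks:
            by_cord.setdefault(cord, {})[idx] = frag
        messages = []
        for cord, frags in by_cord.items():
            if len(frags) == FRAGS:
                msg = b"".join(frags[i] for i in range(FRAGS))
                if hashlib.sha256(msg).digest()[:4] == cord:
                    messages.append(msg)
        return messages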

128 citations


Proceedings ArticleDOI
14 Jun 2008
TL;DR: This paper presents two sorting algorithms, a distribution sort and a mergesort, and studies sorting lower bounds in a computational model, called the parallel external-memory (PEM) model, that formalizes the essential properties of the algorithms for private-cache CMPs.
Abstract: In this paper, we study parallel algorithms for private-cache chip multiprocessors (CMPs), focusing on methods for foundational problems that are scalable with the number of cores. By focusing on private-cache CMPs, we show that we can design efficient algorithms that need no additional assumptions about the way cores are interconnected, for we assume that all inter-processor communication occurs through the memory hierarchy. We study several fundamental problems, including prefix sums, selection, and sorting, which often form the building blocks of other parallel algorithms. Indeed, we present two sorting algorithms, a distribution sort and a mergesort. Our algorithms are asymptotically optimal in terms of parallel cache accesses and space complexity under reasonable assumptions about the relationships between the number of processors, the size of memory, and the size of cache blocks. In addition, we study sorting lower bounds in a computational model, which we call the parallel external-memory (PEM) model, that formalizes the essential properties of our algorithms for private-cache CMPs.
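As a rough illustration of the block-structured, two-pass style of such algorithms, the Python sketch below computes prefix sums with each of p simulated "cores" scanning only its own contiguous block. It is a shared-memory toy: Python threads do not model the parallel cache-block transfers that the PEM model actually counts, and the names are ours, not the paper's.

    from concurrent.futures import ThreadPoolExecutor
    from itertools import accumulate

    def blockwise_prefix_sums(a, p=4):
        # Split the input into p contiguous blocks, one per simulated core.
        size = (len(a) + p - 1) // p
        blocks = [a[i * size:(i + 1) * size] for i in range(p)]

        # Pass 1: each core computes prefix sums of its own block independently.
        with ThreadPoolExecutor(max_workers=p) as pool:
            local = list(pool.map(lambda b: list(accumulate(b)), blocks))

        # Exclusive prefix sums of the block totals give each block's starting offset.
        offsets = list(accumulate((blk[-1] if blk else 0 for blk in local), initial=0))

        # Pass 2: each core adds its offset to its local results.
        with ThreadPoolExecutor(max_workers=p) as pool:
            fixed = list(pool.map(lambda t: [x + t[1] for x in t[0]], zip(local, offsets)))

        return [x for blk in fixed for x in blk]

    assert blockwise_prefix_sums(list(range(1, 11))) == list(accumulate(range(1, 11)))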

127 citations


Proceedings ArticleDOI
05 Nov 2008
TL;DR: In this article, the authors study real-world road networks from an algorithmic perspective, focusing on empirical studies that yield useful properties of road networks that can be exploited in the design of fast algorithms that deal with geographic data.
Abstract: This paper studies real-world road networks from an algorithmic perspective, focusing on empirical studies that yield useful properties of road networks that can be exploited in the design of fast algorithms that deal with geographic data. Unlike previous approaches, our study is not based on the assumption that road networks are planar graphs. Indeed, based on a number of experiments we have performed on the road networks of the 50 United States and the District of Columbia, we provide strong empirical evidence that road networks are quite non-planar. Our approach is instead directed at finding algorithmically-motivated properties of road networks as non-planar geometric graphs, focusing on alternative properties of road networks that can still lead to efficient algorithms for such problems as shortest paths and Voronoi diagrams. In particular, we study road networks as multiscale-dispersed graphs, a concept we formalize in terms of disk neighborhood systems. This approach allows us to develop fast algorithms for road networks without making any additional assumptions about the distribution of edge weights. In fact, our algorithms can allow for non-metric weights.
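As a concrete, deliberately simplified illustration of the disk-neighborhood idea, the sketch below counts how many edges of a geometric graph pass within a given radius of a vertex. The paper's formal definition of disk neighborhood systems is more refined, so treat this only as a density diagnostic.

    import math

    def disk_neighborhood_size(center, radius, edges):
        # edges: iterable of ((x1, y1), (x2, y2)) straight-line segments.
        cx, cy = center

        def dist_to_segment(p, q):
            (x1, y1), (x2, y2) = p, q
            dx, dy = x2 - x1, y2 - y1
            if dx == 0 and dy == 0:
                return math.hypot(cx - x1, cy - y1)
            t = ((cx - x1) * dx + (cy - y1) * dy) / (dx * dx + dy * dy)
            t = max(0.0, min(1.0, t))  # clamp the projection onto the segment
            return math.hypot(cx - (x1 + t * dx), cy - (y1 + t * dy))

        return sum(1 for p, q in edges if dist_to_segment(p, q) <= radius)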

103 citations


Journal ArticleDOI
02 Jul 2008
TL;DR: Algorithms are described for canonically partitioning semi-regular quadrilateral meshes into structured submeshes, using an adaptation of the geometric motorcycle graph of Eppstein and Erickson to quad meshes; the resulting partitions may be used to efficiently find isomorphisms between quad meshes.
Abstract: We describe algorithms for canonically partitioning semi-regular quadrilateral meshes into structured submeshes, using an adaptation of the geometric motorcycle graph of Eppstein and Erickson to quad meshes. Our partitions may be used to efficiently find isomorphisms between quad meshes. In addition, they may be used as a highly compressed representation of the original mesh. These partitions can be constructed in sublinear time from a list of the extraordinary vertices in a mesh. We also study the problem of further reducing the number of submeshes in our partitions---we prove that optimizing this number is NP-hard, but it can be efficiently approximated.
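For context, the motorcycle-graph construction is seeded at the extraordinary vertices of the mesh. The hypothetical helper below merely finds those seeds (vertices whose degree is not 4) from a quad list; it ignores the boundary handling a real implementation needs.

    from collections import defaultdict

    def extraordinary_vertices(quads):
        # quads: list of 4-tuples of vertex ids, each listed in cyclic order.
        neighbors = defaultdict(set)
        for a, b, c, d in quads:
            for u, v in ((a, b), (b, c), (c, d), (d, a)):
                neighbors[u].add(v)
                neighbors[v].add(u)
        # In a structured (regular) region every interior vertex has exactly 4 neighbors.
        return {v for v, nbrs in neighbors.items() if len(nbrs) != 4}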

67 citations


Book ChapterDOI
08 Apr 2008
TL;DR: This work develops new algorithmic and cryptographic techniques for authenticating the results of queries over databases that are outsourced to an untrusted responder by adopting the decoupling of query answering and answer verification in a way designed for queries related to range search.
Abstract: We develop new algorithmic and cryptographic techniques for authenticating the results of queries over databases that are outsourced to an untrusted responder. We depart from previous approaches by considering super-efficient answer verification, where answers to queries are validated in time asymptotically less than the time spent to produce them, using lightweight cryptographic operations. We achieve this property by decoupling query answering from answer verification in a way designed for queries related to range search. Our techniques allow for efficient updates of the database and protect against replay attacks performed by the responder. One such technique uses an off-line audit mechanism: the data source and the user keep digests of the sequence of operations, yet are able to jointly audit the responder to determine if a replay attack has occurred since the last audit.
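The off-line audit mechanism can be pictured with a simple hash chain: source and user each fold the operation sequence into a running digest and compare digests at audit time. The sketch below is a bare-bones illustration with made-up operation strings; the actual protocol ties these digests to the range-search authentication structure.

    import hashlib

    def extend_digest(digest: bytes, operation: str) -> bytes:
        # Hash-chain one operation onto a running digest.
        return hashlib.sha256(digest + operation.encode()).digest()

    ops = ["insert key=17", "range [10, 20]", "delete key=17"]  # hypothetical operation log

    source_digest = b"\x00" * 32  # kept by the data source
    user_digest = b"\x00" * 32    # kept independently by the user
    for op in ops:
        source_digest = extend_digest(source_digest, op)
        user_digest = extend_digest(user_digest, op)

    # At audit time, a mismatch would indicate a replay or omission by the responder.
    assert source_digest == user_digest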

60 citations


Posted Content
TL;DR: In this article, the authors describe an efficient method for drawing any n-vertex simple graph G in the hyperbolic plane, which produces greedy drawings, which support greedy geometric routing, so that a message M between any pair of vertices may be routed geometrically, simply by having each vertex that receives M pass it along to any neighbor that is closer to the message's eventual destination.
Abstract: We describe an efficient method for drawing any n-vertex simple graph G in the hyperbolic plane. Our algorithm produces greedy drawings, which support greedy geometric routing, so that a message M between any pair of vertices may be routed geometrically, simply by having each vertex that receives M pass it along to any neighbor that is closer in the hyperbolic metric to the message's eventual destination. More importantly, for networking applications, our algorithm produces succinct drawings, in that each of the vertex positions in one of our embeddings can be represented using O(log n) bits and the calculation of which neighbor to send a message to may be performed efficiently using these representations. These properties are useful, for example, for routing in sensor networks, where storage and bandwidth are limited.
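The greedy forwarding rule itself is simple. The sketch below assumes vertex positions in the Poincare disk (one standard model of the hyperbolic plane) and repeatedly hands the message to the neighbor closest to the destination; the embedding that guarantees such a strictly closer neighbor always exists, and the succinct coordinate representation, are the substance of the paper and are not reproduced here.

    import math

    def hyperbolic_dist(p, q):
        # Distance in the Poincare disk model; points must lie strictly inside the unit disk.
        (px, py), (qx, qy) = p, q
        diff2 = (px - qx) ** 2 + (py - qy) ** 2
        denom = (1 - px * px - py * py) * (1 - qx * qx - qy * qy)
        return math.acosh(1 + 2 * diff2 / denom)

    def greedy_route(pos, adj, s, t):
        # pos: vertex -> (x, y) in the disk; adj: vertex -> list of neighbors.
        path = [s]
        while s != t:
            nxt = min(adj[s], key=lambda v: hyperbolic_dist(pos[v], pos[t]))
            if hyperbolic_dist(pos[nxt], pos[t]) >= hyperbolic_dist(pos[s], pos[t]):
                raise ValueError("not a greedy drawing for this source/destination pair")
            path.append(nxt)
            s = nxt
        return path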

57 citations


Book ChapterDOI
15 Sep 2008
TL;DR: Athos, a new, platform-independent and user-transparent architecture for authenticated outsourced storage, is introduced, using light-weight cryptographic primitives and efficient data-structuring techniques to design authentication schemes that allow a client to efficiently verify that the file system is fully consistent with the exact history of updates and queries requested by the client.
Abstract: We study the problem of authenticated storage, where we wish to construct protocols that allow one to outsource any complex file system to an untrusted server and yet ensure the file system's integrity. We introduce Athos, a new, platform-independent and user-transparent architecture for authenticated outsourced storage. Using lightweight cryptographic primitives and efficient data-structuring techniques, we design authentication schemes that allow a client to efficiently verify that the file system is fully consistent with the exact history of updates and queries requested by the client. In Athos, file-system operations are verified in time that is logarithmic in the size of the file system, using optimal storage complexity: constant storage overhead at the client and asymptotically no extra overhead at the server. We provide a prototype implementation of Athos validating its performance and its authentication capabilities.
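The flavor of logarithmic-time verification with constant client storage can be conveyed by a generic hash-tree check: the client keeps only a root digest and verifies any block against a path of sibling hashes. This is only a sketch of the general principle; Athos's actual authentication structures and its consistency-with-history guarantees are more involved.

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def verify_block(block: bytes, index: int, sibling_path, root: bytes) -> bool:
        # sibling_path: sibling hashes ordered from the leaf level up to the root.
        node = h(block)
        for sibling in sibling_path:
            if index % 2 == 0:
                node = h(node + sibling)   # we are the left child
            else:
                node = h(sibling + node)   # we are the right child
            index //= 2
        return node == root  # client stores only `root`: constant storage, O(log n) work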

55 citations


Journal ArticleDOI
TL;DR: This work presents a new multi-dimensional data structure, called the skip quadtree or the skip octree, which has the well-defined “box”-shaped regions of region quadtrees and the logarithmic-height search and update hierarchical structure of skip lists.
Abstract: We present a new multi-dimensional data structure, which we call the skip quadtree (for point data in R^2) or the skip octree (for point data in R^d, with constant d > 2). Our data structure combines the best features of two well-known data structures, in that it has the well-defined “box”-shaped regions of region quadtrees and the logarithmic-height search and update hierarchical structure of skip lists. Indeed, the bottom level of our structure is exactly a region quadtree (or octree for higher dimensional data). We describe efficient algorithms for inserting and deleting points in a skip quadtree, as well as fast methods for performing point location, approximate range, and approximate nearest neighbor queries.
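The skip-list-like level structure can be sketched in a few lines: level 0 holds every point, and each point is independently promoted to the next level with probability 1/2, giving O(log n) levels in expectation. In the real data structure each level is a (compressed) region quadtree; here levels are plain sets, purely to show the promotion scheme.

    import random

    class SkipLevels:
        def __init__(self):
            self.levels = [set()]  # levels[0] will contain every inserted point

        def insert(self, point):
            level = 0
            while True:
                if level == len(self.levels):
                    self.levels.append(set())
                self.levels[level].add(point)
                if random.random() < 0.5:  # coin flip: promote to the next, sparser level
                    level += 1
                else:
                    break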

36 citations


Book ChapterDOI
15 Sep 2008
TL;DR: It is proved that the skeleton of a general polyhedron has a superquadratic complexity in the worst case and an implementation of an algorithm for the general case is reported.
Abstract: We study the straight skeleton of polyhedra in 3D. We first show that the skeleton of voxel-based polyhedra may be constructed by an algorithm taking constant time per voxel. We also describe a more complex algorithm for skeletons of voxel polyhedra, which takes time proportional to the surface area of the skeleton rather than the volume of the polyhedron. We also show that any n-vertex axis-parallel polyhedron has a straight skeleton with O(n^2) features. We provide algorithms for constructing the skeleton, which run in O(min(n^2 log n, k log^{O(1)} n)) time, where k is the output complexity. Next, we show that the straight skeleton of a general nonconvex polyhedron has an ambiguity, and we suggest a consistent method to resolve it. We prove that the skeleton of a general polyhedron has a superquadratic complexity in the worst case. Finally, we report on an implementation of an algorithm for the general case.

35 citations


Posted Content
TL;DR: In this article, the straight skeleton of polyhedra in three dimensions is studied: a simple voxel-sweeping algorithm constructs the skeleton of voxel-based polyhedra (polycubes) in constant time per voxel, and the skeleton of any n-vertex axis-parallel polyhedron has O(n^2) features and can be constructed in O(min(n^2 log n, k log^{O(1)} n)) time, where k is the output complexity.
Abstract: This paper studies the straight skeleton of polyhedra in three dimensions. We first address voxel-based polyhedra (polycubes), formed as the union of a collection of cubical (axis-aligned) voxels. We analyze the ways in which the skeleton may intersect each voxel of the polyhedron, and show that the skeleton may be constructed by a simple voxel-sweeping algorithm taking constant time per voxel. In addition, we describe a more complex algorithm for straight skeletons of voxel-based polyhedra, which takes time proportional to the area of the surfaces of the straight skeleton rather than the volume of the polyhedron. We also consider more general polyhedra with axis-parallel edges and faces, and show that any n-vertex polyhedron of this type has a straight skeleton with O(n^2) features. We provide algorithms for constructing the straight skeleton, with running time O(min(n^2 log n, k log^{O(1)} n)) where k is the output complexity. Next, we discuss the straight skeleton of a general nonconvex polyhedron. We show that it has an ambiguity issue, and suggest a consistent method to resolve it. We prove that the straight skeleton of a general polyhedron has a superquadratic complexity in the worst case. Finally, we report on an implementation of a simple algorithm for the general case.

26 citations


Journal ArticleDOI
TL;DR: Improved cheater detection algorithms that utilize the natural delays that exist in long-term grid computations are presented; these algorithms can tolerate collusions of lazy cheaters, even if the number of such cheaters is a fraction of the total number of participants.

Journal ArticleDOI
TL;DR: These algorithms are based on a planarization method that “zeros in” on edge crossings, together with methods for applying planar separator decompositions to geometric graphs with sublinearly many crossings.
Abstract: We provide linear-time algorithms for geometric graphs with sublinearly many crossings. That is, we provide algorithms running in O(n) time on connected geometric graphs having n vertices and k crossings, where k is smaller than n by an iterated logarithmic factor. Specific problems we study include Voronoi diagrams and single-source shortest paths. Our algorithms all run in linear time in the standard comparison-based computational model; hence, we make no assumptions about the distribution or bit complexities of edge weights, nor do we utilize unusual bit-level operations on memory words. Instead, our algorithms are based on a planarization method that "zeroes in" on edge crossings, together with methods for extending planar separator decompositions to geometric graphs with sublinearly many crossings. Incidentally, our planarization algorithm also solves an open computational geometry problem of Chazelle for triangulating a self-intersecting polygonal chain having n segments and k crossings in linear time, for the case when k is sublinear in n by an iterated logarithmic factor.
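To make the parameter k concrete, the brute-force check below counts proper crossings among the straight-line edges of a geometric graph. It runs in quadratic time and is only a diagnostic for whether k lies in the sublinear regime these algorithms require; it is not part of the linear-time machinery itself.

    from itertools import combinations

    def count_crossings(edges):
        # edges: iterable of ((x1, y1), (x2, y2)) straight-line segments in general position.
        def orient(a, b, c):
            return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

        def crosses(e, f):
            (p1, p2), (p3, p4) = e, f
            d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
            d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
            return d1 * d2 < 0 and d3 * d4 < 0  # strict crossing; shared endpoints don't count

        return sum(1 for e, f in combinations(list(edges), 2) if crosses(e, f))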

Proceedings ArticleDOI
05 Nov 2008
TL;DR: This work provides an efficient algorithm for two-site Voronoi diagrams in geographic networks; such a diagram labels each vertex in a geographic network with its two nearest neighbors.
Abstract: We provide an efficient algorithm for two-site Voronoi diagrams in geographic networks. A two-site Voronoi diagram labels each vertex in a geographic network with its two nearest neighbors, which is useful in many contexts.
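One generic way to compute such labels on any nonnegatively weighted graph is a multi-source Dijkstra that lets every vertex be settled at most twice, once per distinct site; the sketch below shows that baseline. It is not the paper's algorithm, whose efficiency comes from exploiting road-network structure, and it assumes vertex and site identifiers are comparable (used only for tie-breaking in the heap).

    import heapq

    def two_nearest_sites(adj, sites):
        # adj: vertex -> list of (neighbor, nonnegative weight); sites: iterable of site vertices.
        labels = {}  # vertex -> list of (distance, site), at most two entries with distinct sites
        pq = [(0, s, s) for s in sites]
        heapq.heapify(pq)
        while pq:
            d, u, site = heapq.heappop(pq)
            mine = labels.setdefault(u, [])
            if len(mine) >= 2 or any(s == site for _, s in mine):
                continue  # u already has two labels, or is already settled for this site
            mine.append((d, site))
            for v, w in adj[u]:
                heapq.heappush(pq, (d + w, v, site))
        return labels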

Journal ArticleDOI
TL;DR: An efficient implementation of the notarized federated identity management model based on the Secure Transaction Management System (STMS) is presented, which enables one to proactively prevent the leaking of secret identity information.
Abstract: We propose a notarized federated identity management model that supports efficient user authentication when providers are unknown to each other. Our model introduces a notary service, owned by a trusted third-party, to dynamically notarize assertions generated by identity providers. An additional feature of our model is the avoidance of direct communications between identity providers and service providers, which provides improved privacy protection for users. We present an efficient implementation of our notarized federated identity management model based on the Secure Transaction Management System (STMS). We also give a practical solution for mitigating aspects of the identity theft problem and discuss its use in our notarized federated identity management model. The unique feature of our cryptographic solution is that it enables one to proactively prevent the leaking of secret identity information.

01 Jan 2008
TL;DR: This paper shows that greedy geometric routing schemes exist in R^2, for 3-connected planar graphs, using the standard Euclidean metric, with coordinates that can be represented succinctly, that is, with O(log n) bits.
Abstract: In greedy geometric routing, messages are passed in a network embedded in a metric space according to the greedy strategy of always forwarding messages to nodes that are closer to the destination. In this paper, we study greedy geometric routing in R^2, using the standard Euclidean metric. Greedy geometric routing is not always possible in fixed-dimensional Euclidean spaces, of course, as is the case, for example, with star graphs. Nevertheless, we show that such greedy geometric routing schemes exist in R^2, for 3-connected planar graphs, using the standard Euclidean metric, with coordinates that can be represented succinctly, that is, with O(log n) bits. Moreover, our embedding strategy introduces a coordinate system for R^2 that supports distance comparisons using our succinct coordinates. Thus, our scheme can be used to significantly reduce bandwidth, space, and header size over other recently discovered greedy geometric routing implementations for R^2.

01 Jan 2008
TL;DR: This paper presents an approach to IP traceback based on the probabilistic packet marking paradigm that uses large checksum cords to "link" message fragments in a way that is highly scalable, for the checksums serve both as associative addresses and data integrity verifiers.
Abstract: This paper presents an approach to IP traceback based on the probabilistic packet marking paradigm. Our approach, which we call randomize-and-link, uses large checksum cords to "link" message fragments in a way that is highly scalable, for the checksums serve both as associative addresses and data integrity verifiers. The main advantage of these checksum cords is that they spread the addresses of possible router messages across a spectrum that is too large for the attacker to easily create messages that collide with legitimate messages. Index Terms: Associative addresses, checksum cords, distributed denial of service (DDoS), IP, probabilistic packet marking, traceback.

Proceedings ArticleDOI
04 Jun 2008
TL;DR: This work provides a "lazy-greedy" algorithm that is guaranteed to find good matches when mis-matching portions of mesh are localized, and provides empirical evidence that this approach produces good matches between similar quad meshes.
Abstract: We study approximate topological matching of quadrilateral meshes, that is, the problem of finding as large a set as possible of matching portions of two quadrilateral meshes. This study is motivated by applications in graphics that involve shape modeling whose results need to be merged in order to produce a final unified representation of an object. We show that the problem of producing a maximum approximate topological match of two quad meshes is NP-hard. Given this result, which makes an efficient exact solution unlikely, we show that the natural greedy algorithm derived from polynomial-time graph isomorphism can produce poor results, even when it is possible to find matches with only a few non-matching quads. Nevertheless, we provide a "lazy-greedy" algorithm that is guaranteed to find good matches when mis-matching portions of the mesh are localized. Finally, we provide empirical evidence that this approach produces good matches between similar quad meshes.
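For intuition, the naive greedy expansion that the paper analyzes can be sketched as lockstep growth over the dual graphs of the two meshes, starting from a seed pair of quads. The helper below is hypothetical: it assumes both duals index their four neighbors consistently (in practice one must try the four rotations), and it omits the safeguards that the paper's lazy-greedy algorithm adds.

    from collections import deque

    def greedy_expand(dual1, dual2, seed1, seed2):
        # dual1[q], dual2[q]: length-4 lists giving the neighboring quad across each side (or None).
        match = {seed1: seed2}
        used = {seed2}
        queue = deque([(seed1, seed2)])
        while queue:
            q1, q2 = queue.popleft()
            for side in range(4):
                n1, n2 = dual1[q1][side], dual2[q2][side]
                if n1 is None or n2 is None:
                    continue
                if n1 not in match and n2 not in used:
                    match[n1] = n2
                    used.add(n2)
                    queue.append((n1, n2))
        return match  # a (possibly poor) partial correspondence between the two meshes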

Posted Content
TL;DR: It is shown that greedy geometric routing schemes exist for the Euclidean metric in R^2, for 3-connected planar graphs, with coordinates that can be represented succinctly, that is, with O(log n) bits, where n is the number of vertices in the graph.
Abstract: In greedy geometric routing, messages are passed in a network embedded in a metric space according to the greedy strategy of always forwarding messages to nodes that are closer to the destination. We show that greedy geometric routing schemes exist for the Euclidean metric in R^2, for 3-connected planar graphs, with coordinates that can be represented succinctly, that is, with O(log n) bits, where n is the number of vertices in the graph. Moreover, our embedding strategy introduces a coordinate system for R^2 that supports distance comparisons using our succinct coordinates. Thus, our scheme can be used to significantly reduce bandwidth, space, and header size over other recently discovered greedy geometric routing implementations for R^2.

Posted Content
27 Aug 2008
TL;DR: This paper studies real-world road networks from an algorithmic perspective, focusing on empirical studies that yield useful properties of road networks that can be exploited in the design of fast algorithms that deal with geographic data.
Abstract: This paper studies real-world road networks from an algorithmic perspective, focusing on empirical studies that yield useful properties of road networks that can be exploited in the design of fast algorithms that deal with geographic data. Unlike previous approaches, our study is not based on the assumption that road networks are planar graphs. Indeed, based on a number of experiments we have performed on the road networks of the 50 United States and the District of Columbia, we provide strong empirical evidence that road networks are quite non-planar. Our approach is instead directed at finding algorithmically-motivated properties of road networks as non-planar geometric graphs, focusing on alternative properties of road networks that can still lead to efficient algorithms for such problems as shortest paths and Voronoi diagrams. In particular, we study road networks as multiscale-dispersed graphs, a concept we formalize in terms of disk neighborhood systems. This approach allows us to develop fast algorithms for road networks without making any additional assumptions about the distribution of edge weights. In fact, our algorithms can allow for non-metric weights.