
Showing papers by "Michael T. Goodrich published in 2010"


Posted Content
TL;DR: In this article, the oblivious RAM simulation problem is studied, with schemes achieving a small logarithmic or polylogarithmic amortized increase in access times and a very high probability of success, while keeping the external storage of size O(n).
Abstract: Suppose a client, Alice, has outsourced her data to an external storage provider, Bob, because he has capacity for her massive data set, of size n, whereas her private storage is much smaller--say, of size O(n^{1/r}), for some constant r > 1. Alice trusts Bob to maintain her data, but she would like to keep its contents private. She can encrypt her data, of course, but she also wishes to keep her access patterns hidden from Bob as well. We describe schemes for the oblivious RAM simulation problem with a small logarithmic or polylogarithmic amortized increase in access times, with a very high probability of success, while keeping the external storage of size O(n). To achieve this, our algorithmic contributions include a parallel MapReduce cuckoo-hashing algorithm and an external-memory data-oblivious sorting algorithm.
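One of the building blocks named above is cuckoo hashing. The following is a minimal sequential sketch of the primitive only, not the paper's parallel MapReduce construction; the table size, hash functions, and eviction bound below are arbitrary illustrative choices.

```python
# Minimal cuckoo hash table: two tables, two hash functions; an insertion
# that finds its slot occupied evicts the occupant and re-inserts it in the
# other table, repeating until a free slot is found or a cycle is suspected.

class CuckooHash:
    def __init__(self, size=11, max_kicks=50):
        self.size = size
        self.max_kicks = max_kicks
        self.t1 = [None] * size
        self.t2 = [None] * size

    def _h1(self, key):
        return hash(key) % self.size

    def _h2(self, key):
        return (hash(key) // self.size) % self.size

    def insert(self, key):
        for _ in range(self.max_kicks):
            i = self._h1(key)
            if self.t1[i] is None:
                self.t1[i] = key
                return True
            key, self.t1[i] = self.t1[i], key   # evict occupant of table 1
            j = self._h2(key)
            if self.t2[j] is None:
                self.t2[j] = key
                return True
            key, self.t2[j] = self.t2[j], key   # evict occupant of table 2
        return False  # suspected cycle: a real implementation would rehash

    def contains(self, key):
        # Lookups probe exactly two slots, regardless of the data.
        return self.t1[self._h1(key)] == key or self.t2[self._h2(key)] == key
```

The two-probe lookup is what makes cuckoo hashing attractive in the oblivious setting: the access pattern of a query is fixed by the hash functions alone.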

194 citations


Proceedings ArticleDOI
17 Jan 2010
TL;DR: This algorithm is a simple, randomized, data-oblivious version of the Shellsort algorithm that always runs in O(n log n) time and succeeds in sorting any given input permutation with very high probability.
Abstract: In this paper, we describe a randomized Shellsort algorithm. This algorithm is a simple, randomized, data-oblivious version of the Shellsort algorithm that always runs in O(n log n) time and succeeds in sorting any given input permutation with very high probability. Taken together, these properties imply applications in the design of new efficient privacy-preserving computations based on the secure multi-party computation (SMC) paradigm. In addition, by a trivial conversion of this Monte Carlo algorithm to its Las Vegas equivalent, one gets the first version of Shellsort with a running time that is provably O(n log n) with very high probability.
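The key property here is data-obliviousness: the schedule of compare-exchange operations is fixed in advance and never depends on the input values. As a simple stand-in for the paper's O(n log n) randomized region compare-exchange passes, the sketch below uses the same primitive inside odd-even transposition sort, which is deterministic and takes O(n^2) comparisons.

```python
# Data-oblivious sorting: only the OUTCOMES of comparisons touch the data;
# the sequence of (i, j) pairs compared is fixed by n alone. Odd-even
# transposition sort is shown here as a simple oblivious schedule; it is
# not the paper's randomized Shellsort.

def compare_exchange(a, i, j):
    """Order positions i < j; an observer sees only (i, j), never the values."""
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def oblivious_sort(a):
    n = len(a)
    for rnd in range(n):                 # n alternating odd/even passes
        for i in range(rnd % 2, n - 1, 2):
            compare_exchange(a, i, i + 1)
    return a
```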

88 citations


Proceedings ArticleDOI
19 Apr 2010
TL;DR: All the solutions on a P-processor PEM model provide an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.
Abstract: In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest common ancestors, tree contraction and expression tree evaluation. We also study the problems of computing the connected and biconnected components of a graph, minimum spanning tree of a connected graph and ear decomposition of a biconnected graph. All our solutions on a P-processor PEM model provide an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.
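List ranking asks each element of a linked list for its distance to the tail. The classic PRAM-style pointer-doubling idea, which the paper makes I/O-efficient, can be sketched as follows; the memory layout and I/O accounting of the PEM algorithm are not modeled here.

```python
# List ranking by pointer doubling: in each synchronous round every node
# adds its successor's rank to its own and jumps its pointer ahead, so
# distances halve each round and ceil(log2 n) rounds suffice.

def list_rank(succ):
    """succ[i] is the successor of node i; the tail points to itself."""
    n = len(succ)
    succ = list(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    for _ in range(n.bit_length()):      # ~log2(n) doubling rounds
        rank = [rank[i] + (rank[succ[i]] if succ[i] != i else 0)
                for i in range(n)]       # read old ranks, write new ones
        succ = [succ[succ[i]] for i in range(n)]
    return rank
```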

47 citations



Journal ArticleDOI
TL;DR: The results explore what is achievable with straight-line drawings and what more is achievable with Lombardi-style drawings, with respect to drawings of trees with perfect angular resolution.
Abstract: We study methods for drawing trees with perfect angular resolution, i.e., with angles at each node v equal to 2π/d(v). We show: 1. Any unordered tree has a crossing-free straight-line drawing with perfect angular resolution and polynomial area. 2. There are ordered trees that require exponential area for any crossing-free straight-line drawing having perfect angular resolution. 3. Any ordered tree has a crossing-free Lombardi-style drawing (where each edge is represented by a circular arc) with perfect angular resolution and polynomial area. Thus, our results explore what is achievable with straight-line drawings and what more is achievable with Lombardi-style drawings, with respect to drawings of trees with perfect angular resolution.
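The defining constraint is easy to state in code: at a node of degree d(v), consecutive edges must be separated by exactly 2π/d(v). A tiny helper computing that per-node wedge angle (the layout algorithms themselves are well beyond this sketch):

```python
import math

def wedge_angles(adj):
    """Angle between consecutive edges at each node under perfect angular
    resolution: 2*pi / d(v). Input: adjacency dict of a tree."""
    return {v: 2 * math.pi / len(nbrs) for v, nbrs in adj.items() if nbrs}
```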

41 citations


Proceedings ArticleDOI
13 Apr 2010
TL;DR: This paper introduces a framework for secure two-party (S2P) computations, which is called bureaucratic computing, and demonstrates its efficiency by designing practical S2P computations for sorting, selection, and random permutation.
Abstract: In this paper, we introduce a framework for secure two-party (S2P) computations, which we call bureaucratic computing, and we demonstrate its efficiency by designing practical S2P computations for sorting, selection, and random permutation. In a nutshell, the main idea behind bureaucratic computing is to design data-oblivious algorithms that push all knowledge and influence of input values down to small black-box circuits, which are simulated using Yao's garbled paradigm. The practical benefit of this approach is that it maintains the zero-knowledge features of secure two-party computations while avoiding the significant computational overheads that come from trying to apply Yao's garbled paradigm to anything other than simple two-input functions.
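The shape of a bureaucratic computation can be sketched with a sorting network: the control flow is fixed, and every data-dependent decision lives inside a tiny two-input compare-exchange that, in a real S2P protocol, would be evaluated as a Yao garbled circuit. The sketch below uses Batcher's bitonic sorting network; it illustrates the structure only and is not the paper's specific construction.

```python
# Bitonic sorting network: the set of (i, j, direction) compare-exchange
# steps depends only on n, so an observer of the control flow learns
# nothing. Each blackbox step is where a garbled circuit would sit.

def blackbox_compare_exchange(a, i, j, ascending):
    """Two-input step; evaluated in the clear here, garbled in a real S2P run."""
    if (a[i] > a[j]) == ascending:
        a[i], a[j] = a[j], a[i]

def bitonic_sort(a):
    n = len(a)                      # n must be a power of two
    k = 2
    while k <= n:
        j = k // 2
        while j > 0:
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    blackbox_compare_exchange(a, i, partner, (i & k) == 0)
            j //= 2
        k *= 2
    return a
```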

41 citations


Proceedings ArticleDOI
02 Nov 2010
TL;DR: This work gives efficient data-oblivious algorithms for several fundamental geometric problems that are relevant to geographic information systems, including planar convex hulls and all-nearest neighbors, and is applicable to secure multiparty computation (SMC) protocols for geographic data used in location-based services.
Abstract: We give efficient data-oblivious algorithms for several fundamental geometric problems that are relevant to geographic information systems, including planar convex hulls and all-nearest neighbors. Our methods are "data-oblivious" in that they don't perform any data-dependent operations, with the exception of operations performed inside low-level blackbox circuits having a constant number of inputs and outputs. Thus, an adversary who observes the control flow of one of our algorithms, but who cannot see the inputs and outputs to the blackbox circuits, cannot learn anything about the input or output. This behavior makes our methods applicable to secure multiparty computation (SMC) protocols for geographic data used in location-based services. In SMC protocols, multiple parties wish to perform a computation on their combined data without revealing individual data to the other parties. For instance, our methods can be used to solve a problem posed by Du and Atallah, where Alice has a set, A, of m private points in the plane, Bob has another set, B, of n private points in the plane, and Alice and Bob want to jointly compute the convex hull of A ∪ B without disclosing any more information than what can be derived from the answer. In particular, neither Alice nor Bob want to reveal any of their respective points that are in the interior of the convex hull of A ∪ B.
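For reference, the functionality Alice and Bob jointly compute in the Du-Atallah problem is just the convex hull of the combined point set. The sketch below computes it with Andrew's standard monotone-chain algorithm; it is the plain, non-oblivious computation, not the paper's data-oblivious protocol.

```python
# Andrew's monotone chain convex hull: sort points, then build the lower
# and upper hulls with a cross-product turn test. Returns hull vertices
# in counterclockwise order, starting from the lexicographically smallest.

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # endpoints shared, drop duplicates
```

Note that interior points (the ones neither party wants to reveal) simply never appear in the output, which is why the answer itself leaks nothing about them.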

39 citations


Posted Content
TL;DR: Algorithms for finding Lombardi drawings of regular graphs, graphs of bounded degeneracy, and certain families of planar graphs are described.
Abstract: We introduce the notion of Lombardi graph drawings, named after the American abstract artist Mark Lombardi. In these drawings, edges are represented as circular arcs rather than as line segments or polylines, and the vertices have perfect angular resolution: the edges are equally spaced around each vertex. We describe algorithms for finding Lombardi drawings of regular graphs, graphs of bounded degeneracy, and certain families of planar graphs.

35 citations


Book ChapterDOI
21 Sep 2010
TL;DR: Lombardi drawings, as introduced in this paper, represent edges as circular arcs rather than as line segments or polylines, and give the vertices perfect angular resolution: the edges are equally spaced around each vertex.
Abstract: We introduce the notion of Lombardi graph drawings, named after the American abstract artist Mark Lombardi. In these drawings, edges are represented as circular arcs rather than as line segments or polylines, and the vertices have perfect angular resolution: the edges are equally spaced around each vertex. We describe algorithms for finding Lombardi drawings of regular graphs, graphs of bounded degeneracy, and certain families of planar graphs.

20 citations


Posted Content
TL;DR: In this article, a planar straight-line drawing of a combinatorially-embedded genus-g graph with the graph's canonical polygonal schema drawn as a convex polygonal external face is presented.
Abstract: We study the classic graph drawing problem of drawing a planar graph using straight-line edges with a prescribed convex polygon as the outer face. Unlike previous algorithms for this problem, which may produce drawings with exponential area, our method produces drawings with polynomial area. In addition, we allow for collinear points on the boundary, provided such vertices do not create overlapping edges. Thus, we solve an open problem of Duncan et al., which, when combined with their work, implies that we can produce a planar straight-line drawing of a combinatorially-embedded genus-g graph with the graph's canonical polygonal schema drawn as a convex polygonal external face.

20 citations


Posted Content
TL;DR: Efficient MapReduce simulations of parallel algorithms specified in the BSP and PRAM models are described, which result in efficient MapReduce algorithms for sorting, 1-dimensional all nearest-neighbors, 2-dimensional convex hulls, 3-dimensional convex hulls, and fixed-dimensional linear programming.
Abstract: In this paper, we describe efficient MapReduce simulations of parallel algorithms specified in the BSP and PRAM models. We also provide some applications of these simulation results to problems in parallel computational geometry for the MapReduce framework, which result in efficient MapReduce algorithms for sorting, 1-dimensional all nearest-neighbors, 2-dimensional convex hulls, 3-dimensional convex hulls, and fixed-dimensional linear programming. For the case when reducers can have a buffer size of $B=O(n^\epsilon)$, for a small constant $\epsilon>0$, all of our MapReduce algorithms for these applications run in a constant number of rounds and have a linear-sized message complexity, with high probability, while guaranteeing with high probability that all reducer lists are of size $O(B)$.
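The simulations above are built from a constant number of MapReduce rounds. A single round (map, shuffle by key, reduce) can be sketched in a few lines; the buffer-size bounds and message-complexity accounting of the paper are not modeled here.

```python
# One MapReduce round: apply the mapper to each record, group the emitted
# (key, value) pairs by key, then run the reducer on each group. Chaining
# such rounds is the primitive the BSP/PRAM simulations are built from.

def mapreduce_round(records, mapper, reducer):
    shuffled = {}
    for rec in records:
        for key, value in mapper(rec):
            shuffled.setdefault(key, []).append(value)   # "shuffle" step
    out = []
    for key, values in shuffled.items():
        out.extend(reducer(key, values))
    return out
```

For instance, a word count is one round with `mapper = lambda w: [(w, 1)]` and `reducer = lambda k, vs: [(k, sum(vs))]`.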

Book ChapterDOI
18 Dec 2010
TL;DR: Methods for maintaining subgraph frequencies in a dynamic graph are presented, using data structures that are parameterized in terms of h, the h-index of the graph, to enable a number of new applications in Bioinformatics and Social Networking research.
Abstract: We present techniques for maintaining subgraph frequencies in a dynamic graph, using data structures that are parameterized in terms of h, the h-index of the graph. Our methods extend previous results of Eppstein and Spiro for maintaining statistics for undirected subgraphs of size three to directed subgraphs and to subgraphs of size four. For the directed case, we provide a data structure to maintain counts for all 3-vertex induced subgraphs in O(h) amortized time per update. For the undirected case, we maintain the counts of size-four subgraphs in O(h^2) amortized time per update. These extensions enable a number of new applications in Bioinformatics and Social Networking research.
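The parameter h is the graph's h-index: the largest h such that the graph has at least h vertices of degree at least h. A direct (static) computation of it, as a point of reference for the amortized bounds above:

```python
# h-index of a graph: sort degrees in decreasing order and find the
# largest h with degrees[h-1] >= h.

def graph_h_index(adj):
    degrees = sorted((len(nbrs) for nbrs in adj.values()), reverse=True)
    h = 0
    for i, d in enumerate(degrees):
        if d >= i + 1:
            h = i + 1
        else:
            break
    return h
```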

Proceedings ArticleDOI
04 Oct 2010
TL;DR: This work studies methods for attacking the privacy of social networking sites, collaborative filtering sites, databases of genetic signatures, and other data sets that can be represented as vectors of binary relationships using theoretical characterizations as well as experimental tests.
Abstract: We study methods for attacking the privacy of social networking sites, collaborative filtering sites, databases of genetic signatures, and other data sets that can be represented as vectors of binary relationships. Our methods are based on reductions to nonadaptive group testing, which implies that our methods can exploit a minimal amount of privacy leakage, such as contained in a single bit that indicates if two people in a social network have a friend in common or not. We analyze our methods for turning such privacy leaks into floods using theoretical characterizations as well as experimental tests. Our empirical analyses are based on experiments involving privacy attacks on the social networking sites Facebook and LiveJournal, a database of mitochondrial DNA, a power grid network, and the movie-rating database released as a part of the Netflix Prize contest. For instance, with respect to Facebook, our analysis shows that it is effectively possible to break the privacy of members who restrict their friends lists to friends-of-friends.
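The reduction target, nonadaptive group testing, can be illustrated in its simplest special case: when exactly one item is "positive", pooling items by the bits of their index lets the pattern of positive pools spell out the hidden index in binary. The paper's attacks handle sparse sets of positives; this one-positive sketch only shows the flavor of the reduction.

```python
# Nonadaptive group testing, single-positive case: pool j contains every
# item whose j-th index bit is 1. Each test reports only whether its pool
# contains a positive (an OR query), yet the results identify the item.

def make_pools(n):
    bits = (n - 1).bit_length()
    return [[i for i in range(n) if (i >> j) & 1] for j in range(bits)]

def run_tests(pools, hidden):
    """Simulate the leak: one bit per pool, 'does it hit the hidden set?'"""
    return [any(i in hidden for i in pool) for pool in pools]

def decode_single(results):
    return sum(1 << j for j, positive in enumerate(results) if positive)
```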

Posted Content
TL;DR: In this paper, the authors address the problem of replicating a Voronoi diagram of a planar point set by making proximity queries of three possible types (in decreasing order of information content): 1) the exact location of the nearest site(s) in the point set, 2) the distance to and label(s) of the nearest site(s), 3) a unique label for every nearest site in the set, and provide algorithms showing how queries of Type 1 and Type 2 allow an exact cloning of $V(S)$ with $O(n)$ queries and $O(n \log^2 n)$ processing time.
Abstract: We address the problem of replicating a Voronoi diagram $V(S)$ of a planar point set $S$ by making proximity queries, which are of three possible (in decreasing order of information content): 1. the exact location of the nearest site(s) in $S$; 2. the distance to and label(s) of the nearest site(s) in $S$; 3. a unique label for every nearest site in $S$. We provide algorithms showing how queries of Type 1 and Type 2 allow an exact cloning of $V(S)$ with $O(n)$ queries and $O(n \log^2 n)$ processing time. We also prove that queries of Type 3 can never exactly clone $V(S)$, but we show that with $O(n \log\frac{1}{\epsilon})$ queries we can construct an $\epsilon$-approximate cloning of $V(S)$. In addition to showing the limits of nearest-neighbor database security, our methods also provide one of the first natural algorithmic applications of retroactive data structures.

Book ChapterDOI
06 Sep 2010
TL;DR: This work addresses the problem of replicating a Voronoi diagram V(S) of a planar point set S by making proximity queries and provides one of the first natural algorithmic applications of retroactive data structures.
Abstract: We address the problem of replicating a Voronoi diagram V(S) of a planar point set S by making proximity queries: 1. the exact location of the nearest site(s) in S; 2. the distance to and label(s) of the nearest site(s) in S; 3. a unique label for every nearest site in S. In addition to showing the limits of nearest-neighbor database security, our methods also provide one of the first natural algorithmic applications of retroactive data structures.

Posted Content
TL;DR: In this article, a sparsity-exploiting Mastermind algorithm is proposed for attacking the privacy of an entire database of character strings or vectors, such as DNA strings, movie ratings, or social network friendship data.
Abstract: In this paper, we study sparsity-exploiting Mastermind algorithms for attacking the privacy of an entire database of character strings or vectors, such as DNA strings, movie ratings, or social network friendship data. Based on reductions to nonadaptive group testing, our methods are able to take advantage of minimal amounts of privacy leakage, such as contained in a single bit that indicates if two people in a medical database have any common genetic mutations, or if two people have any common friends in an online social network. We analyze our Mastermind attack algorithms using theoretical characterizations that provide sublinear bounds on the number of queries needed to clone the database, as well as experimental tests on genomic information, collaborative filtering data, and online social networks. By taking advantage of the generally sparse nature of these real-world databases and modulating a parameter that controls query sparsity, we demonstrate that relatively few nonadaptive queries are needed to recover a large majority of each database.

Journal ArticleDOI
TL;DR: In this article, a planar separator decomposition for geometric graphs with sublinearly many edge crossings was proposed, and linear-time algorithms for Voronoi diagrams and single-source shortest paths were given.
Abstract: We provide linear-time algorithms for geometric graphs with sublinearly many edge crossings. That is, we provide algorithms running in $O(n)$ time on connected geometric graphs having $n$ vertices and $k$ pairwise crossings, where $k$ is smaller than $n$ by an iterated logarithmic factor. Specific problems that we study include Voronoi diagrams and single-source shortest paths. Our algorithms all run in linear time in the standard comparison-based computational model; hence, we make no assumptions about the distribution or bit complexities of edge weights, nor do we utilize unusual bit-level operations on memory words. Instead, our algorithms are based on a planarization method that “zeros in” on edge crossings, together with methods for applying planar separator decompositions to geometric graphs with sublinearly many crossings. Incidentally, our planarization algorithm also solves an open computational geometry problem of Chazelle for triangulating a self-intersecting polygonal chain having $n$ segments and $k$ crossings in linear time, for the case when $k$ is sublinear in $n$ by an iterated logarithmic factor.

Book ChapterDOI
21 Sep 2010
TL;DR: This work explores what is achievable with straight-line drawings and what more is achievable with Lombardi-style drawings, with respect to drawings of trees with perfect angular resolution.
Abstract: We study methods for drawing trees with perfect angular resolution, i.e., with angles at each vertex, v, equal to 2π/d(v). We show: 1. Any unordered tree has a crossing-free straight-line drawing with perfect angular resolution and polynomial area. 2. There are ordered trees that require exponential area for any crossing-free straight-line drawing having perfect angular resolution. 3. Any ordered tree has a crossing-free Lombardi-style drawing (where each edge is represented by a circular arc) with perfect angular resolution and polynomial area. Thus, our results explore what is achievable with straight-line drawings and what more is achievable with Lombardi-style drawings, with respect to drawings of trees with perfect angular resolution.

Proceedings ArticleDOI
28 Jun 2010
TL;DR: Several new properties of two-site and two-color round-trip Voronoi diagrams in a geographic network are proved, including a relationship between the "doubling density" of sites and an upper bound on the number of non-empty Voronoi regions.
Abstract: The round-trip distance function on a geographic network (such as a road network, flight network, or utility distribution grid) defines the "distance" from a single vertex to a pair of vertices as the minimum length tour visiting all three vertices and ending at the starting vertex. Given a geographic network and a subset of its vertices called "sites" (for example a road network with a list of grocery stores), a two-site round-trip Voronoi diagram labels each vertex in the network with the pair of sites that minimizes the round-trip distance from that vertex. Alternatively, given a geographic network and two sets of sites of different types (for example grocery stores and coffee shops), a two-color round-trip Voronoi diagram labels each vertex with the pair of sites of different types minimizing the round-trip distance. In this paper, we prove several new properties of two-site and two-color round-trip Voronoi diagrams in a geographic network, including a relationship between the "doubling density" of sites and an upper bound on the number of non-empty Voronoi regions. We show how those lemmas can be used in new algorithms asymptotically more efficient than previous known algorithms when the networks have reasonable distribution properties related to doubling density, and we provide experimental data suggesting that road networks with standard point-of-interest sites have these properties.
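The round-trip definition above is concrete enough to compute directly: for a vertex v and a site pair {s, t}, the minimum tour length is d(v,s) + d(s,t) + d(t,v) in an undirected network. The brute-force labeling below, on an unweighted graph, just applies the definition per vertex; the paper's contribution is precisely avoiding this quadratic scan over site pairs.

```python
# Two-site round-trip Voronoi labeling, brute force: BFS from each site,
# then label every vertex with the site pair minimizing the tour
# d(v,s) + d(s,t) + d(t,v). Unweighted graphs only (BFS distances).
from collections import deque
from itertools import combinations

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def round_trip_labels(adj, sites):
    dist = {s: bfs_dist(adj, s) for s in sites}   # one BFS per site
    labels = {}
    for v in adj:
        labels[v] = min(combinations(sites, 2),
                        key=lambda p: dist[p[0]][v] + dist[p[0]][p[1]] + dist[p[1]][v])
    return labels
```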

Book ChapterDOI
21 Sep 2010
TL;DR: This work solves an open problem of Duncan et al., and implies that it can produce a planar straight-line drawing of a combinatorially-embedded genus-g graph with the graph's canonical polygonal schema drawn as a convex polygonal external face.
Abstract: We study the classic graph drawing problem of drawing a planar graph using straight-line edges with a prescribed convex polygon as the outer face. Unlike previous algorithms for this problem, which may produce drawings with exponential area, our method produces drawings with polynomial area. In addition, we allow for collinear points on the boundary, provided such vertices do not create overlapping edges. Thus, we solve an open problem of Duncan et al., which, when combined with their work, implies that we can produce a planar straight-line drawing of a combinatorially-embedded genus-g graph with the graph's canonical polygonal schema drawn as a convex polygonal external face.

Posted Content
TL;DR: In this paper, the authors give efficient data-oblivious algorithms for several fundamental geometric problems relevant to geographic information systems, including planar convex hulls and all-nearest neighbors, solving a problem posed by Du and Atallah.
Abstract: We give efficient data-oblivious algorithms for several fundamental geometric problems that are relevant to geographic information systems, including planar convex hulls and all-nearest neighbors. Our methods are "data-oblivious" in that they don't perform any data-dependent operations, with the exception of operations performed inside low-level blackbox circuits having a constant number of inputs and outputs. Thus, an adversary who observes the control flow of one of our algorithms, but who cannot see the inputs and outputs to the blackbox circuits, cannot learn anything about the input or output. This behavior makes our methods applicable to secure multiparty computation (SMC) protocols for geographic data used in location-based services. In SMC protocols, multiple parties wish to perform a computation on their combined data without revealing individual data to the other parties. For instance, our methods can be used to solve a problem posed by Du and Atallah, where Alice has a set, A, of m private points in the plane, Bob has another set, B, of n private points in the plane, and Alice and Bob want to jointly compute the convex hull of A ∪ B without disclosing any more information than what can be derived from the answer. In particular, neither Alice nor Bob want to reveal any of their respective points that are in the interior of the convex hull of A ∪ B.

Book ChapterDOI
TL;DR: In this article, the authors prove several new properties of two-site and two-color round-trip Voronoi diagrams in a geographic network, including a relationship between the doubling density of sites and an upper bound on the number of non-empty Voronoi regions. These lemmas yield new algorithms that are asymptotically more efficient than previously known algorithms when the networks have reasonable distribution properties related to doubling density.
Abstract: The round-trip distance function on a geographic network (such as a road network, flight network, or utility distribution grid) defines the "distance" from a single vertex to a pair of vertices as the minimum length tour visiting all three vertices and ending at the starting vertex. Given a geographic network and a subset of its vertices called "sites" (for example a road network with a list of grocery stores), a two-site round-trip Voronoi diagram labels each vertex in the network with the pair of sites that minimizes the round-trip distance from that vertex. Alternatively, given a geographic network and two sets of sites of different types (for example grocery stores and coffee shops), a two-color round-trip Voronoi diagram labels each vertex with the pair of sites of different types minimizing the round-trip distance. In this paper, we prove several new properties of two-site and two-color round-trip Voronoi diagrams in a geographic network, including a relationship between the "doubling density" of sites and an upper bound on the number of non-empty Voronoi regions. We show how those lemmas can be used in new algorithms asymptotically more efficient than previous known algorithms when the networks have reasonable distribution properties related to doubling density, and we provide experimental data suggesting that road networks with standard point-of-interest sites have these properties.

Posted Content
TL;DR: The priority range tree as discussed by the authors is a data structure that accommodates fast orthogonal range reporting queries on prioritized points, which is motivated by the Weber-Fechner Law, which states that humans perceive and interpret data on a logarithmic scale.
Abstract: We describe a data structure, called a priority range tree, which accommodates fast orthogonal range reporting queries on prioritized points. Let $S$ be a set of $n$ points in the plane, where each point $p$ in $S$ is assigned a weight $w(p)$ that is polynomial in $n$, and define the rank of $p$ to be $r(p)=\lfloor \log w(p) \rfloor$. Then the priority range tree can be used to report all points in a three- or four-sided query range $R$ with rank at least $\lfloor \log w \rfloor$ in time $O(\log W/w + k)$, and report $k$ highest-rank points in $R$ in time $O(\log\log n + \log W/w' + k)$, where $W=\sum_{p\in S}{w(p)}$, $w'$ is the smallest weight of any point reported, and $k$ is the output size. All times assume the standard RAM model of computation. If the query range of interest is three sided, then the priority range tree occupies $O(n)$ space, otherwise $O(n\log n)$ space is used to answer four-sided queries. These queries are motivated by the Weber--Fechner Law, which states that humans perceive and interpret data on a logarithmic scale.
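The rank function and the query semantics above are simple to state in code. The naive stand-in below just filters a point list; it reproduces what a three-sided priority range query reports (points in the range with rank at least ⌊log w⌋), without any of the tree structure or the stated time bounds.

```python
# Rank of a weighted point and a naive three-sided priority range query.
# r(p) = floor(log2 w(p)); the real priority range tree answers this in
# O(log W/w + k) time, this sketch in O(n).
from math import floor, log2

def rank(w):
    return floor(log2(w))

def report_in_range(points, x1, x2, y, min_w):
    """Report (x, py, w) points with x in [x1, x2], py >= y, and
    rank(w) >= floor(log2(min_w))."""
    t = rank(min_w)
    return [(x, py, w) for (x, py, w) in points
            if x1 <= x <= x2 and py >= y and rank(w) >= t]
```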


Posted Content
TL;DR: It is shown that there is an input permutation that causes Spin-the-bottle sort to require Ω(n^2 log n) expected time in order to succeed, and that in O(n^2 log n) time this algorithm succeeds with high probability for any input.
Abstract: We study sorting algorithms based on randomized round-robin comparisons. Specifically, we study Spin-the-bottle sort, where comparisons are unrestricted, and Annealing sort, where comparisons are restricted to a distance bounded by a \emph{temperature} parameter. Both algorithms are simple, randomized, data-oblivious sorting algorithms, which are useful in privacy-preserving computations, but, as we show, Annealing sort is much more efficient. We show that there is an input permutation that causes Spin-the-bottle sort to require $\Omega(n^2\log n)$ expected time in order to succeed, and that in $O(n^2\log n)$ time this algorithm succeeds with high probability for any input. We also show there is an implementation of Annealing sort that runs in $O(n\log n)$ time and succeeds with very high probability.
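Spin-the-bottle sort is easy to sketch: in each round, every position "spins" for a random partner and the pair is compare-exchanged by index order. The version below wraps the rounds in a sortedness check, i.e., the Las Vegas style of use the paper discusses; the round structure is from the paper, but constants and the stopping rule here are illustrative.

```python
# Spin-the-bottle sort: randomized round-robin compare-exchanges with
# unrestricted partners. The while-loop makes it Las Vegas (always sorted
# on return, random running time); a fixed round budget would give the
# Monte Carlo variant analyzed in the paper.
import random

def compare_exchange(a, i, j):
    if i > j:
        i, j = j, i
    if a[i] > a[j]:                      # smaller value goes to lower index
        a[i], a[j] = a[j], a[i]

def spin_round(a, rng):
    for i in range(len(a)):
        j = rng.randrange(len(a))
        if i != j:
            compare_exchange(a, i, j)

def spin_the_bottle_sort(a, seed=0):
    rng = random.Random(seed)
    while a != sorted(a):                # terminates with probability 1
        spin_round(a, rng)
    return a
```

Since a compare-exchange keyed on index order never creates a new inversion, each round can only move the array toward sorted order, which is why the retry loop terminates.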

Book ChapterDOI
15 Dec 2010
TL;DR: A data structure, called a priority range tree, which accommodates fast orthogonal range reporting queries on prioritized points, which is motivated by the Weber–Fechner Law, which states that humans perceive and interpret data on a logarithmic scale.
Abstract: We describe a data structure, called a priority range tree, which accommodates fast orthogonal range reporting queries on prioritized points. Let S be a set of n points in the plane, where each point p in S is assigned a weight w(p) that is polynomial in n, and define the rank of p to be \(r(p)=\lfloor \log w(p) \rfloor\). Then the priority range tree can be used to report all points in a three- or four-sided query range R with rank at least \(\lfloor \log w \rfloor\) in time O(log W/w + k), and report k highest-rank points in R in time O(log log n + log W/w′ + k), where W = ∑_{p ∈ S} w(p), w′ is the smallest weight of any point reported, and k is the output size. All times assume the standard RAM model of computation. If the query range of interest is three-sided, then the priority range tree occupies O(n) space; otherwise O(n log n) space is used to answer four-sided queries. These queries are motivated by the Weber–Fechner Law, which states that humans perceive and interpret data on a logarithmic scale.