
Showing papers by John Iacono published in 2004


Journal ArticleDOI
TL;DR: The data structure presented here is a simplification of the cache-oblivious B-tree of Bender, Demaine, and Farach-Colton and has memory performance optimized for all levels of the memory hierarchy even though it has no memory-hierarchy-specific parameterization.
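Cache-oblivious search trees in this line of work are typically organized around a recursive van Emde Boas layout; the Python sketch below illustrates that layout idea only (it is not the paper's data structure, and veb_order is a hypothetical helper). A complete binary tree, numbered in heap order, is cut at roughly half its height, and the top tree and each bottom tree are laid out recursively and stored contiguously, so a root-to-leaf search touches O(log_B n) blocks for every block size B without the layout ever mentioning B.

    def veb_order(root, height):
        """Heap-numbered nodes of a complete binary tree of the given height,
        rearranged into van Emde Boas (cache-oblivious) order."""
        if height == 1:
            return [root]
        top = height // 2          # height of the top subtree
        bottom = height - top      # height of each bottom subtree
        order = veb_order(root, top)
        # the 2**top bottom subtrees hang below the leaves of the top subtree
        for j in range(2 ** top):
            order += veb_order(root * 2 ** top + j, bottom)
        return order

For a tree of height 3, veb_order(1, 3) returns [1, 2, 4, 5, 3, 6, 7]: the top tree first, then each bottom subtree stored contiguously.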

93 citations


Journal ArticleDOI
TL;DR: Four space-efficient algorithms for computing the convex hull of a planar point set using only a small amount of additional memory are described.

63 citations


Journal ArticleDOI
John Iacono
01 Sep 2004
TL;DR: Given a fixed distribution of point location queries among the triangles in a triangulation of the plane, a data structure is presented that achieves the entropy bound on the expected point location query time.
Abstract: Given a fixed distribution of point location queries among the triangles in a triangulation of the plane, a data structure is presented that achieves, within constant multiplicative factors, the entropy bound on the expected point location query time. The data structure is a simple variation of Kirkpatrick's classic planar point location structure [D.G. Kirkpatrick, SIAM J. Comput. 12 (1) (1983) 28-35], and has linear construction costs and space requirements.
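For reference, the entropy bound mentioned here is the standard distribution entropy (this reminder is not part of the abstract): if a query falls in triangle ti with probability pi, then

$$ H = \sum_i p_i \log_2 \frac{1}{p_i} $$

and the structure answers queries in expected time O(H + 1), where the additive 1 covers distributions whose entropy is below one; up to constant factors, this matches the information-theoretic lower bound for identifying the triangle containing the query.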

42 citations


Proceedings ArticleDOI
11 Jan 2004
TL;DR: In this article, the authors introduce a new data structuring paradigm, called retroactive data structures, in which operations can be performed on a data structure not only in the present but also in the past: the historical sequence of operations performed on the data structure is not fixed.
Abstract: We introduce a new data structuring paradigm in which operations can be performed on a data structure not only in the present but also in the past. In this new paradigm, called retroactive data structures, the historical sequence of operations performed on the data structure is not fixed. The data structure allows arbitrary insertion and deletion of operations at arbitrary times, subject only to consistency requirements. We initiate the study of retroactive data structures by formally defining the model and its variants. We prove that, unlike persistence, efficient retroactivity is not always achievable, so we go on to present several specific retroactive data structures.
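As a concrete illustration of the model only (a deliberately naive replay-based sketch, not one of the efficient structures from the paper; all names are hypothetical), the class below keeps a timeline of set operations: retroactive insertions and deletions edit the past, and a query replays the edited history.

    import bisect

    class NaiveRetroactiveSet:
        """Replay-based illustration of a fully retroactive set of values."""

        def __init__(self):
            # timeline of (time, ('insert'|'delete', value)), kept sorted by time
            self.timeline = []

        def insert_op(self, t, op):
            # retroactively perform operation `op` at time t
            bisect.insort(self.timeline, (t, op))

        def delete_op(self, t):
            # retroactively remove whatever operation was performed at time t
            i = bisect.bisect_left(self.timeline, (t,))
            if i < len(self.timeline) and self.timeline[i][0] == t:
                self.timeline.pop(i)

        def contains(self, t, value):
            # query the state of the set as of time t by replaying the history
            present = set()
            for time, (kind, v) in self.timeline:
                if time > t:
                    break
                if kind == 'insert':
                    present.add(v)
                else:
                    present.discard(v)
            return value in present

After insert_op(10, ('insert', 7)) and insert_op(30, ('delete', 7)), contains(20, 7) is True and contains(40, 7) is False; retroactively removing the insertion with delete_op(10) makes both queries False. Each query here replays the whole history, which is exactly the inefficiency the paper's retroactive structures aim to avoid and which, as the abstract notes, cannot always be avoided.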

38 citations


Proceedings ArticleDOI
08 Jun 2004
TL;DR: This work presents an O(n log k)-time algorithm for finding a ham-sandwich geodesic, and shows that this algorithm is optimal in the algebraic computation tree model when parameterizing the running time with respect to n and k.
Abstract: Let P be a simple polygon with m vertices, k of which are reflex, and which contains r red points and b blue points in its interior. Let n = m + r + b. A ham-sandwich geodesic is a shortest path in P between any two points on the boundary of P that simultaneously bisects the red points and the blue points. We present an O(n log k)-time algorithm for finding a ham-sandwich geodesic. We also show that this algorithm is optimal in the algebraic computation tree model when parameterizing the running time with respect to n and k.

30 citations


Journal ArticleDOI
01 May 2004
TL;DR: This work presents a data structure that is optimized for answering queries quickly when they are geometrically close to the previous successful query, and works with a variety of distance functions.
Abstract: In the 2D point searching problem, the goal is to preprocess n points P = {p1, ..., pn} in the plane so that, for an online sequence of query points q1, ..., qm, it can quickly be determined which (if any) of the elements of P are equal to each query point qi. This problem can be solved in O(log n) time by mapping the problem to one dimension. We present a data structure that is optimized for answering queries quickly when they are geometrically close to the previous successful query. Specifically, our data structure executes queries in time O(log d(qi−1, qi)), where d is some distance function between two points, and uses O(n log n) space. Our structure works with a variety of distance functions. In contrast, we prove that, for some of the most intuitive distance functions d, it is impossible to obtain an O(log d(qi−1, qi)) runtime, or any bound that is o(log n).
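A minimal sketch of the O(log n) baseline mentioned in the abstract, not of the distance-sensitive structure of the paper: the two-dimensional points are mapped to one dimension simply by sorting them (here lexicographically), and each equality query is answered by binary search. The function names are illustrative only.

    import bisect

    def preprocess(points):
        # one-time O(n log n) step: the sorted order is the mapping to one dimension
        return sorted(points)

    def locate(sorted_points, q):
        # O(log n) per query: index of q in P, or None if q is not in P
        i = bisect.bisect_left(sorted_points, q)
        if i < len(sorted_points) and sorted_points[i] == q:
            return i
        return None

The structure described in the abstract instead charges each query only O(log d(qi−1, qi)), so runs of queries that stay geometrically close to the previous successful query beat this uniform O(log n) bound.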

18 citations


Posted Content
19 Oct 2004
TL;DR: It is proved that the optimal number of memory transfers is Θ(D / lg(1+B)) when D = O(lg N), Θ(lg N / lg(1 + B lg N / D)) when D = Ω(lg N) and D = O(B lg N), and Θ(D / B) when D = Ω(B lg N).
Abstract: Consider laying out a fixed-topology tree of N nodes into external memory with block size B so as to minimize the worst-case number of block memory transfers required to traverse a path from the root to a node of depth D. We prove that the optimal number of memory transfers is $$ \cases{ \displaystyle \Theta\left( {D \over \lg (1{+}B)} \right) & when $D = O(\lg N)$, \cr \displaystyle \Theta\left( {\lg N \over \lg \left(1{+}{B \lg N \over D}\right)} \right) & when $D = \Omega(\lg N)$ and $D = O(B \lg N)$, \cr \displaystyle \Theta\left( {D \over B} \right) & when $D = \Omega(B \lg N)$. } $$

16 citations


Journal ArticleDOI
John Iacono
TL;DR: In this paper, the authors presented a data structure that achieves, within constant multiplicative factors, the entropy bound on the expected point location query time, given a fixed distribution of point location queries among the triangles in a triangulation of the plane.

10 citations


Proceedings ArticleDOI
08 Jun 2004
TL;DR: This work provides necessary and sufficient conditions for the existence of a chord and for the existence of a geodesic path that separate the two sets, and derives efficient algorithms for computing them when they exist.
Abstract: We consider the separability of two point sets inside a polygon by means of chords or geodesic lines. Specifically, given a set of red points and a set of blue points in the interior of a polygon, we provide necessary and sufficient conditions for the existence of a chord and for the existence of a geodesic path which separate the two sets; when they exist, we also derive efficient algorithms for computing them. We also study the separation of the two sets using a minimum number of pairwise non-crossing chords.

10 citations


Book ChapterDOI
TL;DR: It is proved that a vertex-unfolding exists using only cuts that lie in a plane orthogonal to a coordinate axis and containing a vertex of the orthostack.
Abstract: An algorithm was presented in [BDD+98] for unfolding orthostacks into one piece without overlap by using arbitrary cuts along the surface. It was conjectured that orthostacks could be unfolded using cuts that lie in a plane orthogonal to a coordinate axis and containing a vertex of the orthostack. We prove the existence of a vertex-unfolding using only such cuts.

9 citations


01 Jan 2004
TL;DR: It is shown how to detect duplicates in a sequence of k n-bit vectors, presented as a list of single-bit changes between consecutive vectors, in O((n + k) log n) time.
Abstract: We show how to detect duplicates in a sequence of k n-bit vectors presented as a list of single-bit changes between consecutive vectors, in O((n + k) log n) time.

Problem. We are given a sequence S = {v1, ..., vk} of k n-bit vectors, presented as follows: the first bit vector is all zeros, and each subsequent vector vi is obtained from the previous vector vi−1 by flipping a single bit in position bi, 0 ≤ bi < n. S is represented as b2, b3, ..., bk. The problem is to detect duplicates in the sequence v1, v2, ..., vk. More formally, we seek a labeling S → {1, ..., k}, vi ↦ ci, such that ci = cj iff vi = vj.

Solution. Without loss of generality, in the remainder of this note we assume that n is a power of two. Let T be the perfectly balanced binary tree on n leaves. We number the leaves of T from 0 to n−1 and associate each with a bit position. Each interior node x of T is similarly associated with a block B(x) of consecutive bit positions corresponding to the leaves of the subtree rooted at x. For a bit vector vi, let vi(x) be its substring in B(x). The idea behind our data structure is simple: each node x has an associated data structure that stores implicitly the set {vi(x) : 1 ≤ i ≤ k}. The data structure stored at node x consists of two arrays Dx and Fx that store the following data:
• Dx[1, ..., dx] contains the sorted set including 1 and all distinct values i, 1 < i ≤ k, such that vi−1(x) ≠ vi(x).
• Fx[1, ..., dx] contains integers in the range 1, ..., dx with the property that Fx[i] = Fx[j] iff vDx[i](x) = vDx[j](x).

We now complete the description of our algorithm by explaining how to initialize Dz and Fz for all leaves z ∈ T and how to compute Dx and Fx from Dl, Fl, Dr, Fr for any internal node x with children l and r. Froot describes the desired labeling of S, since Droot contains all the numbers 1, ..., k.

If one stores all of the leaves z in an array in numerical order, a linear scan of the sequence b2, b3, ..., bk of bit updates allows one to initialize the arrays Dz and Fz for all z. Specifically, we store the current bit vector vi−1 explicitly in a bit array V[0, ..., n−1]. Since bi = j indicates a bit flip in position z = j (recall that bit positions, and thus leaves, are identified with the integers 0, ..., n−1), we flip the value of V[j], add i to Dz, and, depending on the resulting value of V[j], set the next entry in Fz to zero or one.

Now we describe, for an internal node x of T with children l and r, how to construct Dx and Fx from the arrays Dl, Dr, Fl, Fr; see Algorithm 1. The new sorted array Dx[1, ..., dx] is built by merging the arrays Dl and Dr, eliminating any duplicates, in time O(dl + dr).

Algorithm 1. The pseudocode for computing Dx, Fx from Dl, Fl, Dr, Fr.
    i ← j ← k ← 1
    Dl[dl + 1] ← Dr[dr + 1] ← ∞
    repeat
        Px[k] ← (Fr(i), Fl(j), k, 0)
        if Dl[i] < Dr[j] then
            Dx[k] ← Dl[i]; i ← i + 1
        else if Dl[i] = Dr[j] then
            Dx[k] ← Dl[i]; i ← i + 1; j ← j + 1
        else if Dl[i] > Dr[j] then
            Dx[k] ← Dr[j]; j ← j + 1
        end if
        k ← k + 1
    until i = dl + 1 and j = dr + 1
    dx ← k − 1    ▷ dx is the length of Px and Dx
    sort Px lexicographically on the first two fields, by radix sort
    for k ← 2 to dx do
        if Px[k − 1][1] = Px[k][1] and Px[k − 1][2] = Px[k][2] then
            Px[k][4] ← Px[k − 1][4]
        else
            Px[k][4] ← Px[k − 1][4] + 1
        end if
    end for
    for k ← 1 to dx do
        Fx[Px[k][3]] ← Px[k][4]
    end for
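The following Python sketch is my own rendering of the merge-and-relabel idea described above, not the authors' code: it assumes n is a power of two and uses a dictionary in place of the radix sort, so the relabeling step is expected-time rather than worst-case; otherwise it mirrors the note, building the leaf arrays with one scan of the flip sequence and then, level by level, merging children's D arrays and relabeling the resulting pairs.

    def label_duplicates(n, flips):
        """n: number of bits (assumed a power of two); flips: the list b2, ..., bk.
        Returns labels c1, ..., ck such that ci == cj iff vi == vj."""
        # Leaf initialization by a single scan: for bit position p, D[p] lists the
        # times at which bit p changes (time 1 included) and F[p] the resulting bit.
        D = [[1] for _ in range(n)]
        F = [[0] for _ in range(n)]
        bits = [0] * n
        for t, b in enumerate(flips, start=2):
            bits[b] ^= 1
            D[b].append(t)
            F[b].append(bits[b])

        def merge(node_l, node_r):
            Dl, Fl = node_l
            Dr, Fr = node_r
            Dx, pairs = [], []
            i = j = 0
            while i < len(Dl) or j < len(Dr):
                tl = Dl[i] if i < len(Dl) else float('inf')
                tr = Dr[j] if j < len(Dr) else float('inf')
                t = min(tl, tr)
                if tl == t:
                    i += 1
                if tr == t:
                    j += 1
                Dx.append(t)
                # the block's value at time t is the pair of its halves' labels
                pairs.append((Fl[i - 1], Fr[j - 1]))
            # equal pairs get equal small labels (dictionary instead of radix sort)
            codes = {}
            Fx = [codes.setdefault(p, len(codes) + 1) for p in pairs]
            return Dx, Fx

        # Merge pairs of nodes level by level up the balanced tree of bit positions.
        nodes = list(zip(D, F))
        while len(nodes) > 1:
            nodes = [merge(nodes[i], nodes[i + 1]) for i in range(0, len(nodes), 2)]
        Droot, Froot = nodes[0]
        # Every time step flips some bit, so Droot = [1, ..., k] and Froot is the labeling.
        return Froot

For example, with n = 4 and flips = [0, 0] (so v3 = v1), label_duplicates(4, [0, 0]) returns [1, 2, 1].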

Posted Content
TL;DR: It is proved that the optimal number of memory transfers required to traverse a path from the root to a node of depth D is Θ(D / lg(1+B)) when D = O(lg N), Θ(lg N / lg(1 + B lg N / D)) when D = Ω(lg N) and D = O(B lg N), and Θ(D / B) when D = Ω(B lg N).
Abstract: Consider laying out a fixed-topology tree of N nodes into external memory with block size B so as to minimize the worst-case number of block memory transfers required to traverse a path from the root to a node of depth D. We prove that the optimal number of memory transfers is $$ \cases{ \displaystyle \Theta\left( {D \over \lg (1{+}B)} \right) & when $D = O(\lg N)$, \cr \displaystyle \Theta\left( {\lg N \over \lg \left(1{+}{B \lg N \over D}\right)} \right) & when $D = \Omega(\lg N)$ and $D = O(B \lg N)$, \cr \displaystyle \Theta\left( {D \over B} \right) & when $D = \Omega(B \lg N)$. } $$
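As a quick consistency check (not part of the abstract), the three regimes agree at their boundaries: substituting D = Θ(lg N) into the middle bound gives

$$ \Theta\left( {\lg N \over \lg \left(1{+}{B \lg N \over D}\right)} \right) = \Theta\left( {\lg N \over \lg (1{+}B)} \right) = \Theta\left( {D \over \lg (1{+}B)} \right), $$

matching the first bound, while substituting D = Θ(B lg N) gives Θ(lg N) for the middle bound and Θ(D/B) = Θ(lg N) for the third, so the bounds also agree there.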