
Showing papers by "John Iacono published in 2011"


Posted Content
John Iacono1
TL;DR: Pairing heaps are shown to have constant amortized time Insert and Meld, matching the amortized runtimes of Fibonacci heaps for all operations but Decrease-key.
Abstract: Pairing heaps are shown to have constant amortized time Insert and Meld, thus showing that pairing heaps have the same amortized runtimes as Fibonacci heaps for all operations but Decrease-key.
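The constant-time operations in question are easy to see concretely. Below is a minimal pairing-heap sketch (the helper names are my own, not from the paper): meld links two roots in constant time, insert is a meld with a singleton, and delete-min performs the standard two-pass pairing of the root's children.

```python
class PHNode:
    def __init__(self, key):
        self.key = key
        self.children = []

def meld(a, b):
    """Meld two heaps in O(1): the root with the larger key becomes a child."""
    if a is None:
        return b
    if b is None:
        return a
    if b.key < a.key:
        a, b = b, a
    a.children.append(b)
    return a

def insert(heap, key):
    """Insert is a meld with a singleton heap, hence O(1) as well."""
    return meld(heap, PHNode(key))

def delete_min(heap):
    """Remove the root; recombine its children by the two-pass pairing scheme."""
    kids = heap.children
    # First pass: meld children in pairs, left to right.
    paired = [meld(kids[i], kids[i + 1] if i + 1 < len(kids) else None)
              for i in range(0, len(kids), 2)]
    # Second pass: meld the pairs right to left into a single heap.
    new_root = None
    for h in reversed(paired):
        new_root = meld(new_root, h)
    return heap.key, new_root
```

Popping repeatedly yields the keys in sorted order, as expected of any heap.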

54 citations


Book ChapterDOI
05 Dec 2011
TL;DR: This work focuses on determining the effective entropy of 2D-RMQ, and gives tight upper and lower bounds on the expected effective entropy for the case when A contains independent identically-distributed random values.
Abstract: We consider the two-dimensional range maximum query (2D-RMQ) problem: given an array A of ordered values, to pre-process it so that we can find the position of the largest element in a (user-specified) range of rows and range of columns. We focus on determining the effective entropy of 2D-RMQ, i.e., how many bits are needed to encode A so that 2D-RMQ queries can be answered without access to A. We give tight upper and lower bounds on the expected effective entropy for the case when A contains independent identically-distributed random values, and new upper and lower bounds for arbitrary A, for the case when A contains few rows. The latter results improve upon upper and lower bounds by Brodal et al. (ESA 2010). We also give some efficient data structures for 2D-RMQ whose space usage is close to the effective entropy.
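For concreteness, the query being encoded can be stated as a naive reference implementation. This brute force obviously accesses A directly; the point of the effective-entropy question is encodings that answer the same query without A.

```python
def rmq2d(A, r1, r2, c1, c2):
    """Position (row, col) of the maximum of A[r1..r2][c1..c2], ranges inclusive."""
    best = (r1, c1)
    for r in range(r1, r2 + 1):
        for c in range(c1, c2 + 1):
            if A[r][c] > A[best[0]][best[1]]:
                best = (r, c)
    return best
```

Note that only the *position* is returned; this is why an encoding can, in principle, answer queries without storing the values themselves.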

27 citations


Journal ArticleDOI
TL;DR: The paper shows that no cache-oblivious search structure can guarantee a search performance of fewer than lg e · log_B N memory transfers between any two levels of the memory hierarchy, and shows that as k grows, the search costs of the optimal k-level DAM search structure and the optimal cache-oblivious search structure rapidly converge.
Abstract: This paper gives tight bounds on the cost of cache-oblivious searching. The paper shows that no cache-oblivious search structure can guarantee a search performance of fewer than lg e · log_B N memory transfers between any two levels of the memory hierarchy. This lower bound holds even if all of the block sizes are limited to be powers of 2. The paper gives modified versions of the van Emde Boas layout, where the expected number of memory transfers between any two levels of the memory hierarchy is arbitrarily close to [lg e + O(lg lg B / lg B)] log_B N + O(1). This factor approaches lg e ≈ 1.443 as B increases. The expectation is taken over the random placement in memory of the first element of the structure. Because searching in the disk-access machine (DAM) model can be performed in log_B N + O(1) block transfers, this result establishes a separation between the (2-level) DAM model and the cache-oblivious model. The DAM model naturally extends to k levels. The paper also shows that as k grows, the search costs of the optimal k-level DAM search structure and the optimal cache-oblivious search structure rapidly converge. This result demonstrates that for a multilevel memory hierarchy, a simple cache-oblivious structure almost replicates the performance of an optimal parameterized k-level DAM structure.
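As background, the unmodified van Emde Boas layout that the paper randomizes can be sketched recursively. The split convention and heap indexing below are one common textbook choice, not necessarily the paper's exact variant.

```python
def veb_order(root, height):
    """Nodes of a perfect binary tree in van Emde Boas layout order.

    The tree uses heap indexing (children of v are 2v and 2v+1); `root` is the
    subtree root and `height` its height. The tree is cut at the middle level:
    the top subtree is laid out first, then each bottom subtree contiguously,
    so any root-to-leaf path crosses few memory blocks regardless of B.
    """
    if height == 1:
        return [root]
    top_h = height // 2          # one common split convention; others use ceil
    bot_h = height - top_h
    order = veb_order(root, top_h)
    # Leaves of the top subtree, left to right, in heap indexing.
    top_leaves = [root * 2 ** (top_h - 1) + i for i in range(2 ** (top_h - 1))]
    for leaf in top_leaves:
        order += veb_order(2 * leaf, bot_h)      # left bottom subtree
        order += veb_order(2 * leaf + 1, bot_h)  # right bottom subtree
    return order
```

The paper's modification adds, among other things, a random shift of the structure's starting address, which is where the expectation in the bound comes from.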

19 citations


Journal ArticleDOI
TL;DR: In this paper, the first two continuous bloomings of all convex polyhedra were constructed, showing that the source unfolding can be continuously bloomed, and that any unfolding of a convex polyhedron can be further refined to have a continuous blooming.
Abstract: We construct the first two continuous bloomings of all convex polyhedra. First, the source unfolding can be continuously bloomed. Second, any unfolding of a convex polyhedron can be refined (further cut, by a linear number of cuts) to have a continuous blooming.

15 citations


Proceedings ArticleDOI
John Iacono1
13 Jun 2011
TL;DR: This result is the 2-d analogue of the jump from the optimum binary search trees of Knuth in 1971 to the splay trees of Sleator and Tarjan in 1985, whose static optimality theorem proved that splay trees have the same asymptotic performance as optimum search trees without being provided the probability distribution.
Abstract: In the past ten years, there have been a number of data structures that, given a distribution of planar point location queries, produce a planar point location data structure that is tuned for the provided distribution. These structures all suffer from the requirement that the query distribution be provided in advance. For the problem of point location in a triangulation, a data structure is presented that performs asymptotically as well as these structures, but does not require the distribution to be provided in advance. This result is the 2-d analogue of the jump from the optimum binary search trees of Knuth in 1971, which required that the distribution be provided, to the splay trees of Sleator and Tarjan in 1985, where the static optimality theorem proved that splay trees have the same asymptotic performance as optimum search trees without being provided the probability distribution.
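As background for the analogy, here is a compact sketch of the 1-D structure in question: the classic recursive splay search of Sleator and Tarjan (standard textbook code, not taken from this paper). Each search rotates the accessed key to the root, which is what makes the tree adapt to the query distribution without knowing it.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(x):
    y = x.left
    x.left, y.right = y.right, x
    return y

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def splay(root, key):
    """Bring `key` (or the last node on its search path) to the root."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:                     # zig-zig
            root.left.left = splay(root.left.left, key)
            root = rotate_right(root)
        elif key > root.left.key:                   # zig-zag
            root.left.right = splay(root.left.right, key)
            if root.left.right is not None:
                root.left = rotate_left(root.left)
        return root if root.left is None else rotate_right(root)
    else:
        if root.right is None:
            return root
        if key > root.right.key:                    # zig-zig
            root.right.right = splay(root.right.right, key)
            root = rotate_left(root)
        elif key < root.right.key:                  # zig-zag
            root.right.left = splay(root.right.left, key)
            if root.right.left is not None:
                root.right = rotate_right(root.right)
        return root if root.right is None else rotate_left(root)

def insert(root, key):
    """Splay, then split the tree around the new key."""
    root = splay(root, key)
    if root is None:
        return Node(key)
    if key == root.key:
        return root
    if key < root.key:
        new = Node(key, left=root.left, right=root)
        root.left = None
    else:
        new = Node(key, left=root, right=root.right)
        root.right = None
    return new
```

The point of the abstract is that the same "no distribution needed" guarantee is achieved for 2-D point location in a triangulation.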

13 citations


Posted Content
TL;DR: This work focuses on determining the effective entropy of 2D-RMQ, i.e., how many bits are needed to encode an array so that two-dimensional range maximum query queries can be answered without accessing the array.
Abstract: We consider the \emph{two-dimensional range maximum query (2D-RMQ)} problem: given an array $A$ of ordered values, to pre-process it so that we can find the position of the largest element in the sub-matrix defined by a (user-specified) range of rows and range of columns. We focus on determining the \emph{effective} entropy of 2D-RMQ, i.e., how many bits are needed to encode $A$ so that 2D-RMQ queries can be answered \emph{without} access to $A$. We give tight upper and lower bounds on the expected effective entropy for the case when $A$ contains independent identically-distributed random values, and new upper and lower bounds for arbitrary $A$, for the case when $A$ contains few rows. The latter results improve upon previous upper and lower bounds by Brodal et al. (ESA 2010). In some cases we also give data structures whose space usage is close to the effective entropy and answer 2D-RMQ queries rapidly.

13 citations


Posted Content
TL;DR: In this paper, the authors consider the dictionary problem in external memory and improve the update time of the well-known buffer tree by roughly a logarithmic factor, and present a lower bound in the cell-probe model showing that their data structure is optimal.
Abstract: We consider the dictionary problem in external memory and improve the update time of the well-known buffer tree by roughly a logarithmic factor. For any $\lambda \ge \max\{\lg \lg n, \log_{M/B}(n/B)\}$, we can support updates in time $O(\lambda/B)$ and queries in sublogarithmic time, $O(\log_\lambda n)$. We also present a lower bound in the cell-probe model showing that our data structure is optimal. In the RAM, hash tables have been used to solve the dictionary problem faster than binary search for more than half a century. By contrast, our data structure is the first to beat the comparison barrier in external memory. Ours is also the first data structure to depart convincingly from the indivisibility paradigm.
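The core idea of buffering updates can be illustrated with a toy two-level structure (my own simplification, far weaker than the paper's multi-level buffer tree): updates accumulate in a small in-memory buffer and are flushed to sorted storage in batches, so each update pays only its share of a batch flush rather than a full search.

```python
import bisect

class BufferedDict:
    """Toy buffered dictionary: pending inserts/deletes sit in a buffer of
    size ~B and are merged into sorted storage only when the buffer fills."""

    def __init__(self, buffer_size=4):
        self.buffer_size = buffer_size
        self.buffer = {}   # key -> value, or None for a pending delete
        self.store = []    # sorted list of (key, value) pairs

    def insert(self, key, value):
        self.buffer[key] = value
        if len(self.buffer) >= self.buffer_size:
            self._flush()

    def delete(self, key):
        self.buffer[key] = None            # tombstone, applied at flush time
        if len(self.buffer) >= self.buffer_size:
            self._flush()

    def _flush(self):
        # One batch merge amortizes its cost over buffer_size updates.
        merged = dict(self.store)
        merged.update(self.buffer)
        self.store = sorted((k, v) for k, v in merged.items() if v is not None)
        self.buffer = {}

    def query(self, key):
        if key in self.buffer:
            return self.buffer[key]        # None here means "deleted"
        i = bisect.bisect_left(self.store, (key,))
        if i < len(self.store) and self.store[i][0] == key:
            return self.store[i][1]
        return None
```

Queries here are a plain binary search; the paper's contribution is getting both sublogarithmic queries and buffered updates at once, with a matching cell-probe lower bound.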

11 citations


Book ChapterDOI
20 Jul 2011
TL;DR: It is proved that the working-set bound is asymptotically equivalent to the unified bound (which is the minimum per operation among the static-finger, static-optimality, and working-set bounds) and that these bounds are the best possible with respect to the considered measures.
Abstract: We present a priority queue that supports the operations: insert in worst-case constant time, and delete, delete-min, find-min and decrease-key on an element x in worst-case $O(\lg(\min\{w_x, q_x\}+2))$ time, where $w_x$ (respectively, $q_x$) is the number of elements that were accessed after (respectively, before) the last access of x and are still in the priority queue at the time when the corresponding operation is performed. Our priority queue then has both the working-set and the queueish properties; and, more strongly, it satisfies these properties in the worst-case sense. We also argue that these bounds are the best possible with respect to the considered measures. Moreover, we modify our priority queue to satisfy a new unifying property — the time-finger property — which encapsulates both the working-set and the queueish properties. In addition, we prove that the working-set bound is asymptotically equivalent to the unified bound (which is the minimum per operation among the static-finger, static-optimality, and working-set bounds). This latter result is of independent interest, as it had gone unnoticed since the introduction of such bounds by Sleator and Tarjan [10]. Together, these results indicate that our priority queue also satisfies the static-finger, the static-optimality and the unified bounds.
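The quantity $w_x$ in the bound can be made concrete with a short helper (an illustrative sketch; the convention used for an element's first access is my own):

```python
def working_set_numbers(accesses):
    """For each access in the sequence, return its working-set number: the
    number of distinct elements accessed since the previous access to the
    same element. For a first access we count all distinct elements so far.
    The working-set bound charges O(lg(w + 2)) per access."""
    last_seen = {}                  # element -> index of its most recent access
    result = []
    for i, x in enumerate(accesses):
        if x in last_seen:
            w = len(set(accesses[last_seen[x] + 1:i]))
        else:
            w = len(set(accesses[:i]))
        result.append(w)
        last_seen[x] = i
    return result
```

The queueish quantity $q_x$ is symmetric, counting elements accessed *before* the last access of x that are still present; the abstract's point is getting the minimum of the two, in the worst case.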

4 citations


Proceedings ArticleDOI
14 Apr 2011
TL;DR: This work considers the dictionary problem in external memory and improves the update time of the well-known buffer tree by roughly a logarithmic factor and presents a lower bound in the cell-probe model showing that the data structure is optimal.
Abstract: We consider the dictionary problem in external memory and improve the update time of the well-known buffer tree by roughly a logarithmic factor. For any λ ≥ max{lg lg n, log_{M/B}(n/B)}, we can support updates in time O(λ/B) and queries in time O(log_λ n). We also present a lower bound in the cell-probe model showing that our data structure is optimal. In the RAM, hash tables have been used to solve the dictionary problem faster than binary search for more than half a century. By contrast, our data structure is the first to beat the comparison barrier in external memory. Ours is also the first data structure to depart convincingly from the indivisibility paradigm.

3 citations


Journal ArticleDOI
TL;DR: This work presents a new data structure for point location queries in planar triangulations that is asymptotically as fast as the optimal structures, but it requires no prior information about the queries.
Abstract: Over the last decade, there have been several data structures that, given a planar subdivision and a probability distribution over the plane, provide a way for answering point location queries that is fine-tuned for the distribution. All these methods suffer from the requirement that the query distribution must be known in advance. We present a new data structure for point location queries in planar triangulations. Our structure is asymptotically as fast as the optimal structures, but it requires no prior information about the queries. This is a 2D analogue of the jump from Knuth's optimum binary search trees (discovered in 1971) to the splay trees of Sleator and Tarjan in 1985. While the former need to know the query distribution, the latter are statically optimal. This means that we can adapt to the query sequence and achieve the same asymptotic performance as an optimum static structure, without needing any additional information.

2 citations


Posted Content
TL;DR: It is shown how to enhance any data structure in the pointer model to make it confluently persistent, with efficient query and update times and limited space overhead, proving that confluent persistence can be achieved at a logarithmic cost.
Abstract: It is shown how to enhance any data structure in the pointer model to make it confluently persistent, with efficient query and update times and limited space overhead. Updates are performed in $O(\log n)$ amortized time, and following a pointer takes $O(\log c \log n)$ time, where $c$ is the in-degree of a node in the data structure. In particular, this proves that confluent persistence can be achieved at a logarithmic cost in the bounded in-degree model used widely in previous work. This is an $O(n/\log n)$-factor improvement over the previously known transform to make a data structure confluently persistent.
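As background, ordinary (full) persistence, where every version of the structure remains queryable after updates, is cheap for purely functional structures; a minimal immutable-stack sketch is below. Confluent persistence is strictly harder because it additionally allows *melding* two versions, which is what the paper's transform supports at logarithmic cost for arbitrary pointer structures.

```python
class PStack:
    """Persistent stack: nodes are immutable, so push/pop return new
    versions while every old version remains valid and queryable."""
    __slots__ = ("head", "tail")

    def __init__(self, head=None, tail=None):
        self.head, self.tail = head, tail

    def push(self, x):
        # O(1): the new version shares the entire old version as its tail.
        return PStack(x, self)

    def pop(self):
        # Returns (top element, previous version); does not mutate anything.
        return self.head, self.tail

EMPTY = PStack()
```

Because versions share structure, two branches created from the same version coexist without copying; supporting a *meld* of such branches for general pointer structures is the hard part the paper addresses.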