Author

# John Iacono

Other affiliations: New York University, Aarhus University, Rutgers University ...

Bio: John Iacono is an academic researcher at the Université libre de Bruxelles whose work centres on data structures and amortized analysis. He has an h-index of 24 and has co-authored 170 publications that have received 2130 citations. His previous affiliations include New York University and Aarhus University.

##### Papers published on a yearly basis

##### Papers

TL;DR: Output-sensitive algorithms are developed for computing the nearest-neighbour decision boundary for point sets on the line and in ℝ2; their O(n log k) running time is the best possible when parameterizing with respect to n and k.

Abstract: Given a set R of red points and a set B of blue points, the nearest-neighbour decision rule classifies a new point q as red (respectively, blue) if the closest point to q in R ⋃ B comes from R (respectively, B). This rule implicitly partitions space into a red set and a blue set that are separated by a red-blue decision boundary. In this paper we develop output-sensitive algorithms for computing this decision boundary for point sets on the line and in ℝ2. Both algorithms run in time O(n log k), where k is the number of points that contribute to the decision boundary. This running time is the best possible when parameterizing with respect to n and k.
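The decision rule in the abstract is simple to state directly in code. Below is a minimal brute-force sketch (names and tie-breaking are illustrative; it costs O(|R| + |B|) per query and has none of the paper's output-sensitive machinery):

```python
from math import dist  # Euclidean distance, Python 3.8+

def classify(q, red, blue):
    """Nearest-neighbour rule: q takes the colour of the closest
    labelled point in red ∪ blue (ties broken toward red here)."""
    nearest_red = min(dist(q, p) for p in red)
    nearest_blue = min(dist(q, p) for p in blue)
    return "red" if nearest_red <= nearest_blue else "blue"

red = [(0.0, 0.0), (1.0, 0.0)]
blue = [(4.0, 0.0), (5.0, 1.0)]
print(classify((1.5, 0.0), red, blue))  # closer to the red cluster
print(classify((3.8, 0.0), red, blue))  # closer to the blue cluster
```

The implicit red/blue partition of the plane that this rule induces is exactly the decision boundary the paper computes explicitly.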

151 citations

TL;DR: The data structure presented here is a simplification of the cache-oblivious B-tree of Bender, Demaine, and Farach-Colton and has memory performance optimized for all levels of the memory hierarchy even though it has no memory-hierarchy-specific parameterization.

Abstract: This paper presents a simple dictionary structure designed for a hierarchical memory. The proposed data structure is cache-oblivious and locality-preserving. A cache-oblivious data structure has memory performance optimized for all levels of the memory hierarchy even though it has no memory-hierarchy-specific parameterization. A locality-preserving dictionary maintains elements of similar key values stored close together for fast access to ranges of data with consecutive keys. The data structure presented here is a simplification of the cache-oblivious B-tree of Bender, Demaine, and Farach-Colton. The structure supports search operations on N data items using O(log_B N + 1) block transfers at a level of the memory hierarchy with block size B. Insertion and deletion operations use O(log_B N + (log² N)/B + 1) amortized block transfers. Finally, the data structure returns all k data items in a given search range using O(log_B N + k/B + 1) block transfers. This data structure was implemented and its performance was evaluated on a simulated memory hierarchy. This paper presents the results of this simulation for various combinations of block and memory sizes.
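To make the shape of these bounds concrete, here is a small sketch that evaluates them for sample N, B, and k. Asymptotic constants and lower-order behaviour are omitted, so the numbers are only indicative of how the terms scale:

```python
from math import log2

def block_transfer_bounds(N, B, k):
    """Evaluate the three block-transfer bounds from the abstract for
    concrete N (items), B (block size), k (range-query output size).
    Constants are dropped, so these are shapes, not exact costs."""
    log_B_N = log2(N) / log2(B)                 # log base B of N
    search = log_B_N + 1                        # O(log_B N + 1)
    update = log_B_N + log2(N) ** 2 / B + 1     # amortized insert/delete
    range_query = log_B_N + k / B + 1           # O(log_B N + k/B + 1)
    return search, update, range_query

s, u, r = block_transfer_bounds(N=2**20, B=2**10, k=5000)
```

Note how the k/B term dominates the range query once k is large relative to B: reporting output costs roughly one block transfer per B consecutive keys, which is the payoff of the locality-preserving layout.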

91 citations

09 Jan 2001

TL;DR: The unified conjecture is presented, which subsumes the working set theorem and dynamic finger theorem, and accurately bounds the performance of splay trees over some classes of sequences where the existing theorems' bounds are not tight.

Abstract: Splay trees are a self-adjusting form of search tree that supports access operations in O(log n) amortized time. Splay trees also have several amazing distribution-sensitive properties, the strongest two of which are the working set theorem and the dynamic finger theorem. However, these two theorems are shown to poorly bound the performance of splay trees on some simple access sequences. The unified conjecture is presented, which subsumes the working set theorem and dynamic finger theorem, and accurately bounds the performance of splay trees over some classes of sequences where the existing theorems' bounds are not tight. While the unified conjecture for splay trees is unproven, a new data structure, the unified structure, is presented where the unified conjecture does hold. This structure also has a worst case of O(log n) per operation, in contrast to the O(n) worst case runtime of splay trees. A second data structure, the working set structure, is introduced. The working set structure has the same performance attributed to splay trees through the working set theorem, except the runtime is worst case per operation rather than amortized.
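The working set theorem bounds the amortized cost of accessing an item x by O(log t(x)), where t(x) counts the distinct items accessed since the previous access to x. The sketch below computes these working-set numbers for an access sequence; the counting convention (count x itself, and count all distinct items so far on a first access) is one common choice, not necessarily the exact one used in the paper:

```python
def working_set_numbers(accesses):
    """For each access to x, return t(x): the number of distinct items
    accessed since the previous access to x, counting x itself.
    First accesses get the number of distinct items seen so far."""
    last_seen = {}   # item -> index of its most recent access
    numbers = []
    for i, x in enumerate(accesses):
        if x in last_seen:
            # distinct items touched strictly between the two accesses, plus x
            window = set(accesses[last_seen[x] + 1 : i])
            numbers.append(len(window | {x}))
        else:
            numbers.append(len(set(accesses[: i + 1])))
        last_seen[x] = i
    return numbers

print(working_set_numbers(list("abaca")))  # → [1, 2, 2, 3, 2]
```

Recently accessed items get small working-set numbers, so the theorem promises that a splay tree (and the paper's working set structure, per operation in the worst case) serves temporally local sequences cheaply.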

74 citations

TL;DR: This is the first major progress on Sleator and Tarjan's dynamic optimality conjecture of 1985 that O(1)-competitive binary search trees exist and presents an O(lg lg n)-competitive online binary search tree.

Abstract: We present an $O(\lg \lg n)$-competitive online binary search tree, improving upon the best previous (trivial) competitive ratio of $O(\lg n)$. This is the first major progress on Sleator and Tarjan’s dynamic optimality conjecture of 1985 that $O(1)$-competitive binary search trees exist.

73 citations

04 Jan 2009

TL;DR: It is shown that there exists an equal-cost online algorithm, transforming the conjecture of Lucas and Munro into the conjecture that the greedy algorithm is dynamically optimal; a new lower bound for searching in the BST model is also obtained.

Abstract: We present a novel connection between binary search trees (BSTs) and points in the plane satisfying a simple property. Using this correspondence, we achieve the following results:1. A surprisingly clean restatement in geometric terms of many results and conjectures relating to BSTs and dynamic optimality.2. A new lower bound for searching in the BST model, which subsumes the previous two known bounds of Wilber [FOCS'86].3. The first proposal for dynamic optimality not based on splay trees. A natural greedy but offline algorithm was presented by Lucas [1988], and independently by Munro [2000], and was conjectured to be an (additive) approximation of the best binary search tree. We show that there exists an equal-cost online algorithm, transforming the conjecture of Lucas and Munro into the conjecture that the greedy algorithm is dynamically optimal.
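In the literature this "simple property" is known as arboral satisfaction: plotting each access as a point (key, time), the set corresponding to a valid BST execution must have a third point inside every axis-aligned rectangle spanned by two points that share neither coordinate. A brute-force O(n³) checker sketch (the paper's algorithms are far more refined):

```python
from itertools import combinations

def arborally_satisfied(points):
    """Check the rectangle property from the BST/point-set correspondence:
    every axis-aligned rectangle spanned by two points sharing neither an
    x- nor a y-coordinate must contain a third point of the set."""
    pts = set(points)
    for (x1, y1), (x2, y2) in combinations(pts, 2):
        if x1 == x2 or y1 == y2:
            continue  # degenerate rectangle: no constraint
        lox, hix = min(x1, x2), max(x1, x2)
        loy, hiy = min(y1, y2), max(y1, y2)
        if not any((x, y) not in {(x1, y1), (x2, y2)}
                   and lox <= x <= hix and loy <= y <= hiy
                   for (x, y) in pts):
            return False
    return True

# Two diagonal points alone violate the property; adding a corner fixes it.
print(arborally_satisfied([(1, 1), (2, 2)]))          # False
print(arborally_satisfied([(1, 1), (1, 2), (2, 2)]))  # True
```

Under this correspondence, the minimum-size arborally satisfied superset of the access points equals the optimal offline BST cost, which is what makes the geometric restatement so clean.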

71 citations

##### Cited by

TL;DR: A deterministic algorithm for triangulating a simple polygon in linear time is given, using the polygon-cutting theorem and the planar separator theorem, whose role is essential in the discovery of new diagonals.

Abstract: We give a deterministic algorithm for triangulating a simple polygon in linear time. The basic strategy is to build a coarse approximation of a triangulation in a bottom-up phase and then use the information computed along the way to refine the triangulation in a top-down phase. The main tools used are the polygon-cutting theorem, which provides us with a balancing scheme, and the planar separator theorem, whose role is essential in the discovery of new diagonals. Only elementary data structures are required by the algorithm; in particular, no dynamic search trees are needed.

632 citations

TL;DR: The proposed method incorporates the voting method into the popular extreme learning machine (ELM) in classification applications and generally outperforms the original ELM algorithm as well as several recent classification algorithms.

Abstract: This paper proposes an improved learning algorithm for classification which is referred to as voting based extreme learning machine. The proposed method incorporates the voting method into the popular extreme learning machine (ELM) in classification applications. Simulations on many real world classification datasets have demonstrated that this algorithm generally outperforms the original ELM algorithm as well as several recent classification algorithms.

288 citations

TL;DR: In this article, scale-independent elementary geometric constructions and constrained optimization algorithms can be used to determine spatially modulated patterns that yield approximations to given surfaces of constant or varying curvature.

Abstract: Origami describes rules for creating folded structures from patterns on a flat sheet, but does not prescribe how patterns can be designed to fit target shapes. Here, starting from the simplest periodic origami pattern that yields one-degree-of-freedom collapsible structures-we show that scale-independent elementary geometric constructions and constrained optimization algorithms can be used to determine spatially modulated patterns that yield approximations to given surfaces of constant or varying curvature. Paper models confirm the feasibility of our calculations. We also assess the difficulty of realizing these geometric structures by quantifying the energetic barrier that separates the metastable flat and folded states. Moreover, we characterize the trade-off between the accuracy to which the pattern conforms to the target surface, and the effort associated with creating finer folds. Our approach enables the tailoring of origami patterns to drape complex surfaces independent of absolute scale, as well as the quantification of the energetic and material cost of doing so.

263 citations