
Showing papers on "Binary tree published in 2005"


Journal ArticleDOI
TL;DR: These representations use a number of bits close to the information-theoretic lower bound and support operations in constant time; they also give unique labels to the nodes of the tree, which can be used to store satellite information with the nodes efficiently.
Abstract: This paper focuses on space efficient representations of rooted trees that permit basic navigation in constant time. While most of the previous work has focused on binary trees, we turn our attention to trees of higher degree. We consider both cardinal trees (or k-ary tries), where each node has k slots, labelled {1,...,k}, each of which may have a reference to a child, and ordinal trees, where the children of each node are simply ordered. Our representations use a number of bits close to the information theoretic lower bound and support operations in constant time. For ordinal trees we support the operations of finding the degree, parent, ith child, and subtree size. For cardinal trees the structure also supports finding the child labelled i of a given node apart from the ordinal tree operations. These representations also provide a mapping from the n nodes of the tree onto the integers {1, ..., n}, giving unique labels to the nodes of the tree. This labelling can be used to store satellite information with the nodes efficiently.
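
The operations above are typically realized over a bit-string encoding of the tree plus rank/select directories. The following toy sketch is not the representation of this paper; it is a plain LOUDS-style encoding with naive O(n) rank/select (real succinct structures use o(n)-bit directories answering in O(1)), but it illustrates how parent, i-th child, degree and the {1,...,n} labelling can all be read off a single bit sequence.

# Toy LOUDS (level-order unary degree sequence) sketch of an ordinal tree.
# Assumptions: `children` maps each node to its ordered list of children; the
# naive rank/select below are O(n) scans standing in for the o(n)-bit,
# constant-time directories of real succinct representations.
from collections import deque

def louds_encode(children, root):
    bits = [1, 0]                       # virtual super-root whose only child is the root
    order, q = [], deque([root])
    while q:
        v = q.popleft()
        order.append(v)                 # order[k-1] is the node labelled k
        kids = children.get(v, [])
        bits.extend([1] * len(kids) + [0])
        q.extend(kids)
    return bits, order

def rank(bits, bit, i):                 # number of `bit`s in bits[0..i]
    return sum(1 for b in bits[:i + 1] if b == bit)

def select(bits, bit, j):               # position of the j-th `bit` (1-based)
    seen = 0
    for pos, b in enumerate(bits):
        seen += (b == bit)
        if seen == j:
            return pos
    raise ValueError("out of range")

def label(bits, x):                     # level-order label in {1,...,n} of the node at position x
    return rank(bits, 1, x)

def parent(bits, x):                    # undefined for the root (x == 0)
    return select(bits, 1, rank(bits, 0, x - 1))

def degree(bits, x):
    first = select(bits, 0, rank(bits, 1, x)) + 1
    d = 0
    while bits[first + d] == 1:
        d += 1
    return d

def ith_child(bits, x, i):              # i >= 1; None if node x has fewer than i children
    if i > degree(bits, x):
        return None
    return select(bits, 0, rank(bits, 1, x)) + i

# Example: root r with children a, b; a has one child c.
bits, order = louds_encode({"r": ["a", "b"], "a": ["c"]}, "r")
a = ith_child(bits, 0, 1)               # position of r's first child
print(bits, order[label(bits, a) - 1], degree(bits, 0), label(bits, parent(bits, a)))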

278 citations


Journal ArticleDOI
TL;DR: In this article, a monoid structure on the set of binary search trees is introduced, by a process very similar to the construction of the plactic monoid, the Robinson-Schensted insertion being replaced by the binary search tree insertion.

172 citations


Journal ArticleDOI
TL;DR: Using the number of subtrees containing a particular vertex, the subtree core of the tree is defined, a new concept analogous to, but different from, the concepts of center and centroid.

140 citations


Proceedings ArticleDOI
23 Oct 2005
TL;DR: The xbw transform uses path-sorting and grouping to linearize the labeled tree T into two coordinated arrays, one capturing the structure and the other the labels, in the spirit of the well-known Burrows-Wheeler transform for strings.
Abstract: Consider an ordered, static tree T on t nodes where each node has a label from alphabet set Σ. Tree T may be of arbitrary degree and of arbitrary shape. Say, we wish to support basic navigational operations such as find the parent of node u, the ith child of u, and any child of it with label α. In a seminal work over fifteen years ago, Jacobson (1989) observed that pointer-based tree representations are wasteful in space and introduced the notion of succinct data structures. He studied the special case of unlabeled trees and presented a succinct data structure of 2t + o(t) bits supporting navigational operations in O(1) time. The space used is asymptotically optimal with the information-theoretic lower bound averaged over all trees. This led to a slew of results on succinct data structures for arrays, trees, strings and multisets. Still, for the fundamental problem of structuring labeled trees succinctly, few results, if any, exist even though labeled trees arise frequently in practice, e.g. in the data as in markup text (XML) or in augmented data structures. We present a novel approach to the problem of succinct manipulation of labeled trees by designing what we call the xbw transform of the tree, in the spirit of the well-known Burrows-Wheeler transform for strings. The xbw transform uses path-sorting and grouping to linearize the labeled tree T into two coordinated arrays, one capturing the structure and the other the labels. Using the properties of the xbw transform, we (i) derive the first-known (near-)optimal results for succinct representation of labeled trees with O(1) time for navigation operations, (ii) optimally support the powerful subpath search operation for the first time, and (iii) introduce a notion of tree entropy and present linear time algorithms for compressing a given labeled tree up to its entropy beyond the information-theoretic lower bound averaged over all tree inputs. Our xbw transform is simple and likely to spur new results in the theory of tree compression and indexing, and may have some practical impact in XML data processing.
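
As a rough illustration of the path-sorting idea (an interpretative sketch only: the array names and details below are not the paper's exact definitions, and no compression or rank/select machinery is included), the transform can be mimicked by keying every node on the label string of its upward path and stably sorting:

# Toy illustration of xbw-style path-sorting: each node is keyed by the labels
# on its upward path (parent, grandparent, ..., root), nodes are stably sorted
# by that key, and two coordinated arrays are kept: a structural flag per node
# ("is this the last child of its parent?") and a label per node. These names
# and details are illustrative assumptions, not the paper's exact definitions.

def xbw_like_transform(children, labels, root):
    """children: dict node -> ordered list of children; labels: dict node -> str."""
    rows = []                                   # (upward_path, last_child_flag, label)
    def visit(v, path, is_last):
        rows.append((path, is_last, labels[v]))
        kids = children.get(v, [])
        for i, c in enumerate(kids):
            visit(c, labels[v] + path, i == len(kids) - 1)
    visit(root, "", True)                       # the root is treated as a last child
    rows.sort(key=lambda r: r[0])               # stable sort groups siblings together
    s_last = [last for _, last, _ in rows]
    s_alpha = [lab for _, _, lab in rows]
    return s_last, s_alpha

# Example: root A with children B, C; B has one child D.
tree = {"A": ["B", "C"], "B": ["D"]}
labels = {v: v for v in "ABCD"}
print(xbw_like_transform(tree, labels, "A"))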

133 citations


Proceedings ArticleDOI
13 Mar 2005
TL;DR: The experimental results show a very good performance in comparison to a batch decision tree learner and a high capacity to detect and react to drift in tests with artificial and real-world data sets.
Abstract: This paper presents a system for the induction of forests of functional trees from data streams that is able to detect concept drift. The Ultra Fast Forest of Trees (UFFT) is an incremental algorithm that works online, processing each example in constant time and performing a single scan over the training examples. It uses analytical techniques to choose the splitting criteria, and the information gain to estimate the merit of each possible splitting-test. For multi-class problems the algorithm grows a binary tree for each possible pair of classes, leading to a forest of trees. Decision nodes and leaves contain naive-Bayes classifiers playing different roles during the induction process. Naive-Bayes classifiers in leaves are used to classify test examples; naive-Bayes classifiers in inner nodes can be used as multivariate splitting-tests if chosen by the splitting criteria, and are used to detect drift in the distribution of the examples that traverse the node. When a drift is detected, the whole sub-tree rooted at that node is pruned. The use of naive-Bayes classifiers at leaves to classify test examples, the use of splitting-tests based on the outcome of naive-Bayes, and the use of naive-Bayes classifiers at decision nodes to detect drift are directly obtained from the sufficient statistics required to compute the splitting criteria, without additional computations. This aspect is a main advantage in the context of high-speed data streams. This methodology was tested with artificial and real-world data sets. The experimental results show a very good performance in comparison to a batch decision tree learner, and a high capacity to detect and react to drift.
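
A much-simplified sketch of the one-vs-one decomposition described above, with one online learner per pair of classes and prediction by voting, is given below; the per-pair learner is a tiny incremental Gaussian naive Bayes standing in for the paper's functional trees, and tree growth, splitting criteria and drift detection are all omitted.

# Simplified sketch of UFFT's pairwise (one-vs-one) decomposition: one online
# learner per pair of classes, each updated in constant time per example, with
# prediction by majority vote over the pairs. The per-pair learner here is an
# incremental Gaussian naive Bayes stand-in, not the paper's functional trees.
from collections import defaultdict
from itertools import combinations
import math

class IncrementalGaussianNB:
    def __init__(self):
        self.n = defaultdict(int)                # per-class example count
        self.mean = defaultdict(lambda: None)
        self.m2 = defaultdict(lambda: None)      # sum of squared deviations (Welford)

    def update(self, x, y):
        self.n[y] += 1
        if self.mean[y] is None:
            self.mean[y] = [0.0] * len(x); self.m2[y] = [0.0] * len(x)
        for i, xi in enumerate(x):
            d = xi - self.mean[y][i]
            self.mean[y][i] += d / self.n[y]
            self.m2[y][i] += d * (xi - self.mean[y][i])

    def predict(self, x):
        def loglik(y):
            ll = math.log(self.n[y])             # unnormalized log prior
            for i, xi in enumerate(x):
                var = self.m2[y][i] / self.n[y] + 1e-9
                ll -= 0.5 * (math.log(2 * math.pi * var) + (xi - self.mean[y][i]) ** 2 / var)
            return ll
        return max(self.n, key=loglik)

class PairwiseForest:
    def __init__(self, classes):
        self.pairs = {p: IncrementalGaussianNB() for p in combinations(sorted(classes), 2)}

    def update(self, x, y):
        for (a, b), clf in self.pairs.items():
            if y in (a, b):
                clf.update(x, y)                 # each example only touches its own pairs

    def predict(self, x):
        votes = defaultdict(int)
        for clf in self.pairs.values():
            if clf.n:                            # skip pairs that saw no data yet
                votes[clf.predict(x)] += 1
        return max(votes, key=votes.get)

forest = PairwiseForest(classes=[0, 1, 2])
forest.update([1.0, 2.0], 0); forest.update([5.0, 6.0], 1); forest.update([9.0, 1.0], 2)
print(forest.predict([5.2, 5.9]))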

104 citations


Book ChapterDOI
06 Jul 2005
TL;DR: In this article, a technique for generating efficient monitors for ω-regular languages is presented, where Büchi automata can be reduced in size and transformed into special, statistically optimal non-deterministic finite state machines, called BTT-FSMs, which recognize precisely the minimal bad prefixes of the original ω-regular language.
Abstract: We present a technique for generating efficient monitors for ω-regular languages. We show how Büchi automata can be reduced in size and transformed into special, statistically optimal nondeterministic finite state machines, called binary transition tree finite state machines (BTT-FSMs), which recognize precisely the minimal bad prefixes of the original ω-regular language. The presented technique is implemented as part of a larger monitoring framework and is available for download.

79 citations


Proceedings ArticleDOI
18 Jul 2005
TL;DR: A novel decentralized load-balancing algorithm is proposed, whose analysis relies on a stochastic process for growing binary trees that are highly balanced -- the leaves of the tree belong to at most four different levels with high probability.
Abstract: We study randomized algorithms for placing a sequence of n nodes on a circle with unit perimeter. Nodes divide the circle into disjoint arcs. We desire that a newly-arrived node (which is oblivious of its index in the sequence) choose its position on the circle by learning the positions of as few existing nodes as possible. At the same time, we desire that the variation in arc-lengths be small. To this end, we propose a new algorithm that works as follows: The kth node chooses r random points on the circle, inspects the sizes of v arcs in the vicinity of each random point, and places itself at the mid-point of the largest arc encountered. We show that for any combination of r and v satisfying rv ≥ c log k, where c is a small constant, the ratio of the largest to the smallest arc-length is at most eight w.h.p., for an arbitrarily long sequence of n nodes. This strategy of node placement underlies a novel decentralized load-balancing algorithm that we propose for Distributed Hash Tables (DHTs) in peer-to-peer environments. Underlying the analysis of our algorithm is Structured Coupon Collection over n/b disjoint cliques with b nodes per clique, for any n, b ≥ 1. Nodes are initially uncovered. At each step, we choose d nodes independently and uniformly at random. If all the nodes in the corresponding cliques are covered, we do nothing. Otherwise, from among the chosen cliques with at least one uncovered node, we select one at random and cover an uncovered node within that clique. We show that as long as bd ≥ c log n, O(n) steps are sufficient to cover all nodes w.h.p. and each of the first Ω(n) steps succeeds in covering a node w.h.p. These results are then utilized to analyze a stochastic process for growing binary trees that are highly balanced -- the leaves of the tree belong to at most four different levels with high probability.
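
A direct, hedged reading of the placement rule (our own interpretation, not the authors' code) can be simulated as follows: the k-th node draws r random points, inspects v arcs around each, and settles at the midpoint of the largest arc it saw.

# Simulation sketch of the placement rule above (an interpretation for
# illustration, not the authors' implementation).
import bisect
import random

def place_node(positions, r, v):
    """positions: sorted list of existing positions in [0, 1); appends and returns the new one."""
    if not positions:
        pos = random.random()
        positions.append(pos)
        return pos
    best = None
    for _ in range(r):
        p = random.random()
        j = bisect.bisect_left(positions, p)       # the arc containing p starts at positions[j-1]
        for t in range(j - 1, j - 1 + v):          # inspect v consecutive arcs near p
            a = positions[t % len(positions)]
            b = positions[(t + 1) % len(positions)]
            length = (b - a) % 1.0 or 1.0          # wrap-around length (full circle if one node)
            if best is None or length > best[0]:
                best = (length, a)
    length, a = best
    pos = (a + length / 2.0) % 1.0                 # midpoint of the largest arc encountered
    bisect.insort(positions, pos)
    return pos

# Example: place 64 nodes with r = 2, v = 4 and report the largest-to-smallest arc ratio.
pts = []
for _ in range(64):
    place_node(pts, r=2, v=4)
arcs = [(pts[(i + 1) % len(pts)] - pts[i]) % 1.0 for i in range(len(pts))]
print(max(arcs) / min(arcs))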

76 citations


Posted Content
23 Sep 2005
TL;DR: In this paper, it is shown that the extremality of the free Gibbs measure on the infinite binary tree, which has been studied before in probability, statistical physics and computer science, determines how distinguishable Gibbs measures on finite binary trees are, and it is proved that Steel's conjecture holds true for general trees by giving a reconstruction algorithm that recovers the tree from O(log n)-length sequences when the mutation probabilities are discretized and less than $p^\ast$.
Abstract: A major task of evolutionary biology is the reconstruction of phylogenetic trees from molecular data. The evolutionary model is given by a Markov chain on a tree. Given samples from the leaves of the Markov chain, the goal is to reconstruct the leaf-labelled tree. It is well known that in order to reconstruct a tree on $n$ leaves, sample sequences of length $\Omega(\log n)$ are needed. It was conjectured by M. Steel that for the CFN/Ising evolutionary model, if the mutation probability on all edges of the tree is less than $p^{\ast} = (\sqrt{2}-1)/2^{3/2}$, then the tree can be recovered from sequences of length $O(\log n)$. The value $p^{\ast}$ is given by the transition point for the extremality of the free Gibbs measure for the Ising model on the binary tree. Steel's conjecture was proven by the second author in the special case where the tree is "balanced." The second author also proved that if all edges have mutation probability larger than $p^{\ast}$ then the length needed is $n^{\Omega(1)}$. Here we show that Steel's conjecture holds true for general trees by giving a reconstruction algorithm that recovers the tree from $O(\log n)$-length sequences when the mutation probabilities are discretized and less than $p^\ast$. Our proof and results demonstrate that extremality of the free Gibbs measure on the infinite binary tree, which has been studied before in probability, statistical physics and computer science, determines how distinguishable are Gibbs measures on finite binary trees.

75 citations


Journal ArticleDOI
01 Mar 2005
TL;DR: In this article, the authors explore the possibility of using multiple processors to improve the encoding and decoding times of Lempel-Ziv schemes and propose a new layout of the processors based on a full binary tree.
Abstract: We explore the possibility of using multiple processors to improve the encoding and decoding times of Lempel-Ziv schemes. A new layout of the processors, based on a full binary tree, is suggested and it is shown how LZSS and LZW can be adapted to take advantage of such parallel architectures. The layout is then generalized to higher order trees. Experimental results show an improvement in compression over the standard method of parallelization and an improvement in time over the sequential method.

58 citations


Journal ArticleDOI
TL;DR: In this paper, the asymptotic analysis of the binary search tree (BST) under the random permutation model via an embedding in a continuous time model is presented.
Abstract: We are interested in the asymptotic analysis of the binary search tree (BST) under the random permutation model. Via an embedding in a continuous-time model, we get new results, in particular the asymptotic behavior of the profile.
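
For readers who want to see the object being analyzed, the following small simulation builds a BST from a uniformly random permutation and reports its profile (number of nodes per level); it only illustrates the random permutation model, not the paper's continuous-time embedding.

# Numerical illustration of the random permutation model, not the paper's
# continuous-time embedding: insert a random permutation of {1..n} into a BST
# and read off the profile (nodes per depth).
import random
from collections import Counter

def bst_profile(n, seed=None):
    rng = random.Random(seed)
    keys = list(range(1, n + 1))
    rng.shuffle(keys)                       # random permutation model
    left, right = {}, {}
    root = keys[0]
    depths = Counter({0: 1})
    for key in keys[1:]:
        node, depth = root, 0
        while True:
            depth += 1
            branch = left if key < node else right
            if node in branch:
                node = branch[node]
            else:
                branch[node] = key
                depths[depth] += 1
                break
    return [depths[d] for d in range(max(depths) + 1)]

print(bst_profile(1000, seed=1))            # profile of one random BST on 1000 keys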

48 citations


Journal ArticleDOI
TL;DR: Kleene's result for formal tree series over a commutative semiring A is proved, i.e., the class of formal tree series over A which are accepted by weighted tree automata and the class of rational tree series over A are equal.
Abstract: In this paper we prove Kleene’s result for formal tree series over a commutative semiring A (which is not necessarily complete or continuous or idempotent), i.e., the class of formal tree series over A which are accepted by weighted tree automata, and the class of rational tree series over A are equal. We show the result by direct automata-theoretic constructions and prove their correctness.

Proceedings ArticleDOI
27 Nov 2005
TL;DR: The AMIOT algorithm for discovering all frequent ordered subtrees in a tree-structured database is presented, and it is shown that AMIOT reduces redundant candidate trees and outperforms the FREQT algorithm by up to five times in execution time.
Abstract: Frequent subtree mining has become increasingly important in recent years. In this paper, we present the AMIOT algorithm to discover all frequent ordered subtrees in a tree-structured database. In order to avoid the generation of infrequent candidate trees, we propose techniques such as right-and-left tree join and serial tree extension. The proposed methods enumerate only the candidate trees with a high probability of being frequent, without any duplication. The experiments on a synthetic dataset and an XML database show that AMIOT reduces redundant candidate trees and outperforms the FREQT algorithm by up to five times in execution time.

Proceedings ArticleDOI
18 Mar 2005
TL;DR: A variable-feature-set clustering scheme is developed and compared with a previously reported binary tree scheme; with event-level features, the proposed clustering scheme with SVM achieves a 31.5% relative error reduction with respect to the best result from a binary tree scheme.
Abstract: Acoustic events produced in meeting-room-like environments may carry information useful for perceptually aware interfaces. We focus on the problem of classifying 16 types of acoustic events, using and comparing several types of features and various classifiers based on either GMM or SVM. A variable-feature-set clustering scheme is developed and compared with an already reported binary tree scheme. In our experiments with event-level features, the proposed clustering scheme with SVM achieves a 31.5% relative error reduction with respect to the best result from a binary tree scheme.

Book ChapterDOI
28 Aug 2005
TL;DR: This paper studies automata for unranked trees that are standard in database theory and shows that bottom-up deterministic stepwise tree automata yield the most succinct representations.
Abstract: Automata for unranked trees form a foundation for XML schemas, querying and pattern languages. We study the problem of efficiently minimizing such automata. We start with the unranked tree automata (UTAs) that are standard in database theory, assuming bottom-up determinism and that horizontal recursion is represented by deterministic finite automata. We show that minimal UTAs in that class are not unique and that minimization is NP-hard. We then study more recent automata classes that do allow for polynomial time minimization. Among those, we show that bottom-up deterministic stepwise tree automata yield the most succinct representations.

Proceedings ArticleDOI
25 Jul 2005
TL;DR: A tree-based regression algorithm (TREG) is proposed that addresses the problem of data compression in wireless sensor networks by function approximation based on multivariable polynomial regression, passing only the coefficients returned by the regression function instead of aggregated data.
Abstract: In this paper, we propose a tree-based regression algorithm (TREG) that addresses the problem of data compression in wireless sensor networks. By function approximation based on multivariable polynomial regression and passing only the coefficients returned by the regression function instead of aggregated data, TREG achieves the following goals: (1) the sink can get attribute values in regions devoid of sensor nodes for attributes that show smooth spatial gradation; (2) readings over any portion of the region can be obtained at one time by querying the root instead of flooding those regions, thus incurring significant energy savings. As the size of the data packet transmitted from one tree node to another remains constant, the proposed scheme scales well with growing network density. Extensive simulations are performed on real-world data to demonstrate the effectiveness of our aggregation algorithm. Results reveal that for a network density of 0.0025, the optimal tree depth should be 4 in order to restrict the absolute error to less than a threshold of 6%. A data compression ratio of about 0.02 is achieved using our proposed algorithm, which is almost independent of tree depth.
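
The compression step can be pictured with a generic least-squares polynomial fit (a sketch under our own assumptions, not the exact TREG aggregation): a node fits a low-degree bivariate polynomial to (location, reading) samples and forwards only the coefficient vector, which the sink can evaluate anywhere, including regions with no sensors.

# Generic least-squares sketch of "send coefficients, not readings"; this is
# an illustration of the idea, not the exact TREG aggregation procedure.
import numpy as np

def poly_terms(x, y, degree=2):
    return np.column_stack([x**i * y**j
                            for i in range(degree + 1)
                            for j in range(degree + 1 - i)])

def fit_coefficients(x, y, values, degree=2):
    A = poly_terms(x, y, degree)
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coeffs                                  # this small vector is all that is transmitted

def evaluate(coeffs, x, y, degree=2):
    return poly_terms(np.atleast_1d(x), np.atleast_1d(y), degree) @ coeffs

# Example: 50 sensors measuring a smooth field, then a query at a spot with no sensor.
rng = np.random.default_rng(0)
xs, ys = rng.uniform(0, 1, 50), rng.uniform(0, 1, 50)
readings = 20 + 5 * xs - 3 * ys**2 + rng.normal(0, 0.1, 50)
c = fit_coefficients(xs, ys, readings)
print(evaluate(c, 0.25, 0.75)[0])                  # estimated reading at the empty location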

Proceedings ArticleDOI
Carlos Ordonez
14 Jun 2005
TL;DR: This work focuses on the optimization of linear recursive queries in SQL, using the computation of the transitive closure of a graph as an abstract framework; the proposed optimizations result in a significant reduction in the evaluation time of recursive queries.
Abstract: Recursion represents an important addition to the SQL language. This work focuses on the optimization of linear recursive queries in SQL. To provide an abstract framework for discussion, we focus on computing the transitive closure of a graph. Three optimizations are studied: (1) Early evaluation of row selection conditions. (2) Eliminating duplicate rows in intermediate tables. (3) Defining an enhanced index to accelerate join computation. Optimizations are evaluated on two types of graphs: binary trees and sparse graphs. Binary trees represent an ideal graph with no cycles and a linear number of edges. Sparse graphs represent an average case with some cycles and a linear number of edges. In general, the proposed optimizations produce a significant reduction in the evaluation time of recursive queries.
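
Two of the three optimizations are easy to picture outside SQL; the sketch below mirrors the shape of an iterative linear-recursive evaluation of transitive closure in Python (illustrative only, not engine internals): the selection condition is pushed into the seed step and duplicate rows are dropped after every join.

# Python sketch of optimizations (1) early selection and (2) duplicate
# elimination, using transitive closure as in the paper; this mirrors the shape
# of an iterative linear-recursive evaluation, not actual SQL engine internals.

def transitive_closure(edges, source_filter=None):
    """edges: set of (u, v); source_filter: optional predicate on u (early selection)."""
    if source_filter:                              # (1) evaluate the row selection early
        frontier = {(u, v) for (u, v) in edges if source_filter(u)}
    else:
        frontier = set(edges)
    closure = set(frontier)
    while frontier:
        # join the newest paths with the base edges, extending each path by one edge
        extended = {(u, w) for (u, v) in frontier for (x, w) in edges if v == x}
        frontier = extended - closure              # (2) eliminate duplicate rows
        closure |= frontier
    return closure

edges = {(1, 2), (2, 3), (3, 4), (4, 2)}           # small graph with a cycle
print(sorted(transitive_closure(edges, source_filter=lambda u: u == 1)))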

Proceedings ArticleDOI
25 Mar 2005
TL;DR: This paper proposes a modification to the existing anticollision protocol put forth in version 1.0 of the RFID Tag protocol, which reduces the overall read time of a given number of RFID tags by resetting to the appropriate node for every consecutive read cycle.
Abstract: This paper proposes a modification to the existing anticollision protocol put forth in version 1.0 protocol specification for 900MHz Class 0 RFID Tag. The version 1.0 specification uses a binary tree approach to singulate one RF tag ID at a time. The proposed change reduces the overall read time of a given number of RFID tags by resetting to the appropriate node, for every consecutive read cycle. The present standard resets to the root node of the binary tree for every read cycle.

Journal ArticleDOI
TL;DR: A new algorithm is presented to rapidly compute the two-point, three-point and n-point correlation functions in roughly O(N log N) time for N particles, instead of the O(N^n) time required by brute-force approaches.

Journal ArticleDOI
TL;DR: In this paper, a highly structured numerical method that allows for an important speedup in the calculations is described, implemented in a bi-dimensional binary tree (quadtree or octree) structure in a partition of unity framework.
Abstract: We describe in this paper a highly structured numerical method that allows for an important speedup in the calculations. The method is implemented in a bi-dimensional binary tree (quadtree or octree) structure in a partition of unity framework. The partition of unity is constructed by using natural neighbour interpolation. Data can be easily obtained from voxel or pixel-based images, as well as STL files or other CAD descriptions. The method described here possesses linear completeness at least and essential boundary conditions are implemented through the characteristic function method, by employing a special class of functions called R-functions. After the theoretical description of the method, some examples of its performance are presented and analysed. Copyright © 2005 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: The proposed method significantly enhances the processing efficiency of conventional Huffman decoding realized with the ordinary binary tree search method, and shows slightly better processing efficiency, while requiring much less memory space, compared even with the up-to-date efficient search methods of Hashemian and its variants.
Abstract: This paper presents a new method for Huffman decoding specially designed for MPEG-2 AAC audio. The method significantly enhances the processing efficiency of conventional Huffman decoding realized with the ordinary binary tree search method. A data structure based on a one-dimensional array is newly designed, built on the numerical interpretation of the incoming bit stream and its utilization for offset-oriented node allocation. The Huffman tree implemented with the proposed data structure allows the direct computation of the branching location, eliminating the need for the pipeline-violating "compare and jump" instructions. The experimental results show average performance enhancements of 67% and 285%, compared to the conventional binary tree search method and the sequential search method, respectively. The proposed method also shows slightly better processing efficiency, while requiring much less memory space, compared even with the up-to-date efficient search methods of Hashemian and its variants.
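
The flavor of decoding by numeric interpretation of the bit stream, as opposed to walking a binary tree with per-bit compare-and-jump, can be seen in a canonical-Huffman-style sketch; the first_code/first_index arrays below are a standard textbook layout chosen for illustration and differ in detail from the paper's one-dimensional array for MPEG-2 AAC.

# Canonical-Huffman-style sketch of decoding by numeric interpretation of the
# bit stream; the array layout here is a standard textbook one, chosen only to
# illustrate the idea, and is not the paper's MPEG-2 AAC data structure.

def build_canonical(code_lengths):
    """code_lengths: dict symbol -> codeword length. Returns per-length tables."""
    symbols = sorted(code_lengths, key=lambda s: (code_lengths[s], s))
    max_len = max(code_lengths.values())
    count = [0] * (max_len + 1)
    for s in symbols:
        count[code_lengths[s]] += 1
    first_code, first_index = [0] * (max_len + 1), [0] * (max_len + 1)
    code = idx = 0
    for L in range(1, max_len + 1):
        first_code[L], first_index[L] = code, idx
        code = (code + count[L]) << 1
        idx += count[L]
    return symbols, first_code, first_index, count

def decode(bits, tables):
    symbols, first_code, first_index, count = tables
    out, value, length, pos = [], 0, 0, 0
    while pos < len(bits):
        value = (value << 1) | bits[pos]; length += 1; pos += 1
        # branching location computed arithmetically: no per-node compare-and-jump
        if length < len(count) and count[length] and 0 <= value - first_code[length] < count[length]:
            out.append(symbols[first_index[length] + value - first_code[length]])
            value = length = 0
    return out

lengths = {"a": 1, "b": 2, "c": 3, "d": 3}            # a valid prefix-code length profile
tables = build_canonical(lengths)
print(decode([0, 1, 0, 1, 1, 0, 1, 1, 1], tables))    # -> ['a', 'b', 'c', 'd']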

Journal ArticleDOI
05 Sep 2005
TL;DR: This paper proposes a compromise by including three global compilation techniques (type analysis, coloring and binary tree dispatching) in a separate compilation framework.
Abstract: Compilers used in industry are mainly based on a separate compilation framework. However, knowledge of the whole program improves the efficiency of object-oriented language compilers, so more efficient implementation techniques are based on a global compilation framework. In this paper, we propose a compromise by including three global compilation techniques (type analysis, coloring and binary tree dispatching) in a separate compilation framework. Files are independently compiled into standard binary files with unresolved symbols. The program is built by linking object files: files are gathered and analyzed, some link code is generated, then symbols are resolved.
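
Of the three techniques, binary tree dispatching is the easiest to picture in isolation: a call site selects the target implementation by binary comparisons over dense type identifiers rather than by a table indirection. The sketch below only conveys that idea (in Python, with an explicit loop standing in for the nest of compiled if/else tests); it is not the paper's compilation scheme.

# Illustration of binary tree dispatching: select an implementation by binary
# comparisons on dense type identifiers. The explicit loop stands in for the
# inlined if/else nest a compiler would emit; this is not the paper's scheme.

def build_dispatcher(impls):
    """impls: sorted list of (type_id, callable); dispatch costs ~log2(len(impls)) comparisons."""
    def dispatch(type_id, *args):
        lo, hi = 0, len(impls) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            tid, fn = impls[mid]
            if type_id == tid:
                return fn(*args)
            if type_id < tid:
                hi = mid - 1
            else:
                lo = mid + 1
        raise TypeError(f"no implementation for type id {type_id}")
    return dispatch

area = build_dispatcher([(0, lambda r: 3.14159 * r * r),   # 0 = Circle (hypothetical type ids)
                         (1, lambda w, h: w * h)])         # 1 = Rectangle
print(area(1, 3, 4))                                       # -> 12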

Journal ArticleDOI
TL;DR: This work investigates the situation in which each unit from a given set is described by some vector of p probability distributions and aims to find simultaneously a "good" partition of these units and a probabilistic description of the clusters with a model using "copula functions" associated with each class of this partition.

Patent
15 Nov 2005
TL;DR: In this paper, a combination of a binary tree (e.g., a balanced binary tree) with a lookup table is proposed to provide scalable retrieval of entries by either an array index or a secondary key.
Abstract: Example embodiments improve the lookup times and modification costs of indexing on a dynamically sorted list by using a combination of data structures to determine index values for secondary keys and vice versa. More specifically, exemplary embodiments provide a combination of a binary tree (e.g., a balanced binary tree) with a lookup table (e.g., a linear dynamic hash table) to provide scalable retrieval of entries by either an array index or a secondary key. For example, given a key, a hash thereof returns a node placement within a balanced binary tree. This positioning can then be used at runtime to determine an index value for a data record, resulting in a near real-time lookup cost. Also note that modifications, such as reordering, insertions, and deletions, become a function of the node's depth in the binary tree, rather than a linear function of the number of records within the data array.
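
A minimal sketch of the combination described above: an order-statistic binary search tree (each node storing its subtree size) plus a hash table from secondary key to tree node, so that both key-to-index and index-to-key are tree walks rather than linear scans. For brevity the tree below is an ordinary unbalanced BST; the balanced tree of the claimed embodiment keeps both operations logarithmic.

# Sketch only: order-statistic BST (subtree sizes) + hash table from key to
# node. The tree is left unbalanced for brevity; a balanced binary tree, as in
# the claimed embodiment, makes these operations logarithmic.

class Node:
    __slots__ = ("key", "left", "right", "parent", "size")
    def __init__(self, key, parent=None):
        self.key, self.parent = key, parent
        self.left = self.right = None
        self.size = 1

class IndexedMap:
    def __init__(self):
        self.root = None
        self.by_key = {}                      # hash table: secondary key -> tree node

    def insert(self, key):
        if self.root is None:
            self.root = self.by_key[key] = Node(key)
            return
        cur = self.root
        while True:
            cur.size += 1
            side = "left" if key < cur.key else "right"
            nxt = getattr(cur, side)
            if nxt is None:
                node = Node(key, parent=cur)
                setattr(cur, side, node)
                self.by_key[key] = node
                return
            cur = nxt

    def key_at(self, index):                  # index -> key (0-based, sorted order)
        cur = self.root
        while cur:
            left_size = cur.left.size if cur.left else 0
            if index < left_size:
                cur = cur.left
            elif index == left_size:
                return cur.key
            else:
                index -= left_size + 1
                cur = cur.right
        raise IndexError(index)

    def index_of(self, key):                  # key -> index via the hash table plus a tree walk
        node = self.by_key[key]
        rank = node.left.size if node.left else 0
        while node.parent:
            if node is node.parent.right:
                rank += (node.parent.left.size if node.parent.left else 0) + 1
            node = node.parent
        return rank

m = IndexedMap()
for k in ["pear", "apple", "mango", "fig"]:
    m.insert(k)
print(m.key_at(2), m.index_of("fig"))         # -> mango 1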

Journal ArticleDOI
Hsien-Kuei Hwang1
TL;DR: An unexpected connection between the profile of plane-oriented recursive trees (with logarithmic height) and that of random binary trees (with height proportional to the square root of tree size) is unveiled.
Abstract: We summarize several limit results for the profile of random plane-oriented recursive trees. These include the limit distribution of the normalized profile, asymptotic bimodality of the variance, asymptotic approximations of the expected width and the correlation coefficients of two level sizes. We also unveil an unexpected connection between the profile of plane-oriented recursive trees (with logarithmic height) and that of random binary trees (with height proportional to the square root of tree size).

Journal ArticleDOI
TL;DR: An efficient node-selection algorithm in the binary partition tree is proposed for the final face segmentation, which can exactly segment the faces without any underlying assumption.
Abstract: This paper presents an efficient face segmentation algorithm based on a binary partition tree. Skin-like regions are first obtained by integrating the results of pixel classification and watershed segmentation. Facial features are extracted by the techniques of valley detection and entropic thresholding, and are used to refine the skin-like regions. In order to segment the facial regions from the skin-like regions, a novel region merging algorithm is proposed by considering the impact of the common border ratio between adjacent regions, and the binary partition tree is used to represent the whole region merging process. Then the facial likeness of each node in the binary partition tree is evaluated using a set of fuzzy membership functions devised for a number of facial primitives of geometrical, elliptical and facial features. Finally, an efficient node-selection algorithm in the binary partition tree is proposed for the final face segmentation, which can exactly segment the faces without any underlying assumption. The performance of the proposed face segmentation algorithm is demonstrated by experimental results carried out on a variety of images in different scenarios.

01 Jan 2005
TL;DR: In this article, the problem of finding simultaneously a good partition of these units and a probabilistic description of the clusters with a model using "copula functions" associated with each class of this partition is investigated.
Abstract: This work investigates the situation in which each unit from a given set is described by some vector of p probability distributions. Our aim is to find simultaneously a “good” partition of these units and a probabilistic description of the clusters with a model using “copula functions” associated with each class of this partition. Different copula models are presented. The mixture decomposition problem is resolved in this general case. This result extends the standard mixture decomposition problem to the case where each unit is described by a vector of distributions instead of the traditional classical case where each unit is described by a vector of single (categorical or numerical) values. Several generalizations of some standard algorithms are proposed. All these results are first considered in the case of a single variable and then extended to the case of a vector of p variables by using a top-down binary tree approach. © 2004 Elsevier B.V. All rights reserved.

Posted Content
TL;DR: A family of meta-Fibonacci sequences which arise in studying the number of leaves at the largest level in certain infinite sequences of binary trees, restricted compositions of an integer, and binary compact codes are considered.
Abstract: We look at a family of meta-Fibonacci sequences which arise in studying the number of leaves at the largest level in certain infinite sequences of binary trees, restricted compositions of an integer, and binary compact codes. For this family of meta-Fibonacci sequences and two families of related sequences we derive ordinary generating functions and recurrence relations. Included in these families of sequences are several well-known sequences in the Online Encyclopedia of Integer Sequences (OEIS).

Proceedings ArticleDOI
12 Dec 2005
TL;DR: The proposed algorithm DUMMYREG is run at each parent node and uses information present in the existing child to construct a complete binary tree and further reduces the error when the readings are regenerated at the sink.
Abstract: In this paper we propose a method for data compression and its subsequent regeneration using a polynomial regression technique. We approximate data received over the considered area by fitting it to a function and communicate this by passing only the coefficients that describe the function. In this paper, we extend our previous algorithm TREG to consider non-complete aggregation trees. The proposed algorithm DUMMYREG is run at each parent node and uses information present in the existing child to construct a complete binary tree. In addition to obtaining values in regions devoid of sensor nodes and reducing communication overhead, this new approach further reduces the error when the readings are regenerated at the sink. Results reveal that for a network density of 0.0025 and a complete binary tree of depth 4, the absolute error is 6%. For a non-complete binary tree, TREG returns an error of 18%, while this is reduced to 12% when DUMMYREG is used.

Journal ArticleDOI
TL;DR: The proposed algorithm requires a comparably small amount of memory, can be used for software-based address lookup in practical Internet routers, and results in a much smaller number of worst-case memory accesses than previous schemes.
Abstract: As an essential function in Internet routers, address lookup determines overall router performance. The most important performance metric in software-based address lookup is the number of memory accesses, since it is directly related to lookup time. This letter proposes an algorithm to perform efficient binary search for IP address lookup. The depth of the proposed binary tree is very close to the minimum bound, and hence it results in a much smaller number of worst-case memory accesses compared to previous schemes. The proposed algorithm requires a comparably small amount of memory, and it can be used for software-based address lookup in practical Internet routers.
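
To make the "memory accesses = depth of the search tree" point concrete, here is a generic binary-search-on-address-ranges longest-prefix-match sketch (a textbook-style construction, not the letter's specific near-minimum-depth tree): lookup is a single binary search over precomputed interval boundaries.

# Generic binary-search-on-ranges longest-prefix match, for illustration only;
# this is a textbook-style construction, not the letter's proposed tree.
import bisect
import ipaddress

def build_table(prefixes):
    """prefixes: list of (cidr_string, next_hop). Returns (boundaries, answers)."""
    ranges = []
    for cidr, hop in prefixes:
        net = ipaddress.ip_network(cidr)
        start = int(net.network_address)
        ranges.append((start, start + net.num_addresses - 1, net.prefixlen, hop))
    boundaries = sorted({r[0] for r in ranges} | {r[1] + 1 for r in ranges})
    answers = []
    for b in boundaries[:-1]:                      # precompute the best match per elementary interval
        covering = [r for r in ranges if r[0] <= b <= r[1]]
        answers.append(max(covering, key=lambda r: r[2])[3] if covering else None)
    return boundaries, answers

def lookup(address, boundaries, answers):
    a = int(ipaddress.ip_address(address))
    i = bisect.bisect_right(boundaries, a) - 1     # one binary search = one root-to-leaf descent
    return answers[i] if 0 <= i < len(answers) else None

table = build_table([("10.0.0.0/8", "A"), ("10.1.0.0/16", "B"), ("0.0.0.0/0", "C")])
print(lookup("10.1.2.3", *table), lookup("192.168.0.1", *table))   # -> B C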