
Showing papers on "Tree (data structure) published in 2000"


Journal ArticleDOI
TL;DR: In this paper, the authors show that alternating tree automata are the key to a comprehensive automata-theoretic framework for branching temporal logics, which can be used to obtain optimal decision procedures, as was shown by Muller et al.
Abstract: Translating linear temporal logic formulas to automata has proven to be an effective approach for implementing linear-time model-checking, and for obtaining many extensions and improvements to this verification method. On the other hand, for branching temporal logic, automata-theoretic techniques have long been thought to introduce an exponential penalty, making them essentially useless for model-checking. Recently, Bernholtz and Grumberg [1993] have shown that this exponential penalty can be avoided, though they did not match the linear complexity of non-automata-theoretic algorithms. In this paper, we show that alternating tree automata are the key to a comprehensive automata-theoretic framework for branching temporal logics. Not only can they be used to obtain optimal decision procedures, as was shown by Muller et al., but, as we show here, they also make it possible to derive optimal model-checking algorithms. Moreover, the simple combinatorial structure that emerges from the automata-theoretic approach opens up new possibilities for the implementation of branching-time model checking and has enabled us to derive improved space complexity bounds for this long-standing problem.

738 citations


Book ChapterDOI
18 Apr 2000
TL;DR: A novel data structure, called Web access pattern tree, or WAP-tree for short, is developed for efficient mining of access patterns from pieces of Web logs.
Abstract: With the explosive growth of data available on the World Wide Web, discovery and analysis of useful information from the World Wide Web becomes a practical necessity. Web access pattern, which is the sequence of accesses pursued by users frequently, is a kind of interesting and useful knowledge in practice. In this paper, we study the problem of mining access patterns from Web logs efficiently. A novel data structure, called Web access pattern tree, or WAP-tree for short, is developed for efficient mining of access patterns from pieces of logs. The Web access pattern tree stores highly compressed, critical information for access pattern mining and facilitates the development of novel algorithms for mining access patterns in large sets of log pieces. Our algorithm can find access patterns from Web logs quite efficiently. The experimental and performance studies show that our method is in general an order of magnitude faster than conventional methods.
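The core compression idea, storing shared access-sequence prefixes once with per-node counts, can be sketched as a small prefix tree. This is an illustrative reconstruction; the node layout, header table, and names below are our assumptions, not the paper's exact design.

```python
class WapNode:
    """One event in a compressed tree of access sequences."""
    def __init__(self, label):
        self.label = label       # page/event identifier
        self.count = 0           # number of sequences passing through this node
        self.children = {}       # label -> WapNode

class WapTree:
    def __init__(self):
        self.root = WapNode(None)
        self.header = {}         # label -> all nodes with that label, for mining

    def insert(self, sequence):
        node = self.root
        for event in sequence:
            child = node.children.get(event)
            if child is None:
                child = WapNode(event)
                node.children[event] = child
                self.header.setdefault(event, []).append(child)
            child.count += 1
            node = child

tree = WapTree()
for seq in [("a", "b", "c"), ("a", "b", "d"), ("a", "c")]:
    tree.insert(seq)

a = tree.root.children["a"]      # shared prefix "a": count 3, stored once
```

Inserting the three sequences stores the shared prefix once, so frequency information can be read directly off the tree and its header links during mining.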

572 citations


Journal ArticleDOI
TL;DR: A review of Thomas Friedman's "The Lexus and the Olive Tree," in which the Lexus stands for globalization and modernization and the olive tree for rootedness and traditional identity.
Abstract: (2000). The Lexus and the Olive Tree. Journal of Economic Issues: Vol. 34, No. 1, pp. 232-234.

536 citations


Patent
22 Nov 2000
TL;DR: In this paper, a computerized system lays out document templates represented as a tree of text and shape elements, including variable elements, and the user can define a shape element to have a maximize or minimize property in one or more dimensions.
Abstract: A computerized system lays out document templates represented as a tree of text and shape elements, including variable elements. The user can define a shape element to have a maximize or minimize property in one or more dimensions. The layout makes the minimized dimensions of a shape as small as its contents will allow; and makes the maximized dimensions expand as much as available space allows. Such maximization or minimization can be performed within a horizontal or vertical sequence box. Variable values mapped into variable shape elements can include sub-trees of text and/or shape elements, including shape elements which have the maximize or minimize property, and elements which are themselves variable elements. An anchor point can be fixed at a selected point on a shape, causing the anchor point to remain fixed as the rest of the shape expands or contracts. Variable image elements can maintain the aspect ratios of images mapped into them as those images are scaled. The layout of a variable element into which no variable values have been mapped can be suppressed. Both content and attribute values can be mapped into a variable element. Multiple content-mapping rule sets can be used with a given template, and multiple templates can be used with a given content-mapping rule set. The content mapping rules can include database queries that vary in response to variable data. Text or shape elements can be defined, respectively, by reference to text models, which define text attributes, and geometric models, which define shape attributes.

431 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated the use of local maximum (LM) filtering to locate trees on high spatial resolution (1-m) imagery, in terms of commission error (falsely indicated trees) and omission error (missed trees).

384 citations


Proceedings ArticleDOI
01 Aug 2000
TL;DR: An algorithm for mining long patterns in databases by using depth first search on a lexicographic tree of itemsets achieves more than one order of magnitude speedup over the recently proposed MaxMiner algorithm.
Abstract: In this paper we present an algorithm for mining long patterns in databases. The algorithm finds large itemsets by using depth-first search on a lexicographic tree of itemsets. The focus of this paper is to develop CPU-efficient algorithms for finding frequent itemsets in the cases when the database contains patterns which are very wide. We refer to this algorithm as DepthProject, and it achieves more than one order of magnitude speedup over the recently proposed MaxMiner algorithm for finding long patterns. These techniques may be quite useful for applications in areas such as computational biology in which the number of records is relatively small, but the itemsets are very long. This necessitates the discovery of patterns using algorithms which are especially tailored to the nature of such domains.
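The lexicographic-tree traversal at the heart of such an algorithm can be sketched as a plain depth-first enumeration with support-based pruning. The projection and counting optimizations that make DepthProject fast are omitted; the transactions and names below are invented for illustration.

```python
def support(itemset, transactions):
    """Number of transactions containing every item of the itemset."""
    return sum(1 for t in transactions if itemset <= t)

def dfs(prefix, candidates, transactions, minsup, out):
    """Depth-first walk of the lexicographic tree of itemsets.
    Each node extends its prefix only with lexicographically later items,
    and an infrequent node's whole subtree is pruned (anti-monotonicity)."""
    for i, item in enumerate(candidates):
        node = prefix | {item}
        if support(node, transactions) >= minsup:
            out.append(node)
            dfs(node, candidates[i + 1:], transactions, minsup, out)

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
frequent = []
dfs(frozenset(), ["a", "b", "c"], transactions, 2, frequent)
```

With minimum support 2 this finds six frequent itemsets; the node {a, b, c} fails the support test, so its entire subtree is cut off.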

362 citations


Journal ArticleDOI
29 Jun 2000-Nature
TL;DR: A parsimony analysis of a large Austronesian language data set is used to test competing hypotheses, the "express-train" and "entangled-bank" models, for the colonization of the Pacific by Austronesian-speaking peoples, and finds that the topology of the language tree is highly compatible with the express-train model.
Abstract: Languages, like molecules, document evolutionary history. Darwin observed that evolutionary change in languages greatly resembled the processes of biological evolution: inheritance from a common ancestor and convergent evolution operate in both. Despite many suggestions, few attempts have been made to apply the phylogenetic methods used in biology to linguistic data. Here we report a parsimony analysis of a large language data set. We use this analysis to test competing hypotheses--the "express-train" and the "entangled-bank" models--for the colonization of the Pacific by Austronesian-speaking peoples. The parsimony analysis of a matrix of 77 Austronesian languages with 5,185 lexical items produced a single most-parsimonious tree. The express-train model was converted into an ordered geographical character and mapped onto the language tree. We found that the topology of the language tree was highly compatible with the express-train model.
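The parsimony computation underlying such an analysis can be illustrated with Fitch's small-parsimony algorithm, which counts the minimum number of state changes for one character on a fixed binary tree. The tree and character states below are toy values, not the Austronesian matrix.

```python
def fitch(tree, states):
    """Return (candidate state set, minimum changes) for one character.
    Internal nodes are (left, right) tuples; leaves are name strings."""
    if isinstance(tree, str):                  # leaf: its observed state
        return {states[tree]}, 0
    left, right = tree
    ls, lc = fitch(left, states)
    rs, rc = fitch(right, states)
    if ls & rs:                                # children agree on some state
        return ls & rs, lc + rc
    return ls | rs, lc + rc + 1                # disagreement forces one change

# ((A,B),(C,D)) with states 0,0,1,0 requires exactly one change.
tree = (("A", "B"), ("C", "D"))
root_states, changes = fitch(tree, {"A": 0, "B": 0, "C": 1, "D": 0})
```

Summing this count over all characters gives a tree's parsimony score; the most-parsimonious tree minimizes that sum over candidate topologies.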

362 citations


Journal ArticleDOI
TL;DR: A new branching rule is devised that allows columns to be generated efficiently at each node of the branch-and-bound tree and cuts are described that help to strengthen the linear programming relaxation and to mitigate the effects of problem symmetry.
Abstract: We present a column-generation model and branch-and-price-and-cut algorithm for origin-destination integer multicommodity flow problems. The origin-destination integer multicommodity flow problem is a constrained version of the linear multicommodity flow problem in which flow of a commodity (defined in this case by an origin-destination pair) may use only one path from origin to destination. Branch-and-price-and-cut is a variant of branch-and-bound, with bounds provided by solving linear programs using column-and-cut generation at nodes of the branch-and-bound tree. Because our model contains one variable for each origin-destination path, for every commodity, the linear programming relaxations at nodes of the branch-and-bound tree are solved using column generation, i.e., implicit pricing of nonbasic variables to generate new columns or to prove LP optimality. We devise a new branching rule that allows columns to be generated efficiently at each node of the branch-and-bound tree. Then, we describe cuts (cover inequalities) that can be generated at each node of the branch-and-bound tree. These cuts help to strengthen the linear programming relaxation and to mitigate the effects of problem symmetry. We detail the implementation of our combined column-and-cut generation method and present computational results for a set of test problems arising from telecommunications applications. We illustrate the value of our branching rule when used to find a heuristic solution and compare branch-and-price and branch-and-price-and-cut methods to find optimal solutions for highly capacitated problems.

353 citations


Patent
23 Feb 2000
TL;DR: In this article, a compressed tree forwarding table is generated from the uncompressed routing table by reducing the number of pointers stored at one or more levels to substantially reduce the unique next hop addresses associated with network addresses at that level.
Abstract: Network routing apparatus employs multi-level tree data structures in a centralized routing table (144, 146, 148) and in distributed forwarding tables (150, 152, 154, 158, 160). Each level of each structure is associated with a different field of a network address appearing in received packets. Pointers in each structure are used to identify either an address of a next hop network, or a next-level tree to be examined for a next-hop address. An uncompressed tree routing table uses directly addressed trees in order to simplify the storage and retrieval of pointers, and the next-tree pointers directly identify next trees. Compressed tree forwarding tables are generated from the uncompressed routing table by reducing the number of pointers stored at one or more levels to substantially the number of unique next hop addresses associated with network addresses at that level. A single mapping table maps pointer values at one level to the locations of trees at the next level in the compressed trees. Next hop address lookup logic (50, 52, 76, 78) performs lookups in accordance with the structure of the compressed trees. Also, the lookup logic stores and selectively operates on multiple forwarding tables in order to provide support for virtual router operation.
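The lookup side of such a multi-level structure can be sketched as a walk that consumes one address field per tree level, where each entry is either a next-hop identifier or a deeper table. The per-level mapping table used for compression is omitted, and the layout below is our assumption, not the patent's exact encoding.

```python
def lookup(table, address_bytes):
    """Resolve a next hop by walking one table level per 8-bit address field."""
    entry = table
    for b in address_bytes:
        if not isinstance(entry, dict):
            break                              # resolved at a shorter prefix
        entry = entry.get(b, entry.get("default"))
    return entry

# Toy table: 10.1.x.x -> hop "B", the rest of 10.x -> "A", everything else -> "C".
routes = {10: {1: "B", "default": "A"}, "default": "C"}
```

Here `lookup(routes, (10, 1, 2, 3))` resolves to hop "B", while addresses outside 10.0.0.0/8 fall through to the default hop "C".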

350 citations


Patent
01 Dec 2000
TL;DR: In this article, reverse-path forwarding is used to broadcast each update along the minimum-hop-path tree rooted at the source of the update, where each path tree has the source node as a root node, a parent node, and zero or more children nodes.
Abstract: Described is a link-state routing protocol used in a mobile ad hoc network or in an Internet for disseminating topology and link-state information throughout the network. Reverse-path forwarding is used to broadcast each update along the minimum-hop-path tree rooted at the source of the update. Each path tree has the source node as a root node, a parent node, and zero or more children nodes. Updates are received from the parent node in the path tree for the source node that originates the update. Each update includes information related to a link in the network. A determination is made whether to forward the update message to children nodes, if any, in the path tree maintained for the source node originating the update in response to information in the received update. This information itself can indicate whether the update is to be forwarded to other nodes.
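The forwarding decision described above can be sketched in a few lines: an update is rebroadcast only when it arrives from the node's parent on the tree rooted at the update's source. The tables below are toy data for a single node, and the names are ours.

```python
def forward_update(source, came_from, parent_of, children_of):
    """Return the neighbors to forward a source-originated update to.
    Drops the update unless it came over the minimum-hop tree's parent link."""
    if came_from != parent_of[source]:
        return []                              # off-tree copy: discard
    return children_of[source]

# State kept at one node: for updates originating at S, the parent on S's
# minimum-hop tree is P, and this node's children in that tree are C1 and C2.
parent_of = {"S": "P"}
children_of = {"S": ["C1", "C2"]}
```

Because every node applies the same rule against its own tree for the source, each update is delivered once along the tree instead of flooding every link.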

334 citations


Proceedings ArticleDOI
Thomas Woo1
26 Mar 2000
TL;DR: A novel approach to packet classification which combines a heuristic tree search with the use of filter buckets is proposed and studied, which is unique in the sense that it can adapt to the input packet distribution by taking into account the relative filter usage.
Abstract: The ability to classify packets according to pre-defined rules is critical to providing many sophisticated value-added services, such as security, QoS, load balancing, traffic accounting, etc. Various approaches to packet classification have been studied in the literature with accompanying theoretical bounds. Practical studies with results applying to a large number of filters (from 8K to 1 million) are rare. In this paper, we take a practical approach to the problem of packet classification. Specifically, we propose and study a novel approach to packet classification which combines a heuristic tree search with the use of filter buckets. Besides high performance and a reasonable storage requirement, our algorithm is unique in the sense that it can adapt to the input packet distribution by taking into account the relative filter usage. To evaluate our algorithms, we have developed realistic models of large-scale filter tables, and used them to drive extensive experimentation. The results demonstrate the practicality of our algorithms for up to 1 million filters.
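The two-stage scheme, a tree search that narrows a packet to a small filter bucket followed by a linear scan of that bucket, can be sketched as follows. The tree shape, fields, and filters are invented for the example, and the paper's heuristics for building the tree are not shown.

```python
def classify(packet, tree, buckets):
    """Stage 1: descend the search tree on header fields to reach a bucket.
    Stage 2: linearly scan the bucket's filters for the first match."""
    node = tree
    while isinstance(node, tuple):             # internal node: (field, threshold, lo, hi)
        field, threshold, lo, hi = node
        node = lo if packet[field] <= threshold else hi
    for rule, action in buckets[node]:         # node is now a bucket id
        if all(packet.get(f) == v for f, v in rule.items()):
            return action
    return "default"

# One split on destination port: well-known ports go to bucket 0, others to 1.
tree = ("dport", 1023, 0, 1)
buckets = {
    0: [({"dport": 80}, "web"), ({"dport": 22}, "ssh")],
    1: [({"dport": 8080}, "proxy")],
}
```

The point of the bucket bound is that the final scan touches only a handful of filters, however large the full filter table grows.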

Journal ArticleDOI
TL;DR: The technique can be used to identify factors affecting performance and their relationships, structure them hierarchically, and quantify the effect of the factors on performance.

Book ChapterDOI
27 Mar 2000
TL;DR: The Slim-tree is the first metric structure explicitly designed to reduce the degree of overlap, and new algorithms for inserting objects and splitting nodes are presented, generally without sacrificing search performance.
Abstract: In this paper we present the Slim-tree, a dynamic tree for organizing metric datasets in pages of fixed size. The Slim-tree uses the "fat-factor" which provides a simple way to quantify the degree of overlap between the nodes in a metric tree. It is well-known that the degree of overlap directly affects the query performance of index structures. There are many suggestions to reduce overlap in multidimensional index structures, but the Slim-tree is the first metric structure explicitly designed to reduce the degree of overlap. Moreover, we present new algorithms for inserting objects and splitting nodes. The new insertion algorithm leads to a tree with high storage utilization and improved query performance, whereas the new split algorithm runs considerably faster than previous ones, generally without sacrificing search performance. Results obtained from experiments with real-world data sets show that the new algorithms of the Slim-tree consistently lead to performance improvements. After performing the Slim-down algorithm, we observed improvements of up to 35% for range queries.

Book
22 Dec 2000
TL;DR: Develops a syntactic, dynamic model of natural-language interpretation in which parsing incrementally builds goal-directed partial tree structures.
Abstract: 1. Towards a Syntactic Model of Interpretation. Natural Language as a Formal Language? Underspecification in Language Processing. The Representational Theory of Mind. Pronominal Anaphora: Semantic Problems. The Problem of Multiple Ambiguity. The Problem of Uniqueness. The Problem of Indirect Reference. Quantification. Syntactic Processes of Anaphora. The Anaphora Solution: Towards a Representational Account. 2. The General Framework. A Preliminary Sketch. The Data Structures of the Parsing Model. Atomic Formulae. Tree Modalities. Basic Tree Structures. Partial Tree Structures. Requirements. Descriptions of Tree Structures. 3. The Dynamics of Tree Building. The Parsing Process: A Sketch. A Basic Example. A Left-Dislocation Example. Verb-final Languages and the Grammar-parser Problem. The Parsing Process Defined. Computational Rules. Lexical Transitions. Pragmatic Actions and Lexical Constraints. Summary. 4. Linked Tree Structures. Relative Clauses: Preliminaries. The LINK Relation. The Data Reviewed. The Analysis: A Sketch for English. Defining Linked Tree Structures. Relativisers Annotating Unfixed Nodes. Relatives: Towards a Dynamic Typology. Relativisers Projecting a Requirement. Variation in Locality. Topic Structures and Relatives. Variation in Order: Head-Final Relatives. Head-internal Relatives. The Potential for Lexical Variation. Genitive Constructions as LINK Structures. Summary. 5. Wh Questions: A General Perspective. Introduction. The Semantic Diversity of wh Questions. Scopal Properties of wh Expressions. Wh-initial vs wh-in-situ Structures. Wh-in-situ Structures. Wh-in-situ from a Dynamic Perspective. Expletive wh Structures. Partial Movement. Partial Movement as a Reflex of a Requirement. Wh Expressions and Scope Effects. 6. Crossover Phenomena. Crossover: The Problem. Crossover: The Dynamic Account. Crossover in Relatives. Crossover Phenomena in Questions. Summary. 7. Quantification Preliminaries. Introduction. Scope Effects and Indefinites. Quantification. Quantified NPs. Scope. Term Reconstructions. Applications: E-type Anaphora. 8. Reflections on Language Design. The Overall Perspective. Underspecification and the Formal Language Metaphor. English is not a Formal Language. Wellformedness and Availability of Interpretations. Universals and Language Variation. On Knowledge of Language. 9. Appendix: The Formal Framework. Introduction. Declarative Structure. Feature Decorated Tree Construction. Goal-directedness. The Structure of Goal-directed Pointed Partial Tree Models. Tree Descriptions. Procedural Structure. Actions over Goal-directed Partial Tree Models. Natural Languages. Axioms. Finite Binary Trees. Partial Trees. Requirements. Actions. Partial Order. Logical Forms. Computational Rules. Update Actions. Pragmatic Actions. General Index. Symbol Index.

Patent
Navin Chaddha1
23 Mar 2000
TL;DR: In this paper, a multimedia compression system for generating frame rate scalable data in the case of video, and, more generally, universally scalable data is presented, which is scalable across all of the relevant characteristics of the data.
Abstract: A multimedia compression system for generating frame rate scalable data in the case of video and, more generally, universally scalable data. Universally scalable data is scalable across all of the relevant characteristics of the data. In the case of video, these characteristics include frame rate, resolution, and quality. The scalable data generated by the compression system is comprised of multiple additive layers for each characteristic across which the data is scalable. In the case of video, the frame rate layers are additive temporal layers, the resolution layers are additive base and enhancement layers, and the quality layers are additive index planes of embedded codes. Various techniques can be used for generating each of these layers (e.g., Laplacian pyramid decomposition or wavelet decomposition for generating the resolution layers; tree-structured vector quantization or tree-structured scalar quantization for generating the quality layers). The compression system further provides for embedded inter-frame compression in the context of frame rate scalability, and non-redundant layered multicast network delivery of the scalable data.

Proceedings Article
10 Sep 2000
TL;DR: A novel index structure, A-tree (Approximation tree), for similarity search of high-dimensional data, which outperforms the SR-tree and the VA-File over the entire range of dimensionality up to 64 dimensions, the highest dimension in the authors' experiments.
Abstract: We propose a novel index structure, A-tree (Approximation tree), for similarity search of high-dimensional data. The basic idea of the A-tree is the introduction of Virtual Bounding Rectangles (VBRs), which contain and approximate MBRs and data objects. VBRs can be represented rather compactly, and thus affect the tree configuration both quantitatively and qualitatively. Firstly, since tree nodes can hold a large number of VBR entries, the fanout of nodes becomes large, which leads to fast search. More importantly, we have a free hand in arranging MBRs and VBRs in tree nodes. In the A-tree, nodes contain entries of an MBR and its children VBRs. Therefore, by fetching a node of an A-tree, we can obtain the exact position of a parent MBR and the approximate positions of its children. We have performed experiments using both synthetic and real data sets. For the real data sets, the A-tree outperforms the SR-tree and the VA-File over the entire range of dimensionality up to 64 dimensions, which is the highest dimension in our experiments. The A-tree achieves 77.3% (77.7%, resp.) savings in page accesses compared to the SR-tree (the VA-File, resp.) for 64-dimensional real data.

Proceedings ArticleDOI
31 Jul 2000
TL;DR: Initial results are presented showing that a tree-based model derived from a tree-annotated corpus improves on a tree model derived from an unannotated corpus, and that a tree-based stochastic model with a hand-crafted grammar outperforms both.
Abstract: Previous stochastic approaches to generation do not include a tree-based representation of syntax. While this may be adequate or even advantageous for some applications, other applications profit from using as much syntactic knowledge as is available, leaving to a stochastic model only those issues that are not determined by the grammar. We present initial results showing that a tree-based model derived from a tree-annotated corpus improves on a tree model derived from an unannotated corpus, and that a tree-based stochastic model with a hand-crafted grammar outperforms both.

Proceedings Article
29 Jun 2000
TL;DR: Investigates how the splitting criteria and pruning methods of decision tree learning algorithms are influenced by misclassification costs or changes to the class distribution, finding that splitting criteria that are relatively insensitive to costs perform as well as, or better than, cost-sensitive splitting criteria in terms of expected misclassification cost.
Abstract: This paper investigates how the splitting criteria and pruning methods of decision tree learning algorithms are influenced by misclassification costs or changes to the class distribution. Splitting criteria that are relatively insensitive to costs (class distributions) are found to perform as well as, or better than, cost-sensitive splitting criteria in terms of expected misclassification cost. Consequently there are two opposite ways of dealing with imbalance. One is to combine a cost-insensitive splitting criterion with a cost-insensitive pruning method to produce a decision tree algorithm little affected by cost or prior class distribution. The other is to grow a cost-independent tree which is then pruned in a cost-sensitive manner.
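The pruning decision the paper studies can be made concrete at a single node: replace a subtree by a leaf exactly when the leaf's expected misclassification cost is no worse than the subtree's. The class counts and cost matrix below are toy values.

```python
def leaf_cost(counts, cost):
    """Cheapest expected cost when a node predicts one fixed class.
    cost[i][j] is the cost of predicting class i when the truth is class j."""
    return min(sum(c * counts[j] for j, c in enumerate(row)) for row in cost)

def prune(node_counts, subtree_leaf_counts, cost):
    """True if collapsing the subtree into one leaf is at least as cheap."""
    as_leaf = leaf_cost(node_counts, cost)
    as_subtree = sum(leaf_cost(c, cost) for c in subtree_leaf_counts)
    return as_leaf <= as_subtree

# Two classes; missing class 1 costs 5, a false alarm costs 1.
cost = [[0, 5], [1, 0]]
node = [40, 10]                    # 40 examples of class 0, 10 of class 1
good_split = [[40, 2], [0, 8]]     # isolates most of the rare, costly class
```

The split above survives cost-sensitive pruning (keeping it reduces expected cost from 40 to 10), whereas a split that fails to separate the classes would be collapsed into a leaf.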

Patent
23 Aug 2000
TL;DR: In this paper, a method for extracting digests, reformatting, and automatic monitoring of structured online documents based on visual programming of document tree navigation and transformation is provided for any document that has internal structure that can be represented by a tree.
Abstract: A method for extracting digests, reformatting, and automatic monitoring of structured online documents based on visual programming of document tree navigation and transformation is provided for structured online documents such as HTML, XML, SGML documents or any document that has internal structure that can be represented by a tree. A digest of an online document is a collection of fragments (30) of this document which are of interest to a user. The system is based on a technique whereby a user selects a fragment of an online document shown in a source window (10) and copies this fragment to a target window (70) that contains the formatting digest. The system generates a sequence of web site navigation commands, online navigation tree commands and fragment commands that cause the assembly of the reformatted digest in the target window (20). The user can later ask the system to replay the generated commands, thus causing the automatic creation of the reformatted digest of the changed version of the online document. The digest documents can be displayed by user agents running on wireless and portable computer devices that have bandwidth and computational power limitations.

Journal ArticleDOI
TL;DR: In this article, a branch and cut algorithm was proposed to estimate all quadratic terms by successive linearizations within a branching tree using reformulation-linearization techniques (RLT).
Abstract: We present a branch and cut algorithm that yields, in finite time, a globally ε-optimal solution (with respect to feasibility and optimality) of the nonconvex quadratically constrained quadratic programming problem. The idea is to estimate all quadratic terms by successive linearizations within a branching tree using Reformulation-Linearization Techniques (RLT). To do so, four classes of linearizations (cuts), depending on one to three parameters, are detailed. For each class, we show how to select the best member with respect to a precise criterion. The cuts introduced at any node of the tree are valid in the whole tree, and not only within the subtree rooted at that node. In order to enhance the computational speed, the structure created at any node of the tree is flexible enough to be used at other nodes. Computational results are reported that include standard test problems taken from the literature. Some of these problems are solved for the first time with a proof of global optimality.
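The simplest instance of the linearization idea is the RLT treatment of a single bilinear term w = x·y over a box, which yields the four McCormick inequalities. The bounds and evaluation point below are arbitrary example values.

```python
def mccormick_lower(x, y, xl, xu, yl, yu):
    """Tighter of the two RLT under-estimators of x*y on [xl,xu] x [yl,yu],
    obtained by multiplying pairs of nonnegative bound factors."""
    return max(xl * y + yl * x - xl * yl,
               xu * y + yu * x - xu * yu)

def mccormick_upper(x, y, xl, xu, yl, yu):
    """Tighter of the two RLT over-estimators of x*y on the same box."""
    return min(xu * y + yl * x - xu * yl,
               xl * y + yu * x - xl * yu)

# On the box [0, 2] x [0, 3] at the point (1, 1.5), the true product 1.5
# is sandwiched between the envelope values.
lo = mccormick_lower(1.0, 1.5, 0, 2, 0, 3)
hi = mccormick_upper(1.0, 1.5, 0, 2, 0, 3)
```

Branching shrinks the box, so the envelopes tighten down the tree; the paper's cut classes generalize this construction with tunable parameters.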

Proceedings ArticleDOI
01 Aug 2000
TL;DR: A new user-centered approach to decision tree construction where the user and the computer can both contribute their strengths: the user provides domain knowledge and evaluates intermediate results of the algorithm, the computer automatically creates patterns satisfying user constraints and generates appropriate visualizations of these patterns.
Abstract: Decision trees have been successfully used for the task of classification. However, state-of-the-art algorithms do not incorporate the user in the tree construction process. This paper presents a new user-centered approach to this process where the user and the computer can both contribute their strengths: the user provides domain knowledge and evaluates intermediate results of the algorithm, the computer automatically creates patterns satisfying user constraints and generates appropriate visualizations of these patterns. In this cooperative approach, domain knowledge of the user can direct the search of the algorithm. Additionally, by providing adequate data and knowledge visualizations, the pattern recognition capabilities of the human can be used to increase the effectiveness of decision tree construction. Furthermore, the user gets a deeper understanding of the decision tree than just obtaining it as the result of an algorithm. To achieve the intended level of cooperation, we introduce a new visualization of data with categorical and numerical attributes. A novel technique for visualizing decision trees is presented which provides deep insights into the process of decision tree construction. As a key contribution, we integrate a state-of-the-art algorithm for decision tree construction such that many different styles of cooperation, ranging from completely manual through combined to completely automatic classification, are supported. An experimental performance evaluation demonstrates that our cooperative approach yields an efficient construction of decision trees that have a small size, but a high accuracy.


Patent
20 Sep 2000
TL;DR: A method for indexing and retrieving manufacturing-specific digital images based on image content, comprising three steps and including two data reductions: the first is performed based upon a query vector extracted from a query image, and the second-level data reduction results in a subset of feature vectors comparable to a prototype vector and, further, to the query vector.
Abstract: A method for indexing and retrieving manufacturing-specific digital images based on image content comprises three steps. First, at least one feature vector can be extracted from a manufacturing-specific digital image stored in an image database. In particular, each extracted feature vector corresponds to a particular characteristic of the manufacturing-specific digital image, for instance, a digital image modality and overall characteristic, a substrate/background characteristic, and an anomaly/defect characteristic. Notably, the extracting step includes generating a defect mask using a detection process. Second, using an unsupervised clustering method, each extracted feature vector can be indexed in a hierarchical search tree. Third, a manufacturing-specific digital image associated with a feature vector stored in the hierarchical search tree can be retrieved, wherein the manufacturing-specific digital image has image content comparably related to the image content of the query image. More particularly, the retrieving step can include two data reductions, the first performed based upon a query vector extracted from a query image. Subsequently, a user can select relevant images resulting from the first data reduction. From the selection, a prototype vector can be calculated, from which a second-level data reduction can be performed. The second-level data reduction can result in a subset of feature vectors comparable to the prototype vector, and further comparable to the query vector. An additional fourth step can include managing the hierarchical search tree by substituting a vector average for several redundant feature vectors encapsulated by nodes in the hierarchical search tree.

Patent
08 Dec 2000
TL;DR: In this article, the authors propose a qualitative modeling of the interrelations between various objects whose attributes are relevant to a score made by the predictor according to which decisions are made, wherein this relevancy is determined by the input of a domain expert on the problem at hand.
Abstract: In an automatic decision-making system, a method and a tool for the reduction of the dimension of data mining, which is automatically coupled to an empirical predictor of the system. The method includes a qualitative modeling of the interrelations between various objects whose attributes are relevant to a score made by the predictor according to which decisions are made, wherein this relevancy is determined by the input of a domain expert on the problem at hand. The model is called a Knowledge-Tree and its conclusions are represented by a graphical symbolization called the Knowledge-Tree map. Data mining, which follows the construction of the Knowledge-Tree map, regards only datasets which are associated with logical and validated branches of the knowledge tree. Because the expert input which reduces the dimension of data mining was completed prior to data mining, intervention by human reasoning is not needed after data mining and the decision-making process can proceed automatically.

Posted Content
John Hull1
TL;DR: This article provides more details on the ways Hull-White trees can be used and discusses the analytic results available when x = r, and makes the point that it is important to distinguish between the per-period rate over one time step on the tree and the instantaneous short rate that is used in some of these analytic results.
Abstract: The Hull-White interest rate tree-building procedure was first outlined in the Fall 1994 issue of the Journal of Derivatives. It is becoming widely used by practitioners. This procedure is appropriate for models where there is some function x = f(r) of the short rate r that follows a mean-reverting arithmetic process. It can be used to implement the Ho-Lee model, the Hull-White model, and the Black-Karasinski model. Also, it is a tool that can be used for developing a wide range of new models. In this article we provide more details on the ways Hull-White trees can be used. We discuss the analytic results available when x = r, and make the point that it is important to distinguish between the per-period rate over one time step on the tree and the instantaneous short rate that is used in some of these analytic results. We provide an example of the implementation of the model using market data. We show how the tree can be designed so that it provides an exact fit to the initial volatility environment (but at the same time explain why we do not recommend this approach). We also discuss how to deal with such issues as variable time steps, cash flows that occur between nodes, barrier options, and path-dependence.
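The role a rate tree plays in pricing can be shown with a stripped-down example: backward induction for a zero-coupon bond, discounting at each node's per-period rate (the quantity the article stresses is not the instantaneous short rate). This is a flat binomial toy, not the mean-reverting trinomial construction of the Hull-White procedure, and all numbers are invented.

```python
import math

def bond_price(rate_tree, dt, p_up=0.5):
    """Price a zero-coupon bond paying 1 by backward induction on a
    recombining binomial tree; rate_tree[t][j] is the per-period rate
    (continuously compounded over a step of length dt) at step t, state j."""
    values = [1.0] * (len(rate_tree[-1]) + 1)      # payoff at maturity
    for t in range(len(rate_tree) - 1, -1, -1):
        values = [math.exp(-rate_tree[t][j] * dt) *
                  (p_up * values[j + 1] + (1 - p_up) * values[j])
                  for j in range(len(rate_tree[t]))]
    return values[0]

# Two half-year steps; rates in decimal form.
tree = [[0.04], [0.03, 0.05]]
price = bond_price(tree, dt=0.5)
```

The real procedure additionally calibrates the node rates so that such model prices reproduce the initial term structure.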

Journal Article
TL;DR: In this article, a new approach for verifying cryptographic protocols, based on rewriting and on tree automata techniques, is presented; security properties such as confidentiality and authentication are checked by intersecting an over-approximation of the exchanged messages with a set of prohibited behaviors.
Abstract: On a case study, we present a new approach for verifying cryptographic protocols, based on rewriting and on tree automata techniques. Protocols are operationally described using Term Rewriting Systems and the initial set of communication requests is described by a tree automaton. Starting from these two representations, we automatically compute an over-approximation of the set of exchanged messages (also recognized by a tree automaton). Then, proving classical properties like confidentiality or authentication can be done by automatically showing that the intersection between the approximation and a set of prohibited behaviors is the empty set. Furthermore, this method enjoys a simple and powerful way to describe intruder work, the ability to consider an unbounded number of parties, an unbounded number of interleaved sessions, and a theoretical property ensuring safeness of the approximation.
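The final check the abstract describes — "the intersection between the approximation and a set of prohibited behaviors is the empty set" — is a standard intersection-emptiness test on tree automata, decidable via a product construction and a bottom-up reachability fixpoint. A minimal sketch; the two-automata "protocol" below is a toy stand-in, not the paper's case study:

```python
def reachable_states(transitions):
    """Bottom-up fixpoint: a state q is reachable if some transition
    f(q1, ..., qn) -> q has all child states already reachable."""
    reach = set()
    changed = True
    while changed:
        changed = False
        for (sym, children), q in transitions.items():
            if q not in reach and all(c in reach for c in children):
                reach.add(q)
                changed = True
    return reach

def intersection_empty(trans1, final1, trans2, final2):
    """Product construction: pair transitions with the same symbol and
    arity, then test whether any pair of final states is reachable."""
    prod = {}
    for (s1, ch1), q1 in trans1.items():
        for (s2, ch2), q2 in trans2.items():
            if s1 == s2 and len(ch1) == len(ch2):
                prod[(s1, tuple(zip(ch1, ch2)))] = (q1, q2)
    reach = reachable_states(prod)
    return not any((f1, f2) in reach for f1 in final1 for f2 in final2)

# Toy secrecy check: A1 accepts messages containing `secret`; A2 is an
# over-approximation of what the intruder can build (public atoms, pairs).
contains_secret = {('secret', ()): 'qS', ('public', ()): 'qP',
                   ('pair', ('qS', 'qS')): 'qS', ('pair', ('qS', 'qP')): 'qS',
                   ('pair', ('qP', 'qS')): 'qS', ('pair', ('qP', 'qP')): 'qP'}
intruder = {('public', ()): 'qI', ('pair', ('qI', 'qI')): 'qI'}
secrecy_holds = intersection_empty(contains_secret, {'qS'}, intruder, {'qI'})
```

An empty intersection means no intruder-derivable message (in the over-approximation) contains the secret, so secrecy holds; a non-empty one only indicates a *potential* attack, since the approximation may over-shoot.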

Journal ArticleDOI
TL;DR: Analytical models that estimate the cost (in terms of node and disk accesses) of selection and join queries using R-tree-based structures using uniform-like and nonuniform data distributions are presented.
Abstract: Selection and join queries are fundamental operations in database management systems (DBMS). Support for nontraditional data, including spatial objects, in an efficient manner is of ongoing interest in database research. Toward this goal, access methods and cost models for spatial queries are necessary tools for spatial query processing and optimization. We present analytical models that estimate the cost (in terms of node and disk accesses) of selection and join queries using R-tree-based structures. The proposed formulae need no knowledge of the underlying R-tree structure(s) and are applicable to uniform-like and nonuniform data distributions. In addition, experimental results are presented which show the accuracy of the analytical estimations when compared to actual runs on both synthetic and real data sets.
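The abstract does not reproduce its formulae, but a classical estimate in this line of R-tree analysis (for data normalized to the unit space) is that a node whose MBR has side lengths s_1, ..., s_d is visited by a uniformly placed query window with side lengths q_1, ..., q_d with probability ∏_d (s_d + q_d); summing over nodes gives the expected node accesses. A minimal sketch under those assumptions:

```python
def expected_node_accesses(node_extents, query_extent):
    """Analytic selection-query estimate for an R-tree over the unit space:
    a node MBR with sides (s_1, ..., s_d) intersects a uniformly placed
    window with sides (q_1, ..., q_d) with probability prod_d (s_d + q_d).
    The sum over all nodes is the expected number of node accesses."""
    total = 0.0
    for sides in node_extents:
        p = 1.0
        for s, q in zip(sides, query_extent):
            p *= min(1.0, s + q)   # clamp: a probability cannot exceed 1
        total += p
    return total
```

Note this needs only the node extents (or their averages), not the tree structure itself, which matches the abstract's claim that the formulae "need no knowledge of the underlying R-tree structure(s)".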

Journal ArticleDOI
01 Aug 2000
TL;DR: This work attempts to review the use of dynamic programming search strategies for large-vocabulary continuous speech recognition (LVCSR) by searching using a lexical tree, language-model look-ahead and word-graph generation.
Abstract: Initially introduced in the late 1960s and early 1970s, dynamic programming algorithms have become increasingly popular in automatic speech recognition. There are two reasons why this has occurred. First, the dynamic programming strategy can be combined with a very efficient and practical pruning strategy so that very large search spaces can be handled. Second, the dynamic programming strategy has turned out to be extremely flexible in adapting to new requirements. Examples of such requirements are the lexical tree organization of the pronunciation lexicon and the generation of a word graph instead of the single best sentence. We attempt to review the use of dynamic programming search strategies for large-vocabulary continuous speech recognition (LVCSR). The following methods are described in detail: searching using a lexical tree, language-model look-ahead and word-graph generation.
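The lexical tree organization mentioned above can be sketched as a phone-prefix trie: words whose pronunciations share a prefix share arcs, so the acoustic score of that prefix is computed once for all of them. A minimal sketch with an illustrative two-word lexicon:

```python
def build_lexical_tree(lexicon):
    """Build a lexical prefix tree over phone sequences. The word identity
    is only known at a leaf, which is why word language-model scores must
    be applied late, or anticipated via language-model look-ahead."""
    root = {}
    for word, phones in lexicon.items():
        node = root
        for ph in phones:
            node = node.setdefault(ph, {})   # share arcs for common prefixes
        node.setdefault('#words', []).append(word)
    return root

# 'speech' and 'spell' share the /s/ /p/ prefix and therefore share arcs.
tree = build_lexical_tree({'speech': ['s', 'p', 'iy', 'ch'],
                           'spell':  ['s', 'p', 'eh', 'l']})
```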

Journal ArticleDOI
TL;DR: PhyloDraw is a unified viewing tool for phylogenetic trees that visualizes various kinds of tree diagrams and can export the final tree layout to BMP (bitmap image format) and PostScript.
Abstract: Summary: PhyloDraw is a unified viewing tool for phylogenetic trees. PhyloDraw supports various kinds of multi-alignment formats (Dialign2, Clustal-W, Phylip format, NEXUS, MEGA, and pairwise distance matrix) and visualizes various kinds of tree diagrams, e.g. rectangular cladogram, slanted cladogram, phylogram, unrooted tree, and radial tree. By using several control parameters, users can easily and interactively manipulate the shape of phylogenetic trees. This program can export the final tree layout to BMP (bitmap image format) and PostScript. Availability: http://pearl.cs.pusan.ac.kr/phylodraw/
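One of the diagram types listed, the radial tree, is typically laid out by giving each subtree an angular wedge proportional to its leaf count and growing the radius with depth. A minimal sketch of that common layout (not PhyloDraw's actual algorithm; the nested-dict tree encoding is my own):

```python
import math

def count_leaves(tree):
    """Number of leaves below a node (a leaf is an empty dict)."""
    return 1 if not tree else sum(count_leaves(c) for c in tree.values())

def radial_layout(tree, name='root', depth=0.0, start=0.0,
                  span=2 * math.pi, coords=None):
    """Radial tree drawing: each subtree gets an angular wedge proportional
    to its leaf count, a node sits at the angular midpoint of its wedge,
    and the radius grows with depth from the root."""
    if coords is None:
        coords = {}
    mid = start + span / 2.0
    coords[name] = (depth * math.cos(mid), depth * math.sin(mid))
    angle = start
    total = count_leaves(tree)
    for child, subtree in tree.items():
        wedge = span * count_leaves(subtree) / total
        radial_layout(subtree, child, depth + 1.0, angle, wedge, coords)
        angle += wedge
    return coords
```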

Patent
10 Nov 2000
TL;DR: In this paper, a method and apparatus for building a searchable multi-dimensional index tree that indexes a plurality of data objects is provided, and a router that uses the multidimensional index tree of the present invention to provide packet classification functions.
Abstract: A method and apparatus are provided for building a searchable multi-dimensional index tree that indexes a plurality of data objects. In one aspect of the invention, the index tree divides dataspace into three subspaces and indexes the data objects using a single dimension. If too many data objects map to the same point in that dimension, the dimension is switched to a new dimension of the data object and the data object is indexed using the new dimension. A split node having a split value is used to keep track of the indexing. In another aspect of the invention, the index tree divides dataspace into two subspaces, and equal bits are used in the split nodes to track the content of the data objects in the subspaces. If too many data objects sharing the same key within the same dimension map to a single point, then the dimension is switched to a new dimension and the data objects are indexed using the new dimension. Also disclosed is the multi-dimensional index tree itself as well as a router that uses the multi-dimensional index tree of the present invention to provide packet classification functions.
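The core trick the patent describes — index on a single dimension, and switch to a new dimension when too many objects map to the same point — can be illustrated with a toy sketch (names, thresholds, and the bucket structure below are mine, not the patent's):

```python
def build_index(points, dim=0, max_bucket=2):
    """Toy sketch of dimension switching: index points by one coordinate;
    when all points in an oversized group collide on that coordinate
    ('map to the same point in that dimension'), switch to the next
    dimension and index on it instead."""
    if len(points) <= max_bucket or len({tuple(p) for p in points}) == 1:
        return ('leaf', points)             # small or fully duplicate group
    buckets = {}
    for p in points:
        buckets.setdefault(p[dim], []).append(p)
    if len(buckets) == 1:                   # every point collides: switch dimension
        return build_index(points, (dim + 1) % len(points[0]), max_bucket)
    return ('split', dim,
            {key: build_index(grp, dim, max_bucket) for key, grp in buckets.items()})
```

For packet classification, the "dimensions" would be header fields (source address, destination address, port, ...), and the switch keeps the tree discriminating even when many rules share a value in one field.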