
Showing papers on "Decision tree model published in 1999"


Proceedings ArticleDOI
20 Dec 1999
TL;DR: The proposed model is applied to many kinds of botanical trees, and it successfully animates tree movements caused by external forces such as wind and human interaction with the branches.
Abstract: This paper proposes a new modeling and animation method for botanical trees in interactive virtual environments. Some studies of botanical tree modeling have been based on the Growth Model, which can construct a very natural tree structure. However, this model makes it difficult to predict the final form of the tree from given parameters; that is, if an objective form of a tree is given and is to be reconstructed as a three-dimensional model, we have to adjust the parameters by trial and error until they reflect the structure. Thus, we propose a new top-down approach in which a tree's form is defined by volume data built from a set of captured real images, and the branch structure is realized by simple branching rules. The tree model is described as a set of connected branch segments, together with leaf models consisting of leaves and twigs attached to the branch segments. To animate the botanical trees, a dynamics simulation is performed on the branch segments in two phases. In the first phase, each segment is treated as a rigid stick with a fixed end on one side, and the rotational movement caused by external forces is calculated for each segment independently; the forces propagated from the tip of a branch to its root are computed from the restoration force and the thickness of the branch. In the second phase, the rotational movements of the segments are applied in order from the base segment, and the fixed end of each segment is moved to the free end of the segment it connects to, so as to maintain the relative angles between segments. The proposed model is applied to many kinds of botanical trees, and it can successfully animate tree movements caused by external forces such as wind and human interaction with the branches.
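The abstract is prose only; the following minimal Python sketch illustrates how the two-phase segment simulation might be organized. It is our own 2D simplification: Segment, STIFFNESS, DT, and the damping factor are illustrative assumptions, not names or values from the paper (which works in 3D).

```python
import math

STIFFNESS = 4.0   # assumed restoration constant, scaled by thickness
DT = 0.033        # assumed time step (seconds)

class Segment:
    def __init__(self, length, thickness, rest_angle, parent=None):
        self.length = length
        self.thickness = thickness
        self.rest_angle = rest_angle   # angle relative to parent at rest
        self.angle = rest_angle        # current relative angle
        self.velocity = 0.0            # angular velocity
        self.parent = parent

    def step(self, wind_torque):
        # Phase 1: treat the segment as a rigid stick with one fixed end
        # and integrate its rotation independently of the other segments.
        restore = -STIFFNESS * self.thickness * (self.angle - self.rest_angle)
        self.velocity += (wind_torque + restore) * DT
        self.velocity *= 0.98          # assumed damping
        self.angle += self.velocity * DT

def place(segments):
    # Phase 2: walk from the base segment outward, attaching each
    # segment's fixed end to its parent's free end so the relative
    # angles between segments are maintained.
    positions = {}
    for seg in segments:               # assumed ordered base -> tips
        if seg.parent is None:
            base, base_angle = (0.0, 0.0), 0.0
        else:
            base, base_angle = positions[seg.parent]
        world = base_angle + seg.angle
        tip = (base[0] + seg.length * math.cos(world),
               base[1] + seg.length * math.sin(world))
        positions[seg] = (tip, world)
    return positions

# Usage: a trunk with one child branch, pushed by a constant wind torque.
trunk = Segment(1.0, 1.0, math.pi / 2)
branch = Segment(0.6, 0.4, math.pi / 4, parent=trunk)
for _ in range(100):
    trunk.step(0.2); branch.step(0.2)
print(place([trunk, branch])[branch][0])   # tip position of the branch
```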

71 citations


Book
01 Mar 1999
TL;DR: Lower bound results for randomized OBDDs and randomized syntactic read-k-times branching programs form the main part of this work; previously, little was known about such randomized variants of branching programs.
Abstract: This work is set in the area of complexity theory for restricted variants of branching programs. Today, branching programs can be considered one of the standard nonuniform models of computation. One reason for their popularity is that they allow computations to be described in an intuitively straightforward way and promise to be easier to analyze than the traditional models. In complexity theory, we are mainly interested in upper and lower bounds on the size of branching programs. Although proving superpolynomial lower bounds on the size of general branching programs remains a challenging open problem, there has been considerable success in the study of lower bound techniques for various restricted variants, most notably read-once branching programs and OBDDs (ordered binary decision diagrams). Surprisingly, OBDDs have also turned out to be extremely useful in practical applications as a data structure for Boolean functions. So far, research has concentrated on deterministic and, to some extent, also nondeterministic types of branching programs. Given the practical and theoretical importance of the probabilistic mode of computation, it seems natural to ask whether we can prove any interesting results for probabilistic variants of branching programs, defined in analogy to the well-known probabilistic Turing machines. At the time this work began, very little was known about such randomized variants of branching programs. Meanwhile, a considerable part of the "complexity landscape" for randomized variants of branching programs with limited read access to the input variables has been charted. Here we describe how some pieces of this knowledge have been obtained. Lower bound results for randomized OBDDs and randomized syntactic read-k-times branching programs form the main part of this work.
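Since OBDDs recur throughout this listing, a toy illustration may help: an OBDD tests the variables of a Boolean function in one fixed order, branching low or high at each node. The sketch and its names (Node, evaluate) are our own, a minimal example rather than a production BDD package, which would add node sharing and reduction.

```python
class Node:
    def __init__(self, var, low, high):
        self.var, self.low, self.high = var, low, high

def evaluate(node, assignment):
    # Follow low/high edges according to the assignment; leaves are bools.
    while isinstance(node, Node):
        node = node.high if assignment[node.var] else node.low
    return node

# OBDD for x0 XOR x1 under the fixed variable order x0 < x1:
x1_pos = Node(1, False, True)   # reached when x0 = 0
x1_neg = Node(1, True, False)   # reached when x0 = 1
root = Node(0, x1_pos, x1_neg)
assert evaluate(root, {0: 1, 1: 0}) is True
```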

21 citations


Proceedings ArticleDOI
06 Jul 1999
TL;DR: Experimental results indicate that not only does an explicit speciation algorithm reduce the complexity of the niching method used, but it also reduces the required number of evaluations of the fitness function.
Abstract: The most efficient speciation methods suffer from high complexity, ranging from O(n·c(n)) to O(n^2), where n is the population size and c(n) is a factor that can be proportional to n. A speciation method based on a classification tree is presented, having a complexity of O(n log n). The population is treated as a set of attribute vectors used to train the classification tree. The splitting method for the subsets of individuals associated with the nodes is a vector quantization algorithm. The stopping criterion of the tree induction is based on a heuristic able to recognize whether or not the set of individuals associated with a node of the tree is a subpopulation. Experimental results for two easy and two hard multimodal optimization problems are presented; these problems are solved with high reliability. Moreover, the experiments indicate that not only does an explicit speciation algorithm reduce the complexity of the niching method used, but it also reduces the required number of evaluations of the fitness function.
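A hypothetical sketch of the idea described above: recursively split the population with a two-centroid vector quantizer and stop when a node plausibly holds a single subpopulation. The spread-based stopping rule, the two_means splitter, and all constants are our stand-ins, not the paper's actual heuristic.

```python
import random

def dist(p, q):
    return sum((x - y) ** 2 for x, y in zip(p, q))

def centroid(points):
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

def spread(points):
    c = centroid(points)
    return max(dist(p, c) for p in points)

def two_means(points, iters=10):
    # Tiny 2-centroid vector quantization step used to split a node.
    a, b = random.sample(points, 2)
    for _ in range(iters):
        left = [p for p in points if dist(p, a) <= dist(p, b)]
        right = [p for p in points if dist(p, a) > dist(p, b)]
        if not left or not right:
            return points, []
        a, b = centroid(left), centroid(right)
    return left, right

def speciate(points, spread_threshold=0.05, min_size=4):
    # Recursion depth is O(log n) for balanced splits, which is what
    # gives the O(n log n) total work cited above.
    if len(points) <= min_size or spread(points) < spread_threshold:
        return [points]                      # one species (leaf node)
    left, right = two_means(points)
    if not right:
        return [points]
    return (speciate(left, spread_threshold, min_size) +
            speciate(right, spread_threshold, min_size))

# Usage: two well-separated clusters are recognized as two species.
pop = [(random.gauss(0, 0.05), random.gauss(0, 0.05)) for _ in range(50)]
pop += [(random.gauss(3, 0.05), random.gauss(3, 0.05)) for _ in range(50)]
print(len(speciate(pop)))   # -> 2 (typically)
```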

20 citations


Journal ArticleDOI
01 Mar 1999
TL;DR: Two methods for approximating a partial dissimilarity matrix by a tree distance are introduced and compared by simulating noisy partial tree dissimilarities.
Abstract: In tree clustering, we try to approximate a given dissimilarity matrix by a tree distance. In some cases, especially when comparing biological sequences, some dissimilarity values cannot be evaluated, leaving a partial dissimilarity with undefined values. In that case one can develop a sequential method to reconstruct a valued tree, or estimate the missing values using a tree model. This paper introduces two methods of this kind and compares them by simulating noisy partial tree dissimilarities.
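As a crude illustration of estimating a missing value under a tree model (not one of the paper's two methods), recall that an ultrametric, a special tree distance, satisfies d(i, j) <= max(d(i, k), d(k, j)) for every intermediary k; taking the minimum of that bound over all k gives a natural imputation. The function below and its toy data are our assumptions.

```python
def impute(d, i, j, n):
    # d: dict {(a, b): value} with keys normalized a < b; a missing key
    # marks an undefined dissimilarity. Returns an estimate of d(i, j).
    def get(a, b):
        return d.get((min(a, b), max(a, b)))
    candidates = []
    for k in range(n):
        if k in (i, j):
            continue
        dik, dkj = get(i, k), get(k, j)
        if dik is not None and dkj is not None:
            candidates.append(max(dik, dkj))   # ultrametric upper bound
    return min(candidates) if candidates else None

# Example: 4 items, d(0, 3) undefined.
d = {(0, 1): 2.0, (0, 2): 6.0, (1, 2): 6.0, (1, 3): 6.0, (2, 3): 4.0}
print(impute(d, 0, 3, 4))   # -> 6.0
```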

18 citations


Proceedings Article
01 Jan 1999
TL;DR: This paper solves the widely publicised open problem of deciding whether the vertices of a graph can be partitioned into four non-empty sets A, B, C, and D with the prescribed edge constraints, and shows that RET-C4 is NP-complete, while for any graph H other than C4 with at most four vertices, RET-H is polynomial-time solvable.
Abstract: In this paper, we solve a widely publicised open problem posed by Peter Winkler in 1988. The problem is to decide whether or not it is possible to partition the vertices of a graph into four non-empty sets A, B, C, and D, such that there is no edge between the sets A and C, or between the sets B and D, and there is at least one edge between any other pair of sets. Winkler asked whether this problem is NP-complete. He was motivated by a general problem that we explain after introducing the following definitions. In the following, let G and H be graphs. A homomorphism f : G → H, of G to H, is a mapping f of the vertices of G to the vertices of H such that if g and g' are adjacent vertices of G then f(g) and f(g') are either adjacent vertices of H or the same vertex of H. Note that we have deviated slightly from the usual definition of a homomorphism by allowing two adjacent vertices of G to map to the same vertex of H. A compaction c : G → H, of G to H, is a homomorphism of G to H such that for every vertex x of H there exists a vertex v of G with c(v) = x, and for every edge hh' of H there exists an edge gg' of G with c(g) = h and c(g') = h'. Notice that the first part of the definition of a compaction (the requirement for every vertex x of H) follows from the second part unless H has isolated vertices. If there exists a compaction of G to H then G is said to compact to H. Now suppose that H is an induced subgraph of G. A retraction r : G → H, of G to H, is a homomorphism of G to H such that r(h) = h for every vertex h of H. If there exists a retraction of G to H then G is said to retract to H, and H is said to be a retract of G. We denote a k-cycle by Ck. The problem of deciding the existence of a compaction to a fixed graph H, called the compaction problem for H and denoted COMP-H, is the following: Instance: A graph G. Question: Does G compact to H? Note that Winkler's problem is the problem COMP-C4. When both G and H are input graphs (i.e., H is not fixed), the problem of deciding whether or not G compacts to H has been studied by Almira Karabeg and Dino Karabeg. The problem of deciding the existence of a retraction to a fixed graph H, called the retraction problem for H and denoted RET-H, asks whether or not an input graph G, containing H as an induced subgraph, retracts to H. Retraction problems have been of continuing interest in graph theory and have a considerable literature. It is not difficult to show that for every fixed graph H, if RET-H is solvable in polynomial time then COMP-H is also solvable in polynomial time. Is the converse true? This was the general problem that motivated Winkler. It turns out that RET-C4 is NP-complete, but for any graph H other than C4 with at most four vertices, RET-H is polynomial-time solvable. In other words, the unique smallest graph H for which RET-H is NP-complete is C4. This observation was made by Tomas Feder and Peter Winkler, and led Winkler to ask specifically the following question in 1988, which has been a popular open problem: Is COMP-C4 NP-complete? We show that COMP-C4 is NP-complete. To show this, we give a transformation from RET-C4 to COMP-C4, using the technique explained below. Let a graph G containing H as an induced subgraph be an instance of RET-H. When we give a transformation from RET-H to COMP-H, we do the following.
We construct, in time polynomial in the size of G, a graph G' (containing G as an induced subgraph) such that the following statements (i), (ii), and (iii) are equivalent: (i) G retracts to H. (ii) G' retracts to H. (iii) G' compacts to H. Thus if RET-H is NP-complete, this shows that COMP-H is also NP-complete. It is this technique that we have used throughout when giving a transformation from RET-H to COMP-H, for any graph H. We prove the equivalence of the above statements by showing that (i) is equivalent to (ii), and (ii) is equivalent to (iii). Feder extended the proof of NP-completeness of RET-C4 to apply to any cycle Ck, k ≥ 4 (the same result was proved independently by Gary MacGillivray). Correspondingly, we show that COMP-Ck is also NP-complete for all k ≥ 4. We show this by giving a
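The definitions above translate directly into a brute-force checker, useful only for tiny graphs (it enumerates all |V(H)|^|V(G)| mappings) but handy for making the compaction definition concrete. The encoding of graphs as (vertices, edge set) pairs is our own.

```python
from itertools import product

def is_homomorphism(f, G, H):
    gv, ge = G
    hv, he = H
    # Adjacent vertices of G map to adjacent or equal vertices of H
    # (the relaxed definition used in the paper).
    return all(f[u] == f[v] or frozenset({f[u], f[v]}) in he
               for e in ge for u, v in [tuple(e)])

def compacts(G, H):
    gv, ge = G
    hv, he = H
    for values in product(hv, repeat=len(gv)):
        f = dict(zip(gv, values))
        if not is_homomorphism(f, G, H):
            continue
        # Compaction: f is onto the vertices and onto the edges of H.
        onto_vertices = set(f.values()) == set(hv)
        onto_edges = {frozenset({f[u], f[v]})
                      for e in ge for u, v in [tuple(e)]
                      if f[u] != f[v]} >= he
        if onto_vertices and onto_edges:
            return True
    return False

# COMP-C4 on a tiny instance: C4 itself trivially compacts to C4.
C4 = ({0, 1, 2, 3},
      {frozenset({0, 1}), frozenset({1, 2}),
       frozenset({2, 3}), frozenset({3, 0})})
print(compacts(C4, C4))   # -> True
```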

15 citations


01 Jan 1999

10 citations


Book ChapterDOI
01 Jan 1999
TL;DR: This chapter investigates empirically the performance of some of the most common stochastic complexity approximations in an attempt to understand their small-sample behaviour in the incomplete data framework, and allows for the first time a comparison between the true stochastic complexity and its approximations on real-world data.
Abstract: The stochastic complexity of a data set is defined as the shortest possible code length for the data obtainable by using some fixed set of models. This measure is of great theoretical and practical importance as a tool for tasks such as determining model complexity or performing predictive inference. Unfortunately, when the data has missing information, computing the stochastic complexity requires marginalizing (integrating) over the missing data, which, even in the discrete data case, amounts to computing a sum with an exponential number of terms. Therefore, in most cases the stochastic complexity measure has to be approximated. In this chapter, we investigate empirically the performance of some of the most common stochastic complexity approximations in an attempt to understand their small-sample behaviour in the incomplete data framework. In earlier empirical evaluations, the problem of not knowing the actual stochastic complexity for incomplete data was circumvented either by using synthetic data or by comparing the behaviour of the stochastic complexity approximation methods to cross-validated prediction error, approaches which both suffer from validity problems. Our comparison is based on the novel idea of using demonstrably representative small samples from real data sets and then calculating the exponential sums by 'brute force'. This allows for the first time a comparison between the true stochastic complexity and its approximations on real-world data.
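A minimal sketch of the 'brute force' exponential sum mentioned above: marginalizing a simple i.i.d. Bernoulli likelihood over every completion of the missing entries. The model and the parameter THETA are illustrative assumptions; the chapter's model families are richer.

```python
from itertools import product
from math import log

THETA = 0.7          # assumed Bernoulli parameter

def likelihood(sample):
    p = 1.0
    for x in sample:
        p *= THETA if x == 1 else (1.0 - THETA)
    return p

def marginal_likelihood(data):
    # data: list with 0/1 observed entries and None for missing ones.
    missing = [i for i, x in enumerate(data) if x is None]
    total = 0.0
    for fill in product([0, 1], repeat=len(missing)):   # 2^m terms
        sample = list(data)
        for i, v in zip(missing, fill):
            sample[i] = v
        total += likelihood(sample)                     # sum out missing
    return total

data = [1, None, 1, 0, None]
print(-log(marginal_likelihood(data)))   # code length in nats
```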

8 citations


Journal ArticleDOI
TL;DR: In this article, Nisan et al. showed that a one-round Merlin-Arthur protocol is as powerful as a general interactive proof system and can simulate a one-round Arthur-Merlin protocol.

7 citations


Proceedings ArticleDOI
01 Jan 1999
TL;DR: The GA (genetic algorithm) and K-means algorithm are used to select a subset of suitable features at each node of a binary decision tree, in order to recognize the various defect patterns of a cold mill strip.
Abstract: This paper suggests a method to recognize the various defect patterns of a cold mill strip using a binary decision tree. In classifying complex patterns with high similarity, like the defect patterns of a cold mill strip, the selection of an optimal feature set and an appropriate recognizer is important for achieving a high recognition rate. In this paper, the GA (genetic algorithm) and K-means algorithm were used to select a subset of suitable features at each node of the binary decision tree. The feature subset with maximum fitness is chosen, and the patterns are classified into two classes using a linear decision function. This process is repeated at each node until all the patterns are classified into individual classes. In this way, the classifier using the binary decision tree is constructed automatically. After constructing the binary decision tree, the final recognizer is completed by having a neural network learn sets of standard patterns at each node. The classifier was applied to the recognition of defect patterns of a cold mill strip, and experimental results are given to demonstrate the usefulness of the proposed scheme.
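A schematic sketch of the node-construction loop described above. Only the control flow (pick a feature subset, split the remaining classes in two, recurse until leaves hold single classes) follows the paper; the GA is replaced here by random subset search, and fitness and linear_split are toy stand-ins we invented.

```python
import random

def fitness(subset, classes):
    # Toy stand-in for the paper's fitness of a feature subset.
    return random.random()

def linear_split(subset, classes):
    # Toy stand-in for the linear decision function: halve the classes.
    mid = len(classes) // 2
    return classes[:mid], classes[mid:]

def build_node(classes, features):
    # classes: labels still to be separated at this node of the tree.
    if len(classes) == 1:
        return classes[0]                       # leaf: a single class
    best_subset, best_fitness = None, -1.0
    for _ in range(50):                         # GA stand-in: random search
        subset = random.sample(features, k=max(1, len(features) // 2))
        f = fitness(subset, classes)
        if f > best_fitness:
            best_subset, best_fitness = subset, f
    left, right = linear_split(best_subset, classes)
    return {"features": best_subset,
            "left": build_node(left, features),
            "right": build_node(right, features)}

# Usage with made-up defect classes and 8 candidate features:
tree = build_node(["scratch", "dent", "scale", "oil stain"], list(range(8)))
print(tree["features"])
```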

6 citations


Journal ArticleDOI
TL;DR: A new model of complexity, called object complexity, is defined for measuring the performance of hidden-surface removal algorithms, and an algorithm is presented that solves, in the object complexity model, the same problem that Bern [3] addressed in the scene complexity model.
Abstract: We define a new model of complexity, called object complexity, for measuring the performance of hidden-surface removal algorithms. This model is more appropriate for predicting the performance of these algorithms on current graphics rendering systems than the standard measure of scene complexity used in computational geometry. We also consider the problem of determining the set of visible windows in scenes consisting of n axis-parallel windows in ℝ³. We present an algorithm that runs in optimal Θ(n log n) time. The algorithm solves, in the object complexity model, the same problem that Bern [3] addressed in the scene complexity model.
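For contrast with the optimal Θ(n log n) algorithm, here is a brute-force reference for the visible-window problem, using coordinate compression so coverage tests are exact. The window encoding (x1, x2, y1, y2, z) and the convention that smaller z is nearer the viewer are our assumptions.

```python
def visible_windows(windows):
    # A window is visible iff some part of its rectangle is not covered
    # by the union of windows in front of it (smaller z).
    visible = []
    for i, (x1, x2, y1, y2, z) in enumerate(windows):
        front = [w for w in windows if w[4] < z]
        xs = sorted({x1, x2} | {v for w in front for v in (w[0], w[1])
                                if x1 < v < x2})
        ys = sorted({y1, y2} | {v for w in front for v in (w[2], w[3])
                                if y1 < v < y2})
        covered = True
        for cx1, cx2 in zip(xs, xs[1:]):        # grid cells inside window i
            for cy1, cy2 in zip(ys, ys[1:]):
                if not any(w[0] <= cx1 and cx2 <= w[1] and
                           w[2] <= cy1 and cy2 <= w[3] for w in front):
                    covered = False
        if not covered:
            visible.append(i)
    return visible

# Two overlapping windows: the nearer one hides part of the farther one,
# but both remain (partially) visible.
print(visible_windows([(0, 4, 0, 4, 1), (2, 6, 2, 6, 2)]))   # -> [0, 1]
```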

6 citations


01 Jan 1999
TL;DR: The following is a corrigendum to the black hardbound edition of Computational Complexity of Boolean Formulas with Query Symbols, i.e., the author's doctoral dissertation submitted to the University of Tsukuba in January 1999.
Abstract: The following is a corrigendum to the black hardbound edition (i.e., the author's doctoral dissertation submitted to the University of Tsukuba in January 1999) of Computational Complexity of Boolean Formulas with Query Symbols. The asterisked items are already corrected in phd2.dvi of 99/01/15 11:52.
∗ Page following the title page, line 16. Wrong: Notre Dame University. Right: University of Notre Dame.
∗ Page 12, last line. Wrong: approach. Right: approaches.

Book ChapterDOI
24 May 1999

Journal ArticleDOI
TL;DR: In this article, a decision tree model for addressing cross-cultural ethical conflicts is described, which is intended to provide an ethically sound yet pragmatic tool for decision makers facing such situations.
Abstract: The authors have previously developed and described a decision tree model for addressing cross-cultural ethical conflicts. The model is intended to provide an ethically sound yet pragmatic tool for decision makers facing such situations. This paper presents the results of an empirical test of the model in an educational setting with a sample of business students. Students trained to use the model demonstrated significantly more flexibility and appropriateness in their decisions on case scenarios than those who were not trained. The implications for use of the model in educational settings and recommendations for future research are discussed.

Book ChapterDOI
01 Jan 1999
TL;DR: The communication cost is considered, the complexity of a web query is partitioned into the accessing complexity and the constructing complexity, and the classical complexity classes of queries are redefined.
Abstract: A new complexity model for web queries is presented in this paper. The novelties of this model are: (1) the communication cost is considered; (2) the complexity of a web query is partitioned into the accessing complexity and the constructing complexity; (3) the classical complexity classes of queries are redefined. The upper bound of this model is the class TAC-Computable; a language that exactly captures this class is presented.

Patent
31 Aug 1999
TL;DR: A tree model is generated from images of an object tree photographed from its periphery, leaves are attached to the respective branch segments, and the model is shown on a display.
Abstract: PROBLEM TO BE SOLVED: To generate a three-dimensional model approximating the shape of an existing tree. SOLUTION: An object tree 1 is photographed from its periphery by plural cameras 2; the background is eliminated from the obtained images, color conversion is performed, and the images are input to a computer 3 for model generation. Volume data are prepared, branch segments are generated, and a tree model is constructed. Leaves are attached to the respective branch segments, and the tree model is shown on a display 4.
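A hypothetical sketch of the volume-data step: carve a voxel grid by keeping only voxels that fall inside the tree's silhouette in every camera view. Real cameras need calibrated projection matrices; the toy orthographic silhouettes below are our simplification.

```python
def carve(resolution, cameras):
    # cameras: functions mapping a 3D point to True if the point
    # projects inside the tree's silhouette in that view.
    voxels = set()
    step = 1.0 / resolution
    for i in range(resolution):
        for j in range(resolution):
            for k in range(resolution):
                p = ((i + 0.5) * step, (j + 0.5) * step, (k + 0.5) * step)
                if all(cam(p) for cam in cameras):
                    voxels.add((i, j, k))   # inside every silhouette
    return voxels

# Toy silhouettes: a sphere seen from two orthogonal directions.
def front_view(p):  # ignores depth along y
    return (p[0] - 0.5) ** 2 + (p[2] - 0.5) ** 2 <= 0.25 ** 2

def side_view(p):   # ignores depth along x
    return (p[1] - 0.5) ** 2 + (p[2] - 0.5) ** 2 <= 0.25 ** 2

print(len(carve(16, [front_view, side_view])))
```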

Journal ArticleDOI
TL;DR: An average-case measure is defined for the time complexity of circuits; using this notion, tight upper and lower bounds are obtained for the average-case complexity of several basic Boolean functions.
Abstract: In contrast to machine models like Turing machines or random access machines, circuits are a rigid computational model: the internal information flow of a computation is fixed in advance, independent of the actual input. Therefore, in complexity theory only worst-case complexity measures have been used to analyse this model. In [JRS94] we defined an average-case measure for the time complexity of circuits. Using this notion, tight upper and lower bounds can be obtained for the average-case complexity of several basic Boolean functions.
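To convey the flavour of an average-case time measure for circuits (our own construction, not the exact definition from [JRS94]): in a chain of AND gates g_i = AND(x_i, g_{i+1}), a gate may emit 0 as soon as one input is known to be 0, so most inputs stabilize the output long before the worst-case depth.

```python
from itertools import product

def chain_time(bits):
    # Time at which the top gate of the AND-chain stabilizes.
    if len(bits) == 1:
        return 0                      # a bare input, known at time 0
    if bits[0] == 0:
        return 1                      # a 0 forces the top AND immediately
    return chain_time(bits[1:]) + 1   # must wait for the inner chain

n = 12
avg = sum(chain_time(bits) for bits in product([0, 1], repeat=n)) / 2 ** n
print(avg, "vs worst case", n - 1)    # average is about 2
```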

Proceedings ArticleDOI
21 Sep 1999
TL;DR: A new tree-based search approach is proposed, which permits the testing of "multiple hypotheses" by utilizing the orthogonality property of the PN spreading sequence, and which, by defining a delayed decision mechanism and a depth-dependent threshold, trades off computational time against robustness.
Abstract: Before a communication link is established, spread spectrum systems must synchronize by acquiring the spreading sequence. The choice of code-acquisition algorithms is dictated by the trade-off between complexity and performance. In this paper, a new tree-based search approach is proposed, which permits the testing of "multiple hypotheses" by utilizing the orthogonality property of the PN spreading sequence. By defining a delayed decision mechanism and a depth-dependent threshold, the proposed approach makes it possible to trade off the computational time required for the maximum likelihood (ML) test against the robustness lost in pure tree search.
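A schematic sketch of the tree-search idea: candidate code phases are the leaves of a binary tree, each node is tested against a depth-dependent threshold, and a branch is rejected only after repeated sub-threshold scores (a delayed decision). Every numerical choice below (threshold schedule, patience, the toy score function) is an illustrative assumption, not taken from the paper.

```python
def tree_search(score, n_leaves, base_threshold=0.5, patience=1):
    # score(lo, hi): test statistic for the hypothesis that the true
    # phase lies in leaves [lo, hi); assumed supplied by the receiver.
    stack = [(0, n_leaves, 0, 0)]            # (lo, hi, depth, misses)
    while stack:
        lo, hi, depth, misses = stack.pop()
        threshold = base_threshold * (1 + 0.1 * depth)   # depth dependent
        if score(lo, hi) < threshold:
            if misses < patience:            # delay the reject decision
                stack.append((lo, hi, depth, misses + 1))
            continue
        if hi - lo == 1:
            return lo                        # acquired phase candidate
        mid = (lo + hi) // 2
        stack.append((mid, hi, depth + 1, 0))
        stack.append((lo, mid, depth + 1, 0))
    return None                              # acquisition failed

# Toy score: energy concentrated near the true phase 11 of 16.
true_phase = 11
def toy_score(lo, hi):
    return 1.0 if lo <= true_phase < hi else 0.1

print(tree_search(toy_score, 16))   # -> 11
```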