scispace - formally typeset
Topic: Decision tree model

About: Decision tree model is a research topic. Over the lifetime, 2256 publications have been published within this topic receiving 38142 citations.


Papers
Book Chapter
01 Jan 1999
TL;DR: A new complexity model for web queries that accounts for communication cost, partitions a query's complexity into accessing complexity and constructing complexity, and redefines the classical complexity classes of queries.
Abstract: A new complexity model for web queries is presented in this paper. The novelties of this model are: (1) the communication cost is considered; (2) the complexity of a web query is partitioned into accessing complexity and constructing complexity; (3) the classical complexity classes of queries are redefined. The upper bound of this model is the class TAC-Computable, and a language that captures exactly this class is presented.

3 citations

Proceedings Article
01 Oct 2007
TL;DR: The experimental results show that the GA-LDA model outperforms other classification methods, including a probabilistic neural network and a decision tree model.
Abstract: This study applies genetic algorithms to select financial statement variables which are used to predict the direction of one-year-ahead earnings change. To evaluate the forecasting ability of GA-based linear discriminant analysis (GA-LDA), this study compares it with a probabilistic neural network and a decision tree model. The experimental results show that the GA-LDA model outperforms the other classification methods.

3 citations
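The GA feature-selection step described in the abstract above can be sketched as a genetic algorithm over bitstrings, where each bit marks whether a financial statement variable is selected. This is a minimal sketch under stated assumptions: the variable indices and the fitness function are hypothetical stand-ins (a real implementation would score each subset by the cross-validated accuracy of an LDA classifier fitted on the selected variables).

```python
import random

random.seed(0)

N_VARS = 10
USEFUL = {0, 3, 5}  # hypothetical: indices of the truly predictive variables

def fitness(chrom):
    # Stand-in for LDA cross-validated accuracy: reward selecting the
    # predictive variables, lightly penalise extra ones (parsimony pressure).
    selected = {i for i, bit in enumerate(chrom) if bit}
    return len(selected & USEFUL) - 0.1 * len(selected - USEFUL)

def evolve(pop_size=20, generations=40, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(N_VARS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_VARS)   # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(N_VARS):             # bit-flip mutation
                if random.random() < p_mut:
                    child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("selected variables:", [i for i, bit in enumerate(best) if bit])
```

Keeping the fittest half of the population each generation gives the elitism that makes the search converge on compact variable subsets.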

Journal Article
XU Hua-li
TL;DR: The results show that the decision tree constructed by this algorithm is simple, has a certain degree of noise resistance, and can meet the decision-accuracy requirements of different users.
Abstract: To address the complicated structure and noise sensitivity of decision trees built by classical decision tree algorithms, a new decision tree construction algorithm based on a multiscale rough set model was proposed. The proposed algorithm introduces the concepts of scale variable and scale function; the approximate classification accuracy at different scales is used to select test attributes, and a hold-down factor is put forward to prune the decision tree and remove noisy rules effectively. The results show that the decision tree constructed by this algorithm is simple, has a certain degree of noise resistance, and can meet the decision-accuracy requirements of different users.

3 citations
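The "approximate classification accuracy" criterion named in the abstract above comes from rough set theory: objects are partitioned into equivalence classes by the chosen condition attributes, a decision class X is approximated from below (classes wholly inside X) and from above (classes touching X), and the accuracy is |lower| / |upper|. A minimal sketch, with a hypothetical decision table (attribute names and objects are illustrative, not from the paper):

```python
from collections import defaultdict

def equivalence_classes(objects, attrs):
    """Group objects by their values on the given condition attributes."""
    groups = defaultdict(set)
    for name, row in objects.items():
        groups[tuple(row[a] for a in attrs)].add(name)
    return list(groups.values())

def approximation_accuracy(objects, attrs, target):
    """Rough-set accuracy |lower(X)| / |upper(X)| for decision class X."""
    lower, upper = set(), set()
    for cls in equivalence_classes(objects, attrs):
        if cls <= target:
            lower |= cls            # class lies wholly inside X
        if cls & target:
            upper |= cls            # class touches X
    return len(lower) / len(upper)

# Hypothetical decision table: outlook and wind are condition attributes.
table = {
    "o1": {"outlook": "sun",  "wind": "low"},
    "o2": {"outlook": "sun",  "wind": "low"},
    "o3": {"outlook": "rain", "wind": "high"},
    "o4": {"outlook": "rain", "wind": "low"},
}
play = {"o1", "o2", "o4"}  # objects whose decision is "play = yes"

print(approximation_accuracy(table, ["outlook"], play))           # 0.5
print(approximation_accuracy(table, ["outlook", "wind"], play))   # 1.0
```

An attribute set whose accuracy is closer to 1 discriminates the decision class more sharply, which is the sense in which the paper uses it to select test attributes at each scale.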

Journal ArticleDOI
Raymond L. Major
01 Feb 2000
TL;DR: This work introduces a practical algorithm that forms a finite number of features using a decision tree in a polynomial amount of time, and shows empirically that many of these features subsequently appear in a tree and that they help produce simpler trees when concepts are learned from certain problem domains.
Abstract: Using decision trees as a concept description language, we examine the time complexity for learning Boolean functions with polynomial-sized disjunctive normal form expressions when feature construction is performed on an initial decision tree containing only primitive attributes. A shortcoming of several feature-construction algorithms found in the literature is that it is difficult to develop time complexity results for them. We illustrate a way to determine a limit on the number of features to use for building more concise trees within a standard amount of time. We introduce a practical algorithm that forms a finite number of features using a decision tree in a polynomial amount of time. We show empirically that our procedure forms many features that subsequently appear in a tree and the new features aid in producing simpler trees when concepts are being learned from certain problem domains. Expert systems developers can use a method such as this to create a knowledge base of information that contains specific knowledge in the form of If-Then rules.

3 citations
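The closing point of the abstract above, that a learned tree can be turned into a knowledge base of If-Then rules, can be illustrated by walking every root-to-leaf path: each path's attribute tests become the rule body and the leaf label becomes the conclusion. The toy weather tree below is hypothetical, not taken from the paper.

```python
# A decision tree as nested dicts: an internal node maps one attribute to
# {value: subtree}; a leaf is a class label (hypothetical toy tree).
tree = {"outlook": {
    "sunny":    {"humidity": {"high": "no", "normal": "yes"}},
    "overcast": "yes",
    "rain":     {"wind": {"strong": "no", "weak": "yes"}},
}}

def extract_rules(node, conditions=()):
    """Turn each root-to-leaf path into one If-Then rule string."""
    if not isinstance(node, dict):          # leaf: emit the finished rule
        body = " AND ".join(f"{a} = {v}" for a, v in conditions) or "TRUE"
        return [f"IF {body} THEN class = {node}"]
    (attr, branches), = node.items()        # exactly one attribute per node
    rules = []
    for value, subtree in branches.items():
        rules += extract_rules(subtree, conditions + ((attr, value),))
    return rules

for rule in extract_rules(tree):
    print(rule)
# e.g. IF outlook = overcast THEN class = yes
```

One rule per leaf, so a simpler tree (which the paper's constructed features aim for) directly yields a smaller rule base.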

Posted Content
TL;DR: The structured Hüsler-Reiss distribution is found to fit the observed extremal dependence well, and the fitted model confirms the importance of flow-connectedness for the strength of dependence between high water levels, even for locations far apart.
Abstract: A Markov tree is a probabilistic graphical model for a random vector in which conditional independence relations between variables are encoded via an undirected tree, each node corresponding to a variable. One possible max-stable attractor for such a model is a Hüsler-Reiss extreme value distribution whose variogram matrix inherits its structure from the tree, each edge contributing one free dependence parameter. Even if some of the variables are latent, as can occur at junctions or splits in a river network, the underlying model parameters are still identifiable if and only if every node corresponding to a missing variable has degree at least three. Three estimation procedures, based on the method of moments, maximum composite likelihood, and pairwise extremal coefficients, are proposed for use on multivariate peaks-over-thresholds data. The model and the methods are illustrated on a dataset of high water levels at several locations on the Seine network. The structured Hüsler-Reiss distribution is found to fit the observed extremal dependence well, and the fitted model confirms the importance of flow-connectedness for the strength of dependence between high water levels, even for locations far apart.

3 citations
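The identifiability condition stated in the abstract above, that every node carrying a latent variable must have degree at least three in the tree, reduces to a simple degree check on the edge list. A minimal sketch; the toy river-network tree with an unobserved junction "j" is hypothetical.

```python
from collections import Counter

def identifiable(edges, latent):
    """Per the abstract: the Hüsler-Reiss Markov tree parameters are
    identifiable iff every latent (unobserved) node has degree >= 3."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return all(degree[node] >= 3 for node in latent)

# Hypothetical tree: "j" is an unobserved river junction.
edges = [("a", "j"), ("b", "j"), ("j", "c")]
print(identifiable(edges, latent={"j"}))       # degree(j) = 3 -> True
print(identifiable(edges[:2], latent={"j"}))   # degree(j) = 2 -> False
```

Intuitively, a latent junction needs at least three incident edges so that the pairwise dependence among its observed neighbours pins down each edge parameter.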


Network Information
Related Topics (5)
- Cluster analysis: 146.5K papers, 2.9M citations (80% related)
- Artificial neural network: 207K papers, 4.5M citations (78% related)
- Fuzzy logic: 151.2K papers, 2.3M citations (77% related)
- The Internet: 213.2K papers, 3.8M citations (77% related)
- Deep learning: 79.8K papers, 2.1M citations (77% related)
Performance Metrics
No. of papers in the topic in previous years

Year  Papers
2023  10
2022  24
2021  101
2020  163
2019  158
2018  121