Topic

Decision tree model

About: Decision tree model is a research topic. Over its lifetime, 2256 publications have been published within this topic, receiving 38142 citations.


Papers
Journal ArticleDOI
TL;DR: An exponential lower bound on the size of a decision tree for this function is obtained, and an asymptotic formula with a linear main term is derived for its average sensitivity.
Abstract: We study various combinatorial complexity measures of Boolean functions related to some natural arithmetic problems about binary polynomials, that is, polynomials over F2. In particular, we consider the Boolean function deciding whether a given polynomial over F2 is squarefree. We obtain an exponential lower bound on the size of a decision tree for this function, and derive an asymptotic formula, having a linear main term, for its average sensitivity. This allows us to estimate other complexity characteristics such as the formula size, the average decision tree depth and the degrees of exact and approximative polynomial representations of this function. Finally, using a different method, we show that testing squarefreeness and irreducibility of polynomials over F2 cannot be done in AC0[p] for any odd prime p. Similar results are obtained for deciding coprimality of two polynomials over F2 as well.
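To make the object of study concrete, here is a minimal sketch (not from the paper) of the Boolean function in question: deciding squarefreeness of a polynomial over F2 via the classical criterion gcd(f, f') = 1, with polynomials encoded as Python integers whose bit i is the coefficient of x^i. All function names here are illustrative.

```python
def poly_mod(a: int, b: int) -> int:
    """Remainder of a modulo b over F2 (carry-less, XOR-based long division)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def poly_gcd(a: int, b: int) -> int:
    """Euclidean algorithm for polynomials over F2."""
    while b:
        a, b = b, poly_mod(a, b)
    return a

def derivative(f: int) -> int:
    """Formal derivative over F2: only odd-degree terms survive."""
    d, i = 0, 1
    while f >> i:
        if (i & 1) and ((f >> i) & 1):
            d |= 1 << (i - 1)
        i += 1
    return d

def is_squarefree(f: int) -> bool:
    """A nonconstant f over F2 is squarefree iff gcd(f, f') = 1."""
    return poly_gcd(f, derivative(f)) == 1

# x^3 + x + 1 is irreducible, hence squarefree; (x + 1)^2 = x^2 + 1 is not.
assert is_squarefree(0b1011)
assert not is_squarefree(0b101)
```

The paper's lower bound concerns computing this predicate from the coefficient bits with as few adaptive bit queries as possible; the gcd computation above reads all coefficients and is only meant to pin down the function being decided.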

6 citations

Proceedings ArticleDOI
12 Dec 2008
TL;DR: Experiments performed on an optimized encoder implementation show that the computational complexity of each frame is kept below a given limit, with very little R-D performance degradation under a reasonable constraint compared to the unconstrained case.
Abstract: The allowable computational complexity of video encoding is limited in a power-constrained system. Different video frames involve different motion and context, and therefore incur different computational complexity when no complexity control is used. Variation in computational complexity leads to encoding delay jitter. Motion estimation (ME) typically consumes far more computation than the other encoding tools. This work proposes a practical complexity control method based on a complexity analysis of an H.264 video encoder, which determines the coding gain of each encoding tool. Experiments performed on an optimized encoder implementation show that the computational complexity of each frame is kept below a given limit, with very little R-D performance degradation under a reasonable constraint compared to the unconstrained case.
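The abstract does not spell out the control algorithm; below is a hypothetical sketch (illustrative assumptions only, not the authors' method) of frame-level complexity control in the spirit described: a feedback loop that scales motion-estimation effort so that measured per-frame complexity stays under a budget. The class name, parameters, and update rule are invented for illustration.

```python
class ComplexityController:
    """Hypothetical frame-level complexity controller (illustration only)."""

    def __init__(self, budget_per_frame: float, min_scale: float = 0.25):
        self.budget = budget_per_frame   # allowed complexity per frame (e.g., cycle count)
        self.min_scale = min_scale       # never reduce ME effort below this fraction
        self.scale = 1.0                 # current fraction of full motion-estimation effort

    def before_frame(self) -> float:
        # The encoder would use this to shrink the ME search range or candidate set.
        return self.scale

    def after_frame(self, measured_complexity: float) -> None:
        # Multiplicative feedback: cut effort when over budget, restore it when under.
        ratio = measured_complexity / self.budget
        self.scale = max(self.min_scale, min(1.0, self.scale / ratio))

# Toy usage with made-up per-frame costs at full effort.
ctrl = ComplexityController(budget_per_frame=1.0e6)
for full_effort_cost in [2.0e6, 1.5e6, 0.8e6, 2.5e6]:
    effort = ctrl.before_frame()
    measured = effort * full_effort_cost   # crude proportional cost model
    ctrl.after_frame(measured)
    print(f"effort={effort:.2f}, measured={measured:.2e}")
```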

6 citations

Journal ArticleDOI
TL;DR: It is shown that a noisy parallel decision tree making O(n) queries needs Ω(log* n) rounds to compute OR of n bits, and more general trade-offs between the number of queries and rounds are proved.
Abstract: We show that a noisy parallel decision tree making O(n) queries needs Ω(log* n) rounds to compute OR of n bits. This answers a question of Newman (IEEE Conference on Computational Complexity, 2004, 113-124). We prove more general trade-offs between the number of queries and rounds. We also settle a similar question for computing MAX in the noisy comparison tree model; these results bring out interesting differences among the noise models.
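The Ω(log* n) bound is stated in terms of the iterated logarithm; for readers unfamiliar with the notation, its standard definition (not part of the abstract) is:

```latex
% Standard definition of the iterated logarithm appearing in the \Omega(\log^{*} n) bound.
\[
  \log^{*} n \;=\;
  \begin{cases}
    0, & n \le 1,\\
    1 + \log^{*}(\log n), & n > 1.
  \end{cases}
\]
```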

6 citations

01 Jan 2013
TL;DR: This paper focuses on the discovery of a tree model that can be used to predict future events from highly imbalanced data, in which the number of instances in one class is much smaller than the number of instances in the other classes.
Abstract: Present computer technology has progressed at a very rapid speed, with a tremendous effect on the storage of electronic data. With data growing enormously, conventional methods of analysis are no longer adequate. Data mining is thus an essential technology and has been proven useful for the automatic analysis of large data (5), (7). Data mining is the discovery of interesting patterns from large data. Discovered patterns can take various forms, such as a tree model for classification, a set of rules for association, or a group of representative centroids for data clustering. In this paper, we focus on the discovery of a tree model. This model can be used to predict events in the future. Decision tree induction is normally a powerful technique for discovering a tree model for future event prediction, but it struggles with highly imbalanced data, in which the number of instances in one class is much smaller than the number of instances in the other classes. The smaller group is called a rare class (2), (6) or a minority class. A dataset with a high imbalance between the majority and minority classes can cause much trouble for the tree induction algorithm. During the tree-building process, data instances from the minority class are normally pruned away and disappear from the final tree model. This makes the tree model predict the majority class correctly, but the minority class tends to be misclassified.
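As a small, hedged illustration of the problem the abstract describes (this is not the paper's algorithm; it assumes scikit-learn and an arbitrary synthetic 95/5 dataset), an unweighted tree tends to sacrifice the rare class, while class weighting is one standard way to keep it represented in the splits:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import recall_score

# Synthetic 95/5 imbalanced dataset (sizes and seed are arbitrary).
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
weighted = DecisionTreeClassifier(max_depth=5, class_weight="balanced",
                                  random_state=0).fit(X_tr, y_tr)

# Recall on the minority class is what collapses when its instances
# effectively disappear from the final tree.
print("minority recall, unweighted:", recall_score(y_te, plain.predict(X_te)))
print("minority recall, weighted:  ", recall_score(y_te, weighted.predict(X_te)))
```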

6 citations

Journal ArticleDOI
31 Mar 2012
TL;DR: This study suggests a method of creating a decision tree using marginally conditional variables and applies it to actual data to examine its efficiency.
Abstract: Data mining is a method of searching for interesting relationships among items in a given database. The decision tree is a typical data mining algorithm: it classifies or predicts by splitting a group into subgroups. In general, when researchers create a decision tree model, the generated model can become complicated depending on the model-building criteria and the number of input variables. In particular, if a decision tree has a large number of input variables, the generated model can be complex and difficult to analyze. When creating a decision tree model, marginally conditional variables (intervening variables, external variables) may be present among the input variables even though they are not directly relevant. In this study, we suggest a method of creating a decision tree using marginally conditional variables and apply it to actual data to examine its efficiency.
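The abstract does not give the construction in detail; as a loose, hypothetical sketch (assuming scikit-learn and pandas; the helper name and the idea of simply setting the flagged variables aside are assumptions, not the authors' method), one way to probe the issue is to compare a tree grown on all inputs with one grown after removing the variables judged to be marginally conditional:

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def compare_trees(X: pd.DataFrame, y, marginal_vars: list[str]) -> dict:
    """Hypothetical check: does dropping marginally conditional variables
    (intervening/external variables) keep accuracy while shrinking the tree?"""
    X_reduced = X.drop(columns=marginal_vars)
    full = DecisionTreeClassifier(random_state=0)
    reduced = DecisionTreeClassifier(random_state=0)

    result = {
        "cv_accuracy_all_inputs": cross_val_score(full, X, y, cv=5).mean(),
        "cv_accuracy_reduced": cross_val_score(reduced, X_reduced, y, cv=5).mean(),
    }
    # Tree size is the model complexity the abstract is concerned with.
    result["leaves_all_inputs"] = full.fit(X, y).get_n_leaves()
    result["leaves_reduced"] = reduced.fit(X_reduced, y).get_n_leaves()
    return result
```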

6 citations


Network Information
Related Topics (5)
Cluster analysis: 146.5K papers, 2.9M citations, 80% related
Artificial neural network: 207K papers, 4.5M citations, 78% related
Fuzzy logic: 151.2K papers, 2.3M citations, 77% related
The Internet: 213.2K papers, 3.8M citations, 77% related
Deep learning: 79.8K papers, 2.1M citations, 77% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    10
2022    24
2021    101
2020    163
2019    158
2018    121