Topic

Decision tree model

About: Decision tree model is a research topic. Over the lifetime, 2256 publications have been published within this topic receiving 38142 citations.


Papers
01 Jan 2013
TL;DR: In this article, the decision tree model is used as a data mining method to predict precipitation and evaluate drought at the Yazd synoptic meteorological station, and the results indicate that the model gives suitable precipitation predictions, especially when a 5-year moving average of the data is used.
Abstract: Droughts have intense undesirable effects on the agricultural and economic sectors, and especially on natural resources. Different methods have been presented to predict the main factors of drought, such as precipitation, and during recent decades some new computer-based models have been developed for drought prediction. In most cases these models have produced quite satisfactory results. The decision tree, as one of these models, produces rules by evaluating the parameters from the parts (components) to the whole, and finally extracts understandable knowledge from the existing statistical data. In this research, the decision tree model has been used as a data mining method to predict precipitation and evaluate drought at the Yazd synoptic meteorological station. Simulations were carried out under four different conditions. Related variables, including previous monthly precipitation, mean temperature, maximum temperature, humidity, wind speed, wind direction, and evaporation, were used as independent input variables for all four conditions, and the amount of precipitation was predicted 12 months in advance. Finally, statistical criteria were employed to evaluate the model's performance under the different conditions. Results indicated that the decision tree model is able to provide suitable predictions of precipitation, especially when a 5-year moving average of the data is used. Precise prediction of precipitation and accurate evaluation of drought conditions are of great importance for better management and planning to reduce drought damages.
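A minimal sketch of this kind of setup, using scikit-learn's DecisionTreeRegressor on synthetic monthly data; the column names, the 60-month smoothing window, the train/test split, and the hyperparameters are illustrative assumptions and are not taken from the paper:

```python
# Illustrative sketch only: synthetic data stand in for the station records.
import numpy as np
import pandas as pd
from sklearn.metrics import mean_absolute_error
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n_months = 600  # 50 years of synthetic monthly records
df = pd.DataFrame({
    "precip": rng.gamma(2.0, 10.0, n_months),
    "mean_temp": rng.normal(18, 8, n_months),
    "max_temp": rng.normal(25, 9, n_months),
    "humidity": rng.uniform(10, 70, n_months),
    "wind_speed": rng.uniform(0, 15, n_months),
    "wind_dir": rng.uniform(0, 360, n_months),
    "evaporation": rng.uniform(0, 12, n_months),
})

# Smooth all series with a 5-year (60-month) moving average, the variant the
# abstract reports as giving the best predictions.
smoothed = df.rolling(60, min_periods=60).mean().dropna()

# Predict precipitation 12 months ahead from the current month's variables.
X = smoothed.drop(columns="precip").iloc[:-12]
y = smoothed["precip"].shift(-12).dropna()

split = int(len(X) * 0.8)
model = DecisionTreeRegressor(max_depth=5, random_state=0)
model.fit(X.iloc[:split], y.iloc[:split])
pred = model.predict(X.iloc[split:])
print("MAE on held-out months:", mean_absolute_error(y.iloc[split:], pred))
```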

1 citation

Proceedings ArticleDOI
25 Jun 2010
TL;DR: An optimized decision tree algorithm based on the rough sets model is proposed; it prunes branches while the tree is being built, avoiding a redundant later pruning step, and improves the efficiency of the algorithm.
Abstract: An optimized decision tree algorithm based on the rough sets model is proposed. Firstly, the most popular decision tree algorithms based on the rough sets model usually partition the examples too finely in pursuit of classification accuracy, so a few special examples can have a negative impact on the decision tree. Inhibitory factors are introduced into the tree-building process to cut branches as the tree is formed, avoiding the redundant step of cutting branches later. Secondly, the condition attributes and the decision attribute are matched at every division to avoid unnecessary calculation and improve the efficiency of the algorithm.
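The "inhibitory factors" described above act as pre-pruning: a branch is simply not grown when its best available split gains too little, so no separate pruning pass is needed afterwards. Below is a generic sketch of that idea, not the paper's rough-set-based algorithm; the information-gain measure, the `inhibit` threshold, and the depth limit are assumptions made for illustration:

```python
# Generic pre-pruning sketch: stop growing a branch when the best split's
# information gain falls below an inhibition threshold.
from collections import Counter
import math

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def build(rows, labels, inhibit=0.05, depth=0, max_depth=5):
    # Pure node or depth limit reached: emit a leaf with the majority label.
    if len(set(labels)) == 1 or depth >= max_depth:
        return Counter(labels).most_common(1)[0][0]
    base, best = entropy(labels), None
    for attr in range(len(rows[0])):
        for value in {r[attr] for r in rows}:
            left = [i for i, r in enumerate(rows) if r[attr] == value]
            right = [i for i, r in enumerate(rows) if r[attr] != value]
            if not left or not right:
                continue
            gain = base - (len(left) * entropy([labels[i] for i in left])
                           + len(right) * entropy([labels[i] for i in right])) / len(rows)
            if best is None or gain > best[0]:
                best = (gain, attr, value, left, right)
    # Pre-prune: if even the best split gains too little, keep this node a leaf.
    if best is None or best[0] < inhibit:
        return Counter(labels).most_common(1)[0][0]
    _, attr, value, left, right = best
    return {(attr, value): build([rows[i] for i in left], [labels[i] for i in left],
                                 inhibit, depth + 1, max_depth),
            "else": build([rows[i] for i in right], [labels[i] for i in right],
                          inhibit, depth + 1, max_depth)}

# Tiny example: the second attribute perfectly separates the labels.
rows = [["sunny", "hot"], ["sunny", "mild"], ["rainy", "mild"], ["rainy", "hot"]]
print(build(rows, ["no", "yes", "yes", "no"]))
```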

1 citation

Posted Content
TL;DR: The authors proposed a dynamic programming algorithm that marginalises over latent binary tree structures with $N$ leaves, allowing them to compute the likelihood of a sequence of tokens under a latent tree model, which they maximise to train a recursive neural function.
Abstract: We model the recursive production property of context-free grammars for natural and synthetic languages. To this end, we present a dynamic programming algorithm that marginalises over latent binary tree structures with $N$ leaves, allowing us to compute the likelihood of a sequence of $N$ tokens under a latent tree model, which we maximise to train a recursive neural function. We demonstrate performance on two synthetic tasks: SCAN (Lake and Baroni, 2017), where it outperforms previous models on the LENGTH split, and English question formation (McCoy et al., 2020), where it performs comparably to decoders with the ground-truth tree structure. We also present experimental results on German-English translation on the Multi30k dataset (Elliott et al., 2016), and qualitatively analyse the induced tree structures our model learns for the SCAN tasks and the German-English translation task.
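The marginalisation over latent binary trees can be carried out with a CYK-style inside chart: the value of a span is a log-sum, over all split points, of the values of its two child spans plus a composition score. Here is a toy sketch of that recursion with a placeholder constant composition score; the paper's recursive neural composition function and token probabilities are not reproduced, and `comp_logscore` is a hypothetical stand-in:

```python
import math

def logaddexp(a, b):
    # Numerically stable log(exp(a) + exp(b)).
    if a == float("-inf"):
        return b
    if b == float("-inf"):
        return a
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def log_marginal(n, comp_logscore=lambda i, k, j: 0.0):
    """Log of the total score summed over all binary trees with n leaves."""
    # inside[i][j] = log-sum of scores over all binary trees spanning tokens i..j.
    inside = [[float("-inf")] * n for _ in range(n)]
    for i in range(n):
        inside[i][i] = 0.0  # each leaf contributes log-score 0 in this toy version
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            total = float("-inf")
            for k in range(i, j):  # split the span into (i..k) and (k+1..j)
                total = logaddexp(total, inside[i][k] + inside[k + 1][j]
                                  + comp_logscore(i, k, j))
            inside[i][j] = total
    return inside[0][n - 1]

# With constant scores the chart just counts binary trees: exp(.) = Catalan(n-1).
print(round(math.exp(log_marginal(5))))  # 14 binary trees over 5 leaves
```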

1 citation

Journal Article
TL;DR: Based on the Wang-Zhang-Xiao algorithm, an efficient algorithm is given for computing the m-tight error linear complexity of binary sequences with period $p^n$, where p is a prime and 2 is a primitive root modulo $p^2$.
Abstract: Based on the earlier notions of linear complexity, k-error linear complexity, k-error linear complexity profile, and min-error, the concept of m-tight error linear complexity is presented to study the stability of the linear complexity of sequences. The m-tight error linear complexity of a sequence S is defined as a two-tuple $(k_m, LC_m)$, which is the m-th jump point of the k-error linear complexity profile of S. An algorithm is given for the m-tight error linear complexity of binary sequences with period $2^n$, using a modified cost different from that used in the Stamp-Martin algorithm; the new algorithm is free of computations involving the sequence elements. Based on the Wang-Zhang-Xiao algorithm, an efficient algorithm is also given for computing the m-tight error linear complexity of binary sequences with period $p^n$, where p is a prime and 2 is a primitive root modulo $p^2$.
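For context, the classical Games-Chan algorithm computes the (zero-error) linear complexity of a binary sequence with period $2^n$ in linear time; Stamp-Martin-style algorithms like the one described above extend this recursion with cost bookkeeping to handle k errors. The sketch below shows only the basic Games-Chan recursion, not the paper's modified-cost algorithm:

```python
def games_chan(s):
    """Linear complexity of one period s (a list of 0/1 bits, len(s) == 2**n)."""
    n = len(s)
    if n == 1:
        return s[0]  # the all-zero sequence has complexity 0, otherwise 1
    half = n // 2
    left, right = s[:half], s[half:]
    if left == right:
        # The two halves agree, so the complexity is that of one half alone.
        return games_chan(left)
    # Otherwise the complexity is n/2 plus the complexity of the halves' XOR.
    return half + games_chan([a ^ b for a, b in zip(left, right)])

# Examples: the all-ones period has linear complexity 1; the alternating
# 0,1,0,1,... period has linear complexity 2.
print(games_chan([1] * 8))      # 1
print(games_chan([0, 1] * 4))   # 2
```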

1 citation

Book ChapterDOI
07 Jun 2014
TL;DR: Building on the question raised by Buhrman, Koucký, and Vereshchagin, this article shows a lower bound of 0.99n on the communication length of any randomized protocol that, with probability at least 0.01, approximates C(x|y) with additive error εn for all input pairs.
Abstract: The paper [Harry Buhrman, Michal Koucký, Nikolay Vereshchagin. Randomized Individual Communication Complexity. IEEE Conference on Computational Complexity 2008: 321-331] considered the communication complexity of the following problem. Alice has a binary string x and Bob a binary string y, both of length n, and they want to compute or approximate the Kolmogorov complexity C(x|y) of x conditional to y. It is easy to show that the deterministic communication complexity of approximating C(x|y) with additive error α is at least n − 2α − O(1). The above-referenced paper asks what the randomized communication complexity of this problem is, and shows that for r-round randomized protocols it is at least Ω((n/α)^{1/r}). In this paper, for some positive ε, we show the lower bound 0.99n on the (worst-case) communication length of any randomized protocol that with probability at least 0.01 approximates C(x|y) with additive error εn for all input pairs.

1 citation


Network Information
Related Topics (5)
Cluster analysis: 146.5K papers, 2.9M citations, 80% related
Artificial neural network: 207K papers, 4.5M citations, 78% related
Fuzzy logic: 151.2K papers, 2.3M citations, 77% related
The Internet: 213.2K papers, 3.8M citations, 77% related
Deep learning: 79.8K papers, 2.1M citations, 77% related
Performance
Metrics
No. of papers in the topic in previous years
Year: Papers
2023: 10
2022: 24
2021: 101
2020: 163
2019: 158
2018: 121