Topic
Decision tree model
About: Decision tree model is a research topic. Over its lifetime, 2,256 publications have been published on this topic, receiving 38,142 citations.
Papers published on a yearly basis
Papers
01 Dec 2020
TL;DR: The authors used the J48 decision tree model of the data mining software Weka, performed a parameter analysis, and showed experimentally how the number of instances affects the accuracy of the classification.
Abstract: Data mining classification methods can be a powerful tool when it comes to learning the rules of card games such as Poker. There are millions of possible combinations in the game, and building a decision tree to cover all the rules is not desirable. We used the J48 decision tree model of the data mining software Weka and performed a parameter analysis. We then show experimentally how the number of instances affects the accuracy of the classification, and propose an equation to estimate accuracy based on the number of instances in a data set. We examine several different attributes, and the experiment shows high performance.
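As a rough illustration of the instance-count experiment described above, the sketch below trains a one-level decision tree (a decision stump) on growing subsets of synthetic data and reports test accuracy. It is a minimal stand-in, not the paper's J48/Weka setup: the data, the true rule `x >= 5`, and all function names are invented for the example.

```python
import random

def train_stump(xs, ys):
    """Fit a one-level decision tree (stump): pick the threshold t
    that minimises training error for the rule `label = (x >= t)`."""
    best_t, best_err = 0.0, float("inf")
    for t in sorted(set(xs)):
        err = sum((x >= t) != y for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def accuracy(t, xs, ys):
    """Fraction of examples the stump classifies correctly."""
    return sum((x >= t) == y for x, y in zip(xs, ys)) / len(xs)

random.seed(0)
# Synthetic data: the true (noise-free) rule is `x >= 5`.
xs = [random.uniform(0, 10) for _ in range(1200)]
ys = [x >= 5 for x in xs]
train_x, train_y = xs[:1000], ys[:1000]
test_x, test_y = xs[1000:], ys[1000:]

# Test accuracy as a function of the number of training instances:
# more instances pin the learned threshold closer to the true one.
for n in (5, 20, 100, 1000):
    t = train_stump(train_x[:n], train_y[:n])
    print(n, round(accuracy(t, test_x, test_y), 3))
```

With a noise-free rule, accuracy approaches 1 quickly; in the noisy, many-attribute setting of the paper the curve would rise more slowly, which is what their proposed accuracy-vs-instances equation models.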
TL;DR: A predictive model of stroke based on a decision tree is implemented in Python to predict the stroke probability of ten samples; the results show that older people with high blood pressure, heart disease, or habitual smoking are more likely to have a stroke, with a prediction accuracy of 88% for the decision tree and 79% for the Naive Bayes model.
Abstract: In this paper, a predictive model of stroke based on a decision tree is implemented in Python to predict the stroke probability of ten samples. The stroke dataset is collected and preprocessed, the Gini coefficient of each feature is calculated to select the split, and the decision tree model is obtained. Finally, the stroke probability is predicted for the ten samples. In addition, a Naive Bayes model is applied to predict the stroke probability for comparison with the decision tree method. The experimental results show that older people with high blood pressure, heart disease, or habitual smoking are more likely to have a stroke, with a prediction accuracy of 88% for the decision tree method and 79% for the Naive Bayes model.
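The Gini-based split selection mentioned above can be sketched in a few lines of Python. The records below are hand-made for illustration (not the paper's dataset), and the lower the weighted Gini impurity of a candidate split, the purer and better the split.

```python
def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def split_gini(rows, feature, threshold, target):
    """Weighted Gini impurity of splitting `rows` on feature >= threshold.
    Assumes the threshold leaves both sides non-empty."""
    left = [r[target] for r in rows if r[feature] < threshold]
    right = [r[target] for r in rows if r[feature] >= threshold]
    n = len(rows)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

# Hypothetical records: age and hypertension as features,
# stroke (0/1) as the target.
rows = [
    {"age": 80, "hypertension": 1, "stroke": 1},
    {"age": 75, "hypertension": 1, "stroke": 1},
    {"age": 40, "hypertension": 0, "stroke": 0},
    {"age": 35, "hypertension": 0, "stroke": 0},
    {"age": 60, "hypertension": 0, "stroke": 0},
    {"age": 70, "hypertension": 1, "stroke": 0},
]
print(split_gini(rows, "age", 72, "stroke"))  # 0.0: separates classes perfectly
print(split_gini(rows, "age", 50, "stroke"))  # impure right side, higher score
```

A decision tree builder would evaluate every candidate (feature, threshold) pair this way and split on the one with the lowest weighted impurity.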
22 Oct 2010
TL;DR: This paper presents a short-term electric load forecasting method based on Autoregressive Tree Algorithm and Rough Set Theory that not only avoids the complexity and long training time of the model, but also considers various factors comprehensively.
Abstract: This paper presents a short-term electric load forecasting method based on the Autoregressive Tree Algorithm and Rough Set Theory. First, Rough Set Theory is used to reduce the test attributes of the Autoregressive Tree, optimizing the Autoregressive Tree Algorithm. Then an Autoregressive Tree model for short-term electric load forecasting is set up. Through the knowledge reduction method of Rough Set Theory, attributes whose dependency degree is zero are removed. This not only avoids model complexity and long training times but also considers various factors comprehensively. At the same time, the algorithm greatly improves the prediction rate by using automatic data mining algorithms. Practical examples show that it can effectively improve load forecast accuracy and reduce prediction time.
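The rough-set reduction step, dropping attributes whose dependency degree is zero, can be sketched as follows. The dependency degree gamma(A -> D) is the fraction of objects whose A-value class maps to a single decision value; the records and attribute names here are hypothetical, not from the paper.

```python
from collections import defaultdict

def dependency(rows, attr, decision):
    """Rough-set dependency degree gamma(attr -> decision): the fraction
    of objects whose attr-value equivalence class has a unique decision."""
    groups = defaultdict(set)
    for row in rows:
        groups[row[attr]].add(row[decision])
    pos = sum(1 for row in rows if len(groups[row[attr]]) == 1)
    return pos / len(rows)

# Hypothetical load-forecasting records: weather and weekday as
# condition attributes, "load" level as the decision attribute.
rows = [
    {"weather": "hot",  "weekday": "mon", "load": "high"},
    {"weather": "hot",  "weekday": "tue", "load": "high"},
    {"weather": "mild", "weekday": "mon", "load": "low"},
    {"weather": "mild", "weekday": "tue", "load": "low"},
]
for a in ("weather", "weekday"):
    print(a, dependency(rows, a, "load"))
# weather fully determines load (dependency 1.0); weekday tells us
# nothing (dependency 0.0), so knowledge reduction would drop it.
```

Removing zero-dependency attributes before building the autoregressive tree is what shrinks the model and its training time.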
26 Aug 2010
TL;DR: A model designed to measure the difference of similar Web services and a way to calculate the distance between two Web services are presented, and an example scenario of service re-configuration and evolution is walked through to demonstrate the distance prediction mechanism.
Abstract: To organize similar Web services for more precise service discovery and easy service composition, it is important to predict the difference between two Web services. Furthermore, providing a quantified measure of the differences between Web services turns out to be a key enabler for understanding them. In this paper, we present a model designed to measure the difference of similar Web services and a way to calculate the distance between two Web services. The basis of the scheme is a tree model of Web services and a decision tree that calculates the degree of matching between the paths of two trees. The matching relies on a term categorization technique applied to selected feature dimensions defined in standardized service description languages. The term categorization results are used to calculate the differences of corresponding paths of the trees, which are in turn used to derive the difference of the trees. We walk through an example scenario of service re-configuration and evolution to demonstrate the distance prediction mechanism we propose.
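The path-based comparison can be illustrated with a deliberately simplified sketch: model each service as a nested dict, enumerate root-to-leaf paths, and take the fraction of unshared paths as a crude distance. This exact-match distance is an invention for illustration; the paper's scheme scores partially matching paths via term categorization rather than requiring exact equality.

```python
def paths(tree, prefix=()):
    """Enumerate root-to-leaf paths of a nested-dict service tree."""
    if not tree:
        return [prefix]
    out = []
    for key, subtree in tree.items():
        out.extend(paths(subtree, prefix + (key,)))
    return out

def tree_distance(a, b):
    """Fraction of root-to-leaf paths not shared by the two trees."""
    pa, pb = set(paths(a)), set(paths(b))
    return 1 - len(pa & pb) / max(len(pa), len(pb))

# Two hypothetical WSDL-like service trees that share one operation.
s1 = {"service": {"port": {"getQuote": {}, "getHistory": {}}}}
s2 = {"service": {"port": {"getQuote": {}, "getForecast": {}}}}
print(tree_distance(s1, s2))  # 0.5: one of the two paths differs
```

Replacing exact path equality with a per-node term-similarity score would move this sketch closer to the degree-of-matching calculation the paper describes.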
01 Jan 1997
TL;DR: It is shown that any parallel algorithm in the fixed-degree algebraic decision tree model that answers membership queries in W ⊂ R^n using p processors requires Ω(log|W| / log(p/n)) rounds, where |W| is the number of connected components of W. This implies non-trivial lower bounds for parallel algorithms that use a superlinear number of processors.
Abstract: We present lower bounds on the number of rounds required to solve a decision problem in the parallel algebraic decision tree model. More specifically, we show that any parallel algorithm in the fixed-degree algebraic decision tree model that answers membership queries in W ⊂ R^n using p processors requires Ω(log|W| / log(p/n)) rounds, where |W| is the number of connected components of W. This implies non-trivial lower bounds for parallel algorithms that use a superlinear number of processors, namely, that the speed-up obtainable in such cases is not proportional to the number of processors. We further prove a similar result for the average-case complexity. We give applications of this result to various fundamental problems in computational geometry, such as convex-hull construction and trapezoidal decomposition, and also present algorithms with matching upper bounds. The algorithms extend Reif and Sen's work in parallel computational geometry to the sublogarithmic time range, based on recent progress in padded sorting. A corollary of our result strengthens the known lower bound for parallel sorting from the parallel comparison tree model to the more powerful bounded-degree decision tree.
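To make the speed-up claim concrete, a back-of-the-envelope consequence can be read off by combining the round lower bound with the classical sequential Ω(log|W|) bound for membership queries in the algebraic decision tree model (Ben-Or's bound):

```latex
T_{\text{seq}} = \Omega(\log |W|), \qquad
T_{p} = \Omega\!\left(\frac{\log |W|}{\log (p/n)}\right)
\;\Longrightarrow\;
\text{speed-up} = \frac{T_{\text{seq}}}{T_{p}} = O\!\left(\log \frac{p}{n}\right)
```

The achievable speed-up grows only logarithmically in p rather than linearly; this is the sense in which it is not proportional to the number of processors.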