Topic
Decision tree model
About: Decision tree model is a research topic. Over its lifetime, 2256 publications have been published within this topic, receiving 38142 citations.
Papers published on a yearly basis
Papers
25 May 2021
TL;DR: In this paper, a decision tree model for the influencing factors of university students' physical education practice teaching effect is designed, and the average clustering accuracy of the designed model is as high as 99.4%.
Abstract: In the traditional SEM model, a density-based clustering method is used to carry out the cluster analysis of the factors influencing the effect of university students' physical education (PE) practice teaching. The resulting cluster analysis has low reliability and is time-consuming, which degrades the model's analysis of the influencing factors. In this paper, a decision tree model for the influencing factors of university students' PE practice teaching effect is therefore designed. The k-means clustering algorithm is adopted to classify the basic information data on the PE practice of university students. Based on the clustering results, combined with the basic ideas and concepts of decision trees, a decision tree model of the influencing factors of the PE practice teaching effect is constructed using the ID3 algorithm. The average clustering accuracy of the designed model is as high as 99.4%. According to the model's decision results, “physical fitness” is the most important factor affecting the effect of university students' PE practice teaching.
1 citation
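The pipeline the abstract describes (k-means clustering of the raw data, then an ID3-style tree over the cluster assignments) can be sketched roughly as follows. This is a minimal sketch, not the paper's implementation: the data and the three "influencing factor" columns are synthetic placeholders, and scikit-learn's entropy criterion stands in for ID3, which the library does not implement exactly.

```python
# Rough sketch: cluster survey data with k-means, then fit an entropy-based
# decision tree on the cluster labels. All data here is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Hypothetical influencing factors: e.g. physical fitness, interest, facilities
X = rng.random((200, 3))

# Step 1: k-means clustering of the basic information data
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: build an entropy-criterion decision tree over the cluster assignments
tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, clusters)
print(tree.score(X, clusters))  # training accuracy of the tree on the clusters
```

On continuous data with no depth limit the tree typically reproduces the cluster labels exactly; the paper's reported 99.4% accuracy presumably comes from a comparable agreement measure on its own survey data.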
17 Jan 2020
TL;DR: In this paper, the authors propose a random forest model construction method and device, electronic equipment, and a storage medium. The method comprises: dividing the dimension feature variables of a data set into a plurality of feature subsets; constructing a corresponding decision tree model from the samples of each feature subset and determining the weight of each feature variable in the subset from that tree; then sampling the top-ranked feature variables from each feature subset and combining them into a dimensionality-reduction feature variable set.
Abstract: The invention provides a random forest model construction method and a device, electronic equipment, and a storage medium. The method comprises the following steps: dividing the dimension feature variables of a data set into a plurality of feature subsets; constructing a corresponding decision tree model from the samples of each feature subset, and determining the weight of each feature variable in the subset from that decision tree model; sorting the feature variables of each subset by descending weight, sampling the top-ranked feature variables from each subset, and combining them to form a dimensionality-reduction feature variable set; dividing the samples in the data set into a plurality of sample subsets; performing sampling with replacement on each sample subset, restricted to the dimensionality-reduction feature variable set, to obtain new sample subsets matching the data set's sample size; and independently constructing a decision tree model from each new sample subset, then integrating the constructed decision tree models into a random forest model. In this way, the data processing efficiency of the random forest model can be improved.
1 citation
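Assuming "weight" corresponds to the trees' feature importances, the patent's two stages (per-subset feature weighting, then bagging over the reduced feature set) might be sketched like this. The dataset, subset count, and how many features are kept are all illustrative choices, not the patent's parameters:

```python
# Sketch: rank features within subsets using per-subset decision trees, keep
# the top-weighted features, then bag trees on bootstrap samples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=12, random_state=0)
rng = np.random.default_rng(0)

# Stage 1: divide the 12 features into 3 subsets; weight each subset's
# features with its own decision tree and keep the 2 highest-weighted ones
subsets = np.array_split(np.arange(X.shape[1]), 3)
kept = []
for idx in subsets:
    t = DecisionTreeClassifier(random_state=0).fit(X[:, idx], y)
    order = np.argsort(t.feature_importances_)[::-1]
    kept.extend(idx[order[:2]])

# Stage 2: bagging — one tree per bootstrap sample over the reduced features
Xr = X[:, kept]
forest = []
for _ in range(10):
    rows = rng.integers(0, len(Xr), len(Xr))   # sampling with replacement
    forest.append(DecisionTreeClassifier().fit(Xr[rows], y[rows]))

# Majority vote across the ensemble
votes = np.stack([t.predict(Xr) for t in forest])
pred = (votes.mean(axis=0) > 0.5).astype(int)
print((pred == y).mean())  # ensemble accuracy on the training data
```

The efficiency claim follows from stage 1: every tree in the forest is trained on the reduced feature set, so each split search considers fewer candidate features.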
01 Jan 2015
TL;DR: In this study, a genetic algorithm is used to construct decision trees of increased accuracy and efficiency compared to those constructed by the conventional ID3 or C4.5 decision tree building algorithms.
Abstract: Decision trees are widely used in knowledge discovery and decision support systems. They are simple and practical prediction models but often suffer from excessive complexity and can even be incomprehensible. In this study, a genetic algorithm is used to construct decision trees of increased accuracy and efficiency compared to those built by the conventional ID3 or C4.5 decision tree building algorithms. An improved definition of an efficient binary decision tree is proposed and evaluated: instead of simply counting the nodes in a tree, the average number of questions asked in the tree over all database entries is used.
1 citation
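The proposed efficiency measure, the average number of questions asked over all database entries, amounts to an entry-weighted average leaf depth rather than a node count. A minimal sketch on a hypothetical hand-built tree (the questions and entries are made up):

```python
# Toy binary tree: internal nodes are ("question", yes_branch, no_branch),
# leaves are class labels. Questions are "var>threshold" strings.
tree = ("x>0.5",
        ("y>0.5", "A", "B"),
        "C")

def leaf_depth(node, entry, depth=0):
    """Follow one entry down the tree; return the number of questions asked."""
    if isinstance(node, str):          # reached a leaf
        return depth
    question, yes_branch, no_branch = node
    var, thresh = question.split(">")
    branch = yes_branch if entry[var] > float(thresh) else no_branch
    return leaf_depth(branch, entry, depth + 1)

entries = [{"x": 0.9, "y": 0.8}, {"x": 0.9, "y": 0.1}, {"x": 0.1, "y": 0.4}]
avg_questions = sum(leaf_depth(tree, e) for e in entries) / len(entries)
print(avg_questions)  # (2 + 2 + 1) / 3 ≈ 1.67
```

A tree with more nodes can still score better on this measure if the frequently-seen entries reach shallow leaves, which is the point of the redefinition.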
30 Nov 1997
TL;DR: Experimental results in the domain of modelling elementary subtraction skills showed that the tree quality and the leaf quality of a decision path provide valuable references for resolving contradictory predictions, and that a single-tree model representation performed nearly as well as the multi-tree representation.
Abstract: Input-Output Agent Modelling (IOAM) is an approach to modelling an agent in terms of the relationships between the inputs and outputs of the cognitive system. This approach, together with a leading inductive learning algorithm, C4.5, has been adopted to build a subtraction skill modeller, C4.5-IOAM. It models agents' competencies with a set of decision trees. C4.5-IOAM makes no prediction when the predictions from different decision trees are contradictory. This paper proposes three techniques for resolving such situations. Two involve selecting the more reliable prediction from a set of competing predictions using a tree quality measure and a leaf quality measure. The third merges multiple decision trees into a single tree, which has the additional advantage of producing more comprehensible models. Experimental results in the domain of modelling elementary subtraction skills showed that the tree quality and the leaf quality of a decision path provide valuable references for resolving contradictory predictions, and that a single-tree model representation performed nearly as well as the multi-tree representation.
1 citation
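The quality-based resolution techniques reduce to a small selection rule: when the per-tree predictions conflict, return the one backed by the higher quality score. A minimal sketch, where the labels and quality values are invented for illustration and stand in for either the tree or the leaf quality measure:

```python
def resolve(predictions):
    """predictions: list of (predicted_label, quality) pairs, one per tree.
    Agreeing trees need no resolution; otherwise the prediction from the
    highest-quality tree (or leaf) wins."""
    labels = {label for label, _ in predictions}
    if len(labels) == 1:               # no contradiction
        return labels.pop()
    return max(predictions, key=lambda p: p[1])[0]

# Two trees contradict each other; the more reliable one decides
print(resolve([("borrow", 0.92), ("no-borrow", 0.67)]))  # borrow
```

Under this rule the modeller always commits to an answer, unlike plain C4.5-IOAM, which abstains on contradiction.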
08 Oct 2003
TL;DR: The weight-threshold method is proposed to make the classifier predictable and reduce the matching workload; it is illustrated that packet filtering based on decision tree classifiers is more efficient and can learn inductively from the rules.
Abstract: The traditional technique for packet filtering is linear search, whose efficiency is very low in the worst case; moreover, the rules in the rule list are treated as independent, so the information shared among them cannot be used effectively. In this paper, a new idea using a decision tree classifier is proposed. It first analyses the differences between packet filtering and general classification problems. The building and searching algorithms based on the decision tree are then explicated, and the algorithms' performance is analysed. Finally, the weight-threshold method is proposed to make the classifier predictable and reduce the matching workload. It is illustrated that packet filtering based on decision tree classifiers is more efficient and can learn inductively from the rules.
1 citation
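The contrast the paper draws between linear rule scanning and tree-based lookup can be illustrated with a toy filter. The rules, field names, and default-drop policy below are invented for the sketch; a real packet classifier would branch on prefixes and port ranges rather than exact values:

```python
# A tiny rule list for the sketch
RULES = [
    {"proto": "tcp", "dport": 22, "action": "drop"},
    {"proto": "tcp", "dport": 80, "action": "accept"},
    {"proto": "udp", "dport": 53, "action": "accept"},
]

def linear_filter(pkt):
    """Traditional approach: scan the rule list top to bottom."""
    for r in RULES:
        if r["proto"] == pkt["proto"] and r["dport"] == pkt["dport"]:
            return r["action"]
    return "drop"                      # default policy

# Decision tree built once from the same rules: branch on proto, then dport
TREE = {}
for r in RULES:
    TREE.setdefault(r["proto"], {})[r["dport"]] = r["action"]

def tree_filter(pkt):
    """Two field tests per packet instead of a scan over every rule."""
    return TREE.get(pkt["proto"], {}).get(pkt["dport"], "drop")

pkt = {"proto": "tcp", "dport": 80}
print(linear_filter(pkt), tree_filter(pkt))  # accept accept
```

The lookup cost of `tree_filter` depends on the number of fields tested, not on the length of the rule list, which is the efficiency argument the abstract makes.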