Topic

Decision tree model

About: Decision tree model is a research topic. Over its lifetime, 2,256 publications on this topic have received 38,142 citations.


Papers
Journal Article
TL;DR: This study identifies important factors for classifying Sasang constitutions through two-stage decision tree analysis and suggests that gender must be considered in the first stage to improve the accuracy of classification.
Abstract: 1. Objectives: In Sasang Constitutional Medicine (SCM), a person's Sasang constitution must be determined accurately before any Sasang treatment. The purpose of this study is to develop an objective method for classifying Sasang constitution. 2. Methods: We collected samples from 5 centers where SCM is practiced and applied two-stage decision tree analysis to them; the data came from subjects whose response to herbal medicine had been confirmed according to Sasang constitution. 3. Results: The two-stage decision tree model shows higher classification power than a simple decision tree model, and gender must be considered in the first stage to improve classification accuracy. 4. Conclusions: Through two-stage decision tree analysis we identified the important factors for classifying Sasang constitutions, and the two-stage model outperformed a simple decision tree model.
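The study reports no code, but the two-stage idea it describes (split by gender first, then fit a separate decision tree per group) can be sketched as below; the feature names, encodings, and synthetic data are assumptions for illustration, not taken from the study.

# Minimal sketch of a two-stage decision tree: stage 1 partitions by
# gender, stage 2 fits a separate tree per gender group. Feature names,
# encodings, and data are illustrative assumptions, not from the study.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 400
gender = rng.integers(0, 2, size=n)        # 0/1 encoding is an assumption
features = rng.normal(size=(n, 4))         # hypothetical anthropometric measures
constitution = rng.integers(0, 3, size=n)  # three constitution types as labels

# Stage 1: split the training samples by gender.
stage2 = {}
for g in (0, 1):
    mask = gender == g
    stage2[g] = DecisionTreeClassifier(max_depth=4, random_state=0)
    stage2[g].fit(features[mask], constitution[mask])

# Stage 2: route each new sample through its gender-specific tree.
def classify(g, x):
    return stage2[g].predict(x.reshape(1, -1))[0]

print(classify(0, features[0]))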
Patent
19 Mar 2019
TL;DR: In this paper, an intelligent learning system and method combining data signals and knowledge guidance is presented. The system comprises a module that determines four hidden-layer parameters (the number of nodes, the basis-function center values, the basis-function expansion constants, and the weights); a learning module that carries out the learning stage with a self-organizing selection method; and a logistic regression module that fits a logistic function to the collected data variables to estimate the probability that a fault occurs, so that faults which frequently occur in the elevator can be judged within a certain range.
Abstract: The invention discloses an intelligent learning system and method combining data signals and knowledge guidance. The system comprises a module that determines four hidden-layer parameters: the number of nodes, the basis-function center values, the basis-function expansion constants, and the weights. A learning module carries out the learning stage using a self-organizing selection method. A logistic regression module fits a logistic function to the collected data variables to estimate the probability that a fault occurs, so that faults which frequently occur in the elevator can be judged within a certain range when the data deviate over a certain period, and the need for maintenance is decided according to the fault. On the premise that the predictor variables are mutually independent, the acquired data are classified with the decision tree model: a decision tree is constructed from a training data set, and classifications are generated for data samples according to the tree. The method applies self-organizing learning within deep learning to improve learning efficiency, and improves prediction accuracy through real-time data transmission and periodic learning.
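The patent text stays high-level, but the two statistical components it names (a logistic function fitted to the collected variables to estimate fault probability, and a decision tree that classifies the acquired data) can be sketched roughly as below; the sensor data, feature count, and maintenance cutoff are assumptions for illustration, not from the patent.

# Rough sketch of the two statistical components the patent names:
# logistic regression for fault probability and a decision tree for
# classifying the acquired data. All data and thresholds are assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 500
sensors = rng.normal(size=(n, 6))   # hypothetical elevator sensor variables
fault = (sensors[:, 0] + rng.normal(scale=0.5, size=n) > 1).astype(int)

# Fit a logistic function to estimate the probability that a fault occurs.
prob_model = LogisticRegression().fit(sensors, fault)
fault_prob = prob_model.predict_proba(sensors)[:, 1]

# Construct a decision tree from the training set and classify samples.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(sensors, fault)
labels = tree.predict(sensors)

# Decide maintenance when the estimated fault probability is high
# (the 0.8 cutoff is an assumed value, not from the patent).
print(int((fault_prob > 0.8).sum()), "samples flagged for maintenance")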
Journal Article
TL;DR: Cascading Decision Trees shorten explanations of classifications without sacrificing overall model performance by separating the notion of a decision path from that of an explanation path: instead of one monolithic tree, several smaller decision subtrees are built and cascaded in sequence.
Abstract: Classic decision tree learning is a binary classification algorithm that constructs models with first-class transparency: every classification has a directly derivable explanation. However, learning decision trees on modern datasets generates large trees, which in turn generate decision paths of excessive depth, obscuring the explanation of classifications. To improve the comprehensibility of classifications, we propose a new decision tree model that we call Cascading Decision Trees. Cascading Decision Trees shorten the size of explanations of classifications without sacrificing overall model performance. Our key insight is to separate the notion of a decision path from that of an explanation path. Utilizing this insight, instead of building one monolithic decision tree, we build several smaller decision subtrees and cascade them in sequence. Our cascading decision subtrees are designed to specifically target explanations for positive classifications. This way each subtree identifies the smallest set of features that can classify as many positive samples as possible without misclassifying any negative samples. Applying cascading decision trees to new samples yields a significantly shorter and more succinct explanation if one of the subtrees detects a positive classification: in that case, we immediately stop and report the decision path of only the current subtree as the explanation for the classification. We evaluate our algorithm on standard datasets as well as new real-world applications, and find that our model shortens the explanation depth by over 40.8% for positive classifications compared to the classic decision tree model.
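The authors' implementation is not reproduced here, but the cascading idea admits a compact sketch: train a shallow tree, accept only leaves that are purely positive on the training data, pass everything the stage did not claim to the next subtree, and stop at the first subtree that fires. The code below is a simplified reading under assumed synthetic data, not the paper's algorithm verbatim.

# Simplified sketch of cascading decision trees: each shallow subtree
# keeps only leaves that contain no negative training samples, and the
# uncovered remainder cascades to the next stage.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_cascade(X, y, stages=3, depth=2):
    models, Xr, yr = [], X, y
    for _ in range(stages):
        if yr.sum() == 0:                      # no positives left to explain
            break
        tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(Xr, yr)
        leaves = tree.apply(Xr)
        # Pure-positive leaves: classify positives without misclassifying negatives.
        pure = [l for l in np.unique(leaves) if yr[leaves == l].all()]
        models.append((tree, set(pure)))
        covered = np.isin(leaves, pure)
        Xr, yr = Xr[~covered], yr[~covered]    # cascade the rest onward
    return models

def predict_cascade(models, x):
    for tree, pure in models:
        if tree.apply(x.reshape(1, -1))[0] in pure:
            return 1                           # stop at the first subtree that fires
    return 0                                   # no subtree claimed the sample

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 5))
y = ((X[:, 0] > 1) | (X[:, 1] < -1)).astype(int)
models = fit_cascade(X, y)
print(predict_cascade(models, X[0]))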
Posted Content
05 Oct 2022
TL;DR: In this article, the Else-Tree classifier is proposed, which allows the classification model to learn its limitations by rejecting the decision on cases likely to yield misclassifications, and hence produces highly confident outputs.
Abstract: With advances in machine learning and artificial intelligence, learning models have been used in many decision-making and classification applications. The nature of critical applications, which require a high level of trust in prediction results, has motivated researchers to study classification algorithms that would minimize misclassification errors. In our study, we have developed a trustable machine learning methodology that allows the classification model to learn its limitations by rejecting the decision on cases likely to yield misclassifications, and hence produce highly confident outputs. This paper presents our trustable decision tree model through the development of the Else-Tree classifier algorithm. In contrast to traditional decision tree models, which use a measure of impurity to build the tree and decide class labels based on the majority of data samples at the leaf nodes, Else-Tree analyzes homogeneous regions of training data with similar attribute values and the same class label. After identifying the longest or most populated contiguous range per class, a decision node is created for that class, and the rest of the ranges are fed into the else branch to continue building the tree model. The Else-Tree model does not necessarily assign a class to conflicting or doubtful samples. Instead, it has an else-leaf node, led by the last else branch, for rejected or undecided data. The Else-Tree classifier has been evaluated and compared with other models on multiple datasets. The results show that Else-Tree can minimize the rate of misclassification.
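A one-attribute toy version of the Else-Tree idea is sketched below: find the most populated contiguous run of sorted values that shares a single class label, turn it into a decision node, send everything outside the run down the else branch, and leave what remains undecided at the else-leaf. This is an illustration under assumed data and an assumed min_count cutoff, not the published algorithm, which handles multiple attributes.

# Toy one-attribute sketch of the Else-Tree idea: the most populated
# contiguous pure-class value range becomes a decision node, the rest
# goes down the else branch, and leftovers end in an else-leaf.
import numpy as np

def largest_pure_range(x, y):
    """(lo, hi, label, count) of the most populated contiguous run of
    sorted x-values that all carry the same class label."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best, i = None, 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and ys[j + 1] == ys[i]:
            j += 1
        if best is None or (j - i + 1) > best[3]:
            best = (xs[i], xs[j], ys[i], j - i + 1)
        i = j + 1
    return best

def build_else_tree(x, y, min_count=5):
    if len(x) == 0:
        return []
    lo, hi, label, count = largest_pure_range(x, y)
    if count < min_count:
        return [("else-leaf", None)]           # remaining samples stay undecided
    keep = (x >= lo) & (x <= hi)
    return [((lo, hi), label)] + build_else_tree(x[~keep], y[~keep], min_count)

def classify(nodes, v):
    for span, label in nodes:
        if span == "else-leaf":
            return None                        # rejected / undecided sample
        if span[0] <= v <= span[1]:
            return label
    return None

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 200)
y = (x > 5).astype(int)
tree = build_else_tree(x, y)
print(classify(tree, 7.3), classify(tree, 2.1))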

Network Information
Related Topics (5)
Cluster analysis: 146.5K papers, 2.9M citations, 80% related
Artificial neural network: 207K papers, 4.5M citations, 78% related
Fuzzy logic: 151.2K papers, 2.3M citations, 77% related
The Internet: 213.2K papers, 3.8M citations, 77% related
Deep learning: 79.8K papers, 2.1M citations, 77% related
Performance Metrics
Number of papers on the topic in previous years:

Year    Papers
2023    10
2022    24
2021    101
2020    163
2019    158
2018    121