Trustable Decision Tree Model using Else-Tree Classifier

TLDR
In this article, the Else-Tree classifier is proposed, which allows the classification model to learn its limitations by rejecting the decision on cases likely to yield misclassifications, and hence to produce highly confident outputs.
Abstract
With advances in machine learning and artificial intelligence, learning models have been used in many decision-making and classification applications. The nature of critical applications, which require a high level of trust in the prediction results, has motivated researchers to study classification algorithms that would minimize misclassification errors. In our study, we have developed a trustable machine learning methodology that allows the classification model to learn its limitations by rejecting the decision on cases likely to yield misclassifications, and hence to produce highly confident outputs. This paper presents our trustable decision tree model through the development of the Else-Tree classifier algorithm. In contrast to traditional decision tree models, which use a measurement of impurity to build the tree and decide class labels based on the majority of data samples at the leaf nodes, Else-Tree analyzes homogeneous regions of training data with similar attribute values and the same class label. After identifying the longest or most populated contiguous range per class, a decision node is created for that class, and the remaining ranges are fed into the else branch to continue building the tree model. The Else-Tree model does not necessarily assign a class to conflicting or doubtful samples. Instead, it has an else-leaf node, reached through the last else branch, which collects rejected or undecided data. The Else-Tree classifier has been evaluated and compared with other models on multiple datasets. The results show that Else-Tree can minimize the rate of misclassification.
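To make the construction procedure concrete, the following is a minimal Python sketch reconstructed from the abstract's description only. The names (ElseNode, build_else_tree, min_run, REJECT), the stopping rule, and the tie-breaking between candidate ranges are assumptions for illustration, not the authors' reference implementation.

```python
# Illustrative Else-Tree-style builder, reconstructed from the abstract.
# Names, thresholds, and tie-breaking are assumptions, not the authors' code.
from dataclasses import dataclass
from typing import Optional, Sequence

REJECT = None  # value returned by the else-leaf for undecided samples


@dataclass
class ElseNode:
    feature: int                      # attribute index tested at this node
    low: float                        # inclusive lower bound of the homogeneous range
    high: float                       # inclusive upper bound of the homogeneous range
    label: int                        # class predicted when the value falls in the range
    otherwise: Optional["ElseNode"]   # the else branch; None means else-leaf


def _longest_homogeneous_run(X, y, feature):
    """Return (count, low, high, label) for the most populated contiguous
    run of samples sharing one class when sorted by the given feature."""
    order = sorted(range(len(y)), key=lambda i: X[i][feature])
    best = (0, 0.0, 0.0, -1)
    start = 0
    for i in range(1, len(order) + 1):
        if i == len(order) or y[order[i]] != y[order[start]]:
            run = order[start:i]
            if len(run) > best[0]:
                best = (len(run),
                        X[run[0]][feature], X[run[-1]][feature],
                        y[run[0]])
            start = i
    return best


def build_else_tree(X: Sequence[Sequence[float]], y: Sequence[int],
                    min_run: int = 2) -> Optional[ElseNode]:
    """Recursively peel off the largest single-class range; samples that
    cannot be covered by a run of at least `min_run` end in the else-leaf."""
    if not y:
        return None
    candidates = [(_longest_homogeneous_run(X, y, f), f)
                  for f in range(len(X[0]))]
    (count, low, high, label), feature = max(candidates)
    if count < min_run:
        return None  # else-leaf: remaining samples are left undecided
    keep = [i for i in range(len(y))
            if not (y[i] == label and low <= X[i][feature] <= high)]
    return ElseNode(feature, low, high, label,
                    build_else_tree([X[i] for i in keep],
                                    [y[i] for i in keep], min_run))


def predict(node: Optional[ElseNode], x: Sequence[float]):
    """Walk the else chain; return REJECT if no homogeneous range matches."""
    while node is not None:
        if node.low <= x[node.feature] <= node.high:
            return node.label
        node = node.otherwise
    return REJECT
```

In this reading, prediction walks the chain of decision nodes and falls through to the else-leaf, returning REJECT rather than forcing a class label, which mirrors the abstract's rejection of conflicting or doubtful samples.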

