Topic

Decision tree model

About: Decision tree model is a research topic. Over its lifetime, 2256 publications have been published on this topic, receiving 38142 citations.


Papers
Journal Article • DOI
TL;DR: Multi-scale texture better describes the texture characteristics of land cover, more effectively resolves cases in the classification results where different land-cover objects appear identical in the image, and helps to improve the classification accuracy of high-resolution imagery.
Abstract: Land-type data mining from remote sensing imagery was studied with a QUEST decision tree, taking the Dongting Lake area as the research object. First, the texture features of the gray-level co-occurrence matrix were described, and texture window sizes were selected to construct the QUEST decision tree model. Second, using the spectral and texture features of remote sensing data at different resolutions, combined with other auxiliary data, Dongting land information was extracted and land types were classified. The study reaches the following conclusions: multi-scale texture better describes the texture characteristics of land cover, more effectively resolves cases in the classification results where different land-cover objects appear identical in the image, and helps to improve the classification accuracy of high-resolution imagery.

7 citations
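
The workflow in the abstract above (gray-level co-occurrence texture computed at several window sizes and fed into a decision tree classifier) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: QUEST is not available in common Python libraries, so scikit-learn's CART-style DecisionTreeClassifier stands in for it, and the window sizes, GLCM properties, synthetic image, and labels are invented for the example.

import numpy as np
# skimage >= 0.19 spells these graycomatrix/graycoprops; earlier releases use "grey".
from skimage.feature import graycomatrix, graycoprops
from sklearn.tree import DecisionTreeClassifier

def glcm_features(window):
    # Gray-level co-occurrence texture statistics for one image window.
    glcm = graycomatrix(window, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, prop)[0, 0]
            for prop in ("contrast", "homogeneity", "energy", "correlation")]

def multiscale_features(image, row, col, window_sizes=(7, 15, 31)):
    # Concatenate GLCM statistics computed at several window sizes (multi-scale texture).
    feats = []
    for w in window_sizes:
        half = w // 2
        win = image[row - half:row + half + 1, col - half:col + half + 1]
        feats.extend(glcm_features(win))
    return feats

# Synthetic single-band image and per-pixel land-type labels, for illustration only.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
pixels = [(r, c) for r in range(16, 112, 8) for c in range(16, 112, 8)]
X = np.array([multiscale_features(image, r, c) for r, c in pixels])
y = rng.integers(0, 4, size=len(pixels))   # 4 hypothetical land types

clf = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))

In practice the feature vectors would also include the spectral bands and auxiliary layers mentioned in the abstract, not texture alone.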

Journal Article • DOI
Ed Blakey
TL;DR: It is argued that traditional complexity theory does not adequately capture the true complexity of certain non-Turing computers, and, hence, that an extension of the theory is needed in order to accommodate such machines.

7 citations

Journal Article • DOI
TL;DR: An intelligent system for classifying electrocardiograph (ECG) beat signals would play an important role in the diagnosis of cardiac arrhythmias; the objective of this work is to classify an ECG characteristic feature vector as either normal or arrhythmic.
Abstract: An intelligent system for classifying electrocardiograph (ECG) beat signals would play an important role in the diagnosis of cardiac arrhythmias. This paper employs the recently introduced C5.0 decision tree (DT) algorithm to develop a supervised ECG beat classifier. In general, decision tree algorithms have shown a remarkable ability to derive meaning from complicated or imprecise data; accordingly, they can extract patterns and detect trends that are too complex to be noticed by humans or by other computational techniques. They are nonparametric methods that make no assumptions about the data distribution or the classifier structure. This study investigates the performance of the C5.0 decision tree model with boosting on a dataset of ECG features. The boosting process significantly improves the accuracy of a C5.0 model: the algorithm builds multiple decision tree models sequentially, with the first model built in the standard way and each subsequent model focusing on the samples misclassified by its predecessor. New samples are then classified by ensembling these models, using a weighted voting procedure to combine the separate decisions into one overall choice. The objective of this work is to classify an ECG characteristic feature vector as either normal or arrhythmic. The classification performance of the boosted C5.0 DTs is evaluated and compared with that of a multilayer feed-forward neural network. Experimental results show that the boosted C5.0 DT model achieved remarkable performance, reaching 99% classification accuracy on both the training and testing subsets.

7 citations
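
The boosting scheme summarized above (trees fitted one after another, each concentrating on the samples its predecessor misclassified, with predictions combined by weighted voting) can be sketched with scikit-learn. C5.0's own boosting is not exposed by scikit-learn, so AdaBoost over CART trees is used here as a stand-in for the same idea; the synthetic "ECG feature vectors", labels, and all parameter values are assumptions made for the example.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))                # hypothetical ECG beat feature vectors
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # 0 = normal, 1 = arrhythmia (synthetic rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Sequentially fitted trees with sample reweighting and weighted voting.
# In scikit-learn >= 1.2 the keyword is `estimator`; older releases call it `base_estimator`.
booster = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=3),
    n_estimators=50,
    random_state=0,
)
booster.fit(X_tr, y_tr)
print("train accuracy:", booster.score(X_tr, y_tr))
print("test accuracy:", booster.score(X_te, y_te))

The printed accuracies refer only to this synthetic data, not to the 99% figure reported in the paper.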

Proceedings Article
28 Jun 2001
TL;DR: In this paper, the authors show that the computational overhead of cross-validation can be reduced significantly by integrating the cross-validation with the normal decision tree induction process; they discuss how existing decision tree algorithms can be adapted to this aim and provide an analysis of the speedups these adaptations may yield.
Abstract: Cross-validation is a useful and generally applicable technique often employed in machine learning, including decision tree induction. An important disadvantage of straightforward implementation of the technique is its computational overhead. In this paper we show that, for decision trees, the computational overhead of cross-validation can be reduced significantly by integrating the cross-validation with the normal decision tree induction process. We discuss how existing decision tree algorithms can be adapted to this aim, and provide an analysis of the speedups these adaptations may yield. We identify a number of parameters that influence the obtainable speedups, and validate and refine our analysis with experiments on a variety of data sets with two different implementations. Besides cross-validation, we also briefly explore the usefulness of these techniques for bagging. We conclude with some guidelines concerning when these optimizations should be considered.

7 citations
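
For context, the straightforward implementation whose overhead the paper reduces looks like the sketch below: the tree is re-induced from scratch for every fold, plus once more on the full data. The paper's integrated cross-validation algorithm is not shown here, and the dataset and parameters are illustrative assumptions only.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# 10-fold cross-validation: ten independent tree inductions, each on 90% of the data.
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10)
print("per-fold accuracy:", np.round(scores, 3))
print("mean accuracy:", round(scores.mean(), 3))

# A final model is then typically induced once more on the full training set.
final_model = DecisionTreeClassifier(random_state=0).fit(X, y)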

Journal Article • DOI
TL;DR: It is shown that partition arguments may yield bounds that are exponentially far from the true communication complexity, and it is proved that there exists a 3-argument function f whose communication complexity is Ω(n), while partition arguments can only yield an Ω(log n) lower bound for nondeterministic communication complexity.

7 citations


Network Information
Related Topics (5)
Cluster analysis
146.5K papers, 2.9M citations
80% related
Artificial neural network
207K papers, 4.5M citations
78% related
Fuzzy logic
151.2K papers, 2.3M citations
77% related
The Internet
213.2K papers, 3.8M citations
77% related
Deep learning
79.8K papers, 2.1M citations
77% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    10
2022    24
2021    101
2020    163
2019    158
2018    121