Instance-Based Learning Algorithms
TLDR
This paper extends the nearest neighbor algorithm, which has large storage requirements, and describes how those requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy.
Abstract
Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances. This approach extends the nearest neighbor algorithm, which has large storage requirements. We describe how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy. While the storage-reducing algorithm performs well on several real-world databases, its performance degrades rapidly with the level of attribute noise in training instances. Therefore, we extended it with a significance test to distinguish noisy instances. This extended algorithm's performance degrades gracefully with increasing noise levels and compares favorably with a noise-tolerant decision tree algorithm.
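The storage-reducing idea the abstract describes can be sketched minimally: predict with the nearest stored neighbor, and keep a training instance only when the instances stored so far misclassify it. This is a sketch in the spirit of the paper's storage-reducing algorithm, not its actual code; the function names and toy data are illustrative.

```python
import math

def distance(a, b):
    # Euclidean distance over numeric attribute vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_label(memory, x):
    # 1-nearest-neighbor prediction from the stored instances
    instance, label = min(memory, key=lambda m: distance(m[0], x))
    return label

def train_storage_reducing(stream):
    # Store an instance only when current memory misclassifies it,
    # so instances that are already predicted correctly are discarded.
    memory = []
    for x, y in stream:
        if not memory or nearest_label(memory, x) != y:
            memory.append((x, y))
    return memory

data = [((0.0, 0.0), "a"), ((0.1, 0.1), "a"),
        ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
mem = train_storage_reducing(data)
```

On this toy stream only two of the four instances are retained, yet they still classify nearby queries correctly, which is the storage/accuracy trade-off the abstract summarizes.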
Citations
Journal ArticleDOI
Retrieval, reuse, revision and retention in case-based reasoning
Ramon López de Mántaras, David McSherry, Derek Bridge, David B. Leake, Barry Smyth, Susan Craw, Boi Faltings, Mary Lou Maher, Michael T. Cox, Kenneth D. Forbus, Mark T. Keane, Agnar Aamodt, Ian Watson +12 more
TL;DR: The cognitive science foundations of CBR and its relationship to analogical reasoning are examined, and a representative selection of CBR research from the past few decades on aspects of retrieval, reuse, revision and retention is reviewed.
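The four-step cycle named in the title (retrieve, reuse, revise, retain) can be sketched as a loop over a case base. This is a generic sketch, not code from any surveyed system; the helper callables (`similarity`, `adapt`, `evaluate`) and the toy case base are illustrative assumptions.

```python
def cbr_solve(case_base, problem, similarity, adapt, evaluate):
    # Retrieve: find the most similar past case.
    retrieved = max(case_base, key=lambda c: similarity(c["problem"], problem))
    # Reuse: adapt its solution to the new problem.
    solution = adapt(retrieved["solution"], problem)
    # Revise: evaluate (and possibly repair) the proposed solution.
    solution = evaluate(problem, solution)
    # Retain: store the solved case for future reuse.
    case_base.append({"problem": problem, "solution": solution})
    return solution

# Toy case base mapping numeric problems to solutions.
cases = [{"problem": 1, "solution": 2}, {"problem": 5, "solution": 10}]
answer = cbr_solve(cases, 4,
                   similarity=lambda a, b: -abs(a - b),
                   adapt=lambda sol, p: sol,
                   evaluate=lambda p, sol: sol)
```

Note that the case base grows with every solved problem, which is why retention policies are a research topic in their own right.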
Journal ArticleDOI
Cost-sensitive classification: empirical evaluation of a hybrid genetic decision tree induction algorithm
TL;DR: This paper introduces ICET, a new algorithm for cost-sensitive classification that uses a genetic algorithm to evolve a population of biases for a decision tree induction algorithm and establishes that ICET performs significantly better than its competitors.
Practical feature subset selection for machine learning
Mark Hall, Lloyd A. Smith +1 more
TL;DR: A new feature selection algorithm is described that uses a correlation-based heuristic to determine the "goodness" of feature subsets, and its effectiveness is evaluated with three common machine learning algorithms.
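The correlation-based "goodness" heuristic summarized above can be sketched with the standard CFS merit formula: a subset of k features scores higher when its features correlate strongly with the class and weakly with each other. The function name and example values here are illustrative, not taken from the cited paper.

```python
import math

def subset_merit(k, avg_feature_class_corr, avg_feature_feature_corr):
    # CFS-style merit: the numerator rewards features correlated
    # with the class; inter-feature redundancy inflates the
    # denominator and lowers the score.
    return (k * avg_feature_class_corr) / math.sqrt(
        k + k * (k - 1) * avg_feature_feature_corr)
```

For example, a 3-feature subset with average class correlation 0.6 scores higher when its features are nearly independent (average inter-correlation 0.2) than when they are redundant (0.8).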
Proceedings ArticleDOI
Learning to detect malicious executables in the wild
TL;DR: A fielded application for detecting malicious executables in the wild is described using techniques from machine learning and data mining; boosted decision trees outperformed the other methods with an area under the ROC curve of 0.996.
Journal ArticleDOI
2008 Special Issue: Training neural network classifiers for medical decision making: The effects of imbalanced datasets on classification performance
Maciej A. Mazurowski, Piotr A. Habas, Jacek M. Zurada, Joseph Y. Lo, Jay A. Baker, Georgia D. Tourassi +5 more
TL;DR: The results show that classifier performance deteriorates with even modest class imbalance in the training data, and that BP is generally preferable to PSO for imbalanced training data, especially with small data samples and large numbers of features.
References
Journal ArticleDOI
Classification and Regression Trees
Journal ArticleDOI
Induction of Decision Trees
TL;DR: This paper describes an approach to synthesizing decision trees that has been used in a variety of systems, presents one such system, ID3, in detail, and discusses a reported shortcoming of the basic algorithm.
MonographDOI
Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations
Book
Classification and regression trees
TL;DR: The methodology used to construct tree-structured rules is the focus of this monograph, which covers the use of trees as a data analysis method and, in a more mathematical framework, proves some of their fundamental properties.
Journal ArticleDOI
Nearest neighbor pattern classification
Thomas M. Cover, Peter E. Hart +1 more
TL;DR: The nearest neighbor decision rule assigns to an unclassified sample point the classification of the nearest of a set of previously classified points; in this sense, half of the classification information in an infinite sample set is contained in the nearest neighbor.
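The decision rule this summary describes is small enough to state directly. This is a generic sketch of the rule, not the authors' code; the point set and labels are illustrative.

```python
import math

def nearest_neighbor_rule(classified, query):
    # Assign the unclassified query the class of its single nearest
    # previously classified point.
    point, cls = min(classified, key=lambda pc: math.dist(pc[0], query))
    return cls

points = [((0, 0), "neg"), ((3, 3), "pos")]
```

Queries near (0, 0) are labeled "neg" and queries near (3, 3) are labeled "pos", with no model beyond the stored points themselves.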