Instance-Based Learning Algorithms
TL;DR
This paper extends the nearest neighbor algorithm, which has large storage requirements, and describes how those requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy.

Abstract
Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances. This approach extends the nearest neighbor algorithm, which has large storage requirements. We describe how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy. While the storage-reducing algorithm performs well on several real-world databases, its performance degrades rapidly with the level of attribute noise in training instances. Therefore, we extended it with a significance test to distinguish noisy instances. This extended algorithm's performance degrades gracefully with increasing noise levels and compares favorably with a noise-tolerant decision tree algorithm.
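As a concrete illustration of the storage-reduction idea described in the abstract, here is a minimal Python sketch: instead of storing every training instance, the learner keeps only the instances that its current memory would misclassify. The function names and the Euclidean distance choice are illustrative assumptions, not the authors' code.

```python
import math

def nearest(memory, x):
    # Return the stored (point, label) pair closest to x by Euclidean distance.
    return min(memory, key=lambda m: math.dist(m[0], x))

def train_storage_reducing(training_data):
    """Storage-reducing instance-based learner (a sketch): an instance is
    stored only if the instances saved so far would misclassify it."""
    memory = [training_data[0]]            # always keep the first instance
    for x, y in training_data[1:]:
        _, pred = nearest(memory, x)
        if pred != y:                      # misclassified -> worth storing
            memory.append((x, y))
    return memory

def classify(memory, x):
    # Predict the label of the single nearest stored instance.
    return nearest(memory, x)[1]
```

On a toy set such as `[((0,0),'a'), ((0,1),'a'), ((5,5),'b'), ((5,6),'b')]`, only one instance per cluster ends up stored, while classification of nearby points is unaffected. The paper's noise-tolerant extension would additionally apply a significance test before trusting a stored instance, which this sketch omits.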
Citations
Journal ArticleDOI
Classifier and feature set ensembles for web page classification
TL;DR: Experimental results indicate that feature selection and ensemble learning can enhance the predictive performance of classifiers in web page classification; the Bagging and Random Subspace ensemble methods, combined with correlation-based and consistency-based feature selection, obtain the best accuracy rates.
Journal ArticleDOI
Clustering algorithm selection by meta-learning systems
TL;DR: Two contributions are explored: a new approach to characterizing clustering problems based on the similarity among objects, and new methods for combining internal indices to rank algorithms by their performance on those problems.
Journal ArticleDOI
Prediction of flexible/rigid regions from protein sequences using k-spaced amino acid pairs
TL;DR: A new sequence representation that uses k-spaced amino acid pairs is shown to be the most effective for predicting the flexible/rigid regions of protein sequences.
Proceedings Article
Joint Processing and Discriminative Training for Letter-to-Phoneme Conversion
TL;DR: The key idea is online discriminative training, which updates parameters according to a comparison of the current system output to the desired output, allowing the model to train all of its components together.
Proceedings ArticleDOI
SciMATE: A Novel MapReduce-Like Framework for Multiple Scientific Data Formats
Yi Wang, Wei Jiang, Gagan Agrawal, et al.
TL;DR: This work presents SciMATE, a framework that allows scientific data in different formats to be processed with a MapReduce-like API; it is based on the MATE system developed at Ohio State.
References
Journal ArticleDOI
Induction of Decision Trees
TL;DR: This paper describes an approach to synthesizing decision trees that has been used in a variety of systems, presents one such system, ID3, in detail, and discusses a reported shortcoming of the basic algorithm.
MonographDOI
Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations
Book
Classification and regression trees
TL;DR: This monograph focuses on the methodology used to construct tree-structured rules, covering the use of trees as a data analysis method and, in a more mathematical framework, proving some of their fundamental properties.
Journal ArticleDOI
Nearest neighbor pattern classification
Thomas M. Cover, Peter E. Hart
TL;DR: The nearest neighbor decision rule assigns to an unclassified sample point the classification of the nearest of a set of previously classified points; in this sense, half the classification information in an infinite sample set is contained in the nearest neighbor.
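The decision rule summarized above can be stated in a few lines of Python. This is a minimal illustration of the rule itself, not the authors' code; Euclidean distance is an assumed metric.

```python
import math

def nn_classify(labeled_points, query):
    """Nearest neighbor rule: assign to the query point the label of the
    closest previously classified point, breaking ties arbitrarily."""
    _, label = min(labeled_points, key=lambda p: math.dist(p[0], query))
    return label
```

For example, with stored points `[((0, 0), 'x'), ((10, 10), 'y')]`, a query at `(1, 1)` is labeled `'x'` because `(0, 0)` is its nearest stored neighbor.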