Instance-Based Learning Algorithms
TL;DR
This paper extends the nearest neighbor algorithm, which has large storage requirements, and describes how those requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy.
Abstract
Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances. This approach extends the nearest neighbor algorithm, which has large storage requirements. We describe how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy. While the storage-reducing algorithm performs well on several real-world databases, its performance degrades rapidly with the level of attribute noise in training instances. Therefore, we extended it with a significance test to distinguish noisy instances. This extended algorithm's performance degrades gracefully with increasing noise levels and compares favorably with a noise-tolerant decision tree algorithm.
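The storage-reducing idea described in the abstract can be sketched in a few lines: classify each incoming training instance with the instances stored so far, and retain it only when it is misclassified. This is a minimal illustrative sketch in that spirit, not the authors' exact algorithm; the function names and the toy data are assumptions for the example.

```python
import math

def nearest_label(stored, x):
    """Predict the class of x from the single nearest stored instance
    (Euclidean distance via math.dist)."""
    _, label = min(stored, key=lambda s: math.dist(s[0], x))
    return label

def train_storage_reduced(instances):
    """Store a training instance only if the current store misclassifies it,
    so typical instances near already-stored ones are discarded."""
    stored = [instances[0]]                 # seed with the first instance
    for x, y in instances[1:]:
        if nearest_label(stored, x) != y:   # misclassified -> keep it
            stored.append((x, y))
    return stored

# Toy data: two clusters, labels "A" and "B" (illustrative only).
data = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
        ((1.0, 1.0), "B"), ((0.9, 1.1), "B"),
        ((0.2, 0.1), "A")]
model = train_storage_reduced(data)
# Only 2 of the 5 instances are stored; predictions still use nearest_label.
```

The noise-tolerance extension mentioned in the abstract would additionally track each stored instance's classification record and drop instances whose accuracy fails a significance test; that bookkeeping is omitted here for brevity.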
Citations
Proceedings Article
Genome scale prediction of protein functional class from sequence using data mining
TL;DR: Biologically interpretable rules are identified that can predict protein function even in the absence of identifiable sequence homology in Mycobacterium tuberculosis with an estimated accuracy of 60-80% and give insight into the evolutionary history of the organism.
Proceedings Article
Efficient Mining of Contrast Patterns and Their Applications to Classification
TL;DR: This paper examines various kinds of contrast patterns and investigates efficient pattern mining techniques and discusses how to exploit patterns to construct effective classifiers.
Journal Article
Automated classification of hand movements using tunable-Q wavelet transform based filter-bank with surface electromyogram signals
Anurag Nishad, Abhay Upadhyay, Ram Bilas Pachori, U. Rajendra Acharya +5 more
TL;DR: A tunable-Q wavelet transform based filter-bank is applied to decompose the cross-covariance of sEMG (csEMG) signals for classification of basic hand movements using Kraskov entropy features.
Journal Article
Otolith shape and size: The importance of age when determining indices for fish-stock separation
TL;DR: This study suggests that methods of stock discrimination based on early incremental growth are likely to be effective, and that automated classification techniques will show little benefit in supplementing early growth information with shape indices derived from mature outlines.
References
Journal Article
Classification and Regression Trees.
Journal Article
Induction of Decision Trees
TL;DR: This paper describes an approach to synthesizing decision trees that has been used in a variety of systems, presents one such system, ID3, in detail, and discusses a reported shortcoming of the basic algorithm.
Monograph
Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations
Book
Classification and regression trees
TL;DR: This monograph focuses on the methodology used to construct tree-structured rules, covering the use of trees as a data analysis method and, in a more mathematical framework, proving some of their fundamental properties.
Journal Article
Nearest neighbor pattern classification
Thomas M. Cover, Peter E. Hart +1 more
TL;DR: The nearest neighbor decision rule assigns to an unclassified sample point the classification of the nearest of a set of previously classified points; in this sense, half the classification information in an infinite sample set is contained in the nearest neighbor.