Open Access · Journal Article · DOI

Instance-Based Learning Algorithms

TLDR
This paper presents instance-based learning, which extends the nearest neighbor algorithm (an approach with large storage requirements), and describes how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy.
Abstract
Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances. This approach extends the nearest neighbor algorithm, which has large storage requirements. We describe how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy. While the storage-reducing algorithm performs well on several real-world databases, its performance degrades rapidly with the level of attribute noise in training instances. Therefore, we extended it with a significance test to distinguish noisy instances. This extended algorithm's performance degrades gracefully with increasing noise levels and compares favorably with a noise-tolerant decision tree algorithm.
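
The storage-reduction idea the abstract describes can be illustrated with a short sketch: process training instances incrementally, store an instance only when the instances kept so far misclassify it, and classify new points by their nearest stored instance. The Python below is a minimal illustration under assumed numeric attributes and Euclidean distance; the class name and method names are illustrative, not taken from the paper.

```python
import numpy as np

class IB2Classifier:
    """Minimal sketch of the storage-reduction idea: an incoming training
    instance is stored only if the instances kept so far misclassify it.
    Numeric attributes and Euclidean distance are assumed for illustration."""

    def __init__(self):
        self.X = []  # stored instances
        self.y = []  # their class labels

    def _predict_one(self, x):
        # 1-nearest-neighbor prediction over the stored instances
        dists = [np.linalg.norm(x - xi) for xi in self.X]
        return self.y[int(np.argmin(dists))]

    def partial_fit(self, X, y):
        # Process training instances incrementally (an incremental learning task)
        for xi, yi in zip(np.asarray(X, dtype=float), y):
            if not self.X or self._predict_one(xi) != yi:
                self.X.append(xi)  # keep only instances the current set misclassifies
                self.y.append(yi)
        return self

    def predict(self, X):
        return [self._predict_one(xi) for xi in np.asarray(X, dtype=float)]
```

On clean data this typically retains only a fraction of the training set; the noise-tolerant extension mentioned in the abstract additionally discards stored instances whose classification records fail a significance test.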



Citations
Journal Article · DOI

Evaluating and selecting features via information theoretic lower bounds of feature inner correlations for high-dimensional data

TL;DR: This paper introduces two lower bounds with very simple forms for feature redundancy and complementarity, and verifies that they are closer to the optima than the existing lower bounds applied by some state-of-the-art information-theoretic methods.

Author Profiling for English and Arabic Emails

TL;DR: This paper describes the machine learning setup used to produce classifiers for the different author traits and reports experimental results, which are promising for most of the traits examined.
Book

Machine Reconstruction of Human Control Strategies

Dorian Šuc
TL;DR: Reconstructing human control skill involves machine learning from the operator's execution traces in order to induce a model of the operator's skill.
Book

Knowledge acquisition and learning by experience—the role of case-specific knowledge

Agnar Aamodt
TL;DR: This chapter presents a framework for integrating KA and ML methods within a total knowledge modeling cycle, favoring an iterative rather than a top-down approach to system development.
Journal Article · DOI

Keypoint selection for efficient bag-of-words feature generation and effective image classification

TL;DR: Experiments on the Caltech 101, Caltech 256, and PASCAL 2007 datasets demonstrate that performing keypoint selection with IKS1 and IKS2 to generate both the BoW and spatial-based BoW features allows the support vector machine (SVM) classifier to achieve better classification accuracy than the baseline features without keypoint selection.
References
Journal Article · DOI

Induction of Decision Trees

J. R. Quinlan
25 Mar 1986
TL;DR: This paper summarizes an approach to synthesizing decision trees that has been used in a variety of systems, describes one such system, ID3, in detail, and discusses a reported shortcoming of the basic algorithm.
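
For context, ID3 grows a tree by repeatedly splitting on the attribute with the highest information gain. The snippet below is a hedged sketch of that selection criterion only, not of the full ID3 system; the function names and the attribute-indexing convention are assumptions for illustration.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a collection of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attribute):
    """Entropy reduction obtained by partitioning the examples on one attribute."""
    n = len(labels)
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attribute], []).append(label)
    remainder = sum(len(part) / n * entropy(part) for part in partitions.values())
    return entropy(labels) - remainder
```
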
Book

Classification and regression trees

Leo Breiman
TL;DR: The methodology used to construct tree-structured rules is the focus of this monograph, which covers the use of trees as a data analysis method and, in a more mathematical framework, proves some of their fundamental properties.
Journal Article · DOI

Nearest neighbor pattern classification

TL;DR: The nearest neighbor decision rule assigns to an unclassified sample point the classification of the nearest of a set of previously classified points; since its asymptotic error rate is at most twice the Bayes rate, it may be said that half the classification information in an infinite sample set is contained in the nearest neighbor.
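
As a concrete rendering of the rule described above, the sketch below assigns an unclassified point the label of its nearest stored neighbor. It assumes numeric features and Euclidean distance; the twice-the-Bayes-rate bound is an asymptotic property of the rule, not something the code checks.

```python
import numpy as np

def nearest_neighbor_classify(x, stored_points, stored_labels):
    """Return the label of the previously classified point nearest to x."""
    diffs = np.asarray(stored_points, dtype=float) - np.asarray(x, dtype=float)
    distances = np.linalg.norm(diffs, axis=1)  # Euclidean distance to each stored point
    return stored_labels[int(np.argmin(distances))]
```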