Open Access Journal Article

Instance-Based Learning Algorithms

TL;DR
This paper extends the nearest neighbor algorithm, which has large storage requirements, and describes how those requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy.
Abstract
Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances. This approach extends the nearest neighbor algorithm, which has large storage requirements. We describe how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy. While the storage-reducing algorithm performs well on several real-world databases, its performance degrades rapidly with the level of attribute noise in training instances. Therefore, we extended it with a significance test to distinguish noisy instances. This extended algorithm's performance degrades gracefully with increasing noise levels and compares favorably with a noise-tolerant decision tree algorithm.
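To make the storage-reduction idea concrete, the sketch below shows a toy instance-based learner that classifies each incoming training instance by its nearest stored instance and retains only the instances it misclassifies. This is an illustrative approximation in the spirit of the storage-reducing algorithm described above, not the authors' exact procedure; the class name, the Euclidean distance over numeric attributes, and the toy data are assumptions made for the example.

```python
import math


class ReducedStorageNN:
    """Toy instance-based learner: an instance is stored only when the
    current set of stored instances misclassifies it, which keeps
    storage requirements well below those of plain nearest neighbor."""

    def __init__(self):
        self.stored = []  # list of (attribute_vector, class_label)

    def _distance(self, a, b):
        # Euclidean distance over numeric attributes (an assumption here).
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def predict(self, x):
        if not self.stored:
            return None
        nearest = min(self.stored, key=lambda s: self._distance(s[0], x))
        return nearest[1]

    def fit_incremental(self, x, label):
        # Store the instance only if the current prediction is wrong
        # (or nothing has been stored yet).
        if self.predict(x) != label:
            self.stored.append((x, label))


if __name__ == "__main__":
    learner = ReducedStorageNN()
    training_stream = [((0.1, 0.2), "neg"), ((0.9, 0.8), "pos"),
                       ((0.2, 0.1), "neg"), ((0.8, 0.9), "pos")]
    for attrs, label in training_stream:
        learner.fit_incremental(attrs, label)
    # Only the misclassified instances were kept, yet prediction still works.
    print(len(learner.stored), learner.predict((0.85, 0.95)))
```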


Citations
Journal Article

A study of the effect of different types of noise on the precision of supervised learning techniques

TL;DR: Naïve Bayes appears to be the most robust algorithm, and SMO the least, relative to the other two techniques; however, the underlying empirical behavior of the techniques is more complex and varies depending on the noise type and the specific data set being processed.
Journal Article

A survey on activity recognition and behavior understanding in video surveillance

TL;DR: This paper provides an overview of benchmark databases for activity recognition, a market analysis of video surveillance, and future research directions for this application.
Proceedings Article

Temporal sequence learning and data reduction for anomaly detection

TL;DR: This paper presents an approach that transforms temporal sequences of discrete, unordered observations into a metric space via a similarity measure that encodes intra-attribute dependencies, and demonstrates that it can accurately differentiate the profiled user from alternative users when the available features encode sufficient information.
Journal Article

Selected Techniques for Data Mining in Medicine

TL;DR: This paper presents selected data mining techniques that can be applied in medicine, in particular some machine learning techniques, including the mechanisms that make them well suited to the analysis of medical databases.
Journal Article

A Nearest Hyperrectangle Learning Method

TL;DR: The results in these domains support the claim that NGE (nested generalized exemplar) theory can be used to create compact representations with excellent predictive accuracy.
References
Journal Article

Induction of Decision Trees

J. R. Quinlan
25 Mar 1986
TL;DR: This paper describes an approach to synthesizing decision trees that has been used in a variety of systems, presents one such system, ID3, in detail, and discusses a reported shortcoming of the basic algorithm.
Book

Classification and regression trees

Leo Breiman
TL;DR: This monograph focuses on the methodology used to construct tree-structured rules, covering the use of trees as a data analysis method and, in a more mathematical framework, proving some of their fundamental properties.
Journal Article

Nearest neighbor pattern classification

TL;DR: The nearest neighbor decision rule assigns to an unclassified sample point the classification of the nearest of a set of previously classified points; in the large-sample limit, the probability of error of this rule is at most twice the Bayes probability of error, so it may be said that half the classification information in an infinite sample set is contained in the nearest neighbor.
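A minimal sketch of this decision rule follows; the Euclidean metric, the function name, and the small sample set are illustrative assumptions, not taken from the cited paper.

```python
import math


def nearest_neighbor_rule(classified_points, query):
    """Assign to the unclassified query point the class of the nearest
    previously classified point (the 1-NN decision rule)."""
    _, label = min(classified_points,
                   key=lambda pl: math.dist(pl[0], query))  # Euclidean distance
    return label


# Illustrative sample set: two classified points and one unclassified query.
samples = [((0.0, 0.0), "A"), ((1.0, 1.0), "B")]
print(nearest_neighbor_rule(samples, (0.2, 0.3)))  # -> A
```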