Open Access · Journal Article · DOI

Learning in the presence of concept drift and hidden contexts

TL;DR
The paper describes a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear, including a heuristic that constantly monitors the system's behavior.
Abstract
On-line learning in domains where the target concept depends on some hidden context poses serious problems: a changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and reusing them when a previous context reappears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions, such as different levels of noise and different extent and rate of concept drift.
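The three ideas in the abstract — a trusted window, a store of old concept descriptions, and a monitoring heuristic — can be sketched in a few lines. This is a minimal illustration under our own assumptions (the class name, thresholds, and the toy majority-label "concept" are hypothetical), not the authors' FLORA algorithms:

```python
from collections import deque

class WindowedDriftLearner:
    """Illustrative sketch of the abstract's three ideas; names and
    thresholds are our assumptions, not the authors' FLORA algorithms."""

    def __init__(self, window_size=50, drift_threshold=0.6):
        self.window = deque(maxlen=window_size)  # (1) window of trusted examples
        self.store = []                          # (2) stored concept descriptions
        self.recent = deque(maxlen=20)           # correctness history for the heuristic
        self.threshold = drift_threshold
        self.concept = None                      # toy concept: the majority label

    def predict(self, x):
        return self.concept

    def observe(self, x, y):
        # (3) heuristic: test on each example before learning from it
        if self.concept is not None:
            self.recent.append(1 if self.predict(x) == y else 0)
        full = len(self.recent) == self.recent.maxlen
        acc = sum(self.recent) / len(self.recent) if self.recent else 1.0
        if full and acc < self.threshold:
            # suspected drift: archive the old concept and drop the window;
            # a fuller version would also re-score stored concepts in case a
            # previous context has reappeared
            self.store.append(self.concept)
            self.window.clear()
            self.recent.clear()
        self.window.append((x, y))
        labels = [label for _, label in self.window]
        self.concept = max(set(labels), key=labels.count)
```

Feeding such a learner a stream whose labels flip mid-way makes it archive the old concept and retrain on a fresh window, mimicking the window-shrinking reaction to drift the abstract describes.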



Citations
Journal Article · DOI

Classifier Technology and the Illusion of Progress

David J. Hand
01 Feb 2006
TL;DR: The author argues that simple methods typically yield performance almost as good as that of more sophisticated methods, to the extent that the difference may be swamped by other sources of uncertainty that are generally not considered in the classical supervised classification paradigm.
Journal Article · DOI

Learning drifting concepts: Example selection vs. example weighting

TL;DR: This paper proposes several methods for handling concept drift with support vector machines, based on example selection and example weighting respectively, and shows that they can select an appropriate window size in a robust way.
Proceedings Article · DOI

Dynamic weighted majority: a new ensemble method for tracking concept drift

TL;DR: Results suggest that the ensemble method learns drifting concepts almost as well as the base algorithms learn each concept individually; these are the best overall results to date for these problems.
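The weight update at the heart of dynamic weighted majority can be sketched briefly. This is a simplified, hypothetical helper (the function name is ours, and it omits the paper's expert creation and update-period mechanics): wrong experts are discounted by a factor beta, weights are normalised against the best expert, and experts falling below a threshold theta are flagged for pruning.

```python
def dwm_update(predictions, weights, y, beta=0.5, theta=0.01):
    """Simplified dynamic-weighted-majority weight update: experts that
    predicted wrongly are discounted by beta, weights are normalised so
    the best expert has weight 1, and a keep-mask flags experts whose
    weight stays at or above the pruning threshold theta."""
    new_w = [w * (beta if p != y else 1.0)
             for p, w in zip(predictions, weights)]
    top = max(new_w) if new_w and max(new_w) > 0 else 1.0
    new_w = [w / top for w in new_w]
    keep = [w >= theta for w in new_w]
    return new_w, keep
```

For two equally weighted experts where only the second predicts the true label, the first expert's weight is halved relative to the second, so persistent drift quickly marginalises experts trained on the old concept.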
Journal Article · DOI

On evaluating stream learning algorithms

TL;DR: This paper proposes a general framework for assessing predictive stream learning algorithms, defends the use of prequential error with forgetting mechanisms to provide reliable error estimators, and proves that, in stationary data and for consistent learning algorithms, the holdout estimator, the prequential error, and the prequential error estimated over a sliding window or using fading factors all converge to the Bayes error.
Journal Article · DOI

The Impact of Diversity on Online Ensemble Learning in the Presence of Concept Drift

TL;DR: A new categorization for concept drift is presented, separating drifts according to different criteria into mutually exclusive and nonheterogeneous categories, and it is shown that, before the drift, ensembles with less diversity obtain lower test errors, even though high diversity is more important for more severe drifts.
References
Proceedings Article · DOI

A theory of the learnable

TL;DR: This paper regards learning as the phenomenon of knowledge acquisition in the absence of explicit programming, and gives a precise methodology for studying this phenomenon from a computational viewpoint.
Journal Article · DOI

Instance-Based Learning Algorithms

TL;DR: This paper extends the nearest neighbor algorithm, which has large storage requirements, and describes how those requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy.
Book

Machine Learning: An Artificial Intelligence Approach

TL;DR: This book contains tutorial overviews and research papers on contemporary trends in the area of machine learning viewed from an AI perspective, including learning from examples, modeling human learning strategies, knowledge acquisition for expert systems, learning heuristics, discovery systems, and conceptual data analysis.
Journal Article · DOI

Learnability and the Vapnik-Chervonenkis dimension

TL;DR: This paper shows that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.
Journal Article · DOI

Queries and Concept Learning

TL;DR: This work considers the problem of using queries to learn an unknown concept; several types of queries are described and studied: membership, equivalence, subset, superset, disjointness, and exhaustiveness queries.