SciSpace (formerly Typeset)

Active learning (machine learning)

About: Active learning (machine learning) is a research topic. Over its lifetime, 13,164 publications have been published within this topic, receiving 566,638 citations. The topic is also known as: active learning algorithm.
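The core idea of the topic can be sketched as a pool-based uncertainty-sampling loop: a model is fit on a small labeled set, then repeatedly queries an oracle for the label of the unlabeled point it is least certain about. The toy 1-D data, the midpoint-threshold "model", and the distance-to-boundary uncertainty score below are illustrative assumptions, not taken from any of the papers listed on this page.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D two-class pool: class 0 near -2, class 1 near +2.
X_pool = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])
y_pool = np.array([0] * 100 + [1] * 100)

def fit_threshold(X, y):
    """Tiny stand-in 'model': threshold at the midpoint of the class means."""
    return (X[y == 0].mean() + X[y == 1].mean()) / 2

def uncertainty(X, threshold):
    """Distance to the decision boundary; smaller means more uncertain."""
    return np.abs(X - threshold)

# Start with one labeled example per class; the rest are 'unlabeled'.
labeled = [0, 100]
unlabeled = [i for i in range(200) if i not in labeled]

for _ in range(10):
    t = fit_threshold(X_pool[labeled], y_pool[labeled])
    # Query the pool point closest to the current decision boundary.
    scores = uncertainty(X_pool[unlabeled], t)
    pick = unlabeled[int(np.argmin(scores))]
    labeled.append(pick)          # the oracle reveals y_pool[pick]
    unlabeled.remove(pick)

t = fit_threshold(X_pool[labeled], y_pool[labeled])
acc = np.mean((X_pool > t).astype(int) == y_pool)
print(f"labels used: {len(labeled)}, accuracy: {acc:.2f}")
```

The point of the loop is label efficiency: only 12 of the 200 labels are ever requested, yet the learner selects the most informative ones near the boundary.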


Open access | Journal Article | DOI: 10.1023/A:1022627411411
Corinna Cortes, Vladimir Vapnik (1 institution)
15 Sep 1995 - Machine Learning
Abstract: The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. Here we extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.

Topics: Feature learning (63%), Active learning (machine learning) (62%), Feature vector (62%)

35,157 Citations
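The soft-margin extension described in the abstract (tolerating errors on non-separable data) can be sketched with a tiny stochastic subgradient solver. The Pegasos-style update, the Gaussian toy data, and the omission of a bias term (the blobs are centred on the origin) are simplifying assumptions for illustration, not the paper's original optimization procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two overlapping Gaussian blobs: not separable without errors.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)),
               rng.normal(+1.0, 1.0, (100, 2))])
y = np.array([-1] * 100 + [+1] * 100)

# Soft-margin linear SVM objective:
#   minimize (lam/2)*||w||^2 + mean(max(0, 1 - y_i * (w . x_i)))
# trained with the Pegasos stochastic subgradient method.
lam = 0.01
w = np.zeros(2)
for t in range(1, 3001):
    i = rng.integers(len(X))
    eta = 1.0 / (lam * t)               # decaying Pegasos step size
    if y[i] * (X[i] @ w) < 1:           # hinge active: point violates the margin
        w = (1 - eta * lam) * w + eta * y[i] * X[i]
    else:                               # only the regularizer contributes
        w = (1 - eta * lam) * w

acc = np.mean(np.sign(X @ w) == y)
print(f"training accuracy: {acc:.2f}")
```

Because the classes overlap, no separating hyperplane exists; the hinge loss lets the learned boundary accept a few margin violations rather than failing outright, which is exactly the non-separable case the paper addresses.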

Open access | Journal Article
Abstract: Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from

33,540 Citations
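The "ease of use" and "API consistency" the abstract emphasizes come down to every estimator sharing the same fit/predict/score interface. A minimal end-to-end workflow, using a dataset bundled with the library so nothing is downloaded:

```python
# Minimal scikit-learn workflow: load a bundled dataset, split it,
# fit an estimator, and score it via the uniform fit/predict/score API.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000)   # any estimator exposes the same API
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

Swapping `LogisticRegression` for any other classifier (e.g. `SVC` or `RandomForestClassifier`) requires changing only the constructor line, which is the design choice that made the package accessible to non-specialists.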

Open access | Book
25 Oct 1999
Abstract: Data Mining: Practical Machine Learning Tools and Techniques offers a thorough grounding in machine learning concepts as well as practical advice on applying machine learning tools and techniques in real-world data mining situations. This highly anticipated third edition of the most acclaimed work on data mining and machine learning teaches you everything you need to know about preparing inputs, interpreting outputs, evaluating results, and the algorithmic methods at the heart of successful data mining. Thorough updates reflect the technical changes and modernizations that have taken place in the field since the last edition, including new material on Data Transformations, Ensemble Learning, Massive Data Sets, and Multi-instance Learning, plus a new version of the popular Weka machine learning software developed by the authors. Witten, Frank, and Hall include both tried-and-true techniques and methods at the leading edge of contemporary research.

* Provides a thorough grounding in machine learning concepts, as well as practical advice on applying the tools and techniques to your data mining projects
* Offers concrete tips and techniques for performance improvement that work by transforming the input or output in machine learning methods
* Includes the downloadable Weka software toolkit, a collection of machine learning algorithms for data mining tasks, in an updated, interactive interface; the algorithms cover data pre-processing, classification, regression, clustering, association rules, and visualization

20,120 Citations

Open access | Book
01 Jan 2001
Topics: Algorithmic learning theory (60%), Semi-supervised learning (55%), Ensemble learning (54%)

18,681 Citations

Journal Article | DOI: 10.1109/TKDE.2009.191
Sinno Jialin Pan, Qiang Yang (1 institution)
Abstract: A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding expensive data-labeling effort. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning, and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.

Topics: Semi-supervised learning (69%), Inductive transfer (68%), Multi-task learning (67%)

13,267 Citations
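The covariate-shift setting the survey discusses can be illustrated with a minimal sketch: a model trained on a labeled source domain is reused on a shifted target domain, first naively and then after a crude adaptation step that uses only unlabeled target data. The 1-D Gaussian domains, the midpoint-threshold model, and the mean-shift correction are illustrative assumptions, not a method from the survey.

```python
import numpy as np

rng = np.random.default_rng(2)

# Source domain: plentiful labels, classes centred at -2 and +2.
Xs = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])
ys = np.array([0] * 500 + [1] * 500)

# Target domain: the same task under covariate shift (inputs moved by +2.5).
# No target labels are available to the learner; yt is held out for evaluation.
shift = 2.5
Xt = np.concatenate([rng.normal(-2 + shift, 1, 200),
                     rng.normal(2 + shift, 1, 200)])
yt = np.array([0] * 200 + [1] * 200)

# Source model: threshold at the midpoint of the source class means.
t_src = (Xs[ys == 0].mean() + Xs[ys == 1].mean()) / 2

# Naive reuse: apply the source threshold directly to the target domain.
acc_naive = np.mean((Xt > t_src).astype(int) == yt)

# Simple transfer: correct the threshold by the mean shift estimated from
# unlabeled data alone -- a crude covariate-shift adjustment.
t_adapted = t_src + (Xt.mean() - Xs.mean())
acc_transfer = np.mean((Xt > t_adapted).astype(int) == yt)

print(f"naive: {acc_naive:.2f}, adapted: {acc_transfer:.2f}")
```

The gap between the two accuracies is the survey's motivating observation: when training and future data do not share a distribution, a model reused unchanged degrades, while even a simple adaptation using unlabeled target data recovers most of the performance without any target labeling effort.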

[Chart: number of papers in the topic in previous years]

Top Attributes


Topic's top 5 most impactful authors

Jan Peters: 22 papers, 1.1K citations
Jaime G. Carbonell: 17 papers, 800 citations
Masashi Sugiyama: 17 papers, 1.7K citations
Steven C. H. Hoi: 16 papers, 1.1K citations
Stefan Schaal: 14 papers, 2.9K citations

Network Information
Related Topics (5)

Supervised learning: 20.8K papers, 710.5K citations (92% related)
Instance-based learning: 5.7K papers, 489.4K citations (91% related)
Semi-supervised learning: 12.1K papers, 611.2K citations (91% related)
Recurrent neural network: 29.2K papers, 890K citations (91% related)
Stability (learning theory): 17.4K papers, 549.8K citations (90% related)