scispace - formally typeset

Active learning (machine learning)

About: Active learning (machine learning) is a research topic. Over its lifetime, 13,164 publications have been published within this topic, receiving 566,638 citations. The topic is also known as: active learning algorithm.
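In its most common pool-based form, an active learner repeatedly picks the unlabeled example its current model is least certain about and asks an oracle (e.g. a human annotator) for its label. A minimal sketch of that selection step, with made-up model probabilities for illustration:

```python
def most_uncertain(probs):
    """Uncertainty sampling: return the index of the unlabeled example
    whose predicted positive-class probability is closest to 0.5."""
    return min(range(len(probs)), key=lambda i: abs(probs[i] - 0.5))

# Hypothetical model outputs for five unlabeled pool examples.
pool_probs = [0.95, 0.60, 0.10, 0.48, 0.80]
print(most_uncertain(pool_probs))  # -> 3: the example scored 0.48 is queried next
```

The queried example is then labeled, added to the training set, and the model is retrained; the loop repeats until the labeling budget runs out.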


Papers
Journal ArticleDOI
TL;DR: An approach to machine learning based on information-theoretic criteria derived from the Rényi entropy of arbitrary order is presented; it leads to a finite-dimensional optimization problem that can be solved by a suitable technique.

2 citations

Journal ArticleDOI
01 Jan 2019
TL;DR: Tiresias is an approach that, given a web service exposing an interface with a fixed number of parameters, initializes and actively adapts a model to accurately predict query costs, and is evaluated in terms of effectiveness and efficiency.
Abstract: Delivering accurate estimates of query costs in web services is important in different contexts, e.g., to measure their Quality of Service. However, building a reliable cost model is difficult as (i) a web service is a black box often hiding a complex computation, (ii) a call to the same service can yield completely different costs by simply changing a parameter value, and (iii) execution costs can drift with time. In this paper we propose Tiresias, an approach that, given a web service exposing an interface with a fixed number of parameters, initializes and actively adapts a model to accurately predict query costs. The cost model is represented by a regression tree trained through two interleaved querying cycles: a passive one, where the costs measured for user-generated queries are used to update the tree, and an active one, where the service is probed through system-generated queries to cope with drifts in the cost function. Tiresias is finally evaluated in terms of effectiveness and efficiency through a set of experimental tests performed on both real and synthetic datasets.

2 citations
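The two interleaved querying cycles can be illustrated with a toy cost model. The running-mean-per-parameter model below is a deliberately oversimplified stand-in for the paper's regression tree; the class and parameter names are made up:

```python
class ToyCostModel:
    """Stand-in for Tiresias' regression tree: a running mean of
    observed cost per parameter value (an illustrative simplification)."""
    def __init__(self):
        self.total, self.count = {}, {}

    def passive_update(self, param, observed_cost):
        # Passive cycle: learn from costs measured on user-generated queries.
        self.total[param] = self.total.get(param, 0.0) + observed_cost
        self.count[param] = self.count.get(param, 0) + 1

    def predict(self, param):
        return self.total.get(param, 0.0) / max(self.count.get(param, 0), 1)

    def next_probe(self, params):
        # Active cycle: probe the least-observed parameter value, so
        # drifts in rarely seen regions of the cost function are caught.
        return min(params, key=lambda p: self.count.get(p, 0))

m = ToyCostModel()
m.passive_update("q=a", 120.0)
m.passive_update("q=a", 80.0)
m.passive_update("q=b", 30.0)
print(m.predict("q=a"))                      # -> 100.0
print(m.next_probe(["q=a", "q=b", "q=c"]))   # -> q=c (never observed)
```

A real implementation would replace the running mean with a regression tree over the full parameter vector and schedule active probes against observed drift, as the paper describes.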

Proceedings ArticleDOI
01 Jun 2020
TL;DR: The experimental set-up shows that a well-tuned random forest algorithm is equal to, or better than, the deep learning approach and increases the speed of the retraining process by a factor of around 400.
Abstract: This paper compares the efficiency of state-of-the-art machine learning algorithms used to detect an object in an image. A comparison between a deep learning algorithm, the VGG-16, and a well-tuned random forest algorithm using classical image analysis parameters is presented. To estimate efficiency, classification metrics such as AUC, precision, and recall, together with the computation time of the retraining process, are used. The experimental set-up shows that a well-tuned random forest algorithm is equal to, or better than, the deep learning approach and speeds up the retraining process by a factor of around 400.

2 citations
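The AUC reported in such comparisons can be computed directly from classifier scores and true labels via the pairwise (Mann-Whitney) definition: the fraction of (positive, negative) pairs the classifier ranks correctly. A small self-contained sketch with made-up scores:

```python
def auc(scores, labels):
    """Area under the ROC curve via the pairwise definition:
    the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half-correct."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.4, 0.3, 0.5], [1, 1, 0, 0]))  # -> 0.75 (one pair misranked)
```

For large score arrays a rank-based formulation (or a library routine) is preferable to this O(n²) pairwise loop, but the definition is the same.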

Proceedings ArticleDOI
01 Jun 2019
TL;DR: The experimental results show that the proposed evaluation model has advantages in accuracy and efficiency and obtains better evaluations of PE teaching quality in colleges and universities.
Abstract: Traditional methods of assessing Physical Education (PE) teaching quality in colleges and universities can no longer meet the needs of informatized, modern PE teaching in terms of accuracy and efficiency. Aiming at the problem of evaluating classroom PE teaching quality in colleges and universities, this paper therefore proposes an assistant evaluation model based on an active learning support vector machine. Taking the practical situation into account from several angles, an evaluation index system for classroom PE teaching quality is constructed, and an active learning support vector machine (AL-SVM) is used to build the evaluation model. Experiments are run on PE teaching quality data sets collected at a university, and the results are analyzed. They show that, compared with other evaluation models, the proposed model has advantages in accuracy and efficiency and yields better evaluations of PE teaching quality in colleges and universities.

2 citations
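Active learning with an SVM typically queries the unlabeled sample lying closest to the current separating hyperplane, since that sample is most informative about the margin. A minimal sketch of the selection step with a hand-picked linear boundary (the weights, bias, and pool values are made up):

```python
def margin_query(w, b, pool):
    """AL-SVM query heuristic: return the pool index of the point
    closest to the hyperplane w.x + b = 0."""
    def margin(x):
        return abs(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return min(range(len(pool)), key=lambda i: margin(pool[i]))

pool = [(2.0, 2.0), (0.4, 0.5), (3.0, 0.0), (0.0, 0.0)]
print(margin_query((1.0, 1.0), -1.0, pool))  # -> 1: (0.4, 0.5) is nearest the boundary
```

In a full AL-SVM loop the queried sample is labeled, the SVM is retrained, and the hyperplane (w, b) is refitted before the next query.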

Posted Content
TL;DR: In this paper, the authors adapt the concept of discrepancy distance between source and target distributions to restrict the maximization over the hypothesis class to a localized class of functions that perform accurate labeling on the source domain.
Abstract: The goal of the paper is to design active learning strategies which lead to domain adaptation under an assumption of domain shift in the case of a Lipschitz labeling function. Building on previous work by Mansour et al. (2009), we adapt the concept of discrepancy distance between source and target distributions to restrict the maximization over the hypothesis class to a localized class of functions that perform accurate labeling on the source domain. We derive generalization error bounds for such active learning strategies in terms of Rademacher averages and localized discrepancy for general loss functions satisfying a regularity condition. Practical algorithms are inferred from the theoretical bounds: one is based on greedy optimization and the other is a K-medoids algorithm. We also provide improved versions of the algorithms to address the case of large data sets. As shown in our numerical experiments, in particular on large data sets of around one hundred thousand images, these algorithms are competitive against other state-of-the-art active learning techniques in the context of domain adaptation.

2 citations
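The greedy flavor of such query-selection strategies can be sketched as farthest-point coverage over the unlabeled target data: each new query is the point worst covered by the queries chosen so far. This is a simplified stand-in for the paper's localized-discrepancy objective, not the authors' algorithm; the points below are made up:

```python
def greedy_select(points, k):
    """Greedily grow a query set: always add the point farthest
    (in squared Euclidean distance) from everything chosen so far,
    so the queries spread out to cover the target domain."""
    chosen = [0]  # seed with an arbitrary first point
    while len(chosen) < k:
        def dist_to_chosen(i):
            return min(sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
                       for j in chosen)
        chosen.append(max(range(len(points)), key=dist_to_chosen))
    return chosen

pts = [(0.0, 0.0), (1.0, 1.0), (10.0, 0.0), (0.0, 10.0)]
print(greedy_select(pts, 3))  # -> [0, 2, 3]: the spread-out corners are chosen
```

A K-medoids variant would instead pick cluster medoids of the target data as queries, trading the worst-case coverage of the greedy rule for robustness to outliers.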


Network Information
Related Topics (5)
- Artificial neural network: 207K papers, 4.5M citations, 87% related
- Feature extraction: 111.8K papers, 2.1M citations, 87% related
- Deep learning: 79.8K papers, 2.1M citations, 85% related
- Optimization problem: 96.4K papers, 2.1M citations, 85% related
- Feature (computer vision): 128.2K papers, 1.7M citations, 85% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    350
2022    839
2021    790
2020    673
2019    624
2018    471