Open Access Proceedings Article

Who said what: Modeling individual labelers improves classification

TLDR
In this paper, the authors propose using the information about which expert produced which label: experts are modeled individually and combined with learned averaging weights, yielding a better estimate of the unobserved ground truth than majority voting.
Abstract
Data are often labeled by many different experts with each expert only labeling a small fraction of the data and each data point being labeled by several experts. This reduces the workload on individual experts and also gives a better estimate of the unobserved ground truth. When experts disagree, the standard approaches are to treat the majority opinion as the correct label or to model the correct label as a distribution. These approaches, however, do not make any use of potentially valuable information about which expert produced which label. To make use of this extra information, we propose modeling the experts individually and then learning averaging weights for combining them, possibly in sample-specific ways. This allows us to give more weight to more reliable experts and take advantage of the unique strengths of individual experts at classifying certain types of data. Here we show that our approach leads to improvements in computer-aided diagnosis of diabetic retinopathy. We also show that our method performs better than competing algorithms by Welinder and Perona (2010); Mnih and Hinton (2012). Our work offers an innovative approach for dealing with the myriad real-world settings that use expert opinions to define labels for training.
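To make the proposed combination scheme concrete, below is a minimal sketch of the idea, assuming a PyTorch-style model; the class name, dimensions, and toy backbone are illustrative assumptions, not the authors' implementation. Each expert gets its own output head on a shared feature extractor, each head is trained on that expert's labels, and a learned weight vector averages the heads' predictions (the sample-specific variant would predict these weights from the features instead).

import torch
import torch.nn as nn

NUM_EXPERTS, FEAT_DIM, NUM_CLASSES = 5, 128, 2

class WhoSaidWhatSketch(nn.Module):  # hypothetical name
    def __init__(self, in_dim=784):
        super().__init__()
        # Shared feature extractor (stand-in for the paper's image network).
        self.backbone = nn.Sequential(nn.Linear(in_dim, FEAT_DIM), nn.ReLU())
        # One classification head per expert, trained on that expert's labels.
        self.heads = nn.ModuleList(
            [nn.Linear(FEAT_DIM, NUM_CLASSES) for _ in range(NUM_EXPERTS)])
        # Learned averaging weights over experts; the sample-specific
        # variant would predict these from the features instead.
        self.mix = nn.Parameter(torch.zeros(NUM_EXPERTS))

    def forward(self, x):
        h = self.backbone(x)
        # Per-expert class probabilities: shape (batch, experts, classes).
        probs = torch.stack(
            [head(h).softmax(dim=-1) for head in self.heads], dim=1)
        w = self.mix.softmax(dim=0)                   # weights sum to one
        return (w[None, :, None] * probs).sum(dim=1)  # weighted average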


Citations
Posted Content

Deep Learning is Robust to Massive Label Noise

TL;DR: It is shown that deep neural networks are capable of generalizing from training data for which true labels are massively outnumbered by incorrect labels, and that training in this regime requires a significant but manageable increase in dataset size that is related to the factor by which correct labels have been diluted.
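A rough sketch of the dilution regime this TL;DR describes, under the assumption that the incorrect labels are drawn uniformly at random; the function name and interface are hypothetical:

import numpy as np

def dilute_labels(x, y, alpha, num_classes, seed=0):
    """Append `alpha` randomly labeled copies of each clean example,
    so correct labels are outnumbered roughly alpha-to-1."""
    rng = np.random.default_rng(seed)
    noisy_x = np.repeat(x, alpha, axis=0)
    noisy_y = rng.integers(0, num_classes, size=len(noisy_x))
    return np.concatenate([x, noisy_x]), np.concatenate([y, noisy_y])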
Journal Article

Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey

TL;DR: This survey reviews algorithms designed to counter the negative effects of noisy labels so that deep neural networks can be trained efficiently, dividing them into two subgroups: noise-model-based and noise-model-free methods.
Proceedings Article

Learning From Noisy Labels by Regularized Estimation of Annotator Confusion

TL;DR: In this paper, a regularization term is added to the loss function that encourages the annotator confusion matrices, which are estimated jointly with the classifier predictions, to converge to the true ones.
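A hedged sketch of how such joint estimation with a confusion-matrix regularizer might look; the trace penalty, names, and interface here are assumptions for illustration rather than the paper's exact formulation:

import torch
import torch.nn.functional as F

def confusion_regularized_loss(logits, noisy_labels, confusion_logits,
                               lam=0.01):
    """logits: (batch, classes) classifier outputs; noisy_labels: (batch,)
    labels from one annotator; confusion_logits: (classes, classes)
    unnormalized confusion matrix for that annotator."""
    p_true = logits.softmax(dim=-1)        # classifier's class probabilities
    cm = confusion_logits.softmax(dim=-1)  # rows normalized: P(observed | true)
    p_noisy = p_true @ cm                  # predicted noisy-label distribution
    nll = F.nll_loss(torch.log(p_noisy + 1e-8), noisy_labels)
    # Minimizing the trace encourages the confusion matrix, rather than
    # the classifier, to explain as much of the label noise as the data allow.
    return nll + lam * cm.trace()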
Journal Article

Learning to detect chest radiographs containing pulmonary lesions using visual attention networks

TL;DR: Two novel neural network architectures that use visual attention mechanisms are proposed to detect pulmonary lesions in chest x-ray images; they are designed to learn from a large number of weakly-labelled images and a small number of annotated images.
Trending Questions (2)
How do people treat individual expert opinions vs. aggregate expert opinion?

Individual expert opinions are modeled separately and then combined with learned averaging weights, which gives more weight to more reliable experts and exploits their unique strengths; this improves classification over majority voting or modeling the label as a distribution.

How does labeling actuary data help to improve the accuracy of actuarial models?

The provided text does not give any information about how labeling actuarial data helps to improve the accuracy of actuarial models.