Open Access Proceedings Article
Who said what: Modeling individual labelers improves classification
Melody Y. Guan, Varun Gulshan, Andrew M. Dai, Geoffrey E. Hinton
Vol. 32, Iss. 1, pp. 3109-3118
TLDR
In this paper, the authors propose using the information about which expert produced which label, modeling the experts individually and learning weights for combining them, to obtain a better estimate of the unobserved ground truth.

Abstract
Data are often labeled by many different experts, with each expert labeling only a small fraction of the data and each data point being labeled by several experts. This reduces the workload on individual experts and also gives a better estimate of the unobserved ground truth. When experts disagree, the standard approaches are to treat the majority opinion as the correct label or to model the correct label as a distribution. These approaches, however, make no use of potentially valuable information about which expert produced which label. To exploit this extra information, we propose modeling the experts individually and then learning averaging weights for combining them, possibly in sample-specific ways. This allows us to give more weight to more reliable experts and to take advantage of the unique strengths of individual experts at classifying certain types of data. We show that our approach leads to improvements in computer-aided diagnosis of diabetic retinopathy, and that it performs better than competing algorithms by Welinder and Perona (2010) and Mnih and Hinton (2012). Our work offers an innovative approach for dealing with the myriad real-world settings that use expert opinions to define labels for training.
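The core idea in the abstract, keeping one prediction per expert and learning averaging weights for combining them, can be sketched as follows. This is a hypothetical, minimal illustration, not the authors' implementation; the function name, shapes, and toy values are all assumptions for exposition.

```python
import numpy as np

def combine_expert_predictions(expert_probs, weight_logits):
    """Weighted average of per-expert class distributions.

    expert_probs:  (n_experts, n_classes) predicted distribution per expert
    weight_logits: (n_experts,) learnable scores, softmaxed into weights
    """
    # Numerically stable softmax turns the logits into averaging weights.
    w = np.exp(weight_logits - weight_logits.max())
    w /= w.sum()
    # Weighted combination gives one (n_classes,) distribution.
    return w @ expert_probs

# Three experts rating a binary task; the first logit is largest,
# so expert 0's opinion dominates the combined prediction.
probs = np.array([[0.9, 0.1],
                  [0.6, 0.4],
                  [0.2, 0.8]])
logits = np.array([2.0, 0.0, -2.0])
combined = combine_expert_predictions(probs, logits)
```

In the paper's setting the weights would be learned jointly with the per-expert models (and could be made sample-specific); here they are fixed only to keep the sketch self-contained.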
Citations
Posted Content
Deep Learning is Robust to Massive Label Noise
TL;DR: It is shown that deep neural networks are capable of generalizing from training data for which true labels are massively outnumbered by incorrect labels, and that training in this regime requires a significant but manageable increase in dataset size that is related to the factor by which correct labels have been diluted.
Journal Article (DOI)
Automated Diagnosis of Plus Disease in Retinopathy of Prematurity Using Deep Convolutional Neural Networks
James M. Brown, J. Peter Campbell, Andrew Beers, Ken Chang, Susan Ostmo, R.V. Paul Chan, Jennifer G. Dy, Deniz Erdogmus, Stratis Ioannidis, Jayashree Kalpathy-Cramer, Michael F. Chiang
TL;DR: This fully automated algorithm diagnosed plus disease in ROP with comparable or better accuracy than human experts; it has potential applications in disease detection, monitoring, and prognosis in infants at risk of ROP.
Journal Article (DOI)
Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey
Görkem Algan, Ilkay Ulusoy
TL;DR: This survey presents algorithms that counter the negative effects of label noise so that deep neural networks can be trained efficiently, dividing them into two subgroups: noise-model-based and noise-model-free methods.
Proceedings Article (DOI)
Learning From Noisy Labels by Regularized Estimation of Annotator Confusion
TL;DR: In this paper, a regularization term is added to the loss function that encourages the annotator confusion matrices, which are jointly estimated along with the classifier predictions, to converge to the true ones.
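The cited regularization idea can be sketched roughly as follows. This is a hedged illustration under assumed names and shapes, not the paper's code: the classifier's predicted true-label distribution is mapped through each annotator's confusion matrix to model the observed noisy labels, and a trace penalty on those matrices discourages them from absorbing errors the data does not support.

```python
import numpy as np

def noisy_label_loss(p, confusions, observed, lam=0.01):
    """Negative log-likelihood of noisy labels plus a trace regularizer.

    p:          (n_classes,) predicted distribution over the true label
    confusions: (n_annotators, n_classes, n_classes) row-stochastic matrices
    observed:   (n_annotators,) noisy label index from each annotator
    lam:        strength of the trace penalty (illustrative value)
    """
    loss = 0.0
    for a, y in enumerate(observed):
        # Distribution over annotator a's noisy labels, given the
        # classifier's belief p about the true label.
        noisy_dist = p @ confusions[a]
        loss -= np.log(noisy_dist[y] + 1e-12)
    # Penalizing the trace keeps the confusion matrices from collapsing
    # to the identity unless the observed labels warrant it.
    loss += lam * sum(np.trace(C) for C in confusions)
    return loss

# Toy check: two annotators with identity confusion (perfect labelers)
# who both report class 0, against a classifier belief of [0.7, 0.3].
p = np.array([0.7, 0.3])
confusions = np.stack([np.eye(2), np.eye(2)])
loss = noisy_label_loss(p, confusions, [0, 0])
```

In the actual method the confusion matrices are trainable parameters optimized with the classifier; here they are fixed only to make the sketch runnable.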
Journal Article (DOI)
Learning to detect chest radiographs containing pulmonary lesions using visual attention networks
Emanuele Pesce, Samuel Joseph Withey, Petros-Pavlos Ypsilantis, Robert Bakewell, Vicky Goh, Giovanni Montana
TL;DR: Two novel neural network architectures that use visual attention mechanisms are proposed to detect pulmonary lesions in chest x-ray images; they are designed to learn from a large number of weakly-labelled images and a small number of annotated images.