Open Access Proceedings Article

Part-dependent Label Noise: Towards Instance-dependent Label Noise

TLDR
In this article, the authors approximate the instance-dependent label noise by exploiting parts-dependent labels, where the transition matrices for parts can be learned by exploiting anchor points (i.e., data points that belong to a specific class almost surely).
Abstract
Learning with \textit{instance-dependent} label noise is challenging, because it is hard to model such real-world noise. Note that there is psychological and physiological evidence showing that we humans perceive instances by decomposing them into parts. Annotators are therefore more likely to annotate instances based on the parts rather than the whole instances. Motivated by this human cognition, in this paper, we approximate the instance-dependent label noise by exploiting \textit{parts-dependent} label noise. Specifically, since instances can be approximately reconstructed by a combination of parts, we approximate the instance-dependent \textit{transition matrix} for an instance by a combination of the transition matrices for the parts of the instance. The transition matrices for parts can be learned by exploiting anchor points (i.e., data points that belong to a specific class almost surely). Empirical evaluations on synthetic and real-world datasets demonstrate that our method is superior to the state-of-the-art approaches for learning from instance-dependent label noise.
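The core idea in the abstract can be sketched as a convex combination: if an instance decomposes into R parts with nonnegative weights, its transition matrix is approximated as T(x) = Σ_r w_r(x) T_r, where each T_r is the row-stochastic transition matrix learned for part r. The function name and shapes below are illustrative, not the authors' actual implementation:

```python
import numpy as np

def instance_transition(part_weights, part_matrices):
    """Approximate an instance-dependent transition matrix as a convex
    combination of part transition matrices: T(x) = sum_r w_r(x) * T_r.

    part_weights:  (R,) nonnegative combination weights for one instance.
    part_matrices: (R, C, C) row-stochastic matrices, one per part.
    """
    w = np.asarray(part_weights, dtype=float)
    w = w / w.sum()  # normalize so the combination is convex
    # Weighted sum over the part axis; rows still sum to 1.
    return np.einsum('r,rij->ij', w, np.asarray(part_matrices, dtype=float))
```

Because each T_r is row-stochastic and the weights form a convex combination, the resulting T(x) is automatically row-stochastic as well.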



Citations
Proceedings Article

Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning

TL;DR: This paper introduces an intermediate class to avoid directly estimating the noisy class posterior, and proposes the dual $T$-estimator for estimating transition matrices, leading to better classification performance.
Posted Content

Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates

TL;DR: Performing ERM with peer loss functions on the noisy dataset leads to the optimal or a near-optimal classifier as if performing ERM over the clean training data, which the authors do not have access to.
Proceedings Article

Meta Label Correction for Noisy Label Learning

TL;DR: This paper views the label correction procedure as a meta-process and proposes a new meta-learning based framework termed MLC (Meta Label Correction) for learning with noisy labels, which achieves large improvements over previous methods in many settings.
Proceedings Article

Learning with Bounded Instance- and Label-dependent Label Noise

TL;DR: In this paper, the authors introduce the concept of distilled examples, i.e., examples whose labels are identical to the labels assigned to them by the Bayes optimal classifier, and prove that under certain conditions classifiers learnt on distilled examples will converge to the Bayes optimal classifier.
References
Proceedings ArticleDOI

Glove: Global Vectors for Word Representation

TL;DR: A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.

Learning parts of objects by non-negative matrix factorization

D. D. Lee
TL;DR: In this article, non-negative matrix factorization is used to learn parts of faces and semantic features of text, which is in contrast to principal components analysis and vector quantization that learn holistic, not parts-based, representations.
Proceedings Article

Algorithms for Non-negative Matrix Factorization

TL;DR: Two different multiplicative algorithms for non-negative matrix factorization are analyzed and one algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence.
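The least-squares variant summarized above follows the well-known Lee-Seung multiplicative update rules, H ← H ⊙ (WᵀV)/(WᵀWH) and W ← W ⊙ (VHᵀ)/(WHHᵀ), which keep both factors nonnegative. A minimal sketch (the function name, iteration count, and stabilizing epsilon are choices of this example, not from the paper):

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=1000, eps=1e-10, seed=0):
    """Factor a nonnegative matrix V (n x m) as W @ H with W (n x rank),
    H (rank x m), using multiplicative updates that minimize ||V - W H||_F^2.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        # Multiplicative updates: factors stay nonnegative because every
        # term in the numerator and denominator is nonnegative.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

On an exactly low-rank nonnegative matrix, the reconstruction error typically drops close to zero, although the factorization itself is not unique.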
Posted Content

Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms

TL;DR: Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits.
Journal ArticleDOI

Recognition-by-Components: A Theory of Human Image Understanding.

TL;DR: Recognition-by-components (RBC) provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition.