Open Access · Posted Content
Pseudo-Representation Labeling Semi-Supervised Learning.
Song-Bo Yang, Tian-Li Yu +1 more
TLDR
Pseudo-representation labeling is a simple and flexible framework that uses pseudo-labeling techniques to iteratively label a small amount of unlabeled data and add it to the training set; it outperforms current state-of-the-art semi-supervised learning methods on industrial classification problems such as the WM-811K wafer map and the MIT-BIH Arrhythmia dataset.
Abstract
In recent years, semi-supervised learning (SSL) has shown tremendous success in leveraging unlabeled data to improve the performance of deep learning models, significantly reducing the demand for large amounts of labeled data. Many SSL techniques have been proposed and have shown promising performance on well-known datasets such as ImageNet and CIFAR-10. However, some existing techniques (especially those based on data augmentation) have empirically proven unsuitable for industrial applications. Therefore, this work proposes pseudo-representation labeling, a simple and flexible framework that utilizes pseudo-labeling techniques to iteratively label a small amount of unlabeled data and use it as training data. In addition, the framework is integrated with self-supervised representation learning so that the classifier benefits from representations learned on both labeled and unlabeled data. The framework is not tied to any specific model structure; rather, it is a general technique for improving existing models. Compared with existing approaches, pseudo-representation labeling is more intuitive and can effectively solve practical real-world problems. Empirically, it outperforms current state-of-the-art semi-supervised learning methods on industrial classification problems such as the WM-811K wafer map and the MIT-BIH Arrhythmia dataset.
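The iterative pseudo-labeling loop the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the nearest-centroid classifier stands in for whatever model is actually trained, and the `threshold` and `rounds` parameters are hypothetical knobs, assumed here for clarity.

```python
import numpy as np

def fit_centroids(X, y):
    # Toy "model": one mean vector per class (stand-in for the real classifier).
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict_with_confidence(classes, centroids, X):
    # Softmax over negative distances gives a pseudo-probability per class.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
    idx = p.argmax(axis=1)
    return classes[idx], p[np.arange(len(X)), idx]

def pseudo_label_rounds(X_lab, y_lab, X_unlab, threshold=0.9, rounds=3):
    # Iteratively move confidently predicted unlabeled points, with their
    # pseudo-labels, into the labeled training set.
    X_lab, y_lab = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        classes, centroids = fit_centroids(X_lab, y_lab)
        preds, conf = predict_with_confidence(classes, centroids, X_unlab)
        keep = conf >= threshold
        if not keep.any():
            break
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, preds[keep]])
        X_unlab = X_unlab[~keep]
    return X_lab, y_lab, X_unlab
```

In the paper's framework, the classifier would additionally be trained on self-supervised representations of both the labeled and unlabeled data before each labeling round; that step is omitted here.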
Citations
Proceedings Article
Cross-Domain Adaptive Clustering for Semi-Supervised Domain Adaptation
TL;DR: Xia et al. propose a cross-domain adaptive clustering loss that groups features of unlabeled target data into clusters and performs cluster-wise feature alignment across the source and target domains.
References
Journal Article
Adaptive Weight Decay for Deep Neural Networks
Kensuke Nakamura, Byung-Woo Hong +1 more
TL;DR: Quantitative evaluation of the proposed algorithm, adaptive weight decay (AdaDecay), indicates that it improves generalization, leading to better accuracy across all datasets and models tested.
Posted Content
Adaptive Weight Decay for Deep Neural Networks
Kensuke Nakamura, Byung-Woo Hong +1 more
TL;DR: AdaDecay incorporates the residual, which measures the dissimilarity between the current state of the model and the observations, into an adaptive determination of the weight decay for each parameter: gradient norms are normalized within each layer, and the degree of regularization for each parameter is set in proportion to the magnitude of its gradient via the sigmoid function.
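The per-parameter decay rule summarized above can be sketched roughly as follows. This is a hedged illustration of the idea, not the paper's exact formulation: the `alpha` sharpness parameter, the doubling of the sigmoid output (so a zero-gradient parameter receives exactly `base_decay`), and the plain SGD step are assumptions made for the sketch.

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adadecay_coeffs(grad, base_decay=1e-2, alpha=4.0, eps=1e-8):
    # Standardize gradients within the layer, then map each normalized
    # gradient magnitude through a sigmoid so that parameters with larger
    # (relative) gradients receive stronger decay. At |g_hat| = 0 the
    # coefficient equals base_decay (sigmoid(0) = 0.5, doubled).
    g_hat = (grad - grad.mean()) / (grad.std() + eps)
    return base_decay * 2.0 * _sigmoid(alpha * np.abs(g_hat))

def adadecay_step(params, grads, lr=0.1, base_decay=1e-2, alpha=4.0):
    # One SGD step with the per-parameter adaptive decay applied to each layer.
    return [w - lr * (g + adadecay_coeffs(g, base_decay, alpha) * w)
            for w, g in zip(params, grads)]
```

The layer-wise normalization is what makes the scheme scale-free: only a parameter's gradient magnitude relative to its own layer, not its absolute size, determines how strongly it is decayed.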
Related Papers (5)
A robust semi-supervised learning approach via mixture of label information
Yun Yang, Xingchen Liu +1 more