Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning
TLDR
Virtual adversarial training (VAT) as discussed by the authors is a regularization method based on virtual adversarial loss, a measure of local smoothness of the conditional label distribution given the input.
Abstract
We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only "virtually" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
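The procedure the abstract sketches can be written down compactly. Below is a minimal PyTorch sketch of the virtual adversarial loss with one step of power iteration; `model` and the hyperparameters `xi`, `eps`, and `n_power` are illustrative placeholders, not the authors' released code.

```python
# A minimal sketch of the VAT loss (hyperparameters are illustrative).
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=8.0, n_power=1):
    """Approximate the virtual adversarial loss for a batch x (no labels needed)."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)  # current predictions stand in for labels

    # Start from a random unit direction.
    d = torch.randn_like(x)
    d = d / d.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1)))

    # Power iteration: each step costs one forward and one backward pass,
    # in line with the "two pairs of propagations" noted in the abstract.
    for _ in range(n_power):
        d.requires_grad_(True)
        p_hat = F.log_softmax(model(x + xi * d), dim=1)
        adv_dist = F.kl_div(p_hat, p, reduction="batchmean")
        grad = torch.autograd.grad(adv_dist, d)[0]
        d = grad / grad.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1)))
        d = d.detach()

    # Smoothness penalty in the virtually adversarial direction.
    p_hat = F.log_softmax(model(x + eps * d), dim=1)
    return F.kl_div(p_hat, p, reduction="batchmean")
```

In semi-supervised training this term is simply added to the supervised cross-entropy, evaluated on both labeled and unlabeled batches.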
Citations
Proceedings ArticleDOI
BERT-ATTACK: Adversarial Attack Against BERT Using BERT
TL;DR: This paper proposes a high-quality and effective method that uses pre-trained masked language models, exemplified by BERT, to generate adversarial samples against BERT's own fine-tuned models and other deep neural models for downstream tasks, successfully misleading the target models into incorrect predictions.
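To make the masked-LM substitution idea concrete, here is a rough sketch using the Hugging Face transformers library. The victim classifier, the word-importance ranking, and the candidate filtering from the paper are omitted, and all names below are assumptions rather than the paper's code.

```python
# Sketch: propose replacement words for one position via BERT's MLM head.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased")

def candidate_substitutions(sentence, word_index, k=8):
    """Mask one word and return BERT's top-k fill-in candidates."""
    words = sentence.split()
    words[word_index] = tokenizer.mask_token
    inputs = tokenizer(" ".join(words), return_tensors="pt")
    with torch.no_grad():
        logits = mlm(**inputs).logits
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    top_ids = logits[0, mask_pos].topk(k).indices
    return tokenizer.convert_ids_to_tokens(top_ids.tolist())
```

In the full attack, each candidate is substituted into the sentence and fed to the victim model; substitutions that flip the prediction while staying fluent are kept.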
Proceedings Article
ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring
David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, Colin Raffel
TL;DR: Introduces a variant of AutoAugment that learns the augmentation policy while the model is being trained; the resulting method is significantly more data-efficient than prior work, requiring between 5 and 16 times less data to reach the same accuracy.
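The "distribution matching" component named in the title can be illustrated with a short sketch: guessed labels on unlabeled data are scaled by the ratio of the labeled class marginal to a running average of the model's predictions, then renormalized. This is a paraphrase of the idea, not the authors' code, and the names are illustrative.

```python
# Sketch of distribution alignment for guessed labels on unlabeled data.
import torch

def distribution_align(q, labeled_marginal, running_pred_marginal):
    """Scale guesses q by p(y) / p~(y) and renormalize to a distribution."""
    q = q * labeled_marginal / running_pred_marginal
    return q / q.sum(dim=1, keepdim=True)
```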
Proceedings ArticleDOI
S4L: Self-Supervised Semi-Supervised Learning
TL;DR: It is shown that S4L and existing semi-supervised methods can be jointly trained, yielding a new state-of-the-art result on semi-supervised ILSVRC-2012 with 10% of labels.
Adversarial Training Methods for Semi-Supervised Text Classification
TL;DR: In this article, the authors extend adversarial and virtual adversarial training to the text domain by applying perturbations to the word embeddings in a recurrent neural network rather than to the original input itself.
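A minimal sketch of the embedding-level perturbation this TL;DR describes, assuming a `classifier` that consumes embeddings directly; the per-sequence normalization and `eps` are illustrative choices, not the paper's exact setup.

```python
# Sketch: adversarial loss computed on perturbed word embeddings.
import torch
import torch.nn.functional as F

def adv_embedding_loss(classifier, embeds, labels, eps=1.0):
    embeds = embeds.detach().requires_grad_(True)    # (batch, seq, dim)
    loss = F.cross_entropy(classifier(embeds), labels)
    grad, = torch.autograd.grad(loss, embeds)
    # Perturb along the loss gradient, normalized per sequence.
    r_adv = eps * grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1) + 1e-12)
    return F.cross_entropy(classifier(embeds + r_adv), labels)
```

The virtual adversarial variant replaces the labels with the model's own predictions, exactly as in the VAT loss above.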
Proceedings ArticleDOI
Model Adaptation: Unsupervised Domain Adaptation Without Source Data
TL;DR: This paper proposes a new framework, which is referred to as collaborative class conditional generative adversarial net, to bypass the dependence on the source data and achieves superior performance on multiple adaptation tasks with only unlabeled target data, which verifies its effectiveness in this challenging setting.
References
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
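The update rule behind that description fits in a few lines of NumPy; the defaults below are the paper's recommended hyperparameters.

```python
# One Adam step: adaptive moment estimates with bias correction.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad           # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad**2        # second-moment (uncentered variance)
    m_hat = m / (1 - b1**t)                # bias correction, t starts at 1
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```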
Journal ArticleDOI
Generative Adversarial Nets
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
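The two-player game reads naturally as a pair of losses. A minimal PyTorch sketch, assuming D outputs probabilities via a sigmoid and using the non-saturating generator loss the paper also suggests:

```python
# Sketch: per-step discriminator and generator losses for a GAN.
import torch
import torch.nn.functional as F

def gan_losses(D, G, x_real, z):
    d_real = D(x_real)                       # D(x): probability x is real
    x_fake = G(z)
    d_fake = D(x_fake.detach())              # block gradients into G for D's loss
    d_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    # Non-saturating generator loss: maximize log D(G(z)).
    g_loss = F.binary_cross_entropy(D(x_fake), torch.ones_like(d_fake))
    return d_loss, g_loss
```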
Journal Article
Dropout: a simple way to prevent neural networks from overfitting
TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
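Dropout's training-time behavior is easy to state in code. The sketch below uses the now-standard "inverted" formulation, which rescales activations at training time instead of scaling weights at test time as in the paper; the two are equivalent in expectation.

```python
# Sketch: inverted dropout, equivalent in expectation to the paper's scheme.
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p      # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)          # rescale so E[output] matches test time
```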
Proceedings Article
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
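The normalization step itself is short; the sketch below follows the paper's Algorithm 1 for a fully connected layer, with the running statistics used at inference omitted for brevity.

```python
# Sketch: Batch Normalization forward pass over a mini-batch (batch, features).
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)                      # per-feature mini-batch mean
    var = x.var(axis=0)                      # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)    # normalize
    return gamma * x_hat + beta              # learned scale and shift
```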
Proceedings ArticleDOI
Densely Connected Convolutional Networks
TL;DR: DenseNet, as proposed in this paper, connects each layer to every other layer in a feed-forward fashion, which alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
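The connectivity pattern is easy to see in a small PyTorch sketch; the growth rate and layer count are illustrative, and the BN-ReLU-Conv ordering follows the paper's composite function.

```python
# Sketch: a dense block where each layer sees all earlier feature maps.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth=12, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.BatchNorm2d(in_ch + i * growth),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_ch + i * growth, growth, kernel_size=3, padding=1),
            )
            for i in range(n_layers)
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # reuse all earlier features
            features.append(out)
        return torch.cat(features, dim=1)
```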