Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning
TLDR
Virtual adversarial training (VAT), as discussed by the authors, is a regularization method based on virtual adversarial loss, a measure of the local smoothness of the conditional label distribution given the input.
Abstract
We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only “virtually” adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
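The procedure the abstract describes can be made concrete with a short sketch. The following is a minimal PyTorch rendering of the virtual adversarial loss, assuming a classifier `model` that returns logits; the single power-iteration step mirrors the "no more than two pairs of forward- and back-propagations" claim, while the hyperparameter values (`xi`, `eps`) are illustrative rather than the paper's tuned settings.

```python
import torch
import torch.nn.functional as F

def _l2_normalize(d):
    # Normalize each sample's perturbation to unit L2 norm.
    d_flat = d.flatten(1)
    return (d_flat / (d_flat.norm(dim=1, keepdim=True) + 1e-12)).view_as(d)

def vat_loss(model, x, xi=10.0, eps=1.0):
    """Virtual adversarial loss: robustness of p(y|x) around each input
    against the most sensitive local perturbation, found without labels."""
    with torch.no_grad():
        pred = F.softmax(model(x), dim=1)  # current predictions, held fixed

    # One power-iteration step: the gradient of the KL divergence with
    # respect to a tiny random perturbation approximates the (virtually)
    # adversarial direction.
    d = _l2_normalize(torch.randn_like(x))
    d.requires_grad_()
    adv_div = F.kl_div(F.log_softmax(model(x + xi * d), dim=1),
                       pred, reduction="batchmean")
    d = _l2_normalize(torch.autograd.grad(adv_div, d)[0])

    # Smoothness penalty at the virtual adversarial point x + eps * d.
    return F.kl_div(F.log_softmax(model(x + eps * d), dim=1),
                    pred, reduction="batchmean")
```

The "VAT + EntMin" enhancement mentioned at the end of the abstract adds the conditional entropy of the predictions on unlabeled data to this loss.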
Citations
Posted Content
IMAE for Noise-Robust Learning: Mean Absolute Error Does Not Treat Examples Equally and Gradient Magnitude's Variance Matters
TL;DR: This work proposes a simple and effective solution that enhances MAE's fitting ability while preserving its noise-robustness, and demonstrates IMAE's effectiveness through extensive experiments: image classification under clean labels, synthetic label noise, and real-world unknown noise.
Proceedings ArticleDOI
Improving Speech Recognition Using Consistent Predictions on Synthesized Speech
Gary Wang, Andrew Rosenberg, Zhehuai Chen, Yu Zhang, Bhuvana Ramabhadran, Yonghui Wu, Pedro J. Moreno
TL;DR: The authors demonstrate that promoting consistent predictions in response to real and synthesized speech significantly improves speech recognition performance, and suggest that this approach can cut reliance on transcribed audio nearly in half.
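One plausible reading of this consistency idea, sketched in PyTorch below: penalize divergence between the recognizer's label distributions on real audio and on TTS-synthesized audio of the same transcript. The function and feature names are hypothetical, and this generic KL penalty is an assumption, not necessarily the paper's exact loss.

```python
import torch.nn.functional as F

def consistency_loss(model, real_feats, synth_feats):
    """Generic consistency penalty: encourage similar label distributions
    for real speech and TTS-synthesized speech of the same transcript.
    (Illustrative assumption; not necessarily the paper's exact loss.)"""
    log_p_real = F.log_softmax(model(real_feats), dim=-1)
    p_synth = F.softmax(model(synth_feats), dim=-1).detach()
    return F.kl_div(log_p_real, p_synth, reduction="batchmean")
```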
Book ChapterDOI
Improving Object Detection with Selective Self-supervised Self-training
TL;DR: A selective net is proposed to rectify the supervision signals in Web images; it not only identifies positive bounding boxes but also creates a safe zone for mining hard negative boxes.
Journal ArticleDOI
Parallel Vision for Long-Tail Regularization: Initial Results From IVFC Autonomous Driving Testing
Jiangong Wang, Xiao Zhang Wang, Tianyu Shen, Yutong Wang, Li Li, Yonglin Tian, Hui Yu, Long Chen, Jingmin Xin, Xiangbin Wu, Nanning Zheng, Fei-Yue Wang
TL;DR: A theoretical framework named Long-tail Regularization (LoTR) is presented for analyzing and tackling the long-tail problems in the vision perception of autonomous driving, together with a Parallel Vision Actualization System (PVAS) that combines closed-loop optimization and virtual-real interaction to search for challenging long-tail scenarios and produce large-scale long-tail driving scenarios for autonomous vehicles.
Posted Content
Provably Consistent Partial-Label Learning
TL;DR: This paper proposes the first generation model of candidate label sets and develops two novel PLL methods that are guaranteed to be provably consistent, i.e., one is risk-consistent and the other is classifier-consistent.
References
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
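For reference, the update the TL;DR summarizes can be written in a few lines of NumPy; the default hyperparameters below are the paper's recommended values, while the function signature is our own.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for parameters theta given gradient grad.
    m and v are running estimates of the gradient's first and second
    moments; t is the 1-based step count used for bias correction."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)   # correct initialization bias toward zero
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```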
Journal ArticleDOI
Generative Adversarial Nets
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
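The two-player objective described above admits a compact sketch. Below is a minimal PyTorch version, assuming `G` and `D` are user-supplied modules where `D` returns one logit per sample; the generator term uses the non-saturating variant that the same paper recommends in practice.

```python
import torch
import torch.nn.functional as F

def gan_losses(D, G, real, z):
    """One step of the minimax game: D learns to separate real data
    from samples G(z); G learns to fool D."""
    fake = G(z)
    logits_real = D(real)
    logits_fake = D(fake.detach())  # detach: do not update G on D's loss

    # Discriminator: push D(real) toward 1 and D(G(z)) toward 0.
    d_loss = (F.binary_cross_entropy_with_logits(
                  logits_real, torch.ones_like(logits_real))
              + F.binary_cross_entropy_with_logits(
                  logits_fake, torch.zeros_like(logits_fake)))

    # Generator (non-saturating form): push D(G(z)) toward 1.
    logits_gen = D(fake)
    g_loss = F.binary_cross_entropy_with_logits(
        logits_gen, torch.ones_like(logits_gen))
    return d_loss, g_loss
```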
Journal Article
Dropout: a simple way to prevent neural networks from overfitting
TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
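As a quick illustration of the technique named in the title, here is a minimal PyTorch sketch of dropout. Note that the paper rescales activations at test time; this sketch uses the equivalent "inverted" formulation that rescales during training, a common implementation choice rather than the paper's exact recipe.

```python
import torch

def dropout(x, p=0.5, training=True):
    """Randomly zero each unit with probability p during training and
    rescale the survivors by 1/(1-p) so expected activations match at
    test time (the 'inverted' formulation)."""
    if not training or p == 0.0:
        return x
    mask = (torch.rand_like(x) > p).float()
    return x * mask / (1.0 - p)
```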
Proceedings Article
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
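The transform behind these results can be sketched briefly. Below is the training-time normalization step for 2-D activations of shape (batch, features) in PyTorch; the inference-time use of running averages, and the per-channel variant for convolutions, are omitted for brevity.

```python
import torch

def batch_norm(x, gamma, beta, eps=1e-5):
    """Training-time batch normalization for activations of shape
    (batch, features): normalize each feature over the mini-batch,
    then apply a learned scale (gamma) and shift (beta)."""
    mean = x.mean(dim=0)
    var = x.var(dim=0, unbiased=False)
    x_hat = (x - mean) / torch.sqrt(var + eps)
    return gamma * x_hat + beta
```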
Proceedings ArticleDOI
Densely Connected Convolutional Networks
TL;DR: DenseNet, as mentioned in this paper, connects each layer to every other layer in a feed-forward fashion, which alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
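The connectivity pattern described in the TL;DR is easy to show in code. Here is a minimal PyTorch dense block following the paper's BN-ReLU-Conv layer ordering; the growth rate and layer count are illustrative defaults, not a full DenseNet.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer receives the concatenated feature maps of all
    preceding layers and contributes growth_rate new channels."""
    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            )
            for i in range(num_layers)
        ])

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Concatenate everything produced so far along the channel axis.
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```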