Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning
TLDR
Virtual adversarial training (VAT) as discussed by the authors is a regularization method based on virtual adversarial loss, which is a measure of local smoothness of the conditional label distribution given input.
Abstract
We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only “virtually” adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
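The abstract maps onto a short computation: approximate the direction in which the model's output distribution is most sensitive using one power-iteration step (one extra forward/backward pass), then penalize the divergence between predictions at the original and perturbed inputs. The following PyTorch-style sketch is only an illustration under assumed hyperparameter names (xi, eps, n_power), not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=8.0, n_power=1):
    """Minimal sketch of the virtual adversarial loss for a batch x."""
    # The current predictive distribution p(y|x) acts as the "virtual" label,
    # so no ground-truth labels are required (usable on unlabeled data).
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)

    def _normalize(d):
        # Scale each sample's perturbation to unit L2 norm.
        norm = d.flatten(1).norm(dim=1).view(-1, *([1] * (d.dim() - 1)))
        return d / (norm + 1e-12)

    d = _normalize(torch.randn_like(x))

    # One power-iteration step approximates the virtually adversarial direction.
    for _ in range(n_power):
        d.requires_grad_(True)
        q = F.log_softmax(model(x + xi * d), dim=1)
        dist = F.kl_div(q, p, reduction="batchmean")
        grad = torch.autograd.grad(dist, d)[0]
        d = _normalize(grad.detach())

    # Smoothness of the output distribution against the scaled perturbation.
    q_adv = F.log_softmax(model(x + eps * d), dim=1)
    return F.kl_div(q_adv, p, reduction="batchmean")
```

In a semi-supervised setup, this term would be added to the supervised loss on labeled batches and computed on its own for unlabeled batches.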
Citations
Proceedings ArticleDOI
Adversarial Mixup Synthesis Training for Unsupervised Domain Adaptation
TL;DR: Provides a theoretical analysis of this phenomenon under ideal conditions, shows that AMST can improve generalization ability, and demonstrates the effectiveness and practicality of AMST through experiments on benchmark datasets.
Journal ArticleDOI
Semi-supervised semantic segmentation with cross teacher training
TL;DR: Chen et al. as discussed by the authors proposed a cross-teacher training framework with three modules that significantly improves traditional semi-supervised learning approaches by simultaneously reducing the coupling among peer networks and the error accumulation between teacher and student networks.
Journal ArticleDOI
Tree Segmentation and Parameter Measurement from Point Clouds Using Deep and Handcrafted Features
Feiyu Wang, Mitch Bryson, +1 more
TL;DR: In this article, a point cloud segmentation framework is proposed to identify tree stem points in individual trees, designed to improve performance when labelled training data are limited. However, the method requires a large amount of unlabelled point cloud data.
Proceedings ArticleDOI
Abductive Learning with Ground Knowledge Base
Posted Content
Regularization And Normalization For Generative Adversarial Networks: A Review
Ziqiang Li, Rentuo Tao, Bin Li, +2 more
TL;DR: This paper reviews and summarizes the research on regularization and normalization for GANs, and classifies the methods into six groups: Gradient penalty, Norm normalization and regularization, Jacobian regularization, Layer normalization, Consistency regularization, and Self-supervision.
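As a concrete example of the first of these groups, a WGAN-GP-style gradient penalty pushes the discriminator's gradient norm toward 1 on points interpolated between real and generated samples. This is a hedged sketch rather than code from the review; the function and argument names are hypothetical, and image-shaped inputs (B, C, H, W) are assumed.

```python
import torch

def gradient_penalty(discriminator, real, fake, device="cpu"):
    # Random interpolation between real and generated samples.
    eps = torch.rand(real.size(0), 1, 1, 1, device=device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    d_interp = discriminator(interp)
    # Gradient of the discriminator output w.r.t. the interpolated inputs.
    grads = torch.autograd.grad(
        outputs=d_interp, inputs=interp,
        grad_outputs=torch.ones_like(d_interp),
        create_graph=True)[0]
    grads = grads.flatten(1)
    # Penalize deviation of the per-sample gradient norm from 1.
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()
```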
References
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
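For reference, one Adam update keeps exponential moving averages of the gradient and its square (the lower-order moments mentioned above), corrects their initialization bias, and scales the step per parameter. The sketch below operates on a scalar parameter for clarity; variable names are illustrative.

```python
import math

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # First moment: exponential moving average of the gradient.
    m = beta1 * m + (1 - beta1) * grad
    # Second moment: exponential moving average of the squared gradient.
    v = beta2 * v + (1 - beta2) * grad * grad
    # Bias correction for the zero initialization of m and v (t starts at 1).
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Step size adapted by the second-moment estimate.
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v
```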
Journal ArticleDOI
Generative Adversarial Nets
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
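In its common non-saturating form, this two-player setup reduces to a pair of cross-entropy losses: the discriminator labels real data 1 and generated data 0, while the generator tries to make the discriminator output 1 on its samples. A minimal PyTorch-style sketch, assuming D outputs probabilities in (0, 1):

```python
import torch
import torch.nn.functional as F

def gan_losses(D, G, real, noise):
    fake = G(noise)
    # Discriminator: real samples -> 1, generated samples -> 0.
    d_real = D(real)
    d_fake = D(fake.detach())  # detach so this term does not update G
    d_loss = (F.binary_cross_entropy(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))
    # Generator (non-saturating heuristic): make D output 1 on fake samples.
    g_out = D(fake)
    g_loss = F.binary_cross_entropy(g_out, torch.ones_like(g_out))
    return d_loss, g_loss
```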
Journal Article
Dropout: a simple way to prevent neural networks from overfitting
TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
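The mechanism behind those results is simple: during training, each unit is zeroed independently with probability p. The widely used "inverted dropout" variant shown below rescales the surviving activations by 1/(1-p) so the test-time network needs no change; the paper's original formulation instead rescales weights at test time.

```python
import torch

def dropout(x, p=0.5, training=True):
    # Inverted dropout: zero each activation with probability p during
    # training and rescale survivors so the expected activation is unchanged.
    if not training or p == 0.0:
        return x
    mask = (torch.rand_like(x) >= p).float()
    return x * mask / (1.0 - p)
```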
Proceedings Article
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
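The normalization itself is a per-feature standardization over the mini-batch followed by a learned scale and shift. A minimal sketch for 2-D activations of shape (batch, features), with the running statistics used at inference omitted:

```python
import torch

def batch_norm(x, gamma, beta, eps=1e-5):
    # Standardize each feature over the mini-batch dimension.
    mean = x.mean(dim=0, keepdim=True)
    var = x.var(dim=0, unbiased=False, keepdim=True)
    x_hat = (x - mean) / torch.sqrt(var + eps)
    # Learned per-feature scale (gamma) and shift (beta).
    return gamma * x_hat + beta
```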
Proceedings ArticleDOI
Densely Connected Convolutional Networks
TL;DR: DenseNet as mentioned in this paper proposes to connect each layer to every other layer in a feed-forward fashion, which can alleviate the vanishing gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters.
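A dense block realizes this connectivity pattern by feeding each layer the channel-wise concatenation of all preceding feature maps. The sketch below is an illustrative PyTorch module; the growth_rate value and layer composition are chosen for brevity rather than taken from the paper.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    # Each layer receives the concatenation of all preceding feature maps
    # and contributes `growth_rate` new channels to the block's output.
    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3,
                          padding=1, bias=False),
            ))
            channels += growth_rate

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)
```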