Virtual Adversarial Training: A Regularization Method for Supervised and
Semi-Supervised Learning
T. Miyato, S. Maeda, M. Koyama, and S. Ishii (2017). arXiv:1704.03976. To appear in IEEE Transactions on Pattern Analysis and Machine Intelligence.
Abstract
We propose a new regularization method based on virtual adversarial loss: a
new measure of local smoothness of the conditional label distribution given
input. Virtual adversarial loss is defined as the robustness of the conditional
label distribution around each input data point against local perturbation.
Unlike adversarial training, our method defines the adversarial direction
without label information and is hence applicable to semi-supervised learning.
Because the directions in which we smooth the model are only "virtually"
adversarial, we call our method virtual adversarial training (VAT). The
computational cost of VAT is relatively low. For neural networks, the
approximated gradient of virtual adversarial loss can be computed with no more
than two pairs of forward- and back-propagations. In our experiments, we
applied VAT to supervised and semi-supervised learning tasks on multiple
benchmark datasets. With a simple enhancement of the algorithm based on the
entropy minimization principle, our VAT achieves state-of-the-art performance
for semi-supervised learning tasks on SVHN and CIFAR-10.
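The abstract's recipe — start from a random unit direction, refine it by power iteration into the "virtually adversarial" direction that most perturbs the predictive distribution, then penalize the KL divergence under that perturbation — can be sketched in a few lines. The following is a minimal NumPy illustration of the idea, not the authors' implementation: the toy linear softmax model, the finite-difference gradient (standing in for the backpropagation the paper uses), and all hyperparameter values (`xi`, `epsilon`, one power-iteration step) are assumptions made for the sketch.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict(W, x):
    # Hypothetical toy model: a single linear layer with softmax output.
    return softmax(x @ W)

def kl(p, q, eps=1e-12):
    # KL divergence between two discrete distributions.
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def vat_direction(W, x, xi=0.1, n_power=1, rng=None):
    """Approximate the virtually adversarial direction by power iteration:
    start from a random unit vector d and replace it with the normalized
    gradient of KL(p(x) || p(x + xi*d)) with respect to d. Gradients are
    taken by central finite differences here, purely to keep the sketch
    dependency-free; in practice this is one extra forward/backward pass."""
    rng = np.random.default_rng(0) if rng is None else rng
    p = predict(W, x)                      # current predictive distribution
    d = rng.normal(size=x.shape)
    d /= np.linalg.norm(d) + 1e-12
    h = 1e-5
    for _ in range(n_power):
        r0 = xi * d
        grad = np.zeros_like(d)
        for i in range(d.size):
            e = np.zeros_like(d)
            e[i] = h
            grad[i] = (kl(p, predict(W, x + r0 + e)) -
                       kl(p, predict(W, x + r0 - e))) / (2 * h)
        d = grad / (np.linalg.norm(grad) + 1e-12)
    return d

def vat_loss(W, x, epsilon=0.5):
    # Virtual adversarial loss: how far the prediction moves under the
    # worst-case small perturbation. No label is needed, which is what
    # makes the penalty applicable to unlabeled data.
    p = predict(W, x)
    r_vadv = epsilon * vat_direction(W, x)
    return kl(p, predict(W, x + r_vadv))
```

In training, this scalar would be added to the supervised loss over both labeled and unlabeled examples; the entropy-minimization enhancement mentioned in the abstract would add the entropy of `predict(W, x)` as a further term.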
@misc{miyato2017virtual,
abstract = {We propose a new regularization method based on virtual adversarial loss: a
new measure of local smoothness of the conditional label distribution given
input. Virtual adversarial loss is defined as the robustness of the conditional
label distribution around each input data point against local perturbation.
Unlike adversarial training, our method defines the adversarial direction
without label information and is hence applicable to semi-supervised learning.
Because the directions in which we smooth the model are only "virtually"
adversarial, we call our method virtual adversarial training (VAT). The
computational cost of VAT is relatively low. For neural networks, the
approximated gradient of virtual adversarial loss can be computed with no more
than two pairs of forward- and back-propagations. In our experiments, we
applied VAT to supervised and semi-supervised learning tasks on multiple
benchmark datasets. With a simple enhancement of the algorithm based on the
entropy minimization principle, our VAT achieves state-of-the-art performance
for semi-supervised learning tasks on SVHN and CIFAR-10.},
author = {Miyato, Takeru and Maeda, Shin-ichi and Koyama, Masanori and Ishii, Shin},
keywords = {adversarial pytorch semisup},
note = {arXiv:1704.03976; to appear in IEEE Transactions on Pattern Analysis and Machine Intelligence},
title = {Virtual Adversarial Training: A Regularization Method for Supervised and
Semi-Supervised Learning},
url = {http://arxiv.org/abs/1704.03976},
year = 2017
}