Abstract
Class-conditional generative models are an increasingly popular approach to
achieve robust classification. They are a natural choice to solve
discriminative tasks in a robust manner as they jointly optimize for predictive
performance and accurate modeling of the input distribution. In this work, we
investigate robust classification with likelihood-based conditional generative
models from a theoretical and practical perspective. Our theoretical result
reveals that it is impossible to guarantee detectability of adversarial
examples even for near-optimal generative classifiers. Experimentally, we show
that naively trained conditional generative models have poor discriminative
performance, making them unsuitable for classification. We trace this to
overlooked issues in the training of conditional generative models and present
methods that improve their performance. Finally, we analyze the robustness of our
proposed conditional generative models on MNIST and CIFAR10. While we are able
to train robust models for MNIST, robustness completely breaks down on CIFAR10.
This lack of robustness is related to various undesirable model properties that
maximum likelihood fails to penalize. Our results indicate that likelihood may
fundamentally be at odds with robust classification on challenging problems.
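
For reference, a minimal sketch of the setting the abstract describes (the notation p_\theta and the threshold \tau are illustrative, not taken from the paper): a class-conditional generative model p_\theta(x | y) with class prior p(y) classifies via Bayes' rule and can attempt to detect adversarial examples by thresholding the marginal likelihood of the input:

  \hat{y}(x) = \operatorname*{arg\,max}_{y} \bigl[ \log p_\theta(x \mid y) + \log p(y) \bigr]

  \text{reject } x \quad \text{if} \quad \log p_\theta(x) = \log \sum_{y} p_\theta(x \mid y)\, p(y) < \tau

Here \tau is a hypothetical detection threshold; the paper's theoretical result states that even for near-optimal models of this form, one cannot guarantee that adversarial examples fall below such a threshold.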