Abstract
Despite the growing prominence of generative adversarial networks (GANs),
optimization in GANs is still a poorly understood topic. In this paper, we
analyze the "gradient descent" form of GAN optimization, i.e., the natural
setting where we simultaneously take small gradient steps in both generator and
discriminator parameters. We show that even though GAN optimization does not
correspond to a convex-concave game (even for simple parameterizations), under
proper conditions, equilibrium points of this optimization procedure are still
locally asymptotically stable for the traditional GAN formulation. On
the other hand, we show that the recently proposed Wasserstein GAN can have
non-convergent limit cycles near equilibrium. Motivated by this stability
analysis, we propose an additional regularization term for gradient descent GAN
updates, which is able to guarantee local stability for both the WGAN
and the traditional GAN, and also shows practical promise in speeding up
convergence and addressing mode collapse.
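To make the simultaneous-update dynamics concrete, below is a minimal, self-contained Python sketch on a toy one-dimensional linear WGAN (real data concentrated at 0, generator parameter theta producing the point theta, and a linear discriminator D(x) = psi * x, so V(psi, theta) = D(0) - D(theta) = -psi * theta). The objective, step size, and penalty weight are illustrative assumptions, not the paper's exact construction; the eta-weighted penalty on the squared norm of the discriminator's gradient, added to the generator's objective, stands in for the kind of regularization term described above.

# Minimal sketch under the assumptions stated above: simultaneous gradient
# steps on a toy linear WGAN with V(psi, theta) = -psi * theta.
def simultaneous_gda(steps=2000, lr=0.05, eta=0.0, psi=1.0, theta=1.0):
    for _ in range(steps):
        grad_psi = -theta      # dV/dpsi; the discriminator ascends V
        grad_theta = -psi      # dV/dtheta; the generator descends V
        # Penalty ||dV/dpsi||^2 = theta^2 added to the generator's objective;
        # its gradient with respect to theta is 2 * theta (illustrative choice).
        reg_grad_theta = 2.0 * eta * theta
        # Simultaneous (not alternating) small gradient steps.
        psi_new = psi + lr * grad_psi
        theta_new = theta - lr * (grad_theta + reg_grad_theta)
        psi, theta = psi_new, theta_new
    return psi, theta

print(simultaneous_gda(eta=0.0))   # unregularized: spirals away from the equilibrium (0, 0)
print(simultaneous_gda(eta=0.5))   # regularized: damped, converges toward (0, 0)

In this toy setting the unregularized simultaneous updates rotate around the equilibrium and slowly spiral outward, while the gradient-norm penalty introduces damping on theta and the iterates converge, which is the qualitative behavior the stability analysis in the abstract is concerned with.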