Abstract
Many tasks in modern machine learning can be formulated as finding equilibria
in sequential games. In particular, two-player zero-sum sequential
games, also known as minimax optimization, have received growing interest. It
is tempting to apply gradient descent to solve minimax optimization given its
popularity and success in supervised learning. However, it has been noted that
naive application of gradient descent fails to find some local minimax points and can
converge to non-local-minimax points. In this paper, we propose
Follow-the-Ridge (FR), a novel algorithm that provably converges to, and
only converges to, local minimax points. We show theoretically that the algorithm
addresses the notorious rotational behaviour of gradient dynamics, and is
compatible with preconditioning and positive momentum. Empirically, FR
solves toy minimax problems and improves the convergence of GAN training
compared to recent minimax optimization algorithms.
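For intuition behind the rotational behaviour mentioned above, the following is a minimal illustration (not taken from the paper): simultaneous gradient descent-ascent on the toy bilinear objective f(x, y) = x * y, whose unique local minimax point is (0, 0). The function name gda_step and the step size are illustrative choices; the iterates rotate around the origin and spiral outward rather than converging.

    # Illustration only: simultaneous gradient descent-ascent on f(x, y) = x * y.
    # grad_x f = y (x descends), grad_y f = x (y ascends).
    def gda_step(x, y, lr=0.1):
        return x - lr * y, y + lr * x

    x, y = 1.0, 1.0
    for t in range(200):
        x, y = gda_step(x, y)

    # The iterates drift away from the local minimax point (0, 0),
    # illustrating the rotational failure mode of naive gradient dynamics.
    print(f"after 200 steps: x = {x:.3f}, y = {y:.3f}")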