Article

Robust Sampling in Deep Learning

(2020). arXiv:2006.02734. Comment: 8 pages, 3 figures.

Abstract

Deep learning requires regularization mechanisms to reduce overfitting and improve generalization. We address this problem with a new regularization method based on distributionally robust optimization. The key idea is to modify the contribution of each sample so as to tighten the empirical risk bound. During stochastic training, samples are selected according to their accuracy, so that the worst-performing samples are the ones that contribute the most to the optimization. We study different scenarios and identify those in which the method speeds up convergence or improves accuracy.
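The abstract's selection rule can be read as reweighting per-sample losses inside each minibatch so that high-loss samples dominate the update. Below is a minimal sketch in PyTorch, assuming a softmax-style reweighting with temperature tau as the robust surrogate; the function name robust_batch_loss and the parameter tau are illustrative assumptions, not the paper's exact formulation:

    import torch
    import torch.nn.functional as F

    def robust_batch_loss(logits, targets, tau=1.0):
        # Per-sample losses (reduction="none") keep each sample's individual contribution.
        losses = F.cross_entropy(logits, targets, reduction="none")
        # Softmax over the losses: the worst-performing samples get the largest weights.
        # detach() makes the weights plain coefficients rather than an extra gradient path.
        weights = torch.softmax(losses.detach() / tau, dim=0)
        return (weights * losses).sum()

As tau grows large the weights become uniform and the ordinary empirical mean is recovered; as tau shrinks toward zero the objective approaches worst-case (max-loss) training, concentrating the optimization on the poorly fit samples.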
