Abstract
We explore the family of methods "PAC-Bayes with Backprop" (PBB) to train
probabilistic neural networks by minimizing PAC-Bayes bounds. We present two
training objectives, one derived from a previously known PAC-Bayes bound, and a
second one derived from a novel PAC-Bayes bound. Both training objectives are
evaluated on MNIST and on various UCI data sets. Our experiments yield two
striking observations: we obtain competitive test set error estimates (~1.4% on
MNIST) and, at the same time, we compute non-vacuous risk bounds that are much
tighter (~2.3% on MNIST) than previous results. These observations suggest that
neural nets trained by PBB may lead to self-bounding learning, where the
available data can be used to simultaneously learn a predictor and certify its
risk, with no need to follow a data-splitting protocol.
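To make the idea concrete, here is a minimal sketch of a PBB-style training objective in PyTorch (the framework choice is ours; the paper does not fix one). It trains a probabilistic linear classifier with a factorized Gaussian posterior Q over the weights, minimizing an empirical surrogate loss plus a McAllester-style PAC-Bayes penalty of the form sqrt((KL(Q||P) + log(2*sqrt(n)/delta)) / (2n)). All names (`PBBLinear`, `pacbayes_objective`) and hyperparameter values are hypothetical illustrations, not the paper's exact objective or architecture.

```python
import math
import torch
import torch.nn.functional as F

# Hypothetical sketch of a PBB-style model (names are ours, not the paper's).
# Posterior Q: factorized Gaussian over the weights of a linear classifier,
# with learnable mean `mu` and scale sigma = softplus(rho).
# Prior P: fixed factorized Gaussian N(0, sigma0^2 I).
class PBBLinear(torch.nn.Module):
    def __init__(self, d_in, d_out, sigma0=0.1):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.zeros(d_out, d_in))
        self.rho = torch.nn.Parameter(torch.full((d_out, d_in), -3.0))
        self.sigma0 = sigma0

    def sigma(self):
        return F.softplus(self.rho)

    def forward(self, x):
        # Reparameterization trick: sample W ~ Q while keeping
        # gradients with respect to mu and rho (hence "with Backprop").
        eps = torch.randn_like(self.mu)
        w = self.mu + self.sigma() * eps
        return x @ w.t()

    def kl_to_prior(self):
        # Closed-form KL(Q || P) between factorized Gaussians, P = N(0, sigma0^2 I).
        s2, s02 = self.sigma() ** 2, self.sigma0 ** 2
        return 0.5 * (s2 / s02 + self.mu ** 2 / s02 - 1.0 - torch.log(s2 / s02)).sum()

def pacbayes_objective(model, x, y, n, delta=0.025):
    # McAllester-style PAC-Bayes bound used as a training objective:
    #   empirical loss + sqrt( (KL(Q||P) + log(2*sqrt(n)/delta)) / (2n) )
    emp_loss = F.cross_entropy(model(x), y)
    penalty = torch.sqrt(
        (model.kl_to_prior() + math.log(2 * math.sqrt(n) / delta)) / (2 * n)
    )
    return emp_loss + penalty

# Toy usage on random data (illustrative only).
torch.manual_seed(0)
n, d = 1024, 20
x = torch.randn(n, d)
y = (x[:, 0] > 0).long()
model = PBBLinear(d, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = pacbayes_objective(model, x, y, n)
    loss.backward()
    opt.step()
```

Because the penalty term is itself (part of) a valid generalization bound, the same quantity minimized during training can, after training, be evaluated to certify the risk of the learned predictor, which is the self-bounding behavior described above.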