Convergence Guarantees for Adaptive Bayesian Quadrature Methods
M. Kanagawa and P. Hennig (2019). arXiv:1905.10271. To appear in NeurIPS 2019.
Abstract
Adaptive Bayesian quadrature (ABQ) is a powerful approach to numerical
integration that empirically compares favorably with Monte Carlo integration on
problems of medium dimensionality (where non-adaptive quadrature is not
competitive). Its key ingredient is an acquisition function that changes as a
function of previously collected values of the integrand. While this adaptivity
appears to be empirically powerful, it complicates analysis. Consequently,
there are no theoretical guarantees so far for this class of methods. In this
work, for a broad class of adaptive Bayesian quadrature methods, we prove
consistency, deriving non-tight but informative convergence rates. To do so we
introduce a new concept we call weak adaptivity. Our results identify a large
and flexible class of adaptive Bayesian quadrature rules as consistent, within
which practitioners can develop empirically efficient methods.
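For orientation, the following is a minimal, self-contained Python sketch of the kind of method the abstract describes: a GP surrogate of the integrand, an acquisition function that depends on the values collected so far, and an integral estimate given by the Bayesian quadrature posterior mean. It assumes an RBF kernel, the uniform measure on [0, 1], and a variance-times-squared-mean acquisition in the spirit of warped-BQ heuristics; it is an illustration under those assumptions, not the authors' algorithm, and every name in it is hypothetical.

import numpy as np

# Hypothetical sketch of adaptive Bayesian quadrature (ABQ), not the
# authors' code.  GP prior with RBF kernel k(x, x') = exp(-(x-x')^2 / (2 l^2));
# the estimate of \int_0^1 f(x) dx is the BQ posterior mean z^T K^{-1} y.

def rbf(a, b, ell=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def abq(f, n_iter=15, n_grid=400, jitter=1e-8):
    grid = np.linspace(0.0, 1.0, n_grid)   # candidate nodes / integration grid
    X = np.array([0.5])                    # initial design point
    y = f(X)
    for _ in range(n_iter):
        K = rbf(X, X) + jitter * np.eye(len(X))
        Kinv = np.linalg.inv(K)
        ks = rbf(grid, X)                  # cross-covariances k(grid, X)
        mean = ks @ Kinv @ y               # GP posterior mean on the grid
        var = 1.0 - np.einsum('ij,jk,ik->i', ks, Kinv, ks)  # posterior variance
        # Adaptive acquisition: posterior variance weighted by the squared
        # posterior mean, so the next node depends on the collected values.
        x_next = grid[np.argmax(var * mean ** 2)]
        X = np.append(X, x_next)
        y = np.append(y, f(np.array([x_next])))
    K = rbf(X, X) + jitter * np.eye(len(X))
    z = rbf(X, grid).mean(axis=1)          # kernel means z_i ~ \int k(x_i, t) dt
    return z @ np.linalg.solve(K, y)       # BQ posterior mean of the integral

print(abq(lambda x: np.exp(-3 * x) * np.sin(6 * x)))

The adaptivity that complicates the analysis enters through the acquisition: because it depends on the observed values y, the node sequence is data-dependent, which is precisely what standard non-adaptive quadrature convergence proofs do not cover.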
Description
[1905.10271] Convergence Guarantees for Adaptive Bayesian Quadrature Methods
@article{kanagawa2019convergence,
author = {Kanagawa, Motonobu and Hennig, Philipp},
keywords = {bayesian convergence quadrature},
note = {arXiv:1905.10271. To appear in NeurIPS 2019},
title = {Convergence Guarantees for Adaptive Bayesian Quadrature Methods},
url = {http://arxiv.org/abs/1905.10271},
year = 2019
}