Abstract
Recently, it has been proposed in the literature to employ deep neural
networks (DNNs) together with stochastic gradient descent methods to
approximate solutions of PDEs. There are also a few results in the literature
which prove that DNNs can approximate solutions of certain PDEs without the
curse of dimensionality in the sense that the number of real parameters used to
describe the DNN grows at most polynomially both in the PDE dimension and the
reciprocal of the prescribed approximation accuracy. One key argument in most
of these results is, first, to use a Monte Carlo approximation scheme which can
approximate the solution of the PDE under consideration at a fixed space-time
point without the curse of dimensionality and, thereafter, to prove that DNNs
are flexible enough to mimic the behaviour of the approximation scheme used.
With this in mind, one could aim for a general abstract result which shows
under suitable assumptions that if a certain function can be approximated by
any kind of (Monte Carlo) approximation scheme without the curse of
dimensionality, then this function can also be approximated with DNNs without
the curse of dimensionality. A key contribution of this article is to take a
first step in this direction. In particular, the main result of this
paper essentially shows that if a function can be approximated by means of
some suitable discrete approximation scheme without the curse of dimensionality
and if there exist DNNs which satisfy certain regularity properties and which
approximate this discrete approximation scheme without the curse of
dimensionality, then the function itself can also be approximated with DNNs
without the curse of dimensionality. As an application of this result we
establish that solutions of suitable Kolmogorov PDEs can be approximated with
DNNs without the curse of dimensionality.