@misc{takamoto2022pdebench,
abstract = {Machine learning-based modeling of physical systems has experienced increased
interest in recent years. Despite some impressive progress, there is still a
lack of benchmarks for Scientific ML that are easy to use but still challenging
and representative of a wide range of problems. We introduce PDEBench, a
benchmark suite of time-dependent simulation tasks based on Partial
Differential Equations (PDEs). PDEBench comprises both code and data to
benchmark the performance of novel machine learning models against both
classical numerical simulations and machine learning baselines. Our proposed
set of benchmark problems contributes the following unique features: (1) A much
wider range of PDEs compared to existing benchmarks, ranging from relatively
common examples to more realistic and difficult problems; (2) much larger
ready-to-use datasets compared to prior work, comprising multiple simulation
runs across a larger number of initial and boundary conditions and PDE
parameters; (3) more extensible source codes with user-friendly APIs for data
generation and baseline results with popular machine learning models (FNO,
U-Net, PINN, Gradient-Based Inverse Method). PDEBench allows researchers to
extend the benchmark freely for their own purposes using a standardized API and
to compare the performance of new models to existing baseline methods. We also
propose new evaluation metrics with the aim to provide a more holistic
understanding of learning methods in the context of Scientific ML. With those
metrics we identify tasks which are challenging for recent ML methods and
propose these tasks as future challenges for the community. The code is
available at https://github.com/pdebench/PDEBench.},
author = {Takamoto, Makoto and Praditia, Timothy and Leiteritz, Raphael and MacKinlay, Dan and Alesiani, Francesco and Pflüger, Dirk and Niepert, Mathias},
keywords = {benchmark dataset neuralpde pde},
  note = {arXiv:2210.07182. 16 pages (main body) + 34 pages (supplemental material); accepted for publication in the NeurIPS 2022 Datasets and Benchmarks Track},
  title = {{PDEBench}: An Extensive Benchmark for Scientific Machine Learning},
url = {http://arxiv.org/abs/2210.07182},
year = 2022
}