# PDEBench
The code repository for the NeurIPS 2022 paper
[PDEBench: An Extensive Benchmark for Scientific Machine Learning](https://arxiv.org/abs/2210.07182)
:tada:[**SimTech Best Paper Award 2023**](https://www.simtech.uni-stuttgart.de/press/SimTech-Best-Paper-Award-2023-Benchmark-for-ML-for-scientific-simulations):confetti_ball:
PDEBench provides a diverse and comprehensive set of benchmarks for scientific machine learning, including challenging and realistic physical problems. This repository contains the code used to generate the datasets, to upload and download the datasets from the data repository, and to train and evaluate different machine learning models as baselines. PDEBench features a much wider range of PDEs than existing benchmarks and includes realistic and difficult problems (both forward and inverse), larger ready-to-use datasets comprising various initial and boundary conditions, and PDE parameters. Moreover, PDEBench is designed to be extensible, and we invite active participation from the SciML community to improve and extend the benchmark.
For GPU support there are additional platform-specific instructions:
For PyTorch, the latest version we support is v1.13.1; [see previous-versions/#linux - CUDA 11.7](https://pytorch.org/get-started/previous-versions/#linux-and-windows-2).
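As a minimal sketch, the pinned install for the CUDA 11.7 wheels listed on the linked previous-versions page looks roughly like this (add packages such as `torchvision` to the same command as needed):

```shell
# Pin PyTorch to v1.13.1 with CUDA 11.7 support from the official wheel index
pip install torch==1.13.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117
```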
For JAX, which is approximately 6 times faster for simulations than PyTorch in our tests, [see jax#pip-installation-gpu-cuda-installed-via-pip](https://github.com/google/jax#pip-installation-gpu-cuda-installed-via-pip-easier).
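As a sketch, the pip-based CUDA install from the linked JAX instructions looked roughly like the following at the time of writing; the extras name and release URL change between JAX versions, so check the linked page for the current form:

```shell
# Install JAX with CUDA 11 support via pip wheels (extras name may differ for newer releases)
pip install --upgrade "jax[cuda11_pip]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
```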
## Installation using conda:
If you like, you can also install dependencies using Anaconda; we suggest using [mambaforge](https://github.com/conda-forge/miniforge#mambaforge) as a distribution. Otherwise you may have to __enable the conda-forge__ channel for the following commands.
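As a sketch, assuming an environment file named `environment.yml` and an environment named `pdebench` (both hypothetical here; check the repository for the actual file and environment names):

```shell
# Create the environment from the YAML file and activate it
# (substitute `conda` for `mamba` if you are not using mambaforge)
mamba env create -f environment.yml
mamba activate pdebench
```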