
Commit d977f2e

Merge commit: Scientific Machine Learning (SciML)
2 parents 84afdd3 + a76327b

File tree: 59 files changed (+96, -290 lines)


README.rst

Lines changed: 1 addition & 1 deletion
@@ -35,7 +35,7 @@ Uncertainty Quantification with python (UQpy)
    - Aakash Bangalore Satish, Mohit Singh Chauhan, Lohit Vandanapu, Ketson RM dos Santos, Katiana Kontolati, Dimitris Loukrezis, Promit Chakroborty, Lukáš Novák, Andrew Solanto, Connor Krill

  * - **Contributors:**
-   - Michael Gardner, Prateek Bhustali, Julius Schultz, Ulrich Römer
+   - Michael Gardner, Prateek Bhustali, Julius Schultz, Ulrich Römer, Nicholas Betters

docs/code/scientific_machine_learning/bayesian_quickstart/bayesian_quickstart_testing.py

Lines changed: 2 additions & 2 deletions
@@ -8,15 +8,15 @@
  # https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html
  #
  # This script assumes you have already run the Bayesian Quickstart Testing script and saved the optimized model
- # to a file named ``bayesian_model.pth``. This script
+ # to a file named ``bayesian_model.pt``. This script
  #
  # - Loads a trained model from the file ``bayesian_model.pt``
  # - Makes deterministic predictions
  # - Makes probabilistic predictions
  # - Plots probabilistic predictions
  #
  # First, we import the necessary modules and define the BayesianNeuralNetwork class, so we can load the model state
- # dictionary saved in ``bayesian_model.pth``.
+ # dictionary saved in ``bayesian_model.pt``.

  # %%
  import torch
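For readers following along, the renamed checkpoint only changes the file extension; the save/load round trip itself is the standard PyTorch state-dict pattern. A minimal, self-contained sketch is below, using a placeholder BayesianNeuralNetwork architecture rather than the class defined in the UQpy example scripts.

import torch
import torch.nn as nn

class BayesianNeuralNetwork(nn.Module):
    """Placeholder architecture; the real class is defined in the quickstart scripts."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):
        return self.layers(x)

# Training side: save only the state dictionary under the new file name.
model = BayesianNeuralNetwork()
torch.save(model.state_dict(), "bayesian_model.pt")

# Testing side: rebuild the architecture, then load the saved weights.
loaded = BayesianNeuralNetwork()
loaded.load_state_dict(torch.load("bayesian_model.pt"))
loaded.eval()
with torch.no_grad():
    logits = loaded(torch.rand(1, 28, 28))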

docs/code/scientific_machine_learning/bayesian_quickstart/bayesian_quickstart_training.py

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@

  # %% md
  # This is the first half of a Bayesian version of the classification problem from this Pytorch Quickstart tutorial:
- # (https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html)
+ # https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html
  #
  # We strongly recommend reading the Pytorch quick start first to familiarize yourself with the problem.
  # We include many of the comments from the Pytorch example, but assume the reader is familiar with model definitions

docs/code/scientific_machine_learning/mcd_trainer/NeuralNetwork_MCD.py

Lines changed: 1 addition & 75 deletions
@@ -37,7 +37,7 @@ def __init__(self, n_samples=20, noise_std=0.05):
          self.n_samples = n_samples
          self.noise_std = noise_std
          self.x = torch.linspace(-1, 1, n_samples).reshape(-1, 1)
-         self.y =0.4 * torch.sin(4 * self.x) + 0.5 * torch.cos(12 * self.x)
+         self.y = 0.4 * torch.sin(4 * self.x) + 0.5 * torch.cos(12 * self.x)
          self.y += torch.normal(0, self.noise_std, self.x.shape)

      def __len__(self):

@@ -147,78 +147,4 @@ def __getitem__(self, item):
  ax.set(xlabel="x", ylabel="f(x)")
  ax.legend()

-
- # Plotting Results
- # x_data = SinusoidalDataset().x.detach()
- # y_data = SinusoidalDataset().y.detach()
- # x_val = torch.linspace(-1, 1, 1000).view(-1, 1).detach()
- # y_val = 0.4 * torch.sin(4 * x_val) + 0.5 * torch.cos(12 * x_val).detach()
- # pred_val = model(x_val).detach()
- #
- # # %% Plot the deterministic model estimates
- # fig, ax = plt.subplots()
- # ax.scatter(x_data, y_data, label="Data", color="black", s=50)
- # ax.plot(
- #     x_val,
- #     pred_val,
- #     label="Final Prediction",
- #     color="tab:orange",
- # )
- # ax.plot(
- #     x_val.detach(),
- #     y_val.detach(),
- #     label="Target",
- #     color="black",
- #     linestyle="dashed",
- # )
- # ax.set_title("Deterministic Prediction")
- # ax.set(xlabel="$x$", ylabel="$f(x)$")
- # ax.legend()
- # fig.tight_layout()
- #
- # train_loss = trainer.history["train_loss"].detach().numpy()
- # fig, ax = plt.subplots()
- # ax.semilogy(train_loss)
- # ax.set_title("Training Loss")
- # ax.set(xlabel="Epoch", ylabel="Loss")
- # fig.tight_layout()
- #
- # # %%
- # model.drop()  # activate the dropout layers
- # n = 1_000
- # samples = torch.zeros(n, len(x_val))
- # for i in range(n):
- #     samples[i, :] = model(x_val).detach().squeeze()
- # mean = torch.mean(samples, dim=1)
- # standard_deviation = torch.std(samples, dim=1)
- #
- # # Plotting Results
- # fig, ax = plt.subplots()
- # ax.plot(x_val, samples[:, 1], "tab:orange", label="Prediction 1")
- # ax.plot(x_val, samples[:, 2], "tab:blue", label="Prediction 2")
- # ax.scatter(x_data, y_data, color="black", label="Data")
- # ax.plot(x_val, y_val, color="black", linestyle="dashed", label="Target")
- # ax.set_title("Two Samples from Dropout NN")
- # ax.set(xlabel="$x$", ylabel="$f(x)$")
- # ax.legend()
- # fig.tight_layout()
- #
- # # %%
- # fig, ax = plt.subplots()
- # ax.plot(x_val, mean, label="$\mu$")
- # ax.fill_between(
- #     x_val.view(-1),
- #     torch.quantile(samples, q=0.025, dim=1),
- #     torch.quantile(samples, q=0.975, dim=1),
- #     label="95% Range",
- #     # mean - (3 * standard_deviation),
- #     # mean + (3 * standard_deviation),
- #     # label="$\mu \pm 3\sigma$,",
- #     alpha=0.3,
- # )
- # ax.plot(x_val, y_val, label="Target", color="black")
- # ax.set_title("Dropout Neural Network 95% Range")
- # ax.set(xlabel="x", ylabel="f(x)")
- # ax.legend()
-
  plt.show()
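The commented-out block removed above sampled a dropout network repeatedly to estimate a predictive mean and a 95% range. For reference, a generic Monte Carlo dropout sketch of that idea, using plain torch.nn.Dropout and a made-up network rather than UQpy's ProbabilisticDropoutLayer and the example's model.drop() switch, looks like this:

import torch
import torch.nn as nn

# Illustrative network only; the example's actual model is not reproduced here.
net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.1), nn.Linear(64, 1))
net.train()  # keep dropout active at prediction time so every forward pass is a random sample

x_val = torch.linspace(-1, 1, 1000).reshape(-1, 1)
n = 1_000
samples = torch.zeros(n, x_val.shape[0])
with torch.no_grad():
    for i in range(n):
        samples[i, :] = net(x_val).squeeze()

# Statistics are taken over the sample dimension (dim=0).
mean = samples.mean(dim=0)
lower = torch.quantile(samples, q=0.025, dim=0)
upper = torch.quantile(samples, q=0.975, dim=0)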

docs/requirements.txt

Lines changed: 3 additions & 1 deletion
@@ -3,4 +3,6 @@ sphinx_autodoc_typehints == 1.23.0
  sphinx_rtd_theme == 1.2.0
  sphinx_gallery == 0.13.0
  sphinxcontrib_bibtex == 2.5.0
- Sphinx==6.1.3
+ Sphinx==6.1.3
+ torch == 2.6.0
+ torchinfo ~= 1.8.0

docs/source/conf.py

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@
  )

  # The full version, including alpha/beta/rc tags
- release = "v4.1.7"
+ release = "v4.2.0"

  # -- General configuration ---------------------------------------------------
docs/source/index.rst

Lines changed: 9 additions & 3 deletions
@@ -12,15 +12,21 @@ as a set of modules centered around core capabilities in Uncertainty Quantificat
  +-----------------------+------------------------------------------------------------------+
  | **Product Owner:**    | Michael D. Shields                                               |
  +-----------------------+------------------------------------------------------------------+
- | **Lead Developers:**  | Dimitris Giovanis, Audrey Olivier, Dimitris Tsapetis             |
+ | **Lead Developers:**  | Dimitris Giovanis, Audrey Olivier, Dimitris Tsapetis,            |
+ +                       +                                                                  +
+ |                       | Connor Krill                                                     |
  +-----------------------+------------------------------------------------------------------+
  | **Development Team:** | Aakash Bangalore Satish, Mohit Singh Chauhan, Lohit Vandanapu,   |
  +                       +                                                                  +
  |                       | Ketson RM dos Santos, Katiana Kontolati, Dimitris Loukrezis,     |
  +                       +                                                                  +
- |                       | Promit Chakroborty, Lukáš Novák, Andrew Solanto, Connor Krill    |
+ |                       | Promit Chakroborty, Lukáš Novák, Andrew Solanto,                 |
+ +                       +                                                                  +
+ |                       | Ponkrshnan Thiagarajan, George Pasparakis                        |
  +-----------------------+------------------------------------------------------------------+
- | **Contributors:**     | Michael Gardner, Prateek Bhustali, Julius Schultz, Ulrich Römer  |
+ | **Contributors:**     | Michael Gardner, Prateek Bhustali, Julius Schultz, Ulrich Römer, |
+ +                       +                                                                  +
+ |                       | Nicholas Betters                                                 |
  +-----------------------+------------------------------------------------------------------+

  Introduction

docs/source/scientific_machine_learning/index.rst

Lines changed: 2 additions & 2 deletions
@@ -20,8 +20,8 @@ which uses similar inputs.

  The module contains the following parent classes for neural networks:

- - :class:`.NormalBayesianLayer`: Parent class to all Bayesian layers. Subclass of :class:`Layer`
- - :class:`.ProbabilisticDropoutLayer`: Parent class to all Dropout layers. Subclass of :class:`Layer`
+ - :class:`.NormalBayesianLayer`: Parent class to all Bayesian layers. Subclass of :class:`Layer`.
+ - :class:`.ProbabilisticDropoutLayer`: Parent class to all Dropout layers. Subclass of :class:`Layer`.
  - :class:`.Layer`: Parent class to all Neural Network Layers. Subclass of :class:`torch.nn.Module`.
  - :class:`.Loss`: Parent class to all Loss functions. Subclass of :class:`torch.nn.Module`.
  - :class:`.NeuralNetwork`: Parent class to all Neural Networks and Neural Operators. Subclass of :class:`torch.nn.Module`.

docs/source/scientific_machine_learning/layers/index.rst

Lines changed: 1 addition & 1 deletion
@@ -42,7 +42,7 @@ This is the parent class to all layers.
  Like all abstract baseclasses, this cannot be instantiated but can be subclassed to write custom layers.
  All layers use the :py:meth:`forward` method to define the forward model call.

- The documentation in the :py:meth:`forward` and :py:meth:`extra_repr` on this page may be inherited from PyTorch docstrings.
+ Some documentation within the :class:`Layer` class may be inherited from PyTorch docstrings.

  Methods
  ~~~~~~~
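The reworded page still describes the same contract: the base layer cannot be instantiated, and subclasses implement forward (and optionally extra_repr). A rough illustration of that pattern, written against torch.nn.Module because the UQpy Layer base class is not part of this diff:

import torch
import torch.nn as nn

class ScaledLinear(nn.Module):
    """Illustrative custom layer; UQpy layers subclass Layer instead of nn.Module directly."""
    def __init__(self, in_features, out_features, scale=1.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.scale = scale

    def forward(self, x):
        # forward defines the forward model call, as described above.
        return self.scale * self.linear(x)

    def extra_repr(self):
        return f"scale={self.scale}"

layer = ScaledLinear(3, 2, scale=0.5)
y = layer(torch.rand(4, 3))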

docs/source/scientific_machine_learning/losses.rst

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ The :py:class:`Loss` is an abstract baseclass and a subclass of :py:class:`torch
  This is an abstract baseclass and the parent class to all loss functions.
  Like all abstract baseclasses, this cannot be instantiated but can be subclassed to write custom losses.

- The documentation in the :py:meth:`forward` on this baseclass may be inherited from PyTorch docstrings.
+ The documentation in the :class:`Loss` may be inherited from PyTorch docstrings.

  Methods
  ~~~~~~~
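Custom losses follow the same subclassing contract, with forward returning the scalar loss value. A minimal sketch, again using torch.nn.Module as a stand-in for UQpy's Loss base class:

import torch
import torch.nn as nn

class RelativeL2Loss(nn.Module):
    """Illustrative custom loss; UQpy losses subclass Loss instead of nn.Module directly."""
    def forward(self, prediction, target):
        # Relative L2 error between prediction and target.
        return torch.linalg.norm(prediction - target) / torch.linalg.norm(target)

loss_fn = RelativeL2Loss()
value = loss_fn(torch.rand(8, 1), torch.rand(8, 1))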
