Commit 3173398

Merge pull request #42 from ENSTA-U2IS/dev
Add MIMO, NotMNIST, improve coverage, and Misc
2 parents adeaca6 + 9ed8c4c commit 3173398

File tree: 79 files changed, +2236 additions, -701 deletions

.coveragerc

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 [run]
 branch = True
 include = */torch-uncertainty/*
-omit = *tests*, */datasets/*, setup.py
+omit = */tests/*, */datasets/*, setup.py

 [report]
 exclude_lines =
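
For reference, a minimal sketch of how this configuration is consumed, here through coverage.py's Python API; the `tests` path is a stand-in rather than a path confirmed by this commit, and `coverage run -m pytest` from the repository root is the usual CLI equivalent:

```python
# Minimal sketch: measure branch coverage under the updated .coveragerc.
# Assumes coverage.py and pytest are installed; "tests" is a hypothetical
# test-suite target used only for illustration.
import coverage
import pytest

cov = coverage.Coverage(config_file=".coveragerc")  # reads [run]: branch, include, omit
cov.start()
pytest.main(["tests"])  # run the suite while coverage is recording
cov.stop()
cov.report()  # files matching */tests/* and */datasets/* are now omitted
```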

.github/workflows/run-tests.yml

Lines changed: 2 additions & 2 deletions
@@ -5,7 +5,7 @@ on:
     branches:
       - main
       - dev
-  pull_request_target:
+  pull_request:
   schedule:
     - cron: "42 7 * * 0"
   workflow_dispatch:
@@ -76,7 +76,7 @@ jobs:

       - name: Upload coverage to Codecov
         uses: codecov/codecov-action@v3
-        if: ${{ github.event_name != 'pull_request_target' }}
+        if: ${{ github.event_name != 'pull_request' }}
         continue-on-error: true
         with:
           token: ${{ secrets.CODECOV_TOKEN }}

CONTRIBUTING.md

Lines changed: 7 additions & 7 deletions
@@ -5,7 +5,7 @@ contributors to help us build a comprehensive library for uncertainty
 quantification in PyTorch.

 We are particularly open to any comment that you would have on this project.
-In particular, we are open to changing these guidelines as the project evolves.
+Specifically, we are open to changing these guidelines as the project evolves.

 ## The scope of TorchUncertainty

@@ -21,7 +21,7 @@ Monte Carlo dropout, ensemble methods, etc.

 If you are interested in contributing to torch_uncertainty, we first advise you
 to follow the following steps to reproduce a clean development environment
-ensuring continuous integration does not break.
+ensuring that continuous integration does not break.

 1. Install poetry on your workstation.
 2. Clone the repository.
@@ -37,21 +37,21 @@ poetry install --with dev
 pre-commit install
 ```

-We are using black for code formatting, flake8 for linting, and isort for the
-imports. The pre-commit hooks will ensure that your code is properly formatted
+We are using `black` for code formatting, `flake8` for linting, and `isort` for the
+imports. The `pre-commit` hooks will ensure that your code is properly formatted
 and linted before committing.

 Before submitting a final pull request, that we will review, please try your
-best not to reduce the code coverage and do document your code.
+best not to reduce the code coverage and document your code.

 If you implement a method, please add a reference to the corresponding paper in the ["References" page](https://torch-uncertainty.github.io/references.html).

 ### Post-processing methods

 For now, we intend to follow scikit-learn style API for post-processing
-methods (except that we use a validation dataloader for now). You can get
+methods (except that we use a validation dataset for now). You can get
 inspiration from the already implemented
-[temperature-scaling](https://github.com/ENSTA-U2IS/torch-uncertainty/blob/dev/torch_uncertainty/post_processing/temperature_scaler.py).
+[temperature-scaling](https://github.com/ENSTA-U2IS/torch-uncertainty/blob/dev/torch_uncertainty/post_processing/calibration/temperature_scaler.py).

 ## License
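
As an illustration of the scikit-learn style mentioned in this file, a hypothetical usage sketch: the `TemperatureScaler` name, import path, and `fit` signature below are assumptions for illustration only, not the confirmed API; the linked temperature_scaler.py holds the actual implementation.

```python
# Hypothetical sketch of a scikit-learn style post-processing method.
# The import path and fit() signature are assumed, not confirmed by this diff;
# `model`, `val_dataloader`, and `inputs` are placeholders from the caller.
from torch_uncertainty.post_processing import TemperatureScaler

scaler = TemperatureScaler()
scaler.fit(model, val_dataloader)   # optimize the temperature on validation data
calibrated_logits = scaler(model(inputs))  # rescale logits at inference time
```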

README.md

Lines changed: 9 additions & 6 deletions
@@ -9,16 +9,16 @@
 [![Code style: black](https://img.shields.io/badge/code%20style-black-black.svg)](https://github.com/psf/black)
 </div>

-_TorchUncertainty_ is a package designed to help you leverage uncertainty quantification techniques and make your neural networks more reliable. It aims at being collaborative and including as many methods as possible, so reach out to add yours!
+_TorchUncertainty_ is a package designed to help you leverage uncertainty quantification techniques and make your deep neural networks more reliable. It aims at being collaborative and including as many methods as possible, so reach out to add yours!

-:construction: _TorchUncertainty_ is in early development :construction: - expect massive changes but reach out and contribute if you are interested by the project!
+:construction: _TorchUncertainty_ is in early development :construction: - expect massive changes, but reach out and contribute if you are interested in the project! **Please raise an issue if you have any bugs or difficulties.**

 ---

 This package provides a multi-level API, including:

 - ready-to-train baselines on research datasets, such as ImageNet and CIFAR
-- baselines available for training on your datasets
+- deep learning baselines available for training on your datasets
 - [pretrained weights](https://huggingface.co/torch-uncertainty) for these baselines on ImageNet and CIFAR (work in progress 🚧).
 - layers available for use in your networks
 - scikit-learn style post-processing methods such as Temperature Scaling
@@ -27,12 +27,14 @@ See the [Reference page](https://torch-uncertainty.github.io/references.html) or

 ## Installation

-Install the desired pytorch version in your environment. Then, the package can be installed from PyPI:
+The package can be installed from PyPI:

 ```sh
 pip install torch-uncertainty
 ```

+Then, install the desired PyTorch version in your environment.
+
 If you aim to contribute (thank you!), have a look at the [contribution page](https://torch-uncertainty.github.io/contributing.html).

 ## Getting Started and Documentation
@@ -45,17 +47,18 @@ A quickstart is available at [torch-uncertainty.github.io/quickstart](https://to

 ### Baselines

-To date, the following baselines are implemented:
+To date, the following deep learning baselines have been implemented:

 - Deep Ensembles
 - BatchEnsemble
 - Masksembles
+- MIMO
 - Packed-Ensembles (see [blog post](https://medium.com/@adrien.lafage/make-your-neural-networks-more-reliable-with-packed-ensembles-7ad0b737a873))
 - Bayesian Neural Networks

 ### Post-processing methods

-To date, the following post-processing methods are implemented:
+To date, the following post-processing methods have been implemented:

 - Temperature, Vector, & Matrix scaling

auto_tutorials_source/tutorial_bayesian.py

Lines changed: 14 additions & 11 deletions

@@ -1,5 +1,4 @@
 # -*- coding: utf-8 -*-
-# fmt: off
 # flake: noqa
 """
 Train a Bayesian Neural Network in Three Minutes
@@ -41,7 +40,7 @@
 from torch_uncertainty.routines.classification import ClassificationSingle

 # %%
-# We will also need to define an optimizer using torch.optim as well as the 
+# We will also need to define an optimizer using torch.optim as well as the
 # neural network utils withing torch.nn, as well as the partial util to provide
 # the modified default arguments for the ELBO loss.
 #
@@ -61,13 +60,15 @@
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 # We will use the Adam optimizer with the default learning rate of 0.001.

+
 def optim_lenet(model: nn.Module) -> dict:
     optimizer = optim.Adam(
         model.parameters(),
         lr=1e-3,
     )
     return {"optimizer": optimizer}

+
 # %%
 # 3. Creating the necessary variables
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -76,15 +77,18 @@ def optim_lenet(model: nn.Module) -> dict:
 # logs, and to fake-parse the arguments needed for using the PyTorch Lightning
 # Trainer. We also create the datamodule that handles the MNIST dataset,
 # dataloaders and transforms. Finally, we create the model using the
-# blueprint from torch_uncertainty.models. 
+# blueprint from torch_uncertainty.models.

 root = Path(os.path.abspath(""))

-with ArgvContext("--max_epochs 1"):
+# We mock the arguments for the trainer
+with ArgvContext(
+    "file.py",
+    "--max_epochs 1",
+    "--enable_progress_bar=False",
+    "--verbose=False",
+):
     args = init_args(datamodule=MNISTDataModule)
-    args.enable_progress_bar = False
-    args.verbose = False
-    args.max_epochs = 1

 net_name = "bayesian-lenet-mnist"

@@ -156,21 +160,20 @@ def imshow(img):
     plt.imshow(np.transpose(npimg, (1, 2, 0)))
     plt.show()

+
 dataiter = iter(dm.val_dataloader())
 images, labels = next(dataiter)

 # print images
 imshow(torchvision.utils.make_grid(images[:4, ...]))
-print('Ground truth: ', ' '.join(f'{labels[j]}' for j in range(4)))
+print("Ground truth: ", " ".join(f"{labels[j]}" for j in range(4)))

 logits = model(images)
 probs = torch.nn.functional.softmax(logits, dim=-1)

 _, predicted = torch.max(probs, 1)

-print(
-    'Predicted digits: ', ' '.join(f'{predicted[j]}' for j in range(4))
-)
+print("Predicted digits: ", " ".join(f"{predicted[j]}" for j in range(4)))

 # %%
 # References
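
For context on the `ArgvContext` change above: it is a context manager that temporarily replaces `sys.argv`, which is why the tutorial can hand trainer flags to `init_args` as if they came from the command line. A minimal sketch, assuming the `cli_test_helpers` package (the tutorial's actual import source for `ArgvContext` is not shown in this diff):

```python
# Minimal sketch of argv mocking with ArgvContext, assuming cli_test_helpers.
import sys

from cli_test_helpers import ArgvContext

with ArgvContext("file.py", "--max_epochs", "1"):
    # Inside the block, sys.argv is replaced by the given arguments,
    # so argparse-based helpers parse them exactly like real CLI flags.
    print(sys.argv)  # -> ['file.py', '--max_epochs', '1']

# On exit, the original sys.argv is restored.
```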
