
Commit f6ec1b8

Resolve merge conflict in README.

2 parents: 60f79d7 + ac04b2e

30 files changed: +1602 additions, -250 deletions

README.md

Lines changed: 49 additions & 15 deletions
@@ -17,7 +17,7 @@ https://user-images.githubusercontent.com/8482575/120886182-f2b78800-c5ec-11eb-9
 
 ## Why *flowTorch*?
 
-The *flowTorch* project was started to make the analysis and modeling of fluid data **easy** and **accessible** to everyone. The library design intends to strike a balance between **usability** and **flexibility**. Instead of a monolithic, black-box analysis tool, the library offers modular components that allow assembling custom analysis and modeling workflows with ease. *flowTorch* helps to fuse data from a wide range of file formats typical for fluid flow data, for example, to compare experiments simulations. The available analysis and modeling tools are rigorously tested and demonstrated on a variety of different fluid flow datasets. Moreover, one can significantly accelerate the entire process of accessing, cleaning, analysing, and modeling fluid flow data by starting with one of the pipelines available in the *flowTorch* [documentation](https://flowmodelingcontrol.github.io/flowtorch-docs/1.0/index.html).
+The *flowTorch* project was started to make the analysis and modeling of fluid data **easy** and **accessible** to everyone. The library design intends to strike a balance between **usability** and **flexibility**. Instead of a monolithic, black-box analysis tool, the library offers modular components that allow assembling custom analysis and modeling workflows with ease. *flowTorch* helps to fuse data from a wide range of file formats typical for fluid flow data, for example, to compare experiments and simulations. The available analysis and modeling tools are rigorously tested and demonstrated on a variety of different fluid flow datasets. Moreover, one can significantly accelerate the entire process of accessing, cleaning, analysing, and modeling fluid flow data by starting with one of the pipelines available in the *flowTorch* [documentation](https://flowmodelingcontrol.github.io/flowtorch-docs/1.1/index.html).
 
 To get a first impression of what working with *flowTorch* looks like, the code snippet below shows part of a pipeline for performing a dynamic mode decomposition (DMD) of a transient *OpenFOAM* simulation.
 
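For orientation, the DMD pipeline referenced in this paragraph boils down to a few linear-algebra steps. The sketch below reproduces exact DMD on a synthetic snapshot matrix with plain numpy; the data and the truncation rank are made up for illustration, and this is not flowTorch's actual API:

```python
import numpy as np

# synthetic snapshot matrix: each column is one snapshot in time
t = np.linspace(0.0, 2.0 * np.pi, 50)
x = np.linspace(0.0, 1.0, 40)
data = (np.outer(np.sin(2.0 * np.pi * x), np.cos(5.0 * t))
        + np.outer(np.cos(np.pi * x), np.sin(3.0 * t)))

X, Y = data[:, :-1], data[:, 1:]                 # time-shifted snapshot pairs
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 4                                            # truncation rank (assumed)
U, s, V = U[:, :r], s[:r], Vh[:r].conj().T
A_tilde = U.conj().T @ Y @ V @ np.diag(1.0 / s)  # reduced linear operator
eigvals, eigvecs = np.linalg.eig(A_tilde)
modes = Y @ V @ np.diag(1.0 / s) @ eigvecs       # exact DMD modes
print(modes.shape)                               # (40, 4): one column per mode
```

The eigenvalues encode frequency and growth rate of each mode; the `DMD` class changed in this commit wraps this kind of decomposition.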
@@ -78,6 +78,9 @@ The easiest way to install *flowTorch* is as follows:
 ```
 # install via pip
 pip3 install git+https://github.com/FlowModelingControl/flowtorch
+# or install a specific branch, e.g., aweiner
+pip3 install git+https://github.com/FlowModelingControl/flowtorch.git@aweiner
+
 # to uninstall flowTorch, run
 pip3 uninstall flowtorch
 ```
@@ -90,7 +93,7 @@ and install the dependencies listed in *requirements.txt*:
 pip3 install -r requirements.txt
 ```
 
-To get an overview of what *flowTorch* can do for you, have a look at the [online documentation](https://flowmodelingcontrol.github.io/flowtorch-docs/1.0/index.html). The examples presented in the online documentation are also contained in this repository. In fact, the documentation is a static version of several [Jupyter labs](https://jupyter.org/) with start-to-end analyses. If you are interested in an interactive version of one particular example, navigate to `./docs/source/notebooks` and run `jupyter lab`. Note that to execute some of the notebooks, the **corresponding datasets are required**. The datasets can be downloaded [here](https://cloudstorage.tu-braunschweig.de/getlink/fiQUyeDFx3sg2T6LLHBQoCCx/datasets_29_10_2021.tar.gz) (~1.4GB). If the data are only required for unit testing, a reduced dataset may be downloaded [here](https://cloudstorage.tu-braunschweig.de/getlink/fiFZaHCgTWYeq1aZVg3hAui1/datasets_minimal_29_10_2021.tar.gz) (~384MB). Download the data into a directory of your choice and navigate into that directory. To extract the archive, run:
+To get an overview of what *flowTorch* can do for you, have a look at the [online documentation](https://flowmodelingcontrol.github.io/flowtorch-docs/1.1/index.html). The examples presented in the online documentation are also contained in this repository. In fact, the documentation is a static version of several [Jupyter labs](https://jupyter.org/) with start-to-end analyses. If you are interested in an interactive version of one particular example, navigate to `./docs/source/notebooks` and run `jupyter lab`. Note that to execute some of the notebooks, the **corresponding datasets are required**. The datasets can be downloaded [here](https://cloud.tu-braunschweig.de/s/sJYEfzFG7yDg3QT) (~2.6GB). If the data are only required for unit testing, a reduced dataset may be downloaded [here](https://cloud.tu-braunschweig.de/s/b9xJ7XSHMbdKwxH) (~411MB). Download the data into a directory of your choice and navigate into that directory. To extract the archive, run:
 ```
 # full dataset
 tar xzf datasets_29_10_2021.tar.gz
@@ -109,6 +112,34 @@ echo "export FLOWTORCH_DATASETS=\"$(pwd)/datasets_minimal/\"" >> ~/.bashrc
 . ~/.bashrc
 ```
 
+## Installing ParaView
+
+**Note:** the following installation of ParaView is only necessary if the *TecplotDataloader* is needed.
+
+*flowTorch* uses the ParaView Python module for accessing [Tecplot](https://www.tecplot.com/) data. When installing ParaView, special attention must be paid to the installed Python and VTK versions. Therefore, the following manual installation is recommended instead of using a standard package installation of ParaView.
+
+1. Determine the version of Python:
+```
+python3 --version
+# example output
+Python 3.8.10
+```
+2. Download the ParaView binaries according to your Python version from [here](https://www.paraview.org/download/). Note that you may have to use an older version of ParaView to match your Python version.
+3. Install the ParaView binaries, e.g., as follows:
+```
+# optional: remove old package installation if available
+sudo apt remove paraview
+# replace the archive's name if needed in the commands below
+sudo mv ParaView-5.9.1-MPI-Linux-Python3.8-64bit.tar.gz /opt/
+cd /opt
+sudo tar xf ParaView-5.9.1-MPI-Linux-Python3.8-64bit.tar.gz
+sudo rm ParaView-5.9.1-MPI-Linux-Python3.8-64bit.tar.gz
+cd ParaView-5.9.1-MPI-Linux-Python3.8-64bit/
+# add path to ParaView binary and Python modules
+echo export PATH="\$PATH:$(pwd)/bin" >> ~/.bashrc
+echo export PYTHONPATH="\$PYTHONPATH:$(pwd)/lib/python3.8/site-packages" >> ~/.bashrc
+```
+
 ## Development
 ### Documentation
 
@@ -151,21 +182,24 @@ If you encounter any issues using *flowTorch* or if you have any questions regar
 
 ## Reference
 
-If *flowTorch* aids your work, you may support our work by referencing the following software article:
+If *flowTorch* aids your work, you may support the project by referencing the following article:
+
 ```
 @article{Weiner2021,
-doi = {10.21105/joss.03860},
-url = {https://doi.org/10.21105/joss.03860},
-year = {2021},
-publisher = {The Open Journal},
-volume = {6},
-number = {68},
-pages = {3860},
-author = {Andre Weiner and Richard Semaan},
-title = {flowTorch - a Python library for analysis and reduced-order modeling of fluid flows},
-journal = {Journal of Open Source Software}
-}
-```
+  doi = {10.21105/joss.03860},
+  url = {https://doi.org/10.21105/joss.03860},
+  year = {2021},
+  publisher = {The Open Journal},
+  volume = {6},
+  number = {68},
+  pages = {3860},
+  author = {Andre Weiner and Richard Semaan},
+  title = {flowTorch - a Python library for analysis and reduced-order modeling of fluid flows},
+  journal = {Journal of Open Source Software}
+}
+```
+
+For a list of scientific works relying on flowTorch, refer to [this list](references.md).
 
 ## License
 
docs/source/conf.py

Lines changed: 2 additions & 2 deletions

@@ -25,11 +25,11 @@ def setup(app):
 # -- Project information -----------------------------------------------------
 
 project = 'flowTorch'
-copyright = '2020, flowTorch contributors'
+copyright = '2022, flowTorch contributors'
 author = 'flowTorch contributors'
 
 # The full version, including alpha/beta/rc tags
-release = '0.1'
+release = '1.1'
 
 
 # -- General configuration ---------------------------------------------------

docs/source/flowtorch.data.rst

Lines changed: 8 additions & 0 deletions

@@ -58,6 +58,14 @@ flowtorch.data.tau\_dataloader
    :undoc-members:
    :show-inheritance:
 
+flowtorch.data.tecplot\_dataloader
+----------------------------------
+
+.. automodule:: flowtorch.data.tecplot_dataloader
+   :members:
+   :undoc-members:
+   :show-inheritance:
+
 flowtorch.data.selection\_tools
 -------------------------------

docs/source/notebooks/dmd_intro.ipynb

Lines changed: 18 additions & 18 deletions
Large diffs are not rendered by default.

docs/source/notebooks/linear_algebra_basics.ipynb

Lines changed: 20 additions & 13 deletions
Large diffs are not rendered by default.

flowtorch/analysis/__init__.py

Lines changed: 2 additions & 1 deletion

@@ -1,4 +1,5 @@
 from .psp_explorer import PSPExplorer
 from .pod import POD
 from .dmd import DMD
-from .svd import SVD
+from .svd import SVD
+from .svd import inexact_alm_matrix_complection

flowtorch/analysis/dmd.py

Lines changed: 147 additions & 13 deletions
@@ -2,7 +2,7 @@
 """
 
 # standard library packages
-from typing import Tuple, Set
+from typing import Tuple, Set, Union
 # third party packages
 import torch as pt
 from numpy import pi
@@ -32,10 +32,14 @@ class DMD(object):
     tensor([-2.3842e-06, -4.2345e+01, -1.8552e+01])
     >>> dmd.amplitude
     tensor([10.5635+0.j, -0.0616+0.j, -0.0537+0.j])
+    >>> dmd = DMD(data_matrix, dt=0.1, rank=3, robust=True)
+    >>> dmd = DMD(data_matrix, dt=0.1, rank=3, robust={"tol": 1.0e-5, "verbose": True})
 
     """
 
-    def __init__(self, data_matrix: pt.Tensor, dt: float, rank: int = None):
+    def __init__(self, data_matrix: pt.Tensor, dt: float, rank: int = None,
+                 robust: Union[bool, dict] = False, unitary: bool = False,
+                 optimal: bool = False, tlsq=False):
         """Create DMD instance based on data matrix and time step.
 
         :param data_matrix: data matrix whose columns are formed by the individual snapshots
@@ -44,28 +48,93 @@ def __init__(self, data_matrix: pt.Tensor, dt: float, rank: int = None):
         :type dt: float
         :param rank: rank for SVD truncation, defaults to None
         :type rank: int, optional
+        :param robust: data_matrix is split into low rank and sparse contributions
+            if True or if dictionary with options for Inexact ALM algorithm; the SVD
+            is computed only on the low rank matrix
+        :type robust: Union[bool, dict]
+        :param unitary: enforce the linear operator to be unitary; refer to piDMD_
+            by Peter Baddoo for more information
+        :type unitary: bool, optional
+        :param optimal: compute mode amplitudes based on a least-squares problem
+            as described in the spDMD_ article by M. Jovanovic et al. (2014); in contrast
+            to the original spDMD implementation, the exact DMD modes are used in
+            the optimization problem as outlined in an article_ by R. Taylor
+        :type optimal: bool, optional
+        :param tlsq: de-biasing of the linear operator by solving a total least-squares
+            problem instead of a standard least-squares problem; the rank is selected
+            automatically or specified by the `rank` parameter; more information can be
+            found in the TDMD_ article by M. Hemati et al.
+        :type tlsq: bool, optional
+
+
+        .. _piDMD: https://github.com/baddoo/piDMD
+        .. _spDMD: https://hal-polytechnique.archives-ouvertes.fr/hal-00995141/document
+        .. _article: http://www.pyrunner.com/weblog/2016/08/03/spdmd-python/
+        .. _TDMD: http://cwrowley.princeton.edu/papers/Hemati-2017a.pdf
         """
         self._dm = data_matrix
         self._dt = dt
-        self._svd = SVD(self._dm[:, :-1], rank)
+        self._unitary = unitary
+        self._optimal = optimal
+        self._tlsq = tlsq
+        if self._tlsq:
+            svd = SVD(pt.vstack((self._dm[:, :-1], self._dm[:, 1:])),
+                      rank, robust)
+            P = svd.V @ svd.V.conj().T
+            self._X = self._dm[:, :-1] @ P
+            self._Y = self._dm[:, 1:] @ P
+            self._svd = SVD(self._X, svd.rank)
+            del svd
+        else:
+            self._svd = SVD(self._dm[:, :-1], rank, robust)
+            self._X = self._dm[:, :-1]
+            self._Y = self._dm[:, 1:]
         self._eigvals, self._eigvecs, self._modes = self._compute_mode_decomposition()
+        self._amplitude = self._compute_amplitudes()
+
+    def _compute_operator(self):
+        """Compute the approximate linear (DMD) operator.
+        """
+        if self._unitary:
+            Xp = self._svd.U.conj().T @ self._X
+            Yp = self._svd.U.conj().T @ self._Y
+            U, _, VT = pt.linalg.svd(Yp @ Xp.conj().T, full_matrices=False)
+            return U @ VT
+        else:
+            s_inv = pt.diag(1.0 / self._svd.s)
+            return self._svd.U.conj().T @ self._Y @ self._svd.V @ s_inv
 
     def _compute_mode_decomposition(self):
-        """Compute reduced operator, eigen decomposition, and DMD modes.
+        """Compute reduced operator, eigen-decomposition, and DMD modes.
         """
         s_inv = pt.diag(1.0 / self._svd.s)
-        operator = (
-            self._svd.U.conj().T @ self._dm[:, 1:] @ self._svd.V @ s_inv
-        )
+        operator = self._compute_operator()
         val, vec = pt.linalg.eig(operator)
-        # type conversion is currently not implemented for pt.complex32
-        # such that the dtype for the modes is always pt.complex64
         phi = (
-            self._dm[:, 1:].type(val.dtype) @ self._svd.V.type(val.dtype)
+            self._Y.type(val.dtype) @ self._svd.V.type(val.dtype)
             @ s_inv.type(val.dtype) @ vec
         )
         return val, vec, phi
 
+    def _compute_amplitudes(self):
+        """Compute amplitudes for exact DMD modes.
+
+        If *optimal* is False, the amplitudes are computed based on the first
+        snapshot in the data matrix; otherwise, a least-squares problem as
+        introduced by Jovanovic et al. is solved (refer to the documentation
+        in the constructor for more information).
+        """
+        if self._optimal:
+            vander = pt.vander(self.eigvals, self._dm.shape[-1], True)
+            P = (self.modes.conj().T @ self.modes) * \
+                (vander @ vander.conj().T).conj()
+            q = pt.diag(vander @ self._dm.type(P.dtype).conj().T @
+                        self.modes).conj()
+        else:
+            P = self._modes
+            q = self._X[:, 0].type(P.dtype)
+        return pt.linalg.lstsq(P, q).solution
+
     def partial_reconstruction(self, mode_indices: Set[int]) -> pt.Tensor:
         """Reconstruct data matrix with limited number of modes.
 
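The `_compute_amplitudes` method introduced above fits the mode amplitudes either to the first snapshot (default) or over all snapshots (`optimal=True`, following Jovanovic et al. 2014). A numpy sketch of the same two least-squares problems, with random stand-in modes, eigenvalues, and data (illustrative only, not the library code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_modes, n_snapshots = 30, 3, 20
modes = (rng.standard_normal((n_points, n_modes))
         + 1j * rng.standard_normal((n_points, n_modes)))
eigvals = np.exp(1j * rng.uniform(0.0, np.pi, n_modes))
dm = rng.standard_normal((n_points, n_snapshots))   # snapshot matrix

# default: amplitudes that best reproduce the first snapshot
b_first = np.linalg.lstsq(modes, dm[:, 0].astype(complex), rcond=None)[0]

# optimal variant: minimize the reconstruction error over all snapshots
vander = np.vander(eigvals, n_snapshots, increasing=True)  # mode dynamics
P = (modes.conj().T @ modes) * (vander @ vander.conj().T).conj()
q = np.diag(vander @ dm.conj().T.astype(complex) @ modes).conj()
b_opt = np.linalg.lstsq(P, q, rcond=None)[0]
print(b_first.shape, b_opt.shape)                   # (3,) (3,)
```

The `P`/`q` construction mirrors the `pt.vander`-based normal equations in the diff, just spelled out in numpy.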
@@ -79,11 +148,30 @@ def partial_reconstruction(self, mode_indices: Set[int]) -> pt.Tensor:
         mode_indices = pt.tensor(list(mode_indices), dtype=pt.int64)
         mode_mask[mode_indices] = 1.0
         reconstruction = (self.modes * mode_mask) @ self.dynamics
-        if self._dm.dtype in (pt.complex64, pt.complex32):
+        if self._dm.dtype in (pt.complex128, pt.complex64, pt.complex32):
             return reconstruction.type(self._dm.dtype)
         else:
             return reconstruction.real.type(self._dm.dtype)
 
+    def top_modes(self, n: int = 10, integral: bool = False) -> pt.Tensor:
+        """Get the indices of the first n most important modes.
+
+        Note that the conjugate complex modes for real data matrices are
+        not filtered out.
+
+        :param n: number of indices to return; defaults to 10
+        :type n: int
+        :param integral: if True, the modes are sorted according to their
+            integral contribution; defaults to False
+        :type integral: bool, optional
+        :return: indices of top n modes sorted by amplitude or integral
+            contribution
+        :rtype: pt.Tensor
+        """
+        importance = self.integral_contribution if integral else self.amplitude
+        n = min(n, importance.shape[0])
+        return importance.abs().topk(n).indices
+
     @property
     def required_memory(self) -> int:
         """Compute the memory size in bytes of the DMD.
@@ -101,6 +189,10 @@ def required_memory(self) -> int:
     def svd(self) -> SVD:
         return self._svd
 
+    @property
+    def operator(self) -> pt.Tensor:
+        return self._compute_operator()
+
     @property
     def modes(self) -> pt.Tensor:
         return self._modes
@@ -123,24 +215,66 @@ def growth_rate(self) -> pt.Tensor:
 
     @property
     def amplitude(self) -> pt.Tensor:
-        return pt.linalg.pinv(self._modes) @ self._dm[:, 0].type(self._modes.dtype)
+        return self._amplitude
 
     @property
     def dynamics(self) -> pt.Tensor:
         return pt.diag(self.amplitude) @ pt.vander(self.eigvals, self._dm.shape[-1], True)
 
+    @property
+    def integral_contribution(self) -> pt.Tensor:
+        """Integral contribution of individual modes according to J. Kou et al. 2017.
+
+        DOI: https://doi.org/10.1016/j.euromechflu.2016.11.015
+        """
+        return self.modes.norm(dim=0)**2 * self.dynamics.abs().sum(dim=1)
+
     @property
     def reconstruction(self) -> pt.Tensor:
         """Reconstruct an approximation of the training data.
 
         :return: reconstructed training data
         :rtype: pt.Tensor
         """
-        if self._dm.dtype in (pt.complex64, pt.complex32):
+        if self._dm.dtype in (pt.complex128, pt.complex64, pt.complex32):
             return (self._modes @ self.dynamics).type(self._dm.dtype)
         else:
             return (self._modes @ self.dynamics).real.type(self._dm.dtype)
 
+    @property
+    def reconstruction_error(self) -> pt.Tensor:
+        """Compute the reconstruction error.
+
+        :return: difference between reconstruction and data matrix
+        :rtype: pt.Tensor
+        """
+        return self.reconstruction - self._dm
+
+    @property
+    def projection_error(self) -> pt.Tensor:
+        """Compute the difference between Y and AX.
+
+        :return: projection error
+        :rtype: pt.Tensor
+        """
+        YH = (self.modes @ pt.diag(self.eigvals)) @ \
+            (pt.linalg.pinv(self.modes) @ self._X.type(self.modes.dtype))
+        if self._Y.dtype in (pt.complex128, pt.complex64, pt.complex32):
+            return YH - self._Y
+        else:
+            return YH.real.type(self._Y.dtype) - self._Y
+
+    @property
+    def tlsq_error(self) -> Tuple[pt.Tensor, pt.Tensor]:
+        """Compute the *noise* in X and Y.
+
+        :return: noise in X and Y
+        :rtype: Tuple[pt.Tensor, pt.Tensor]
+        """
+        if not self._tlsq:
+            print("Warning: noise is only removed if tlsq=True")
+        return self._dm[:, :-1] - self._X, self._dm[:, 1:] - self._Y
+
     def __repr__(self):
         return f"{self.__class__.__qualname__}(data_matrix, rank={self._svd.rank})"
 
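The new `integral_contribution` property implements the mode-ranking criterion of Kou et al. (2017): the squared spatial norm of a mode times the time-integrated magnitude of its dynamics. A numpy sketch of that formula with hypothetical mode and dynamics matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
modes = rng.standard_normal((30, 4))     # columns: spatial DMD modes
dynamics = rng.standard_normal((4, 20))  # rows: temporal coefficients a_i(t)
# I_i = ||phi_i||^2 * sum_t |a_i(t)|, one non-negative score per mode
integral = np.linalg.norm(modes, axis=0) ** 2 * np.abs(dynamics).sum(axis=1)
print(integral.shape)  # (4,)
```

Unlike ranking by amplitude alone, this weighting also accounts for how long a mode stays active over the observed time window.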
