Commit 5725a0e

update schedule
1 parent cea0f2e commit 5725a0e

1 file changed, +13 -12 lines changed

content/loca24.md

Lines changed: 13 additions & 12 deletions
@@ -11,11 +11,11 @@ The workshop focuses on optimization applied to solving problems in imaging and
 - [Aurélien Bellet](http://researchers.lille.inria.fr/abellet/) (Inria)
 - [Jérôme Bolte](https://www.tse-fr.eu/fr/people/jerome-bolte) (Toulouse School of Economics)
 - [Claire Boyer](https://perso.lpsm.paris/~cboyer/) (Université Paris-Saclay)
-- [Julie Delon](https://judelo.github.io/) (Université Paris-Descartes)
 - [Anna Korba](https://akorba.github.io/) (ENSAE)
 - [Jérôme Malick](https://membres-ljk.imag.fr/Jerome.Malick/index.html) (CNRS, Université Grenoble-Alpes)
 - [Gabriel Peyré](https://www.gpeyre.com/) (CNRS, École Normale Supérieure)
 - [Gabriele Steidl](https://page.math.tu-berlin.de/~steidl/) (TU Berlin)
+- [Eloi Tanguy](https://eloitanguy.github.io/) (Université Paris-Descartes)

 ## Location

@@ -35,9 +35,10 @@ The workshop will take place at the [I3S](https://www.i3s.unice.fr/en/) institut
 - 09:00-09:30: Welcome
 - 09:30-10:30: Jérôme Bolte

-**TBA**
+**A bestiary of counterexamples in smooth convex optimization**

-Abstract: TBA
+Abstract: Counterexamples to some long-standing optimization problems in the smooth convex coercive setting will be provided. For instance, block-coordinate descent, steepest descent with exact line search, and Bregman descent methods do not generally converge. Other failures of various desirable features will be discussed: directional convergence of Cauchy's gradient curves, convergence of Newton's flow, finite length of the Tikhonov path, convergence of central paths, and smooth Kurdyka-Łojasiewicz inequalities.
+All examples are planar. They rely on a new convex interpolation result: given a decreasing sequence of positively curved C^k smooth convex compact sets in the plane, one can interpolate these sets through the sublevel sets of a C^k smooth convex function, where k ≥ 2 is arbitrary.

 - 10:30-11:00: Coffee break
 - 11:00-12:00: Jérôme Malick
@@ -47,21 +48,21 @@ Abstract: TBA
 This talk will be a gentle introduction to, and a passionate advocacy for, distributionally robust optimization (DRO). Beyond the classical empirical risk minimization paradigm in machine learning, DRO can effectively address data uncertainty and distribution ambiguity, paving the way to more robust and fair models. In this talk, I will highlight the key mathematical ideas, the main algorithmic challenges, and some versatile applications of DRO. I will emphasize the statistical properties of DRO with Wasserstein uncertainty, and I will finally present an easy-to-use toolbox (with scikit-learn and PyTorch interfaces) to make your own models more robust.

 - 12:00-14:00: Lunch
-- 14:00-15:00: Poster session
-- 15:00-16:00: Julie Delon
-
-**TBA**
-
-Abstract: TBA
-
-- 16:00-16:30: Coffee break
-- 16:30-17:30: Claire Boyer
+- 14:00-15:00: Claire Boyer

 **A primer on physics-informed learning**

 Abstract: Physics-informed machine learning combines the expressiveness of data-based approaches with the interpretability of physical models. In this context, we consider a general regression problem where the empirical risk is regularized by a partial differential equation that quantifies the physical inconsistency.
 Practitioners often resort to physics-informed neural networks (PINNs) to solve this kind of problem. After discussing some strengths and limitations of PINNs, we prove that for linear differential priors, the problem can be formulated directly as a kernel regression task, giving a rigorous framework to analyze physics-informed ML. In particular, the physical prior can help boost the estimator's convergence.

+- 15:00-15:30: Coffee break
+- 15:30-16:30: Eloi Tanguy
+
+**Optimisation Properties of the Discrete Sliced Wasserstein Distance**
+
+Abstract: For computational reasons, the Sliced Wasserstein (SW) distance is commonly used in practice to compare discrete probability measures with uniform weights and the same number of points. We address this energy as a function of the support of one of the measures, studying its regularity and optimisation properties as well as its Monte Carlo approximation (estimating the expected SW by sampling projections), including both the asymptotic and non-asymptotic statistical properties of this estimation. Finally, we show that, in a certain sense, stochastic gradient descent methods that minimise these energies converge to (generalised) critical points, with an extension to the training of generative neural networks.

 ### September 25: Conference
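
For readers unfamiliar with the quantity discussed in the newly added abstract, the Monte Carlo approximation of the Sliced Wasserstein energy between two uniform discrete measures can be sketched in a few lines of NumPy. The snippet below only illustrates the general technique (random projection directions, 1D sorting, averaging); the function name, parameters, and toy data are illustrative and are not taken from the workshop material or the speaker's work.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=500, p=2, seed=None):
    """Monte Carlo estimate of the order-p Sliced Wasserstein distance between
    two uniform discrete measures supported on the rows of X and Y (same size)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    assert Y.shape == (n, d), "uniform weights require equally sized supports"
    # Draw projection directions uniformly on the unit sphere of R^d.
    theta = rng.standard_normal((n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both point clouds onto every direction (one column per direction).
    x_proj = X @ theta.T   # shape (n, n_projections)
    y_proj = Y @ theta.T
    # In 1D, optimal transport between uniform measures pairs sorted points.
    cost = np.abs(np.sort(x_proj, axis=0) - np.sort(y_proj, axis=0)) ** p
    # Average over points (W_p^p in 1D) and over sampled directions (Monte Carlo).
    return np.mean(cost) ** (1.0 / p)

# Toy usage: two clouds of 100 points in R^3, one shifted by 1 in every coordinate.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
Y = rng.standard_normal((100, 3)) + 1.0
print(sliced_wasserstein(X, Y, seed=1))
```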
