@@ -35,9 +35,10 @@ The workshop will take place at the [I3S](https://www.i3s.unice.fr/en/) institut
 - 09:00-09:30: Welcome
 - 09:30-10:30: Jérôme Bolte

-**TBA**
+**A bestiary of counterexamples in smooth convex optimization**

-Abstract: TBA
+Abstract: Counterexamples to some long-standing optimization problems in the smooth convex coercive setting will be provided. For instance, block-coordinate descent, steepest descent with exact line search, and Bregman descent methods do not generally converge. Other failures of various desirable features will be discussed: directional convergence of Cauchy’s gradient curves, convergence of Newton’s flow, finite length of the Tikhonov path, convergence of central paths, and smooth Kurdyka-Lojasiewicz inequalities.
+All examples are planar. They rely on a new convex interpolation result: given a decreasing sequence of positively curved C^k smooth convex compact sets in the plane, we can interpolate these sets through the sublevel sets of a C^k smooth convex function, where k ≥ 2 is arbitrary.

 - 10:30-11:00: Coffee break
 - 11:00-12:00: Jérôme Malick
@@ -47,21 +48,21 @@ Abstract: TBA
 This talk will be a gentle introduction to, and a passionate advocacy for, distributionally robust optimization (DRO). Beyond the classical empirical risk minimization paradigm in machine learning, DRO has the ability to effectively address data uncertainty and distribution ambiguity, thus paving the way to more robust and fair models. In this talk, I will highlight the key mathematical ideas, the main algorithmic challenges, and some versatile applications of DRO. I will insist on the statistical properties of DRO with Wasserstein uncertainty, and I will finally present an easy-to-use toolbox (with scikit-learn and PyTorch interfaces) to make your own models more robust.

 - 12:00-14:00: Lunch
-- 14:00-15:00: Session poster
-- 15:00-16:00: Julie Delon
-
-**TBA**
-
-Abstract: TBA
-
-- 16:00-16:30: Coffee break
-- 16:30-17:30: Claire Boyer
+- 14:00-15:00: Claire Boyer

 **A primer on physics-informed learning**

 Abstract: Physics-informed machine learning combines the expressiveness of data-based approaches with the interpretability of physical models. In this context, we consider a general regression problem where the empirical risk is regularized by a partial differential equation that quantifies the physical inconsistency.
 Practitioners often resort to physics-informed neural networks (PINNs) to solve this kind of problem. After discussing some strengths and limitations of PINNs, we prove that for linear differential priors, the problem can be formulated directly as a kernel regression task, giving a rigorous framework to analyze physics-informed ML. In particular, the physical prior can help in boosting the estimator convergence.

+- 15:00-15:30: Coffee break
+- 15:30-16:30: Eloi Tanguy
+
+**Optimisation Properties of the Discrete Sliced Wasserstein Distance**
+
+Abstract: For computational reasons, the Sliced Wasserstein distance is commonly used in practice to compare discrete probability measures with uniform weights and the same number of points. We will address the properties of this energy as a function of the support of one of the measures, studying its regularity and optimisation properties as well as its Monte Carlo approximation (estimating the expected SW using samples on the projections), including both the asymptotic and non-asymptotic statistical properties of the estimation. Finally, we show that, in a certain sense, stochastic gradient descent methods that minimise these energies converge to (generalised) critical points, with an extension to training generative neural networks.
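As a reading aid for the last abstract, here is a minimal sketch of the Monte Carlo estimator it describes: the order-2 Sliced Wasserstein distance between two discrete uniform measures with the same number of points, averaged over random projection directions. This is illustrative only and not the speaker's implementation; the function and variable names are made up for the example.

```python
import numpy as np

def sliced_wasserstein_sq(X, Y, n_projections=200, rng=None):
    """Monte Carlo estimate of the squared Sliced Wasserstein distance (order 2)
    between two discrete uniform measures supported on the rows of X and Y.
    X, Y: arrays of shape (n, d) with the same number of points n."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    assert Y.shape == X.shape, "uniform weights and equal point counts assumed"
    # Sample random directions uniformly on the unit sphere S^{d-1}.
    theta = rng.standard_normal((n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both point clouds onto each direction.
    Xp = X @ theta.T   # shape (n, n_projections)
    Yp = Y @ theta.T
    # In 1D, W_2^2 between uniform measures with equal counts is the mean
    # squared difference of the sorted projections (quantile coupling).
    Xp.sort(axis=0)
    Yp.sort(axis=0)
    # Average over points and projections: Monte Carlo estimate of E_theta[W_2^2].
    return np.mean((Xp - Yp) ** 2)

# Example: compare two random point clouds in R^3.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
Y = rng.standard_normal((100, 3)) + 1.0
print(sliced_wasserstein_sq(X, Y, n_projections=500, rng=1))
```

With more projection directions the estimate concentrates around the expected Sliced Wasserstein value; the asymptotic and non-asymptotic behaviour of this approximation is exactly what the talk addresses.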