content/loca24.md (53 additions, 10 deletions)
@@ -33,24 +33,64 @@ The workshop will take place at the [I3S](https://www.i3s.unice.fr/en/) institut
### September 24: Conference

- 09:00-09:30: Welcome
- 09:30-10:30: Jérôme Bolte

  **TBA**

  Abstract: TBA

- 10:30-11:00: Coffee break
- 11:00-12:00: Aurélien Bellet

  **Differentially Private Optimization with Coordinate Descent and Fixed-Point Iterations**

  Abstract: Machine learning models are known to leak information about individual data points used to train them. Differentially private optimization aims to address this problem by training models with strong differential privacy guarantees. This is achieved by adding controlled noise to the optimization process, for instance during the gradient computation steps in the case of the popular DP-SGD algorithm. In this talk, I will discuss how to go beyond DP-SGD by (i) introducing private coordinate descent algorithms that can better exploit the problem structure, and (ii) leveraging the framework of fixed-point iterations to design and analyze new private optimization algorithms for centralized and federated settings.

  (An informal sketch of the noisy gradient step behind DP-SGD is given after this day's schedule.)

- 12:00-14:00: Lunch
- 14:00-15:00: Poster session
- 15:00-16:00: Julie Delon

  **TBA**

  Abstract: TBA

- 16:00-16:30: Coffee break
- 16:30-17:30: Claire Boyer

  **A primer on physics-informed learning**

  Abstract: Physics-informed machine learning combines the expressiveness of data-based approaches with the interpretability of physical models. In this context, we consider a general regression problem where the empirical risk is regularized by a partial differential equation that quantifies the physical inconsistency. Practitioners often resort to physics-informed neural networks (PINNs) to solve this kind of problem. After discussing some strengths and limitations of PINNs, we prove that for linear differential priors, the problem can be formulated directly as a kernel regression task, giving a rigorous framework to analyze physics-informed ML. In particular, the physical prior can help in boosting the estimator convergence.

  (A schematic form of the PDE-regularized risk is given after this day's schedule.)
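
As an informal illustration of the noisy gradient step that Aurélien Bellet's abstract attributes to DP-SGD, the sketch below clips per-example gradients and adds Gaussian noise before the update. It is a schematic stand-in rather than the algorithms of the talk: the `per_example_grads` argument, the clipping bound and the noise multiplier are placeholders, and privacy accounting is omitted.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    """One DP-SGD-style update (illustrative only): clip each per-example
    gradient to `clip_norm`, average, add Gaussian noise calibrated to the
    clipping bound, then take a gradient step. `params` is a NumPy array."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noisy_grad = np.mean(clipped, axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm / len(clipped), size=params.shape)
    return params - lr * noisy_grad
```

The coordinate descent and fixed-point algorithms of the talk go beyond this basic full-gradient scheme, as described in the abstract.
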
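
The PDE-regularized regression problem described in Claire Boyer's abstract can be written schematically as below; the notation (the differential operator D, the weight lambda, the domain Omega) is chosen here for illustration and is not taken from the talk.

```latex
\min_{f}\;\; \frac{1}{n}\sum_{i=1}^{n}\bigl(f(x_i)-y_i\bigr)^{2}
\;+\;\lambda \int_{\Omega}\bigl((\mathcal{D}f)(x)\bigr)^{2}\,\mathrm{d}x
```

Here the first term is the usual empirical risk on the data, and the second term penalizes the physical inconsistency: it vanishes exactly when f satisfies the PDE encoded by D. When D is linear, the abstract states that the problem can be recast as a kernel regression task.
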
### September 25: Conference
- 09:30-10:30: Anna Korba

  **Implicit Diffusion: Efficient Optimization through Stochastic Sampling**

  Abstract: We present a new algorithm to optimize distributions defined implicitly by parameterized stochastic diffusions. Doing so allows us to modify the outcome distribution of sampling processes by optimizing over their parameters. We introduce a general framework for first-order optimization of these processes that performs optimization and sampling steps jointly, in a single loop. This approach is inspired by recent advances in bilevel optimization and automatic implicit differentiation, leveraging the point of view of sampling as optimization over the space of probability distributions. We provide theoretical guarantees on the performance of our method, as well as experimental results demonstrating its effectiveness. We apply it to training energy-based models and fine-tuning denoising diffusions.

  (An informal sketch of such a single-loop scheme is given after this day's schedule.)

- 10:30-11:00: Coffee break
- 11:00-12:00: Gabriel Peyré

  **Transformers are Universal In-context Learners**

  Abstract: Transformer deep networks define “in-context mappings”, which enable them to predict new tokens based on a given set of tokens (such as a prompt in NLP applications or a set of patches for vision transformers). This work studies the ability of these architectures to handle an arbitrarily large number of context tokens. To address the expressivity of these architectures mathematically and uniformly, we consider that the mapping is conditioned on a context represented by a probability distribution of tokens (discrete for a finite number of tokens). The related notion of smoothness corresponds to continuity in terms of the Wasserstein distance between these contexts. We demonstrate that deep transformers are universal and can approximate continuous in-context mappings to arbitrary precision, uniformly over compact token domains. A key aspect of our results, compared to existing findings, is that for a fixed precision, a single transformer can operate on an arbitrary (even infinite) number of tokens. Additionally, it operates with a fixed embedding dimension of tokens (this dimension does not increase with precision) and a fixed number of heads (proportional to the dimension). The use of MLP layers between multi-head attention layers is also explicitly controlled. This is joint work with Takashi Furuya (Shimane Univ.) and Maarten de Hoop (Rice Univ.).

  (A schematic statement of this in-context setting is given after this day's schedule.)

- 12:00-14:00: Lunch
- 14:00-15:00: Jérôme Malick

  **TBA**

  Abstract: TBA

- 15:00-15:30: Coffee break
- 15:30-16:30: Gabriele Steidl

  **TBA**

  Abstract: TBA
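
To make the single-loop idea in Anna Korba's abstract concrete, the caricature below interleaves one Langevin-type sampling step on a cloud of particles with one gradient step on the diffusion parameters. This is a hedged sketch under generic assumptions, not the Implicit Diffusion algorithm: `grad_log_p`, `grad_theta_objective` and the step sizes are user-supplied placeholders.

```python
import numpy as np

def single_loop_sample_and_optimize(grad_log_p, grad_theta_objective, theta0,
                                    n_particles=256, dim=2, n_iters=1000,
                                    sample_step=1e-2, param_step=1e-3, rng=None):
    """Caricature of a single-loop scheme: at each iteration the particles take
    one unadjusted Langevin step toward the current distribution p_theta, and
    theta takes one gradient step on an objective evaluated on those particles."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(size=(n_particles, dim))   # particles approximating p_theta
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iters):
        # sampling step: one Langevin update under the current parameters
        x = (x + sample_step * grad_log_p(x, theta)
             + np.sqrt(2.0 * sample_step) * rng.normal(size=x.shape))
        # optimization step: one first-order update of the parameters
        theta = theta - param_step * grad_theta_objective(x, theta)
    return x, theta
```

The contrast is with a nested (bilevel-style) procedure that would run the sampler to approximate convergence before every parameter update; the abstract's point is that the two can be interleaved in a single loop.
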
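
In the setting of Gabriel Peyré's abstract, a transformer defines an in-context mapping conditioned on a probability distribution of tokens. A schematic way to write the objects the abstract describes (the notation is chosen here, not taken from the paper) is

```latex
T:\;\Omega\times\mathcal{P}(\Omega)\;\to\;\mathbb{R}^{d},
\qquad (x,\mu)\;\mapsto\;T(x,\mu),
\qquad
\mathcal{W}(\mu_{k},\mu)\to 0 \;\Longrightarrow\; T(x,\mu_{k})\to T(x,\mu),
```

where Omega is a compact token domain, mu is the distribution of context tokens (an empirical measure for a finite prompt), and W denotes the Wasserstein distance. The universality result of the talk states that such continuous in-context mappings can be approximated to any precision, uniformly over compact token domains, by transformers whose embedding dimension and number of heads do not grow with the precision.
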
## Scientific committee
- Laure Blanc-Féraud
@@ -61,6 +101,9 @@ The workshop will take place at the [I3S](https://www.i3s.unice.fr/en/) institut