« Le Séminaire Palaisien » | Tony Silveti-Falls & Erwan Allys

Each seminar session consists of two 40-minute scientific presentations: a 30-minute talk followed by 10 minutes of questions. Tony Silveti-Falls and Erwan Allys are the speakers for the May 2025 session!
Registration is free but required, subject to seat availability. A buffet will be served at the end of the seminar.
Abstract (Tony Silveti-Falls)
In this talk, I discuss optimization methods that leverage the linear minimization oracle (LMO) over a norm-ball and their application to training huge neural networks. We propose a new stochastic family of algorithms that uses the LMO to adapt to the geometry of the problem and, perhaps surprisingly, show that they can be applied to unconstrained problems. The resulting update rule unifies several existing optimization methods under a single framework. Furthermore, we propose an explicit choice of norm for deep architectures, which, as a side benefit, leads to the transferability of hyperparameters across model sizes. Experimentally, we demonstrate significant speedups on nanoGPT training without any reliance on Adam. The proposed method is memory-efficient, requiring only one set of model weights and one set of gradients, which can be stored in half-precision.
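To give a rough feel for the idea, here is a minimal sketch (not the speaker's algorithm) of an LMO-based update in PyTorch, instantiated with the Euclidean-ball LMO. The function names, the momentum averaging, and the hyperparameters `radius`, `lr`, and `beta` are all illustrative assumptions. With this choice of ball the step reduces to normalized SGD with momentum; other norm choices recover other familiar update rules (for instance, the max-norm ball yields sign-based steps).

```python
import torch

def l2_ball_lmo(g: torch.Tensor, radius: float) -> torch.Tensor:
    # LMO over the Euclidean ball of the given radius:
    # argmin over {s : ||s||_2 <= radius} of <g, s> is -radius * g / ||g||_2.
    return -radius * g / (g.norm() + 1e-12)

@torch.no_grad()
def lmo_step(params, grads, momenta, radius=1.0, lr=0.1, beta=0.9):
    # Hypothetical sketch of one update per parameter tensor: a
    # momentum-averaged gradient estimate d is fed to the LMO, and the
    # parameter takes a step in the returned direction, even though the
    # problem itself is unconstrained. All hyperparameters are illustrative.
    for p, g, d in zip(params, grads, momenta):
        d.mul_(beta).add_(g, alpha=1 - beta)       # momentum-averaged gradient
        p.add_(l2_ball_lmo(d, radius), alpha=lr)   # step toward the LMO output
```

Note that this sketch matches the memory profile described in the abstract: beyond the weights, only one extra tensor per parameter (the gradient estimate) is kept.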
Abstract (Erwan Allys)
TBA