ML for neuroscience and reproducibility for ML: from bilevel optimization to benchopt
Mathurin Massias - INRIA Lyon, Ockham Team
In the first part of the talk, we will study the problem of epileptic foci localization from a Machine Learning point of view. We tackle this problem using sparse precision matrices estimated with the Graphical Lasso. We provide a framework and an algorithm for tuning the hyperparameters of the Graphical Lasso via a bilevel optimization problem solved with a first-order method. In particular, we derive the Jacobian of the Graphical Lasso solution with respect to its regularization hyperparameters. This is joint work with Can Pouliquen and Titouan Vayer. In the second part, I will take a step back and address the problem of experimental reproducibility and transparency in Machine Learning. I will present the Benchopt initiative (benchopt.github.io), a set of tools to develop, share and publish benchmarks in optimization and machine learning. I will highlight how Benchopt empowers researchers with more rigorous approaches, allowing them to publish better science.
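To give a flavor of the problem: the Graphical Lasso estimates a sparse precision (inverse covariance) matrix, and its sparsity is governed by a regularization hyperparameter, which is precisely the quantity tuned in the bilevel approach above. A minimal sketch of this dependence, using scikit-learn's implementation on hypothetical toy data (not the speaker's method or code):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Hypothetical toy data: 200 samples from a 5-dimensional Gaussian.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))

# alpha plays the role of the regularization hyperparameter tuned in the
# bilevel problem: larger alpha yields a sparser estimated precision matrix.
for alpha in (0.01, 0.5):
    Theta = GraphicalLasso(alpha=alpha).fit(X).precision_
    n_zeros = int(np.sum(np.isclose(Theta, 0.0)))
    print(f"alpha={alpha}: {n_zeros} near-zero entries in the 5x5 precision matrix")
```

The bilevel formulation replaces this manual sweep over alpha with a gradient-based outer loop, which requires the Jacobian of the solution with respect to alpha, as derived in the talk.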
Mathurin Massias is a permanent researcher in the Ockham team of Inria Lyon. His work is on optimization for Machine Learning, in particular on designing faster and better algorithms that use fewer resources in the context of ever-growing data dimension. He is particularly interested in frugal methods through the modern avatars of sparsity, involving nonsmooth optimization and proximal operators. He obtained his PhD from Télécom Paris and Inria in 2019 on fast solvers for inverse problems in neuroscience, for which he received the PhD prize of the Programme Gaspard Monge pour l'Optimisation. With a strong concern for reproducibility in Machine Learning, he is a core developer of several open-source Python packages and an associate editor for the Computo journal.
Wednesday July 19th, 15:00
Room 705, DIMA, Via Dodecaneso 35