Title
From Score Matching to Diffusion: a fine-grained error analysis in the Gaussian setting
Speaker
Samuel Hurault - ENS Paris
Abstract
Sampling from an unknown distribution, accessible only through discrete samples, is a fundamental problem at the core of generative AI. The current state-of-the-art methods follow a two-step process: first estimating the score function (the gradient of a smoothed log-distribution), then applying a diffusion-based sampling algorithm, such as Langevin dynamics or a diffusion model. The correctness of the resulting distribution is affected by four major factors: the generalization and optimization errors in score matching, and the discretization error and minimal noise amplitude in the diffusion. In this paper, we make the sampling error explicit when using a diffusion sampler in the Gaussian setting. We provide a sharp analysis of the Wasserstein sampling error that arises from these four error sources. This allows us to rigorously track how the anisotropy of the data distribution (encoded by its power spectrum) interacts with the key parameters of the end-to-end sampling method, including the number of initial samples, the step sizes in both score matching and diffusion, and the noise amplitude. This result provides a foundation for further analysis of the tradeoffs involved in optimizing sampling accuracy.
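To fix ideas, the two-step pipeline described above can be sketched in the Gaussian setting, where the score is available in closed form. The snippet below is an illustrative sketch (not taken from the talk): it runs the unadjusted Langevin algorithm on a 1-D Gaussian target N(mu, sigma^2), whose score is grad log p(x) = -(x - mu) / sigma^2; the step size h plays the role of the discretization error source mentioned in the abstract.

```python
import numpy as np

# Illustrative sketch: unadjusted Langevin sampling for a 1-D Gaussian
# target N(mu, sigma^2). In practice the score would be *estimated* from
# data via score matching; here we use the exact closed-form score.
rng = np.random.default_rng(0)
mu, sigma = 2.0, 0.5

def score(x):
    # Exact score of the Gaussian target: grad log p(x) = -(x - mu) / sigma^2
    return -(x - mu) / sigma**2

# Langevin iteration: x <- x + h * score(x) + sqrt(2h) * N(0, 1).
# The step size h introduces a discretization bias in the sampled law.
h = 1e-3                          # discretization step size
x = rng.standard_normal(10_000)   # 10k parallel chains started from N(0, 1)
for _ in range(5_000):
    x = x + h * score(x) + np.sqrt(2 * h) * rng.standard_normal(x.shape)

# Empirical mean and std should be close to (mu, sigma) = (2.0, 0.5),
# up to an O(h) discretization bias.
print(x.mean(), x.std())
```

For a Gaussian target the iteration is a discretized Ornstein-Uhlenbeck process, so the bias of the stationary variance can be computed exactly as a function of h; this is the kind of explicit error tracking the Gaussian setting makes possible.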
Bio
Samuel Hurault is a CNRS researcher at Université Gustave Eiffel. He did his Ph.D. at Université de Bordeaux under the supervision of Nicolas Papadakis and Arthur Leclaire, followed by a postdoctoral position at ENS Paris with Gabriel Peyré. His research focuses on the theoretical analysis of the integration of deep denoising priors into image generation and restoration algorithms.
When
Thursday, November 20th, 12:00
Where
Room 322, UniGe DIBRIS/DIMA, Via Dodecaneso 35