Title
Stochastic Normalizing Flows for Inverse Problems: a Markov Chains Viewpoint
Speaker
Johannes Hertrich - TU Berlin
Abstract
Normalizing flows aim to learn the underlying probability distribution of given samples. For this, we train a diffeomorphism which pushes forward a simple latent distribution to the data distribution. However, recent results show that normalizing flows suffer from topological constraints and limited expressiveness. Stochastic normalizing flows can overcome these topological constraints and improve the expressiveness of normalizing flow architectures by combining deterministic, learnable flow transformations with stochastic sampling methods. We consider stochastic normalizing flows from a Markov chain point of view. In particular, we replace transition densities by general Markov kernels and establish proofs via Radon-Nikodym derivatives, which allows us to incorporate distributions without densities in a sound way. Further, we generalize the results to sampling from posterior distributions, as required in inverse problems. The performance of the proposed conditional stochastic normalizing flow is demonstrated by numerical examples. This is joint work with P. Hagemann and G. Steidl.
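The push-forward idea underlying normalizing flows can be sketched with a single affine layer and the change-of-variables formula. This is a minimal illustration only, not the speaker's method; the functions and parameter names (`forward`, `log_density`, `scale`, `shift`) are hypothetical:

```python
import numpy as np

def forward(z, scale, shift):
    """Diffeomorphism T(z) = scale * z + shift, pushing the latent forward."""
    return scale * z + shift

def log_density(x, scale, shift):
    """Change of variables:
    log p_X(x) = log p_Z(T^{-1}(x)) - log |det dT/dz|."""
    z = (x - shift) / scale                      # inverse map T^{-1}(x)
    log_pz = -0.5 * (z**2 + np.log(2 * np.pi))   # standard normal log-density
    return log_pz - np.log(np.abs(scale))        # Jacobian correction term

# Pushing a standard normal latent through T with scale=2, shift=1
# yields samples distributed as N(1, 4).
rng = np.random.default_rng(0)
z = rng.standard_normal(10_000)
x = forward(z, scale=2.0, shift=1.0)
```

In a trained flow the affine map is replaced by a learned, invertible network, and a stochastic normalizing flow interleaves such deterministic layers with stochastic sampling steps (e.g. MCMC-type kernels), which is what the Markov chain viewpoint of the talk formalizes.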
Bio
Johannes Hertrich received his B.Sc. and M.Sc. degrees in mathematics from TU Kaiserslautern, Germany, in 2018 and 2020, respectively. He is currently a Ph.D. student at TU Berlin, Germany. His research interests include image processing, stochastics, inverse problems and machine learning.
When
May 9th, 2022, 15:00
Where
Room 706, UniGe DIMA, Via Dodecaneso 35, Genova, Italy.
Streaming will be available at the link below.