Generalizing Hamiltonian Monte Carlo with Neural Networks

Daniel Levy
Matthew D. Hoffman
Jascha Sohl-Dickstein
ICLR (2018)

Abstract

We present a general-purpose method to train Markov chain Monte Carlo kernels (parameterized by deep neural networks) that converge and mix quickly to their target distribution. Our method generalizes Hamiltonian Monte Carlo and is trained to maximize expected squared jump distance, a proxy for mixing speed. We demonstrate significant empirical gains (up to $124\times$ greater effective sample size) on a collection of simple but challenging distributions. Finally, we show quantitative and qualitative gains on a real-world task: latent-variable generative modeling. Python source code is included as supplemental material and will be open-sourced with the camera-ready paper.
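To make the training objective concrete, here is a minimal sketch (assuming NumPy; the function name, array shapes, and batching are illustrative conventions, not the paper's released code) of how expected squared jump distance can be estimated from a batch of parallel chains:

```python
import numpy as np

# Minimal illustrative sketch, not the paper's implementation: a Monte Carlo
# estimate of the expected squared jump distance (ESJD) objective mentioned
# in the abstract. Array shapes and the function name are assumptions.
def expected_squared_jump_distance(x, x_proposed, accept_prob):
    """Estimate ESJD = E[ A(x, x') * ||x' - x||^2 ] over a batch of chains.

    Args:
      x: current states, shape [n_chains, dim].
      x_proposed: proposed states from the kernel, shape [n_chains, dim].
      accept_prob: Metropolis-Hastings acceptance probabilities,
        shape [n_chains].

    Returns:
      Scalar estimate; training maximizes this as a proxy for mixing speed.
    """
    sq_jump = np.sum((x_proposed - x) ** 2, axis=1)  # ||x' - x||^2 per chain
    return float(np.mean(accept_prob * sq_jump))
```

Weighting each squared jump by its acceptance probability means rejected proposals contribute nothing, so the objective cannot be inflated by proposing distant points that are never accepted.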