Abstract
Generating samples from a continuous probability density is a central algorithmic problem across statistics, engineering, and the sciences. In high-dimensional settings, Hamiltonian Monte Carlo (HMC) is the default algorithm in mainstream software packages. However, despite the extensive line of work on HMC and its widespread empirical success, it remains unclear how many iterations of HMC are required as a function of the dimension d. On the one hand, a variety of results show that Metropolized HMC converges in O(d^{1/4}) iterations from a warm start close to stationarity. On the other hand, Metropolized HMC is significantly slower without a warm start, requiring \Omega(d^{1/2}) iterations even for simple target distributions such as isotropic Gaussians. Finding a warm start is therefore the computational bottleneck for HMC.
We resolve this issue for the well-studied setting of sampling from a probability distribution satisfying strong log-concavity (or isoperimetry) and third-order derivative bounds. We prove that \emph{non-Metropolized} HMC generates a warm start in \tilde{O}(d^{1/4}) iterations, after which we can exploit the warm start using Metropolized HMC. The resulting algorithm has total iteration complexity \tilde{O}(d^{1/4}), making it the fastest known algorithm for high-accuracy sampling under these assumptions and improving over the prior best of \tilde{O}(d^{1/2}). This closes the long line of work on the dimensional complexity of Metropolized HMC in such settings, and also provides a simple warm-start prescription for practical implementations.
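The two-phase scheme described above can be sketched in a few lines of NumPy. This is a minimal illustration on the isotropic Gaussian target mentioned earlier, not the tuned algorithm from the analysis: the step size, trajectory length, and iteration counts below are placeholder choices, and the only difference between the two phases is whether the Metropolis accept/reject step is applied.

```python
import numpy as np

def leapfrog(q, p, grad_U, step, n_steps):
    """Leapfrog integrator for Hamiltonian dynamics with potential U."""
    p = p - 0.5 * step * grad_U(q)
    for _ in range(n_steps - 1):
        q = q + step * p
        p = p - step * grad_U(q)
    q = q + step * p
    p = p - 0.5 * step * grad_U(q)
    return q, p

def hmc_chain(q0, U, grad_U, step, n_steps, n_iters, metropolize, rng):
    """Run HMC; with metropolize=False every proposal is accepted
    (non-Metropolized / unadjusted HMC)."""
    q = q0
    for _ in range(n_iters):
        p = rng.standard_normal(q.shape)            # resample momentum
        H0 = U(q) + 0.5 * p @ p                     # Hamiltonian before the move
        q_new, p_new = leapfrog(q, p, grad_U, step, n_steps)
        H1 = U(q_new) + 0.5 * p_new @ p_new         # Hamiltonian after the move
        if not metropolize or np.log(rng.uniform()) < H0 - H1:
            q = q_new                               # accept the proposal
    return q

# Example target: isotropic Gaussian, U(q) = ||q||^2 / 2.
d = 50
U = lambda q: 0.5 * q @ q
grad_U = lambda q: q
rng = np.random.default_rng(0)
q = np.full(d, 5.0)  # cold start, far from the mode

# Phase 1: non-Metropolized HMC to generate a warm start.
q = hmc_chain(q, U, grad_U, step=0.2, n_steps=5, n_iters=50,
              metropolize=False, rng=rng)
# Phase 2: Metropolized HMC from the warm start, for high accuracy.
q = hmc_chain(q, U, grad_U, step=0.2, n_steps=5, n_iters=200,
              metropolize=True, rng=rng)
```

The design point is that phase 1 tolerates discretization bias (there is no accept/reject step, so it moves quickly toward the high-probability region), while phase 2 corrects that bias exactly and, started warm, enjoys the O(d^{1/4}) per-iteration behavior discussed above.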