Massively Parallel Exact Inference for Hawkes Processes

arXiv cs.LG / 4/3/2026

Key Points

  • The paper addresses the computational bottleneck of maximum likelihood estimation for multivariate Hawkes processes, which scales naively as O(N^2) with the number of events.
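For the linear exponential Hawkes process, the well-known O(N) recurrence avoids the naive O(N²) double sum over event pairs. A minimal sketch of that recurrence for the one-dimensional case follows; the function name `hawkes_loglik` and the parameter names are illustrative, not from the paper's library.

```python
import math

def hawkes_loglik(times, mu, alpha, beta, T):
    """Exact log-likelihood of a 1-D exponential Hawkes process on [0, T]
    via the classic O(N) recurrence (a sketch; mu is the base rate, alpha
    the excitation jump, beta the exponential decay rate).

    Intensity:  lambda(t)    = mu + alpha * sum_{t_j < t} exp(-beta (t - t_j))
    Recurrence: A_i          = exp(-beta (t_i - t_{i-1})) * (1 + A_{i-1}), A_1 = 0
    so          lambda(t_i)  = mu + alpha * A_i without the O(N^2) double sum.
    """
    loglik = 0.0
    A = 0.0
    prev = None
    for t in times:
        if prev is not None:
            A = math.exp(-beta * (t - prev)) * (1.0 + A)
        loglik += math.log(mu + alpha * A)
        prev = t
    # Compensator term: integral of lambda over [0, T], closed form
    # for the exponential kernel.
    comp = mu * T + (alpha / beta) * sum(
        1.0 - math.exp(-beta * (T - t)) for t in times
    )
    return loglik - comp
```

Evaluated sequentially, this recurrence is exactly the prior-work baseline the paper improves on by parallelizing across events.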

Abstract

Multivariate Hawkes processes are a widely used class of self-exciting point processes, but maximum likelihood estimation naively scales as O(N^2) in the number of events. The canonical linear exponential Hawkes process admits a faster O(N) recurrence, but prior work evaluates this recurrence sequentially, without exploiting parallelization on modern GPUs. We show that the Hawkes process intensity can be expressed as a product of sparse transition matrices admitting a linear-time associative multiply, enabling computation via a parallel prefix scan. This yields a simple yet massively parallelizable algorithm for maximum likelihood estimation of linear exponential Hawkes processes. Our method reduces the computational complexity to approximately O(N/P) with P parallel processors, and naturally yields a batching scheme to maintain constant memory usage, avoiding GPU memory constraints. Importantly, it computes the exact likelihood without any additional assumptions or approximations, preserving the simplicity and interpretability of the model. We demonstrate orders-of-magnitude speedups on simulated and real datasets, scaling to thousands of nodes and tens of millions of events, substantially beyond scales reported in prior work. We provide an open-source PyTorch library implementing our optimizations.