Abstract
In this work, we systematically benchmark two recently developed deep density methods for nonlinear filtering. We model the filtering density of a discretely observed stochastic differential equation through the associated Fokker--Planck equation, coupled with Bayesian updates at discrete observation times. The two filters, the deep splitting filter and the deep backward stochastic differential equation filter, are both based on Feynman--Kac formulas, Euler--Maruyama discretizations, and neural networks. We extend both methods to logarithmic formulations that yield stable, robust, and positivity-preserving density approximations as the state dimension increases. We benchmark the methods on numerous examples against the classical bootstrap particle filter and an ensemble Kalman filter. In the low-dimensional examples the particle filters work well, but when we scale up to a partially observed 100-dimensional Lorenz-96 model, the particle-based methods fail and the logarithmic deep backward stochastic differential equation filter prevails. In terms of computational efficiency, the deep density methods reduce inference time by roughly two to five orders of magnitude relative to the particle-based filters.
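The filtering recursion the abstract describes, prediction through the SDE dynamics followed by a Bayesian correction at each observation time, can be illustrated with the bootstrap particle filter used here as a baseline. The sketch below is a minimal one-dimensional example: the Ornstein--Uhlenbeck drift, noise levels, step sizes, and particle count are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D model (assumed, not from the paper):
# dX_t = -X_t dt + sigma dW_t, observed as Y = X + Gaussian noise.
def drift(x):
    return -x

sigma = 0.5      # diffusion coefficient (assumed)
obs_std = 0.2    # observation-noise standard deviation (assumed)
dt = 0.01        # Euler--Maruyama step size (assumed)
n_particles = 1000

def predict(particles, n_steps):
    """Prediction step: propagate particles with Euler--Maruyama steps,
    which samples from the Fokker--Planck evolution of the density."""
    for _ in range(n_steps):
        noise = rng.normal(size=particles.shape)
        particles = particles + drift(particles) * dt \
            + sigma * np.sqrt(dt) * noise
    return particles

def update(particles, y):
    """Bayesian update at an observation time: weight each particle by
    the Gaussian likelihood of y, then resample with those weights."""
    log_w = -0.5 * ((y - particles) / obs_std) ** 2
    w = np.exp(log_w - log_w.max())   # stabilised in log space
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)
    return particles[idx]

particles = rng.normal(size=n_particles)    # draws from a standard-normal prior
particles = predict(particles, n_steps=10)  # evolve to the observation time
particles = update(particles, y=0.3)        # correct on an observed value
estimate = particles.mean()                 # filtering-mean estimate
```

The deep density filters replace the weighted particle cloud with a neural-network approximation of the (log-)density, trained via Feynman--Kac representations, which is what allows them to scale to high-dimensional state spaces where resampling-based clouds degenerate.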