FluidFlow: a flow-matching generative model for fluid dynamics surrogates on unstructured meshes

arXiv cs.AI / 4/13/2026


Key Points

  • The paper proposes FluidFlow, a conditional flow-matching generative model for building scalable CFD surrogate models that can answer many-query fluid dynamics needs more efficiently than high-fidelity simulations.
  • Unlike approaches that require mesh interpolation, FluidFlow is designed to work directly with CFD data on both structured and unstructured meshes while preserving geometric fidelity.
  • FluidFlow is trained using physically meaningful conditioning parameters and is implemented with two neural network backbones, U-Net and a diffusion transformer (DiT).
  • Experiments on two benchmark tasks—airfoil boundary pressure coefficient prediction and full 3D aircraft pressure/friction prediction on large unstructured meshes—show lower error than strong MLP baselines and better generalization across operating conditions.
  • The transformer-based variant is highlighted as enabling scalable learning on large unstructured datasets while maintaining high predictive accuracy, positioning flow-matching generative modeling as a promising surrogate framework for engineering and scientific applications.

Abstract

Computational fluid dynamics (CFD) provides high-fidelity simulations of fluid flows but remains computationally expensive for many-query applications. In recent years, deep learning (DL) has been used to construct data-driven fluid-dynamic surrogate models. In this work we consider a different learning paradigm and embrace generative modelling as a framework for constructing scalable fluid-dynamics surrogate models. We introduce FluidFlow, a generative model based on conditional flow-matching, a recent alternative to diffusion models that learns deterministic transport maps between noise and data distributions. FluidFlow is specifically designed to operate directly on CFD data defined on structured and unstructured meshes alike, without the need for any mesh interpolation pre-processing, thereby preserving geometric fidelity. We assess the capabilities of FluidFlow using two different core neural network architectures, a U-Net and a diffusion transformer (DiT), and condition their learning on physically meaningful parameters. The methodology is validated on two benchmark problems of increasing complexity: prediction of pressure coefficients along an airfoil boundary across different operating conditions, and prediction of pressure and friction coefficients over a full three-dimensional aircraft geometry discretized on a large unstructured mesh. In both cases, FluidFlow outperforms strong multilayer perceptron baselines, achieving significantly lower error metrics and improved generalisation across operating conditions. Notably, the transformer-based architecture enables scalable learning on large unstructured datasets while maintaining high predictive accuracy. These results demonstrate that flow-matching generative models provide an effective and flexible framework for surrogate modelling in fluid dynamics, with potential for realistic engineering and scientific applications.
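To make the core idea concrete, here is a minimal sketch of how conditional flow-matching training typically works: a network regresses the constant velocity of a straight-line path between a noise sample and a data sample, conditioned on physical parameters. This is a generic illustration, not the paper's implementation; the `VelocityField` class below is a toy stand-in for FluidFlow's U-Net or DiT backbone, and all names are hypothetical.

```python
# Hedged sketch of conditional flow matching (CFM); architecture and names are
# illustrative assumptions, not FluidFlow's actual implementation.
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Toy stand-in for a U-Net / DiT backbone: predicts the transport
    velocity v(x_t, t, c) from the noisy state, time, and condition."""
    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1 + cond_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x_t, t, cond):
        return self.net(torch.cat([x_t, t, cond], dim=-1))

def cfm_loss(model, x1, cond):
    """Flow-matching regression: along the linear interpolant
    x_t = (1 - t) * x0 + t * x1 with x0 ~ N(0, I),
    the target velocity is the constant x1 - x0."""
    x0 = torch.randn_like(x1)           # noise endpoint of the path
    t = torch.rand(x1.shape[0], 1)      # uniform time in [0, 1]
    x_t = (1 - t) * x0 + t * x1         # point on the straight path
    target = x1 - x0                    # velocity of that path
    pred = model(x_t, t, cond)
    return ((pred - target) ** 2).mean()

# Toy usage: 8 samples of a 16-dimensional field, conditioned on two
# flow parameters (e.g. Mach number and angle of attack).
model = VelocityField(dim=16, cond_dim=2)
x1 = torch.randn(8, 16)                 # stands in for CFD field data
cond = torch.randn(8, 2)                # physical conditioning parameters
loss = cfm_loss(model, x1, cond)
loss.backward()                         # gradients reach all parameters
```

At inference time, one would sample noise and integrate the learned velocity field from t = 0 to t = 1 (e.g. with a simple Euler solver) to produce a conditioned field prediction; that deterministic transport is what distinguishes flow matching from stochastic diffusion sampling.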