Null-Space Flow Matching for MIMO Channel Estimation in Latency-Constrained Systems

arXiv cs.LG / 27 Apr 2026


Key Points

  • The paper addresses the need for accurate, low-latency channel state information (CSI) acquisition in MIMO systems, noting that diffusion/score-based generative models can be too slow at inference time.
  • It introduces a null-space flow matching framework that splits pilot-limited CSI estimation into two parts: the range-space component is recovered directly from noisy pilots, while only the ambiguous null-space component is iteratively generated and refined with a flow-matching generative prior.
  • To meet strict latency constraints, the authors use a power-law time schedule to allocate a limited number of refinement steps efficiently during inference.
  • They further improve robustness with a noise-aware adaptive correction strategy that suppresses channel noise along the refinement trajectory.
  • Experiments show competitive NMSE at roughly a 3 ms latency budget, with improved estimation accuracy and faster inference than both model-based and generative baselines.

Abstract

Accurate yet low-latency channel state information (CSI) acquisition is essential for multiple-input multiple-output (MIMO) communication systems. While advanced deep generative models, such as score-based and diffusion models, enable high-fidelity CSI reconstruction from limited pilot observations, they often suffer from high inference latency. To achieve accurate CSI estimation under stringent latency constraints, this paper proposes a null-space flow matching (FM) framework that decomposes pilot-limited MIMO channel estimation into a range-space reconstruction problem and a null-space generation problem. Specifically, the range-space component of the channel is directly recovered from noisy pilot observations, while only the ambiguous null-space component is iteratively refined using an FM-based generative prior. To further improve the robustness of the proposed framework, we introduce a power-law time schedule to better allocate the limited number of refinement steps, along with a noise-aware adaptive correction strategy to suppress channel noise on the refinement trajectory. Experimental results demonstrate that our method achieves a competitive normalized mean square error (NMSE) even under a strict latency budget of around 3 ms, while delivering superior estimation accuracy and faster inference than both model-based and generative baselines.
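The power-law time schedule mentioned in the abstract can be illustrated with a simple grid on a normalized flow time in [0, 1]. The exponent `p`, the direction of clustering, and the `t_i = (i/K)^p` form are assumptions made for this sketch, not values taken from the paper:

```python
import numpy as np

def power_law_schedule(num_steps: int, p: float = 2.0) -> np.ndarray:
    """Monotone time grid t_0 = 0 < ... < t_K = 1 with t_i = (i / K)^p.

    For p > 1 the grid is denser near t = 0 and coarser near t = 1;
    the exponent controls where the few affordable refinement steps land.
    """
    i = np.arange(num_steps + 1)
    return (i / num_steps) ** p

# Under a tight latency budget only a handful of refinement steps fit,
# so a nonuniform grid spends them where the flow changes fastest.
ts = power_law_schedule(num_steps=5, p=2.0)
# ts is [0, 0.04, 0.16, 0.36, 0.64, 1]: small early steps, large late ones.
```

The design question such a schedule answers is step *allocation* rather than step *count*: with, say, five refinement steps allowed by a ~3 ms budget, a uniform grid wastes resolution in regions where the trajectory is nearly linear, while a power-law grid concentrates it where refinement matters most.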