LWM-Temporal: Sparse Spatio-Temporal Attention for Wireless Channel Representation Learning
arXiv cs.LG / 3/12/2026
Key Points
- The paper introduces LWM-Temporal, a new model in the Large Wireless Models family designed to learn universal channel embeddings for spatio-temporal wireless channels.
- Its core mechanism, Sparse Spatio-Temporal Attention (SSTA), operates in the angle–delay–time domain and restricts interactions to physically plausible neighborhoods, cutting attention complexity by roughly an order of magnitude while preserving geometry-consistent dependencies.
- It uses a self-supervised, physics-informed masking curriculum that simulates occlusions, pilot sparsity, and measurement impairments to learn transferable representations.
- Experiments show consistent gains in channel prediction across mobility regimes, especially at long horizons and with limited fine-tuning data, underscoring the value of geometry-aware architectures and pretraining for transferable spatio-temporal wireless representations.
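The neighborhood-restricted attention described above can be sketched in a few lines. This is a minimal NumPy illustration of the general idea, not the paper's implementation: tokens carry hypothetical angle, delay, and time coordinates, and a boolean mask (with assumed radii `r_a`, `r_d`, `r_t`) allows attention only between tokens that are close in all three dimensions.

```python
import numpy as np

def ssta_mask(angles, delays, times, r_a, r_d, r_t):
    """Boolean mask: token i may attend to token j only if j lies inside
    i's angle-delay-time neighborhood (radii are illustrative parameters)."""
    da = np.abs(angles[:, None] - angles[None, :])
    dd = np.abs(delays[:, None] - delays[None, :])
    dt = np.abs(times[:, None] - times[None, :])
    return (da <= r_a) & (dd <= r_d) & (dt <= r_t)

def sparse_attention(Q, K, V, mask):
    """Scaled dot-product attention; disallowed pairs get -inf before softmax."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

# Toy usage: 64 tokens with random coordinates and features.
rng = np.random.default_rng(0)
n, d = 64, 8
angles = rng.uniform(0.0, np.pi, n)
delays = rng.uniform(0.0, 1.0, n)
times = rng.uniform(0.0, 1.0, n)
mask = ssta_mask(angles, delays, times, r_a=0.3, r_d=0.2, r_t=0.2)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = sparse_attention(Q, K, V, mask)
```

Because each token's neighborhood size is roughly constant, the number of nonzero attention pairs grows linearly rather than quadratically in sequence length, which is where the order-of-magnitude complexity reduction comes from.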
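The masking curriculum can likewise be sketched. The function below is an assumption-laden illustration (the names, schedule, and exact patterns are hypothetical, not taken from the paper): the masked fraction ramps up over training, and the mask combines the three impairment styles the summary mentions, a contiguous temporal occlusion, dropped subcarriers mimicking pilot sparsity, and light random element dropout for measurement noise.

```python
import numpy as np

def curriculum_mask(H, step, total_steps, rng):
    """Illustrative self-supervised masking schedule for a channel tensor H
    of shape (antennas, subcarriers, time). Masked fraction ramps 15% -> 60%."""
    A, F, T = H.shape
    ratio = 0.15 + 0.45 * step / total_steps
    mask = np.zeros_like(H, dtype=bool)
    # Occlusion: hide one contiguous block of time steps.
    blk = max(1, int(ratio * T))
    t0 = int(rng.integers(0, T - blk + 1))
    mask[:, :, t0:t0 + blk] = True
    # Pilot sparsity: drop a random subset of subcarriers entirely.
    dropped = rng.random(F) < ratio
    mask[:, dropped, :] = True
    # Measurement impairment: light random element-wise dropout.
    mask |= rng.random(H.shape) < 0.1 * ratio
    # Zero out masked entries; the model is trained to reconstruct them.
    return np.where(mask, 0.0, H), mask

# Toy usage: the same tensor is masked more aggressively late in training.
rng_a, rng_b = np.random.default_rng(1), np.random.default_rng(1)
H = np.random.default_rng(2).standard_normal((4, 32, 64))
_, m_early = curriculum_mask(H, step=0, total_steps=100, rng=rng_a)
_, m_late = curriculum_mask(H, step=100, total_steps=100, rng=rng_b)
```

The curriculum intuition is that easy, low-ratio masks let the model first learn local channel structure, while later high-ratio masks force it to rely on the long-range, geometry-consistent dependencies that transfer across deployments.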