RFPrompt: Prompt-Based Expert Adaptation of the Large Wireless Model for Modulation Classification

arXiv cs.LG / 5/6/2026


Key Points

  • The paper addresses automatic modulation classification (AMC) in real-world settings where models must remain robust against distribution shifts from hardware impairments, new propagation environments, and unseen recording conditions.
  • It proposes RFPrompt, a parameter-efficient, prompt-based adaptation method that adds learnable deep prompt tokens while freezing the pretrained wireless foundation model backbone to avoid overwriting pretrained representations.
  • The approach is evaluated on the Large Wireless Model (LWM), a mixture-of-experts wireless foundation model, across both standard and out-of-distribution (OOD) modulation-classification scenarios.
  • Experiments show that prompt-based adaptation improves robustness under distribution shift and limited supervision, especially on real over-the-air IQ data, while maintaining strong parameter efficiency.
  • Overall, the results indicate that prompt learning is an effective strategy for adapting wireless foundation models to challenging downstream RF environments without retraining the full model.
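The core mechanism behind the points above, deep prompt tuning, can be sketched in a few lines: learnable prompt tokens are prepended to the token sequence at every layer while the backbone weights stay frozen, so only the prompts (plus a task head) are trained. The sketch below is a toy NumPy illustration under assumed dimensions; the layer structure, token counts, and the `tanh`-mixing stand-in for a transformer block are illustrative, not the paper's actual LWM architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 32          # embedding dim (toy value, not from the paper)
N_PROMPT = 4    # learnable prompt tokens per layer (illustrative)
N_LAYERS = 3    # toy backbone depth
SEQ_LEN = 16    # input tokens (e.g., embedded IQ patches)

# Frozen backbone: one mixing matrix per layer as a stand-in for a
# transformer block. These weights are never updated during adaptation.
backbone = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_LAYERS)]

# Trainable deep prompts: a fresh set of N_PROMPT tokens per layer.
prompts = [rng.standard_normal((N_PROMPT, D)) * 0.02 for _ in range(N_LAYERS)]

def forward(x):
    """Deep prompt tuning: at each layer, swap in that layer's own
    learnable prompt tokens before applying the frozen block."""
    h = np.concatenate([prompts[0], x], axis=0)   # prepend layer-0 prompts
    h = np.tanh(h @ backbone[0])
    for l in range(1, N_LAYERS):
        h = np.concatenate([prompts[l], h[N_PROMPT:]], axis=0)
        h = np.tanh(h @ backbone[l])
    return h[N_PROMPT:]   # a task head would read these positions

x = rng.standard_normal((SEQ_LEN, D))
out = forward(x)

# Parameter budget: only the prompts (and a task head) would be trained.
trainable = sum(p.size for p in prompts)
frozen = sum(w.size for w in backbone)
print(out.shape, trainable, frozen)   # → (16, 32) 384 3072
```

Even in this toy setting the trainable prompt parameters (384) are an order of magnitude fewer than the frozen backbone parameters (3072), which is the parameter-efficiency argument the paper makes at foundation-model scale.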

Abstract

Automatic modulation classification (AMC) in real-world deployments demands robustness to distribution shifts arising from hardware impairments, unseen propagation environments, and recording conditions never encountered during training. Although wireless foundation models offer a promising starting point for robust RF representation learning, an important open question is how to adapt them efficiently to out-of-distribution (OOD) downstream tasks without overwriting the structure learned during large-scale pre-training. In this paper, we investigate prompt-based adaptation as a general mechanism for OOD transfer in wireless foundation models. We propose RFPrompt, a parameter-efficient framework that introduces learnable deep prompt tokens while keeping the pretrained backbone frozen, enabling task-specific adaptation with minimal trainable parameters. We instantiate and evaluate this approach on the Large Wireless Model (LWM), a mixture-of-experts wireless foundation model, and study its behavior under both standard and OOD modulation-classification settings. Results show that prompt-based adaptation consistently improves robustness under distribution shift and limited supervision, particularly on real-world over-the-air IQ data, while preserving strong parameter efficiency. These findings suggest that prompt learning is a practical and effective strategy for adapting wireless foundation models to challenging downstream RF environments.