Cross-Validated Cross-Channel Self-Attention and Denoising for Automatic Modulation Classification
arXiv cs.LG / 4/14/2026
Key Points
- The paper targets a limitation of deep-learning Automatic Modulation Classification (AMC): performance drops sharply in low-SNR (noisy) conditions because conventional feature extraction can suppress discriminative signal structure along with interference-relevant information.
- It proposes an AMC architecture that combines a cross-channel self-attention block (linking the in-phase and quadrature components) with dual-path deep residual shrinkage denoising blocks, aiming to preserve modulation features while suppressing noise; a code sketch of both components follows this list.
- Experiments on the RML2018.01a dataset use stratified sampling across 24 modulation classes and 26 SNR levels, showing that the denoising depth is a key factor for robustness in low and moderate SNR regimes.
- Compared with benchmark models (PET-CGDNN, MCLDNN, DAE), the method reports accuracy gains across -8 dB to +2 dB SNR, including a particularly large improvement over DAE.
- Cross-validation results report a mean accuracy of 62.6% and a macro-F1 of 62.9%, and ablation studies indicate that feature-preserving denoising combined with cross-channel attention is essential for low-to-medium SNR robustness; a sketch of the stratified cross-validation protocol follows the architecture sketch below.
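
The snippet below is a minimal PyTorch sketch of the two ideas named in the key points, not the paper's implementation: a residual shrinkage block that denoises via a learned per-channel soft threshold, and a cross-channel attention step in which the in-phase (I) features attend to the quadrature (Q) features and vice versa. Layer sizes, the gating sub-network, and the fusion scheme are illustrative assumptions.

```python
# Sketch of residual shrinkage denoising + cross-channel (I/Q) attention.
# Assumed shapes: raw features as (batch, channels, time) for the shrinkage
# block and (batch, time, dim) for the attention block.
import torch
import torch.nn as nn


class ShrinkageBlock(nn.Module):
    """Residual block with learned soft thresholding for denoising."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
        )
        # Small gating net predicting a per-channel scaling in [0, 1];
        # the threshold is that scaling times the mean absolute activation.
        self.gate = nn.Sequential(
            nn.Linear(channels, channels),
            nn.ReLU(),
            nn.Linear(channels, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, T)
        h = self.conv(x)
        abs_mean = h.abs().mean(dim=2)            # (B, C)
        tau = (abs_mean * self.gate(abs_mean)).unsqueeze(2)  # (B, C, 1)
        # Soft thresholding: shrink small, noise-like activations to zero.
        h = torch.sign(h) * torch.relu(h.abs() - tau)
        return torch.relu(x + h)                  # residual connection


class CrossChannelAttention(nn.Module):
    """I attends to Q and Q attends to I via multi-head attention."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.i_to_q = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.q_to_i = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, i_feat: torch.Tensor, q_feat: torch.Tensor):
        # Both inputs: (B, T, dim). Each branch queries the other channel.
        i_out, _ = self.i_to_q(i_feat, q_feat, q_feat)
        q_out, _ = self.q_to_i(q_feat, i_feat, i_feat)
        return i_feat + i_out, q_feat + q_out
```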
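
The next sketch illustrates the evaluation protocol described in the key points: stratified folds drawn jointly over modulation class and SNR level, scored with accuracy and macro-F1 and averaged across folds. The arrays and the `train_and_predict` helper are placeholders, not code from the paper.

```python
# Stratified k-fold cross-validation over (modulation class, SNR) strata.
# Assumes numpy arrays for the data/labels and a caller-supplied
# train_and_predict(train_x, train_y, test_x) -> predicted labels.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, f1_score


def cross_validate(signals, mod_labels, snr_levels, train_and_predict, k=5):
    # Stratify on the joint (modulation, SNR) combination so every fold
    # keeps the same class/SNR proportions as the full dataset.
    strata = [f"{m}_{s}" for m, s in zip(mod_labels, snr_levels)]
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    accs, f1s = [], []
    for train_idx, test_idx in skf.split(signals, strata):
        preds = train_and_predict(signals[train_idx], mod_labels[train_idx],
                                  signals[test_idx])
        accs.append(accuracy_score(mod_labels[test_idx], preds))
        f1s.append(f1_score(mod_labels[test_idx], preds, average="macro"))
    # Mean accuracy and mean macro-F1 across folds.
    return float(np.mean(accs)), float(np.mean(f1s))
```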