PReD: An LLM-based Foundation Multimodal Model for Electromagnetic Perception, Recognition, and Decision
arXiv cs.AI · March 31, 2026
Key Points
- The paper introduces PReD, described as the first LLM-based foundation multimodal model targeted specifically at electromagnetic (EM) perception, recognition, and decision-making in a closed loop.
- To address EM-domain data scarcity and limited domain knowledge integration, the authors built the PReD-1.3M multitask dataset and a corresponding evaluation benchmark, PReD-Bench.
- PReD is trained on multiple signal representations—time-domain waveforms, time–frequency spectrograms, and constellation diagrams—covering both communication and radar signal features.
- The model supports tasks spanning detection, modulation and protocol recognition, parameter estimation, RF fingerprint recognition, and even anti-jamming decision-making.
- Experiments report state-of-the-art results on PReD-Bench, suggesting vision-aligned foundation-model approaches can significantly improve EM-signal understanding and reasoning.
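To make the three input representations concrete, here is a minimal NumPy sketch (our own illustration, not code from the paper) that derives all three views from one synthetic QPSK signal: the raw time-domain waveform, the per-symbol constellation samples, and a simple sliding-window spectrogram. The parameters (`sps`, window/hop sizes) and helper name `stft_mag` are assumptions for illustration only.

```python
import numpy as np

# Hypothetical illustration of the three signal views discussed above:
# time-domain waveform, constellation diagram, and spectrogram.
# All parameters here are our own choices, not taken from the paper.

rng = np.random.default_rng(0)

# --- Time-domain waveform: QPSK symbols with rectangular pulse shaping ---
n_symbols = 256
bits = rng.integers(0, 4, n_symbols)
constellation = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))  # unit-circle QPSK points
sps = 8                                   # samples per symbol (assumed)
waveform = np.repeat(constellation, sps)  # baseband waveform, length n_symbols * sps

# --- Constellation diagram: sample the waveform once per symbol ---
symbols_rx = waveform[sps // 2 :: sps]

# --- Spectrogram: magnitude of a sliding-window FFT (a basic STFT) ---
def stft_mag(x, win=64, hop=32):
    frames = [x[i : i + win] * np.hanning(win)
              for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.fft(np.stack(frames), axis=1))

spec = stft_mag(waveform)
print(waveform.shape, symbols_rx.shape, spec.shape)
```

In a multimodal pipeline like the one the paper describes, each of these views would be rendered or embedded and fed to the model alongside the task prompt; this sketch only shows how the representations relate to the same underlying signal.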



