CAMEL-CLIP: Channel-aware Multimodal Electroencephalography-text Alignment for Generalizable Brain Foundation Models
arXiv cs.LG / 3/17/2026
Key Points
- CAMEL-CLIP introduces a channel-aware multimodal EEG-text alignment model designed to be robust to heterogeneous EEG channel configurations.
- The model employs three key components: channel attribute-based positional encoding, dynamic channel projection, and dual-level contrastive learning (channel-level and sample-level).
- Experimental results show state-of-the-art performance under linear probing, surpassing existing foundation models even when those models use full fine-tuning.
- The approach aims to enable more generalizable brain foundation models across diverse downstream EEG tasks and channel setups.
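The dual-level contrastive objective in the bullets above can be illustrated with a minimal sketch: a standard symmetric InfoNCE loss applied twice, once at the sample level (whole-recording EEG embeddings vs. paired text) and once at the channel level (per-channel EEG embeddings vs. channel-attribute text). This is a simplified NumPy sketch, not the paper's implementation; all array names, sizes, and the temperature value are illustrative assumptions.

```python
import numpy as np

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE: matched rows of a and b are positive pairs."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature  # (n, n) similarity matrix
    idx = np.arange(len(a))

    def xent(l):
        # Cross-entropy with the diagonal as the target class
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
# Sample level: 8 EEG clips paired with 8 text embeddings (64-d, illustrative)
sample_eeg = rng.normal(size=(8, 64))
sample_txt = rng.normal(size=(8, 64))
# Channel level: 19 channel embeddings paired with channel-attribute text
chan_eeg = rng.normal(size=(19, 64))
chan_txt = rng.normal(size=(19, 64))

dual_loss = info_nce(sample_eeg, sample_txt) + info_nce(chan_eeg, chan_txt)
```

Because the channel-level term is computed per channel rather than over a fixed montage, it is compatible with the heterogeneous channel configurations the paper targets: each recording contributes however many channel pairs it has.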
