Learning Generalizable Action Representations via Pre-training AEMG
arXiv cs.LG / 5/6/2026
Key Points
- The paper introduces Any Electromyography (AEMG), a large-scale self-supervised framework for learning representations of EMG signals that generalize better across subjects, devices, and tasks.
- AEMG treats neuromuscular dynamics in a “linguistic” way by using a Neuromuscular Contraction Tokenizer (NCT) that converts discrete muscle contractions into structural tokens and temporal activations into sentence-like patterns.
- The authors build a very large cross-device EMG signal vocabulary, aiming to support transfer across different channel layouts and sampling rates.
- Experiments show AEMG improves zero-shot leave-one-subject-out (LOSO) accuracy by 5.79–9.25% over six state-of-the-art baselines, and exceeds 90% accuracy in few-shot adaptation using only about 5% of target-user data.
- Overall, the work positions EMG as a "cross-device physiological language" and lays the groundwork for a single, universally applicable EMG foundation model trained once.
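To make the tokenizer idea concrete, here is a toy sketch of what "converting discrete muscle contractions into structural tokens" could look like. This is not the paper's NCT: the function name, the envelope smoothing, the burst segmentation, and the amplitude-bin token scheme are all illustrative assumptions, showing only the general pattern of turning a continuous EMG trace into a short discrete token sequence.

```python
import math

def emg_tokens(signal, fs, rel_thresh=0.2, n_bins=4, smooth_s=0.05):
    """Hypothetical contraction tokenizer (not the paper's NCT):
    rectify the signal, smooth it into an envelope, segment the
    envelope into contraction bursts, and quantize each burst's peak
    amplitude into one of n_bins discrete tokens ("C0".."C3")."""
    # Rectify, then smooth with a moving average (smooth_s seconds).
    win = max(1, int(smooth_s * fs))
    rect = [abs(x) for x in signal]
    env, acc = [], 0.0
    for i, _ in enumerate(rect):
        acc += rect[i]
        if i >= win:
            acc -= rect[i - win]
        env.append(acc / min(i + 1, win))

    peak = max(env) or 1.0
    thresh = rel_thresh * peak  # burst = envelope above 20% of max

    tokens, in_burst, burst_peak = [], False, 0.0
    for e in env:
        if e > thresh:
            in_burst = True
            burst_peak = max(burst_peak, e)
        elif in_burst:
            # Burst ended: quantize its peak amplitude into a token.
            a_bin = min(n_bins - 1, int(n_bins * burst_peak / peak))
            tokens.append(f"C{a_bin}")
            in_burst, burst_peak = False, 0.0
    if in_burst:  # signal ended mid-burst
        a_bin = min(n_bins - 1, int(n_bins * burst_peak / peak))
        tokens.append(f"C{a_bin}")
    return tokens

# Synthetic trace: a strong 100 Hz burst, then a weaker one.
fs = 1000
sig = ([0.0] * 200
       + [math.sin(2 * math.pi * 100 * i / fs) for i in range(300)]
       + [0.0] * 200
       + [0.4 * math.sin(2 * math.pi * 100 * i / fs) for i in range(300)]
       + [0.0] * 200)
print(emg_tokens(sig, fs))  # → ['C3', 'C1']
```

Downstream, such token sequences could be fed to a standard sequence model, which is presumably what makes the "linguistic" framing useful for self-supervised pre-training.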