MoCHA: Denoising Caption Supervision for Motion-Text Retrieval
arXiv cs.CV / 3/26/2026
Key Points
- Text-motion retrieval methods often treat each caption as a single deterministic positive, but captions for the same motion vary due to both motion-recoverable semantics and annotator-specific or context-dependent style that isn’t inferable from 3D joints alone.
- The paper introduces MoCHA, a caption canonicalization framework that reduces within-motion embedding variance by projecting each caption onto the motion-recoverable content before encoding, yielding tighter positive clusters and better embedding separation.
- MoCHA is presented as a preprocessing step compatible with any retrieval architecture, with two implementations: an LLM-based canonicalizer (GPT-5.2) and a distilled FlanT5 variant that avoids calling an LLM at inference time (a rough sketch of such a preprocessing step follows this list).
- Applied to MotionPatches and evaluated on HumanML3D and KIT-ML, MoCHA reports new state-of-the-art results, including +3.1pp text-to-motion (T2M) R@1 on HumanML3D with the LLM variant and +10.3pp on KIT-ML; the LLM-free T5 variant also delivers sizable gains.
- Canonicalization reportedly cuts within-motion text-embedding variance by 11–19% (one plausible way to measure this is sketched below) and markedly improves cross-dataset transfer, with large gains in both directions (HumanML3D→KIT-ML and KIT-ML→HumanML3D).
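The summary does not include MoCHA's actual prompt, LLM interface, or distilled checkpoint, so the following is only a minimal sketch of what caption canonicalization as a preprocessing step could look like, assuming an off-the-shelf instruction-tuned T5 (`google/flan-t5-base`) standing in for the distilled canonicalizer; the instruction text, model choice, and generation settings are illustrative assumptions, not the paper's released implementation.

```python
# Hedged sketch: caption canonicalization as a drop-in preprocessing step.
# Model name and instruction below are illustrative assumptions; they are not
# the paper's GPT-5.2 prompt or its distilled FlanT5 checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/flan-t5-base"  # stand-in for the distilled canonicalizer
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

INSTRUCTION = (
    "Rewrite this motion caption so it keeps only what is recoverable from "
    "the 3D joint motion (actions, body parts, direction, speed) and drops "
    "annotator style, intent, and scene context: "
)

def canonicalize(caption: str) -> str:
    """Project a raw caption onto its motion-recoverable content."""
    inputs = tokenizer(INSTRUCTION + caption, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=48)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# The canonicalized caption, not the raw one, is then fed to the text encoder
# of the retrieval model (e.g., MotionPatches), which is left unchanged.
print(canonicalize("A man casually strolls forward as if heading to work."))
```

Because the step only rewrites text, the downstream retrieval architecture and its training recipe can in principle stay untouched, which is what the portability claim in the bullet above amounts to.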
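Similarly, the exact definition behind the 11–19% figure is not spelled out in this summary; the snippet below shows one plausible way to quantify within-motion text-embedding variance, assuming several captions per motion and L2-normalized text embeddings.

```python
# Hedged sketch: one assumed definition of within-motion text-embedding
# variance (average squared distance of a motion's caption embeddings to
# their mean), not necessarily the metric used in the paper.
import numpy as np

def within_motion_variance(caption_embeddings: dict) -> float:
    """caption_embeddings maps motion_id -> (num_captions, dim) array of
    L2-normalized text embeddings for that motion's captions."""
    per_motion = []
    for embs in caption_embeddings.values():
        mean = embs.mean(axis=0, keepdims=True)
        # spread of this motion's caption embeddings around their centroid
        per_motion.append(((embs - mean) ** 2).sum(axis=1).mean())
    return float(np.mean(per_motion))

# Toy usage with random embeddings: comparing the value on raw captions
# against canonicalized captions would show how much tighter the positive
# clusters become (the paper reports an 11-19% drop).
rng = np.random.default_rng(0)
fake = {f"motion_{i}": rng.normal(size=(3, 512)) for i in range(4)}
fake = {k: v / np.linalg.norm(v, axis=1, keepdims=True) for k, v in fake.items()}
print(within_motion_variance(fake))
```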