
Micro-AU CLIP: Fine-Grained Contrastive Learning from Local Independence to Global Dependency for Micro-Expression Action Unit Detection

arXiv cs.CV / 3/18/2026


Key Points

  • The paper introduces Micro-AU CLIP, a framework for micro-expression AU detection that decomposes the process into local semantic independence modeling (LSI) and global semantic dependency (GSD) modeling.
  • In LSI, Patch Token Attention (PTA) maps local features within an AU region to a shared feature space to better capture locality.
  • In GSD, Global Dependency Attention (GDA) and Global Dependency Loss (GDLoss) model the global relationships between different AUs under specific emotional states, improving AU feature representations.
  • To address CLIP's limitations in micro-semantic alignment, the authors design a microAU contrastive loss (MiAUCL) for fine-grained visual-text alignment of AU features.
  • The approach also extends to emotion-label-free micro-expression recognition and reportedly achieves state-of-the-art performance in micro-AU detection experiments.

Abstract

Micro-expression (ME) action units (Micro-AUs) provide objective clues for fine-grained genuine emotion analysis. Most existing Micro-AU detection methods learn AU features from the whole facial image/video, which conflicts with the inherent locality of AUs, resulting in insufficient perception of AU regions. In fact, each AU independently corresponds to specific localized facial muscle movements (local independence), while there is an inherent dependency between some AUs under specific emotional states (global dependency). Thus, this paper explores the effectiveness of the independence-to-dependency pattern and proposes a novel micro-AU detection framework, Micro-AU CLIP, which uniquely decomposes the AU detection process into local semantic independence (LSI) modeling and global semantic dependency (GSD) modeling. In LSI, Patch Token Attention (PTA) is designed to map several local features within the AU region to the same feature space; in GSD, Global Dependency Attention (GDA) and Global Dependency Loss (GDLoss) are presented to model the global dependency relationships between different AUs, thereby enhancing each AU feature. Furthermore, considering CLIP's native limitations in micro-semantic alignment, a microAU contrastive loss (MiAUCL) is designed to learn AU features via fine-grained alignment of visual and text features. In addition, Micro-AU CLIP is effectively applied to ME recognition in an emotion-label-free way. The experimental results demonstrate that Micro-AU CLIP can fully learn fine-grained micro-AU features, achieving state-of-the-art performance.
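To make the LSI idea concrete: Patch Token Attention pools the patch tokens cropped from one AU region into a single AU feature in a shared space. The sketch below is a minimal NumPy illustration of attention pooling under that description, not the authors' implementation; the learnable query and projection matrix (here random) are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def patch_token_attention(patch_tokens, query, w_proj):
    """Attention-pool the patch tokens inside one AU region into a single
    AU feature vector mapped to a shared feature space (hypothetical sketch).

    patch_tokens: (n_patches, d)  local features cropped from the AU region
    query:        (d,)            per-AU query vector (would be learned)
    w_proj:       (d, d)          projection into the shared feature space
    """
    scores = patch_tokens @ query / np.sqrt(patch_tokens.shape[1])  # (n_patches,)
    weights = softmax(scores)                # attention weights over local patches
    pooled = weights @ patch_tokens          # (d,) weighted sum of patch tokens
    return pooled @ w_proj                   # shared-space AU feature

# Toy usage with random stand-ins for learned parameters.
rng = np.random.default_rng(0)
d, n_patches = 16, 9
tokens = rng.standard_normal((n_patches, d))
au_feat = patch_token_attention(tokens, rng.standard_normal(d),
                                rng.standard_normal((d, d)) / np.sqrt(d))
```

Each AU would get its own query, so distinct AUs attend to their own local muscle regions independently, matching the "local independence" premise.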
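For the GSD side, one plausible reading is self-attention across the per-AU features, with a loss that encourages the attention map to reflect AU dependencies under emotional states. The following is a hedged sketch under that assumption; the uniform co-occurrence prior and the cross-entropy form of the dependency loss are illustrative choices, not the paper's GDLoss.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def global_dependency_attention(au_feats):
    """Self-attention over the set of AU features so that each AU feature is
    refined by its relations to the other AUs (identity Q/K/V for brevity)."""
    d = au_feats.shape[1]
    attn = softmax(au_feats @ au_feats.T / np.sqrt(d), axis=-1)  # (n_au, n_au)
    return attn @ au_feats, attn

def gd_loss(attn, co_occurrence):
    """Hypothetical dependency loss: cross-entropy pulling the attention map
    toward a row-normalized AU co-occurrence prior."""
    prior = co_occurrence / co_occurrence.sum(axis=1, keepdims=True)
    return -np.mean(np.sum(prior * np.log(attn + 1e-8), axis=1))

# Toy usage: 5 AUs with 8-dim features and a uniform (assumed) prior.
rng = np.random.default_rng(1)
au_feats = rng.standard_normal((5, 8))
refined, attn = global_dependency_attention(au_feats)
loss = gd_loss(attn, np.ones((5, 5)))
```

The refined features keep the same shape as the input, so this step slots in after LSI pooling and before the contrastive alignment stage.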
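The MiAUCL objective aligns visual AU features with text features at a finer granularity than CLIP's image-level loss. A common instantiation of such fine-grained alignment is a symmetric InfoNCE over per-AU visual embeddings and per-AU text (prompt) embeddings; the sketch below assumes that form, and the temperature value is an assumption.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def micro_au_contrastive_loss(visual, text, temperature=0.07):
    """Symmetric contrastive (InfoNCE-style) loss: the i-th visual AU feature
    should match the i-th AU text embedding and repel the others.

    visual: (n_au, d)  per-AU visual features
    text:   (n_au, d)  per-AU text/prompt embeddings
    """
    v = visual / np.linalg.norm(visual, axis=1, keepdims=True)
    t = text / np.linalg.norm(text, axis=1, keepdims=True)
    logits = v @ t.T / temperature                     # (n_au, n_au) similarities
    idx = np.arange(len(v))
    p_v2t = softmax(logits, axis=1)                    # visual -> text direction
    p_t2v = softmax(logits, axis=0)                    # text -> visual direction
    return -(np.log(p_v2t[idx, idx] + 1e-8).mean()
             + np.log(p_t2v[idx, idx] + 1e-8).mean()) / 2

# Sanity check: perfectly aligned pairs should score lower than mismatched ones.
rng = np.random.default_rng(2)
text = rng.standard_normal((6, 32))
aligned_loss = micro_au_contrastive_loss(text.copy(), text)
shuffled_loss = micro_au_contrastive_loss(text[::-1].copy(), text)
```

Because positives are defined per AU rather than per image, the loss directly supervises the fine-grained visual-text correspondence the key points describe.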