AttriBE: Quantifying Attribute Expressivity in Body Embeddings for Recognition and Identification
arXiv cs.CV / 5/1/2026
Key Points
- The paper analyzes person re-identification embeddings by introducing “expressivity,” measured as mutual information between learned features and target attributes using a secondary neural network.
- Experiments on three transformer-based ReID models show BMI is the most expressively encoded attribute, especially in deeper layers, while pose expressivity peaks in intermediate layers and varies over training epochs.
- In cross-spectral person identification across infrared bands (short-, medium-, and long-wave), pitch becomes as expressive as BMI and attribute expressivity increases monotonically with depth, indicating greater reliance on structural cues when bridging modalities.
- The authors conclude that transformer-based ReID embeddings contain an attribute hierarchy, with morphometric information persistently represented and pose contributing more strongly under cross-spectral conditions.
- The findings provide a quantitative way to study and potentially mitigate fairness/generalization risks caused by attribute leakage (e.g., gender, pose, BMI).
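The expressivity measure above rests on estimating mutual information between embeddings and an attribute with a learned network. The paper's exact estimator isn't reproduced here; the sketch below swaps in a simpler, commonly used variational lower bound, I(Z; A) ≥ H(A) − CE(A | Z), where CE is the cross-entropy of a small probe (here a linear classifier) predicting attribute A from embedding Z. All names, data, and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def expressivity(Z, a, n_classes, lr=0.5, steps=500):
    """Lower-bound I(Z; A) in nats via a linear probe.

    Z: (n, d) embedding matrix; a: (n,) integer attribute labels.
    Returns H(A) - CE(A | Z), estimated on the training set.
    """
    n, d = Z.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[a]                 # one-hot attribute labels
    for _ in range(steps):                   # full-batch gradient descent
        P = softmax(Z @ W + b)
        G = (P - Y) / n                      # dCE/dlogits
        W -= lr * Z.T @ G
        b -= lr * G.sum(axis=0)
    ce = -np.mean(np.log(softmax(Z @ W + b)[np.arange(n), a] + 1e-12))
    freq = np.bincount(a, minlength=n_classes) / n
    h_a = -np.sum(freq[freq > 0] * np.log(freq[freq > 0]))
    return h_a - ce

# Synthetic check: embeddings that encode the attribute should score
# higher than embeddings that do not.
a = rng.integers(0, 2, size=400)
Z_informative = np.c_[a + 0.1 * rng.normal(size=400), rng.normal(size=400)]
Z_noise = rng.normal(size=(400, 2))
print(expressivity(Z_informative, a, 2) > expressivity(Z_noise, a, 2))
```

A deeper probe network (as the paper uses) would capture nonlinear attribute encoding and yield a tighter bound; the linear version here only illustrates how "expressivity" can be read off as nats of recoverable attribute information.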