SHOE: Semantic HOI Open-Vocabulary Evaluation Metric

arXiv cs.CV / 4/3/2026


Key Points

  • The paper argues that conventional HOI evaluation metrics like mAP inadequately assess open-vocabulary HOI detection because they treat HOI labels as discrete strings and ignore semantic equivalence.
  • It introduces SHOE, a semantic evaluation framework that decomposes each predicted HOI into verb and object components and computes semantic similarity between prediction and ground truth.
  • SHOE estimates semantic similarity using an averaged scoring approach across multiple large language models (LLMs), producing a similarity-based score rather than relying on exact lexical match.
  • Experiments on standard benchmarks such as HICO-DET show SHOE better matches human judgments than existing metrics, reporting 85.73% agreement with average human ratings.
  • The authors state they will release the SHOE evaluation metric publicly to support future research on semantically grounded, open-ended multimodal interaction understanding.
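The scoring recipe described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's released implementation: the toy scorers stand in for the LLM judges, and the product-based combination rule is an assumption (the paper does not specify its exact combination function here).

```python
def shoe_score(pred, gt, scorers):
    """SHOE-style semantic score (illustrative sketch).

    pred, gt: (verb, object) phrase tuples.
    scorers: callables mapping a phrase pair to a similarity in [0, 1],
             standing in for the paper's multiple LLM judges.
    """
    # Decompose into verb and object components, average similarity
    # over all scorers for each component separately.
    verb_sim = sum(s(pred[0], gt[0]) for s in scorers) / len(scorers)
    obj_sim = sum(s(pred[1], gt[1]) for s in scorers) / len(scorers)
    # Combine component similarities; the product is an assumed rule,
    # chosen so that either component mismatching drags the score down.
    return verb_sim * obj_sim


# Toy stand-ins for LLM-based similarity judges:
def exact(a, b):
    # Exact lexical match, as conventional mAP-style evaluation would use.
    return 1.0 if a == b else 0.0

def token_overlap(a, b):
    # Jaccard overlap of word tokens, a crude semantic proxy.
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / max(len(sa | sb), 1)


# "lean on couch" vs. "sit on couch" gets partial credit rather than zero:
score = shoe_score(("lean on", "couch"), ("sit on", "couch"),
                   [exact, token_overlap])
```

Under exact-match scoring alone this prediction would score 0; averaging in even a weak semantic proxy yields a nonzero score, which is the behavior SHOE's LLM-based similarity is designed to capture at a much finer granularity.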

Abstract

Open-vocabulary human-object interaction (HOI) detection is a step towards building scalable systems that generalize to unseen interactions in real-world scenarios and support grounded multimodal systems that reason about human-object relationships. However, standard evaluation metrics, such as mean Average Precision (mAP), treat HOI classes as discrete categorical labels and fail to credit semantically valid but lexically different predictions (e.g., "lean on couch" vs. "sit on couch"), limiting their applicability for evaluating open-vocabulary predictions that go beyond any predefined set of HOI labels. We introduce SHOE (Semantic HOI Open-Vocabulary Evaluation), a new evaluation framework that incorporates semantic similarity between predicted and ground-truth HOI labels. SHOE decomposes each HOI prediction into its verb and object components, estimates their semantic similarity using the average of multiple large language models (LLMs), and combines them into a similarity score to evaluate alignment beyond exact string match. This enables a flexible and scalable evaluation of both existing HOI detection methods and open-ended generative models using standard benchmarks such as HICO-DET. Experimental results show that SHOE scores align more closely with human judgments than existing metrics, including LLM-based and embedding-based baselines, achieving an agreement of 85.73% with the average human ratings. Our work underscores the need for semantically grounded HOI evaluation that better mirrors human understanding of interactions. We will release our evaluation metric to the public to facilitate future research.