RegFormer: Transferable Relational Grounding for Efficient Weakly-Supervised Human-Object Interaction Detection

arXiv cs.CV / 4/2/2026


Key Points

  • RegFormer is proposed as an efficient interaction recognition module for weakly-supervised human-object interaction (HOI) detection using only image-level annotations rather than detailed localization labels.
  • The method addresses prior scaling limits from enumerating many human–object instance pairs by using spatially grounded guidance and locality-aware interaction learning.
  • It mitigates false positives from non-interactive human-object combinations by learning localized interaction cues that better separate humans, objects, and their true interactions.
  • The model is designed to transfer from image-level interaction reasoning to instance-level HOI reasoning without additional training, aiming to reach accuracy comparable to fully supervised approaches.
  • Code is released publicly via the provided GitHub repository, supporting reproducibility and adoption.
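The scaling limit mentioned above comes from pairwise enumeration: detector-based pipelines must score every human–object combination, so the number of candidates grows as H × O. A minimal sketch of that enumeration (the detection lists and counts here are hypothetical, for illustration only):

```python
from itertools import product

def candidate_pairs(humans, objects):
    """Enumerate every human-object candidate pair, as in the
    detector-based weakly-supervised pipelines the paper critiques.
    The candidate count grows as H * O, so crowded scenes get costly."""
    return list(product(humans, objects))

# Hypothetical detections for a crowded scene: 10 humans, 20 objects.
humans = [f"h{i}" for i in range(10)]
objects = [f"o{j}" for j in range(20)]
pairs = candidate_pairs(humans, objects)
print(len(pairs))  # 10 x 20 = 200 candidate pairs to score
```

Most of these 200 pairs are non-interactive, which is exactly the false-positive problem RegFormer's localized interaction cues are designed to mitigate.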

Abstract

Weakly-supervised Human-Object Interaction (HOI) detection is essential for scalable scene understanding, as it learns interactions from only image-level annotations. Due to the lack of localization signals, prior works typically rely on an external object detector to generate candidate pairs and then infer their interactions through pairwise reasoning. However, this framework often struggles to scale due to the substantial computational cost incurred by enumerating numerous instance pairs. In addition, it suffers from false positives arising from non-interactive combinations, which hinder accurate instance-level HOI reasoning. To address these issues, we introduce Relational Grounding Transformer (RegFormer), a versatile interaction recognition module for efficient and accurate HOI reasoning. Under image-level supervision, RegFormer leverages spatially grounded signals as guidance for the reasoning process and promotes locality-aware interaction learning. By learning localized interaction cues, our module distinguishes humans, objects, and their interactions, enabling direct transfer from image-level interaction reasoning to precise and efficient instance-level reasoning without additional training. Our extensive experiments and analyses demonstrate that RegFormer effectively learns spatial cues for instance-level interaction reasoning, operates with high efficiency, and even achieves performance comparable to fully supervised models. Our code is available at https://github.com/mlvlab/RegFormer.