HiCrew: Hierarchical Reasoning for Long-Form Video Understanding via Question-Aware Multi-Agent Collaboration

arXiv cs.AI / 4/25/2026

📰 News · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper introduces HiCrew, a hierarchical multi-agent framework aimed at improving long-form video understanding under challenges like spatiotemporal redundancy and long-range narrative dependencies.
  • It preserves temporal coherence, which is critical for causal reasoning, by using a Hybrid Tree structure that combines shot boundary detection with relevance-guided hierarchical clustering within semantically coherent segments.
  • HiCrew adds a Question-Aware Captioning mechanism that synthesizes intent-driven visual prompts to generate precision-oriented, question-specific semantic descriptions.
  • A Planning Layer dynamically selects agent roles and execution paths based on question complexity, replacing rigid, pre-defined multi-agent workflows.
  • Experiments on EgoSchema and NExT-QA show strong gains, especially for temporal and causal reasoning tasks that benefit from HiCrew’s structure-preserving design.
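To make the Hybrid Tree idea concrete, here is a minimal toy sketch (not the paper's implementation): shot boundaries keep the top level in temporal order, and frames inside each shot are ranked by relevance to the question. The scalar "features", the boundary threshold, and the relevance scores are all invented for illustration.

```python
# Toy sketch of a Hybrid Tree: temporal order preserved at the shot level,
# relevance-guided selection inside each shot. All numbers are invented.

def detect_shot_boundaries(features, threshold=0.5):
    """Split a frame sequence where consecutive features differ sharply."""
    shots, current = [], [0]
    for i in range(1, len(features)):
        if abs(features[i] - features[i - 1]) > threshold:
            shots.append(current)
            current = []
        current.append(i)
    shots.append(current)
    return shots

def build_hybrid_tree(features, relevance, threshold=0.5, top_k=2):
    """Root -> temporally ordered shots -> top-k question-relevant frames."""
    tree = []
    for shot in detect_shot_boundaries(features, threshold):
        ranked = sorted(shot, key=lambda i: relevance[i], reverse=True)
        tree.append({"frames": shot, "selected": sorted(ranked[:top_k])})
    return tree

features  = [0.1, 0.15, 0.12, 0.9, 0.95, 0.88, 0.2, 0.25]  # toy per-frame features
relevance = [0.3, 0.8,  0.1,  0.7, 0.2,  0.9,  0.4, 0.6]   # toy question relevance
tree = build_hybrid_tree(features, relevance)
print(tree)
```

Because clustering happens only *within* shots, the shot-level sequence stays intact, which is what lets downstream agents reason about before/after and cause/effect relations.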

Abstract

Long-form video understanding remains fundamentally challenged by pervasive spatiotemporal redundancy and intricate narrative dependencies that span extended temporal horizons. While recent structured representations compress visual information effectively, they frequently sacrifice temporal coherence, which is critical for causal reasoning. Meanwhile, existing multi-agent frameworks operate through rigid, pre-defined workflows that fail to adapt their reasoning strategies to question-specific demands. In this paper, we introduce HiCrew, a hierarchical multi-agent framework that addresses these limitations through three core contributions. First, we propose a Hybrid Tree structure that leverages shot boundary detection to preserve temporal topology while performing relevance-guided hierarchical clustering within semantically coherent segments. Second, we develop a Question-Aware Captioning mechanism that synthesizes intent-driven visual prompts to generate precision-oriented semantic descriptions. Third, we integrate a Planning Layer that dynamically orchestrates agent collaboration by adaptively selecting roles and execution paths based on question complexity. Extensive experiments on EgoSchema and NExT-QA validate the effectiveness of our approach, demonstrating strong performance across diverse question types with particularly pronounced gains in temporal and causal reasoning tasks that benefit from our hierarchical structure-preserving design.
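The Planning Layer's adaptive orchestration can be illustrated with a deliberately simple sketch. The role names, the keyword-based complexity heuristic, and the routing rules below are all assumptions made for illustration; the paper's planner selects roles and execution paths by question complexity, not by a fixed keyword table.

```python
# Hedged sketch of a planning layer that picks agent roles per question.
# Role names, cue lists, and the routing rule are invented for illustration.

TEMPORAL_CUES = ("before", "after", "then", "while")
CAUSAL_CUES = ("why", "cause", "because", "lead to")

def plan(question):
    q = question.lower()
    roles = ["retriever"]                  # every plan starts with retrieval
    if any(cue in q for cue in TEMPORAL_CUES):
        roles.append("temporal_reasoner")  # ordering / duration questions
    if any(cue in q for cue in CAUSAL_CUES):
        roles.append("causal_reasoner")    # cause-effect questions
    roles.append("answerer")               # final answer synthesis
    # "complexity" here = number of specialist roles; simple questions
    # take a single pass, complex ones get iterative refinement
    path = "single_pass" if len(roles) <= 2 else "iterative_refinement"
    return {"roles": roles, "path": path}

print(plan("What did the person do after washing the dishes?"))
print(plan("What color is the cup?"))
```

The point of the sketch is the contrast with rigid pipelines: a perception-only question skips the specialist agents entirely, while temporal or causal questions recruit extra roles and a longer execution path.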