SGTA: Scene-Graph Based Multi-Modal Traffic Agent for Video Understanding

arXiv cs.CV / 4/7/2026


Key Points

  • SGTA is a modular framework for traffic video understanding that builds structured scene graphs from roadside video via detection, tracking, and lane extraction.
  • It pairs scene-graph queries with multi-modal visual reasoning using tool-based steps to answer diverse traffic-related video questions.
  • The approach uses ReAct to interleave large-language-model reasoning traces with explicit tool invocations, aiming for more interpretable decision-making.
  • Experiments on a sample of the TUMTraffic VideoQA dataset show competitive accuracy across multiple question types while providing transparent reasoning traces.
  • The work suggests that combining structured representations (scene graphs) with multi-modal agentic reasoning can improve both performance and interpretability for traffic video QA.
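
The scene-graph side of the pipeline can be pictured as a small typed graph: detection and tracking yield vehicle nodes, lane extraction yields lane nodes, and spatial relations become edges that symbolic queries run over. The sketch below is a minimal illustration under assumed names (`Node`, `SceneGraph`, the `"occupies"`/`"follows"` relations); it is not SGTA's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    kind: str                      # e.g. "vehicle" or "lane" (illustrative labels)
    attrs: dict = field(default_factory=dict)

@dataclass
class SceneGraph:
    nodes: dict = field(default_factory=dict)   # node_id -> Node
    edges: list = field(default_factory=list)   # (src_id, relation, dst_id) triples

    def add_node(self, node_id, kind, **attrs):
        self.nodes[node_id] = Node(node_id, kind, attrs)

    def add_edge(self, src, relation, dst):
        self.edges.append((src, relation, dst))

    def query(self, relation):
        """Symbolic query: all (src, dst) pairs linked by `relation`."""
        return [(s, d) for s, r, d in self.edges if r == relation]

# Build a tiny graph from hypothetical detector/tracker/lane-extractor outputs.
g = SceneGraph()
g.add_node("lane_1", "lane", direction="northbound")
g.add_node("car_17", "vehicle", cls="car", track_len=42)
g.add_node("truck_3", "vehicle", cls="truck", track_len=40)
g.add_edge("car_17", "occupies", "lane_1")
g.add_edge("truck_3", "occupies", "lane_1")
g.add_edge("car_17", "follows", "truck_3")

print(g.query("occupies"))  # [('car_17', 'lane_1'), ('truck_3', 'lane_1')]
```

A question like "how many vehicles are in lane 1?" then reduces to counting the results of a relation query rather than re-reading pixels.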

Abstract

We present Scene-Graph Based Multi-Modal Traffic Agent (SGTA), a modular framework for traffic video understanding that combines structured scene graphs with multi-modal reasoning. It constructs a traffic scene graph from roadside videos using detection, tracking, and lane extraction, followed by tool-based reasoning over both symbolic graph queries and visual inputs. SGTA adopts ReAct to interleave large-language-model reasoning traces with tool invocations, enabling interpretable decision-making for complex video questions. Experiments on a selected sample of the TUMTraffic VideoQA dataset demonstrate that SGTA achieves competitive accuracy across multiple question types while providing transparent reasoning steps. These results highlight the potential of integrating structured scene representations with multi-modal agents for traffic video understanding.
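The ReAct pattern the abstract describes alternates model-emitted "Thought" steps with explicit "Action" tool calls whose observations are fed back into the context, so the final answer carries a readable trace. The sketch below shows that control flow under assumed names (`graph_query`, `frame_lookup`, the `Thought:`/`Action:`/`Answer:` prefixes); the tool set, prompt format, and the scripted stand-in for the LLM are illustrative, not SGTA's implementation.

```python
def graph_query(arg):
    # Stand-in for a symbolic scene-graph query tool (hypothetical).
    return "2 vehicles occupy lane_1"

def frame_lookup(arg):
    # Stand-in for a visual tool that inspects a video frame (hypothetical).
    return "frame 120: truck ahead of car"

TOOLS = {"graph_query": graph_query, "frame_lookup": frame_lookup}

def react_loop(question, llm, max_steps=5):
    """Interleave LLM reasoning ('Thought') with tool calls ('Action')."""
    trace = [f"Question: {question}"]
    for _ in range(max_steps):
        step = llm("\n".join(trace))      # model emits Thought/Action/Answer
        trace.append(step)
        if step.startswith("Answer:"):
            return step, trace            # final answer plus the full trace
        if step.startswith("Action:"):
            name, _, arg = step[len("Action:"):].strip().partition(" ")
            obs = TOOLS[name](arg)        # execute the named tool
            trace.append(f"Observation: {obs}")
    return "Answer: (no answer)", trace

# Scripted "LLM" responses, just to exercise the loop end to end.
script = iter([
    "Thought: I should count vehicles in the lane.",
    "Action: graph_query lane_1",
    "Answer: two vehicles",
])
answer, trace = react_loop("How many vehicles are in lane 1?",
                           lambda _prompt: next(script))
print(answer)  # Answer: two vehicles
```

The point of keeping the trace as plain text is interpretability: every tool call and observation the agent used on the way to the answer is visible in order.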