ForeSea: AI Forensic Search with Multi-modal Queries for Video Surveillance

arXiv cs.CV / 3/25/2026


Key Points

  • The paper introduces ForeSeaQA, a new benchmark for video question answering in surveillance scenarios that uses image-and-text (multimodal) queries with timestamped event annotations to enable evaluation of retrieval, temporal grounding, and multimodal reasoning.
  • It argues prior surveillance search methods (tracking pipelines, CLIP-based approaches, and VideoRAG) struggle due to manual filtering burdens, shallow attribute capture, and weak temporal reasoning, especially in long multi-camera footage.
  • The proposed ForeSea system uses a three-stage plug-and-play pipeline: a tracking module to filter irrelevant footage, a multimodal embedding module to index clips, and inference that retrieves top-K candidates for a Video LLM to answer and localize events.
  • On ForeSeaQA, ForeSea reportedly improves accuracy by 3.5% and temporal IoU by 11.0 points compared with prior VideoRAG models, positioning it as a first-of-its-kind approach for complex multimodal queries with precise temporal grounding.

Abstract

Despite decades of work, surveillance still struggles to find specific targets across long, multi-camera video. Prior methods -- tracking pipelines, CLIP-based models, and VideoRAG -- require heavy manual filtering, capture only shallow attributes, and fail at temporal reasoning. Real-world searches are inherently multimodal (e.g., "When does this person join the fight?" accompanied by the person's image), yet this setting remains underexplored. Moreover, no suitable benchmark exists for evaluating it: querying video with combined image-and-text inputs. To address this gap, we introduce ForeSeaQA, a new benchmark specifically designed for video QA with image-and-text queries and timestamped annotations of key events. The dataset consists of long-horizon surveillance footage paired with diverse multimodal questions, enabling systematic evaluation of retrieval, temporal grounding, and multimodal reasoning in realistic forensic conditions. Beyond the benchmark, we propose ForeSea, an AI forensic search system with a three-stage, plug-and-play pipeline: (1) a tracking module filters irrelevant footage; (2) a multimodal embedding module indexes the remaining clips; and (3) at inference time, the system retrieves top-K candidate clips for a Video Large Language Model (VideoLLM) to answer queries and localize events. On ForeSeaQA, ForeSea improves accuracy by 3.5% and temporal IoU by 11.0 points over prior VideoRAG models. To our knowledge, ForeSeaQA is the first benchmark to support complex multimodal queries with precise temporal grounding, and ForeSea is the first VideoRAG system built to excel in this setting.
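The retrieval stage of the pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the query's image and text embeddings are fused by simple averaging (the paper's actual fusion method is not specified here), uses toy 4-dimensional vectors, and ranks candidate clips by cosine similarity to return the top-K.

```python
import numpy as np

def top_k_retrieve(query_emb, clip_embs, k=3):
    """Return indices of the k clips most cosine-similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    c = clip_embs / np.linalg.norm(clip_embs, axis=1, keepdims=True)
    sims = c @ q                     # cosine similarity per clip
    return np.argsort(-sims)[:k]     # indices of the k highest scores

# Hypothetical fused image+text query embedding (averaged; toy 4-dim vectors).
image_emb = np.array([1.0, 0.0, 0.0, 0.0])
text_emb  = np.array([0.0, 1.0, 0.0, 0.0])
query = (image_emb + text_emb) / 2

clip_embs = np.array([
    [0.9, 0.8, 0.0, 0.1],   # clip 0: matches both modalities
    [0.0, 0.0, 1.0, 0.0],   # clip 1: unrelated
    [1.0, 0.1, 0.0, 0.0],   # clip 2: matches the image only
])
print(top_k_retrieve(query, clip_embs, k=2))  # → [0 2]
```

In the full system, the retrieved clips (rather than raw indices) would be passed to the VideoLLM for answer generation and temporal localization.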