AI Navigate

Thinking in Streaming Video

arXiv cs.CV / 3/16/2026


Key Points

  • ThinkStream introduces a streaming video reasoning framework that incrementally updates understanding as new video observations arrive, reducing latency for interactive agents.
  • It employs a Watch-Think-Speak paradigm in which the model performs a short reasoning update at each step and decides when enough evidence has accumulated to respond.
  • The framework uses Reasoning-Compressed Streaming Memory (RCSM) to store intermediate reasoning traces as compact semantic memory, replacing outdated visual tokens while preserving essential context for long-horizon tasks.
  • A Streaming Reinforcement Learning with Verifiable Rewards scheme aligns incremental reasoning and response timing with the demands of streaming interaction.
  • Experiments on streaming video benchmarks show improved performance with low latency and reduced memory usage; code and models will be released on GitHub.
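The Watch-Think-Speak loop in the points above can be sketched as a simple control flow: at each step the model ingests a new observation, performs a short reasoning update, and checks whether it is confident enough to answer. This is a minimal illustration, not the paper's implementation; `reason_step` and `confidence` are hypothetical stand-ins for the model's incremental update and its learned response-timing decision.

```python
from dataclasses import dataclass, field

@dataclass
class StreamState:
    # Accumulated evidence from the video stream (toy representation).
    evidence: list = field(default_factory=list)

def reason_step(state: StreamState, frame) -> StreamState:
    # Watch: ingest the new observation; Think: a short incremental update.
    state.evidence.append(frame)
    return state

def confidence(state: StreamState) -> float:
    # Toy proxy: confidence grows as evidence accumulates.
    return min(1.0, len(state.evidence) / 4)

def watch_think_speak(frames, threshold: float = 0.9):
    state = StreamState()
    for t, frame in enumerate(frames):
        state = reason_step(state, frame)
        if confidence(state) >= threshold:
            # Speak: enough evidence has accumulated to respond.
            return t, f"answer based on {len(state.evidence)} frames"
    return None, "stream ended without a confident answer"
```

The key contrast with batch models is that the decision to respond is made inside the loop, per observation, rather than after the full video has been seen.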

Abstract

Real-time understanding of continuous video streams is essential for interactive assistants and multimodal agents operating in dynamic environments. However, most existing video reasoning approaches follow a batch paradigm that defers reasoning until the full video context is observed, resulting in high latency and growing computational cost that are incompatible with streaming scenarios. In this paper, we introduce ThinkStream, a framework for streaming video reasoning based on a Watch-Think-Speak paradigm that enables models to incrementally update their understanding as new video observations arrive. At each step, the model performs a short reasoning update and decides whether sufficient evidence has accumulated to produce a response. To support long-horizon streaming, we propose Reasoning-Compressed Streaming Memory (RCSM), which treats intermediate reasoning traces as compact semantic memory that replaces outdated visual tokens while preserving essential context. We further train the model using a Streaming Reinforcement Learning with Verifiable Rewards scheme that aligns incremental reasoning and response timing with the requirements of streaming interaction. Experiments on multiple streaming video benchmarks show that ThinkStream significantly outperforms existing online video models while maintaining low latency and memory usage. Code, models, and data will be released at https://github.com/johncaged/ThinkStream.
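The RCSM mechanism described in the abstract, where intermediate reasoning traces act as compact semantic memory replacing outdated visual tokens, can be illustrated with a bounded-memory sketch. This is an assumption-laden toy, not the paper's method: `summarize` stands in for the model compressing old reasoning into a short trace, and the eviction policy (compress the oldest half when over budget) is invented for illustration.

```python
from collections import deque

def summarize(tokens) -> str:
    # Hypothetical stand-in for the model compressing old context
    # into a short reasoning trace.
    return f"<summary of {len(tokens)} tokens>"

class RCSM:
    """Toy Reasoning-Compressed Streaming Memory: recent visual tokens
    are kept verbatim; older ones are replaced by compact summaries."""

    def __init__(self, visual_budget: int = 8):
        self.visual = deque()    # recent visual tokens, bounded in size
        self.semantic = []       # compact reasoning summaries
        self.visual_budget = visual_budget

    def add(self, tokens) -> None:
        self.visual.extend(tokens)
        # When over budget, compress the oldest half into a summary,
        # so total memory stays bounded as the stream grows.
        while len(self.visual) > self.visual_budget:
            old = [self.visual.popleft() for _ in range(self.visual_budget // 2)]
            self.semantic.append(summarize(old))

    def context(self):
        # Model context = semantic memory followed by recent visual tokens.
        return self.semantic + list(self.visual)
```

The point of the design is that the context length stays roughly constant over a long stream, which is what makes the low-latency, low-memory claims plausible for long-horizon tasks.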