Thinking in Streaming Video
arXiv cs.CV / 3/16/2026
Key Points
- ThinkStream introduces a streaming video reasoning framework that incrementally updates understanding as new video observations arrive, reducing latency for interactive agents.
- It employs a Watch-Think-Speak paradigm in which the model performs a short reasoning update at each step and decides when enough evidence has accumulated to respond.
- The framework uses Reasoning-Compressed Streaming Memory (RCSM) to store intermediate reasoning traces as compact semantic memory, replacing outdated visual tokens while preserving essential context for long-horizon tasks.
- A Streaming Reinforcement Learning with Verifiable Rewards scheme is proposed to align incremental reasoning and response timing with streaming interaction requirements. Experiments show improved performance, low latency, and reduced memory usage; code and models will be released on GitHub.
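
The Watch-Think-Speak loop with reasoning-compressed memory can be illustrated with a minimal sketch. All names here (`StreamingReasoner`, the compression and confidence logic) are hypothetical stand-ins, not ThinkStream's actual interfaces; the point is only the control flow: ingest a frame, do a short reasoning update that compresses stale visual tokens into a compact summary, and answer once enough evidence has accumulated.

```python
from collections import deque
from dataclasses import dataclass, field

# Hypothetical sketch of a Watch-Think-Speak streaming loop with a
# reasoning-compressed memory (a stand-in for RCSM). Names and thresholds
# are illustrative assumptions, not the paper's implementation.

@dataclass
class StreamingReasoner:
    memory_budget: int = 4            # max memory entries retained
    confidence_threshold: float = 0.9 # evidence needed before speaking
    memory: deque = field(default_factory=deque)
    confidence: float = 0.0

    def watch(self, frame_tokens: list) -> None:
        """Ingest new visual tokens from the incoming frame."""
        self.memory.append(("visual", frame_tokens))

    def think(self) -> None:
        """Short reasoning update: merge the oldest entries into a compact
        semantic summary so memory stays bounded, then accumulate evidence."""
        while len(self.memory) > self.memory_budget:
            a = self.memory.popleft()
            b = self.memory.popleft()
            self.memory.appendleft(("reasoning", f"summary({a[1]}, {b[1]})"))
        # Toy evidence accumulation; a real model would score its trace.
        self.confidence = min(1.0, self.confidence + 0.25)

    def speak(self):
        """Respond only once enough evidence has accumulated."""
        if self.confidence >= self.confidence_threshold:
            return f"answer based on {len(self.memory)} memory entries"
        return None  # keep watching

def run_stream(frames):
    """Drive the Watch-Think-Speak loop over a stream of frames."""
    agent = StreamingReasoner()
    for frame in frames:
        agent.watch(frame)
        agent.think()
        response = agent.speak()
        if response is not None:
            return response
    return None
```

The key property the sketch captures is that per-step work is constant: memory is capped at a fixed budget, so latency does not grow with stream length, while the compressed reasoning summaries preserve earlier context for long-horizon questions.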
