VideoFlexTok: Flexible-Length Coarse-to-Fine Video Tokenization

arXiv cs.CV · April 15, 2026


Key Points

  • The paper introduces VideoFlexTok, a video tokenization method that uses a variable-length, coarse-to-fine sequence of tokens rather than a fixed spatiotemporal 3D token grid.
  • Early (coarse) tokens are designed to capture more abstract information like semantics and motion, while later (fine) tokens progressively add fine-grained detail.
  • Using a generative flow decoder, the approach can reconstruct realistic videos from any chosen token count, enabling compute-adaptive fidelity.
  • Experiments on class- and text-to-video generation indicate improved training efficiency, including comparable quality with a significantly smaller model (1.1B vs 5.2B parameters).
  • The method supports longer video generation under limited compute: a text-to-video model is trained on 10-second, 81-frame clips with only 672 tokens, 8x fewer than a comparable 3D-grid tokenizer requires.
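The compute-adaptive idea in the points above can be sketched in a few lines. This is an illustrative toy, not the paper's code: `encode` and `decode` are hypothetical stand-ins for VideoFlexTok's encoder and generative flow decoder, and the tokens are placeholders. The one property it demonstrates is the interface: the decoder accepts any prefix of the coarse-to-fine token sequence, so fidelity scales with the chosen token budget.

```python
# Toy sketch (hypothetical API, not the paper's implementation).

def encode(video, max_tokens=672):
    """Stand-in encoder: returns a 1D token sequence ordered coarse-to-fine
    (early tokens ~ semantics/motion, later tokens ~ fine detail)."""
    return [f"tok{i}" for i in range(max_tokens)]  # placeholder tokens

def decode(tokens):
    """Stand-in generative flow decoder: reconstructs a video from ANY
    prefix of the token sequence; here we just report the budget used."""
    return {"num_tokens": len(tokens)}  # placeholder "reconstruction"

tokens = encode(None)  # placeholder raw video input

# Compute-adaptive fidelity: choose a token budget per downstream need.
for budget in (64, 256, 672):
    recon = decode(tokens[:budget])
    print(budget, recon["num_tokens"])
```

A fixed 3D-grid tokenizer has no analogous knob: every video costs the full grid of tokens regardless of its complexity.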

Abstract

Visual tokenizers map high-dimensional raw pixels into a compressed representation for downstream modeling. Beyond compression, tokenizers dictate what information is preserved and how it is organized. A de facto standard approach to video tokenization is to represent a video as a spatiotemporal 3D grid of tokens, each capturing the corresponding local information in the original signal. This requires the downstream model that consumes the tokens, e.g., a text-to-video model, to learn to predict all low-level details "pixel-by-pixel" irrespective of the video's inherent complexity, leading to high learning complexity. We present VideoFlexTok, which represents videos with a variable-length sequence of tokens structured in a coarse-to-fine manner -- where the first tokens (emergently) capture abstract information, such as semantics and motion, and later tokens add fine-grained details. The generative flow decoder enables realistic video reconstructions from any token count. This representation structure allows adapting the token count according to downstream needs and encoding videos longer than the baselines with the same budget. We evaluate VideoFlexTok on class- and text-to-video generative tasks and show that it leads to more efficient training compared to 3D grid tokens, e.g., achieving comparable generation quality (gFVD and ViCLIP Score) with a 5x smaller model (1.1B vs 5.2B). Finally, we demonstrate how VideoFlexTok can enable long video generation without prohibitive computational cost by training a text-to-video model on 10-second 81-frame videos with only 672 tokens, 8x fewer than a comparable 3D grid tokenizer.
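A back-of-envelope check makes the abstract's long-video claim concrete. Using only the reported numbers (672 tokens for a 10-second, 81-frame clip, stated as 8x fewer than a comparable 3D-grid tokenizer), the implied grid budget and per-frame cost are:

```python
# Arithmetic on the figures reported in the abstract.
flextok_tokens = 672            # VideoFlexTok budget for 81 frames
grid_tokens = flextok_tokens * 8  # implied comparable 3D-grid budget
frames = 81

print(grid_tokens)                      # implied grid total: 5376 tokens
print(round(grid_tokens / frames, 1))   # grid cost per frame: ~66.4 tokens
```

Since attention cost in a downstream transformer grows quadratically with sequence length, an 8x reduction in tokens translates to far more than an 8x saving at that stage, which is what makes 10-second training tractable.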