(1D) Ordered Tokens Enable Efficient Test-Time Search

arXiv cs.AI / 4/20/2026


Key Points

  • The paper examines whether the way tokens are structured in autoregressive generative models affects how effectively test-time search can steer generation using a verifier.
  • Using image generation experiments, it finds that recent 1D ordered tokenizers with coarse-to-fine structure scale better at test time than traditional 2D grid-based tokenizations.
  • The authors argue coarse-to-fine intermediate states carry more semantic meaning, allowing verifiers to evaluate them reliably and thus enabling more effective steering.
  • They further show that ordered token structure allows training-free text-to-image generation driven purely by test-time search over token sequences when guided by an image-text verifier.
  • The study compares multiple classical search strategies (best-of-N, beam search, lookahead) and analyzes how different verifiers and AR priors interact with token structures, yielding guidance for inference-time scaling.
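To make the first two search strategies concrete, here is a toy Python sketch of best-of-N and beam search steered by a verifier over token prefixes. All names here (`VOCAB`, `TARGET`, `verify`, `best_of_n`, `beam_search`) are hypothetical stand-ins, not code from the paper: the verifier simply rewards agreement with a fixed target sequence, mimicking the idea that coarse-to-fine prefixes carry enough meaning to be scored.

```python
import random

# Toy stand-ins (not from the paper): a tiny vocabulary, a fixed "ideal"
# sequence, and a verifier that scores how well a (partial) sequence
# agrees with it. Coarse-to-fine ordering means even short prefixes are
# semantically meaningful, so the verifier can score them reliably.
VOCAB = list(range(8))
TARGET = [3, 1, 4, 1, 5, 1, 2, 6]

def verify(seq):
    """Verifier stand-in: count of positions agreeing with TARGET."""
    return sum(1 for a, b in zip(seq, TARGET) if a == b)

def best_of_n(n, length):
    """Best-of-N: sample n complete sequences, keep the verifier's
    favorite. The verifier only ever sees finished sequences."""
    cands = [[random.choice(VOCAB) for _ in range(length)] for _ in range(n)]
    return max(cands, key=verify)

def beam_search(beam_width, length):
    """Beam search: extend every beam with every token, keep the
    top-scoring partial sequences. This exploits scorable prefixes."""
    beams = [[]]
    for _ in range(length):
        expanded = [b + [t] for b in beams for t in VOCAB]
        beams = sorted(expanded, key=verify, reverse=True)[:beam_width]
    return beams[0]
```

Because beam search can prune using prefix scores, it recovers `TARGET` exactly in this toy setting, while best-of-N only wins by luck. This is the intuition behind the paper's claim: prefix-scorability is what 1D ordered, coarse-to-fine tokenizers provide and 2D grid tokenizers largely lack.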

Abstract

Tokenization is a key component of autoregressive (AR) generative models, converting raw data into more manageable units for modeling. Commonly, tokens describe local information, such as regions of pixels in images or word pieces in text, and AR generation predicts these tokens in a fixed order. A worthwhile question is whether token structures affect the ability to steer the generation through test-time search, where multiple candidate generations are explored and evaluated by a verifier. Using image generation as our testbed, we hypothesize that recent 1D ordered tokenizers with coarse-to-fine structure can be more amenable to search than classical 2D grid structures. This is rooted in the fact that the intermediate states in coarse-to-fine sequences carry semantic meaning that verifiers can reliably evaluate, enabling effective steering during generation. Through controlled experiments, we find that AR models trained on coarse-to-fine ordered tokens exhibit improved test-time scaling behavior compared to grid-based counterparts. Moreover, we demonstrate that, thanks to the ordered structure, pure test-time search over token sequences (i.e., without training an AR model) can perform training-free text-to-image generation when guided by an image-text verifier. Beyond this, we systematically study how classical search algorithms (best-of-N, beam search, lookahead search) interact with different token structures, as well as the role of different verifiers and AR priors. Our results highlight the impact of token structure on inference-time scalability and provide practical guidance for test-time scaling in AR models.
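The lookahead strategy and the training-free result can be illustrated with a similar toy sketch. Everything below is a hypothetical stand-in (a verifier that rewards agreement with a fixed target sequence), not the paper's implementation: candidate tokens are judged by short random rollouts scored with the verifier, and because the rollouts are random rather than drawn from a trained AR prior, the verifier alone steers generation.

```python
import random

# Hypothetical toy setup (not from the paper): a verifier that scores
# positional agreement with a fixed target sequence.
VOCAB = list(range(8))
TARGET = [3, 1, 4, 1, 5, 1, 2, 6]

def verify(seq):
    """Verifier stand-in: count of positions agreeing with TARGET."""
    return sum(1 for a, b in zip(seq, TARGET) if a == b)

def rollout(prefix, depth):
    """Extend a prefix with random tokens. An AR prior would normally
    propose these continuations; using random tokens makes the search
    training-free, with only the verifier steering generation."""
    return prefix + [random.choice(VOCAB) for _ in range(depth)]

def lookahead_search(length, depth=2, samples=16):
    """Greedy decoding where each candidate token is judged by the
    average verifier score of short rollouts, not the prefix alone."""
    seq = []
    for _ in range(length):
        def score(tok):
            exts = [rollout(seq + [tok], depth) for _ in range(samples)]
            return sum(map(verify, exts)) / samples
        seq.append(max(VOCAB, key=score))
    return seq
```

This is only a crude analogue of the paper's training-free text-to-image generation, but it captures the mechanism: search over token sequences guided purely by a verifier can succeed only when intermediate states are meaningful enough for the verifier to score, which is exactly what the ordered, coarse-to-fine token structure provides.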