End-to-End Autoregressive Image Generation with 1D Semantic Tokenizer

arXiv cs.CV / 5/4/2026


Key Points

  • The paper proposes an end-to-end autoregressive image generation framework that jointly trains a 1D semantic tokenizer alongside the generative model, enabling supervision of the tokenizer directly from generation outcomes.
  • Unlike prior two-stage methods that separately train tokenizers and image generators, the approach optimizes reconstruction and generation together in a single pipeline.
  • The authors explore using vision foundation models to improve 1D tokenizers, aiming to strengthen autoregressive image modeling.
  • The resulting autoregressive model achieves strong generation quality, reporting an FID of 1.48 on ImageNet 256×256 without guidance, which the authors describe as state of the art.

Abstract

Autoregressive image modeling relies on visual tokenizers to compress images into compact latent representations. We design an end-to-end training pipeline that jointly optimizes reconstruction and generation, enabling direct supervision from generation results to the tokenizer. This contrasts with prior two-stage approaches that train tokenizers and generative models separately. We further investigate leveraging vision foundation models to improve 1D tokenizers for autoregressive modeling. Our autoregressive generative model achieves strong empirical results, including a state-of-the-art FID score of 1.48 without guidance on ImageNet 256×256 generation.
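The core idea — a single joint objective that lets gradients from the generation loss reach the tokenizer — can be sketched in miniature. The following toy example uses tiny linear maps as stand-ins for the 1D tokenizer and the autoregressive model; all names, shapes, and the squared-error losses are illustrative assumptions, not the paper's actual architecture or training recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: image dim, token dim, token sequence length.
D_IMG, D_TOK, N_TOK = 16, 4, 8

# Linear stand-ins for the tokenizer encoder/decoder and the AR predictor.
enc = rng.normal(size=(D_IMG, N_TOK * D_TOK)) * 0.1
dec = rng.normal(size=(N_TOK * D_TOK, D_IMG)) * 0.1
ar = rng.normal(size=(D_TOK, D_TOK)) * 0.1


def joint_loss(x, lam=1.0):
    """Reconstruction loss plus autoregressive prediction loss on the
    same tokens. In an end-to-end pipeline both terms are minimized
    together, so the generation term also supervises the tokenizer."""
    z = x @ enc                           # encode image -> flat 1D token sequence
    recon = z @ dec                       # decode tokens back to image space
    loss_recon = np.mean((recon - x) ** 2)

    toks = z.reshape(N_TOK, D_TOK)        # sequence of 1D tokens
    pred = toks[:-1] @ ar                 # predict token t+1 from token t
    loss_gen = np.mean((pred - toks[1:]) ** 2)
    return loss_recon + lam * loss_gen


x = rng.normal(size=(D_IMG,))
print(joint_loss(x))
```

Because the loss is a single scalar flowing through `enc`, `dec`, and `ar`, gradient descent on it updates the tokenizer and the generative model jointly — the contrast with two-stage training, where the tokenizer is frozen before the generator is trained.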
