Semantic-Aware Prefix Learning for Token-Efficient Image Generation

arXiv cs.CV / March 27, 2026


Key Points

  • The paper argues that existing visual tokenizers for latent image generation are often trained with reconstruction-dominated objectives, producing latent codes that may be weakly grounded in high-level semantics.
  • It proposes SMAP (SeMantic-Aware Prefix tokenizer), which injects class-level semantic conditions into a query-based 1D tokenization framework and makes semantics functionally necessary via a tail token dropping strategy.
  • The method forces semantic conditioning and early latent prefixes to increasingly carry the training burden as the available token budget decreases.
  • To ensure the learned latent space supports generation beyond reconstruction, the authors introduce CARD, a hybrid Causal AutoRegressive plus Diffusion generator.
  • Experiments on ImageNet reportedly show SMAP improves reconstruction quality across discrete and continuous tokenization setups and yields strong downstream generation performance even with compact token budgets.
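The tail token dropping idea in the points above can be sketched in a few lines: during training, only a random-length prefix of the 1D latent token sequence is kept, so the semantic condition and early tokens must carry more of the reconstruction burden. This is a minimal illustration; the paper's exact sampling schedule and minimum budget are assumptions here.

```python
import random

def tail_token_drop(tokens, min_keep=1, rng=None):
    """Keep only a random-length prefix of a 1D latent token sequence.

    Dropping the tail during training forces the semantic condition and
    early prefix tokens to shoulder more of the training objective.
    `min_keep` and the uniform length sampling are illustrative
    assumptions, not the paper's specified schedule.
    """
    rng = rng or random
    keep = rng.randint(min_keep, len(tokens))  # sampled prefix length
    return tokens[:keep]

# Example: a 32-token latent sequence truncated to a random prefix.
rng = random.Random(0)
latents = list(range(32))
prefix = tail_token_drop(latents, rng=rng)
```

At inference or evaluation time the full token budget can still be used; the dropout only shapes where information concentrates during training.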

Abstract

Visual tokenizers play a central role in latent image generation by bridging high-dimensional images and tractable generative modeling. However, most existing tokenizers are still trained with reconstruction-dominated objectives, which often yield latent representations that are only weakly grounded in high-level semantics. Recent approaches improve semantic alignment, but typically treat semantic signals as auxiliary regularization rather than making them functionally necessary for representation learning. We propose SMAP, a SeMantic-Aware Prefix tokenizer that injects class-level semantic conditions into a query-based 1D tokenization framework. To make semantics indispensable during training, SMAP introduces a tail token dropping strategy, which forces semantic conditions and early latent prefixes to bear increasing responsibility under progressively reduced token budgets. To verify that the resulting latent space is useful for generation rather than reconstruction alone, we further introduce CARD, a hybrid Causal AutoRegressive--Diffusion generator. Extensive experiments on ImageNet show that SMAP consistently improves reconstruction quality across discrete and continuous tokenization settings, and that its semantically grounded latent space yields strong downstream generation performance under compact token budgets.
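As a rough intuition for the query-based 1D tokenization with class-level conditioning described in the abstract, the sketch below cross-attends a small set of learnable query vectors, prepended with a class embedding as the semantic condition, over patch features to produce a 1D latent sequence. The shapes, single attention layer, and the way the class embedding is injected are illustrative assumptions, not SMAP's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def query_tokenize(patches, queries, class_emb):
    """Toy query-based 1D tokenizer with a semantic condition.

    patches:   (N, d) image patch features
    queries:   (K, d) learnable latent queries
    class_emb: (d,)   class-level semantic embedding (the condition)

    Returns a (1 + K, d) sequence of latent tokens produced by one
    step of scaled dot-product cross-attention -- a hypothetical
    stand-in for the paper's tokenizer.
    """
    q = np.concatenate([class_emb[None, :], queries], axis=0)   # (1+K, d)
    scores = q @ patches.T / np.sqrt(patches.shape[1])          # (1+K, N)
    return softmax(scores) @ patches                            # (1+K, d)

# Example: 16 patch features, 4 queries, 8-dim embeddings.
rng = np.random.default_rng(0)
lat = query_tokenize(rng.normal(size=(16, 8)),
                     rng.normal(size=(4, 8)),
                     rng.normal(size=8))
```

Because the class embedding sits at the front of the sequence, tail token dropping never removes it, which is one simple way conditioning can remain "functionally necessary" under shrinking token budgets.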