MacTok: Robust Continuous Tokenization for Image Generation

arXiv cs.CV / 4/1/2026


Key Points

  • The paper introduces MacTok, a masked 1D continuous image tokenizer designed to learn compact, smooth latent representations for efficient image generation.
  • It addresses the posterior collapse that commonly arises when training variational tokenizers with few tokens, combining random masking regularization with DINO-guided semantic masking to force the encoder to produce informative latents.
  • MacTok uses global and local representation alignment to preserve discriminative semantic information even in a highly compressed 1D latent space.
  • Experiments on ImageNet show competitive gFID at 256×256 and state-of-the-art results at 512×512 with SiT-XL, while reducing token usage by up to 64×.
  • The findings suggest masking plus semantic guidance can reliably prevent collapse, enabling high-fidelity visual tokenization with only 64 or 128 tokens.
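The two masking strategies listed above can be illustrated with a small sketch. This is not the paper's implementation; the function, its parameters, and the use of a generic `saliency` score (standing in for DINO attention maps) are illustrative assumptions.

```python
import numpy as np

def make_mask(saliency, mask_ratio=0.5, semantic_frac=0.5, rng=None):
    """Build a boolean drop-mask over N patches, combining the two
    strategies described in the summary:
    - semantic masking: drop the most salient patches (saliency is a
      stand-in for DINO attention), so the encoder must infer
      informative regions from context,
    - random masking: drop a further random subset for regularization.
    Returns a boolean array where True means the patch is masked out.
    """
    rng = np.random.default_rng(rng)
    n = saliency.shape[0]
    n_mask = int(round(mask_ratio * n))
    n_sem = int(round(semantic_frac * n_mask))  # saliency-driven part
    n_rand = n_mask - n_sem                     # random part

    mask = np.zeros(n, dtype=bool)
    # Semantic part: mask the top-saliency patches.
    sem_idx = np.argsort(saliency)[::-1][:n_sem]
    mask[sem_idx] = True
    # Random part: mask among the remaining visible patches.
    visible = np.flatnonzero(~mask)
    rand_idx = rng.choice(visible, size=n_rand, replace=False)
    mask[rand_idx] = True
    return mask
```

In a tokenizer training loop, the masked patches would be hidden from the encoder so that the latent tokens must carry enough information to reconstruct them.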

Abstract

Continuous image tokenizers enable efficient visual generation, and those based on variational frameworks can learn smooth, structured latent representations through KL regularization. Yet this often leads to posterior collapse when using fewer tokens, where the encoder fails to encode informative features into the compressed latent space. To address this, we introduce MacTok, a Masked Augmenting 1D Continuous Tokenizer that leverages image masking and representation alignment to prevent collapse while learning compact and robust representations. MacTok applies both random masking to regularize latent learning and DINO-guided semantic masking to emphasize informative regions in images, forcing the model to encode robust semantics from incomplete visual evidence. Combined with global and local representation alignment, MacTok preserves rich discriminative information in a highly compressed 1D latent space, requiring only 64 or 128 tokens. On ImageNet, MacTok achieves a competitive gFID of 1.44 at 256×256 and a state-of-the-art 1.52 at 512×512 with SiT-XL, while reducing token usage by up to 64×. These results confirm that masking and semantic guidance together prevent posterior collapse and achieve efficient, high-fidelity tokenization.
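The "global and local representation alignment" mentioned in the abstract can be sketched as a cosine-similarity objective between tokenizer latents and a frozen teacher such as DINO. The exact losses in the paper may differ; the function name, shapes, and the assumption that latents and teacher features share a common length and dimension (e.g. after projection) are all illustrative.

```python
import numpy as np

def alignment_loss(latents, teacher_feats):
    """Negative-cosine alignment at two granularities:
    - global: mean-pooled latent vs. mean-pooled teacher feature,
    - local: per-token alignment, averaged over tokens.
    Both inputs have shape (N, D); a loss of 0 means perfect alignment.
    """
    def cos(a, b):
        a = a / (np.linalg.norm(a, axis=-1, keepdims=True) + 1e-8)
        b = b / (np.linalg.norm(b, axis=-1, keepdims=True) + 1e-8)
        return (a * b).sum(-1)

    local = 1.0 - cos(latents, teacher_feats).mean()       # per token
    global_ = 1.0 - cos(latents.mean(0), teacher_feats.mean(0))
    return global_ + local
```

Pulling semantics into the latents this way is what lets the tokenizer stay discriminative even at 64 or 128 tokens.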