UniCompress: Token Compression for Unified Vision-Language Understanding and Generation

arXiv cs.CV / 3/13/2026

Key Points

  • UniCompress introduces a plug-in token compression mechanism to reduce the number of visual tokens in unified vision-language models while preserving performance on both image understanding and generation tasks.
  • The method uses learnable global meta tokens to guide compression and decompression and is designed to be lightweight and modular, enabling integration into existing models without full retraining.
  • Experiments show token counts can be reduced by up to 4x, with substantial gains in inference latency and training cost and only minimal degradation in performance.
  • The approach addresses compute and memory overhead in resource-constrained deployments (e.g., embodied AI), making real-world multimodal systems more practical.
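The paper itself does not publish reference code here, but the mechanism described above (learnable global meta tokens querying the visual token sequence to compress it, then learnable queries attending back over the compressed tokens to decompress) can be sketched with plain cross-attention. The sketch below is a minimal illustration, not the authors' implementation: the names (`meta_tokens`, `pos_queries`), the single-head attention, and all sizes are assumptions; only the 4x reduction ratio comes from the reported results.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d):
    # single-head scaled dot-product attention: queries attend over keys_values
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores) @ keys_values

rng = np.random.default_rng(0)
d = 64          # token embedding dim (illustrative)
n_visual = 256  # original visual token count (illustrative)
n_meta = 64     # compressed token count: a 4x reduction, per the paper

visual_tokens = rng.standard_normal((n_visual, d))
meta_tokens = rng.standard_normal((n_meta, d))    # learnable in practice
pos_queries = rng.standard_normal((n_visual, d))  # learnable in practice

# Compression: meta tokens query the visual tokens,
# yielding a 4x-shorter sequence for the autoregressive backbone.
compressed = cross_attention(meta_tokens, visual_tokens, d)

# Decompression: learnable position queries attend over the compressed
# tokens to restore the original sequence length (e.g., for generation).
restored = cross_attention(pos_queries, compressed, d)

print(compressed.shape, restored.shape)  # (64, 64) (256, 64)
```

Because both stages are small attention modules wrapped around a frozen backbone, a plug-in like this can in principle be trained without full retraining of the host model, consistent with the modularity claim above.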

Abstract

Unified models aim to support both understanding and generation by encoding images into discrete tokens and processing them alongside text within a single autoregressive framework. This unified design offers architectural simplicity and cross-modal synergy, which facilitates shared parameterization, consistent training objectives, and seamless transfer between modalities. However, the large number of visual tokens required by such models introduces substantial computation and memory overhead, and this inefficiency directly hinders deployment in resource-constrained scenarios such as embodied AI systems. In this work, we propose a unified token compression algorithm, UniCompress, that significantly reduces the visual token count while preserving performance on both image understanding and generation tasks. Our method introduces a plug-in compression and decompression mechanism guided by learnable global meta tokens. The framework is lightweight and modular, enabling efficient integration into existing models without full retraining. Experimental results show that our approach reduces image tokens by up to 4 times, achieves substantial gains in inference latency and training cost, and incurs only minimal performance degradation, which demonstrates the promise of token-efficient unified modeling for real-world multimodal applications.