
UniCom: Unified Multimodal Modeling via Compressed Continuous Semantic Representations

arXiv cs.CV / 3/12/2026


Key Points

  • The paper introduces UniCom, a unified multimodal modeling framework that uses compressed continuous semantic representations to bridge modality gaps without relying on discrete visual tokenizers.
  • It shows that reducing the channel dimension is more effective than spatial downsampling for both reconstruction and generation, motivating an attention-based semantic compressor that distills dense features into a compact representation (see the sketch after this list).
  • The transfusion architecture is validated as surpassing query-based designs in both convergence and consistency.
  • Experiments report state-of-the-art generation performance among unified models and highlight strong controllability in image editing while maintaining image consistency without relying on a VAE.
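
The channel-versus-spatial finding in the second point above can be made concrete with a few lines of tensor arithmetic. The sketch below is a minimal illustration, not the paper's code: the encoder shape, the pooling choice, and the projection are all assumptions, chosen only to show that the two strategies shrink different axes of the same dense feature map.

```python
# Minimal sketch (PyTorch) contrasting the two compression axes.
# Shapes are illustrative assumptions, not UniCom's implementation.
import torch
import torch.nn as nn

B, N, C = 2, 256, 1152            # batch, tokens (16x16 grid), SigLIP-like channels
features = torch.randn(B, N, C)   # dense continuous features from a frozen encoder

# (a) Spatial downsampling: merge neighboring tokens; channel dim unchanged.
grid = features.transpose(1, 2).reshape(B, C, 16, 16)
spatial = nn.AvgPool2d(kernel_size=2)(grid)     # -> (B, C, 8, 8)
spatial = spatial.flatten(2).transpose(1, 2)    # -> (B, 64, C): 4x fewer tokens

# (b) Channel reduction: keep all tokens; project each to a lower dimension.
channel = nn.Linear(C, C // 4)(features)        # -> (B, 256, C // 4)

print(spatial.shape, channel.shape)  # same total size, different compression axis
```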

Abstract

Current unified multimodal models typically rely on discrete visual tokenizers to bridge the modality gap. However, discretization inevitably discards fine-grained semantic information, leading to suboptimal performance in visual understanding tasks. Conversely, directly modeling continuous semantic representations (e.g., CLIP, SigLIP) poses significant challenges in high-dimensional generative modeling, resulting in slow convergence and training instability. To resolve this dilemma, we introduce UniCom, a unified framework that harmonizes multimodal understanding and generation via compressed continuous representations. We empirically demonstrate that reducing the channel dimension is significantly more effective than spatial downsampling for both reconstruction and generation. Accordingly, we design an attention-based semantic compressor to distill dense features into a compact unified representation. Furthermore, we validate that the transfusion architecture surpasses query-based designs in convergence and consistency. Experiments demonstrate that UniCom achieves state-of-the-art generation performance among unified models. Notably, by preserving rich semantic priors, it delivers exceptional controllability in image editing and maintains image consistency even without relying on a VAE.
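
For readers who want a concrete picture of what an "attention-based semantic compressor" could look like, here is a minimal PyTorch sketch of one plausible reading: a self-attention layer over the dense tokens followed by a linear projection that reduces the channel dimension, consistent with the channel-reduction finding above. The class name, layer sizes, and overall structure are hypothetical; the paper's actual compressor may differ.

```python
# Hypothetical attention-based channel compressor (PyTorch sketch).
# All dimensions and the block structure are illustrative assumptions.
import torch
import torch.nn as nn

class SemanticCompressor(nn.Module):
    """Distill dense encoder features (B, N, C) into (B, N, C_out), C_out < C."""
    def __init__(self, dim: int = 1152, out_dim: int = 288, heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, out_dim)  # channel reduction, not token merging

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)
        h, _ = self.attn(h, h, h)   # tokens exchange information first...
        return self.proj(x + h)     # ...then each is projected to fewer channels

compressor = SemanticCompressor()
dense = torch.randn(2, 256, 1152)   # e.g., SigLIP-style patch features
compact = compressor(dense)         # -> (2, 256, 288)
```

Note the design constraint this sketch respects: the token count is left intact and only the per-token dimension shrinks, which is the opposite trade-off from query- or pooling-based compressors that merge tokens spatially.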