Let ViT Speak: Generative Language-Image Pre-training

arXiv cs.CV / 5/4/2026


Key Points

  • The paper introduces GenLIP (Generative Language-Image Pre-training), a minimalist pretraining framework for Vision Transformers aimed at multimodal large language models (MLLMs).
  • GenLIP aligns the vision encoder with autoregressive LLM behavior by having the ViT predict language tokens directly from visual tokens with a standard language-modeling objective, without contrastive batch construction or an extra text decoder (see the sketch after this list).
  • The authors claim three main benefits: simplicity via a single joint transformer for visual and textual tokens, scalability with both data and model size, and competitive or better performance on multimodal benchmarks.
  • Using roughly 8B training samples from Recap-DataComp-1B, GenLIP reportedly matches or exceeds strong baselines while relying on substantially less pretraining data; further gains are reported on detail-heavy tasks such as OCR and chart understanding after continued pretraining on multi-resolution images at native aspect ratios.

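The objective described above can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch rendering of the idea, not the authors' released code: a single transformer takes ViT patch tokens as a prefix, appends caption tokens, and is trained with a standard next-token cross-entropy loss on the text positions only. All module names, dimensions, and the prefix-causal masking scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GenLIPSketch(nn.Module):
    """Hypothetical sketch: one transformer over [visual tokens | text tokens],
    trained with next-token prediction on the text tokens only."""

    def __init__(self, vocab_size=32000, dim=768, depth=12, heads=12,
                 patch=16, img_size=224, max_text_len=64):
        super().__init__()
        self.num_patches = (img_size // patch) ** 2
        # ViT-style patch embedding turns the image into visual tokens.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.visual_pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        # Text embeddings share the same width, so one transformer models both.
        self.text_embed = nn.Embedding(vocab_size, dim)
        self.text_pos = nn.Parameter(torch.zeros(1, max_text_len, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim_feedforward=4 * dim,
                                           batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, images, text_ids):
        B, T = text_ids.shape
        v = self.patch_embed(images).flatten(2).transpose(1, 2) + self.visual_pos
        t = self.text_embed(text_ids) + self.text_pos[:, :T]
        x = torch.cat([v, t], dim=1)                      # (B, P + T, dim)

        # Prefix-causal mask: visual tokens attend to each other freely,
        # text tokens attend to all visual tokens and to earlier text only.
        P, L = self.num_patches, self.num_patches + T
        mask = torch.zeros(L, L, dtype=torch.bool, device=x.device)
        mask[P:, P:] = torch.triu(
            torch.ones(T, T, dtype=torch.bool, device=x.device), diagonal=1)
        mask[:P, P:] = True                               # no visual -> text attention

        h = self.blocks(x, mask=mask)
        logits = self.lm_head(h[:, P:])                   # predictions at text slots

        # Standard language-modeling loss: each position predicts the next token.
        return F.cross_entropy(logits[:, :-1].reshape(-1, logits.size(-1)),
                               text_ids[:, 1:].reshape(-1))


# Example usage with random data:
# model = GenLIPSketch()
# loss = model(torch.randn(2, 3, 224, 224), torch.randint(0, 32000, (2, 32)))
```

After pretraining with an objective like this, the visual portion of the joint transformer is what would be carried over as the vision encoder for an MLLM, which matches the role the paper positions GenLIP for.
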
Abstract

In this paper, we present Generative Language-Image Pre-training (GenLIP), a minimalist generative pretraining framework for Vision Transformers (ViTs) designed for multimodal large language models (MLLMs). To better align vision encoders with the autoregressive nature of LLMs, GenLIP trains a ViT to predict language tokens directly from visual tokens using a standard language modeling objective, without contrastive batch construction or an additional text decoder. This design offers three key advantages: (1) Simplicity: a single transformer jointly models visual and textual tokens; (2) Scalability: it scales effectively with both data and model size; and (3) Performance: it achieves competitive or superior results across diverse multimodal benchmarks. Trained on 8B samples from Recap-DataComp-1B, GenLIP matches or surpasses strong baselines despite using substantially less pretraining data. After continued pretraining on multi-resolution images at native aspect ratios, GenLIP further improves on detail-sensitive tasks such as OCR and chart understanding, making it a strong foundation for vision encoders in MLLMs.
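
The abstract's mention of continued pretraining on multi-resolution images at native aspect ratios implies that the visual-token count varies per image. As a purely illustrative sketch (the paper's exact recipe is not specified here), one common way to handle this is to resize each image to the nearest patch-size multiples under a token budget before patchifying; the helper below is a hypothetical example of that approach.

```python
from PIL import Image


def resize_to_patch_grid(img: Image.Image, patch: int = 16, max_tokens: int = 1024):
    """Hypothetical helper: keep the native aspect ratio, snap each side to a
    multiple of the patch size, and approximately cap the visual-token count."""
    w, h = img.size
    # Downscale only if the image would exceed the token budget.
    scale = min(1.0, (max_tokens * patch * patch / (w * h)) ** 0.5)
    new_w = max(patch, round(w * scale / patch) * patch)
    new_h = max(patch, round(h * scale / patch) * patch)
    resized = img.resize((new_w, new_h))
    num_tokens = (new_w // patch) * (new_h // patch)
    return resized, num_tokens
```

A variable-length visual prefix like this lets detail-heavy inputs such as documents and charts keep more tokens than a fixed square resize would, which is consistent with the gains the paper reports on OCR and chart understanding.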
