Hierarchical Pre-Training of Vision Encoders with Large Language Models

arXiv cs.AI / 4/2/2026


Key Points

  • The paper introduces HIVE (Hierarchical Pre-Training of Vision Encoders), a framework that improves vision-language alignment by adding hierarchical cross-attention between a vision encoder and an LLM rather than treating them as independent modules.
  • HIVE fuses structured visual features across multiple layers, which the authors argue enhances representation learning and improves gradient flow compared with approaches that flatten image embeddings.
  • A three-stage training strategy is proposed to progressively align the vision encoder with the LLM, aiming for stable optimization and more effective multimodal fusion.
  • Experiments on image classification and multiple vision-language benchmarks (including MME, GQA, OK-VQA, and ScienceQA) show HIVE outperforming self-attention-based methods.
  • The results suggest hierarchical visual feature integration can yield more efficient and expressive vision-language models, motivating future work on structured cross-modal architectures.
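The hierarchical cross-attention described above can be sketched in PyTorch. This is an illustrative assumption about the general mechanism, not the paper's exact architecture: text hidden states attend to vision features taken from several encoder layers, with one cross-attention block and a residual fusion step per layer. All module names and shapes are hypothetical.

```python
import torch
import torch.nn as nn

class HierarchicalCrossAttention(nn.Module):
    """Sketch: LLM hidden states attend to vision features drawn from
    multiple encoder layers (shallow to deep). Illustrative only."""

    def __init__(self, d_model: int, n_heads: int, n_vision_layers: int):
        super().__init__()
        # One cross-attention block per selected vision-encoder layer.
        self.blocks = nn.ModuleList(
            nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            for _ in range(n_vision_layers)
        )
        self.norm = nn.LayerNorm(d_model)

    def forward(self, text_hidden, vision_feats):
        # text_hidden: (B, T, d_model)
        # vision_feats: list of (B, P, d_model), one per encoder layer
        out = text_hidden
        for attn, feats in zip(self.blocks, vision_feats):
            fused, _ = attn(query=out, key=feats, value=feats)
            out = self.norm(out + fused)  # residual fusion at each level
        return out

# Toy usage with random tensors standing in for real encoder outputs.
B, T, P, d = 2, 8, 16, 64
module = HierarchicalCrossAttention(d_model=d, n_heads=4, n_vision_layers=3)
text = torch.randn(B, T, d)
vision = [torch.randn(B, P, d) for _ in range(3)]
fused = module(text, vision)
print(fused.shape)  # torch.Size([2, 8, 64])
```

Fusing layer by layer, rather than concatenating flattened patch embeddings into the prompt, is what lets gradients reach intermediate vision-encoder layers directly, which is the benefit the authors claim over flattening approaches.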

Abstract

The field of computer vision has experienced significant advancements through scalable vision encoders and multimodal pre-training frameworks. However, existing approaches often treat vision encoders and large language models (LLMs) as independent modules, limiting the integration of hierarchical visual features. In this work, we propose HIVE (Hierarchical Pre-Training of Vision Encoders), a novel framework that enhances vision-language alignment by introducing hierarchical cross-attention between the vision encoder and LLM. Unlike conventional methods that flatten image embeddings, HIVE enables structured feature fusion across multiple layers, improving gradient flow and representation learning. To optimize this interaction, we introduce a three-stage training strategy that progressively aligns the vision encoder with the LLM, ensuring stable optimization and effective multimodal fusion. Empirical evaluations demonstrate that HIVE achieves superior performance not only in image classification but also on various vision-language tasks, outperforming self-attention-based methods in benchmarks such as MME, GQA, OK-VQA, and ScienceQA. Our results highlight the benefits of hierarchical feature integration, paving the way for more efficient and expressive vision-language models.
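The abstract does not detail what each of the three training stages does, but progressive alignment schemes typically work by unfreezing components in sequence. The sketch below is a hypothetical schedule illustrating that general pattern (the stage contents are assumptions, not the paper's recipe):

```python
import torch.nn as nn

# Stand-ins for the real components.
vision_encoder = nn.Linear(64, 64)
projector      = nn.Linear(64, 64)
llm            = nn.Linear(64, 64)

def set_trainable(module: nn.Module, flag: bool) -> None:
    for p in module.parameters():
        p.requires_grad = flag

# Hypothetical three-stage schedule: progressively unfreeze modules.
stages = [
    ("stage 1: align projector only", [projector]),
    ("stage 2: tune vision encoder + projector", [vision_encoder, projector]),
    ("stage 3: joint fine-tune of all modules", [vision_encoder, projector, llm]),
]

for name, trainable in stages:
    for m in (vision_encoder, projector, llm):
        set_trainable(m, m in trainable)
    # ... run this stage's training loop here ...
    print(name)
```

Freezing the LLM early keeps its language ability from degrading while the visual pathway is still poorly aligned; only once the fused features are stable is everything optimized jointly, which matches the abstract's stated goal of stable optimization.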
