DiveUp: Learning Feature Upsampling from Diverse Vision Foundation Models

arXiv cs.CV / 3/17/2026

Key Points

  • DiveUp proposes a multi-VFM relational guidance framework that uses diverse vision foundation models as experts to regularize feature upsampling and prevent propagation of inaccurate spatial structures from any single model.
  • It introduces a universal relational feature representation, the local center-of-mass field, to reconcile unaligned feature spaces across different VFMs and enable cross-model interaction.
  • The framework includes a spikiness-aware selection strategy that evaluates spatial reliability and filters out high-norm artifacts, aggregating guidance only from the most reliable expert at each local region.
  • DiveUp is encoder-agnostic and jointly trainable, enabling universal upsampling of features from diverse VFMs without per-model retraining.
  • Experiments show state-of-the-art performance on multiple dense prediction tasks, demonstrating the effectiveness of multi-expert relational guidance, with code and models released on GitHub.
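To make the local center-of-mass (COM) idea concrete, the sketch below computes, for every spatial location of a feature map, the similarity-weighted center of mass of its local neighborhood. This is an illustrative interpretation only: the function name, window size, and use of cosine similarity as the weighting are assumptions, not the paper's actual formulation. The key property it illustrates is that the output depends only on *relations between features*, so maps from differently-aligned VFM feature spaces become comparable.

```python
import numpy as np

def local_com_field(feat, window=3):
    """Illustrative local center-of-mass (COM) field (not the official DiveUp code).

    For each location, neighbors are weighted by their (clipped) cosine
    similarity to the center feature, and the 2-D center of mass of those
    weights is returned as an offset relative to the center.

    feat: (H, W, C) feature map from any VFM.
    returns: (H, W, 2) field of (dy, dx) offsets.
    """
    H, W, _ = feat.shape
    r = window // 2
    # Normalize features so dot products are cosine similarities.
    f = feat / (np.linalg.norm(feat, axis=-1, keepdims=True) + 1e-8)
    com = np.zeros((H, W, 2))
    for y in range(H):
        for x in range(W):
            acc, w_sum = np.zeros(2), 0.0
            for ny in range(max(0, y - r), min(H, y + r + 1)):
                for nx in range(max(0, x - r), min(W, x + r + 1)):
                    # Clip negative similarities so weights stay non-negative.
                    w = max(float(f[y, x] @ f[ny, nx]), 0.0)
                    acc += w * np.array([ny - y, nx - x])
                    w_sum += w
            com[y, x] = acc / (w_sum + 1e-8)
    return com
```

On a perfectly uniform feature map, every interior offset is zero, since the neighborhood weights are symmetric around the center; structure in the features shifts the COM toward similar regions, which is the relational signal a cross-model guide can compare.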

Abstract

Recently, feature upsampling has gained increasing attention owing to its effectiveness in enhancing vision foundation models (VFMs) for pixel-level understanding tasks. Existing methods typically rely on high-resolution features from the same foundation model to achieve upsampling via self-reconstruction. However, relying solely on intra-model features forces the upsampler to overfit to the source model's inherent location misalignment and high-norm artifacts. To address this fundamental limitation, we propose DiveUp, a novel framework that breaks away from single-model dependency by introducing multi-VFM relational guidance. Instead of naive feature fusion, DiveUp leverages diverse VFMs as a panel of experts, utilizing their structural consensus to regularize the upsampler's learning process, effectively preventing the propagation of inaccurate spatial structures from the source model. To reconcile the unaligned feature spaces across different VFMs, we propose a universal relational feature representation, formulated as a local center-of-mass (COM) field, that extracts intrinsic geometric structures, enabling seamless cross-model interaction. Furthermore, we introduce a spikiness-aware selection strategy that evaluates the spatial reliability of each VFM, effectively filtering out high-norm artifacts to aggregate guidance from only the most reliable expert at each local region. DiveUp is a unified, encoder-agnostic framework; a jointly-trained model can universally upsample features from diverse VFMs without requiring per-model retraining. Extensive experiments demonstrate that DiveUp achieves state-of-the-art performance across various downstream dense prediction tasks, validating the efficacy of multi-expert relational guidance. Our code and models are available at: https://github.com/Xiaoqiong-Liu/DiveUp
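The spikiness-aware selection described in the abstract can be pictured with a minimal sketch. Here "spikiness" is assumed to be the ratio of a location's feature norm to the median norm in its local window, so high-norm artifact tokens stand out as spikes; per location, the expert (VFM) with the lowest spikiness is selected. The function names, the exact reliability criterion, and the hard argmin selection are hypothetical simplifications of whatever DiveUp actually implements.

```python
import numpy as np

def spikiness_map(feat, window=3):
    """Illustrative spikiness score: feature norm over local median norm.

    High-norm artifact tokens yield scores well above 1 (assumed
    criterion, not the paper's exact definition).
    """
    H, W, _ = feat.shape
    r = window // 2
    norms = np.linalg.norm(feat, axis=-1)
    spik = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            patch = norms[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            spik[y, x] = norms[y, x] / (np.median(patch) + 1e-8)
    return spik

def pick_reliable_expert(feats, window=3):
    """Select, per location, the index of the least-spiky expert (VFM).

    feats: list of (H, W, C_i) feature maps, one per expert.
    returns: (H, W) integer map of selected expert indices.
    """
    maps = np.stack([spikiness_map(f, window) for f in feats])  # (E, H, W)
    return maps.argmin(axis=0)
```

For example, if one expert's map contains an isolated high-norm token, its spikiness at that location far exceeds 1, so guidance there would instead be drawn from a cleaner expert, which matches the abstract's goal of keeping artifacts out of the aggregated guidance.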