Cross-Attentive Multiview Fusion of Vision-Language Embeddings

arXiv cs.CV / April 15, 2026


Key Points

  • The paper proposes CAMFusion, a multiview transformer that cross-attends across vision-language descriptors from multiple viewpoints to produce unified per-3D-instance embeddings.
  • It addresses limitations of prior 3D lifting methods that either back-project and average descriptors or heuristically pick a single view, both of which can yield weaker 3D representations.
  • The authors introduce multiview consistency as a self-supervised signal to improve fusion quality alongside a standard supervised loss.
  • CAMFusion is reported to outperform naive averaging and single-view selection methods and to achieve state-of-the-art performance on 3D semantic/instance classification benchmarks, including zero-shot results on out-of-domain datasets.
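The core idea in the first bullet can be sketched in a few lines: a learnable per-instance query cross-attends over the per-view descriptors, producing a single fused embedding. This is a minimal numpy illustration of query-based cross-attention pooling, not the paper's actual architecture; the function name, single-head design, and projection matrices `Wk`/`Wv` are all assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_views(view_desc, query, Wk, Wv):
    """Cross-attend a learnable instance query over per-view descriptors.

    view_desc: (V, D) vision-language descriptors, one per viewpoint
    query:     (D,)   learnable per-instance query vector
    Wk, Wv:    (D, D) key/value projection matrices
    Returns a single fused (D,) embedding for the 3D instance.
    """
    K = view_desc @ Wk                          # (V, D) keys
    V = view_desc @ Wv                          # (V, D) values
    scores = K @ query / np.sqrt(K.shape[-1])   # (V,) attention logits
    attn = softmax(scores)                      # soft weights over views
    return attn @ V                             # (D,) attention-weighted fusion
```

Because the attention weights are a convex combination, this degrades gracefully toward the baselines the paper compares against: uniform weights recover naive averaging, while a peaked distribution approximates single-view selection.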

Abstract

Vision-language models have been key to the development of open-vocabulary 2D semantic segmentation. Lifting these models from 2D images to 3D scenes, however, remains a challenging problem. Existing approaches typically back-project and average 2D descriptors across views, or heuristically select a single representative one, often resulting in suboptimal 3D representations. In this work, we introduce a novel multiview transformer architecture that cross-attends across vision-language descriptors from multiple viewpoints and fuses them into a unified per-3D-instance embedding. As a second contribution, we leverage multiview consistency as a self-supervision signal for this fusion, which significantly improves performance when added to a standard supervised target-class loss. Our Cross-Attentive Multiview Fusion (CAMFusion) not only consistently outperforms naive averaging and single-view descriptor selection, but also achieves state-of-the-art results on 3D semantic and instance classification benchmarks, including zero-shot evaluations on out-of-domain datasets.
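The abstract does not spell out how multiview consistency is turned into a training signal. One plausible reading, sketched below under that assumption, is to penalize disagreement between the fused embedding and each individual view's descriptor via cosine distance; the function names and the exact form of the loss are illustrative, not taken from the paper.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def consistency_loss(fused, view_desc):
    """Hypothetical multiview-consistency objective.

    fused:     (D,)   fused per-instance embedding
    view_desc: (V, D) per-view vision-language descriptors
    Returns the mean (1 - cosine similarity) across views; zero when the
    fused embedding points in the same direction as every view descriptor.
    """
    return float(np.mean([1.0 - cosine(fused, v) for v in view_desc]))
```

A loss of this shape is self-supervised in the sense the key points describe: it needs no class labels, only agreement among views, so it can be added alongside the supervised target-class loss.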