CanViT: Toward Active-Vision Foundation Models

arXiv cs.CV / 3/25/2026


Key Points

  • The paper introduces CanViT, described as the first task- and policy-agnostic Active-Vision Foundation Model (AVFM) aimed at scalable, general-purpose active computer vision.
  • CanViT couples a retinotopic Vision Transformer backbone with a spatiotopic latent “canvas” workspace, linked via Canvas Attention, a novel asymmetric cross-attention mechanism that supports efficient sequential glimpsing.
  • The method separates “thinking” (backbone) from “memory” (canvas) by removing canvas self-attention and fully-connected layers, targeting low-latency sequential inference and better scalability to large scenes.
  • It proposes a label-free active vision pretraining scheme—policy-agnostic passive-to-active dense latent distillation—reconstructing scene-wide DINOv3 embeddings from randomized sequences of low-resolution glimpses.
  • Reported results show strong performance (e.g., 38.5% mIoU on ADE20K from a single glimpse with a frozen model) and improved segmentation/classification accuracy with more glimpses, along with generalization to longer rollouts, larger scenes, and new policies.
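To make the architecture concrete, here is a minimal sketch of how the asymmetric cross-attention between backbone and canvas could look. All class names, shapes, and the read/write split are assumptions for illustration, not the paper's actual implementation; the key property from the summary is preserved: the canvas has no self-attention or fully-connected layers of its own.

```python
import torch
import torch.nn as nn

class CanvasAttention(nn.Module):
    """Hypothetical sketch of asymmetric cross-attention between a
    retinotopic backbone ("thinking") and a spatiotopic latent canvas
    ("memory"). The canvas is only read from and written to via
    cross-attention; it has no canvas-side self-attention or MLP,
    which is what keeps sequential glimpsing cheap as scenes grow."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.read = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.write = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, glimpse_tokens, canvas):
        # Read: backbone queries attend over the scene-wide canvas.
        r, _ = self.read(glimpse_tokens, canvas, canvas)
        glimpse_tokens = glimpse_tokens + r
        # Write: canvas slots query the current glimpse's tokens,
        # updating working memory without any canvas self-attention.
        w, _ = self.write(canvas, glimpse_tokens, glimpse_tokens)
        canvas = canvas + w
        return glimpse_tokens, canvas

# Example: one glimpse of 64 tokens interacting with a 256-slot canvas
# that persists across glimpses.
x = torch.randn(1, 64, 128)   # current glimpse tokens
c = torch.zeros(1, 256, 128)  # persistent latent canvas
layer = CanvasAttention(128)
x, c = layer(x, c)
print(x.shape, c.shape)
```

Because the canvas update is a single cross-attention, per-glimpse cost scales with the number of canvas slots rather than quadratically, matching the summary's claim of low-latency sequential inference on large scenes.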

Abstract

Active computer vision promises efficient, biologically plausible perception through sequential, localized glimpses, but lacks scalable general-purpose architectures and pretraining pipelines. As a result, Active-Vision Foundation Models (AVFMs) have remained unexplored. We introduce CanViT, the first task- and policy-agnostic AVFM. CanViT uses scene-relative RoPE to bind a retinotopic Vision Transformer backbone and a spatiotopic scene-wide latent workspace, the canvas. Efficient interaction with this high-capacity working memory is supported by Canvas Attention, a novel asymmetric cross-attention mechanism. We decouple thinking (backbone-level) and memory (canvas-level), eliminating canvas-side self-attention and fully-connected layers to achieve low-latency sequential inference and scalability to large scenes. We propose a label-free active vision pretraining scheme, policy-agnostic passive-to-active dense latent distillation: reconstructing scene-wide DINOv3 embeddings from sequences of low-resolution glimpses with randomized locations, zoom levels, and lengths. We pretrain CanViT-B from a random initialization on 13.2 million ImageNet-21k scenes -- an order of magnitude more than previous active models -- and 1 billion random glimpses, in 166 hours on a single H100. On ADE20K segmentation, a frozen CanViT-B achieves 38.5% mIoU in a single low-resolution glimpse, outperforming the best active model's 27.6% with 19.5x fewer inference FLOPs and no fine-tuning, as well as its FLOP- or input-matched DINOv3 teacher. Given additional glimpses, CanViT-B reaches 45.9% ADE20K mIoU. On ImageNet-1k classification, CanViT-B reaches 81.2% top-1 accuracy with frozen teacher probes. CanViT generalizes to longer rollouts, larger scenes, and new policies. Our work closes the wide gap between passive and active vision on semantic segmentation and demonstrates the potential of AVFMs as a new research axis.
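The pretraining objective described in the abstract, regressing a scene-wide latent built from low-resolution glimpses onto dense DINOv3 teacher embeddings, could be sketched roughly as below. The use of a plain cosine-distance loss and the tensor shapes are assumptions for illustration; the paper's actual objective may differ.

```python
import torch
import torch.nn.functional as F

def dense_latent_distillation_loss(student_canvas, teacher_features):
    """Hypothetical sketch of passive-to-active dense latent
    distillation: the student's scene-wide canvas, assembled from a
    randomized sequence of low-resolution glimpses, is regressed onto
    dense teacher (e.g. DINOv3) embeddings of the full scene. A mean
    cosine distance over latent positions is an assumed choice."""
    p = F.normalize(student_canvas, dim=-1)
    t = F.normalize(teacher_features, dim=-1)
    return (1.0 - (p * t).sum(dim=-1)).mean()

# Example: a 16x16 grid of 768-dim latents vs. teacher embeddings.
student = torch.randn(2, 256, 768)
teacher = torch.randn(2, 256, 768)
loss = dense_latent_distillation_loss(student, teacher)
print(float(loss))
```

Because the target is a dense latent map rather than labels or a specific glimpse policy, this objective is both label-free and policy-agnostic, which is consistent with the abstract's claim that pretraining uses randomized glimpse locations, zoom levels, and sequence lengths.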