Perceptio: Perception Enhanced Vision Language Models via Spatial Token Generation

arXiv cs.CV / March 20, 2026


Key Points

  • Perceptio introduces a perception-enhanced LVLM that enables explicit 2D/3D spatial reasoning by emitting spatial tokens (semantic segmentation tokens and depth tokens) during autoregressive generation.
  • It tokenizes dense depth with a VQ-VAE codebook distilled from a monocular depth teacher and integrates SAM2 semantic segmentation tokens inside the LLM to ground spatial reasoning before answering.
  • The approach uses composite depth-token objectives (marker, token, and count losses) and a soft-merging technique to stabilize depth token generation and differentiable reconstruction.
  • A multi-task co-training regime across diverse datasets lets the model learn perception tokens for multiple downstream tasks, building on InternVL.
  • On benchmarks, Perceptio achieves state-of-the-art results: +0.8/+1.4/+1.1 cIoU on RefCOCO/RefCOCO+/RefCOCOg referring expression segmentation, a 10.3% gain in HardBLINK spatial understanding accuracy, and a 1.0% gain on MMBench, demonstrating that explicit spatial chain-of-thought strengthens LVLM grounding.
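The depth-tokenization step described above — mapping dense depth into a compact discrete sequence via a VQ-VAE codebook — can be sketched as a nearest-neighbor lookup. This is a minimal illustration, not the paper's implementation: the function and variable names are hypothetical, and the real model uses a learned encoder rather than raw patches.

```python
import numpy as np

def tokenize_depth(depth_patches, codebook):
    """Quantize depth-patch embeddings to their nearest codebook entries.

    depth_patches: (N, D) array of patch embeddings from a VQ-VAE encoder
    codebook:      (K, D) array of learned code vectors
    Returns the (N,) integer token ids that would be emitted in-sequence.
    """
    # Squared Euclidean distance from every patch to every code vector.
    d2 = ((depth_patches[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

# Toy example: 4 patches, a codebook of 8 codes, 16-dim embeddings.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))
patches = codebook[[3, 0, 3, 5]] + 0.01 * rng.normal(size=(4, 16))
print(tokenize_depth(patches, codebook))  # → [3 0 3 5]
```

Because each patch becomes a single integer id, a dense depth map collapses to a short token sequence the LLM can emit autoregressively alongside text.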

Abstract

Large Vision Language Models (LVLMs) excel at semantic understanding but struggle with fine-grained spatial grounding, as the model must implicitly infer complex geometry without ever producing a spatial interpretation. We present Perceptio, a perception-enhanced LVLM with 2D and 3D spatial reasoning abilities, enabled via explicit semantic segmentation tokens and depth tokens generated directly within the autoregressive sequence. Concretely, we (i) distill a VQ-VAE depth codebook from a strong monocular teacher to tokenize dense depth into compact sequences, and (ii) integrate SAM2-based semantic segmentation tokens and VQ-VAE depth tokens inside the LLM so the model first emits spatial tokens and then answers. To stabilize depth token generation, we introduce novel composite depth-token objectives (marker, token, and count losses) and a soft-merging technique for differentiable reconstruction. We adopt a multi-task co-training strategy across diverse datasets, letting the model learn perception tokens to tackle multiple downstream tasks. Building on InternVL, Perceptio achieves state-of-the-art performance across benchmarks: improving referring expression segmentation by +0.8/+1.4/+1.1 cIoU on RefCOCO/+/g, HardBLINK spatial understanding accuracy by 10.3%, and MMBench accuracy by 1.0%, demonstrating that explicit spatial chain-of-thought materially strengthens spatial grounding in LVLMs.
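The abstract's "soft-merging technique for differentiable reconstruction" is not spelled out here, but one common way to make a discrete codebook lookup differentiable is to replace the hard argmax with a softmax-weighted blend of code vectors, so reconstruction gradients can reach the token predictor. The sketch below assumes that formulation; the function name and temperature parameter are illustrative, not from the paper.

```python
import numpy as np

def soft_merge(logits, codebook, temperature=1.0):
    """Differentiable codebook lookup: blend code vectors with softmax
    weights instead of selecting one by hard argmax, so gradients can
    flow from the reconstructed depth back into the token predictor.

    logits:   (N, K) unnormalized scores over K depth tokens
    codebook: (K, D) code vectors
    Returns the (N, D) soft embeddings fed to a depth decoder.
    """
    z = logits / temperature
    z -= z.max(axis=1, keepdims=True)   # subtract row max for stability
    w = np.exp(z)
    w /= w.sum(axis=1, keepdims=True)   # softmax weights; rows sum to 1
    return w @ codebook                 # convex combination of codes

# As temperature -> 0 the blend collapses to the argmax code vector.
codebook = np.eye(4)                    # 4 codes, 4-dim for readability
logits = np.array([[4.0, 1.0, 0.0, 0.0]])
print(soft_merge(logits, codebook, temperature=0.1))  # ~ [[1, 0, 0, 0]]
```

The temperature controls how close the soft lookup stays to the hard quantization: low temperatures approximate discrete token selection while keeping the operation differentiable end to end.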