From Drop-off to Recovery: A Mechanistic Analysis of Segmentation in MLLMs

arXiv cs.CV / 3/19/2026

Key Points

  • The study performs layerwise linear probing across the vision encoder, adapter, and LLM to assess segmentation capacity in Multimodal LLMs.
  • It uses an intervention-based attention knockout analysis to test whether cross-token attention progressively refines visual representations and improves token labeling.
  • The results show that the adapter causes a drop-off in segmentation representations, which the LLM layers then recover through attention-mediated refinement, with correctly classified tokens guiding misclassified neighbors toward the correct label.
  • Early image token recovery is limited by causal attention, but bidirectional attention among image tokens alleviates this constraint and improves spatial consistency.
  • The work provides a mechanistic account of how MLLMs process visual information for segmentation and informs future design of segmentation-capable models.
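The causal-attention constraint on early image tokens (and the bidirectional relaxation that alleviates it) can be illustrated with a minimal mask sketch. This is a hypothetical toy layout in plain Python, not the paper's implementation; the token positions and function name are illustrative:

```python
def build_mask(n_tokens, image_span, bidirectional_images):
    """Return an attention mask: mask[q][k] is True if query token q may
    attend to key token k.

    The base rule is causal (q attends only to k <= q). When
    bidirectional_images is set, tokens inside the image span may
    additionally attend to *later* image tokens.
    """
    start, end = image_span  # half-open range of image-token positions (hypothetical layout)
    mask = [[k <= q for k in range(n_tokens)] for q in range(n_tokens)]
    if bidirectional_images:
        for q in range(start, end):
            for k in range(start, end):
                mask[q][k] = True  # image tokens see each other in both directions
    return mask

# Toy sequence: 1 text token, 4 image tokens (positions 1..4), 1 text token.
causal = build_mask(6, (1, 5), bidirectional_images=False)
bidir = build_mask(6, (1, 5), bidirectional_images=True)

# Under causal attention, the first image token (position 1) can read only
# itself among the image tokens, which bounds how much refinement it receives.
print(sum(causal[1][1:5]))  # -> 1
# Under bidirectional image attention it can read all four image tokens.
print(sum(bidir[1][1:5]))   # -> 4
```

This makes the asymmetry concrete: late image tokens already see most of the image under a causal mask, while early ones do not, so only the bidirectional variant gives every image token the full spatial context.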

Abstract

Multimodal Large Language Models (MLLMs) are increasingly applied to pixel-level vision tasks, yet their intrinsic capacity for spatial understanding remains poorly understood. We investigate segmentation capacity through a layerwise linear probing evaluation across the entire MLLM pipeline: vision encoder, adapter, and LLM. We further conduct an intervention-based attention knockout analysis to test whether cross-token attention progressively refines visual representations, and an evaluation of the effect of bidirectional attention among image tokens on spatial consistency. Our analysis reveals that the adapter introduces a segmentation representation drop-off, but LLM layers progressively recover through attention-mediated refinement, where correctly classified tokens steer misclassified neighbors toward the correct label. At early image token positions, this recovery is bounded by causal attention, which bidirectional attention among image tokens alleviates. These findings provide a mechanistic account of how MLLMs process visual information for segmentation, informing the design of future segmentation-capable models.
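The knockout intervention described above can be sketched as an edit to a row-normalized attention map: zero the weights from selected queries to selected keys and renormalize, then observe how token labeling changes downstream. This is a minimal illustrative sketch, not the paper's code; the function name, the renormalization choice, and the toy weights are assumptions:

```python
def knock_out(attn, blocked_keys, queries):
    """Return a copy of a row-normalized attention map in which each query
    in `queries` has its weights to every key in `blocked_keys` zeroed,
    with the affected rows renormalized to sum to 1 again."""
    out = [row[:] for row in attn]
    for q in queries:
        for k in blocked_keys:
            out[q][k] = 0.0  # sever this cross-token attention edge
        total = sum(out[q])
        if total > 0:
            out[q] = [w / total for w in out[q]]
    return out

# Toy 4-token map with uniform attention weights.
attn = [[0.25] * 4 for _ in range(4)]

# Block what token 2 reads from token 0 (e.g. a correctly classified
# neighbor); all other rows are left untouched.
ko = knock_out(attn, blocked_keys=[0], queries=[2])
print(ko[2])  # row 2 now ignores token 0 but still sums to 1
```

In the study's setting, comparing probe accuracy with and without such knockouts is what attributes the recovery in LLM layers to attention from correctly classified tokens rather than to per-token processing alone.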