Detached Skip-Links and $R$-Probe: Decoupling Feature Aggregation from Gradient Propagation for MLLM OCR

arXiv cs.CV / 3/23/2026


Key Points

  • The paper identifies that skip pathways in multi-layer feature fusion create direct back-propagation paths from high-level objectives to early visual layers, overwriting low-level signals and destabilizing training in multimodal LLMs for OCR tasks.
  • It proposes Detached Skip-Links, a minimal modification that reuses shallow features in the forward pass while stopping gradients through the skip branch during joint training, reducing gradient interference without adding learnable parameters.
  • It introduces R-Probe, a diagnostic tool that measures pixel-level reconstructability of projected visual tokens using a shallow decoder initialized from the first quarter of the LLM layers to assess whether fine-grained information is preserved.
  • Across multiple ViT backbones and benchmarks, and up to 7M training samples, the approach consistently improves OCR-centric tasks and yields gains on general multimodal tasks.
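The stop-gradient mechanism behind Detached Skip-Links can be illustrated with a short PyTorch sketch. This is an assumption-laden toy, not the paper's code: the module name `DetachedSkipFusion` and the single linear projection are hypothetical, and the only point being demonstrated is that `.detach()` lets shallow features contribute to the forward pass while blocking the high-level loss from back-propagating through the skip branch.

```python
import torch
import torch.nn as nn

class DetachedSkipFusion(nn.Module):
    """Toy sketch of a Detached Skip-Link: shallow features are reused
    in the forward pass, but gradients are blocked on the skip branch
    via .detach(), so a high-level objective cannot overwrite early
    visual layers. Names and structure are illustrative only."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, shallow_feat, deep_feat):
        # Forward: deep features plus detached shallow features.
        # Backward: gradient flows only through deep_feat.
        return self.proj(deep_feat + shallow_feat.detach())

# Verify the gradient-blocking behavior on toy tensors.
shallow = torch.randn(2, 4, requires_grad=True)
deep = torch.randn(2, 4, requires_grad=True)
fusion = DetachedSkipFusion(4)
fusion(shallow, deep).sum().backward()
print(shallow.grad is None)   # skip branch receives no gradient
print(deep.grad is not None)  # main branch still trains
```

Because the skip branch carries no gradient, the design adds no learnable parameters beyond whatever fusion layer already exists, matching the paper's claim of a minimal modification.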

Abstract

Multimodal large language models (MLLMs) excel at high-level reasoning yet fail on OCR tasks where fine-grained visual details are compromised or misaligned. We identify an overlooked optimization issue in multi-layer feature fusion: skip pathways introduce direct back-propagation paths from high-level semantic objectives to early visual layers, overwriting low-level signals and destabilizing training. To mitigate this gradient interference, we propose Detached Skip-Links, a minimal modification that reuses shallow features in the forward pass while stopping gradients through the skip branch during joint training. This asymmetric design reduces gradient interference, improving stability and convergence without adding learnable parameters. To diagnose whether fine-grained information is preserved and usable by an LLM, we introduce R-Probe, which measures pixel-level reconstructability of projected visual tokens using a shallow decoder initialized from the first quarter of the LLM layers. Across multiple ViT backbones and multimodal benchmarks, and at training scales up to 7M samples, our approach consistently improves performance on OCR-centric benchmarks and delivers clear gains on general multimodal tasks.
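The R-Probe idea, as the abstract describes it, amounts to asking how well a shallow decoder can recover pixels from the projected visual tokens. The sketch below is a heavily simplified stand-in: the class `RProbe`, the `reconstructability` scorer, and all shapes are hypothetical, and instead of initializing the decoder from the first quarter of the LLM's layers (as the paper does), it uses a fresh two-layer transformer purely to show the measurement pattern.

```python
import torch
import torch.nn as nn

class RProbe(nn.Module):
    """Illustrative sketch of an R-Probe-style diagnostic: a shallow
    decoder maps projected visual tokens back to pixel patches; low
    reconstruction error suggests fine-grained detail survives.
    The paper initializes this decoder from the first quarter of the
    LLM's layers; here a fresh 2-layer transformer stands in."""
    def __init__(self, token_dim, patch_pixels, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(token_dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, n_layers)
        self.to_pixels = nn.Linear(token_dim, patch_pixels)

    def forward(self, tokens):
        return self.to_pixels(self.decoder(tokens))

def reconstructability(probe, tokens, target_patches):
    # Lower MSE => more pixel-level information retained in the tokens.
    with torch.no_grad():
        return nn.functional.mse_loss(probe(tokens), target_patches).item()

probe = RProbe(token_dim=32, patch_pixels=48)
tokens = torch.randn(1, 16, 32)   # 16 projected visual tokens
patches = torch.randn(1, 16, 48)  # matching 4x4x3 pixel patches
score = reconstructability(probe, tokens, patches)
print(score >= 0.0)
```

In practice the probe would be trained on held-out images before scoring, so that the MSE reflects what the tokens actually encode rather than the decoder's random initialization.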
