AI Navigate

ZeroSense: How Vision Matters in Long-Context Compression

arXiv cs.CV / 3/13/2026

Key Points

  • It introduces a new evaluation framework that decouples Multimodal Large Language Models' (MLLMs') downstream capabilities from visual-text compression (VTC) quality, enabling a purer assessment of VTC performance (see the sketch after this list).
  • It presents the ZeroSense Benchmark, designed to ensure low semantic correlation among test samples so that evaluations reflect VTC quality rather than downstream inference.
  • It finds that VTC quality and downstream task accuracy can diverge significantly, highlighting limitations of current metrics that rely on task performance.
  • It reports extensive experiments across multiple datasets that demonstrate the necessity of decoupled evaluation for reliable VTC assessment and benchmarking.
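In practice, a decoupled evaluation of this kind can score a compression pipeline by how faithfully text survives the render-compress-decode round trip, with no downstream task in the loop. The following is a minimal sketch of that idea; render_to_image, compress, and decode_text are hypothetical stand-ins for a concrete VTC pipeline, and the paper's actual metric may differ.

```python
# Minimal sketch of decoupled VTC evaluation: score compression quality
# by reconstruction fidelity alone, independent of any downstream task.
# The three pipeline callables are hypothetical placeholders.
from difflib import SequenceMatcher


def reconstruction_fidelity(original: str, recovered: str) -> float:
    """Character-level similarity in [0, 1]; 1.0 means lossless."""
    return SequenceMatcher(None, original, recovered).ratio()


def evaluate_vtc(samples, render_to_image, compress, decode_text) -> float:
    scores = []
    for text in samples:
        image = render_to_image(text)            # text -> rendered page
        vision_tokens = compress(image)          # page -> compact tokens
        recovered = decode_text(vision_tokens)   # tokens -> text again
        scores.append(reconstruction_fidelity(text, recovered))
    # Mean fidelity is the VTC score; task accuracy never enters.
    return sum(scores) / len(scores)
```

Because fidelity is computed directly against the source text, a strong MLLM cannot paper over dropped content with its linguistic priors, which is exactly the failure mode the key points describe.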

Abstract

Recent visual-text compression (VTC) methods, typified by DeepSeek-OCR, report impressively high token compression ratios for long-context modeling tasks by leveraging text-to-image rendering. However, existing evaluation protocols rely heavily on downstream task performance. Such metrics fail to accurately measure text preservation because of the strong inherent linguistic priors of Multimodal Large Language Models (MLLMs). In this work, we introduce a new evaluation framework that decouples MLLMs' downstream capabilities from compression quality in order to faithfully assess VTC. Within this framework, we further introduce the ZeroSense Benchmark, which ensures low semantic correlation among test samples. By eliminating contextual dependencies, the benchmark guarantees that evaluation results purely reflect VTC quality, unaffected by the semantic inference capabilities of downstream models. Extensive experiments across multiple datasets demonstrate that VTC quality and downstream task accuracy diverge significantly, underscoring the necessity of our decoupled evaluation framework.
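The "token compression ratio" at the heart of such claims is simple bookkeeping: how many text tokens each vision token replaces once a passage is rendered to images. A sketch follows; the per-page vision-token budget and passage length are illustrative assumptions, not numbers from the paper.

```python
# Sketch of the token-compression-ratio arithmetic behind VTC claims.
# The per-page vision-token budget (256) and passage length (4,000 text
# tokens over two rendered pages) are illustrative assumptions only.

def compression_ratio(text_tokens: int, vision_tokens: int) -> float:
    """Text tokens replaced per vision token consumed."""
    return text_tokens / vision_tokens


text_tokens = 4_000          # tokens the raw passage would cost
vision_tokens = 2 * 256      # two rendered pages at 256 tokens each
print(f"{compression_ratio(text_tokens, vision_tokens):.1f}x")  # 7.8x
```

The abstract's core warning is that this ratio says nothing about what the decoder actually recovers, which is why ZeroSense strips out the contextual redundancy that would otherwise let an MLLM infer its way to the right answer.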