
Frequency-Modulated Visual Restoration for Matryoshka Large Multimodal Models

arXiv cs.CL / 3/13/2026


Key Points

  • FMVR introduces a plug-and-play visual restoration strategy that preserves semantic information in LMMs when visual token budgets are reduced.
  • It disentangles visual representations into high- and low-frequency components using AvgPool and MaxPool, which are then modulated by lightweight learnable parameters.
  • The high-frequency component from AvgPool acts as a saliency filter to enhance salient visual semantics, while the low-frequency component from MaxPool serves as an anti-saliency filter to strengthen weaker semantics.
  • Integrated with Matryoshka Representation Learning, FMVR enables elastic adjustment of visual tokens during inference and achieves substantial FLOPs reductions (up to 89% on LLaVA-1.5-7B) with nearly full accuracy, with code to be released.
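The frequency-modulated restoration described above can be sketched in PyTorch. This is an illustrative reading, not the paper's released code: the class name `FMVRSketch` is hypothetical, and the exact way the high- and low-frequency components are derived from AvgPool and MaxPool is an assumption; only the overall structure (pooling-based disentanglement plus lightweight learnable modulation) follows the summary.

```python
import torch
import torch.nn as nn


class FMVRSketch(nn.Module):
    """Hypothetical sketch of Frequency-Modulated Visual Restoration.

    Assumptions (not confirmed by the paper): the high-frequency component
    is the residual of 1-D average pooling over the token sequence, the
    low-frequency component is taken from max pooling, and each component
    is scaled by a learnable per-channel gate before being added back.
    """

    def __init__(self, dim: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        self.avg = nn.AvgPool1d(kernel_size, stride=1, padding=pad)
        self.max = nn.MaxPool1d(kernel_size, stride=1, padding=pad)
        # Lightweight learnable modulation parameters (one gate per channel),
        # zero-initialized so the module starts as an identity mapping.
        self.alpha = nn.Parameter(torch.zeros(dim))  # saliency filter gain
        self.beta = nn.Parameter(torch.zeros(dim))   # anti-saliency filter gain

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim); pooling runs along the token axis.
        xt = x.transpose(1, 2)            # (batch, dim, num_tokens)
        high = xt - self.avg(xt)          # AvgPool residual -> high frequency (assumed)
        low = self.max(xt)                # MaxPool output  -> low frequency (assumed)
        out = xt + self.alpha[None, :, None] * high \
                 + self.beta[None, :, None] * low
        return out.transpose(1, 2)        # (batch, num_tokens, dim)
```

Because the gates are zero-initialized, the module passes visual tokens through unchanged at the start of training and learns how strongly to amplify salient versus weak semantics.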

Abstract

Large Multimodal Models (LMMs) struggle to adapt to varying computational budgets because of their numerous visual tokens. Previous methods attempted to reduce the number of visual tokens before or within LLMs; however, these strategies inevitably result in the loss of visual semantics. To address this issue, we introduce FMVR, a plug-and-play and extremely simple Frequency-Modulated Visual Restoration strategy that boosts the reasoning ability of LMMs under visual token reduction. Specifically, FMVR disentangles the visual representation of the reduced visual tokens into low- and high-frequency components through AvgPool and MaxPool. The derived frequencies are then modulated using lightweight learnable parameters. The high-frequency component from AvgPool acts as a saliency filter to enhance salient visual semantics, while the low-frequency component from MaxPool acts as an anti-saliency filter to strengthen weak visual semantics. This enables the preservation of visual semantics dominated by a few visual tokens and the restoration of diluted visual semantics. Additionally, we inject FMVR into Matryoshka Representation Learning to learn coarse-to-fine visual token sets, enabling the number of visual tokens to be adjusted elastically during inference while maintaining comparable performance. Experiments across 10 image-based and 4 video-based benchmarks demonstrate that FMVR-LLaVA reduces the FLOPs of LLaVA-1.5-7B by 89% while maintaining almost 100% of the original accuracy. The code will be open-sourced.
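The Matryoshka-style elastic adjustment of visual tokens can be illustrated with a minimal sketch. The function name `elastic_visual_tokens` and the merging scheme are assumptions: it treats the visual tokens as a square grid (e.g., 24×24 = 576 for LLaVA-1.5) and merges them by 2-D average pooling, one common way to build nested coarse-to-fine token sets; the paper's actual construction may differ.

```python
import torch
import torch.nn.functional as F


def elastic_visual_tokens(tokens: torch.Tensor, budget: int) -> torch.Tensor:
    """Illustrative coarse-to-fine visual token reduction (assumed scheme).

    tokens: (batch, n*n, dim) features from a square vision-encoder grid.
    budget: target number of tokens; must be a square number dividing the grid.
    Returns a (batch, budget, dim) tensor of merged tokens.
    """
    b, n2, d = tokens.shape
    n = int(n2 ** 0.5)
    m = int(budget ** 0.5)
    assert n * n == n2 and m * m == budget and n % m == 0
    grid = tokens.transpose(1, 2).reshape(b, d, n, n)   # (b, d, n, n)
    pooled = F.avg_pool2d(grid, kernel_size=n // m)     # (b, d, m, m)
    return pooled.flatten(2).transpose(1, 2)            # (b, budget, d)
```

At inference, a deployment could pick the budget per request, e.g. `elastic_visual_tokens(feats, 144)` for a mid-range device and `elastic_visual_tokens(feats, 36)` under tight latency, reusing the same trained model.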