HIFICL: High-Fidelity In-Context Learning for Multimodal Tasks

arXiv cs.CV / 3/16/2026

Key Points

  • The paper notes that In-Context Learning for large multimodal models is sensitive to demonstration configurations and computationally expensive.
  • It introduces High-Fidelity In-Context Learning (HIFICL) with virtual key-value pairs as learnable context to more faithfully model the ICL mechanism.
  • HIFICL uses a low-rank factorization for stable, regularized training and frames the approach as context-aware parameter-efficient fine-tuning.
  • Extensive experiments on multimodal benchmarks show HIFICL consistently outperforms existing approximation methods, and the code is publicly available.
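The "virtual key-value pairs" and low-rank factorization described in the bullets can be sketched as a single NumPy attention head that prepends learnable KV pairs to the real context. All names, sizes, and the initialization scale below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class VirtualKVAttention:
    """Single-head attention augmented with m learnable virtual key-value
    pairs, each factorized into a low-rank product (A @ B) for regularized,
    stable training -- a sketch of the idea, not HIFICL itself."""

    def __init__(self, d, m=4, r=2, seed=0):
        rng = np.random.default_rng(seed)
        # Low-rank factors for virtual keys and values: (m, r) @ (r, d).
        self.Ak = rng.normal(size=(m, r)) * 0.1
        self.Bk = rng.normal(size=(r, d)) * 0.1
        self.Av = rng.normal(size=(m, r)) * 0.1
        self.Bv = rng.normal(size=(r, d)) * 0.1
        self.d = d

    def __call__(self, q, K, V):
        # Materialize the virtual KV pairs from their low-rank factors.
        Kv = self.Ak @ self.Bk              # (m, d)
        Vv = self.Av @ self.Bv              # (m, d)
        K_all = np.concatenate([Kv, K])     # prepend the learnable context
        V_all = np.concatenate([Vv, V])
        w = softmax(q @ K_all.T / np.sqrt(self.d))
        return w @ V_all
```

Only the four factor matrices would be trained end to end, which is why the paper can frame the mechanism as context-aware parameter-efficient fine-tuning: the backbone weights stay frozen.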

Abstract

In-Context Learning (ICL) is a significant paradigm for Large Multimodal Models (LMMs), adapting to new tasks from a few in-context demonstrations (ICDs). However, its performance is sensitive to demonstration configurations, and the process is computationally expensive. Mathematically, the influence of these demonstrations can be decomposed into a dynamic mixture of the standard attention output and the context values. Current approximation methods simplify this process by learning a "shift vector". Inspired by the exact decomposition, we introduce High-Fidelity In-Context Learning (HIFICL) to more faithfully model the ICL mechanism. HIFICL consists of three key components: 1) a set of "virtual key-value pairs" to act as a learnable context, 2) a low-rank factorization for stable and regularized training, and 3) a simple end-to-end training objective. From another perspective, this mechanism constitutes a form of context-aware Parameter-Efficient Fine-Tuning (PEFT). Extensive experiments show that HIFICL consistently outperforms existing approximation methods on several multimodal benchmarks. The code is available at https://github.com/bbbandari/HiFICL.
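The "dynamic mixture" the abstract alludes to can be written out for one softmax-attention head. With query $q$, demonstration keys/values $(K_c, V_c)$ prepended to the standard keys/values $(K, V)$, attention splits exactly into a query-dependent convex combination (notation here is illustrative, not the paper's):

```latex
\mathrm{Attn}\!\left(q, [K_c; K], [V_c; V]\right)
  = \bigl(1-\lambda(q)\bigr)\,\mathrm{Attn}(q, K, V)
  + \lambda(q)\,\mathrm{Attn}(q, K_c, V_c),
\qquad
\lambda(q) =
  \frac{\sum_i \exp\!\left(q k_{c,i}^{\top}/\sqrt{d}\right)}
       {\sum_i \exp\!\left(q k_{c,i}^{\top}/\sqrt{d}\right)
        + \sum_j \exp\!\left(q k_j^{\top}/\sqrt{d}\right)}.
```

A learned "shift vector" collapses the second term into a fixed offset; HIFICL instead keeps it as attention over learnable virtual key-value pairs, which is why the summary calls it a higher-fidelity approximation.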