AI Navigate

Parallel In-context Learning for Large Vision Language Models

arXiv cs.CV / 3/18/2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • The paper introduces Parallel In-Context Learning (Parallel-ICL) for LVLMs to reduce inference latency by partitioning long demonstrations into chunks, processing them in parallel, and fusing predictions at the logit level using a weighted Product-of-Experts ensemble.
  • It employs clustering-based context chunking to maximize inter-chunk diversity and similarity-based weighting to emphasize query-relevant chunks.
  • Experiments on VQA, image captioning, and classification show that Parallel-ICL achieves performance comparable to full-context MM-ICL while significantly speeding up inference.
  • The approach addresses the accuracy-efficiency trade-off in MM-ICL and enables dynamic task adaptation with substantially reduced inference overhead.
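The key points above describe fusing per-chunk predictions at the logit level with a weighted Product-of-Experts (PoE). A weighted PoE over softmax distributions, p(y) ∝ ∏_k p_k(y)^{w_k}, is equivalent (up to normalization constants) to a softmax over the weighted sum of the chunks' logits. The sketch below illustrates that identity with toy values; the paper's exact weighting and normalization details are not given in this summary, so treat the function and its arguments as illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def poe_fuse(chunk_logits, weights):
    """Weighted Product-of-Experts fusion at the logit level.

    p(y) ∝ ∏_k p_k(y)^{w_k} reduces to softmax(Σ_k w_k * logits_k),
    since each chunk's log-partition constant is absorbed by the
    final normalization.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                 # normalize chunk weights
    fused = np.tensordot(weights, np.asarray(chunk_logits), axes=1)
    return softmax(fused)

# Three parallel chunks, toy vocabulary of 4 tokens (values are illustrative).
logits = [[2.0, 0.5, 0.1, -1.0],
          [1.5, 1.0, 0.0, -0.5],
          [2.5, 0.2, 0.3, -1.2]]
probs = poe_fuse(logits, weights=[0.5, 0.3, 0.2])
```

Because fusion happens on logits rather than on decoded text, the chunks can be prefilled fully in parallel and combined at each decoding step.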

Abstract

Large vision-language models (LVLMs) employ multi-modal in-context learning (MM-ICL) to adapt to new tasks by leveraging demonstration examples. While increasing the number of demonstrations boosts performance, doing so incurs significant inference latency due to the quadratic computational cost of Transformer attention with respect to the context length. To address this trade-off, we propose Parallel In-Context Learning (Parallel-ICL), a plug-and-play inference algorithm. Parallel-ICL partitions the long demonstration context into multiple shorter, manageable chunks. It processes these chunks in parallel and integrates their predictions at the logit level, using a weighted Product-of-Experts (PoE) ensemble to approximate the full-context output. Guided by ensemble learning theory, we introduce principled strategies for Parallel-ICL: (i) clustering-based context chunking to maximize inter-chunk diversity and (ii) similarity-based context compilation to weight predictions by query relevance. Extensive experiments on VQA, image captioning, and classification benchmarks demonstrate that Parallel-ICL achieves performance comparable to full-context MM-ICL, while significantly improving inference speed. Our work offers an effective solution to the accuracy-efficiency trade-off in MM-ICL, enabling dynamic task adaptation with substantially reduced inference overhead.
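The abstract's second strategy, similarity-based context compilation, weights each chunk's prediction by its relevance to the query. One plausible reading is a softmax over cosine similarities between a query embedding and each chunk's mean demonstration embedding; the sketch below assumes that formulation (the function name, the mean-pooling, and the temperature parameter are assumptions, not the paper's stated design).

```python
import numpy as np

def chunk_weights(query_emb, chunk_embs, temperature=1.0):
    """Assumed similarity-based weighting: score each chunk by the cosine
    similarity between its (mean-pooled) demonstration embedding and the
    query embedding, then normalize the scores with a softmax."""
    q = query_emb / np.linalg.norm(query_emb)
    c = chunk_embs / np.linalg.norm(chunk_embs, axis=1, keepdims=True)
    sims = c @ q                          # cosine similarity per chunk
    z = sims / temperature
    z = z - z.max()                       # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

# Toy example: 4 chunks, 8-dimensional embeddings (random for illustration).
rng = np.random.default_rng(0)
query = rng.normal(size=8)
chunks = rng.normal(size=(4, 8))
w = chunk_weights(query, chunks)
```

A lower temperature sharpens the weighting toward the most query-relevant chunk, while a higher one moves the ensemble toward a uniform average; the resulting weights would feed the PoE fusion described above.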