Beyond Feature Fusion: Contextual Bayesian PEFT for Multimodal Uncertainty Estimation

arXiv cs.LG / 4/21/2026


Key Points

  • The paper introduces CoCo-LoRA, a multimodal parameter-efficient fine-tuning (PEFT) method that estimates uncertainty for text prediction using both text-derived signals and audio context.
  • It extends deterministic LoRA and unimodal Bayesian low-rank adapters by conditioning a variational posterior on an audio-derived context signal to better capture uncertainty from real-world acoustic factors.
  • CoCo-LoRA projects a pooled audio embedding once into a shared context space and then uses lightweight layer-wise heads to modulate uncertainty and updates in a global-to-local, depth-specific way without expensive high-dimensional multimodal fusion.
  • The approach confines stochasticity to a compact latent component within the low-rank space, aiming to retain PEFT scalability while producing audio-sensitive, heteroscedastic uncertainty.
  • Experiments across multiple tasks and backbone combinations show CoCo-LoRA matches or outperforms text-only PEFT and standard feature-fusion baselines, especially when reliable adaptation is crucial for high-coverage labels.
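The pipeline described in the bullets above — a pooled audio embedding projected once into a shared context space, lightweight layer-wise heads producing a per-layer variational posterior, and a reparameterized latent confined to the low-rank space — can be sketched numerically. This is a minimal NumPy illustration under our own assumptions: all shapes, initializations, and names (`P_audio`, `H_mu`, `H_logvar`, `coco_lora_layer`) are hypothetical and are not taken from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, c, a = 16, 4, 8, 32   # hidden dim, LoRA rank, context dim, audio-embedding dim (illustrative)

# Frozen base weight and LoRA factors. Standard LoRA zero-initializes B;
# we use small random values here purely so the adapter's effect is visible.
W = rng.standard_normal((d, d)) * 0.02
A = rng.standard_normal((r, d)) * 0.02
B = rng.standard_normal((d, r)) * 0.02

# The pooled audio embedding is projected ONCE into a shared context space...
P_audio = rng.standard_normal((c, a)) * 0.02
# ...then lightweight layer-wise heads map that shared context to a per-layer
# posterior over a latent z living in the r-dimensional rank space.
H_mu = rng.standard_normal((r, c)) * 0.02
H_logvar = rng.standard_normal((r, c)) * 0.02

def coco_lora_layer(x, audio_emb, text_feat):
    """One adapter layer: x (d,), pooled audio_emb (a,), local text signal text_feat (r,)."""
    ctx = P_audio @ audio_emb               # shared context (global, computed once per input)
    mu = H_mu @ ctx + text_feat             # posterior mean conditioned on text AND audio
    std = np.exp(0.5 * (H_logvar @ ctx))    # audio-driven heteroscedastic scale
    z = mu + std * rng.standard_normal(r)   # reparameterized sample: noise stays r-dimensional
    return W @ x + B @ (z * (A @ x))        # low-rank update gated by the sampled latent

x = rng.standard_normal(d)
audio = rng.standard_normal(a)
text_feat = A @ x                           # simple stand-in for the local adapter features
y1 = coco_lora_layer(x, audio, text_feat)
y2 = coco_lora_layer(x, audio, text_feat)   # differs from y1: the posterior is stochastic
```

Note how the only stochastic object is the r-dimensional latent `z`, which is why the method can stay parameter-efficient: no full-weight posterior and no high-dimensional audio-text fusion is ever materialized.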

Abstract

We introduce CoCo-LoRA, a multimodal, uncertainty-aware parameter-efficient fine-tuning method for text prediction tasks accompanied by audio context. Existing PEFT approaches such as LoRA are efficient but typically deterministic, while recent Bayesian low-rank adapters model uncertainty in a lightweight way yet remain largely unimodal and condition uncertainty primarily on internal text features. This leaves them poorly equipped to reflect uncertainty driven by external acoustic factors such as background noise, channel variability, or speaking style, which can materially affect reliability in speech-centered applications. CoCo-LoRA addresses this gap by conditioning a contextual variational posterior in the low-rank space on both local text-derived adapter features and an audio-derived context signal. A pooled audio embedding is projected once into a shared context space and then adapted through lightweight layer-wise heads, enabling global-to-local, depth-specific modulation of the adapter uncertainty and update without high-dimensional multimodal fusion. Stochasticity is confined to a compact latent component in the rank space, preserving PEFT scalability while producing audio-sensitive, heteroscedastic uncertainty. Based on our evaluations across diverse tasks and backbone combinations, CoCo-LoRA consistently matches or outperforms text-only PEFT and conventional feature-fusion transfer baselines, particularly on high-coverage labels where reliable adaptation is critical. The results indicate that using audio as a contextual uncertainty signal, rather than as a fused feature stream, provides a robust and parameter-efficient alternative for multimodal low-resource prediction.
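The abstract's claim of "audio-sensitive, heteroscedastic uncertainty" amounts to the posterior scale being a function of the acoustic context rather than a fixed parameter. The sketch below illustrates that property in isolation, again under our own assumptions: the head `H_logvar`, the context vectors, and the non-negative weights (used so the comparison is deterministic) are all hypothetical choices for the demo, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
r, c = 4, 8   # LoRA rank and shared-context dimension (illustrative)

# Hypothetical layer-wise head; non-negative weights keep the demo monotone
# in the context magnitude, so "more acoustic degradation" maps to "more variance".
H_logvar = np.abs(rng.standard_normal((r, c))) * 0.1

def posterior_std(audio_ctx):
    """Heteroscedastic scale of the rank-space latent, conditioned on audio context."""
    return np.exp(0.5 * (H_logvar @ audio_ctx))

clean = np.full(c, 0.1)   # stand-in context for clean, close-talk speech
noisy = np.full(c, 3.0)   # stand-in context for a noisy, degraded channel

# The same adapter reports larger uncertainty under the noisier acoustic context.
print(posterior_std(clean).mean() < posterior_std(noisy).mean())  # True
```

A text-only Bayesian adapter would produce the same posterior scale for both inputs here, since nothing in its conditioning path sees the acoustic context; that contrast is the gap the abstract says CoCo-LoRA targets.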