Do Audio-Visual Large Language Models Really See and Hear?

arXiv cs.AI / 4/6/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper presents the first mechanistic interpretability study of Audio-Visual Large Language Models (AVLLMs), tracking how audio and visual features change and fuse across layers to generate text outputs.
  • It finds that while AVLLMs learn rich audio semantics at intermediate layers, these audio capabilities often fail to surface in the final text output when the audio conflicts with the visual input.
  • Probing shows latent audio information is still present, but later fusion layers disproportionately favor visual representations, suppressing audio cues.
  • The study traces this modality imbalance to training: the model’s audio behavior closely matches that of its vision-language base model, suggesting that audio supervision added little further alignment.
  • Overall, the findings identify a fundamental modality bias in AVLLMs and explain mechanistically how multimodal LLMs integrate audio and vision.
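
The probing analyses referenced above typically fit a small linear classifier on a layer's hidden states to test whether a property (here, audio semantics) is linearly decodable, even when it does not reach the output. Below is a minimal NumPy sketch of that idea on synthetic activations; the dimensions, class counts, and "signal strength" per layer are all hypothetical stand-ins, not the paper's actual setup.

```python
import numpy as np

def probe_accuracy(hidden, labels, seed=0):
    """Fit a linear probe (least-squares regression onto one-hot labels)
    on 80% of the samples and return held-out accuracy on the rest.
    hidden: (n_samples, d) layer activations; labels: (n_samples,) ints."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    idx = rng.permutation(n)
    split = int(0.8 * n)
    tr, te = idx[:split], idx[split:]
    Y = np.eye(labels.max() + 1)[labels]              # one-hot targets
    W, *_ = np.linalg.lstsq(hidden[tr], Y[tr], rcond=None)
    pred = (hidden[te] @ W).argmax(axis=1)
    return (pred == labels[te]).mean()

# Hypothetical stand-in: 400 clips, 4 audio classes, 64-d activations.
# Each "layer" mixes class-dependent signal with noise at a different ratio,
# mimicking audio information that is strong mid-network and weaker later.
rng = np.random.default_rng(1)
labels = rng.integers(0, 4, size=400)
centers = rng.normal(size=(4, 64))
for layer, signal in enumerate([0.1, 3.0, 0.5]):      # weak → strong → suppressed
    hidden = signal * centers[labels] + rng.normal(size=(400, 64))
    print(f"layer {layer}: probe accuracy = {probe_accuracy(hidden, labels):.2f}")
```

If the probe recovers the audio label well at intermediate layers but accuracy drops toward the output, that is evidence the information was present and later suppressed, which is the pattern the paper reports.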

Abstract

Audio-Visual Large Language Models (AVLLMs) are emerging as unified interfaces to multimodal perception. We present the first mechanistic interpretability study of AVLLMs, analyzing how audio and visual features evolve and fuse through different layers of an AVLLM to produce the final text outputs. We find that although AVLLMs encode rich audio semantics at intermediate layers, these capabilities largely fail to surface in the final text generation when audio conflicts with vision. Probing analyses show that useful latent audio information is present, but deeper fusion layers disproportionately privilege visual representations that tend to suppress audio cues. We further trace this imbalance to training: the AVLLM's audio behavior strongly matches its vision-language base model, indicating limited additional alignment to audio supervision. Our findings reveal a fundamental modality bias in AVLLMs and provide new mechanistic insights into how multimodal LLMs integrate audio and vision.