Don't Let the Video Speak: Audio-Contrastive Preference Optimization for Audio-Visual Language Models
arXiv cs.CV / April 16, 2026
Key Points
- The paper targets a reliability bottleneck in audio-visual language models: cross-modal hallucination, where a model takes visual "shortcuts" and describes sounds that are not actually present in the audio track.
- It proposes Audio-Contrastive Preference Optimization (ACPO), a dual-axis preference-learning method whose output-contrastive axis penalizes responses that present visual events as if they were audible.
- ACPO's second, input-contrastive axis swaps in mismatched audio tracks and penalizes generations that remain invariant to the real auditory signal (a minimal sketch of both axes follows this list).
- Experiments reported in the paper show that ACPO improves faithful audio grounding and reduces video-driven audio hallucination while preserving broader multimodal performance.
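The two contrastive axes map naturally onto DPO-style preference pairs. Below is a minimal PyTorch sketch, assuming an HF-style audio-visual model whose forward pass accepts `video`, `audio`, and `labels` and returns the mean token NLL as `.loss`; the helper names (`seq_logprob`, `acpo_loss`), the `beta` temperature, and the equal weighting of the two terms are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def seq_logprob(model, video, audio, response):
    """Sequence log-probability of `response` given (video, audio).

    Assumes an HF-style interface where the forward pass returns the mean
    token NLL as `.loss`; this helper and its kwargs are illustrative
    assumptions, not the paper's actual API.
    """
    out = model(video=video, audio=audio, labels=response)
    num_tokens = response.ne(-100).sum()   # tokens that contribute to the loss
    return -out.loss * num_tokens          # un-average back to a summed log-prob

def acpo_loss(policy, ref, video, audio_true, audio_swapped,
              y_faithful, y_visual_hallu, beta=0.1):
    # Axis 1 (output-contrastive): same inputs, contrasting outputs.
    # chosen  = caption grounded in the real audio;
    # rejected = caption describing a visually plausible but silent event.
    pi_w = seq_logprob(policy, video, audio_true, y_faithful)
    pi_l = seq_logprob(policy, video, audio_true, y_visual_hallu)
    with torch.no_grad():  # frozen reference model, as in standard DPO
        ref_w = seq_logprob(ref, video, audio_true, y_faithful)
        ref_l = seq_logprob(ref, video, audio_true, y_visual_hallu)
    loss_output = -F.logsigmoid(beta * ((pi_w - pi_l) - (ref_w - ref_l)))

    # Axis 2 (input-contrastive): contrasting audio, same output.
    # The faithful caption should be more likely under the true audio than
    # under a swapped track; a model that ignores audio scores both inputs
    # equally, so this term directly penalizes audio-invariant generations.
    pi_l_in = seq_logprob(policy, video, audio_swapped, y_faithful)
    with torch.no_grad():
        ref_l_in = seq_logprob(ref, video, audio_swapped, y_faithful)
    loss_input = -F.logsigmoid(beta * ((pi_w - pi_l_in) - (ref_w - ref_l_in)))

    # Equal weighting of the two axes is an assumption for this sketch.
    return (loss_output + loss_input).mean()
```

As in standard DPO, scoring each pair against a frozen reference model keeps the policy close to its starting point while reordering preferences, which is consistent with the paper's report that broader multimodal performance is preserved.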