CAF-Score: Calibrating CLAP with LALMs for Reference-free Audio Captioning Evaluation

arXiv cs.CL / 3/23/2026


Key Points

  • CAF-Score is a reference-free evaluation metric for audio captioning that calibrates CLAP's coarse semantic alignment with the fine-grained understanding of Large Audio-Language Models (LALMs).
  • It combines contrastive audio-text embeddings with LALM-style reasoning to detect syntactic inconsistencies and subtle hallucinations in captions.
  • In BRACE benchmark experiments, CAF-Score achieves the highest correlation with human judgments and can outperform traditional reference-based metrics in challenging scenarios.
  • The authors provide code and results on GitHub, enabling reproducibility and wider adoption of the metric.

Abstract

While Large Audio-Language Models (LALMs) have advanced audio captioning, robust evaluation remains difficult. Reference-based metrics are expensive and often fail to assess acoustic fidelity, while Contrastive Language-Audio Pretraining (CLAP)-based approaches frequently overlook syntactic errors and fine-grained details. We propose CAF-Score, a reference-free metric that calibrates CLAP's coarse-grained semantic alignment with the fine-grained comprehension and syntactic awareness of LALMs. By combining contrastive audio-text embeddings with LALM reasoning, CAF-Score effectively detects syntactic inconsistencies and subtle hallucinations. Experiments on the BRACE benchmark demonstrate that our approach achieves the highest correlation with human judgments, even outperforming reference-based baselines in challenging scenarios. These results highlight the efficacy of CAF-Score for reference-free audio captioning evaluation. Code and results are available at https://github.com/inseong00/CAF-Score.
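The abstract describes calibrating CLAP's coarse semantic alignment with an LALM's fine-grained judgment. The paper does not spell out the combination formula here, so the sketch below is purely illustrative: the stub scorers (`clap_similarity`, `lalm_consistency`), the rescaling, and the convex-combination weight `alpha` are all assumptions, not the authors' actual method.

```python
# Hypothetical sketch of a CAF-Score-style calibration. The scoring stubs,
# the [0, 1] rescaling, and the mixing weight are placeholder assumptions
# for illustration only -- they are NOT taken from the paper.

def clap_similarity(audio_path: str, caption: str) -> float:
    # Placeholder for a CLAP audio-text cosine similarity in [-1, 1].
    # A real implementation would embed both modalities with a CLAP model.
    return 0.62  # dummy value

def lalm_consistency(audio_path: str, caption: str) -> float:
    # Placeholder for an LALM-derived score in [0, 1] reflecting syntactic
    # soundness and the absence of hallucinated acoustic details.
    return 0.85  # dummy value

def caf_score(audio_path: str, caption: str, alpha: float = 0.5) -> float:
    """Blend coarse CLAP alignment with fine-grained LALM judgment.

    `alpha` is an assumed calibration knob trading off the two signals.
    """
    clap = (clap_similarity(audio_path, caption) + 1.0) / 2.0  # -> [0, 1]
    lalm = lalm_consistency(audio_path, caption)
    return alpha * clap + (1.0 - alpha) * lalm

print(round(caf_score("clip.wav", "a dog barks twice"), 3))
```

With the dummy values above, the blended score lands between the rescaled CLAP similarity and the LALM consistency score; in practice both stubs would be replaced by real model calls, and `alpha` would be tuned against human judgments on a benchmark such as BRACE.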