JAMMEval: A Refined Collection of Japanese Benchmarks for Reliable VLM Evaluation

arXiv cs.CV / 4/2/2026


Key Points

  • The paper introduces JAMMEval, a refined set of Japanese VQA benchmarks aimed at producing more reliable evaluation for vision-language models (VLMs).
  • It addresses known benchmark-quality problems such as ambiguous questions, incorrect answers, and examples solvable without visual grounding by systematically refining seven existing Japanese datasets.
  • The refinement is done via two rounds of human annotation, improving both data quality and evaluation reliability.
  • Experiments evaluate both open-weight and proprietary VLMs on JAMMEval; the refined benchmarks yield scores that better reflect actual model capability, exhibit lower run-to-run variance, and better separate models of different capability levels.
  • The authors release the dataset and code to support more trustworthy Japanese VLM evaluation going forward.

Abstract

Reliable evaluation is essential for the development of vision-language models (VLMs). However, Japanese VQA benchmarks have undergone far less iterative refinement than their English counterparts. As a result, many existing benchmarks contain issues such as ambiguous questions, incorrect answers, and instances that can be solved without visual grounding, undermining evaluation reliability and leading to misleading conclusions in model comparisons. To address these limitations, we introduce JAMMEval, a refined collection of Japanese benchmarks for reliable VLM evaluation. It is constructed by systematically refining seven existing Japanese benchmark datasets through two rounds of human annotation, improving both data quality and evaluation reliability. In our experiments, we evaluate open-weight and proprietary VLMs on JAMMEval and analyze the capabilities of recent models on Japanese VQA. We further demonstrate the effectiveness of our refinement by showing that the resulting benchmarks yield evaluation scores that better reflect model capability, exhibit lower run-to-run variance, and improve the ability to distinguish between models of different capability levels. We release our dataset and code to advance reliable evaluation of VLMs.
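The abstract's reliability claims (lower run-to-run variance, better separation between models) can be made concrete with standard summary statistics. The sketch below is a hypothetical illustration, not code or data from the paper: the scores are fabricated, and the `separation` metric (mean-score gap divided by per-model run noise) is one common way to quantify how reliably a benchmark distinguishes two models.

```python
# Hypothetical illustration (not from the paper) of how run-to-run
# variance and model separation can be quantified for a benchmark.
from statistics import mean, stdev

# Fabricated per-run accuracy scores for two models on one benchmark,
# purely for illustration.
runs = {
    "model_a": [0.62, 0.64, 0.63],
    "model_b": [0.55, 0.57, 0.56],
}

def run_to_run_std(scores):
    """Sample standard deviation of accuracy across repeated runs."""
    return stdev(scores)

def separation(scores_a, scores_b):
    """Gap between mean scores in units of the larger per-model std.

    Larger values mean the benchmark distinguishes the two models
    more reliably relative to its run-to-run noise.
    """
    gap = abs(mean(scores_a) - mean(scores_b))
    noise = max(stdev(scores_a), stdev(scores_b))
    return gap / noise

print(run_to_run_std(runs["model_a"]))
print(separation(runs["model_a"], runs["model_b"]))
```

Under this view, a refinement that removes ambiguous or mislabeled items should shrink `run_to_run_std` and grow `separation`, which is the effect the authors report.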
