AI Navigate

Beyond Final Answers: CRYSTAL Benchmark for Transparent Multimodal Reasoning Evaluation

arXiv cs.AI / 3/16/2026


Key Points

  • CRYSTAL is a diagnostic benchmark with 6,372 instances that evaluates multimodal reasoning through verifiable intermediate steps and introduces two metrics: Match F1 and Ordered Match F1.
  • The benchmark uses a Delphi-inspired pipeline in which four independent MLLMs generate trajectories that are clustered semantically and validated through human quality gates.
  • Evaluation across 20 MLLMs, including commercial frontier systems not used during benchmark construction, reveals systematic failures invisible to accuracy assessments, such as universal cherry-picking and disordered reasoning.
  • To address these issues, the authors propose the Causal Process Reward (CPR) and CPR-Curriculum, with CPR-Curriculum achieving a +32% improvement in Match F1 via GRPO and reducing reliance on manual step annotation.
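The two metrics can be sketched in code. This is a plausible reconstruction, not the paper's exact formulation: the similarity function, matching strategy, and threshold below are assumptions (the benchmark uses semantic similarity matching, which a string-overlap ratio merely stands in for here). Ordered Match F1 is modeled as counting only the largest subset of matched steps that appear in reference order, via a longest increasing subsequence.

```python
from difflib import SequenceMatcher


def _sim(a: str, b: str) -> float:
    # Stand-in for semantic similarity; the benchmark matches steps
    # semantically (e.g., via embeddings), details not given here.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def match_f1(pred_steps, ref_steps, threshold=0.6):
    """Greedy one-to-one matching of predicted to reference steps.

    Returns (F1, matches) where matches is a list of (pred_idx, ref_idx).
    """
    used, matches = set(), []
    for i, p in enumerate(pred_steps):
        best, best_j = 0.0, None
        for j, r in enumerate(ref_steps):
            if j not in used and _sim(p, r) > best:
                best, best_j = _sim(p, r), j
        if best_j is not None and best >= threshold:
            used.add(best_j)
            matches.append((i, best_j))
    precision = len(matches) / len(pred_steps) if pred_steps else 0.0
    recall = len(matches) / len(ref_steps) if ref_steps else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return f1, matches


def ordered_match_f1(pred_steps, ref_steps, threshold=0.6):
    """Like Match F1, but only matches that preserve reference order count."""
    _, matches = match_f1(pred_steps, ref_steps, threshold)
    # Reference indices in prediction order; the longest increasing
    # subsequence is the largest in-order subset of matches.
    ref_order = [j for _, j in sorted(matches)]
    lis = []
    for j in ref_order:
        pos = next((k for k, v in enumerate(lis) if v >= j), None)
        if pos is None:
            lis.append(j)
        else:
            lis[pos] = j
    ordered = len(lis)
    precision = ordered / len(pred_steps) if pred_steps else 0.0
    recall = ordered / len(ref_steps) if ref_steps else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

Under this sketch, a response containing all reference steps but in reversed order keeps a perfect Match F1 while its Ordered Match F1 drops, which is exactly the gap the second metric is designed to expose.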

Abstract

We introduce **CRYSTAL** (**C**lear **R**easoning via **Y**ielded **S**teps, **T**raceability and **L**ogic), a diagnostic benchmark with 6,372 instances that evaluates multimodal reasoning through verifiable intermediate steps. We propose two complementary metrics: *Match F1*, which scores step-level precision and recall via semantic similarity matching, and *Ordered Match F1*, which further penalizes disordered reasoning chains. References are constructed through a Delphi-inspired pipeline where four independent MLLMs generate trajectories, aggregated via semantic clustering and validated through human quality gates. Evaluation of 20 MLLMs, including commercial frontier systems not used during benchmark construction, reveals systematic failures invisible to accuracy: universal cherry-picking (precision far exceeds recall), non-monotonic scaling trade-offs, and disordered reasoning where no competitive model preserves more than 60% of matched steps in correct order. Beyond evaluation, we propose the **Causal Process Reward (CPR)**, a multiplicative reward that couples answer correctness with step-level alignment, and **CPR-Curriculum**, which progressively increases reasoning difficulty during training. CPR-Curriculum achieves +32% Match F1 via GRPO where additive reward strategies fail, improving reasoning without manual step annotation.
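The abstract's distinction between multiplicative and additive rewards can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: `step_alignment` stands in for a step-level alignment score in [0, 1] (e.g., Match F1 against references), and the additive weights are hypothetical. The key structural difference is that the multiplicative form gives zero reward unless the answer is correct, so step alignment cannot be "farmed" independently of correctness.

```python
def causal_process_reward(answer_correct: bool, step_alignment: float) -> float:
    # Multiplicative coupling: reward flows only when the final answer
    # is right AND the reasoning steps align with the references.
    return float(answer_correct) * step_alignment


def additive_reward(answer_correct: bool, step_alignment: float, w: float = 0.5) -> float:
    # Hypothetical additive baseline: a wrong answer with well-aligned
    # steps still collects partial reward, which can be exploited.
    return w * float(answer_correct) + (1.0 - w) * step_alignment
```

A wrong answer with highly aligned steps earns `additive_reward(False, 0.9) = 0.45` but `causal_process_reward(False, 0.9) = 0.0`, which is one plausible reading of why the additive strategy fails where CPR succeeds.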