ScoringBench: A Benchmark for Evaluating Tabular Foundation Models with Proper Scoring Rules

arXiv cs.AI / 4/1/2026


Key Points

  • ScoringBench is introduced as an open benchmark for evaluating tabular foundation models using proper scoring rules that better capture probabilistic forecast quality than point-estimate metrics alone.
  • The benchmark computes multiple distribution-aware metrics (e.g., CRPS, CRLS, Interval Score, Energy Score, weighted CRPS, and Brier Score) alongside standard regression measures like RMSE and R².
  • Experiments with fine-tuned versions of realTabPFN v2.5 and TabICL show that model rankings change depending on the chosen scoring rule, indicating no single pretraining objective is universally best.
  • The authors argue that proper metric selection is crucial for high-stakes domains where tail behavior and asymmetric risk are important, such as finance and clinical research.
  • ScoringBench provides a public leaderboard and live preview, with updates managed via git pull requests to support transparency, traceability, and reproducibility.
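To make the distribution-aware metrics above concrete, here is a minimal sketch of the standard sample-based CRPS estimator, CRPS ≈ E|X − y| − ½·E|X − X′|, where X and X′ are independent draws from the predictive distribution. This is a generic illustration of the metric, not ScoringBench's own implementation.

```python
import numpy as np

def crps_from_samples(samples, y):
    """Sample-based CRPS estimator:
    CRPS ≈ mean(|x_i - y|) - 0.5 * mean(|x_i - x_j|),
    averaging the second term over all sample pairs."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - y))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

# A sharp, well-centred forecast scores lower (better) than a diffuse one.
rng = np.random.default_rng(0)
sharp = rng.normal(loc=0.0, scale=0.5, size=2000)
wide = rng.normal(loc=0.0, scale=3.0, size=2000)
print(crps_from_samples(sharp, 0.0) < crps_from_samples(wide, 0.0))  # True
```

For a degenerate (point) forecast the estimator reduces to absolute error, which is why CRPS is directly comparable to MAE while still rewarding calibrated uncertainty.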

Abstract

Tabular foundation models such as TabPFN and TabICL already produce full predictive distributions, yet prevailing regression benchmarks evaluate them almost exclusively via point-estimate metrics (RMSE, R²). These aggregate measures often obscure model performance in the tails of the distribution, a critical deficit for high-stakes decision making in domains like finance and clinical research, where asymmetric risk profiles are the norm. We introduce ScoringBench, an open benchmark that computes a comprehensive suite of proper scoring rules (CRPS, CRLS, Interval Score, Energy Score, weighted CRPS, and Brier Score) alongside standard point metrics, providing a richer picture of probabilistic forecast quality. We evaluate realTabPFN v2.5 fine-tuned with different scoring-rule objectives, as well as TabICL, relative to untuned realTabPFN v2.5 across a suite of regression benchmarks. Our results confirm that model rankings depend on the chosen scoring rule and that no single pretraining objective is universally optimal. This demonstrates that, for applications sensitive to extreme events, the choice of evaluation metric is as much a domain-specific requirement as the data itself. ScoringBench is available at https://github.com/jonaslandsgesell/ScoringBench. A live preview of the current leaderboard is available at https://scoringbench.bolt.host. The leaderboard is maintained via git pull requests to ensure transparency, traceability, agility, and reproducibility.
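Among the metrics the abstract lists, the Interval Score (also known as the Winkler score) is easy to state in full: for a central (1 − α) prediction interval [l, u], it charges the interval width plus a penalty of (2/α) times the distance by which the observation falls outside. The sketch below illustrates that standard definition from the forecasting literature; it is not claimed to match ScoringBench's exact implementation.

```python
def interval_score(l, u, y, alpha=0.1):
    """Interval (Winkler) score for a central (1 - alpha) interval [l, u].
    Lower is better: width is always charged; misses are penalised
    in proportion to how far y lies outside the interval."""
    score = u - l  # sharpness term: narrower intervals are rewarded
    if y < l:
        score += (2.0 / alpha) * (l - y)  # penalty for undershooting
    elif y > u:
        score += (2.0 / alpha) * (y - u)  # penalty for overshooting
    return score

# Covered observation: score equals the interval width.
print(interval_score(0.0, 1.0, 0.5))             # 1.0
# Missed observation: width plus a steep miss penalty.
print(interval_score(0.0, 1.0, 2.0, alpha=0.1))  # 21.0
```

The α-scaled penalty is what makes this a proper scoring rule: a forecaster minimizes expected score only by reporting genuine (1 − α) intervals, rather than gaming the metric with intervals that are systematically too narrow or too wide.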
