Beyond Accuracy: Introducing a Symbolic-Mechanistic Approach to Interpretable Evaluation

arXiv cs.LG · March 26, 2026


Key Points

  • The paper argues that accuracy-only benchmarks can be misleading because they cannot reliably separate true generalization from shortcuts such as memorization, data leakage, or brittle heuristics, particularly in small-data settings.
  • It proposes a mechanism-aware, symbolic-mechanistic evaluation method that uses task-relevant symbolic rules together with mechanistic interpretability to generate algorithmic pass/fail judgments.
  • The authors demonstrate the approach on NL-to-SQL by training two models with identical architectures: one trained without schema information (forcing memorization) and one trained with schema access (enabling grounding).
  • While standard evaluation reports high field-name accuracy for the memorization model, the symbolic-mechanistic evaluation pinpoints that it violates schema generalization rules, exposing a failure not visible through accuracy metrics.

Abstract

Accuracy-based evaluation cannot reliably distinguish genuine generalization from shortcuts like memorization, leakage, or brittle heuristics, especially in small-data regimes. In this position paper, we argue for mechanism-aware evaluation that combines task-relevant symbolic rules with mechanistic interpretability, yielding algorithmic pass/fail scores that show exactly where models generalize versus exploit patterns. We demonstrate this on NL-to-SQL by training two identical architectures under different conditions: one without schema information (forcing memorization), one with schema (enabling grounding). Standard evaluation shows the memorization model achieves 94% field-name accuracy on unseen data, falsely suggesting competence. Our symbolic-mechanistic evaluation reveals this model violates core schema generalization rules, a failure invisible to accuracy metrics.
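The paper does not publish its rule definitions, but the core idea of a symbolic pass/fail check can be sketched in a few lines. The code below is a toy illustration, not the authors' implementation: it applies one hypothetical schema-generalization rule to a generated SQL string, passing only if every identifier the model emits actually exists in the target schema. The naive regex tokenizer and the keyword list are simplifying assumptions; a real evaluator would use a proper SQL parser.

```python
import re

def check_schema_grounding(sql: str, schema: dict) -> dict:
    """Toy symbolic rule: every identifier in the generated SQL must be
    a known table or column in the schema. Returns an algorithmic
    pass/fail verdict plus the violating identifiers, mirroring the
    paper's idea of interpretable pass/fail scores."""
    known_fields = set().union(*schema.values())
    known_tables = set(schema)
    # Naive tokenizer: treat every non-keyword word as a candidate
    # identifier. This is a simplification for illustration only.
    keywords = {"select", "from", "where", "and", "or", "join", "on",
                "group", "by", "order", "limit", "as", "count", "avg"}
    tokens = re.findall(r"[A-Za-z_][A-Za-z0-9_]*", sql)
    identifiers = {t for t in tokens if t.lower() not in keywords}
    unknown = identifiers - known_fields - known_tables
    return {"pass": not unknown, "unknown_identifiers": sorted(unknown)}

schema = {"employees": {"id", "name", "salary"}}
# A grounded model emits only schema fields: the rule passes.
grounded = check_schema_grounding(
    "SELECT name FROM employees WHERE salary > 100", schema)
# A memorizing model hallucinates a field from training data:
# the rule fails even if string-level accuracy looks high.
memorized = check_schema_grounding(
    "SELECT dept FROM employees", schema)
```

The key property is that the verdict is interpretable: a failure names the exact identifiers that violate the rule, rather than folding everything into a single accuracy number.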