Beyond Accuracy: Introducing a Symbolic-Mechanistic Approach to Interpretable Evaluation
arXiv cs.LG / 2026/3/26
Key Points
- The paper argues that accuracy-only benchmarks can be misleading because they cannot reliably separate true generalization from shortcuts such as memorization, data leakage, or brittle heuristics, particularly in small-data settings.
- It proposes a mechanism-aware, symbolic-mechanistic evaluation method that uses task-relevant symbolic rules together with mechanistic interpretability to generate algorithmic pass/fail judgments.
- The authors demonstrate the approach on NL-to-SQL by training two models with identical architecture and training setup: one deprived of schema information, forcing it to memorize, and another grounded in the schema.
- While standard evaluation reports high field-name accuracy for the memorizing model, the symbolic-mechanistic evaluation shows that it violates schema-generalization rules, exposing a failure mode invisible to accuracy metrics alone.
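The schema-generalization check described above can be illustrated with a minimal sketch. This is a hypothetical toy rule, not the paper's implementation: a generated SQL query "passes" only if every column it references actually exists in the schema the model was given, so a query that matches training data closely (high string-level accuracy) can still fail the symbolic rule. The crude regex-based column extraction stands in for a real SQL parser.

```python
import re

def referenced_columns(sql: str) -> set:
    """Rough extraction of columns in the SELECT clause (illustrative only,
    not a real SQL parser)."""
    select_part = re.search(r"select\s+(.*?)\s+from", sql, re.IGNORECASE)
    cols = set()
    if select_part:
        for token in select_part.group(1).split(","):
            cols.add(token.strip().lower())
    return cols

def schema_rule(sql: str, schema_columns: set) -> bool:
    """Symbolic pass/fail rule: every referenced column must appear in the
    schema provided at inference time."""
    allowed = {c.lower() for c in schema_columns}
    return referenced_columns(sql).issubset(allowed)

# A memorizing model may emit a column it saw in training even when that
# column is absent from the current schema; the rule flags this even if
# surface-level accuracy looks high.
schema = {"user_id", "signup_date"}
print(schema_rule("SELECT user_id FROM users", schema))   # in-schema column
print(schema_rule("SELECT email FROM users", schema))     # out-of-schema column
```

The point of such a rule is that it yields an algorithmic pass/fail judgment tied to a mechanism (schema grounding) rather than a scalar accuracy score, which is the contrast the paper draws.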
