Generalizing Numerical Reasoning in Table Data through Operation Sketches and Self-Supervised Learning
arXiv cs.LG / 4/24/2026
📰 News · Models & Research
Key Points
- The paper addresses a common limitation in table-based numerical reasoning: models often perform well within a dataset but fail under domain shift due to shortcut learning from table headers.
- It proposes TaNOS, a continual pre-training framework built on three ideas: header anonymization, operation sketches that preserve only minimal structural cues, and self-supervised program-first generation of program-question pairs whose answers are correct by construction (see the sketch after this list).
- By separating domain semantics from numerical operation structure, TaNOS improves transferability of numerical reasoning to new table distributions.
- On FinQA, an 8B instruction-tuned model trained with TaNOS reaches 80.13% execution accuracy using only 10% of the training data, beating an SFT baseline (73.97%) trained on the full data and outperforming the proprietary systems cited in the paper's abstract.
- Under cross-domain distribution shift, TaNOS keeps the in-domain/out-of-domain gap under 2 percentage points, while standard SFT shows gaps of more than 10 points, indicating substantially better robustness.
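
To make the separation of domain semantics from operation structure concrete, here is a minimal Python sketch of the three ideas, assuming a simple dict-based table representation and a FinQA-style program string. All function names and the example table are illustrative assumptions, not identifiers from the paper's code.

```python
import re

def anonymize_headers(table: dict) -> dict:
    """Header anonymization: replace domain-specific column names with neutral
    placeholders (COL_0, COL_1, ...) so the model cannot shortcut on header text."""
    mapping = {h: f"COL_{i}" for i, h in enumerate(table["headers"])}
    return {
        "headers": [mapping[h] for h in table["headers"]],
        "rows": table["rows"],        # cell values stay unchanged
        "header_map": mapping,        # kept so predictions can be mapped back
    }

def operation_sketch(program: str) -> str:
    """Operation sketch: keep only the operator skeleton of a numeric program by
    masking operand values, e.g. 'divide(subtract(120, 80), 80)' ->
    'divide(subtract(#, #), #)'."""
    return re.sub(r"(?<=[(,\s])-?\d+(\.\d+)?", "#", program)

def generate_pair(anon_table: dict) -> dict:
    """Program-first generation: build a program over the anonymized table,
    execute it to obtain the answer (correct by construction), then render a
    templated question."""
    c0, c1 = anon_table["headers"][:2]
    v0, v1 = anon_table["rows"][0][:2]
    program = f"divide(subtract({v1}, {v0}), {v0})"
    answer = (v1 - v0) / v0           # executing the program guarantees correctness
    question = f"What is the relative change from {c0} to {c1}?"
    return {"question": question, "program": program, "answer": answer}

if __name__ == "__main__":
    table = {"headers": ["Revenue 2022", "Revenue 2023"], "rows": [[80, 120]]}
    anon = anonymize_headers(table)
    pair = generate_pair(anon)
    print(anon["headers"])                    # ['COL_0', 'COL_1']
    print(operation_sketch(pair["program"]))  # divide(subtract(#, #), #)
    print(pair["answer"])                     # 0.5
```

In this reading, only the operation sketch (the operator skeleton) is meant to transfer across domains, while headers and operand values are anonymized or regenerated, which is the intuition behind the robustness claims above.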