Schema on the Inside: A Two-Phase Fine-Tuning Method for High-Efficiency Text-to-SQL at Scale
arXiv cs.CL / 3/26/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper addresses high cost and latency in production text-to-SQL systems that depend on large proprietary API LLMs and long, schema-heavy prompts.
- It proposes a two-phase supervised fine-tuning method that trains a self-hosted 8B model to internalize the full database schema, cutting input tokens by over 99% (from ~17k to <100).
- The approach is implemented in a cricket-stats conversational bot for CriQ (Dream11’s sister app), replacing expensive external API calls with efficient local inference.
- Reported results: 98.4% execution success and 92.5% semantic accuracy, outperforming a prompt-engineered baseline using Gemini 2.0 Flash (95.6% execution success, 89.4% semantic accuracy).
- Overall, the work presents a scalable, domain-specialized path to low-latency, high-precision text-to-SQL using smaller self-hosted models and schema-aware fine-tuning.
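To make the two-phase idea concrete, here is a minimal, hypothetical sketch of how the training data could be laid out: phase 1 pairs teach the model the schema itself, and phase 2 pairs train text-to-SQL with schema-free prompts. The table names, columns, and example query below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical two-phase SFT data layout for schema-internalized text-to-SQL.
# Phase 1: the model memorizes the schema. Phase 2: prompts carry only the
# user question (well under ~100 tokens), with no schema text attached.

SCHEMA = {  # toy cricket-stats schema (assumed, for illustration only)
    "batting": ["player_id", "match_id", "runs", "balls_faced"],
    "players": ["player_id", "name", "country"],
}

def phase1_examples(schema):
    """Schema-internalization pairs: recall each table's columns on demand."""
    return [
        {"prompt": f"List the columns of table `{table}`.",
         "completion": ", ".join(cols)}
        for table, cols in schema.items()
    ]

def phase2_example(question, sql):
    """Text-to-SQL pair with no schema in the prompt; the model must rely
    on what it internalized during phase 1."""
    return {"prompt": question, "completion": sql}

p1 = phase1_examples(SCHEMA)
p2 = phase2_example(
    "How many runs did Kohli score in match 42?",
    "SELECT b.runs FROM batting b "
    "JOIN players p ON p.player_id = b.player_id "
    "WHERE p.name = 'Kohli' AND b.match_id = 42;",
)
```

In this layout the schema appears only in phase-1 training targets, never in inference-time prompts, which is how the reported 99%+ input-token reduction would arise.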