Schema on the Inside: A Two-Phase Fine-Tuning Method for High-Efficiency Text-to-SQL at Scale

arXiv cs.CL / March 26, 2026


Key Points

  • The paper addresses high cost and latency in production text-to-SQL systems that depend on large proprietary API LLMs and long, schema-heavy prompts.
  • It proposes a two-phase supervised fine-tuning method that trains a self-hosted 8B model to internalize the full database schema, cutting input tokens by over 99% (from ~17k to <100).
  • The approach is implemented in a cricket-stats conversational bot for CriQ (Dream11’s sister app), replacing expensive external API calls with efficient local inference.
  • Reported performance shows 98.4% execution success and 92.5% semantic accuracy, outperforming a prompt-engineered baseline using Gemini Flash 2.0 (95.6% execution, 89.4% semantic accuracy).
  • Overall, the work presents a scalable, domain-specialized path to low-latency, high-precision text-to-SQL using smaller self-hosted models and schema-aware fine-tuning.
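The token economics driving the approach can be sketched as follows. This is a minimal illustration, not the paper's implementation: the schema DDL, table names, question, and whitespace-based token proxy are all hypothetical stand-ins (the real system uses an actual tokenizer and a production cricket-statistics schema that inflates prompts to ~17k tokens).

```python
# Hedged sketch: contrast a baseline long-context prompt (full schema DDL
# inlined on every request) with the schema-internalized prompt possible
# after fine-tuning. All table/column names and the question are invented.

SCHEMA_DDL = "\n".join(
    f"CREATE TABLE t{i} (id INT, player VARCHAR(64), runs INT, wickets INT);"
    for i in range(200)  # stand-in for a large production schema
)

def baseline_prompt(question: str) -> str:
    """Prompt-engineered baseline: every request carries the whole schema."""
    return f"Schema:\n{SCHEMA_DDL}\n\nQuestion: {question}\nSQL:"

def finetuned_prompt(question: str) -> str:
    """After the model internalizes the schema via fine-tuning,
    the prompt reduces to the user question alone."""
    return f"Question: {question}\nSQL:"

def rough_tokens(text: str) -> int:
    # Crude whitespace proxy; real token counts require the model's tokenizer.
    return len(text.split())

question = "How many wickets did the leading bowler take in the 2024 season?"
print("baseline:", rough_tokens(baseline_prompt(question)))
print("finetuned:", rough_tokens(finetuned_prompt(question)))
```

Even with this crude proxy, the fine-tuned prompt stays well under 100 tokens while the schema-carrying baseline is orders of magnitude larger, mirroring the >99% input-token reduction the paper reports.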

Abstract

Applying large, proprietary API-based language models to text-to-SQL tasks poses a significant industry challenge: reliance on massive, schema-heavy prompts results in prohibitive per-token API costs and high latency, hindering scalable production deployment. We present a specialized, self-hosted 8B-parameter model designed for a conversational bot in CriQ, a sister app to Dream11, India's largest fantasy sports platform with over 250 million users, that answers user queries about cricket statistics. Our novel two-phase supervised fine-tuning approach enables the model to internalize the entire database schema, eliminating the need for long-context prompts. This reduces input tokens by over 99%, from a 17k-token baseline to fewer than 100, and replaces costly external API calls with efficient local inference. The resulting system achieves 98.4% execution success and 92.5% semantic accuracy, substantially outperforming a prompt-engineered baseline using Google's Gemini Flash 2.0 (95.6% execution, 89.4% semantic accuracy). These results demonstrate a practical path toward high-precision, low-latency text-to-SQL applications using domain-specialized, self-hosted language models in large-scale production environments.
