Knowledge Distillation with Structured Chain-of-Thought for Text-to-SQL

arXiv cs.CL / 3/13/2026

Key Points

  • The article frames enterprise deployment of Text-to-SQL as a trilemma of cost, security, and performance, which forces a choice between expensive proprietary LLMs and lower-performing Small Language Models (SLMs).
  • It proposes Struct-SQL, a knowledge distillation framework that trains an SLM to emulate a powerful LLM, using a structured reasoning representation derived from the query execution plan as a formal blueprint (see the plan-to-trace sketch after this list).
  • It reports an absolute improvement of 8.1 percentage points over an unstructured CoT distillation baseline, demonstrating the effectiveness of structured reasoning for Text-to-SQL.
  • It finds that the gain largely comes from a reduction in syntactic errors, suggesting that teaching a model to reason with a structured logical blueprint improves reliability of SQL generation in SLMs.
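
To make the "execution plan as blueprint" idea concrete, here is a minimal Python sketch that turns a query's execution plan into an ordered, step-by-step trace. It uses SQLite's EXPLAIN QUERY PLAN purely for illustration; the paper's actual database engine, plan representation, and trace format are not given here, so the numbered "Step i:" wording and the toy schema are assumptions.

```python
import sqlite3


def plan_to_structured_cot(conn: sqlite3.Connection, sql: str) -> str:
    """Turn a query execution plan into an ordered, step-by-step trace.

    The 'Step i:' wording is an assumed trace format, not the paper's exact
    representation.
    """
    # SQLite's EXPLAIN QUERY PLAN yields rows of (id, parent, notused, detail).
    rows = conn.execute(f"EXPLAIN QUERY PLAN {sql}").fetchall()
    return "\n".join(f"Step {i + 1}: {row[3]}" for i, row in enumerate(rows))


if __name__ == "__main__":
    # Self-contained toy schema so the example actually runs.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
    query = "SELECT name FROM users WHERE age > 30 ORDER BY name"
    print(plan_to_structured_cot(conn, query))
    # Example output (engine- and version-dependent), e.g.:
    #   Step 1: SCAN users
    #   Step 2: USE TEMP B-TREE FOR ORDER BY
```

A trace like this can then serve as the intermediate reasoning the student model is trained to produce before emitting the final SQL.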

Abstract

Deploying accurate Text-to-SQL systems at the enterprise level faces a difficult trilemma involving cost, security, and performance. Current solutions force enterprises to choose between expensive, proprietary Large Language Models (LLMs) and low-performing Small Language Models (SLMs). Efforts to improve SLMs often rely on distilling reasoning from LLMs using unstructured Chain-of-Thought (CoT) traces, a process that remains inherently ambiguous. Instead, we hypothesize that a formal, structured reasoning representation provides a clearer, more reliable teaching signal, as the Text-to-SQL task requires explicit and precise logical steps. To evaluate this hypothesis, we propose Struct-SQL, a novel Knowledge Distillation (KD) framework that trains an SLM to emulate a powerful LLM. Concretely, we adopt a query execution plan as a formal blueprint to derive this structured reasoning. Our SLM, distilled with structured CoT, achieves an absolute improvement of 8.1% over an unstructured CoT distillation baseline. A detailed error analysis reveals that a key factor in this gain is a marked reduction in syntactic errors. This demonstrates that teaching a model to reason using a structured logical blueprint is beneficial for reliable SQL generation in SLMs.
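
For the distillation step itself, the sketch below shows one way an SLM could be fine-tuned on teacher-written structured traces, assuming the traces were collected offline as (prompt, target) pairs. The student model name, the trace format, and the plain causal-LM fine-tuning loop are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sequence-level distillation sketch: the student is fine-tuned to
# reproduce teacher-generated structured traces followed by the final SQL.
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

STUDENT = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder small model, not the paper's choice

tokenizer = AutoTokenizer.from_pretrained(STUDENT)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(STUDENT)

# Each example pairs a Text-to-SQL prompt with a teacher-written structured
# trace (plan-style steps) and the final SQL; the format here is hypothetical.
examples = [
    {
        "prompt": "Schema: users(id, name, age)\nQuestion: Names of users older than 30?\n",
        "target": "Step 1: SCAN users\nStep 2: FILTER age > 30\nStep 3: PROJECT name\n"
                  "SQL: SELECT name FROM users WHERE age > 30;",
    },
]


def collate(batch):
    texts = [ex["prompt"] + ex["target"] + tokenizer.eos_token for ex in batch]
    enc = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    enc["labels"] = enc["input_ids"].clone()
    enc["labels"][enc["attention_mask"] == 0] = -100  # ignore padding in the loss
    return enc


loader = DataLoader(examples, batch_size=1, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for batch in loader:
    loss = model(**batch).loss  # standard causal-LM loss over the distilled trace + SQL
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In practice the prompt portion of each sequence would typically also be masked out of the loss and the loop run for multiple epochs over a large distilled corpus; the single-example loop above only illustrates the data flow.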