Cheaper, Better, Faster, Stronger: Robust Text-to-SQL without Chain-of-Thought or Fine-Tuning

arXiv cs.CL / 4/29/2026


Key Points

  • The paper addresses the high inference cost of state-of-the-art text-to-SQL methods that rely on Chain-of-Thought, self-consistency, and/or fine-tuning.
  • It proposes “N-rep” consistency, which improves robustness by using multiple representations of the same schema input rather than requiring extra reasoning calls.
  • N-rep achieves BIRD benchmark scores comparable to more expensive approaches while reducing average cost to about $0.039 per query.
  • The method avoids both reasoning-style techniques (no chain-of-thought) and fine-tuning, enabling the use of smaller, cheaper models.
  • According to the authors' experiments, N-rep is the best-performing text-to-SQL approach in its cost range.

Abstract

LLMs are effective at code generation tasks like text-to-SQL, but is it worth the cost? Many state-of-the-art approaches use non-task-specific LLM techniques including Chain-of-Thought (CoT), self-consistency, and fine-tuning. These methods can be costly at inference time, sometimes requiring over a hundred LLM calls with reasoning, incurring average costs of up to $0.46 per query, while fine-tuning models can cost thousands of dollars. We introduce "N-rep" consistency, a more cost-efficient text-to-SQL approach that achieves BIRD benchmark scores similar to those of more expensive methods, at only $0.039 per query. N-rep leverages multiple representations of the same schema input to mitigate weaknesses in any single representation, making the solution more robust and allowing the use of smaller and cheaper models without any reasoning or fine-tuning. To our knowledge, N-rep is the best-performing text-to-SQL approach in its cost range.
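The core idea — prompting once per schema representation and voting across the resulting candidates — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function names (`n_rep_consistency`, `generate_sql`) and the vote over normalized query strings are assumptions; the actual method may well vote over execution results instead.

```python
from collections import Counter

def n_rep_consistency(question, schema_representations, generate_sql):
    """Hypothetical sketch of N-rep consistency: generate one SQL candidate
    per schema representation, then pick the majority candidate.

    `generate_sql(question, rep)` stands in for a single LLM call; no
    chain-of-thought and no extra reasoning calls are involved."""
    candidates = [generate_sql(question, rep) for rep in schema_representations]
    # Normalize whitespace, case, and trailing semicolons so that
    # superficially different but equivalent strings vote together.
    normalized = [" ".join(q.split()).rstrip(";").lower() for q in candidates]
    winner, _ = Counter(normalized).most_common(1)[0]
    # Return the first original candidate matching the winning form.
    for cand, norm in zip(candidates, normalized):
        if norm == winner:
            return cand

# Toy stand-in for the LLM: a canned answer per representation format.
# A weak representation ("json" here) yields a wrong query, but the
# majority across representations still recovers the right one.
def fake_llm(question, rep):
    return {
        "ddl": "SELECT name FROM users WHERE age > 30",
        "markdown": "select name from users where age > 30;",
        "json": "SELECT email FROM users",
    }[rep]

result = n_rep_consistency("Who is over 30?", ["ddl", "markdown", "json"], fake_llm)
print(result)  # → SELECT name FROM users WHERE age > 30
```

The point of the vote is that each schema representation has different blind spots, so an error induced by one representation is unlikely to be reproduced by the others — robustness comes from the ensemble of inputs rather than from repeated reasoning over a single input.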