SLM Finetuning for Natural Language to Domain Specific Code Generation in Production

arXiv cs.LG / 4/14/2026


Key Points

  • The paper evaluates fine-tuning small language models (a few billion parameters) to improve natural-language-to-domain-specific code generation in production settings with strict latency constraints.
  • It reports that fine-tuned variants of Mistral and other models beat larger models on test datasets in both task accuracy and latency, while also mitigating issues like hallucinations and limited long-context retention.
  • Fine-tuning is positioned as a way to embed domain knowledge directly into model weights, reducing dependence on runtime context and potentially lowering system complexity versus retrieval-augmented generation.
  • The authors show that the resulting models can be further fine-tuned for customer-specific scenarios without degrading general performance, and they validate improvements through load testing and production deployment.
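The paper fine-tunes on a dataset of natural-language/code pairs, but does not publish the dataset or its exact serialization. As a minimal sketch, pairs like these are commonly wrapped in a chat-style instruction format and written out as JSONL for supervised fine-tuning; the example pairs and the toy DSL below are hypothetical:

```python
import json

# Hypothetical NL-to-DSL training pairs; the paper's actual dataset and DSL are not public.
pairs = [
    {"nl": "filter rows where amount exceeds 100", "dsl": "FILTER amount > 100"},
    {"nl": "sum the amount column", "dsl": "AGGREGATE SUM(amount)"},
]

def to_chat_record(pair: dict) -> dict:
    """Wrap one NL/code pair in a chat-style record, a common SFT input format."""
    return {
        "messages": [
            {"role": "user", "content": pair["nl"]},
            {"role": "assistant", "content": pair["dsl"]},
        ]
    }

# Serialize to JSONL: one training example per line.
jsonl = "\n".join(json.dumps(to_chat_record(p)) for p in pairs)
print(jsonl)
```

A file in this shape can be fed to most instruction-tuning toolchains with little or no adaptation, which is one reason the chat-record convention is popular for small-model fine-tuning.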

Abstract

Many applications today use large language models for code generation; however, production systems have strict latency requirements that can be difficult to meet with large models. Small language models with a few billion parameters are resource-efficient but may suffer from limited reasoning, hallucinations, or poor retention of longer context. Fine-tuning improves task-specific accuracy by embedding domain knowledge directly into model weights, reducing reliance on runtime context. We previously implemented a baseline natural-language-to-code generation approach using a retrieval-augmented generation pipeline that dynamically selected few-shot examples to embed domain-specific language context for a large language model. In this study, we evaluate small language models for generating domain-specific language from natural language by fine-tuning variants of Mistral and other models on a dataset of paired natural language and code examples. Our results show that the fine-tuned models achieve better accuracy and lower latency on test datasets than larger models. We also demonstrate that the trained model can be further fine-tuned for customer-specific scenarios without degrading general performance, helping resolve production issues. Load testing followed by production deployment confirmed strong performance in both latency and quality. These findings demonstrate that task-specific fine-tuning of small language models provides a faster, more efficient, and more cost-effective alternative to large language models for domain-specific language generation.
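The baseline pipeline described above dynamically selects few-shot examples for the prompt. The abstract does not specify the retrieval mechanism, so the sketch below assumes a simple lexical bag-of-words cosine similarity between the incoming query and stored NL/code pairs; the example bank and DSL syntax are invented for illustration:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def select_few_shot(query: str, examples: list, k: int = 2) -> list:
    """Rank (nl, code) pairs by lexical similarity to the query; return the top k."""
    q = Counter(query.lower().split())
    ranked = sorted(examples, key=lambda ex: cosine(q, Counter(ex[0].lower().split())), reverse=True)
    return ranked[:k]

# Hypothetical NL-to-DSL example bank.
bank = [
    ("filter rows where amount exceeds 100", "FILTER amount > 100"),
    ("sum the amount column", "AGGREGATE SUM(amount)"),
    ("join orders with customers on id", "JOIN orders customers ON id"),
]

shots = select_few_shot("show rows with amount greater than 100", bank, k=1)
prompt = "\n".join(f"NL: {nl}\nDSL: {code}" for nl, code in shots)
print(prompt)
```

A production system would more likely use dense embeddings from a retrieval model rather than word counts, but the control flow, ranking stored pairs against the query and splicing the winners into the prompt, is the same.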
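The abstract validates the deployed model via load testing. The paper's harness is not described, so the following is a minimal sketch of the idea, firing concurrent requests and reporting latency percentiles, against a stub `generate` function standing in for the served model endpoint:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def generate(prompt: str) -> str:
    """Stand-in for the model endpoint; a real test would call the served SLM over HTTP."""
    time.sleep(0.002)  # simulated inference latency
    return "FILTER amount > 100"  # placeholder DSL output

def load_test(prompts: list, concurrency: int = 8) -> dict:
    """Send prompts concurrently and report p50/p95 latency in milliseconds."""
    latencies = []
    def timed(p: str) -> None:
        t0 = time.perf_counter()
        generate(p)
        latencies.append((time.perf_counter() - t0) * 1000)  # list.append is thread-safe in CPython
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(timed, prompts))
    lat = sorted(latencies)
    return {
        "p50_ms": statistics.median(lat),
        "p95_ms": lat[int(0.95 * (len(lat) - 1))],
    }

stats = load_test([f"query {i}" for i in range(100)])
print(stats)
```

Tail percentiles (p95/p99) rather than means are what matter against strict production latency budgets, which is why the harness reports them directly.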