Compiling Deterministic Structure into SLM Harnesses
arXiv cs.AI / 4/21/2026
Key Points
- The paper targets enterprise deployment of small language models (SLMs), a setting where SLMs lack reliable self-correction and frontier models are too costly or ruled out by data-sovereignty constraints.
- It proposes Semantic Gradient Descent (SGDe), a teacher-student method that compiles agentic workflows into discrete execution plans (DAGs, system prompts, and deterministic code) rather than relying on stochastic training alone.
- SGDe uses a “frontier teacher” whose natural-language critiques act as directional gradients, iteratively refining the SLM’s workflow artifacts in a discrete semantic space (a minimal loop sketch follows this list).
- The authors formalize SGDe in a PAC learning framework and, by treating the teacher as a statistical prior, claim convergence with as few as three training examples on targeted synthetic tasks (see the illustrative bound after this list).
- Experiments on an adversarially synthesized benchmark derived from GSM-Hard show strong gains over prior prompt optimizers (up to 91.3% at m=5 and 99.3% at m=3), supported by two deterministic structures: capability offloading to a Python runtime and structural consensus via variance-limited reasoning subgraphs (both sketched below).
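
A minimal sketch of the critique-as-gradient loop described above. The paper’s actual interfaces are not given in this summary, so `frontier_critique`, `apply_critique`, and `evaluate` are hypothetical stand-ins for the teacher call, the discrete artifact edit, and the scoring pass; their placeholder bodies exist only to make the skeleton runnable.

```python
# Hypothetical sketch of Semantic Gradient Descent (SGDe): a frontier
# teacher emits natural-language critiques ("semantic gradients") that
# drive discrete edits to the SLM's compiled workflow artifacts.
# All names are illustrative, not the paper's API.

from dataclasses import dataclass

@dataclass
class WorkflowArtifacts:
    dag: dict          # execution plan: node -> list of upstream nodes
    system_prompt: str # instructions compiled into the SLM harness
    code: str          # deterministic helper code (e.g., Python tools)

def evaluate(artifacts, examples):
    """Run the harness on the (tiny) training set; return (score, failures).
    Placeholder body so the skeleton executes."""
    return 1.0, []

def frontier_critique(artifacts, failures):
    """Ask the frontier teacher for a directional, natural-language
    critique of the current artifacts given the observed failures.
    Placeholder body standing in for a frontier-model call."""
    return "tighten the final-answer format"

def apply_critique(artifacts, critique):
    """Apply the critique as a discrete edit in semantic space, e.g.
    rewrite a DAG node, patch the prompt or helper code. Placeholder."""
    return artifacts

def sgde(artifacts, examples, max_steps=10, target=1.0):
    best, best_score = artifacts, float("-inf")
    for _ in range(max_steps):
        score, failures = evaluate(artifacts, examples)
        if score > best_score:
            best, best_score = artifacts, score
        if score >= target or not failures:
            break  # perfect on the training set: stop descending
        critique = frontier_critique(artifacts, failures)
        artifacts = apply_critique(artifacts, critique)
    return best
```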
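The three-example claim can be read against the textbook PAC sample-complexity bound for a finite hypothesis class in the realizable case; the block below is that standard bound, not the paper’s own derivation. The intuition attributed to SGDe is that the teacher prior shrinks the effective hypothesis class.

```latex
% Standard PAC sample complexity, finite hypothesis class, realizable
% case -- illustrative context only, not the paper's own bound.
m \;\ge\; \frac{1}{\varepsilon}\left( \ln|\mathcal{H}| + \ln\frac{1}{\delta} \right)
```

Since m grows only logarithmically in |H|, a strong prior that collapses the space of candidate plans and prompts to a small set buys a correspondingly large reduction in the number of training examples needed.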
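Below is a compressed sketch of the two deterministic structures named in the last point, under stated assumptions: offloading arithmetic the SLM is unreliable at to the Python runtime, and a majority-vote consensus over several reasoning runs. Treating the paper’s “variance-limited reasoning subgraphs” as independent sampled paths is our simplification, and `run_offloaded` and `consensus` are illustrative names, not the paper’s implementation.

```python
# Illustrative sketch, not the paper's implementation: (1) offload
# computation to a deterministic Python runtime, (2) take a structural
# consensus (here: simple majority) over several reasoning runs to
# suppress stochastic variance.

import ast
from collections import Counter

def run_offloaded(code: str) -> str:
    """Execute model-emitted arithmetic deterministically. A real
    harness would sandbox this; here we at least refuse anything
    that is not a single expression."""
    tree = ast.parse(code, mode="eval")  # raises on statements/imports
    return str(eval(compile(tree, "<slm>", "eval"), {"__builtins__": {}}))

def consensus(answers: list[str]) -> str:
    """Majority vote across reasoning runs; ties go to the first seen."""
    return Counter(answers).most_common(1)[0][0]

# Example: the SLM emits expressions instead of computing in-token,
# so the runtime, not the model, produces the final numbers.
runs = ["(17*24)-5", "17*24-5", "(17 * 24) - 5"]
print(consensus([run_offloaded(expr) for expr in runs]))  # -> "403"
```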