Long-Document QA with Chain-of-Structured-Thought and Fine-Tuned SLMs
arXiv cs.CL / 4/1/2026
Key Points
- The paper proposes LiteCoST, a framework for long-document question answering that consolidates evidence dispersed across the document into a structured, auditable intermediate output such as a table, a graph, or aligned text chunks.
- It introduces Chain-of-Structured-Thought (CoST), a schema-aware prompting template that guides a stronger LLM to generate both a step-wise reasoning trace and the corresponding structured output, including normalization, alignment, and verification/refinement.
- LiteCoST uses two-stage fine-tuning of small language models (SLMs) on LLM-generated CoST data: supervised fine-tuning for structural alignment followed by GRPO with multiple rewards for answer/format quality and process consistency.
- Experiments report LLM-comparable accuracy on multi-domain long-document QA using 3B/7B SLMs, with 2–4x lower latency than GPT-4o and DeepSeek-R1 (671B).
- The authors provide code via the referenced GitHub repository to enable reproduction and further experimentation.
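As a rough illustration of the multi-reward GRPO stage described above, the sketch below combines answer quality, format compliance, and process consistency into a single scalar reward. Everything here is an assumption for illustration: the function names, the `<think>` trace tags, the fenced-JSON structure convention, and the reward weights are not taken from the paper.

```python
import json
import re

def format_reward(completion: str) -> float:
    """1.0 if the completion has both a reasoning trace and a fenced
    JSON structure block (assumed output convention), else 0.0."""
    has_trace = "<think>" in completion and "</think>" in completion
    has_struct = re.search(r"```json\s*\{.*\}\s*```", completion, re.DOTALL)
    return 1.0 if has_trace and has_struct else 0.0

def answer_reward(completion: str, gold: str) -> float:
    """1.0 on exact (case-insensitive) match of the final answer line."""
    m = re.search(r"Answer:\s*(.+)", completion)
    if m and m.group(1).strip().lower() == gold.strip().lower():
        return 1.0
    return 0.0

def consistency_reward(completion: str) -> float:
    """Process-consistency proxy: fraction of cells in the structured
    table that appear verbatim in the reasoning trace."""
    trace = re.search(r"<think>(.*?)</think>", completion, re.DOTALL)
    struct = re.search(r"```json\s*(\{.*\})\s*```", completion, re.DOTALL)
    if not trace or not struct:
        return 0.0
    try:
        table = json.loads(struct.group(1))
    except json.JSONDecodeError:
        return 0.0
    cells = [str(v) for row in table.get("rows", []) for v in row]
    if not cells:
        return 0.0
    return sum(1 for c in cells if c in trace.group(1)) / len(cells)

def cost_reward(completion: str, gold: str,
                w_fmt: float = 0.2, w_ans: float = 0.6,
                w_con: float = 0.2) -> float:
    """Weighted sum of the three terms, fed to GRPO as the scalar reward."""
    return (w_fmt * format_reward(completion)
            + w_ans * answer_reward(completion, gold)
            + w_con * consistency_reward(completion))
```

A completion that produces a trace, a structured table whose cells are grounded in the trace, and a matching final answer would score 1.0 under these (assumed) weights; missing structure or an ungrounded table drives the reward down.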