Illocutionary Explanation Planning for Source-Faithful Explanations in Retrieval-Augmented Language Models
arXiv cs.CL / 4/9/2026
Key Points
- The paper evaluates the source faithfulness and traceability of LLM-generated explanations in retrieval-augmented generation (RAG) for programming education, using 90 Stack Overflow questions grounded in three textbooks and benchmarking six LLMs with source-adherence metrics (one plausible operationalization is sketched after this list).
- Results show that non-RAG models have 0% median source adherence, while baseline RAG achieves only modest median adherence (22–40%), indicating explanations often remain only partially grounded in the cited sources.
- Building on illocutionary theory, the authors propose illocutionary macro-planning and implement it as chain-of-illocution prompting (CoI), which decomposes a query into its implicit explanatory sub-questions to drive retrieval more precisely (see the pipeline sketch after this list).
- CoI produces statistically significant improvements in source adherence for most models (up to 63%), though absolute adherence remains moderate and some models see weak or non-significant gains.
- A user study (165 retained participants) finds that improved source adherence does not reduce user satisfaction, relevance, or perceived correctness, supporting the practical value of the prompting approach.
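The summary names source-adherence metrics but does not define them. One common way to operationalize source adherence is the share of explanation sentences traceable to at least one retrieved passage; the sketch below uses a simple word-overlap rule as the matching criterion, which is an assumption for illustration and may differ from the paper's actual metric.

```python
# Hypothetical source-adherence score: the fraction of explanation
# sentences that can be traced to at least one retrieved passage.
# The word-overlap matching rule is an illustrative assumption,
# not the paper's exact metric.
def _overlap(sentence: str, passage: str) -> float:
    """Fraction of the sentence's words that also appear in the passage."""
    s_words = {w.lower().strip(".,;:!?") for w in sentence.split()}
    p_words = {w.lower().strip(".,;:!?") for w in passage.split()}
    return len(s_words & p_words) / max(len(s_words), 1)

def source_adherence(explanation_sentences, retrieved_passages, threshold=0.6):
    """Share of explanation sentences supported by some retrieved passage."""
    supported = sum(
        1
        for sent in explanation_sentences
        if any(_overlap(sent, p) >= threshold for p in retrieved_passages)
    )
    return supported / max(len(explanation_sentences), 1)
```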
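The CoI idea in the third bullet, making a query's implicit explanatory sub-questions explicit before retrieval, can be pictured as a three-stage pipeline: plan the sub-questions, retrieve passages for each, then generate an answer constrained to those passages. The sketch below is a minimal illustration; `llm` and `retrieve` are hypothetical stand-ins for a chat-model call and a textbook-passage retriever, and the prompt wording is invented rather than the paper's template.

```python
# Minimal sketch of chain-of-illocution (CoI) prompting in a RAG pipeline.
# `llm(prompt)` and `retrieve(query, k)` are hypothetical stand-ins; the
# prompts are illustrative, not the paper's exact templates.
def chain_of_illocution_answer(question: str, llm, retrieve, k: int = 3) -> str:
    # Stage 1: make the implicit explanatory sub-questions explicit.
    plan_prompt = (
        "A learner asked the programming question below. List the implicit "
        "sub-questions (e.g. what the concept is, why the behavior occurs, "
        "how to fix it) that a full explanation must answer, one per line.\n\n"
        f"Question: {question}"
    )
    sub_questions = [q.strip() for q in llm(plan_prompt).splitlines() if q.strip()]

    # Stage 2: retrieve passages per sub-question, not just for the surface
    # query, so each part of the explanation has candidate sources.
    passages = []
    for sq in sub_questions:
        passages.extend(retrieve(sq, k))

    # Stage 3: generate the explanation, constrained to the retrieved text.
    answer_prompt = (
        "Answer the learner's question using ONLY the passages below, and "
        "cite the passage you use for each claim.\n\n"
        f"Question: {question}\n\nPassages:\n"
        + "\n".join(f"[{i}] {p}" for i, p in enumerate(passages))
    )
    return llm(answer_prompt)
```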