Demonstrations, CoT, and Prompting: A Theoretical Analysis of ICL
arXiv cs.LG / March 23, 2026
Key Points
- The paper provides a theoretical analysis of In-Context Learning (ICL) under mild assumptions, linking demonstration design, Chain-of-Thought prompting, the number of demonstrations, and prompt templates to generalization.
- It derives an upper bound on the ICL test loss, showing that performance depends on the quality of the demonstrations (quantified via Lipschitz properties), the model's intrinsic ICL capability, and the degree of distribution shift; a schematic form of such a bound is sketched after this list.
- It analyzes Chain-of-Thought prompting as a form of task decomposition, beneficial when the demonstrations are well chosen for each substep and the subtasks are easier to learn than the original task (see the second sketch below).
- It discusses how ICL's sensitivity to prompt templates varies with the number of demonstrations, and presents experiments that corroborate the theoretical insights.
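
To make the shape of the bound concrete, here is a minimal LaTeX sketch of a plausible three-term decomposition. This is illustrative only: the symbols ε_ICL (intrinsic ICL capability), L_φ (a Lipschitz constant standing in for demonstration quality), d(·,·) (a demonstration-to-query mismatch), and Δ_shift are hypothetical stand-ins for the three factors named above, not the paper's actual notation or constants.

```latex
% Hedged sketch of the bound's qualitative form; every symbol here is
% an illustrative stand-in, not the paper's statement or constants.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
  \mathcal{L}_{\text{test}}(\text{ICL})
  \;\le\;
  \underbrace{\varepsilon_{\text{ICL}}}_{\text{intrinsic ICL capability}}
  \;+\;
  \underbrace{L_{\phi}\, d\bigl(\mathcal{D}_{\text{demo}},\,\mathcal{D}_{\text{query}}\bigr)}_{\text{demonstration quality (Lipschitz)}}
  \;+\;
  \underbrace{\Delta_{\text{shift}}}_{\text{distribution shift}}
\]
\end{document}
```

Read this way, better-chosen demonstrations shrink the middle term, a more capable model shrinks the first, and only the last term survives under severe distribution shift.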
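
The task-decomposition view of CoT can be sketched in the same style as a per-substep sum; the number of substeps k and the per-substep quantities below are assumptions for exposition, meant only to show why CoT helps when each subtask is easier to learn and its demonstrations are well matched.

```latex
% Hedged illustration of CoT as task decomposition; k and the
% per-substep quantities are assumptions, not the paper's notation.
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
\[
  \mathcal{L}_{\text{test}}(\text{CoT})
  \;\lesssim\;
  \sum_{i=1}^{k}
  \Bigl(
    \varepsilon^{(i)}_{\text{ICL}}
    \;+\;
    L^{(i)}\, d\bigl(\mathcal{D}^{(i)}_{\text{demo}},\,\mathcal{D}^{(i)}_{\text{query}}\bigr)
  \Bigr)
\]
\end{document}
```

Each summand mirrors the single-task bound above, so the sum is small exactly when every subtask is easy to learn in context and has well-matched demonstrations.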