From Context to Skills: Can Language Models Learn from Context Skillfully?
arXiv cs.AI / 5/1/2026
Key Points
- Many real-world language-model tasks require learning and reasoning over long, complex contexts that go beyond the model’s fixed parametric knowledge, motivating “context learning.”
- The paper proposes Ctx2Skill, which performs inference-time skill augmentation by autonomously discovering, refining, and selecting context-specific natural-language skills without human supervision or external feedback.
- Ctx2Skill uses a multi-agent self-play loop (Challenger/Reasoner with a neutral Judge) plus Proposer/Generator components that analyze failures and turn them into targeted skill updates for both sides.
- To maintain robustness and avoid adversarial collapse or over-specialization, it introduces a Cross-time Replay mechanism that selects skill sets providing the best balance across representative cases.
- Experiments on four CL-bench context-learning tasks show that the learned skills plug into multiple backbone language models and consistently improve solve rates.
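The loop the key points describe can be sketched in miniature. This is a toy illustration, not the paper's implementation: the agent roles (`challenger`, `reasoner`, `judge`, `proposer`), the toy doubling task, and the replay scoring are all assumptions made to show the control flow of failure-driven skill updates followed by Cross-time Replay selection.

```python
from dataclasses import dataclass

# Toy sketch of a Ctx2Skill-style self-play loop. All names and logic
# here are illustrative assumptions, not the paper's actual code.

@dataclass
class SkillSet:
    skills: list  # natural-language skill strings

def challenger(round_idx):
    """Stub Challenger: propose a task (here, a toy doubling problem)."""
    return {"question": round_idx, "answer": round_idx * 2}

def reasoner(task, skill_set):
    """Stub Reasoner: answer using current skills."""
    if "double the input" in skill_set.skills:
        return task["question"] * 2
    return task["question"]  # naive guess before the skill is learned

def judge(task, prediction):
    """Neutral Judge: exact-match check."""
    return prediction == task["answer"]

def proposer(task, skill_set):
    """Stub Proposer: turn an observed failure into a targeted skill update."""
    return SkillSet(skills=skill_set.skills + ["double the input"])

def cross_time_replay(candidates, replay_cases):
    """Keep the skill set with the best average solve rate on replayed cases,
    rather than the most recent one, to avoid over-specialization."""
    def score(ss):
        return sum(judge(t, reasoner(t, ss)) for t in replay_cases) / len(replay_cases)
    return max(candidates, key=score)

def train(rounds=3):
    history = [SkillSet(skills=[])]          # skill sets across time
    for r in range(1, rounds + 1):
        task = challenger(r)
        current = history[-1]
        if not judge(task, reasoner(task, current)):
            history.append(proposer(task, current))  # failure -> skill update
    replay = [challenger(r) for r in range(1, rounds + 1)]
    return cross_time_replay(history, replay)

best = train()
print(best.skills)  # -> ['double the input']
```

The point of the sketch is the shape of the loop: skills only change when the Judge flags a failure, and the final selection replays representative cases against every skill set in the history instead of trusting the last one.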