When Context Sticks: Studying Interference in In-Context Learning
arXiv cs.LG / 4/28/2026
💬 Opinion · Models & Research
Key Points
- The paper studies “context stickiness” in in-context learning, where earlier prompt examples can continue to bias a transformer’s predictions for later tasks.
- Using synthetic regression benchmarks with linear-to-quadratic task switches, the authors measure how misleading context inflates prediction error and how quickly models recover after the switch (see the sketch after this list).
- Results show persistent interference: adding more misleading linear examples from before the switch consistently worsens quadratic prediction quality, while adding more correct quadratic examples helps but with diminishing returns.
- The study finds that the training curriculum strongly shapes robustness: sequential training on the target function class enables the fastest recovery, while random training yields the least resilient behavior.