From Plan to Action: How Well Do Agents Follow the Plan?
arXiv cs.CL / 4/15/2026
Key Points
- Programming agents (e.g., SWE-agent) are often instructed to follow a task-specific plan, but the paper argues it has been unclear how reliably they comply, and how that compliance affects whether solutions are reached via correct reasoning.
- An extensive evaluation using 16,991 trajectories across four LLMs on SWE-bench Verified and SWE-bench Pro tests eight plan variations and finds that omitting a plan leads agents to revert to internally learned workflows that can be incomplete, overfit, or inconsistently applied.
- Providing the standard plan improves issue resolution, while periodic plan reminders can reduce plan violations and increase task success.
- A key result is that even a poor plan can hurt performance more than providing no plan at all, and adding extra early phases can degrade outcomes when they conflict with the model’s internal problem-solving strategy.
- The findings suggest a research gap: instead of relying on encoded task plans, future fine-tuning should teach adaptive reason-and-act plan following to prevent memorized or misaligned workflow behavior.
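The "periodic plan reminder" intervention from the key points above can be sketched as a small piece of agent-loop scaffolding: every few turns, the task plan is re-injected into the message history so the model does not drift back to its internally learned workflow. This is a minimal illustration only; the names (`build_messages`, `PLAN`, `REMINDER_INTERVAL`) are hypothetical and not taken from the paper's implementation.

```python
# Hypothetical sketch: re-inject the task plan every few agent turns
# to reduce plan violations. Not the paper's actual code.

PLAN = (
    "1. Reproduce the issue.\n"
    "2. Locate the faulty code.\n"
    "3. Apply a fix.\n"
    "4. Re-run the tests."
)
REMINDER_INTERVAL = 5  # turns between reminders (assumed value)

def build_messages(history, turn):
    """Return this turn's message list, appending a plan reminder
    whenever the turn count hits the reminder interval."""
    messages = list(history)
    if turn > 0 and turn % REMINDER_INTERVAL == 0:
        messages.append({
            "role": "system",
            "content": f"Reminder - follow the plan:\n{PLAN}",
        })
    return messages
```

Under this framing, the paper's finding that reminders help is intuitive: the reminder keeps the externally supplied plan salient in the context window, competing with whatever workflow the model memorized during training.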