From Plan to Action: How Well Do Agents Follow the Plan?

arXiv cs.CL · April 15, 2026


Key Points

  • Programming agents (e.g., SWE-agent) are often instructed to follow a task-specific plan, but it has been unclear how reliably they comply and, consequently, whether their solutions are reached through correct strategic reasoning.
  • An extensive evaluation of 16,991 SWE-agent trajectories across four LLMs on SWE-bench Verified and SWE-bench Pro tests eight plan variations and finds that omitting a plan leads agents to fall back on internally learned workflows, which can be incomplete, overfit, or inconsistently applied.
  • Providing the standard plan improves issue resolution, while periodic plan reminders can reduce plan violations and increase task success.
  • A key result is that even a poor plan can hurt performance more than providing no plan at all, and adding extra early phases can degrade outcomes when they conflict with the model’s internal problem-solving strategy.
  • The findings suggest a research gap: instead of relying on encoded task plans, future fine-tuning should teach adaptive reason-and-act plan following to prevent memorized or misaligned workflow behavior.
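The "periodic plan reminder" finding can be illustrated with a minimal sketch. This is not the paper's actual implementation; the plan text, function names, and the reminder interval are all illustrative assumptions. The idea is simply to re-inject the instructed plan into the agent's context every few steps so it stays salient:

```python
# Hypothetical sketch of periodic plan reminders in an agent loop.
# The LLM call itself is omitted; only the context construction is shown.

PLAN = (
    "1. Navigate the repository to locate relevant code.\n"
    "2. Reproduce the reported issue.\n"
    "3. Write a patch.\n"
    "4. Validate the patch against the reproduction."
)

def build_messages(history, step, reminder_every=5):
    """Return the message list for the next agent step, appending the
    plan as a system reminder every `reminder_every` steps."""
    messages = [{"role": "system", "content": f"Follow this plan:\n{PLAN}"}]
    messages.extend(history)
    if step > 0 and step % reminder_every == 0:
        messages.append({
            "role": "system",
            "content": f"Reminder -- continue following the plan:\n{PLAN}",
        })
    return messages

# At a reminder step, the last message re-states the plan.
history = [{"role": "assistant", "content": f"step {i}"} for i in range(5)]
msgs = build_messages(history, step=5)
print(msgs[-1]["content"].startswith("Reminder"))  # True
```

The interval is a tunable trade-off: reminding too often consumes context and may interrupt the model's own reasoning, while reminding too rarely allows the drift into internalized workflows that the paper observes.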

Abstract

Agents aspire to eliminate the need for task-specific prompt crafting through autonomous reason-act-observe loops. Still, they are commonly instructed to follow a task-specific plan for guidance, e.g., to resolve software issues by following phases for navigation, reproduction, patching, and validation. Unfortunately, it is unknown to what extent agents actually follow such instructed plans. Without an analysis of the extent to which agents comply with a given plan, it is impossible to assess whether a solution was reached through correct strategic reasoning or through other means, e.g., data contamination or overfitting to a benchmark. This paper presents the first extensive, systematic analysis of plan compliance in programming agents, examining 16,991 trajectories from SWE-agent across four LLMs on SWE-bench Verified and SWE-bench Pro under eight plan variations. Without an explicit plan, agents fall back on workflows internalized during training, which are often incomplete, overfit, or inconsistently applied. Providing the standard plan improves issue resolution, and periodic plan reminders can mitigate plan violations and improve task success. A subpar plan hurts performance even more than no plan at all. Surprisingly, augmenting a plan with additional task-relevant phases in the early stage can degrade performance, particularly when these phases do not align with the model's internal problem-solving strategy. These findings highlight a research gap: fine-tuning paradigms that teach models to follow instructed plans, rather than encoding task-specific plans in them. This requires teaching models to reason and act adaptively, rather than memorizing workflows.