Self-Guided Plan Extraction for Instruction-Following Tasks with Goal-Conditional Reinforcement Learning

arXiv cs.AI / 4/23/2026


Key Points

  • The paper introduces SuperIgor, a framework for instruction-following tasks that lets a language model generate and iteratively refine high-level plans without relying on predefined subtasks.
  • It uses iterative co-training with a goal-conditional RL agent: the RL agent learns to follow the generated plans, while the language model adapts and modifies the plans using RL feedback and preference signals.
  • By replacing much of the manual dataset annotation with self-generated plans, the approach aims to reduce annotation overhead for instruction-following benchmarks.
  • Experiments in complex, stochastic environments show improved instruction adherence versus baseline methods and strong generalization to previously unseen instructions.

Abstract

We introduce SuperIgor, a framework for instruction-following tasks. Unlike prior methods that rely on predefined subtasks, SuperIgor enables a language model to generate and refine high-level plans through a self-learning mechanism, reducing the need for manual dataset annotation. Our approach involves iterative co-training: an RL agent is trained to follow the generated plans, while the language model adapts and modifies these plans based on RL feedback and preferences. This creates a feedback loop where both the agent and the planner improve jointly. We validate our framework in environments with rich dynamics and stochasticity. Results show that SuperIgor agents adhere to instructions more strictly than baseline methods, while also demonstrating strong generalization to previously unseen instructions.
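The co-training loop described above can be sketched in miniature. The code below is a hypothetical illustration, not the paper's implementation: the language model is stood in for by a toy `Planner` that keeps a preference score per candidate plan, and the goal-conditional RL agent is stood in for by a fixed `agent_reward` function. All names (`Planner`, `agent_reward`, `co_train`) and the numeric values are assumptions made for the sketch; the point is only the feedback loop, where the planner proposes a plan, the agent's success at following it produces a reward, and that reward shifts the planner's future proposals.

```python
import random

class Planner:
    """Toy stand-in for the language model: proposes and refines high-level plans."""

    def __init__(self, candidate_plans):
        # Preference score per plan, updated from agent feedback.
        self.scores = {plan: 0.0 for plan in candidate_plans}

    def propose(self, explore=0.0):
        # With probability `explore`, try a random plan; otherwise pick the
        # currently preferred one (epsilon-greedy, a simplification of the
        # paper's preference-based refinement).
        if random.random() < explore:
            return random.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)

    def update(self, plan, reward):
        # Move the plan's score toward the observed reward (step size 0.5).
        self.scores[plan] += 0.5 * (reward - self.scores[plan])

def agent_reward(plan):
    """Toy stand-in for the RL agent: how well it can follow each plan.

    In the paper this would be the agent's return after being trained to
    follow the generated plan; here it is a fixed, made-up lookup table.
    """
    return {"plan_a": 0.2, "plan_b": 0.9, "plan_c": 0.5}[plan]

def co_train(planner, rounds=200):
    for _ in range(rounds):
        plan = planner.propose(explore=0.2)  # planner generates a plan
        reward = agent_reward(plan)          # agent attempts to follow it
        planner.update(plan, reward)         # feedback closes the loop
    return planner.propose()                 # best plan after co-training

random.seed(0)
planner = Planner(["plan_a", "plan_b", "plan_c"])
best = co_train(planner)
print(best)
```

Even in this toy form, the loop exhibits the framework's key property: neither component needs a manually annotated plan dataset, because the agent's own performance is the training signal for the planner.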