The Essence of Balance for Self-Improving Agents in Vision-and-Language Navigation

arXiv cs.CV / 4/22/2026

📰 News · Models & Research

Key Points

  • Vision-and-language navigation (VLN) agents can self-improve from policy-induced experience, but this only works when behavioral diversity and learning stability are carefully balanced to produce a reliable learning signal.
  • Simply increasing diversity can destabilize learning signals, while overly strict stability constraints reduce exploration, causing early commitment that harms reliable self-improvement.
  • The paper introduces Stability-Diversity Balance (SDB), a plug-and-play method that generates multiple latent behavioral hypotheses per decision step by applying controlled shifts to instruction-conditioned hidden states.
  • SDB uses reliability-aware soft evaluation and aggregation, plus an explicit regularizer that prevents hypothesis drift or collapse, stabilizing self-improvement without discarding training signals.
  • Experiments on R2R, SOON, and REVERIE show consistent gains, including on REVERIE val-unseen where SPL increases from 33.73 to 35.93 and OSR from 51.07 to 54.25.
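The core loop described above can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the function name, the Gaussian shift noise, and the cosine-similarity stand-in for the learned reliability score are all illustrative assumptions; only the structure (expand a hidden state into several shifted hypotheses, score each for instruction consistency, then softly aggregate) follows the description.

```python
import numpy as np

def sdb_expand_and_aggregate(hidden, instr, num_hyp=4, shift_scale=0.1,
                             temp=1.0, seed=0):
    """Sketch of SDB-style hypothesis expansion and soft aggregation.

    hidden: instruction-conditioned hidden state, shape (d,)
    instr:  instruction embedding used as a reliability reference, shape (d,)
    """
    rng = np.random.default_rng(seed)
    d = hidden.shape[0]

    # Controlled shifts: small perturbations of the hidden state act as
    # alternative latent behavioral hypotheses (Gaussian noise is an
    # assumption; the paper's shifts may be learned).
    shifts = shift_scale * rng.standard_normal((num_hyp, d))
    hypotheses = hidden[None, :] + shifts                     # (K, d)

    # Reliability-aware soft evaluation: score each hypothesis by agreement
    # with the instruction embedding (cosine similarity here is a stand-in
    # for the paper's reliability measure).
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    scores = np.array([cos(h, instr) for h in hypotheses])

    # Soft aggregation: softmax weights keep all hypotheses in play,
    # weighted by reliability, rather than hard-selecting a single one.
    w = np.exp(scores / temp)
    w /= w.sum()
    aggregated = w @ hypotheses                               # (d,)
    return aggregated, w
```

Because the aggregation is soft, low-reliability hypotheses are down-weighted rather than discarded, which is what lets diverse alternatives survive in the learning signal.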

Abstract

In vision-and-language navigation (VLN), self-improvement from policy-induced experience, using only standard VLN action supervision, critically depends on balancing behavioral diversity and learning stability, which governs whether the agent can extract a reliable learning signal for improvement. Increasing behavioral diversity is necessary to expose alternative action hypotheses but can destabilize policy-induced learning signals, whereas overly conservative stability constraints suppress exploration and induce early commitment, making reliable self-improvement difficult. To address this challenge, we propose Stability-Diversity Balance (SDB), a plug-and-play mechanism for balanced self-improvement in VLN. SDB expands each decision step into multiple latent behavioral hypotheses by applying controlled shifts in the instruction-conditioned hidden states, and then performs reliability-aware soft evaluation and aggregation to retain diverse yet instruction-consistent alternatives during learning. An explicit regularizer further constrains hypothesis interactions, preventing excessive drift or premature collapse of hypothesis diversity and stabilizing self-improvement without discarding training signals. Experiments on R2R, SOON, and REVERIE show consistent improvements; for example, on REVERIE val-unseen, SDB improves SPL from 33.73 to 35.93 and OSR from 51.07 to 54.25.
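The abstract's "explicit regularizer" that prevents both excessive drift and premature collapse can be read as balancing two opposing penalties. The sketch below is one plausible instantiation under that reading, not the paper's actual loss: a drift term pulls hypotheses back toward the anchor hidden state, while a (negative) pairwise-distance term pushes them apart to preserve diversity.

```python
import numpy as np

def sdb_regularizer(hypotheses, hidden, drift_coef=1.0, collapse_coef=1.0):
    """Hypothetical drift/collapse regularizer for a set of hypotheses.

    hypotheses: (K, d) latent behavioral hypotheses
    hidden:     (d,) anchor instruction-conditioned hidden state
    """
    # Drift term: mean squared distance from the anchor state; grows when
    # hypotheses wander too far from the instruction-conditioned state.
    drift = np.mean(np.sum((hypotheses - hidden[None, :]) ** 2, axis=1))

    # Collapse term: negative mean pairwise distance; approaches 0 as the
    # hypotheses become identical, so minimizing the total loss rewards
    # keeping them spread apart.
    K = hypotheses.shape[0]
    pair, n = 0.0, 0
    for i in range(K):
        for j in range(i + 1, K):
            pair += np.sum((hypotheses[i] - hypotheses[j]) ** 2)
            n += 1
    collapse = -pair / max(n, 1)

    return drift_coef * drift + collapse_coef * collapse
```

Tuning `drift_coef` against `collapse_coef` is exactly the stability-diversity trade-off the paper targets: a large drift weight reproduces the over-conservative regime, while a large collapse weight risks destabilizing the learning signal.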