Belief-State RWKV for Reinforcement Learning under Partial Observability

arXiv cs.LG / 4/14/2026


Key Points

  • The paper proposes “Belief-State RWKV” for reinforcement learning under partial observability by interpreting the RWKV recurrent state as an explicit belief state rather than an opaque hidden vector.
  • It replaces conditioning on a single hidden summary h_t with an uncertainty-aware belief state b_t = (μ_t, Σ_t), so the policy/value can use both memory and estimated confidence.
  • The approach targets a limitation of fixed-state recurrent policies: they may accumulate evidence but do not necessarily represent how confident that evidence is.
  • The authors include a theoretical program and a pilot RL experiment using hidden episode-level observation noise plus test-time noise sweeps.
  • Results indicate that belief-state policies nearly match the strongest recurrent baseline overall and improve returns both in the hardest in-distribution regime and under a held-out noise shift; ablations further suggest that this simple belief readout currently beats more structured extensions such as gated memory control and privileged belief targets.
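The belief readout described above can be sketched in a few lines. The following is a minimal illustration only, not the authors' implementation: the RWKV recurrence is stubbed out as a simple linear decay-and-mix update, Σ_t is kept diagonal, and the smoothing rate `alpha` is an assumed hyperparameter.

```python
import numpy as np

rng = np.random.default_rng(0)

class BeliefStateReadout:
    """Track b_t = (mu_t, Sigma_t) as running first/second moments of the
    recurrent state, so control can condition on memory *and* confidence.
    Sketch only: diagonal Sigma_t via exponential moving averages."""

    def __init__(self, dim, alpha=0.1):
        self.alpha = alpha
        self.mu = np.zeros(dim)   # belief mean mu_t
        self.var = np.ones(dim)   # diagonal of Sigma_t

    def update(self, h):
        # EMA estimates of the mean and variance of the hidden state h_t.
        self.mu = (1 - self.alpha) * self.mu + self.alpha * h
        dev = h - self.mu
        self.var = (1 - self.alpha) * self.var + self.alpha * dev**2
        # b_t, fed to the policy/value heads in place of h_t alone.
        return np.concatenate([self.mu, self.var])

# Stub RWKV-style recurrence (illustrative, not the real RWKV kernel):
# h_{t+1} = decay * h_t + W @ obs_t
dim, obs_dim = 8, 4
W = rng.normal(size=(dim, obs_dim))
decay = 0.9
h = np.zeros(dim)
belief = BeliefStateReadout(dim)

for t in range(100):
    obs = rng.normal(size=obs_dim)
    h = decay * h + W @ obs
    b_t = belief.update(h)

print(b_t.shape)  # b_t concatenates mu_t and diag(Sigma_t): 2 * dim entries
```

The key design point is that `b_t` carries not just accumulated evidence (`mu`) but also an explicit confidence signal (`var`), which is exactly the information a fixed-state recurrent policy is not guaranteed to represent.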

Abstract

We propose a stronger formulation of RL on top of RWKV-style recurrent sequence models, in which the fixed-size recurrent state is explicitly interpreted as a belief state rather than an opaque hidden vector. Instead of conditioning policy and value on a single summary h_t, we maintain a compact uncertainty-aware state b_t = (μ_t, Σ_t) derived from RWKV-style recurrent statistics and let control depend on both memory and uncertainty. This design targets a key weakness of plain fixed-state policies in partially observed settings: they may store evidence, but not necessarily confidence. We present the method, a theoretical program, and a pilot RL experiment with hidden episode-level observation noise together with a test-time noise sweep. The pilot shows that belief-state policies nearly match the best recurrent baseline overall while slightly improving return on the hardest in-distribution regime and under a held-out noise shift. Additional ablations show that this simple belief readout is currently stronger than two more structured extensions, namely gated memory control and privileged belief targets, underscoring the need for richer benchmarks.
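The evaluation protocol in the abstract (hidden episode-level observation noise plus a test-time noise sweep) can be sketched as follows. This is a toy sketch, not the paper's benchmark: the environment dynamics, reward, the noise levels in `TRAIN_NOISE_LEVELS`, and the `HELD_OUT_NOISE` value are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

TRAIN_NOISE_LEVELS = [0.0, 0.1, 0.3]  # assumed in-distribution sigmas
HELD_OUT_NOISE = 0.5                  # assumed held-out shift for the sweep

def run_episode(policy, sigma, length=50, state_dim=3):
    """Latent state evolves cleanly; the agent only sees noisy observations.
    sigma is fixed for the whole episode but hidden from the agent."""
    s = np.zeros(state_dim)
    ret = 0.0
    for _ in range(length):
        obs = s + rng.normal(scale=sigma, size=state_dim)  # partial observability
        a = policy(obs)
        s = 0.95 * s + a                  # toy linear dynamics
        ret += -np.abs(s).sum()           # toy reward: keep the state near zero
    return ret

def noise_sweep(policy, episodes=20):
    """Average return at each training noise level plus the held-out level."""
    return {sigma: np.mean([run_episode(policy, sigma) for _ in range(episodes)])
            for sigma in TRAIN_NOISE_LEVELS + [HELD_OUT_NOISE]}

# Trivial proportional policy, standing in for a trained recurrent agent.
results = noise_sweep(lambda obs: -0.1 * obs)
```

Sweeping past the training range is what separates the two claims in the abstract: matching the recurrent baseline in-distribution versus degrading more gracefully under the held-out noise shift.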