Diagnosing Non-Markovian Observations in Reinforcement Learning via Prediction-Based Violation Scoring

arXiv cs.LG · March 31, 2026


Key Points

  • The paper addresses how real-world RL observations often violate the Markov property due to factors like correlated noise, latency, and partial observability, which standard metrics fail to diagnose separately from other suboptimalities.
  • It proposes a prediction-based violation scoring method that uses a random forest to remove nonlinear Markov-compliant dynamics and then ridge regression to test whether history improves prediction error on residuals, producing a bounded score in [0,1] without requiring causal graph construction.
  • Across six common RL environments and three algorithms (PPO, A2C, SAC), the authors find that higher AR(1) noise intensity is often associated with higher non-Markovian violation scores (notably in high-dimensional locomotion tasks), with reported Spearman correlations up to 0.78.
  • Under training-time noise, most environment–algorithm pairs show statistically significant reward degradation, and the study also documents an “inversion” failure mode in low-dimensional settings where the random forest can absorb the noise signal, lowering the score as violations increase.
  • A utility experiment suggests the score can correctly identify partial observability and guide architecture selection, fully recovering the performance lost to non-Markovian observations; code to reproduce all results is released on GitHub.
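The two-stage pipeline described above can be sketched in a few lines. This is our own minimal reconstruction, not the authors' released code: the function name `violation_score`, the history window, the holdout split, and the specific clipping into [0, 1] are all assumptions about how "relative error reduction from history" might be operationalized.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

def violation_score(obs, history=4, seed=0):
    """obs: (T, d) observation trajectory. Returns a score in [0, 1]."""
    X, y = obs[:-1], obs[1:]
    # Stage 1: a random forest strips nonlinear Markov-compliant
    # one-step dynamics; what remains is the residual signal.
    rf = RandomForestRegressor(n_estimators=50, random_state=seed).fit(X, y)
    resid = y - rf.predict(X)

    # Stage 2: does a window of past observations predict the residuals
    # better than the current observation alone?
    H = history
    cur = X[H - 1:]                                          # current obs only
    hist = np.hstack([X[H - 1 - k: len(X) - k] for k in range(H)])  # lags 0..H-1
    target = resid[H - 1:]

    # Held-out comparison so the larger history feature set cannot win
    # merely by overfitting in-sample.
    idx = np.arange(len(target))
    tr, te = train_test_split(idx, test_size=0.3, random_state=seed)
    err_cur = mean_squared_error(
        target[te], Ridge().fit(cur[tr], target[tr]).predict(cur[te]))
    err_hist = mean_squared_error(
        target[te], Ridge().fit(hist[tr], target[tr]).predict(hist[te]))

    # Relative error reduction attributable to history, clipped to [0, 1].
    return float(np.clip(1.0 - err_hist / max(err_cur, 1e-12), 0.0, 1.0))
```

On a perfectly Markovian trajectory, history should offer no held-out improvement over the current observation, pushing the score toward 0; correlated observation noise gives history genuine predictive power over the residuals, pushing it toward 1.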

Abstract

Reinforcement learning algorithms assume that observations satisfy the Markov property, yet real-world sensors frequently violate this assumption through correlated noise, latency, or partial observability. Standard performance metrics conflate Markov breakdowns with other sources of suboptimality, leaving practitioners without diagnostic tools for such violations. This paper introduces a prediction-based scoring method that quantifies non-Markovian structure in observation trajectories. A random forest first removes nonlinear Markov-compliant dynamics; ridge regression then tests whether historical observations reduce prediction error on the residuals beyond what the current observation provides. The resulting score is bounded in [0, 1] and requires no causal graph construction. Evaluation spans six environments (CartPole, Pendulum, Acrobot, HalfCheetah, Hopper, Walker2d), three algorithms (PPO, A2C, SAC), controlled AR(1) noise at six intensity levels, and 10 seeds per condition. In post-hoc detection, 7 of 16 environment-algorithm pairs, primarily high-dimensional locomotion tasks, show significant positive monotonicity between noise intensity and the violation score (Spearman rho up to 0.78, confirmed under repeated-measures analysis); under training-time noise, 13 of 16 pairs exhibit statistically significant reward degradation. An inversion phenomenon is documented in low-dimensional environments where the random forest absorbs the noise signal, causing the score to decrease as true violations grow, a failure mode analyzed in detail. A practical utility experiment demonstrates that the proposed score correctly identifies partial observability and guides architecture selection, fully recovering performance lost to non-Markovian observations. Source code to reproduce all results is provided at https://github.com/NAVEENMN/Markovianes.
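The "controlled AR(1) noise" used in the evaluation can be sketched as an additive corruption n_t = rho * n_{t-1} + sigma * eps_t applied to observations. The function below is a hedged illustration; the parameter names and the specific values of rho and sigma are our assumptions, not the paper's six intensity levels.

```python
import numpy as np

def ar1_noise(T, d, rho=0.8, sigma=0.1, seed=0):
    """Generate an AR(1) noise trajectory: n_t = rho * n_{t-1} + sigma * eps_t.

    Adding this to otherwise Markovian observations injects exactly the kind
    of temporally correlated corruption the violation score is meant to detect.
    """
    rng = np.random.default_rng(seed)
    n = np.zeros((T, d))
    for t in range(1, T):
        n[t] = rho * n[t - 1] + sigma * rng.normal(size=d)
    return n
```

Sweeping rho (or sigma) then yields the graded intensity levels against which the score's monotonicity can be checked, e.g. via a Spearman correlation across levels.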