AI Navigate

Regularized Latent Dynamics Prediction is a Strong Baseline For Behavioral Foundation Models

arXiv cs.AI / 3/18/2026


Key Points

  • Regularized Latent Dynamics Prediction (RLDP) adds an orthogonality regularization to latent state features to maintain diversity and prevent collapse.
  • The approach aims to be a simple, competitive baseline that can match or surpass complex representation-learning objectives for zero-shot RL.
  • It shows robustness by performing well in low-coverage data scenarios where prior methods struggle.
  • The work positions RLDP as a strong baseline for Behavioral Foundation Models, potentially reducing the need for complex representation-learning objectives.

Abstract

Behavioral Foundation Models (BFMs) produce agents capable of adapting to any unknown reward or task. However, these methods can only produce near-optimal policies for reward functions that lie in the span of some pre-existing state features, making the choice of state features crucial to the expressivity of the BFM. As a result, BFMs are trained using a variety of complex objectives and require sufficient dataset coverage to learn task-useful spanning features. In this work, we examine the question: are these complex representation-learning objectives necessary for zero-shot RL? Specifically, we revisit the objective of self-supervised next-state prediction in latent space for state-feature learning, but observe that this objective alone is prone to increasing state-feature similarity and thereby reducing span. We propose an approach, Regularized Latent Dynamics Prediction (RLDP), that adds a simple orthogonality regularization to maintain feature diversity and can match or surpass state-of-the-art complex representation-learning methods for zero-shot RL. Furthermore, we show empirically that prior approaches perform poorly in low-coverage scenarios where RLDP still succeeds.
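To make the idea concrete, here is a minimal sketch of what an orthogonality-regularized latent dynamics objective could look like. This is an illustration based only on the abstract, not the paper's implementation: the function names, the use of a squared-Frobenius-norm penalty on the empirical feature Gram matrix, and the `lambda_ortho` weight are all assumptions.

```python
import numpy as np

def dynamics_loss(phi_next_pred, phi_next):
    """MSE between predicted and actual next-state latent features
    (the self-supervised next-state prediction objective)."""
    return np.mean((phi_next_pred - phi_next) ** 2)

def orthogonality_penalty(phi):
    """phi: (batch, d) latent features. Penalize deviation of the
    empirical feature Gram matrix from the identity, which discourages
    feature collapse (all feature dimensions becoming similar)."""
    n, d = phi.shape
    gram = phi.T @ phi / n                     # (d, d) second-moment matrix
    return np.sum((gram - np.eye(d)) ** 2)     # squared Frobenius norm

def rldp_loss(phi_next_pred, phi_next, lambda_ortho=1.0):
    """Hypothetical combined objective: prediction error plus a
    diversity-preserving regularizer, weighted by lambda_ortho."""
    return (dynamics_loss(phi_next_pred, phi_next)
            + lambda_ortho * orthogonality_penalty(phi_next))
```

For intuition: a batch of features whose dimensions are decorrelated and unit-scaled incurs near-zero penalty, while a collapsed batch (every feature dimension identical) incurs a large one, which is exactly the failure mode the abstract attributes to latent next-state prediction alone.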