AI Navigate

Reward Prediction with Factorized World States

arXiv cs.CL / 3/11/2026


Key Points

  • The paper introduces StateFactory, a method that factorizes unstructured observations into a hierarchical object-attribute structure using language models, enabling better reward prediction.
  • StateFactory estimates rewards as semantic similarity between the current state and the goal state under hierarchical constraints, promoting strong generalization across domains.
  • The approach is evaluated on a new RewardPrediction benchmark with diverse domains and shows promising zero-shot results, outperforming existing models like VLWM-critic and LLM-as-a-Judge.
  • Improved reward prediction quality translates into better agent planning, resulting in significant success rate gains on tasks like AlfWorld and ScienceWorld compared to reactive and previous planning methods.
  • This research highlights how structured world-state representations let reinforcement learning agents generalize reward prediction without inheriting the biases of supervised reward-model training.
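The factorized state described in the key points can be pictured as a nested object-to-attribute mapping, with reward read off as the fraction of goal facts the current state satisfies. The sketch below is illustrative only, not the paper's implementation: the names (`flatten`, `reward`) and the exact-match set-overlap scoring are assumptions standing in for StateFactory's LLM-driven factorization and semantic-similarity matching.

```python
# Illustrative sketch (not the paper's code): a factorized world state as a
# hierarchical object -> attribute mapping, with reward estimated as the
# fraction of goal (object, attribute, value) facts the current state satisfies.

def flatten(state):
    """Flatten a nested object -> attribute dict into a set of fact triples."""
    facts = set()
    for obj, attrs in state.items():
        for attr, value in attrs.items():
            facts.add((obj, attr, value))
    return facts

def reward(current, goal):
    """Reward = fraction of goal facts present in the current state."""
    goal_facts = flatten(goal)
    if not goal_facts:
        return 1.0  # empty goal is trivially satisfied
    return len(goal_facts & flatten(current)) / len(goal_facts)

goal = {"mug": {"location": "sink", "state": "clean"}}
current = {"mug": {"location": "sink", "state": "dirty"},
           "faucet": {"state": "off"}}
print(reward(current, goal))  # 0.5: one of two goal facts holds
```

Exact matching is the crudest stand-in for the paper's semantic similarity; the point of the sketch is only that a factorized state makes step-wise reward a direct state-vs-goal comparison rather than a learned black box.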

Computer Science > Computation and Language

arXiv:2603.09400 (cs)
[Submitted on 10 Mar 2026]

Title: Reward Prediction with Factorized World States

Authors: Yijun Shen and 6 other authors
Abstract: Agents must infer action outcomes and select actions that maximize a reward signal indicating how close the goal is to being reached. Supervised learning of reward models can introduce biases inherent in the training data, limiting generalization to novel goals and environments. In this paper, we investigate whether well-defined world state representations alone can enable accurate reward prediction across domains. To address this, we introduce StateFactory, a factorized representation method that transforms unstructured observations into a hierarchical object-attribute structure using language models. This structured representation allows rewards to be estimated naturally as the semantic similarity between the current state and the goal state under hierarchical constraints. Overall, the compact representation structure induced by StateFactory enables strong reward generalization. We evaluate on RewardPrediction, a new benchmark dataset spanning five diverse domains and comprising 2,454 unique action-observation trajectories with step-wise ground-truth rewards. Our method shows promising zero-shot results against both VLWM-critic and LLM-as-a-Judge reward models, achieving 60% and 8% lower EPIC distance, respectively. Furthermore, this superior reward quality translates into improved agent planning performance, yielding success-rate gains of +21.64% on AlfWorld and +12.40% on ScienceWorld over reactive system-1 policies and enhancing system-2 agent planning. Project Page: this https URL
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2603.09400 [cs.CL]
  (or arXiv:2603.09400v1 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2603.09400

Submission history

From: Yijun Shen
[v1] Tue, 10 Mar 2026 09:12:20 UTC (2,802 KB)