Live LTL Progress Tracking: Towards Task-Based Exploration

arXiv cs.LG / 4/21/2026


Key Points

  • The paper proposes “Live LTL Progress Tracking,” a framework for monitoring and representing an autonomous agent’s progress on complex, multi-stage tasks in reinforcement learning (RL).
  • It takes a specification in finite linear temporal logic (LTL) and builds a “tracking vector” that updates at every time step of a trajectory rollout, labeling each part of the task as true, false, or “open” when the outcome is still indeterminate.
  • By applying the tracking vector to an LTL formula tree, the method encodes fine-grained execution information along a trajectory, enabling richer performance metrics, more diverse exploration, and reward shaping.
  • The authors present the framework and algorithm, include a working example, and outline how it could integrate into RL models, with future applications targeting task-space exploration and finding diverse solutions.
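The key points above can be made concrete with a small three-valued tracker: each subformula of an LTL formula tree is labeled true, false, or “open” as a finite trajectory prefix grows. The tuple encoding, operator set, and function names below are assumptions made for this sketch, not the paper’s actual algorithm.

```python
# Illustrative "tracking vector" over an LTL formula tree (sketch only;
# the encoding and semantics here are assumptions, not the paper's).
#
# Formula nodes:
#   ("ap", name)          atomic proposition, read at the prefix's first step
#   ("eventually", phi)   F phi
#   ("always", phi)       G phi
#   ("and", l, r) / ("or", l, r)

def subformulas(phi):
    """Pre-order list of nodes; its indices index the tracking vector."""
    nodes = [phi]
    for child in phi[1:]:
        if isinstance(child, tuple):
            nodes.extend(subformulas(child))
    return nodes

def status(phi, prefix):
    """Three-valued status of phi on a prefix (list of sets of true APs):
    True, False, or None (the 'open' label for indeterminate cases)."""
    op = phi[0]
    if op == "ap":
        return (phi[1] in prefix[0]) if prefix else None
    if op == "eventually":
        # True once the body holds on some suffix; otherwise still open
        if any(status(phi[1], prefix[i:]) is True for i in range(len(prefix))):
            return True
        return None
    if op == "always":
        # False once the body fails on some suffix; otherwise still open
        if any(status(phi[1], prefix[i:]) is False for i in range(len(prefix))):
            return False
        return None
    left, right = status(phi[1], prefix), status(phi[2], prefix)
    if op == "and":
        if left is False or right is False:
            return False
        return True if (left is True and right is True) else None
    if op == "or":
        if left is True or right is True:
            return True
        return False if (left is False and right is False) else None
    raise ValueError(f"unknown operator {op!r}")

def tracking_vector(phi, prefix):
    """One label per subformula, recomputed as the rollout prefix grows."""
    return [status(node, prefix) for node in subformulas(phi)]

# Toy task: eventually pick up the key AND eventually reach the door.
phi = ("and", ("eventually", ("ap", "key")), ("eventually", ("ap", "door")))
trace = [set(), {"key"}, {"key", "door"}]
for t in range(1, len(trace) + 1):
    print(t, tracking_vector(phi, trace[:t]))
```

As the prefix grows, subformulas move from “open” to resolved, which is the fine-grained execution signal the key points describe.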

Abstract

Motivated by the challenge presented by non-Markovian objectives in reinforcement learning (RL), we present a novel framework to track and represent the progress of autonomous agents through complex, multi-stage tasks. Given a specification in finite linear temporal logic (LTL), the framework establishes a 'tracking vector' which updates at each time step in a trajectory rollout. The values of the vector represent the status of the specification as the trajectory develops, assigning true, false, or 'open' labels (where 'open' is used for indeterminate cases). Applied to an LTL formula tree, the tracking vector can be used to encode detailed information about how a task is executed over a trajectory, providing a potential tool for new performance metrics, diverse exploration, and reward shaping. In this paper, we formally present the framework and algorithm, collectively named Live LTL Progress Tracking, give a simple working example, and demonstrate avenues for its integration into RL models. Future work will apply the framework to problems such as task-space exploration and diverse solution-finding in RL.
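One concrete way the reward-shaping direction mentioned in the abstract could be wired up is potential-based shaping over the tracking vector. The scoring rule and function names below are assumptions for illustration, not the paper’s scheme; the tracking vector is treated as a plain list of `True` / `False` / `None` labels, one per subformula.

```python
# Hypothetical potential-based reward shaping driven by a tracking vector
# (a list of True / False / None labels). The scoring rule below is an
# assumption for this sketch, not taken from the paper.

def potential(vec):
    """+1 per subformula resolved True, -1 per False, 0 while still open."""
    return sum(1 if v is True else -1 if v is False else 0 for v in vec)

def shaping_bonus(prev_vec, next_vec, gamma=0.99):
    """Potential-based bonus F = gamma * Phi(s') - Phi(s), a form known to
    preserve optimal policies (Ng, Harada & Russell, 1999)."""
    return gamma * potential(next_vec) - potential(prev_vec)

# A step that newly resolves one subformula to True earns a positive bonus.
bonus = shaping_bonus([None, None, False], [True, None, False], gamma=1.0)
```

Because the bonus depends only on a potential over tracker state, it rewards progress through the task without altering which policies are optimal.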