Sample Complexity Bounds for Stochastic Shortest Path with a Generative Model

arXiv cs.LG / 4/20/2026


Key Points

  • The paper analyzes the sample complexity of learning an ε-optimal policy for the Stochastic Shortest Path (SSP) problem, assuming access to a generative model.
  • It proves a worst-case lower bound showing that any algorithm needs Ω(SAB⋆^3/(c_min ε^2)) samples to achieve ε-optimality with high probability.
  • A key implication is that if c_min = 0, SSP may become unlearnable in general, making SSP learning strictly harder than in finite-horizon or discounted MDP settings.
  • The authors complement the lower bound with algorithms that match it up to logarithmic factors: one for the general case, and another for the c_min = 0 case that requires the optimal policy to have a bounded hitting time to the goal state.
  • Overall, the work characterizes when SSP learning is possible and quantifies the sample cost under different structural assumptions.
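To make the "generative model" access pattern concrete, here is a minimal, hypothetical sketch (the SSP instance, state/action sets, and sample count below are all illustrative, not from the paper): the learner queries a simulator at any (state, action) pair it chooses, builds an empirical transition model, and runs value iteration on that model. The number of queries per pair plays the role of the sample budget that the paper's bounds quantify.

```python
import random

# Hypothetical 3-state SSP: states 0 and 1 are non-goal, state 2 is the
# absorbing goal. All probabilities and costs are illustrative.
TRUE_P = {  # true transition probabilities, unknown to the learner
    (0, 0): [0.1, 0.7, 0.2],
    (0, 1): [0.0, 0.2, 0.8],
    (1, 0): [0.5, 0.1, 0.4],
    (1, 1): [0.3, 0.0, 0.7],
}
COST = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 1.0, (1, 1): 1.5}  # c_min = 1.0 > 0

def generative_model(s, a, rng):
    """Sample a next state for (s, a) -- the only access the learner has."""
    return rng.choices([0, 1, 2], weights=TRUE_P[(s, a)])[0]

def estimate_model(n_samples, rng):
    """Build an empirical transition model from n_samples queries per (s, a)."""
    p_hat = {}
    for sa in TRUE_P:
        counts = [0, 0, 0]
        for _ in range(n_samples):
            counts[generative_model(*sa, rng)] += 1
        p_hat[sa] = [c / n_samples for c in counts]
    return p_hat

def value_iteration(p, iters=200):
    """Approximate the optimal SSP values on a (possibly empirical) model."""
    v = [0.0, 0.0, 0.0]  # the goal state always has value 0
    for _ in range(iters):
        v = [min(COST[(s, a)] + sum(pr * vv for pr, vv in zip(p[(s, a)], v))
                 for a in (0, 1)) for s in (0, 1)] + [0.0]
    return v

rng = random.Random(0)
v_true = value_iteration(TRUE_P)
v_hat = value_iteration(estimate_model(2000, rng))
print("true values:", v_true)
print("estimated values:", v_hat)
```

With more queries per pair, the empirical model concentrates around the truth and the plug-in values approach the optimal ones; the paper's results make this tradeoff precise, including the B⋆ and c_min dependence that this toy example does not exhibit.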

Abstract

We study the sample complexity of learning an ε-optimal policy in the Stochastic Shortest Path (SSP) problem. We first derive sample complexity bounds when the learner has access to a generative model. We show that there exists a worst-case SSP instance with S states, A actions, minimum cost c_min, and maximum expected cost of the optimal policy over all states B⋆, where any algorithm requires at least Ω(SAB⋆^3/(c_min ε^2)) samples to return an ε-optimal policy with high probability. Surprisingly, this implies that whenever c_min = 0 an SSP problem may not be learnable, thus revealing that learning in SSPs is strictly harder than in the finite-horizon and discounted settings. We complement this lower bound with an algorithm that matches it, up to logarithmic factors, in the general case, and an algorithm that matches it up to logarithmic factors even when c_min = 0, but only under the condition that the optimal policy has a bounded hitting time to the goal state.
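Written out in the abstract's notation, the lower bound makes the c_min = 0 implication immediate (this is just a restatement of the bound above, not an additional result):

```latex
% Worst-case sample complexity lower bound stated in the abstract:
N \;=\; \Omega\!\left( \frac{S A B_{\star}^{3}}{c_{\min}\,\epsilon^{2}} \right)
% Holding S, A, B_* and epsilon fixed and letting c_min -> 0, the
% right-hand side diverges: no finite number of generative-model
% samples suffices, which is the sense in which SSP with zero
% minimum cost may be unlearnable in general.
```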