Diffusion Policy with Bayesian Expert Selection for Active Multi-Target Tracking

arXiv cs.RO / 4/7/2026

Key Points

  • The paper addresses active multi-target tracking for mobile robots by combining diffusion-policy action generation with explicit uncertainty-aware selection among multiple expert strategies.
  • It formulates expert selection as an offline contextual bandit problem and introduces a Bayesian framework to estimate each expert’s expected tracking performance from the robot’s current belief state.
  • A multi-head Variational Bayesian Last Layer (VBLL) model provides both point estimates and predictive uncertainty for the performance of each candidate strategy.
  • Using an offline “pessimism” principle, the method applies a Lower Confidence Bound (LCB) criterion to choose the expert with the best worst-case predicted performance, reducing the risk of acting on unreliable strategy estimates.
  • Experiments in simulated indoor multi-target tracking show improved performance over a base diffusion policy and standard expert-gating approaches such as Mixture-of-Experts.
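The pessimistic LCB selection rule described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `select_expert_lcb`, the example means/stds, and the `beta` weighting are all hypothetical placeholders for whatever each VBLL head actually predicts.

```python
import numpy as np

def select_expert_lcb(means, stds, beta=1.0):
    """Pessimistic expert selection: pick the expert whose
    lower confidence bound (mean - beta * std) is highest."""
    lcb = np.asarray(means) - beta * np.asarray(stds)
    return int(np.argmax(lcb))

# Expert 1 has the highest predicted mean performance, but also the
# highest uncertainty; pessimism prefers the more reliable expert 0.
means = [0.80, 0.90, 0.60]
stds  = [0.05, 0.40, 0.10]
print(select_expert_lcb(means, stds, beta=1.0))  # -> 0
```

Note that with `beta=0` the rule degenerates to greedy selection on the point estimates, which is exactly the deterministic-regression baseline behavior the LCB criterion is meant to improve on.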

Abstract

Active multi-target tracking requires a mobile robot to balance exploration for undetected targets with exploitation of uncertain tracked ones. Diffusion policies have emerged as a powerful approach for capturing diverse behavioral strategies by learning action sequences from expert demonstrations. However, existing methods implicitly select among strategies through the denoising process, without uncertainty quantification over which strategy to execute. We formulate expert selection for diffusion policies as an offline contextual bandit problem and propose a Bayesian framework for pessimistic, uncertainty-aware strategy selection. A multi-head Variational Bayesian Last Layer (VBLL) model predicts the expected tracking performance of each expert strategy given the current belief state, providing both a point estimate and predictive uncertainty. Following the pessimism principle for offline decision-making, a Lower Confidence Bound (LCB) criterion then selects the expert whose worst-case predicted performance is best, avoiding overcommitment to experts with unreliable predictions. The selected expert conditions a diffusion policy to generate corresponding action sequences. Experiments on simulated indoor tracking scenarios demonstrate that our approach outperforms both the base diffusion policy and standard gating methods, including Mixture-of-Experts selection and deterministic regression baselines.
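To make the abstract's pipeline concrete, the sketch below shows how a multi-head Bayesian last layer yields a per-expert predictive mean and variance, which then feed the pessimistic choice. Everything here is assumed for illustration: the shared feature vector `phi`, the per-head Gaussian weight posteriors `(w, Sigma)`, the noise variance, and the closed-form predictive moments are generic Bayesian-linear-regression quantities, not the paper's exact VBLL parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: phi is a shared embedding of the robot's current
# belief state; each expert head k keeps a Gaussian posterior (w_k, Sigma_k)
# over its last-layer weights, as in a variational Bayesian last layer.
d, num_experts = 4, 3
phi = rng.normal(size=d)
heads = [(rng.normal(size=d), np.eye(d) * 0.1) for _ in range(num_experts)]
noise_var = 0.01

lcbs = []
for w, Sigma in heads:
    mean = w @ phi                          # predictive mean for this expert
    var = phi @ Sigma @ phi + noise_var     # predictive variance (epistemic + noise)
    lcbs.append(mean - 1.0 * np.sqrt(var))  # lower confidence bound, beta = 1

expert_idx = int(np.argmax(lcbs))           # pessimistic expert choice
```

The chosen `expert_idx` would then condition the diffusion policy's denoising process to generate that expert's action sequence; that conditioning step is model-specific and omitted here.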