Softmax gradient policy for variance minimization and risk-averse multi-armed bandits

arXiv cs.AI / 4/2/2026


Key Points

  • The paper studies a risk-averse multi-armed bandit setting that prioritizes selecting the arm with the lowest reward variance instead of the highest expected reward.
  • It uses a softmax-parameterized policy and introduces a new algorithm whose objective is based on an unbiased estimate constructed from two independent draws from the arm distribution.
  • The authors prove convergence of the proposed variance-minimizing/risk-averse method under natural assumptions.
  • Numerical experiments are provided to demonstrate practical behavior and to inform implementation choices, including extensions to settings that balance mean reward and variance.
  • Overall, the work broadens bandit theory toward stability-focused decision-making and offers a method that can be adapted to general risk-aware optimization trade-offs.
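The second key point rests on a standard identity: for two independent draws X, X' from the same distribution, E[(X − X')²/2] = Var(X), so averaging (x − x')²/2 over i.i.d. pairs gives an unbiased variance estimate without needing the mean. A minimal sketch (the Gaussian arm, pair count, and helper name are illustrative assumptions, not the paper's setup):

```python
import random

def two_draw_variance_estimate(draw, n_pairs=100_000):
    """Unbiased variance estimate: average of (x - x')^2 / 2 over i.i.d. pairs.

    `draw` is any zero-argument callable returning one sample from the arm.
    """
    total = 0.0
    for _ in range(n_pairs):
        x, x2 = draw(), draw()  # two independent draws from the same arm
        total += 0.5 * (x - x2) ** 2
    return total / n_pairs

# Example: a Gaussian arm with standard deviation 2, i.e. true variance 4.
random.seed(0)  # for reproducibility of the sketch
est = two_draw_variance_estimate(lambda: random.gauss(0.0, 2.0))
```

The estimate concentrates near the true variance 4; in the bandit setting a single pair per round already yields an unbiased (if noisy) gradient signal.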

Abstract

Algorithms for the Multi-Armed Bandit (MAB) problem play a central role in sequential decision-making and have been extensively explored both theoretically and numerically. While most classical approaches aim to identify the arm with the highest expected reward, we focus on a risk-aware setting where the goal is to select the arm with the lowest variance, favoring stability over potentially high but uncertain returns. To model the decision process, we consider a softmax parameterization of the policy; we propose a new algorithm to select the minimal-variance (or minimal-risk) arm and prove its convergence under natural conditions. The algorithm constructs an unbiased estimate of the objective by using two independent draws from the current arm's distribution. We provide numerical experiments that illustrate the practical behavior of these algorithms and offer guidance on implementation choices. The setting also covers general risk-aware problems where there is a trade-off between maximizing the average reward and minimizing its variance.
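The loop described in the abstract can be sketched as a score-function (REINFORCE-style) gradient descent on the expected arm variance: sample an arm from the softmax policy, take two independent draws to form the unbiased per-round variance estimate, and step the logits against grad_k log p(a) = 1{k=a} − p_k. This is a minimal illustration, not the authors' algorithm; the Gaussian arms, step size, and step count are assumptions made for the sketch:

```python
import math
import random

def softmax(theta):
    # Numerically stable softmax over the arm logits.
    m = max(theta)
    exps = [math.exp(t - m) for t in theta]
    s = sum(exps)
    return [e / s for e in exps]

def variance_min_policy_gradient(arm_sigmas, steps=20_000, lr=0.05, seed=0):
    """Softmax-policy gradient descent toward the minimal-variance arm.

    `arm_sigmas` gives each (zero-mean Gaussian) arm's standard deviation.
    Returns the final policy probabilities.
    """
    rng = random.Random(seed)
    theta = [0.0] * len(arm_sigmas)
    for _ in range(steps):
        p = softmax(theta)
        a = rng.choices(range(len(p)), weights=p)[0]
        # Two independent draws from the chosen arm -> unbiased variance estimate.
        x, x2 = rng.gauss(0.0, arm_sigmas[a]), rng.gauss(0.0, arm_sigmas[a])
        v = 0.5 * (x - x2) ** 2
        # Descent step: grad_k log p(a) = 1{k==a} - p_k, scaled by the estimate v.
        for k in range(len(theta)):
            theta[k] -= lr * v * ((1.0 if k == a else 0.0) - p[k])
    return softmax(theta)

# Three arms with variances 4.0, 0.25, 1.0; arm 1 has the lowest variance.
probs = variance_min_policy_gradient([2.0, 0.5, 1.0])
```

In expectation each logit drifts by −lr · p_k · (Var_k − average variance under p), so below-average-variance arms gain probability mass and the policy concentrates on the minimal-variance arm. A mean–variance trade-off variant would simply replace `v` with a combined signal such as `v - c * reward`, at the cost of an extra weighting hyperparameter.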