AI Navigate

VEPO: Variable Entropy Policy Optimization for Low-Resource Language Foundation Models

arXiv cs.CL / March 20, 2026

📰 News · Models & Research

Key Points

  • VEPO applies Reinforcement Learning with Verifiable Rewards to enforce deterministic constraints such as prescribed sequence length, robust format consistency, and linguistically well-formed output during training.
  • A variable entropy mechanism enables the model to dynamically balance literal fidelity and semantic naturalness by adjusting the exploration-exploitation trade-off.
  • The approach integrates entropy-tempered advantage estimation with asymmetric clipping to maintain robust exploration and mitigate policy collapse during learning.
  • Empirical evaluations across 90 FLORES-200 translation directions, measured with COMET-22 and chrF, show substantial gains in tokenization efficiency and translation quality for underrepresented languages, bridging performance gaps.
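The article does not include code, but the entropy-tempered advantage estimation and asymmetric clipping mentioned in the key points might look roughly like the sketch below. The function names, the exponential tempering form, and the specific clip bounds are all illustrative assumptions, not taken from the paper:

```python
import numpy as np

def asymmetric_clip_objective(ratio, advantage, eps_low=0.2, eps_high=0.28):
    """PPO-style surrogate with asymmetric clipping: a wider upper bound
    (eps_high > eps_low) allows positive-advantage tokens to be upweighted
    more than negative ones are downweighted, which is one way to sustain
    exploration. The bounds here are assumed values for illustration."""
    clipped = np.clip(ratio, 1.0 - eps_low, 1.0 + eps_high)
    # Standard pessimistic min over unclipped and clipped surrogates.
    return np.minimum(ratio * advantage, clipped * advantage)

def entropy_tempered_advantage(advantage, entropy, target_entropy=1.0, beta=0.1):
    """One plausible form of entropy tempering: shrink the advantage when
    policy entropy falls below a target, damping updates that would collapse
    the policy onto a few tokens. The exact functional form is an assumption."""
    temper = np.exp(-beta * np.maximum(target_entropy - entropy, 0.0))
    return advantage * temper
```

With `eps_low=0.2, eps_high=0.28`, a ratio of 1.5 with positive advantage is clipped to 1.28 rather than 1.2, so strongly favored tokens retain slightly more gradient signal.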

Abstract

Large language models frequently exhibit suboptimal performance on low-resource languages, primarily due to inefficient subword segmentation and systemic training-data imbalances. In this paper, we propose Variable Entropy Policy Optimization (VEPO), which leverages Reinforcement Learning with Verifiable Rewards to incorporate deterministic structural constraints into the policy alignment process. This framework enforces prescribed sequence length, robust format consistency, and rigorous linguistic well-formedness during training. Central to our approach is a variable entropy mechanism that enables the model to dynamically calibrate the balance between literal fidelity and semantic naturalness by modulating the exploration-exploitation trade-off. By integrating entropy-tempered advantage estimation with asymmetric clipping, VEPO sustains robust exploration while mitigating policy collapse. Empirical evaluations across 90 FLORES-200 translation directions, evaluated with COMET-22 and chrF, demonstrate that VEPO yields substantial improvements in both tokenization efficiency and translation quality, bridging the performance gap for underrepresented languages.
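As a concrete illustration of the "verifiable rewards" idea in the abstract, a deterministic reward can be built from rule-based structural checks that either pass or fail, making the signal fully verifiable. The specific checks, thresholds, and equal weighting below are illustrative assumptions, not the paper's actual reward function:

```python
import re

def verifiable_reward(output: str, min_len: int = 1, max_len: int = 256) -> float:
    """Deterministic, rule-based reward in the spirit of RLVR.
    Each structural constraint is a boolean check; the reward is the
    fraction of checks passed. Checks and weights are assumed examples."""
    tokens = output.split()
    checks = [
        min_len <= len(tokens) <= max_len,               # prescribed sequence length
        output == output.strip(),                        # format consistency: no stray whitespace
        not re.search(r"(\b\w+\b)(?: \1){3,}", output),  # well-formedness: no 4x word repetition
    ]
    return sum(checks) / len(checks)
```

Because every check is deterministic, the same output always receives the same reward, which is what makes the signal suitable for RL with verifiable rewards.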