Intrinsic Mutual Information as a Modulator for Preference Optimization

arXiv cs.LG · April 29, 2026


Key Points

  • The paper introduces RMiPO, a lightweight framework for offline preference optimization of LLMs that targets limitations of methods like DPO, particularly their need for extensive hyperparameter tuning (e.g., DPO's fixed temperature β; see the objective shown after this list).
  • RMiPO uses intrinsic, response-level mutual information to modulate preferences, dynamically decoupling preference contributions with minimal extra computation.
  • Experiments show RMiPO delivers consistently better performance than existing offline preference optimization approaches.
  • The method also reduces training overhead by more than 15%, improving efficiency without sacrificing alignment gains.
  • The authors provide an open-source implementation at https://github.com/liavonpenn/rmipo.

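For context, the standard DPO objective below (a textbook formulation, not taken from this paper's notation) shows the fixed temperature $\beta$ that typically requires the tuning the key points refer to:

$$
\mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$

Here $y_w$ and $y_l$ are the preferred and dispreferred responses to prompt $x$, $\pi_{\mathrm{ref}}$ is the frozen reference policy, and $\sigma$ is the logistic function. RMiPO's contribution, per the abstract, is to modulate this kind of fixed hyperparameter per example using a response-level mutual-information signal rather than a single global value.
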
Abstract

Offline preference optimization methods, such as Direct Preference Optimization (DPO), offer significant advantages in aligning Large Language Models (LLMs) with human values. However, achieving optimal performance with these methods typically involves additional hyperparameter tuning, resulting in substantial time overhead. Although prior work has proposed a range of improvements, these methods remain limited in effectiveness and have not fully eliminated reliance on hyperparameter tuning. In this work, we propose RMiPO, a lightweight and efficient framework for offline preference optimization. RMiPO leverages intrinsic Response-level Mutual information for Preference Optimization with hyperparameter modulation, dynamically decoupling preference contributions at negligible additional computational cost. Extensive experimental results demonstrate that RMiPO achieves consistently superior performance over existing methods while reducing training overhead by more than 15%. Our code is available at https://github.com/liavonpenn/rmipo.
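
The summary above does not spell out RMiPO's loss, so the following is only a minimal sketch of the general shape such a method could take: a DPO-style loss whose per-example temperature is scaled by a response-level statistic. Everything here (the `mi_score` proxy, `modulated_dpo_loss`, the sigmoid gating) is an assumption for illustration, not RMiPO's actual formulation; the authoritative implementation is the linked repository.

```python
# Illustrative sketch only: a DPO-style loss whose per-example temperature is
# modulated by a response-level statistic. All names and the modulation form
# are hypothetical stand-ins for the paper's method.

import torch
import torch.nn.functional as F


def mi_score(logps: torch.Tensor, lengths: torch.Tensor) -> torch.Tensor:
    """Hypothetical response-level statistic: average per-token log-probability,
    used here as a stand-in for the paper's intrinsic mutual-information measure."""
    return logps / lengths.clamp(min=1)


def modulated_dpo_loss(
    policy_chosen_logps: torch.Tensor,    # sum of log p_theta(y_w | x) per example
    policy_rejected_logps: torch.Tensor,  # sum of log p_theta(y_l | x) per example
    ref_chosen_logps: torch.Tensor,
    ref_rejected_logps: torch.Tensor,
    chosen_lengths: torch.Tensor,
    rejected_lengths: torch.Tensor,
    base_beta: float = 0.1,
) -> torch.Tensor:
    # Standard DPO log-ratio terms against the frozen reference policy.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps

    # Per-example modulation: scale beta by the gap between the chosen and
    # rejected response-level statistics, so each preference pair contributes
    # on its own scale rather than through one fixed global beta. (Assumed form.)
    gap = mi_score(policy_chosen_logps, chosen_lengths) - mi_score(
        policy_rejected_logps, rejected_lengths
    )
    beta = base_beta * torch.sigmoid(gap).detach()  # no gradient through the modulator

    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```

In this sketch, `.detach()` keeps gradients from flowing through the modulator, so it acts as a per-pair weighting rather than a second training signal; whether RMiPO makes the same choice is not stated in this summary.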