Maximizing mutual information between user contexts and responses improves LLM personalization with no additional data

arXiv cs.AI / 3/23/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper proposes Mutual Information Preference Optimization (MIPO), a contrastive data augmentation method that builds preference pairs by conditioning on the correct prompt for a positive response and on a random, unrelated prompt for a negative response (a code sketch of this construction follows this list).
  • It leverages Direct Preference Optimization (DPO) to maximize the pointwise conditional mutual information between prompts and model responses, enabling improved personalization without external supervision.
  • Experiments with Llama- and Qwen-Instruct models show 3-40% improvements on personalization tasks with real-user data, and 1-18% gains on math and multiple-choice tasks without any additional data.
  • The findings suggest a promising self-improvement direction for LLMs, reducing reliance on labeled data while potentially benefiting a range of tasks.
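
The pair construction is simple enough to sketch in a few lines of Python. The function name, record schema, and `generate` callable below are illustrative placeholders rather than the paper's code; `generate` stands for any sampler over the frozen base model.

```python
import random

def build_mipo_pairs(prompts, generate, seed=0):
    """Build DPO-style preference records from prompts alone.

    For each prompt, the chosen response is sampled conditioned on that
    prompt, and the rejected response is sampled conditioned on a
    different, randomly drawn prompt. Requires at least two prompts.
    `generate(prompt) -> str` stands for any sampler over the base model.
    """
    rng = random.Random(seed)
    pairs = []
    for i, prompt in enumerate(prompts):
        chosen = generate(prompt)  # positive: conditioned on the correct prompt
        j = rng.randrange(len(prompts) - 1)  # pick a different prompt index
        if j >= i:
            j += 1
        rejected = generate(prompts[j])  # negative: a random, unrelated prompt
        pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return pairs
```

Running standard DPO on the resulting records is then the whole method; no labels, reward model, or external verifier enters the loop.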

Abstract

While post-training has successfully improved large language models (LLMs) across a variety of domains, these gains rely heavily on human-labeled data or external verifiers. Existing data has already been exploited, and new high-quality data is expensive to collect. More fundamentally, true intelligence goes far beyond tasks that are easily verifiable. We therefore need self-improvement frameworks that allow models to improve without external oversight. We propose *Mutual Information Preference Optimization (MIPO)*, a contrastive data augmentation method that constructs preference pairs by generating a positive response conditioned on the correct prompt and a negative response conditioned on a random, unrelated prompt. We show that using Direct Preference Optimization (DPO) to learn from this paired data maximizes the pointwise conditional mutual information (MI), under the base LLM, between prompts and model responses. Empirical results with various-sized Llama- and Qwen-Instruct models show that when used to maximize MI between user context and response, MIPO provides an effective personalization technique, achieving 3-40% improvements over strong baselines on personalization tasks using real-user datasets. Surprisingly, MIPO can also be applied to improve performance on math and multiple-choice problems, yielding 1-18% gains **without any additional data or human supervision**. These results suggest a promising direction for self-improvement.
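
For readers who want the MI connection spelled out, the following is a minimal sketch using the standard DPO loss. The notation (π_ref for the frozen base model, π_θ for the trained policy, β for the DPO temperature) is conventional rather than taken from the paper, and the exact conditioning variables (e.g., user context) follow the paper's setup.

```latex
% Pointwise mutual information between prompt x and response y under
% the base model \pi_{\mathrm{ref}} (the marginal averages over prompts):
\operatorname{pmi}(x; y) = \log \frac{\pi_{\mathrm{ref}}(y \mid x)}{\pi_{\mathrm{ref}}(y)},
\qquad
\pi_{\mathrm{ref}}(y) = \mathbb{E}_{x'}\bigl[\pi_{\mathrm{ref}}(y \mid x')\bigr].

% DPO loss on a MIPO pair, with y^{+} \sim \pi_{\mathrm{ref}}(\cdot \mid x)
% and y^{-} \sim \pi_{\mathrm{ref}}(\cdot \mid x') for a random unrelated x':
\mathcal{L}_{\mathrm{DPO}} = -\log \sigma\!\left(
  \beta \log \frac{\pi_{\theta}(y^{+} \mid x)}{\pi_{\mathrm{ref}}(y^{+} \mid x)}
  - \beta \log \frac{\pi_{\theta}(y^{-} \mid x)}{\pi_{\mathrm{ref}}(y^{-} \mid x)}
\right).
```

Intuitively, because y⁻ is sampled from a random, unrelated prompt, it behaves like a draw from the marginal π_ref(y); preferring y⁺ over it pushes the policy toward responses that are likely given x but not likely in general, which is what raising the pointwise MI means.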