AI Navigate

Improving LLM personalization by maximizing mutual information between user context and responses, without additional data

arXiv cs.AI / 2026/3/23

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper proposes Mutual Information Preference Optimization (MIPO), a contrastive data augmentation method that builds preference pairs by generating a positive response conditioned on the correct prompt and a negative response conditioned on a random, unrelated prompt.
  • Learning from these pairs with Direct Preference Optimization (DPO) maximizes the pointwise conditional mutual information between prompts and model responses, improving personalization without external supervision.
  • Experiments with Llama and Qwen-Instruct models show 3-40% improvements on personalization tasks using real-user data, and 1-18% gains on math and multiple-choice tasks without any additional data.
  • These findings point to a promising direction for LLM self-improvement, reducing reliance on labeled data while potentially benefiting a broad range of tasks.
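The pair-construction step described above can be sketched in a few lines. This is an illustrative reading of the summary, not the paper's code: `generate` stands in for sampling a response from the base LLM, and the output format follows the common DPO convention of (prompt, chosen, rejected) triples.

```python
import random

def build_mipo_pairs(prompts, generate, rng=None):
    """Sketch of MIPO preference-pair construction.

    For each prompt: the positive (chosen) response is sampled conditioned
    on the correct prompt; the negative (rejected) response is sampled
    conditioned on a different, randomly drawn prompt from the batch.
    """
    rng = rng or random.Random(0)
    pairs = []
    for i, prompt in enumerate(prompts):
        # Positive: response conditioned on the correct prompt.
        chosen = generate(prompt)
        # Negative: response conditioned on a random, unrelated prompt.
        j = rng.randrange(len(prompts) - 1)
        if j >= i:
            j += 1  # guarantee the mismatched prompt differs from the correct one
        rejected = generate(prompts[j])
        # DPO then trains on these (prompt, chosen, rejected) triples.
        pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return pairs

# Toy stand-in generator, for illustration only.
toy_generate = lambda p: f"response to: {p}"
pairs = build_mipo_pairs(["Q1", "Q2", "Q3"], toy_generate)
```

No labels or verifiers appear anywhere in this loop, which is the point of the method: both responses come from the base model itself.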

Abstract

While post-training has successfully improved large language models (LLMs) across a variety of domains, these gains heavily rely on human-labeled data or external verifiers. Existing data has already been exploited, and new high-quality data is expensive to collect. More fundamentally, true intelligence goes far beyond tasks that are easily verifiable. Therefore, we need self-improvement frameworks that allow models to improve without external oversight. We propose *Mutual Information Preference Optimization (MIPO)*, a contrastive data augmentation method that constructs preference pairs by generating a positive response conditioning on the correct prompt, and a negative response by conditioning on a random, unrelated prompt. We show that using Direct Preference Optimization (DPO) to learn from this paired data maximizes pointwise conditional mutual information (MI) (under the base LLM) between prompts and model responses. Empirical results with various-sized Llama- and Qwen-Instruct models show that when used to maximize MI between user context and response, MIPO provides an effective personalization technique, achieving 3-40% improvements on personalization tasks using real-user datasets compared to strong baselines. Surprisingly, MIPO can also be applied to improve performance on math and multiple-choice problems, yielding 1-18% gains **without any additional data or human supervision**. These results suggest a promising direction for self-improvement.
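The abstract's MI claim can be made concrete with a short derivation sketch. The notation here is assumed, not taken from the paper: $\pi_0$ is the base LLM, $\pi_\theta$ the model being trained, $x$ the correct prompt, $\tilde{x}$ a random unrelated prompt, and $\beta$ the usual DPO temperature.

```latex
% Pointwise (conditional) mutual information under the base model:
i(x; y) = \log \frac{\pi_0(y \mid x)}{\pi_0(y)}

% MIPO pairs: y^{+} \sim \pi_0(\cdot \mid x), \quad y^{-} \sim \pi_0(\cdot \mid \tilde{x})

% Standard DPO loss on the pair (x, y^{+}, y^{-}), with \pi_0 as reference:
\mathcal{L}_{\mathrm{DPO}}
  = -\,\mathbb{E}\!\left[\log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y^{+} \mid x)}{\pi_0(y^{+} \mid x)}
      - \beta \log \frac{\pi_\theta(y^{-} \mid x)}{\pi_0(y^{-} \mid x)}
    \right)\right]
```

Intuitively, averaging over random prompts $\tilde{x}$ makes $y^{-}$ an approximate sample from the marginal $\pi_0(y)$, so preferring $y^{+}$ over $y^{-}$ pushes the model toward responses with high $i(x;y)$, i.e. responses strongly tied to their own prompt.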