Foresight Optimization for Strategic Reasoning in Large Language Models

arXiv cs.CL / April 16, 2026


Key Points

  • The paper argues that current reasoning-focused LLMs struggle with decision-making in multi-agent settings because they lack explicit foresight modeling of an opponent’s future actions.
  • It proposes Foresight Policy Optimization (FoPO), which blends opponent modeling into LLM policy optimization so models can jointly consider self-interest and counterpart influence.
  • The authors introduce two curated self-play datasets, Cooperative RSA and Competitive Taboo, designed with clear rules and moderate difficulty to study FoPO systematically.
  • Experiments show FoPO improves strategic reasoning across multiple LLMs and also generalizes better to out-of-domain strategic scenarios than standard reasoning optimization baselines.

Abstract

Reasoning capabilities in large language models (LLMs) have advanced significantly. However, existing reasoning-focused LLMs still struggle to make effective decisions in multi-agent environments, due to the absence of explicit foresight modeling. Strategic reasoning, the capability to anticipate a counterpart's behavior and foresee its possible future actions, is fundamental to effective decision-making in such environments, yet existing reasoning enhancement methods for LLMs do not explicitly capture its foresight nature. In this work, we introduce Foresight Policy Optimization (FoPO) to enhance strategic reasoning in LLMs. FoPO integrates opponent modeling principles into policy optimization, enabling explicit consideration of both self-interest and counterpart influence. Specifically, we construct two curated datasets, namely Cooperative RSA and Competitive Taboo, equipped with well-designed rules and moderate difficulty to facilitate a systematic investigation of FoPO in a self-play framework. Our experiments demonstrate that FoPO significantly enhances strategic reasoning across LLMs of varying sizes and origins. Moreover, models trained with FoPO exhibit strong generalization to out-of-domain strategic scenarios, substantially outperforming standard LLM reasoning optimization baselines.
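The abstract does not give FoPO's equations, but the core idea, folding a model of the opponent's likely actions into the policy gradient objective, can be illustrated with a toy sketch. Below is a minimal REINFORCE-style loop on a two-action matrix game in which the learner keeps an empirical opponent model and shapes its return with the expected payoff under the opponent's predicted action. The payoff matrix, the foresight bonus weight, and the fixed opponent are all illustrative assumptions, not details from the paper.

```python
# Toy sketch (NOT the paper's implementation): policy-gradient learning
# on a coordination game, with an opponent model folded into the return.
# FORESIGHT_W and the fixed 80/20 opponent are assumed for illustration.
import math
import random

random.seed(0)

# Coordination game: both players score 1 when they match, 0 otherwise.
PAYOFF = [[1.0, 0.0], [0.0, 1.0]]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

logits = [0.0, 0.0]          # learner's policy parameters
opp_counts = [1.0, 1.0]      # Laplace-smoothed opponent action counts
LR, FORESIGHT_W = 0.2, 0.5   # learning rate, foresight bonus weight (assumed)

for step in range(2000):
    probs = softmax(logits)
    a = sample(probs)
    # Fixed opponent that prefers action 0 with probability 0.8.
    b = 0 if random.random() < 0.8 else 1
    opp_counts[b] += 1.0

    # Foresight term: expected payoff of action a under the opponent model.
    total = sum(opp_counts)
    opp_probs = [c / total for c in opp_counts]
    foresight = sum(opp_probs[j] * PAYOFF[a][j] for j in range(2))

    # Shaped return = realized payoff + weighted foresight bonus.
    ret = PAYOFF[a][b] + FORESIGHT_W * foresight

    # REINFORCE gradient: d log pi(a) / d logit_k = 1[k == a] - probs[k].
    for k in range(2):
        logits[k] += LR * ret * ((1.0 if k == a else 0.0) - probs[k])

final = softmax(logits)
print(f"P(action 0) = {final[0]:.2f}")
```

Under these assumptions the learner shifts probability mass toward the opponent's preferred action, since the foresight bonus rewards actions that coordinate with the *predicted* counterpart move rather than only the realized one. The paper's actual method operates over LLM policies in a self-play framework, which this bandit-style sketch only gestures at.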