Let's Have a Conversation: Designing and Evaluating LLM Agents for Interactive Optimization

arXiv cs.AI / 4/6/2026


Key Points

  • The paper argues that effective optimization depends on correctly modeling objectives, constraints, and trade-offs, which often requires iterative interaction with stakeholders rather than one-shot solution attempts.
  • It proposes a scalable evaluation methodology for conversation-based LLM optimization agents, using role-play decision agents with internal utility functions to generate thousands of stakeholder-like dialogues.
  • In a school scheduling case study, the authors find that one-shot evaluation severely underestimates performance, while the same agent achieves substantially higher-quality solutions through interactive conversations.
  • The study further shows that optimization agents tailored with domain-specific prompts and structured tools outperform general-purpose chatbots in solution quality while reaching good solutions in fewer interactions.
  • The work highlights how operations-research expertise can improve the design and reliability of interactive optimization agent deployments, bridging AI and practical optimization needs.

Abstract

Optimization is as much about modeling the right problem as solving it. Identifying the right objectives, constraints, and trade-offs demands extensive interaction between researchers and stakeholders. Large language models can empower decision-makers with optimization capabilities through interactive optimization agents that propose, interpret, and refine solutions. However, it is fundamentally harder to evaluate a conversation-based interaction than traditional one-shot approaches. This paper proposes a scalable and replicable methodology for evaluating optimization agents through conversations. We build LLM-powered decision agents that role-play diverse stakeholders, each governed by an internal utility function but communicating like a real decision-maker. We generate thousands of conversations in a school scheduling case study. Results show that one-shot evaluation is severely limiting: the same optimization agent converges to much higher-quality solutions through conversations. The paper then uses this methodology to demonstrate that tailored optimization agents, endowed with domain-specific prompts and structured tools, achieve significant improvements in solution quality in fewer interactions than general-purpose chatbots. These findings provide evidence of the benefits of emerging solutions at the AI-optimization interface to expand the reach of optimization technologies in practice. They also highlight the role of operations research expertise in facilitating interactive deployments through the design of effective and reliable optimization agents.
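To make the evaluation setup concrete, here is a minimal, self-contained sketch of the conversational evaluation loop the abstract describes. Everything in it is a simplifying assumption for illustration: the LLM agents are replaced by deterministic toy functions, the scheduling problem is reduced to three numeric features (`gaps`, `early_starts`, `teacher_prefs`), and the feedback channel passes a feature name rather than natural language. The paper's actual agents, prompts, and utility functions are not shown here.

```python
# Hypothetical sketch of a conversational evaluation loop: a role-play
# "decision agent" scores solutions with a hidden utility function and
# returns feedback; an "optimization agent" refines its proposal in
# response. All names and the toy scheduling objective are illustrative.

# Decision agent's internal utility weights (hidden from the optimizer).
WEIGHTS = {"gaps": -3.0, "early_starts": -1.0, "teacher_prefs": 2.0}


def utility(solution):
    """Decision agent's private score for a candidate schedule."""
    return sum(WEIGHTS[k] * solution[k] for k in WEIGHTS)


def feedback(solution):
    """Stand-in for stakeholder feedback: name the worst-scoring feature."""
    return min(WEIGHTS, key=lambda k: WEIGHTS[k] * solution[k])


def propose(solution, hint=None):
    """Optimization agent: adjust the feature named in the feedback."""
    new = dict(solution)
    if hint in ("gaps", "early_starts"):
        new[hint] = max(0, new[hint] - 1)  # reduce a penalized feature
    elif hint == "teacher_prefs":
        new[hint] += 1                     # boost a rewarded feature
    return new


def evaluate(turns):
    """Run a conversation of `turns` rounds; return the final utility."""
    solution = {"gaps": 4, "early_starts": 3, "teacher_prefs": 1}
    for _ in range(turns):
        solution = propose(solution, feedback(solution))
    return utility(solution)


one_shot = evaluate(turns=1)          # single proposal, no iteration
conversational = evaluate(turns=10)   # iterative refinement
print(one_shot, conversational)       # prints -10.0 2.0
```

Even in this toy setting, the one-shot score understates what the same optimizer reaches through iteration, which mirrors the paper's central evaluation argument: because the utility function lives inside the decision agent, the optimizer can only discover the trade-offs through repeated feedback.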