AI Navigate

UtilityMax Prompting: A Formal Framework for Multi-Objective Large Language Model Optimization

arXiv cs.CL / 3/13/2026


Key Points

  • UtilityMax Prompting is introduced as a formal framework for multi-objective prompting that uses influence diagrams and a utility function to maximize expected utility in LLM outputs.
  • The approach replaces natural-language prompts with formal mathematical specifications to reduce ambiguity when balancing multiple objectives.
  • The authors validate the framework on the MovieLens 1M dataset across Claude Sonnet 4.6, GPT-5.4, and Gemini 2.5 Pro, showing improvements in precision and NDCG over natural-language baselines.
  • The work highlights potential for more predictable, objective-driven LLM behavior and could influence future prompt engineering and model optimization workflows.
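Since NDCG is the headline ranking metric in the evaluation, a minimal sketch of how it is computed may help; the relevance lists below are illustrative assumptions, not data from the paper.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: rel_i / log2(i + 1), with positions starting at 1."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    """NDCG: DCG of the given ordering, normalised by the ideal (sorted) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Hypothetical ranked list of movie relevances (1 = relevant, 0 = not):
# a relevant item demoted to position 2 lowers NDCG below 1.
score = ndcg([1, 0, 1, 1])
```

An already-ideal ordering such as `[1, 1, 0]` scores exactly 1.0, which is what makes NDCG comparable across ranked lists of different sizes.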

Abstract

The success of a Large Language Model (LLM) task depends heavily on its prompt. Most use cases specify prompts in natural language, which is inherently ambiguous when multiple objectives must be satisfied simultaneously. In this paper we introduce UtilityMax Prompting, a framework that specifies tasks in formal mathematical language. We reconstruct the task as an influence diagram in which the LLM's answer is the sole decision variable. A utility function is defined over the conditional probability distributions within the diagram, and the LLM is instructed to find the answer that maximises expected utility. This constrains the LLM to reason explicitly about each component of the objective, directing its output toward a precise optimization target rather than a subjective natural-language interpretation. We validate our approach on the MovieLens 1M dataset across three frontier models (Claude Sonnet 4.6, GPT-5.4, and Gemini 2.5 Pro), demonstrating consistent improvements in precision and Normalized Discounted Cumulative Gain (NDCG) over natural-language baselines in a multi-objective movie recommendation task.
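The decision rule the abstract describes — score each candidate answer by a utility function over conditional probabilities, then pick the expected-utility maximiser — can be sketched as follows. The candidate movies, objective names, probabilities, and weights are all illustrative assumptions, not values from the paper.

```python
# Sketch of expected-utility selection over candidate answers.
# Each objective is a table P(objective satisfied | answer) plus a utility weight.

def expected_utility(answer, objectives):
    """Weighted sum of P(objective | answer) across all objectives."""
    return sum(p[answer] * weight for p, weight in objectives)

# Two hypothetical objectives for a movie recommendation:
# relevance to the user's history, and diversity of the recommendation list.
p_relevant = {"movie_a": 0.9, "movie_b": 0.6}
p_diverse = {"movie_a": 0.2, "movie_b": 0.9}
objectives = [(p_relevant, 1.0), (p_diverse, 0.5)]

# The decision variable: the answer that maximises expected utility.
best = max(p_relevant, key=lambda a: expected_utility(a, objectives))
```

The point of the formalism is that the trade-off between objectives is fixed by the weights, not left to the model's subjective reading of a natural-language prompt.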