Incentive-Aware Multi-Fidelity Optimization for Generative Advertising in Large Language Models

arXiv cs.LG / 4/9/2026


Key Points

  • The paper tackles generative advertising in LLM outputs by optimizing sponsorship configurations under advertisers’ strategic behavior and the high cost of stochastic text generation.
  • It proposes the Incentive-Aware Multi-Fidelity Mechanism (IAMFM), which combines Vickrey-Clarke-Groves-style incentives with multi-fidelity optimization to maximize expected social welfare.
  • Two instantiations, elimination-based and model-based, are compared; which performs better depends on the advertisers' budget levels.
  • To keep VCG payments computationally feasible, the authors introduce Active Counterfactual Optimization, a warm-start method that reuses optimization results for efficient payment calculation.
  • Experiments indicate IAMFM achieves higher expected welfare than single-fidelity baselines, and the framework includes formal approximate strategy-proofness and individual rationality guarantees.
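
The VCG-style payment rule mentioned above can be sketched in a few lines. This is an illustrative stand-in, not the paper's implementation: each advertiser pays the welfare loss they impose on the others, which requires solving a counterfactual optimization per advertiser (the cost that the paper's Active Counterfactual Optimization aims to reduce by warm-starting these solves from the main optimization's results).

```python
# Hypothetical VCG payment sketch (names and data layout are assumptions,
# not the paper's API). Each advertiser i pays: the best welfare the others
# could achieve without i, minus the welfare the others get at the chosen
# configuration with i present.

def social_welfare(values, config):
    """Total reported value of `config` summed over advertisers."""
    return sum(v[config] for v in values)

def vcg_payments(values, configs):
    """values: one dict per advertiser mapping configuration -> reported value."""
    # Welfare-maximizing configuration with all advertisers present.
    best = max(configs, key=lambda c: social_welfare(values, c))
    payments = []
    for i in range(len(values)):
        others = values[:i] + values[i + 1:]
        # Counterfactual: best configuration if advertiser i were absent.
        # (In IAMFM this re-solve is warm-started rather than run from scratch.)
        best_wo_i = max(configs, key=lambda c: social_welfare(others, c))
        harm = social_welfare(others, best_wo_i) - social_welfare(others, best)
        payments.append(harm)
    return best, payments
```

For example, with two advertisers valuing configurations "a" and "b" as {a: 3, b: 1} and {a: 1, b: 4}, configuration "b" maximizes total welfare, the first advertiser pays nothing (removing them does not change the outcome), and the second pays 2, the value the first advertiser loses because "b" was chosen over "a".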

Abstract

Generative advertising in large language model (LLM) responses requires optimizing sponsorship configurations under two strict constraints: the strategic behavior of advertisers and the high cost of stochastic generations. To address this, we propose the Incentive-Aware Multi-Fidelity Mechanism (IAMFM), a unified framework coupling Vickrey-Clarke-Groves (VCG) incentives with Multi-Fidelity Optimization to maximize expected social welfare. We compare two algorithmic instantiations (elimination-based and model-based), revealing their budget-dependent performance trade-offs. Crucially, to make VCG computationally feasible, we introduce Active Counterfactual Optimization, a "warm-start" approach that reuses optimization data for efficient payment calculation. We provide formal guarantees for approximate strategy-proofness and individual rationality, establishing a general approach for incentive-aligned, budget-constrained generative processes. Experiments demonstrate that IAMFM outperforms single-fidelity baselines across diverse budgets.
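
The elimination-based instantiation described in the abstract resembles a successive-halving loop over fidelities. The sketch below is a minimal illustration under assumed names: `estimate_welfare` stands in for scoring stochastic LLM generations, where fidelity is the number of samples drawn, so estimates get more accurate (and more expensive) as low performers are eliminated.

```python
import random

def estimate_welfare(config, n_samples, rng):
    # Stand-in for averaging welfare over n_samples stochastic generations:
    # a noisy estimate whose noise shrinks as fidelity (sample count) grows.
    true_value = float(config)  # pretend each config id is its true welfare
    return true_value + rng.gauss(0.0, 1.0 / n_samples ** 0.5)

def eliminate(configs, budget, rng):
    """Successive halving: score all survivors at the current fidelity,
    keep the top half, and double the per-configuration fidelity."""
    survivors = list(configs)
    fidelity = 1
    while len(survivors) > 1 and budget >= fidelity * len(survivors):
        scores = {c: estimate_welfare(c, fidelity, rng) for c in survivors}
        budget -= fidelity * len(survivors)  # samples spent this round
        survivors.sort(key=scores.__getitem__, reverse=True)
        survivors = survivors[: max(1, len(survivors) // 2)]
        fidelity *= 2  # spend more samples per surviving configuration
    return survivors[0]
```

The budget-dependent trade-off the paper reports is visible even in this toy: with a small budget the loop exits after one or two cheap, noisy rounds, which is where a model-based alternative that shares information across configurations can pull ahead.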