Policy-Aware Design of Large-Scale Factorial Experiments

arXiv stat.ML · April 13, 2026


Key Points

  • The paper addresses how digital platforms can run large-scale factorial online experiments for compositional product decisions under limited traffic, where decentralized A/B testing struggles with interaction effects.
  • It proposes a two-stage, centralized experimental design that samples intervention combinations, uses low-rank tensor completion to infer untested outcomes, and prunes weak factor levels via estimated marginal contributions.
  • In the second stage, it selects the best-performing policy using sequential halving over the surviving combinations rather than estimating every treatment effect.
  • The authors provide theoretical results, including simple-regret bounds and identification guarantees, showing complexity depends on low-rank degrees of freedom and factor separation structure rather than full factorial size.
  • Offline experiments on a product-bundling task built from 100M Taobao interactions show substantial gains over one-shot tensor completion and best-arm benchmarks, especially with low budgets and high noise.
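The stage-one ideas above can be illustrated with a toy sketch. The paper's actual estimator is not specified here, so this uses a rank-1 CP model fitted by alternating least squares on observed entries only, followed by pruning factor levels via their estimated marginal contributions; the function names (`complete_rank1`, `prune_levels`) and the rank-1 assumption are illustrative, not the authors' implementation.

```python
import numpy as np

def complete_rank1(observed, mask, n_iters=200):
    """Toy low-rank tensor completion: fit Y[i,j,k] ~ u[i]*v[j]*w[k]
    by alternating least squares over the observed entries of a
    3-way outcome tensor. `mask` is True where an entry was tested."""
    I, J, K = observed.shape
    rng = np.random.default_rng(0)
    u, v, w = rng.random(I) + 0.5, rng.random(J) + 0.5, rng.random(K) + 0.5
    for _ in range(n_iters):
        for i in range(I):                     # update u holding v, w fixed
            m = mask[i]
            c = np.outer(v, w)[m]
            u[i] = (observed[i][m] @ c) / (c @ c + 1e-12)
        for j in range(J):                     # update v holding u, w fixed
            m = mask[:, j]
            c = np.outer(u, w)[m]
            v[j] = (observed[:, j][m] @ c) / (c @ c + 1e-12)
        for k in range(K):                     # update w holding u, v fixed
            m = mask[:, :, k]
            c = np.outer(u, v)[m]
            w[k] = (observed[:, :, k][m] @ c) / (c @ c + 1e-12)
    return np.einsum('i,j,k->ijk', u, v, w)   # completed tensor estimate

def prune_levels(Y_hat, keep=2):
    """Rank each factor's levels by estimated marginal contribution
    (mean of the completed tensor over the other factors) and keep
    the top `keep` levels per factor."""
    surviving = []
    for axis in range(Y_hat.ndim):
        other = tuple(a for a in range(Y_hat.ndim) if a != axis)
        marginal = Y_hat.mean(axis=other)
        surviving.append(np.argsort(marginal)[::-1][:keep])
    return surviving
```

Pruning on marginal means is what makes the second stage cheap: the surviving grid of combinations is exponentially smaller than the full factorial.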

Abstract

Digital firms routinely run many online experiments on shared user populations. When product decisions are compositional, such as combinations of interface elements, flows, messages, or incentives, the number of feasible interventions grows combinatorially, while available traffic remains limited. Overlapping experiments can therefore generate interaction effects that are poorly handled by decentralized A/B testing. We study how to design large-scale factorial experiments when the objective is not to estimate every treatment effect, but to identify a high-performing policy under a fixed experimentation budget. We propose a two-stage design that centralizes overlapping experiments into a single factorial problem and models expected outcomes as a low-rank tensor. In the first stage, the platform samples a subset of intervention combinations, uses tensor completion to infer performance on untested combinations, and eliminates weak factor levels using estimated marginal contributions. In the second stage, it applies sequential halving to the surviving combinations to select a final policy. We establish gap-independent simple-regret bounds and gap-dependent identification guarantees showing that the relevant complexity scales with the degrees of freedom of the low-rank tensor and the separation structure across factor levels, rather than the full factorial size. In an offline evaluation based on a product-bundling problem constructed from 100 million Taobao interactions, the proposed method substantially outperforms one-shot tensor completion and unstructured best-arm benchmarks, especially in low-budget and high-noise settings. These results show how centralized, policy-aware experimentation can make combinatorial product design operationally feasible at platform scale.
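Stage two of the design is standard sequential halving over the combinations that survive pruning. A minimal generic sketch, assuming only a `pull(arm)` oracle that returns one noisy reward draw (the function name and budget-splitting details are illustrative, not taken from the paper):

```python
import math
import numpy as np

def sequential_halving(pull, n_arms, budget):
    """Best-arm identification by sequential halving: split the
    budget across ceil(log2(n_arms)) rounds, sample each surviving
    arm equally within a round, then discard the worse half."""
    arms = list(range(n_arms))
    rounds = math.ceil(math.log2(n_arms))
    for _ in range(rounds):
        per_arm = max(1, budget // (len(arms) * rounds))
        means = {a: np.mean([pull(a) for _ in range(per_arm)]) for a in arms}
        arms.sort(key=lambda a: means[a], reverse=True)  # best first
        arms = arms[: max(1, len(arms) // 2)]            # keep top half
    return arms[0]
```

Because later rounds spread the same per-round budget over fewer arms, the surviving candidates are sampled more heavily exactly when distinguishing them matters most; this is the mechanism behind the gap-dependent identification guarantees the abstract mentions.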