Abstract
We study the approximation of operators acting on probability measures on a product space with a prescribed first marginal. Let I be a label space endowed with a reference measure \lambda, and define \mathcal{M}_\lambda as the set of probability measures on I\times \mathbb{R}^d whose first marginal is \lambda. By disintegration, elements of \mathcal{M}_\lambda correspond to families of labeled conditional distributions. Operators defined on this constrained measure space arise naturally in mean-field control problems with heterogeneous, non-exchangeable agents. Our main theoretical result establishes a universal approximation theorem for continuous operators on \mathcal{M}_\lambda. The proof combines cylindrical approximations of probability measures with a DeepONet-type branch-trunk neural architecture, yielding finite-dimensional representations of such operators. We further introduce a sampling strategy for generating training measures in \mathcal{M}_\lambda, enabling practical learning of these conditional mean-field operators. We apply the method to the numerical resolution of mean-field control problems with heterogeneous interactions, thereby extending previous neural approaches developed for homogeneous (exchangeable) systems. Numerical experiments illustrate the accuracy and computational effectiveness of the proposed framework.