Improving RCT-Based Treatment Effect Estimation Under Covariate Mismatch via Calibrated Alignment

arXiv cs.LG / 3/20/2026

Key Points

  • A new method CALM (Calibrated ALignment under covariate Mismatch) is proposed to improve heterogeneous treatment effect estimation by combining RCTs with observational studies despite covariate mismatch.
  • It learns embeddings to map each data source's features into a common representation and calibrates observational outcome models in the RCT embedding space, preserving causal identification from randomization.
  • Finite-sample risk bounds decompose into alignment error, outcome-model complexity, and calibration complexity, clarifying when embedding-based alignment outperforms imputation.
  • The calibration-based linear variant offers protection against negative transfer, while the neural variant can be vulnerable under severe distributional shift; under sparse linear models, the embedding approach strictly generalizes imputation.
  • In simulations across 51 settings, the calibration-based variants perform equivalently for linear CATEs, while the neural-embedding variant wins all 22 nonlinear-regime settings by large margins.
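The pipeline in the key points can be sketched end to end under simplifying assumptions: linear embeddings, linear per-arm outcome models, and a toy data-generating process. Everything below (dimensions, the latent variable `Z`, helper names like `lstsq_fit`, `A_os`, `beta`) is illustrative and not taken from the paper; in particular, the sketch "cheats" by learning the alignment against a known shared representation, which CALM does not observe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the paper): a large OS and a small RCT observe
# different linear views of a shared 2-d latent representation Z.
n_os, n_rct = 5000, 300
Z_os = rng.normal(size=(n_os, 2))
X_os = Z_os @ np.array([[1.0, 0.3, 0.0], [0.0, 1.0, 0.5]])  # OS: 3 covariates
Z_rct = rng.normal(size=(n_rct, 2))
X_rct = Z_rct @ np.array([[1.0, 0.0], [0.4, 1.0]])          # RCT: 2 covariates

def mu(z, t):  # true outcome surface; CATE(z) = 1 + 0.8 * z[:, 0]
    return z[:, 0] + 0.5 * z[:, 1] + t * (1.0 + 0.8 * z[:, 0])

T_os = rng.integers(0, 2, n_os)
y_os = mu(Z_os, T_os) + rng.normal(scale=0.1, size=n_os)
T_rct = rng.integers(0, 2, n_rct)  # randomized by design in the trial
y_rct = mu(Z_rct, T_rct) + rng.normal(scale=0.1, size=n_rct)

def lstsq_fit(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# 1) Alignment: map each source's covariates into the common space.
#    (Cheating: we regress onto the known Z; CALM learns the alignment
#    without ever observing Z.)
A_os, A_rct = lstsq_fit(X_os, Z_os), lstsq_fit(X_rct, Z_rct)

# 2) Fit per-arm OS outcome models in the embedding space.
E_os = X_os @ A_os
beta = {t: lstsq_fit(E_os[T_os == t], y_os[T_os == t]) for t in (0, 1)}

# 3) Transfer to the RCT embedding, then linearly calibrate each arm on
#    trial outcomes -- randomization keeps this step unconfounded.
E_rct = X_rct @ A_rct
preds = {}
for t in (0, 1):
    raw = E_rct @ beta[t]
    m = T_rct == t
    a, b = lstsq_fit(np.c_[np.ones(m.sum()), raw[m]], y_rct[m])
    preds[t] = a + b * raw

cate_hat = preds[1] - preds[0]
```

Note that the treated arm's intercept is not representable by the intercept-free OS model, so it is the RCT calibration step that recovers it: a toy version of how calibrating against trial outcomes can correct a transferred model rather than inherit its bias.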

Abstract

Randomized controlled trials (RCTs) are the gold standard for estimating heterogeneous treatment effects, yet they are often underpowered for detecting effect heterogeneity. Large observational studies (OS) can supplement RCTs for conditional average treatment effect (CATE) estimation, but a key barrier is covariate mismatch: the two sources measure different, only partially overlapping, covariates. We propose CALM (Calibrated ALignment under covariate Mismatch), which bypasses imputation by learning embeddings that map each source's features into a common representation space. OS outcome models are transferred to the RCT embedding space and calibrated using trial data, preserving causal identification from randomization. Finite-sample risk bounds decompose into alignment error, outcome-model complexity, and calibration complexity terms, identifying when embedding alignment outperforms imputation. Under the calibration-based linear variant, the framework provides protection against negative transfer; the neural variant can be vulnerable under severe distributional shift. Under sparse linear models, the embedding approach strictly generalizes imputation. Simulations across 51 settings confirm that (i) calibration-based methods are equivalent for linear CATEs, and (ii) the neural embedding variant wins all 22 nonlinear-regime settings with large margins.
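The three-term decomposition mentioned above can be written schematically; the notation here (alignment error \(\varepsilon_{\mathrm{align}}\), complexity functional \(\mathrm{comp}(\cdot)\), model classes \(\mathcal{F}_{\mu}\) and \(\mathcal{G}_{\mathrm{cal}}\), sample sizes \(n_{\mathrm{OS}}\) and \(n_{\mathrm{RCT}}\)) is illustrative and does not reproduce the paper's constants or exact complexity measures:

```latex
% Schematic shape of the finite-sample risk bound (illustrative only):
\[
  \mathcal{R}(\hat{\tau})
  \;\lesssim\;
  \underbrace{\varepsilon_{\mathrm{align}}}_{\text{alignment error}}
  \;+\;
  \underbrace{\frac{\mathrm{comp}(\mathcal{F}_{\mu})}{\sqrt{n_{\mathrm{OS}}}}}_{\text{outcome-model complexity}}
  \;+\;
  \underbrace{\frac{\mathrm{comp}(\mathcal{G}_{\mathrm{cal}})}{\sqrt{n_{\mathrm{RCT}}}}}_{\text{calibration complexity}}
\]
```

Read this way, embedding alignment beats imputation whenever the alignment-error term it introduces is smaller than the error imputation would add to the outcome-model term.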