Abstract
Standard resampling ratios (e.g., \alpha \approx 0.632) have served as default baselines in ensemble learning for three decades. However, how these ratios interact with a base learner's intrinsic functional complexity in finite samples has lacked an exact mathematical characterization. We leverage the Hoeffding-ANOVA decomposition to derive the first exact, finite-sample variance decomposition for subagging, applicable to any symmetric base learner without asymptotic limits or smoothness assumptions. We establish that subagging acts as a deterministic low-pass spectral filter: it preserves low-order structural signal while attenuating c-th-order interaction variance by a geometric factor approaching \alpha^c. This decoupling of signal from noise reveals why default baselines often under-regularize high-capacity interpolators, which instead require a smaller \alpha to exponentially suppress spurious high-order noise. To operationalize these insights, we propose a complexity-guided adaptive subsampling algorithm and demonstrate empirically that dynamically calibrating \alpha to the learner's complexity spectrum consistently improves generalization over static baselines.
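
The \alpha^c rate can be illustrated by a standard Hoeffding-decomposition computation; the following is a sketch for the idealized case of complete subagging of a symmetric learner with subsample size m = \alpha n, not the paper's full finite-sample result, and the notation \sigma_c^2 (the variance of an order-c ANOVA component) is introduced here for illustration. A single learner trained on m points carries order-c variance \binom{m}{c}\sigma_c^2, whereas the average over all \binom{n}{m} subsamples carries \binom{m}{c}^2/\binom{n}{c}\,\sigma_c^2, so the per-order attenuation factor is
\[
\frac{\binom{m}{c}}{\binom{n}{c}} \;=\; \prod_{j=0}^{c-1} \frac{m-j}{n-j} \;\longrightarrow\; \alpha^{c} \qquad (m = \alpha n,\ n \to \infty,\ c \text{ fixed}),
\]
which is geometric in the interaction order c: low-order components pass through nearly unchanged while high-order interaction variance is suppressed exponentially in c.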