Approximating Uniform Random Rotations by Two-Block Structured Hadamard Rotations in High Dimensions

arXiv cs.LG / 4/28/2026


Key Points

  • The paper studies how well a “two-block” structured Hadamard rotation approximates a uniform random rotation in high dimensions, building on practical methods that combine Walsh–Hadamard transforms with random sign diagonals.
  • It proves a positive result: for any fixed coordinate, the two-block transform converges uniformly (over all inputs) to the corresponding coordinate of a truly uniformly rotated vector, with an explicit Kolmogorov-distance bound of order d^{-1/5}.
  • It also proves a negative result: the overall vector distribution produced by the two-block transform can remain measurably different from the uniform-rotation distribution, giving an explicit lower bound on the Wasserstein distance in the worst case.
  • For the specially constructed “extremal” input underlying the lower bound, the authors show a matching asymptotic upper bound, demonstrating that the discrepancy scaling is tight for that case.
  • Overall, the findings reveal a gap between improved one-dimensional marginal approximation and persistent errors in full high-dimensional geometry, explaining both the empirical usefulness and the limitations of structured Hadamard rotations as plug-in replacements.
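The two-block transform studied in the paper composes two structured blocks, each a Walsh–Hadamard transform preceded by an independent random sign diagonal. A minimal sketch of that construction (the function names and the in-code normalization are illustrative, not taken from the paper) looks like this:

```python
import numpy as np

def fwht(x):
    """Unnormalized fast Walsh-Hadamard transform; len(x) must be a power of two."""
    x = x.copy()
    d = len(x)
    h = 1
    while h < d:
        for i in range(0, d, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b          # butterfly: sum
            x[i + h:i + 2 * h] = a - b  # butterfly: difference
        h *= 2
    return x

def two_block_hadamard_rotation(x, rng):
    """Apply x -> (1/d) H D2 H D1 x, with H the unnormalized Hadamard matrix
    and D1, D2 independent random +/-1 diagonals. Since H H^T = d I, each
    block H D / sqrt(d) is orthogonal, so the composition preserves norms."""
    d = len(x)
    for _ in range(2):
        signs = rng.choice([-1.0, 1.0], size=d)
        x = fwht(signs * x) / np.sqrt(d)
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
y = two_block_hadamard_rotation(x, rng)
```

Because each block is orthogonal, `np.linalg.norm(y)` matches `np.linalg.norm(x)` up to floating-point rounding; the paper's question is how close the *distribution* of `y` comes to that of a uniformly rotated `x`.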

Abstract

Uniform random rotations are a useful primitive in applications such as fast Johnson-Lindenstrauss embeddings, kernel approximation, communication-efficient learning, and recent AI compression pipelines, but they are computationally expensive to generate and apply in high dimensions. A common practical replacement is repeated structured random rotations built from Walsh-Hadamard transforms and random sign diagonals. Applying the structured random rotation twice has been shown empirically to be useful, but the supporting theory is still limited. In this paper we study the approximation quality achieved when using this two-block structured Hadamard rotation. Our results are both positive and negative. On the positive side, we prove that every fixed coordinate of the two-block transform converges uniformly, over all inputs, to the corresponding coordinate of a uniformly rotated vector, with an explicit Kolmogorov-distance bound of order d^{-1/5}. On the negative side, we prove an explicit lower bound on the Wasserstein distance between the full vector distributions, showing that the two-block transform is not a globally accurate surrogate for a uniform random rotation in the worst case. For the extremal input used in the lower bound, we also prove a matching asymptotic upper bound, showing that the lower-bound scale is sharp for that input. Taken together, the results identify a clear separation between one-dimensional marginal behavior, where approximation improves with dimension, and full high-dimensional geometry, where a nonvanishing discrepancy remains. This provides a partial theoretical explanation for the empirical success of structured Hadamard rotations in some algorithms, while also clarifying the limitations of treating them as drop-in replacements for true uniform random rotations.
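The positive result in the abstract can be stated more explicitly. Writing $R$ for the two-block structured rotation, $U$ for a uniform (Haar) random rotation, and fixing a coordinate $i$, the claimed marginal convergence is a uniform Kolmogorov-distance bound of the form (the constant $C$ and its dependence are assumptions here, not stated in this summary):

$$
\sup_{\|x\|=1}\;\sup_{t\in\mathbb{R}}\;
\bigl|\,\Pr\bigl[(Rx)_i \le t\bigr] - \Pr\bigl[(Ux)_i \le t\bigr]\,\bigr|
\;\le\; C\, d^{-1/5},
$$

whereas the negative result says that the Wasserstein distance between the laws of the full vectors $Rx$ and $Ux$ admits an explicit worst-case lower bound that does not vanish at the same rate, which is exactly the marginal-versus-joint separation the paper identifies.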