AdaHOP: Fast and Accurate Low-Precision Training via Outlier-Pattern-Aware Rotation

arXiv cs.LG / 4/6/2026


Key Points

  • The paper finds that using a fixed Hadamard transform for low-precision training in LLMs is ineffective because outlier structures differ across weights, activations, and gradients and require different “smoothing directions.”
  • It classifies outlier patterns into three types (row-wise, column-wise, and none) and shows that each pairing of operand patterns calls for a tailored Hadamard direction or outlier-handling strategy to reduce quantization error; a minimal sketch of why direction matters follows this list.
  • AdaHOP is introduced to choose adaptively, per matrix multiplication, between the Inner Hadamard Transform (IHT) alone and IHT plus selective Outlier Extraction (OE), which routes dominant outliers through a higher-precision path (sketched after the abstract below).
  • With hardware-aware Triton kernels, the method reportedly achieves BF16 training quality at MXFP4 precision, while providing up to 3.6× memory compression and 1.8× kernel acceleration versus BF16 full-precision training.
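
To make the "smoothing direction" point concrete, here is a minimal numpy sketch, not taken from the paper: it builds a Sylvester Hadamard matrix, plants a row-wise outlier in a random matrix, and compares per-tensor quantization error when the rotation mixes rows (left multiply) versus columns (right multiply). The hypothetical `quantize_sim` uses a crude 16-level uniform grid as a stand-in for MXFP4.

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Sylvester construction; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)  # orthonormal, so the transform preserves norms

def quantize_sim(x: np.ndarray, levels: int = 16) -> np.ndarray:
    """Crude uniform per-tensor quantizer standing in for MXFP4."""
    scale = np.abs(x).max() / (levels / 2 - 1) + 1e-12
    return np.round(x / scale) * scale

rng = np.random.default_rng(0)
n = 256
X = rng.normal(size=(n, n))
X[7, :] *= 50.0                      # row-wise outlier pattern: one huge row

H = hadamard(n)
for name, Y in [("no rotation      ", X),
                ("left  (mix rows) ", H @ X),
                ("right (mix cols) ", X @ H)]:
    err = np.linalg.norm(Y - quantize_sim(Y)) / np.linalg.norm(Y)
    print(f"{name} rel. quant error = {err:.4f}")
```

Flipping the pattern to a column-wise one (`X[:, 7] *= 50.0`) reverses which side helps; when outliers are scattered with no row or column structure, neither direction tames the shared quantization scale, which is where the Outlier Extraction path comes in.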

Abstract

Low-precision training (LPT) commonly employs Hadamard transforms to suppress outliers and mitigate quantization error in large language models (LLMs). However, prior methods apply a fixed transform uniformly, despite substantial variation in outlier structures across tensors. Through the first systematic study of outlier patterns across weights, activations, and gradients of LLMs, we show that this strategy is fundamentally flawed: the effectiveness of Hadamard-based suppression depends on how the transform's smoothing direction aligns with the outlier structure of each operand -- a property that varies substantially across layers and computation paths. We characterize these patterns into three types: Row-wise, Column-wise, and None. Each pair requires a tailored transform direction or outlier handling strategy to minimize quantization error. Based on this insight, we propose AdaHOP (Adaptive Hadamard transform with Outlier-Pattern-aware strategy), which assigns each matrix multiplication its optimal strategy: Inner Hadamard Transform (IHT) where inner-dimension smoothing is effective, or IHT combined with selective Outlier Extraction (OE) -- routing dominant outliers to a high-precision path -- where it is not. Combined with hardware-aware Triton kernels, AdaHOP achieves BF16 training quality at MXFP4 precision while delivering up to 3.6× memory compression and 1.8× kernel acceleration over BF16 full-precision training.
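
The IHT + OE branch can be sketched in the same spirit. Below is a minimal, hypothetical numpy illustration (not the paper's Triton kernel) of selective Outlier Extraction: the few largest-magnitude entries are routed through a high-precision path, the dense residual is quantized, and the two matmul contributions are summed. AdaHOP additionally rotates the residual with IHT; the sketch omits that step to keep the split itself visible. `topk_mask`, `quantize_sim`, and the 1% budget are illustrative assumptions.

```python
import numpy as np

def topk_mask(x: np.ndarray, frac: float) -> np.ndarray:
    """Boolean mask over (approximately) the top `frac` of entries by magnitude."""
    k = max(1, int(frac * x.size))
    thresh = np.partition(np.abs(x).ravel(), -k)[-k]
    return np.abs(x) >= thresh

def quantize_sim(x: np.ndarray, levels: int = 16) -> np.ndarray:
    """Crude uniform per-tensor quantizer standing in for MXFP4."""
    scale = np.abs(x).max() / (levels / 2 - 1) + 1e-12
    return np.round(x / scale) * scale

rng = np.random.default_rng(1)
A = rng.normal(size=(128, 128))
idx = rng.choice(A.size, size=32, replace=False)
A.ravel()[idx] *= 80.0            # scattered dominant outliers (the "None" pattern)
B = rng.normal(size=(128, 128))
ref = A @ B

# Direct low-precision matmul: the outliers blow up the quantization scale.
err_base = np.linalg.norm(ref - quantize_sim(A) @ B) / np.linalg.norm(ref)

# OE: route the largest entries through a high-precision path, quantize the rest.
mask = topk_mask(A, frac=0.01)    # ~1% outlier budget, an illustrative choice
A_hi = np.where(mask, A, 0.0)     # sparse part, kept in high precision
A_lo = quantize_sim(np.where(mask, 0.0, A))
err_oe = np.linalg.norm(ref - (A_hi @ B + A_lo @ B)) / np.linalg.norm(ref)

print(f"direct quantization:     rel. error = {err_base:.4f}")
print(f"with outlier extraction: rel. error = {err_oe:.4f}")
```

Because the extracted set is tiny, the sparse high-precision matmul adds little cost while removing exactly the entries that would otherwise dictate the shared quantization scale.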
