HiFloat4 Format for Language Model Pre-training on Ascend NPUs

arXiv cs.AI / 4/13/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper studies HiFloat4, a 4-bit floating-point (FP4) format optimized for Huawei Ascend NPUs, for language model pre-training.
  • It compares HiFloat4 against MXFP4 in large-scale training runs where linear and expert GEMM operations are executed entirely in FP4 precision.
  • Experiments cover both dense model architectures (e.g., Pangu- and LLaMA-style) and mixture-of-experts (MoE) models, including expert-specific GEMMs.
  • The authors propose FP4-specific stabilization techniques that keep relative error within about 1% of full-precision baselines while retaining the efficiency gains of 4-bit compute.
  • Overall, the work provides an empirical view of the practical trade-offs between FP4 formats for NPU-based LLM training and highlights how to mitigate FP4 numerical degradation.
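To make the comparison concrete, the sketch below shows how MXFP4-style block quantization works, following the OCP Microscaling convention: each block of 32 values shares one power-of-two (E8M0) scale, and each scaled value is rounded to the nearest point on the signed E2M1 magnitude grid {0, 0.5, 1, 1.5, 2, 3, 4, 6}. This is a generic illustration of the MXFP4 baseline, not the paper's HiFloat4 format or its Ascend kernel implementation.

```python
import numpy as np

# Representable magnitudes of the E2M1 (FP4) element format:
# 1 sign bit, 2 exponent bits, 1 mantissa bit.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_mxfp4(x, block=32):
    """Simulate MXFP4-style block quantization of a 1-D array:
    one power-of-two (E8M0) scale per block of 32 values, with
    round-to-nearest onto the signed E2M1 grid. Returns the
    dequantized values for error analysis."""
    x = np.asarray(x, dtype=np.float64)
    pad = (-len(x)) % block
    blocks = np.pad(x, (0, pad)).reshape(-1, block)
    amax = np.abs(blocks).max(axis=1, keepdims=True)
    # OCP Microscaling scale rule: exponent = floor(log2(amax)) - emax_elem,
    # where emax_elem = 2 for E2M1 (largest magnitude 6 = 1.5 * 2^2).
    safe = np.where(amax > 0, amax, 1.0)
    scale = 2.0 ** (np.floor(np.log2(safe)) - 2)
    scaled = blocks / scale
    # Round each magnitude to the nearest grid point (values beyond 6
    # saturate to 6 automatically), then reattach the sign.
    idx = np.abs(np.abs(scaled)[..., None] - FP4_GRID).argmin(axis=-1)
    q = np.sign(scaled) * FP4_GRID[idx]
    return (q * scale).reshape(-1)[:len(x)]
```

Grid points times a power-of-two scale are exactly representable, so inputs already on the grid survive unchanged, while generic data incurs the block-level rounding error that the stabilization techniques in the paper aim to control.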

Abstract

Large foundation models have become central to modern machine learning, with performance scaling predictably with model size and data. However, training and deploying such models incur substantial computational and memory costs, motivating the development of low-precision training techniques. Recent work has demonstrated that 4-bit floating-point (FP4) formats, such as MXFP4 and NVFP4, can be successfully applied to linear GEMM operations in large language models (LLMs), achieving up to 4x improvements in compute throughput and memory efficiency compared to higher-precision baselines. In this work, we investigate the recently proposed HiFloat4 FP4 format for Huawei Ascend NPUs and systematically compare it with MXFP4 in large-scale training settings. All experiments are conducted on Ascend NPU clusters, with linear and expert GEMM operations performed entirely in FP4 precision. We evaluate both dense architectures (e.g., Pangu and LLaMA-style models) and mixture-of-experts (MoE) models, where both standard linear layers and expert-specific GEMMs operate in FP4. Furthermore, we explore stabilization techniques tailored to FP4 training that significantly reduce numerical degradation, maintaining relative error within 1% of full-precision baselines while preserving the efficiency benefits of 4-bit computation. Our results provide a comprehensive empirical study of FP4 training on NPUs and highlight the practical trade-offs between FP4 formats in large-scale dense and MoE models.
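The abstract does not detail which stabilization techniques the authors use, but one trick commonly applied in FP4 training work is stochastic rounding: rounding up or down with probability proportional to the distance to each neighbor, which makes the quantizer unbiased in expectation and keeps gradient noise from accumulating as systematic bias. The sketch below is a generic illustration of that idea on the E2M1 magnitude grid, not the paper's method; inputs are assumed pre-scaled into the grid's range.

```python
import numpy as np

# Positive magnitudes representable in E2M1 (FP4).
FP4_POS = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def stochastic_round_fp4(x, rng):
    """Stochastically round values (assumed pre-scaled so |x| <= 6)
    onto the E2M1 magnitude grid: round to the upper neighbor with
    probability proportional to proximity, so E[q] = x."""
    mag = np.clip(np.abs(x), 0.0, FP4_POS[-1])
    # Index of the upper grid neighbor for each magnitude.
    hi = np.clip(np.searchsorted(FP4_POS, mag), 1, len(FP4_POS) - 1)
    lo = hi - 1
    p_up = (mag - FP4_POS[lo]) / (FP4_POS[hi] - FP4_POS[lo])
    up = rng.random(mag.shape) < p_up
    q = np.where(up, FP4_POS[hi], FP4_POS[lo])
    return np.sign(x) * q
```

Averaged over many draws, the stochastic quantizer recovers the input value (e.g. 1.2 rounds to 1.0 sixty percent of the time and to 1.5 forty percent of the time), whereas deterministic round-to-nearest would always output 1.0 and introduce a persistent bias.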