MF-QAT: Multi-Format Quantization-Aware Training for Elastic Inference

arXiv cs.LG / 4/2/2026


Key Points

  • The paper proposes multi-format quantization-aware training (MF-QAT), training a single model to remain accurate across multiple numeric quantization formats rather than a single fixed precision.
  • Experiments indicate MF-QAT can achieve performance comparable to single-format QAT at each target precision, and it can generalize even to quantization formats not explicitly seen during training.
  • To support deployment without costly re-training, it introduces a Slice-and-Scale conversion procedure that transforms a high-precision anchor representation into lower-precision MXINT and MXFP formats.
  • The authors present a deployment pipeline: train once with MF-QAT, store a single anchor checkpoint (MXINT8/MXFP8), and convert on the fly to lower-precision formats at inference time with negligible or no additional accuracy loss.
  • Overall, the work enables “elastic” precision scaling at inference time so systems can select the runtime numeric format based on hardware or runtime constraints.
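The summary does not spell out how Slice-and-Scale works internally, but the general idea of MX-style formats and a bit-dropping down-conversion can be illustrated. The sketch below is an assumption-laden toy, not the paper's implementation: it quantizes a 32-element block to an MXINT8-like layout (signed integers plus one shared power-of-two scale), then "slices" to MXINT4 by rounding away the low mantissa bits and folding them into the shared exponent, so no re-training or access to the original float weights is needed.

```python
import numpy as np

BLOCK = 32  # MX-style formats group elements into blocks with one shared scale

def quantize_mxint(x, bits=8):
    """Quantize a block of floats to signed `bits`-bit integers plus a
    shared power-of-two exponent (an MXINT-like layout; illustrative only)."""
    qmax = 2 ** (bits - 1) - 1
    amax = float(np.max(np.abs(x))) or 1.0
    exp = int(np.ceil(np.log2(amax / qmax)))   # smallest power-of-two scale covering the block
    q = np.clip(np.round(x / 2.0 ** exp), -qmax - 1, qmax).astype(np.int32)
    return q, exp

def slice_and_scale(q, exp, from_bits=8, to_bits=4):
    """Hypothetical Slice-and-Scale-style conversion: round away the low
    (from_bits - to_bits) bits of each integer and fold them into the
    shared exponent. Operates purely on the quantized representation."""
    shift = from_bits - to_bits
    qmax = 2 ** (to_bits - 1) - 1
    q_lo = np.clip(np.round(q / 2 ** shift), -qmax - 1, qmax).astype(np.int32)
    return q_lo, exp + shift

def dequantize(q, exp):
    return q.astype(np.float64) * 2.0 ** exp

x = np.random.default_rng(0).normal(size=BLOCK)
q8, e8 = quantize_mxint(x, bits=8)
q4, e4 = slice_and_scale(q8, e8)   # MXINT8 -> MXINT4, no float pass needed
err8 = np.max(np.abs(dequantize(q8, e8) - x))
err4 = np.max(np.abs(dequantize(q4, e4) - x))
```

The down-converted block uses a coarser grid, so its error grows, but it stays bounded by the new step size; a runtime can therefore hold one anchor checkpoint and derive cheaper formats on demand.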

Abstract

Quantization-aware training (QAT) is typically performed for a single target numeric format, while practical deployments often need to choose numerical precision at inference time based on hardware support or runtime constraints. We study multi-format QAT, where a single model is trained to be robust across multiple quantization formats. We find that multi-format QAT can match single-format QAT at each target precision, yielding one model that performs well overall across different formats, even formats that were not seen during training. To enable practical deployment, we propose the Slice-and-Scale conversion procedure for both MXINT and MXFP that converts a high-precision representation into lower-precision formats without re-training. Building on this, we introduce a pipeline that (i) trains a model with multi-format QAT, (ii) stores a single anchor format checkpoint (MXINT8/MXFP8), and (iii) allows on-the-fly conversion to lower MXINT or MXFP formats at runtime with negligible or no additional accuracy degradation. Together, these components provide a practical path to elastic precision scaling and allow selecting the runtime format at inference time across diverse deployment targets.
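The elastic-precision selection in step (iii) can be sketched as a small runtime policy. Everything here is assumed for illustration: the backend names, the per-backend capability table, and the bit widths are hypothetical stand-ins, with the Slice-and-Scale conversion itself treated as a black box applied to the stored anchor.

```python
# Hypothetical runtime format selector for the train-once pipeline:
# one anchor checkpoint is stored, and the deployment format is chosen
# per target and converted on the fly. Names/tables are illustrative.
ANCHOR = "MXINT8"

# Formats each (hypothetical) backend supports, lowest precision first.
HARDWARE_FORMATS = {
    "edge-npu": ["MXINT4", "MXINT8"],
    "server-gpu": ["MXFP4", "MXFP8", "MXINT8"],
    "cpu": ["MXINT8"],
}

BITS = {"MXINT4": 4, "MXFP4": 4, "MXFP8": 8, "MXINT8": 8}

def pick_runtime_format(backend, budget_bits_per_weight):
    """Pick the lowest-precision supported format within the memory budget;
    otherwise fall back to serving the anchor checkpoint directly."""
    for fmt in HARDWARE_FORMATS[backend]:
        if BITS[fmt] <= budget_bits_per_weight:
            return fmt
    return ANCHOR
```

Because MF-QAT keeps the model accurate across all of these formats, the selector can be driven purely by hardware and memory constraints without an accuracy table per format.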