AI Navigate

Diagnosing FP4 inference: a layer-wise and block-wise sensitivity analysis of NVFP4 and MXFP4

arXiv cs.AI / 3/11/2026

Ideas & Deep Analysis · Models & Research

Key Points

  • This study analyzes the sensitivity of transformer layers to two four-bit floating point (FP4) quantization formats, MXFP4 and NVFP4, across three Qwen2.5 model sizes (0.5B, 7B, and 14B).
  • It identifies that MLP up- and down-projection layers are the most sensitive to FP4 quantization, while gate and attention projection layers exhibit moderate to low sensitivity.
  • Sensitivity to quantization can appear in early transformer blocks rather than being confined to the final blocks, with MXFP4 showing higher sensitivity in early layers than NVFP4.
  • The findings provide a detailed diagnostic picture of FP4 inference behavior, helping to optimize quantization strategies for large language models that improve efficiency without substantially compromising accuracy; a minimal sketch of the two formats follows this list.
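
To make the two formats concrete, the following NumPy sketch simulates block-wise FP4 quantize-dequantize. It is an illustration, not the authors' implementation: both formats share the FP4 E2M1 element grid, MXFP4 pairs 32-element blocks with a power-of-two (E8M0) shared scale, and NVFP4 pairs 16-element blocks with an FP8 (E4M3) shared scale, which this sketch approximates with a full-precision scale.

```python
import numpy as np

# Representable magnitudes of the FP4 E2M1 element type (1 sign bit,
# 2 exponent bits, 1 mantissa bit), shared by MXFP4 and NVFP4.
E2M1_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_block(block: np.ndarray, power_of_two_scale: bool) -> np.ndarray:
    """Fake-quantize one block of values with a single shared scale."""
    amax = np.abs(block).max()
    if amax == 0.0:
        return block.copy()
    if power_of_two_scale:
        # MXFP4-style E8M0 scale: the smallest power of two that maps
        # amax at or below the E2M1 maximum of 6.0 (a simplification).
        scale = 2.0 ** np.ceil(np.log2(amax / 6.0))
    else:
        # NVFP4-style scale, approximated in full precision here; the real
        # format stores it in FP8 E4M3 plus a per-tensor second-level scale.
        scale = amax / 6.0
    scaled = block / scale
    # Round each magnitude to the nearest representable E2M1 value.
    idx = np.argmin(np.abs(np.abs(scaled)[:, None] - E2M1_GRID[None, :]), axis=1)
    return np.sign(scaled) * E2M1_GRID[idx] * scale

def fake_quantize(x: np.ndarray, fmt: str) -> np.ndarray:
    """Quantize-dequantize a 1-D tensor in simulated MXFP4 or NVFP4."""
    block, pot = (32, True) if fmt == "mxfp4" else (16, False)
    out = x.astype(np.float64).copy()
    for start in range(0, out.size, block):
        out[start:start + block] = quantize_block(out[start:start + block], pot)
    return out

x = np.random.randn(64)
for fmt in ("mxfp4", "nvfp4"):
    err = np.abs(x - fake_quantize(x, fmt)).mean()
    print(f"{fmt}: mean absolute quantization error = {err:.4f}")
```

The smaller blocks and finer-grained scales let NVFP4 track local value ranges more tightly than MXFP4, which is consistent with the observation above that MXFP4 shows higher sensitivity in early layers.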


arXiv:2603.08747 (cs)
[Submitted on 5 Mar 2026]

Title: Diagnosing FP4 inference: a layer-wise and block-wise sensitivity analysis of NVFP4 and MXFP4

Authors: Musa Cim and 2 other authors
Abstract: Quantization addresses the high resource demands of large language models (LLMs) by alleviating memory pressure and bandwidth congestion while delivering substantially higher compute throughput at a tolerable cost in accuracy. Four-bit floating point (FP4), the lowest-precision format that preserves essential numerical properties such as an exponent and a sign, has begun to be adopted in cutting-edge architectures, including Blackwell and AMD CDNA, to support LLM quantization and reduce deployment costs. Although aggressive quantization can yield efficiency gains, the quantization sensitivity of individual layers within the transformer, and whether these sensitivities generalize across existing FP4 formats and model scales, remain underexplored. To elucidate quantization sensitivity, this study conducts a systematic analysis of two FP4 formats, MXFP4 and NVFP4, across three Qwen2.5 model scales (0.5B, 7B, and 14B), using controlled component-wise and block-wise isolation methodologies. We observe that MLP up- and down-projection layers consistently dominate in sensitivity, while gate projections are moderately less sensitive and attention projections are substantially less sensitive to FP4 quantization. We further find that sensitivity does not universally localize to the final blocks; early blocks can also be highly sensitive, particularly under MXFP4. Our results provide a diagnostic characterization of FP4 inference behavior across components, depths, and formats.
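
The component-wise isolation protocol described in the abstract can be sketched as: quantize only one projection type to simulated FP4, leave everything else in full precision, and compare the resulting loss against an unquantized baseline. The sketch below assumes a Hugging Face Qwen2.5 checkpoint and Qwen2-style module names (up_proj, down_proj, gate_proj, o_proj); it uses a simplified full-precision per-block scale rather than exact MXFP4/NVFP4 rounding, so it illustrates the methodology rather than reproducing the paper's code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def fake_quantize_fp4(w: torch.Tensor, block: int = 32) -> torch.Tensor:
    """Block-wise quantize-dequantize to the FP4 E2M1 grid.

    Simplified: a full-precision shared scale per block of `block` values,
    not the exact E8M0 (MXFP4) or E4M3 (NVFP4) scale encodings.
    """
    grid = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0],
                        dtype=w.dtype, device=w.device)
    flat = w.reshape(-1, block)  # assumes the row length divides by `block`
    scale = flat.abs().amax(dim=1, keepdim=True).clamp(min=1e-12) / 6.0
    scaled = flat / scale
    # Nearest representable E2M1 magnitude, sign restored afterwards.
    idx = (scaled.abs().unsqueeze(-1) - grid).abs().argmin(dim=-1)
    return (scaled.sign() * grid[idx] * scale).reshape(w.shape)

def quantize_component(model: torch.nn.Module, name_fragment: str) -> None:
    """Fake-quantize only Linear weights whose module path contains
    `name_fragment` (Qwen2-style names such as 'up_proj'; an assumption)."""
    with torch.no_grad():
        for name, module in model.named_modules():
            if name_fragment in name and isinstance(module, torch.nn.Linear):
                module.weight.copy_(fake_quantize_fp4(module.weight))

# Hypothetical usage: compare loss when only one layer type is quantized.
model_id = "Qwen/Qwen2.5-0.5B"
tok = AutoTokenizer.from_pretrained(model_id)
batch = tok("Quantization trades precision for efficiency. " * 8,
            return_tensors="pt")

for component in (None, "up_proj", "down_proj", "gate_proj", "o_proj"):
    model = AutoModelForCausalLM.from_pretrained(model_id,
                                                 torch_dtype=torch.float32)
    if component is not None:
        quantize_component(model, component)
    with torch.no_grad():
        loss = model(**batch, labels=batch["input_ids"]).loss
    print(f"{component or 'fp32 baseline'}: loss = {loss.item():.4f}")
```

The block-wise variant of the analysis follows the same loop structure, keyed on the transformer block index in the module path instead of the projection name; a full evaluation would measure perplexity over a held-out corpus rather than one synthetic sentence.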
Subjects: Hardware Architecture (cs.AR); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.08747 [cs.AR]
  (or arXiv:2603.08747v1 [cs.AR] for this version)
  https://doi.org/10.48550/arXiv.2603.08747

Submission history

From: Musa Cim
[v1] Thu, 5 Mar 2026 14:23:36 UTC (3,915 KB)