From Signal Degradation to Computation Collapse: Uncovering the Two Failure Modes of LLM Quantization
arXiv cs.CL / 4/23/2026
Key Points
- The paper analyzes why reducing LLM precision from 4-bit to 2-bit during post-training quantization (PTQ) often causes a sudden “performance cliff.”
- It identifies two qualitatively different PTQ failure modes: Signal Degradation, driven by cumulative information/precision loss while computation patterns still function, and Computation Collapse, where key components stop working and corrupt signals early in the network.
- The authors propose mechanism-aware interventions and show that training-free repair can mitigate Signal Degradation.
- However, the same kind of repair does not work for Computation Collapse, implying that this issue requires structural reconstruction rather than simple compensation.
- The work provides a systematic diagnostic framework to classify PTQ failures and choose appropriate mitigation strategies.
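The 4-bit-to-2-bit cliff is easy to see even with a toy quantizer. The sketch below (illustrative only, not the paper's method; `quantize_rtn` and the Gaussian weight matrix are invented for this example) applies symmetric round-to-nearest quantization at several bit widths: going from 4 bits to 2 bits collapses the grid from 15 representable levels to 3, and reconstruction error jumps disproportionately rather than scaling smoothly.

```python
import numpy as np

def quantize_rtn(w, bits):
    """Symmetric round-to-nearest quantization to `bits` bits (toy sketch)."""
    levels = 2 ** (bits - 1) - 1           # 127 for 8-bit, 7 for 4-bit, 1 for 2-bit
    scale = np.abs(w).max() / levels       # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -levels, levels)
    return q * scale                       # dequantized weights

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(256, 256))  # toy weight matrix

for bits in (8, 4, 2):
    err = np.abs(w - quantize_rtn(w, bits)).mean()
    print(f"{bits}-bit mean abs reconstruction error: {err:.5f}")
```

At 2 bits almost every weight snaps to zero (the nonzero levels sit at the tensor's absolute maximum), so the mean error approaches the mean magnitude of the weights themselves; per-layer accumulation of this kind of loss is what the paper labels Signal Degradation, whereas Computation Collapse involves components failing outright.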