Exploring Silent Data Corruption as a Reliability Challenge in LLM Training

arXiv cs.LG / 4/2/2026

💬 Opinion · Developer Stack & Infrastructure · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper highlights Silent Data Corruption (SDC) as a reliability risk in large-scale LLM training: hardware faults that evade system-level detection and manifest either as harmless numerical noise or as severe gradient distortion.
  • It presents a controlled fault-injection study at the GPU matrix-multiply instruction level, mapping how fault location, bit positions, kernel functions, and execution stages influence training outcomes.
  • The authors observe distinct “corruption signatures,” including NaN propagation, transient loss/gradient spikes, and persistent parameter divergence that can lead to stalled or divergent pretraining.
  • Based on these signatures, the paper proposes a lightweight detection approach to flag potentially harmful parameter updates.
  • Experiments on LLaMA variants (60M to 1.3B parameters) show that recomputing the most recent training step after detection can substantially mitigate SDC’s impact.
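
The paper does not publish its detector, but the idea of flagging "potentially harmful parameter updates" from corruption signatures such as NaNs and gradient-norm spikes can be sketched with a simple running-statistics check. Everything below (class name, window size, z-score threshold) is an illustrative assumption, not the authors' implementation:

```python
# Hypothetical sketch of a lightweight SDC detector: flag a step as
# suspicious when the gradient norm is NaN/Inf or jumps far beyond its
# recent running statistics. On a flag, the trainer would recompute the
# most recent step from cached parameters, as the paper's mitigation does.
import math


class GradNormSpikeDetector:
    def __init__(self, window=100, z_threshold=6.0):
        self.window = window          # how many recent norms to keep
        self.z_threshold = z_threshold  # illustrative spike threshold
        self.history = []

    def is_suspicious(self, grad_norm):
        # NaN/Inf is the clearest corruption signature: always flag.
        if math.isnan(grad_norm) or math.isinf(grad_norm):
            return True
        # Once enough history exists, flag large z-score outliers.
        if len(self.history) >= self.window // 2:
            mean = sum(self.history) / len(self.history)
            var = sum((g - mean) ** 2 for g in self.history) / len(self.history)
            std = math.sqrt(var)
            if std > 0 and (grad_norm - mean) / std > self.z_threshold:
                return True  # suspicious norms are not added to history
        self.history.append(grad_norm)
        self.history = self.history[-self.window:]
        return False
```

In a training loop, a flag would trigger restoring the pre-step parameter copy and re-running the step, on the assumption that an intermittent fault is unlikely to strike the same step twice.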

Abstract

As Large Language Models (LLMs) scale in size and complexity, the consequences of failures during training become increasingly severe. A major challenge arises from Silent Data Corruption (SDC): hardware-induced faults that bypass system-level detection mechanisms. SDC may behave like benign numerical noise, but can also cause harmful gradient corruption that leads to loss spikes, divergence, or stalled progress. This work provides a controlled study of how intermittent SDC affects LLM pretraining. Using targeted fault injection at the level of GPU matrix-multiply instructions, we characterize the sensitivity of different bit positions, kernel functions, and execution stages. Our analysis shows that locally originating faults can produce impactful corruption, including NaN propagation, short-lived spikes in loss, gradient norm, and attention logits, as well as persistent parameter divergence. Building on the observed corruption signatures, we propose a lightweight detection method that identifies potentially harmful parameter updates. Experiments on LLaMA models with 60M, 350M, and 1.3B parameters demonstrate that recomputing the most recent training step upon detection can effectively mitigate the impact of these events.