Hidden Reliability Risks in Large Language Models: Systematic Identification of Precision-Induced Output Disagreements

arXiv cs.LG · April 23, 2026


Key Points

  • The paper highlights that LLM behavior can subtly differ across numeric precisions (e.g., bfloat16/float16 vs. int16/int8), and these discrepancies are often missed by standard evaluations.
  • It introduces PrecisionDiff, an automated differential testing framework that generates precision-sensitive test inputs and compares outputs across precisions to find disagreements.
  • The authors demonstrate the approach on an alignment verification task, showing that precision-induced disagreements can manifest as jailbreak divergences: inputs that are rejected under one precision but yield harmful outputs under another.
  • Experiments find these cross-precision behavioral disagreements are widespread across multiple open-source aligned LLMs and precision settings, and PrecisionDiff detects them better than vanilla testing.
  • The framework is positioned as a tool for pre-deployment evaluation and for improving precision robustness during training.
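
The core differential-testing idea can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and real LLM testing would compare full model outputs rather than a single logit vector. The example shows how two logits that are distinct in float32 can collapse to a tie in float16 (whose spacing near 10 is about 0.0078), flipping the greedy decision.

```python
import numpy as np

def greedy_token(logits: np.ndarray, dtype) -> int:
    """Pick the argmax token after casting logits to the given precision."""
    return int(np.argmax(logits.astype(dtype)))

def precision_disagrees(logits: np.ndarray, hi=np.float32, lo=np.float16) -> bool:
    """Differential check: does greedy decoding differ between two precisions?"""
    return greedy_token(logits, hi) != greedy_token(logits, lo)

# Near-tied logits: the 1e-4 gap is far below float16 spacing at this
# magnitude, so both values round to 10.0 in float16 and the argmax flips
# to the first index, while float32 still resolves index 1 as the winner.
logits = np.array([10.0001, 10.0002, 0.0], dtype=np.float64)
print(precision_disagrees(logits))  # → True
```

A search procedure like PrecisionDiff can be viewed as automatically steering generated inputs toward such near-tied decision boundaries, where precision effects are most likely to surface.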

Abstract

Large language models (LLMs) are increasingly deployed under diverse numerical precision configurations, including standard floating-point formats (e.g., bfloat16 and float16) and quantized integer formats (e.g., int16 and int8), to meet efficiency and resource constraints. However, minor inconsistencies between LLMs of different precisions are difficult to detect and are often overlooked by existing evaluation methods. In this paper, we present PrecisionDiff, an automated differential testing framework for systematically detecting precision-induced behavioral disagreements in LLMs. PrecisionDiff generates precision-sensitive test inputs and performs cross-precision comparative analysis to uncover subtle divergences that remain hidden under conventional testing strategies. To demonstrate its practical significance, we instantiate PrecisionDiff on the alignment verification task, where precision-induced disagreements manifest as jailbreak divergences: inputs that are rejected under one precision may produce harmful responses under another. Experimental results show that such behavioral disagreements are widespread across multiple open-source aligned LLMs and precision settings, and that PrecisionDiff significantly outperforms vanilla testing methods in detecting these issues. Our work enables automated precision-sensitive test generation, facilitating effective pre-deployment evaluation and improving precision robustness during training.