SafeMath: Inference-time Safety improves Math Accuracy

arXiv cs.CL / March 27, 2026


Key Points

  • The paper examines how LLMs can be manipulated via harmful or biased mathematical word problems, especially narrative-style prompts used in educational contexts involving children.
  • It introduces ToxicGSM, a dataset of 1.9k arithmetic problems designed to embed sensitive or harmful context while keeping the underlying math reasoning well-defined.
  • The authors audit existing LLM behavior on ToxicGSM and analyze the trade-off between enforcing safety and preserving mathematical correctness.
  • They propose SafeMath, an inference-time safety alignment technique that reduces harmful outputs while maintaining—and sometimes improving—math reasoning performance by disentangling linguistic harm from math reasoning.
  • The dataset and source code are released publicly to support further systematic research on safety and accuracy in mathematical tasks.
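The core idea of inference-time safety gating can be illustrated with a minimal sketch. This is not the paper's actual SafeMath implementation; the lexicon, the `neutralize` rewriting step, and the `toy_solver` stand-in for an LLM are all hypothetical, chosen only to show how harmful narrative context might be separated from the underlying arithmetic before the model answers.

```python
import re

# Toy lexicon standing in for a learned safety classifier (illustration only).
HARMFUL_TERMS = {"weapon", "poison", "drug"}

def flag_harmful(problem: str) -> bool:
    """Crude lexical check: does the word problem contain flagged terms?"""
    words = set(re.findall(r"[a-z]+", problem.lower()))
    return bool(words & HARMFUL_TERMS)

def neutralize(problem: str) -> str:
    """Replace flagged terms with a neutral placeholder, keeping the
    numbers and structure of the arithmetic task intact."""
    pattern = re.compile("|".join(HARMFUL_TERMS), re.IGNORECASE)
    return pattern.sub("item", problem)

def safe_solve(problem: str, solver) -> str:
    """Gate the solver at inference time: sanitize harmful narrative
    context first, then run the (unchanged) math reasoning step."""
    if flag_harmful(problem):
        problem = neutralize(problem)
    return solver(problem)

def toy_solver(problem: str) -> str:
    """Stand-in for an LLM solver: sums all integers in the text."""
    numbers = [int(n) for n in re.findall(r"\d+", problem)]
    return f"answer: {sum(numbers)}"

q = "A dealer has 3 poison vials and buys 4 more. How many in total?"
print(safe_solve(q, toy_solver))  # → answer: 7
```

The point of the sketch is the ordering: the safety transformation acts only on the narrative surface, so the numeric content the solver depends on is untouched, which is one way safety enforcement can avoid degrading mathematical correctness.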

Abstract

Recent research points toward LLMs being manipulated through adversarial and seemingly benign inputs, resulting in harmful, biased, or policy-violating outputs. In this paper, we study an underexplored issue concerning harmful and toxic mathematical word problems. We show that math questions, particularly those framed as natural language narratives, can serve as a subtle medium for propagating biased, unethical, or psychologically harmful content, with heightened risks in educational settings involving children. To support a systematic study of this phenomenon, we introduce ToxicGSM, a dataset of 1.9k arithmetic problems in which harmful or sensitive context is embedded while preserving mathematically well-defined reasoning tasks. Using this dataset, we audit the behaviour of existing LLMs and analyse the trade-offs between safety enforcement and mathematical correctness. We further propose SafeMath -- a safety alignment technique that reduces harmful outputs while maintaining, and in some cases improving, mathematical reasoning performance. Our results highlight the importance of disentangling linguistic harm from math reasoning and demonstrate that effective safety alignment need not come at the cost of accuracy. We release the source code and dataset at https://github.com/Swagnick99/SafeMath/tree/main.