Detecting and Suppressing Reward Hacking with Gradient Fingerprints

arXiv cs.LG · April 20, 2026


Key Points

  • The paper addresses reward hacking in reinforcement learning with verifiable rewards (RLVR), where models can exploit loopholes in outcome-only reward functions without genuinely solving the intended task.
  • It proposes GRIFT (Gradient Fingerprint), which detects reward hacking by computing and compressing gradients of a model’s chain-of-thought (CoT) with respect to the prompt.
  • The method uses the resulting gradient representation to decide whether a given CoT trace likely reflects reward-hacking behavior, overcoming limitations of surface-level, text-only monitoring.
  • Experiments on verifiable reasoning benchmarks (math, code, and logical reasoning) show GRIFT outperforms prior approaches such as CoT Monitor and TRACE, with more than a 25% relative improvement in detection.
  • When integrated into rejection fine-tuning for reasoning tasks, GRIFT both reduces reward hacking and improves performance on the true objective.
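To make the pipeline in the key points concrete, here is a minimal toy sketch of the core idea: differentiate a model's CoT log-likelihood with respect to the prompt representation, then compress the gradient into a compact fingerprint. Everything here is illustrative. The toy softmax "model", the finite-difference gradient, and the mean-pooling compression are stand-ins; the paper does not specify these implementation details in this summary.

```python
import numpy as np

def toy_cot_loglik(prompt_emb, cot_ids, W):
    """Toy stand-in for a language model's log-likelihood of a CoT.

    prompt_emb: (d,) prompt embedding; W: (vocab, d) output weights.
    Scores every CoT token against one softmax over the vocabulary.
    """
    logits = W @ prompt_emb                           # (vocab,)
    logp = logits - np.log(np.sum(np.exp(logits)))    # log-softmax
    return float(np.sum(logp[cot_ids]))

def gradient_fingerprint(prompt_emb, cot_ids, W, pool=8, eps=1e-5):
    """Finite-difference gradient of the CoT log-likelihood w.r.t. the
    prompt embedding, compressed by mean-pooling into `pool` buckets.

    This mirrors GRIFT only in spirit: which gradients are taken and
    how they are compressed in the actual method are assumptions here.
    """
    d = prompt_emb.shape[0]
    base = toy_cot_loglik(prompt_emb, cot_ids, W)
    grad = np.zeros(d)
    for i in range(d):
        bumped = prompt_emb.copy()
        bumped[i] += eps
        grad[i] = (toy_cot_loglik(bumped, cot_ids, W) - base) / eps
    # Compress: mean-pool contiguous chunks into a fixed-size fingerprint.
    chunks = np.array_split(grad, pool)
    return np.array([c.mean() for c in chunks])

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 32))          # toy vocab of 16, dim 32
prompt = rng.standard_normal(32)
fp = gradient_fingerprint(prompt, [1, 5, 9], W)
print(fp.shape)                            # compact (8,) fingerprint
```

In a real system the gradient would come from autodiff through the model rather than finite differences; the fingerprint would then be fed to a downstream detector.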

Abstract

Reinforcement learning with verifiable rewards (RLVR) typically optimizes for outcome rewards without imposing constraints on intermediate reasoning. This leaves training susceptible to reward hacking, where models exploit loopholes (e.g., spurious patterns in training data) in the reward function to achieve high scores without solving the intended task. These reward-hacking behaviors are often implicit, as the intermediate chain-of-thought (CoT) may appear plausible on the surface, limiting the effectiveness of purely text-based monitoring. We propose Gradient Fingerprint (GRIFT), a method for detecting reward hacking using models' internal computations. Given a prompt and a model-generated CoT, GRIFT computes gradients of the CoT conditioned on the prompt and compresses them into a compact representation, which is then used to assess whether the CoT reflects reward-hacking behavior. Across verifiable reasoning benchmarks spanning math, code, and logical reasoning, GRIFT substantially outperforms strong baselines, including CoT Monitor and TRACE, achieving over 25% relative improvement in detecting reward-hacking behavior. Moreover, integrating GRIFT into the rejection fine-tuning pipeline for reasoning tasks reduces reward hacking and improves performance on the true task objective. Our results highlight a promising direction of leveraging gradient-level representations for assessing the quality of CoT reasoning traces. Our code is available at: https://github.com/songtao-x/reward_hack.
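The abstract's last application, plugging the detector into rejection fine-tuning, amounts to filtering candidate traces by a hacking score before they enter the fine-tuning set. A minimal sketch under stated assumptions: `detector_w` is a hypothetical linear detector over fingerprints (the paper's actual detector is not described in this summary), and the score is a sigmoid of the dot product.

```python
import numpy as np

def filter_traces(traces, fingerprints, detector_w, threshold=0.5):
    """Keep only traces whose hacking score falls below `threshold`.

    `detector_w` is a hypothetical linear detector over gradient
    fingerprints; score = sigmoid(detector_w · fingerprint). Surviving
    traces would then feed the rejection fine-tuning stage.
    """
    kept = []
    for trace, fp in zip(traces, fingerprints):
        score = 1.0 / (1.0 + np.exp(-float(np.dot(detector_w, fp))))
        if score < threshold:       # low score = unlikely to be hacked
            kept.append(trace)
    return kept

# Toy usage: one clean-looking fingerprint, one hacking-like one.
detector_w = np.array([1.0, 0.0])
fps = [np.array([-2.0, 0.0]), np.array([2.0, 0.0])]
print(filter_traces(["clean trace", "hacked trace"], fps, detector_w))
# → ['clean trace']
```

The design choice worth noting is that the filter operates on internal (gradient-level) evidence rather than on the CoT text itself, which is what lets it catch traces that read plausibly on the surface.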