DPrivBench: Benchmarking LLMs' Reasoning for Differential Privacy

arXiv cs.LG / 4/20/2026


Key Points

  • The paper proposes using large language models (LLMs) to automate the expert-level reasoning required to design and verify differential privacy (DP) algorithms, lowering the barrier for non-expert practitioners.
  • It introduces DPrivBench, a new benchmark where each task asks whether a function/algorithm satisfies a specified DP guarantee under given assumptions, with coverage across many DP topics and difficulty levels.
  • The benchmark is designed to prevent “shortcut” answers via trivial pattern matching, aiming to test genuine DP reasoning.
  • Experimental results indicate that even the strongest current models perform well on textbook DP mechanisms, but struggle substantially with advanced DP algorithms, exposing large gaps in current automated DP reasoning.
  • The authors conduct analytic and failure-mode studies and outline directions to improve automated DP reasoning, positioning DPrivBench as a foundation for future methods and evaluation.
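To make the task format concrete: a benchmark instance of the kind described above pairs a mechanism with a claimed guarantee and asks whether the claim holds. The sketch below is purely illustrative (the function names and the specific task are not from the paper): it implements the classic Laplace mechanism for a counting query and numerically checks the textbook argument that, with noise scale b = sensitivity / ε, the log-density ratio between outputs on adjacent databases never exceeds ε.

```python
import math

def count_query(db):
    """Counting query: number of records matching a predicate; sensitivity is 1."""
    return sum(1 for record in db if record)

def laplace_logpdf(x, mu, b):
    """Log-density of the Laplace(mu, b) distribution."""
    return -math.log(2 * b) - abs(x - mu) / b

def max_privacy_loss(mu1, mu2, b, grid):
    """Largest absolute log-density ratio over a grid of outputs.

    For the Laplace mechanism on adjacent databases (counts mu1 and mu2),
    this should never exceed |mu1 - mu2| / b, the analytic epsilon bound.
    """
    return max(
        abs(laplace_logpdf(x, mu1, b) - laplace_logpdf(x, mu2, b))
        for x in grid
    )

# Claimed guarantee: epsilon-DP with epsilon = 0.5.
eps = 0.5
b = 1.0 / eps  # scale = sensitivity / epsilon; a count has sensitivity 1

# Adjacent databases differ in one record, so their counts differ by 1.
grid = [i * 0.01 for i in range(-1000, 1000)]
loss = max_privacy_loss(3.0, 4.0, b, grid)
print(f"empirical max privacy loss: {loss:.4f} (bound: {eps})")
```

A benchmark answer of "yes, ε-DP holds" for this instance follows from the analytic bound the check mirrors; an advanced instance would instead involve, e.g., composition or subsampling, where such a one-line argument no longer suffices.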

Abstract

Differential privacy (DP) has a wide range of applications for protecting data privacy, but designing and verifying DP algorithms requires expert-level reasoning, creating a high barrier for non-expert practitioners. Prior works either rely on specialized verification languages that demand substantial domain expertise or remain semi-automated and require human-in-the-loop guidance. In this work, we investigate whether large language models (LLMs) can automate DP reasoning. We introduce DPrivBench, a benchmark in which each instance asks whether a function or algorithm satisfies a stated DP guarantee under specified assumptions. The benchmark is carefully designed to cover a broad range of DP topics, span diverse difficulty levels, and resist shortcut reasoning through trivial pattern matching. Experiments show that while the strongest models handle textbook mechanisms well, all models struggle with advanced algorithms, revealing substantial gaps in current DP reasoning capabilities. Through further analytic study and failure-mode analysis, we identify several promising directions for improving automated DP reasoning. Our benchmark provides a solid foundation for developing and evaluating such methods, and complements existing benchmarks for mathematical reasoning.