DeonticBench: A Benchmark for Reasoning over Rules

arXiv cs.CL / 4/7/2026


Key Points

  • The paper introduces DeonticBench, a new benchmark targeting deontic reasoning for LLMs—reasoning about obligations, permissions, and prohibitions from explicit rules in long-context, high-stakes domains.
  • DeonticBench contains 6,232 tasks spanning U.S. federal taxes, airline baggage policies, U.S. immigration administration, and state housing law, with options for both natural-language reasoning and solver-assisted workflows.
  • The benchmark supports an optional symbolic pipeline where models translate statutes and case facts into executable Prolog, producing formal interpretations and an explicit program trace; reference Prolog programs are released for all instances.
  • Results show that even frontier LLMs and coding models top out on the hard subsets at 44.4% accuracy (SARA Numeric) and 46.6 macro-F1 (Housing), indicating significant room for improvement in rule-grounded reasoning.
  • The authors study supervised fine-tuning and reinforcement learning for symbolic program generation, finding that training improves Prolog generation quality but current RL approaches still do not solve tasks reliably.
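To make the solver-assisted workflow concrete, here is a minimal sketch of what a statute-to-Prolog translation might look like. The policy clause, predicate names, and weight limit below are invented for illustration and are not drawn from DeonticBench itself: a baggage-policy rule becomes a Prolog clause, the case facts become ground atoms, and querying the program yields a verdict backed by an explicit derivation trace.

```prolog
% Hypothetical baggage-policy rule: a carry-on is permitted
% if its weight does not exceed the airline's limit,
% and prohibited otherwise. (Illustrative values only.)
carry_on_limit_kg(10).

permitted(carry_on(Passenger)) :-
    bag_weight_kg(Passenger, W),
    carry_on_limit_kg(Limit),
    W =< Limit.

prohibited(carry_on(Passenger)) :-
    bag_weight_kg(Passenger, W),
    carry_on_limit_kg(Limit),
    W > Limit.

% Case facts for a hypothetical passenger.
bag_weight_kg(alice, 8).

% Query: ?- permitted(carry_on(alice)).
```

In the benchmark's pipeline, the model would generate both the rules and the facts from the statute text and case description; the released reference programs play the role of the hand-written clauses above.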

Abstract

Reasoning with complex, context-specific rules remains challenging for large language models (LLMs). In legal and policy settings, this manifests as deontic reasoning: reasoning about obligations, permissions, and prohibitions under explicit rules. While many recent benchmarks emphasize short-context mathematical reasoning, fewer focus on long-context, high-stakes deontic reasoning. To address this gap, we introduce DEONTICBENCH, a benchmark of 6,232 tasks across U.S. federal taxes, airline baggage policies, U.S. immigration administration, and U.S. state housing law. These tasks can be approached in multiple ways, including direct reasoning in language or with the aid of symbolic computation. Besides free-form chain-of-thought reasoning, DEONTICBENCH enables an optional solver-based workflow in which models translate statutes and case facts into executable Prolog, leading to formal problem interpretations and an explicit program trace. We release reference Prolog programs for all instances. Across frontier LLMs and coding models, best hard-subset performance reaches only 44.4% on SARA Numeric and 46.6 macro-F1 on Housing. We further study training with supervised fine-tuning and reinforcement learning for symbolic program generation. Although training improves Prolog generation quality, current RL methods still fail to solve these tasks reliably. Overall, DEONTICBENCH provides a benchmark for studying context-grounded rule reasoning in real-world domains under both symbolic and non-symbolic settings.