Beyond Case Law: Evaluating Structure-Aware Retrieval and Safety in Statute-Centric Legal QA

arXiv cs.AI / 4/10/2026


Key Points

  • The paper argues that existing Legal QA benchmarks largely target case law, missing key difficulties of statute-centric regulatory reasoning where evidence is scattered across hierarchical documents.
  • It introduces SearchFireSafety, a new benchmark designed to test both structure-aware retrieval (graph/hierarchy guided) and safety behaviors like citation-aware abstention when context is insufficient.
  • The benchmark uses a dual-source evaluation approach, pairing real-world questions that require citations with synthetic partial-context cases designed to measure hallucination and refusal behavior.
  • Experiments on multiple large language models indicate that graph-guided retrieval improves performance, but also exposes a safety trade-off: domain-adapted models may hallucinate more when crucial statutory evidence is missing.
  • The work concludes that future benchmarks should jointly assess hierarchical retrieval quality and model safety for statute-centric legal QA scenarios.

Abstract

Legal QA benchmarks have predominantly focused on case law, overlooking the unique challenges of statute-centric regulatory reasoning. In statutory domains, relevant evidence is distributed across hierarchically linked documents, creating a statutory retrieval gap where conventional retrievers fail and models often hallucinate under incomplete context. We introduce SearchFireSafety, a structure- and safety-aware benchmark for statute-centric legal QA. Instantiated on fire-safety regulations as a representative case, the benchmark evaluates whether models can retrieve hierarchically fragmented evidence and safely abstain when statutory context is insufficient. SearchFireSafety adopts a dual-source evaluation framework combining real-world questions that require citation-aware retrieval and synthetic partial-context scenarios that stress-test hallucination and refusal behavior. Experiments across multiple large language models show that graph-guided retrieval substantially improves performance, but also reveal a critical safety trade-off: domain-adapted models are more likely to hallucinate when key statutory evidence is missing. Our findings highlight the need for benchmarks that jointly evaluate hierarchical retrieval and model safety in statute-centric regulatory settings.
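The abstention behavior the abstract stress-tests can be sketched as a simple baseline rule. This is an assumption-laden illustration, not the paper's policy: the regex pattern, the `should_abstain` name, and the containment check are all hypothetical. The idea is that when a question cites specific statutory provisions that the retrieved context does not actually contain, the safe behavior is to refuse rather than answer.

```python
import re

# Illustrative citation-aware abstention baseline (not from the paper):
# abstain whenever a provision cited in the question is absent from the
# retrieved context.

CITATION = re.compile(r"(?:Article|Section|§)\s*[\w.\-]+", re.IGNORECASE)

def should_abstain(question: str, retrieved_texts: list[str]) -> bool:
    """True if any provision cited in the question is missing from context."""
    cited = {c.lower() for c in CITATION.findall(question)}
    context = " ".join(retrieved_texts).lower()
    return any(c not in context for c in cited)
```

A model that answers anyway in the `True` case is, by this rule of thumb, hallucinating under incomplete context, which is exactly the failure mode the synthetic partial-context scenarios are built to expose.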