LePREC: Reasoning as Classification over Structured Factors for Assessing Relevance of Legal Issues

arXiv cs.CL / 4/22/2026

📰 News · Models & Research

Key Points

  • The study introduces LePREC (Legal Professional-inspired Reasoning Elicitation and Classification), a neuro-symbolic framework that frames legal issue identification as a relevance assessment problem in order to improve on LLM-only approaches.
  • Using a dataset built from 769 real Malaysian Contract Act court cases (with GPT-4o for fact extraction and candidate issue generation, followed by expert annotation), the authors find that GPT-4o's issue candidates reach only 62% precision—highlighting a key bottleneck in legal issue identification.
  • LePREC pairs an LLM-based neural component, which converts legal text into question–answer pairs representing analytical factors, with a symbolic component that fits sparse linear models to learn interpretable, explicit algebraic feature weights.
  • Experiments report a 30–40% improvement over strong LLM baselines (including GPT-4o and Claude), suggesting that factor-to-issue correlation-based analysis can be more data-efficient for deciding legal issue relevance.

Abstract

More than half of the global population struggles to meet their civil justice needs due to limited legal resources. While Large Language Models (LLMs) have demonstrated impressive reasoning capabilities, significant challenges remain even at the foundational step of legal issue identification. To investigate LLMs' capabilities on this task, we constructed a dataset from 769 real-world Malaysian Contract Act court cases, using GPT-4o to extract facts and generate candidate legal issues, which were then annotated by senior legal experts. The dataset reveals a critical limitation: while LLMs generate diverse issue candidates, their precision remains inadequate (GPT-4o achieves only 62%). To address this gap, we propose LePREC (Legal Professional-inspired Reasoning Elicitation and Classification), a neuro-symbolic framework combining neural generation with structured statistical reasoning. LePREC consists of (1) a neural component that leverages LLMs to transform legal descriptions into question-answer pairs representing diverse analytical factors, and (2) a symbolic component that applies sparse linear models over these discrete features, learning explicit algebraic weights that identify the most informative reasoning factors. Unlike end-to-end neural approaches, LePREC achieves interpretability through transparent feature weighting while maintaining data efficiency through correlation-based statistical classification. Experiments show a 30-40% improvement over advanced LLM baselines, including GPT-4o and Claude, confirming that correlation-based factor-issue analysis offers a more data-efficient solution for relevance decisions.
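To make the symbolic component concrete, the kind of correlation-based factor weighting the abstract describes can be sketched roughly as follows. This is purely an illustration, not the paper's code: the toy data, the phi-correlation weighting, the threshold `tau`, and the weighted-vote rule are all assumptions standing in for whatever sparse linear model the authors actually use.

```python
from math import sqrt

# Toy illustration (data invented, not from the paper): each row is a case,
# each column is the yes/no answer to one analytical-factor question
# (e.g. "Was consideration present?"), and y[i] marks whether a candidate
# legal issue was judged relevant for case i.
X = [
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 0],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
]
y = [1, 1, 0, 1, 0, 1]

def phi_correlation(col, labels):
    """Pearson (phi) correlation between one binary factor column and the label."""
    n = len(labels)
    mx, my = sum(col) / n, sum(labels) / n
    num = sum((a - mx) * (b - my) for a, b in zip(col, labels))
    den = sqrt(sum((a - mx) ** 2 for a in col) *
               sum((b - my) ** 2 for b in labels))
    return num / den if den else 0.0

# One explicit, inspectable weight per factor: the factor's correlation
# with issue relevance across the annotated cases.
w = [phi_correlation(col, y) for col in zip(*X)]

def predict(x, weights, tau=0.2):
    """Sparse weighted vote: only factors with |weight| > tau contribute."""
    score = sum(wi * xi for wi, xi in zip(weights, x) if abs(wi) > tau)
    return int(score > 0)
```

The appeal over an end-to-end neural classifier is that `w` is directly readable: a legal expert can see which factor questions drive a relevance decision, and the thresholding makes the effective model sparse.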