Expert Evaluation of LLMs' Open-Ended Legal Reasoning on the Japanese Bar Exam Writing Task

arXiv cs.AI / 4/28/2026


Key Points

  • The paper evaluates how well LLMs perform open-ended Japanese legal reasoning, focusing on bar-exam-style writing rather than only multiple-choice benchmarks.
  • It introduces what is described as the first dataset of this kind for the Japanese jurisdiction, built from the writing component of the Japanese Bar Examination, which requires extracting legal issues from long narratives and composing structured free-text arguments.
  • Legal experts manually evaluate the LLM-generated responses, identifying limitations and challenges in the models’ reasoning capabilities.
  • The study also performs manual hallucination analysis to determine when and how models introduce content that is not supported by statutes or legal precedents.
  • The authors report that, taken together, the real exam questions, model outputs, and expert evaluations map the current milestones for LLMs in Japanese legal tasks, with the dataset and related resources planned for online release; a possible record layout is sketched after this list.
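
Since the release pairs each real exam question with model-generated responses and expert evaluations, one plausible way to picture a dataset record is the minimal sketch below. Every field name and type here is an assumption for illustration only, not the paper's actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class ExamRecord:
    """Hypothetical record pairing one bar-exam writing question
    with a model response and its expert evaluation."""
    question_id: str           # e.g. exam year plus subject identifier (assumed)
    narrative: str             # long fact pattern presented to examinees
    legal_issues: list[str]    # issues the examinee is expected to extract
    model_name: str            # LLM that produced the response
    response: str              # free-text legal argument generated by the model
    expert_score: float        # expert-assigned quality score (assumed scale)
    expert_comments: str       # qualitative notes on reasoning quality
    hallucinations: list[str] = field(default_factory=list)  # claims flagged as unsupported by statute or precedent


# Minimal usage example with placeholder values.
record = ExamRecord(
    question_id="2023-civil-01",
    narrative="A long narrative describing the facts of the case...",
    legal_issues=["formation of contract", "liability for non-performance"],
    model_name="some-llm",
    response="The model's structured free-text argument...",
    expert_score=3.5,
    expert_comments="Identifies the main issue but cites an inapplicable article.",
)
```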

Abstract

Large language models (LLMs) have shown strong performance on legal benchmarks, including the multiple-choice components of bar exams. However, their capacity for open-ended legal reasoning in realistic scenarios remains insufficiently explored. Notably, to the best of our knowledge, no prior studies or datasets address this issue in the Japanese context. This study presents the first dataset designed to evaluate the open-ended legal reasoning performance of LLMs within the Japanese jurisdiction. The dataset is based on the writing component of the Japanese bar examination, which requires examinees to identify multiple legal issues in long narratives and to construct structured legal arguments in free-text form. Our key contribution is a manual evaluation by legal experts of the responses generated by LLMs, which reveals limitations and challenges in their legal reasoning. Moreover, we conducted a manual analysis of hallucinations to characterize when and how the models introduce content not supported by precedent or law. Together, the real exam questions, model-generated responses, and expert evaluations chart the current milestones of LLMs in the Japanese legal domain. Our dataset and related resources will be made available online.