LJ-Bench: Ontology-Based Benchmark for U.S. Crime

arXiv cs.LG / 3/24/2026


Key Points

  • The paper introduces LJ-Bench, a new benchmark that evaluates how robust large language models are against a broad set of illegal/crime-related queries.
  • LJ-Bench is grounded in an ontology of crime concepts derived from the Model Penal Code and instantiated using California law, enabling legally grounded and structured coverage.
  • The benchmark covers 76 distinct crime types arranged taxonomically, supporting systematic testing across many categories rather than a small set of isolated illegal activities.
  • Experiments indicate LLMs are more vulnerable to attacks targeting societal harm than to those directly affecting individuals.
  • The LJ-Bench benchmark, LJ-Ontology, and implementation code are released publicly to support reproducible research and the development of safer, more trustworthy LLMs.

Abstract

The potential of Large Language Models (LLMs) to provide harmful information remains a significant concern due to the vast breadth of illegal queries they may encounter. Unfortunately, existing benchmarks focus on only a handful of types of illegal activities and are not grounded in legal frameworks. In this work, we introduce an ontology of crime-related concepts grounded in the Model Penal Code, an influential reference for criminal law that has been adopted by many U.S. states, and instantiated using California law. This structured knowledge forms the foundation for LJ-Bench, the first comprehensive benchmark designed to evaluate LLM robustness against a wide range of illegal activities. Spanning 76 distinct crime types organized taxonomically, LJ-Bench enables systematic assessment of diverse attacks, revealing valuable insights into LLM vulnerabilities across crime categories: LLMs exhibit heightened susceptibility to attacks targeting societal harm rather than those directly impacting individuals. Our benchmark aims to facilitate the development of more robust and trustworthy LLMs. The LJ-Bench benchmark and LJ-Ontology, along with the experiment implementations for reproducibility, are publicly available at https://github.com/AndreaTseng/LJ-Bench.
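The abstract's headline finding (higher susceptibility to societal-harm attacks than individual-harm ones) comes from aggregating refusal outcomes per taxonomy category. The paper's actual data format and evaluation code live in the linked repository; the sketch below is only a minimal illustration of that kind of per-category analysis, with invented field names, category labels, and sample records.

```python
from collections import defaultdict

# Hypothetical records mimicking taxonomy-tagged benchmark entries:
# each harmful query carries a crime category and whether the model refused it.
# (Field names and labels are illustrative, not LJ-Bench's real schema.)
records = [
    {"category": "societal", "crime": "incitement", "refused": False},
    {"category": "societal", "crime": "forgery", "refused": True},
    {"category": "individual", "crime": "assault", "refused": True},
    {"category": "individual", "crime": "theft", "refused": True},
]

def attack_success_rate(records):
    """Per-category fraction of harmful queries the model did NOT refuse."""
    totals, successes = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["category"]] += 1
        if not r["refused"]:
            successes[r["category"]] += 1
    return {c: successes[c] / totals[c] for c in totals}

print(attack_success_rate(records))
```

On this toy data the societal category shows a higher attack success rate (0.5) than the individual one (0.0), which is the shape of comparison the benchmark enables at scale across its 76 crime types.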