ML-Bench&Guard: Policy-Grounded Multilingual Safety Benchmark and Guardrail for Large Language Models

arXiv cs.CL / 5/4/2026

📰 News · Models & Research

Key Points

  • The paper introduces ML-Bench, a policy-grounded multilingual safety benchmark for 14 languages built directly from regional regulations rather than generic risk taxonomies or translation-based approaches.
  • ML-Bench derives risk categories and fine-grained rules from jurisdiction-specific legal texts to produce evaluation data that better reflects local cultural and legal requirements.
  • Based on ML-Bench, the authors develop ML-Guard, a diffusion LLM (dLLM)-based guardrail model that performs multilingual safety judgments and policy-conditioned compliance assessment.
  • ML-Guard is offered in two variants: a 1.5B lightweight model for fast safe/unsafe checks and a 7B model for more capable, customized compliance checking with detailed explanations.
  • Experiments against 11 existing guardrail baselines, run on 6 existing multilingual safety benchmarks plus ML-Bench, show ML-Guard consistently outperforming prior methods; the authors aim for the pair to support regulation-aware and culturally aligned guardrail systems.

Abstract

As Large Language Models (LLMs) are increasingly deployed in cross-linguistic contexts, ensuring safety in diverse regulatory and cultural environments has become a critical challenge. However, existing multilingual benchmarks largely rely on general risk taxonomies and machine translation, which confines guardrail models to these predefined categories and hinders their ability to align with region-specific regulations and cultural nuances. To bridge these gaps, we introduce ML-Bench, a policy-grounded multilingual safety benchmark covering 14 languages. ML-Bench is constructed directly from regional regulations: risk categories and fine-grained rules derived from jurisdiction-specific legal texts directly guide the generation of multilingual safety data, enabling culturally and legally aligned evaluation across languages. Building on ML-Bench, we develop ML-Guard, a Diffusion Large Language Model (dLLM)-based guardrail model that supports multilingual safety judgment and policy-conditioned compliance assessment. ML-Guard has two variants: a lightweight 1.5B model for fast "safe/unsafe" checking and a more capable 7B model for customized compliance checking with detailed explanations. We conduct extensive experiments against 11 strong guardrail baselines across 6 existing multilingual safety benchmarks and our ML-Bench, and show that ML-Guard consistently outperforms prior methods. We hope that ML-Bench and ML-Guard can help advance the development of regulation-aware and culturally aligned multilingual guardrail systems.
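The two-tier design described in the abstract — a cheap binary check, escalating to a policy-conditioned model that cites the rules it applied — can be sketched as a simple router. Everything below is a hypothetical illustration: the function names, the keyword-matching "models," and the `Verdict` schema are stand-ins for how such a deployment might be wired, not the actual ML-Guard API.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Verdict:
    label: str                          # "safe" or "unsafe"
    explanation: Optional[str] = None   # only the larger model explains

def fast_check(text: str) -> str:
    """Stand-in for the lightweight 1.5B binary classifier.
    Here a toy blocklist plays the role of the model."""
    blocklist = {"make a bomb", "credit card dump"}
    return "unsafe" if any(p in text.lower() for p in blocklist) else "safe"

def compliance_check(text: str, policy_rules: List[str]) -> Verdict:
    """Stand-in for the 7B policy-conditioned model: evaluates the input
    against jurisdiction-specific rules and reports which ones matched."""
    violated = [r for r in policy_rules if r.lower() in text.lower()]
    if violated:
        return Verdict("unsafe", f"Matched policy rules: {violated}")
    return Verdict("safe", "No supplied policy rule matched.")

def guard(text: str, policy_rules: Optional[List[str]] = None) -> Verdict:
    """Route: run the detailed compliance model only when a regional
    policy is supplied; otherwise the fast binary check suffices."""
    if policy_rules:
        return compliance_check(text, policy_rules)
    return Verdict(fast_check(text))
```

A caller with no regional policy pays only for the fast path (`guard("hello")`), while a jurisdiction-aware deployment passes its rule list and gets an explained verdict (`guard(text, policy_rules=rules)`).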