Enhancing Legal LLMs through Metadata-Enriched RAG Pipelines and Direct Preference Optimization

arXiv cs.CL · March 23, 2026


Key Points

  • The authors introduce Metadata-Enriched Hybrid RAG to improve document-level retrieval in legal LLMs and address lexical redundancy in legal corpora.
  • They apply Direct Preference Optimization (DPO) to enforce safe refusals when context is inadequate, reducing unsafe or hallucinated outputs.
  • The approach aims to improve grounding, reliability, and safety for legal language models, especially small, privately deployed models that must protect data privacy.
  • The work targets long-form legal documents where standard LLMs degrade, presenting a path to more trustworthy and private legal AI deployments.
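The paper does not publish its retrieval code, but the core idea of metadata-enriched hybrid retrieval can be sketched as follows: combine a sparse (lexical) score with a dense (semantic) score, and use document metadata to prune lexically redundant near-duplicates before ranking. The scoring functions, field names, and weighting below are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

def lexical_score(query: str, doc: str) -> float:
    # Simple term-overlap score; a stand-in for a real sparse
    # retriever such as BM25.
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return float(sum(min(q[t], d[t]) for t in q))

def hybrid_retrieve(query, corpus, dense_scores, metadata_filter, alpha=0.5):
    """Rank documents by a weighted mix of lexical and dense scores,
    keeping only those whose metadata passes the filter (e.g. a matching
    document type or jurisdiction). The metadata filter is what lets the
    retriever separate documents that are nearly identical lexically."""
    results = []
    for i, (doc, meta) in enumerate(corpus):
        if not metadata_filter(meta):
            continue  # prune lexically redundant but irrelevant documents
        score = alpha * lexical_score(query, doc) + (1 - alpha) * dense_scores[i]
        results.append((score, i))
    return [i for _, i in sorted(results, reverse=True)]

# Two lexically identical clauses from different document types: only
# metadata can tell them apart. dense_scores is a hypothetical stand-in
# for embedding similarities.
corpus = [
    ("the lessee shall pay rent monthly", {"doc_type": "lease"}),
    ("the lessee shall pay rent monthly", {"doc_type": "employment"}),
]
dense_scores = [0.9, 0.2]
ranked = hybrid_retrieve("lessee rent", corpus, dense_scores,
                         lambda m: m["doc_type"] == "lease")
# → [0]: the employment contract is filtered out despite identical text
```

The key design point is that in legal corpora, lexical signals alone cannot distinguish boilerplate clauses repeated across thousands of documents; document-level metadata supplies the missing discriminative signal.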

Abstract

Large Language Models (LLMs) perform well in short contexts but degrade on long legal documents, often producing hallucinations such as incorrect clauses or precedents. In the legal domain, where precision is critical, such errors undermine reliability and trust. Retrieval-Augmented Generation (RAG) helps ground outputs but remains limited in legal settings, especially with small, locally deployed models required for data privacy. We identify two failure modes: retrieval errors due to lexical redundancy in legal corpora, and decoding errors where models generate answers despite insufficient context. To address this, we propose Metadata-Enriched Hybrid RAG to improve document-level retrieval, and apply Direct Preference Optimization (DPO) to enforce safe refusal when context is inadequate. Together, these methods improve grounding, reliability, and safety in legal language models.
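To make the DPO component concrete: the standard DPO loss (Rafailov et al.) trains the policy to prefer a "chosen" response over a "rejected" one relative to a frozen reference model. In this paper's setting, the chosen response would be a safe refusal when retrieved context is inadequate, and the rejected one a hallucinated answer. The single-pair function below is a minimal numeric sketch of that loss, not the authors' training code; the log-probability values in the usage note are made up for illustration.

```python
import math

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss for a single preference pair:
    -log sigmoid(beta * [(logp_w - ref_logp_w) - (logp_l - ref_logp_l)]),
    where w is the chosen (safe refusal) and l the rejected
    (hallucinated) response. Lower loss means the policy prefers
    the refusal more strongly than the reference model does."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid)

# As the policy shifts probability mass toward the refusal and away
# from the hallucination, the loss falls below the indifferent
# baseline of -log(0.5) ≈ 0.693.
improved = dpo_loss(-1.0, -5.0, -3.0, -3.0)   # policy prefers refusal
baseline = dpo_loss(-3.0, -3.0, -3.0, -3.0)   # policy matches reference
```

Because the loss only needs log-probabilities of two completions, preference pairs of (refusal, hallucination) can be built offline from cases where retrieval returned inadequate context, with no reward model required.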