LLMORPH: Automated Metamorphic Testing of Large Language Models

arXiv cs.CL / 3/26/2026


Key Points

  • The paper introduces LLMORPH, an automated metamorphic testing tool for Large Language Models that aims to find incorrect behaviors without requiring human-labeled oracle data.
  • LLMORPH applies Metamorphic Testing, using Metamorphic Relations to generate follow-up inputs from source inputs and detect inconsistencies between the model's outputs on the two.
  • The authors describe the tool’s design and implementation and show that it can be extended to different LLMs, NLP tasks, and custom sets of metamorphic relations.
  • In evaluation, LLMORPH used 36 metamorphic relations across four NLP benchmarks, running 561,000+ test executions on GPT-4, LLAMA3, and HERMES 2.
  • The results indicate that metamorphic testing can automatically and effectively expose reliability issues in LLM-driven NLP systems, supporting robustness evaluation by researchers and developers.

Abstract

Automated testing is essential for evaluating and improving the reliability of Large Language Models (LLMs), yet the lack of automated oracles for verifying output correctness remains a key challenge. We present LLMORPH, an automated testing tool specifically designed for LLMs performing NLP tasks, which leverages Metamorphic Testing (MT) to uncover faulty behaviors without relying on human-labeled data. MT uses Metamorphic Relations (MRs) to generate follow-up inputs from source test inputs, enabling detection of inconsistencies in model outputs without the need for expensive labeled data. LLMORPH is aimed at researchers and developers who want to evaluate the robustness of LLM-based NLP systems. In this paper, we detail the design, implementation, and practical usage of LLMORPH, demonstrating how it can be easily extended to any LLM, NLP task, and set of MRs. In our evaluation, we applied 36 MRs across four NLP benchmarks, testing three state-of-the-art LLMs: GPT-4, LLAMA3, and HERMES 2. This produced over 561,000 test executions. Results demonstrate LLMORPH's effectiveness in automatically exposing inconsistencies.
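The core idea of the abstract, a metamorphic relation turns output consistency into an oracle so no labeled data is needed, can be sketched in a few lines of Python. All names here (`query_model`, `mr_synonym`) are illustrative stand-ins, not LLMORPH's actual API, and the rule-based "model" merely keeps the example self-contained and runnable:

```python
def query_model(prompt: str) -> str:
    # Stand-in for a real LLM call (e.g., to GPT-4 or LLAMA3); a trivial
    # rule-based sentiment classifier so the sketch runs without a model.
    return "positive" if ("good" in prompt or "great" in prompt) else "negative"

def mr_synonym(source: str) -> str:
    # Equivalence-type metamorphic relation: replacing a word with a
    # synonym should not change the predicted label.
    return source.replace("good", "great")

def run_metamorphic_test(source_input: str) -> bool:
    # The oracle is consistency between the source output and the
    # follow-up output; no ground-truth label is consulted.
    source_output = query_model(source_input)
    followup_output = query_model(mr_synonym(source_input))
    return source_output == followup_output

print(run_metamorphic_test("The movie was good."))  # True: outputs agree
```

A tool like LLMORPH generalizes this pattern: each of its 36 MRs plays the role of `mr_synonym`, and a disagreement between the two outputs is flagged as a potential faulty behavior.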