I needed to know if the cheaper model was good enough. So I built an LLM-as-a-Judge pipeline

Dev.to / 4/6/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage

Key Points

  • The article presents a config-driven LLM-as-a-judge evaluation pipeline to decide whether a cheaper model is “good enough” for a specific workflow rather than relying only on generic benchmarks.
  • It runs a 3-stage process—candidate inference with format/schema checks, scoring by a separate LLM across 9 metrics, and aggregation into JSON/Markdown comparison reports with win rates and confidence intervals.
  • Key evaluation design choices include a 3-layer judge (format, content, expression evaluated separately), majority-vote judge runs to reduce noise, and blinding to randomize candidate label positions.
  • The pipeline supports multiple vendors/endpoints (OpenAI, Azure OpenAI, Gemini, and OpenAI-compatible local servers) and allows mixing models (e.g., local candidates with an external judge).
  • It also includes a “consistency mode” that switches from quality scoring to output-stability measurement when inference repeats are increased, and emphasizes rubric customization without code changes.

Benchmarks are useful, but they don't really tell me whether a prompt change or cheaper model is good enough for my own workflow.

I kept running into that, so I ended up building a config-driven eval pipeline: run test cases, check format/schema, use a separate LLM as judge, then generate comparison reports.

What it does

3-stage pipeline:

  1. Inference — Run your test cases against candidate models (format and schema validation runs automatically)
  2. Judge — A separate LLM scores outputs on 9 metrics (accuracy, faithfulness, completeness, etc.)
  3. Compare — Aggregate scores into a comparison report (JSON + Markdown)
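The three stages can be sketched in a few lines of Python. Everything here is illustrative: `call_model` and the judge prompt are hypothetical stand-ins, not the project's actual API.

```python
import json
import statistics

def run_pipeline(test_cases, candidate, judge, call_model):
    """Hypothetical sketch of the inference -> judge -> compare flow."""
    results = []
    for case in test_cases:
        # Stage 1: inference, with a format/schema check (here: valid JSON)
        output = call_model(candidate, case["prompt"])
        try:
            json.loads(output)
            format_ok = True
        except json.JSONDecodeError:
            format_ok = False
        # Stage 2: a separate LLM scores the output
        verdict = call_model(judge, f"Score 1-5 for accuracy:\n{output}")
        results.append({"case": case["id"], "format_ok": format_ok,
                        "score": float(verdict)})
    # Stage 3: aggregate scores into a report
    scores = [r["score"] for r in results]
    return {"mean_score": statistics.mean(scores),
            "format_pass_rate": sum(r["format_ok"] for r in results) / len(results)}
```

The real pipeline scores 9 metrics and emits full reports; this just shows the shape of the loop.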

Key design choices:

  • 3-layer judge architecture — Format, content, and expression are evaluated in separate LLM calls with no shared context. This prevents a formatting issue from biasing content scores.
  • Pairwise + absolute + hybrid modes — Compare two models head-to-head, score them independently, or both.
  • Majority vote aggregation — Run the judge multiple times and take the majority to reduce noise.
  • Blinding — Candidate labels are randomized to prevent position bias.
  • Consistency mode — Set inference_repeats >= 2 and the pipeline automatically switches to measuring output stability instead of quality.
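Blinding and majority vote combine naturally in pairwise mode. A minimal sketch, assuming a hypothetical `judge_fn` that sees only anonymous labels "A" and "B":

```python
import random
from collections import Counter

def blinded_pairwise(output_x, output_y, judge_fn, runs=3, rng=random):
    """Blinded pairwise comparison with majority vote (illustrative sketch)."""
    votes = []
    for _ in range(runs):
        # Blinding: randomize which candidate is shown as label "A"
        swapped = rng.random() < 0.5
        a, b = (output_y, output_x) if swapped else (output_x, output_y)
        winner_label = judge_fn(a, b)  # judge only ever sees "A" vs "B"
        # Map the blinded label back to the real candidate
        if swapped:
            winner = "y" if winner_label == "A" else "x"
        else:
            winner = "x" if winner_label == "A" else "y"
        votes.append(winner)
    # Majority vote across judge runs damps single-call noise
    return Counter(votes).most_common(1)[0][0]
```

Because the label-to-candidate mapping is re-randomized each run, a judge that systematically favors position "A" still averages out to a fair verdict.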

Multi-vendor support:

  • OpenAI, Azure OpenAI, Gemini (native REST), and any OpenAI-compatible endpoint (LM Studio, vLLM, etc.)
  • Mix and match — e.g., judge with GPT, candidates on local models
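A config for the mix-and-match case might look roughly like this. The field names below are illustrative only, not the project's actual schema:

```yaml
# Hypothetical config shape: local candidates, external judge
candidates:
  - name: qwen-local
    endpoint: http://localhost:1234/v1   # any OpenAI-compatible server
    api_key_env: LOCAL_API_KEY
judge:
  name: gpt-4o
  vendor: openai
  api_key_env: OPENAI_API_KEY
```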

What the output looks like

You get a comparison-report.json with win rates, per-metric mean scores, confidence intervals, and critical issue counts. Plus a Markdown report for quick reading.
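For intuition on where those numbers come from: a win rate over N pairwise verdicts is a binomial proportion, so a confidence interval can be attached with something like the Wilson score interval. A sketch (the field names mirror the report but are illustrative):

```python
import math

def win_rate_with_ci(wins, total, z=1.96):
    """Wilson score interval for a binomial win rate (95% CI by default)."""
    p = wins / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return {"win_rate": p, "ci_low": center - margin, "ci_high": center + margin}
```

With only 10 test cases a 7/10 win rate carries a CI spanning roughly 0.40 to 0.89, which is exactly why the report surfaces intervals and not just point estimates.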

Evaluation Results — Sample Run

The rubric is a standalone Markdown file with score anchors (1/3/5), bias guards, and critical issue rules. You can customize evaluation criteria by editing the rubric alone — no code changes needed.
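A rubric section with score anchors, a bias guard, and a critical-issue rule might look like this (a made-up excerpt, not the repo's actual rubric):

```markdown
## Accuracy

- **5**: All factual claims are correct and consistent with the input.
- **3**: Mostly correct; minor errors that do not change the conclusion.
- **1**: Contains a material factual error.

Bias guard: do not reward longer answers for length alone.
Critical issue: any fabricated citation or number is flagged regardless of score.
```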

What it's NOT

  • Not a benchmark suite — you bring your own test cases
  • Not a model training tool — it evaluates outputs, not weights
  • Not an agent framework — it's a batch evaluation pipeline

Tech stack

Python >= 3.11, Pydantic, Typer CLI. Three steps to get running: uv sync, configure .env, then uv run llm-judge run-all.

Repo: archminor/llm-as-a-judge

Curious to hear how other people are handling production LLM evals.