ActuBench: A Multi-Agent LLM Pipeline for Generation and Evaluation of Actuarial Reasoning Tasks

arXiv cs.CL / 4/23/2026

📰 News · Developer Stack & Infrastructure · Tools & Practical Usage · Models & Research

Key Points

  • ActuBench introduces a multi-agent LLM pipeline that automatically generates and evaluates actuarial reasoning assessment items mapped to the IAA Education Syllabus.
  • The system splits LLM duties into specialized roles (drafting, distractor construction, independent verification with one-shot repair loops, plus cost-optimized summarization and topic labeling).
  • Results cover 50 language models across eight providers on two benchmarks: the 100 empirically hardest MCQs and 100 open-ended items scored by an LLM judge.
  • The paper reports three main findings: independent verification is crucial, locally hosted open-weight inference can achieve strong cost-performance, and rankings diverge between MCQ evaluation and LLM-judge evaluation, making judge-mode necessary at the frontier.
  • A browsable web interface publishes the generated items, per-model responses, and a full leaderboard for inspection without needing to check out a repository.

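The draft → distractor → verify → repair flow in the second bullet can be sketched as a small pipeline. This is an illustrative reconstruction, not the authors' code: the agent interfaces, prompts, and the `Item` structure are all hypothetical, with each agent reduced to a plain string-to-string callable.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical sketch of the multi-agent generation flow: one agent drafts,
# one builds distractors, an independent verifier gates the result and drives
# a single bounded ("one-shot") repair attempt. Names and prompt formats are
# illustrative assumptions, not the paper's actual API.

Agent = Callable[[str], str]  # stand-in for an LLM call behind an adapter

@dataclass
class Item:
    stem: str
    answer: str
    distractors: Optional[list[str]] = None
    verified: bool = False

def build_item(topic: str, drafter: Agent, distractor_agent: Agent,
               verifier: Agent) -> Optional[Item]:
    """Draft an item, add distractors, verify, and allow one repair pass."""
    stem = drafter(f"draft:{topic}")
    answer = drafter(f"solve:{stem}")
    item = Item(stem=stem, answer=answer)
    item.distractors = distractor_agent(item.stem).split("|")
    verdict = verifier(f"{item.stem}::{item.answer}")
    if verdict != "ok":
        # One-shot repair: a single revision attempt, then re-verify once.
        item.stem = drafter(f"repair:{item.stem}::{verdict}")
        verdict = verifier(f"{item.stem}::{item.answer}")
    item.verified = (verdict == "ok")
    # Items that still fail after the bounded repair loop are discarded.
    return item if item.verified else None
```

The key design point the sketch preserves is that the verifier is independent of the drafter and the repair budget is bounded, so a flawed item is revised at most once before being dropped rather than looped on indefinitely.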
Abstract

We present ActuBench, a multi-agent LLM pipeline for the automated generation and evaluation of advanced actuarial assessment items aligned with the International Actuarial Association (IAA) Education Syllabus. The pipeline separates four LLM roles by adapter: one agent drafts items, one constructs distractors, a third independently verifies both stages and drives bounded one-shot repair loops, and a cost-optimized auxiliary agent handles Wikipedia-note summarization and topic labeling. The items, per-model responses and complete leaderboard are published as a browsable web interface at https://actubench.de/en/, allowing readers and practitioners to inspect individual items without a repository checkout. We evaluate 50 language models from eight providers on two complementary benchmarks -- 100 empirically hardest multiple-choice items and 100 open-ended items scored by an LLM judge -- and report three headline findings. First, multi-agent verification is load-bearing: the independent verifier flags a majority of drafted items on first pass, most of which the one-shot repair loop resolves. Second, locally hosted open-weight inference sits on the cost-performance Pareto front: a Gemma 4 model running on consumer hardware and a Cerebras-hosted 120B open-weight model dominate the near-zero-cost region, with the latter within one item of the top of the leaderboard. Third, MCQ and LLM-as-Judge rankings differ meaningfully: the MCQ scaffold inflates the performance ceiling, and Judge-mode evaluation is needed to discriminate at the frontier.
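The third finding, that MCQ and LLM-judge rankings "differ meaningfully," is the kind of claim typically quantified with a rank correlation between the two leaderboards. A minimal sketch, with made-up model names and scores purely for illustration (the paper's actual scores are on the linked leaderboard):

```python
def rank(scores: dict[str, float]) -> dict[str, int]:
    """Map each model to its rank (1 = best) by descending score."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {model: i + 1 for i, model in enumerate(ordered)}

def spearman(a: dict[str, float], b: dict[str, float]) -> float:
    """Spearman rank correlation over a shared model set (no tie handling)."""
    ra, rb = rank(a), rank(b)
    n = len(ra)
    d2 = sum((ra[m] - rb[m]) ** 2 for m in ra)  # squared rank differences
    return 1 - 6 * d2 / (n * (n * n - 1))
```

A correlation near 1 would mean the MCQ scaffold and the judge agree on ordering; the lower the value, the more the two evaluation modes disagree about which models lead, which is the regime where the paper argues judge-mode evaluation is needed.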