AI Navigate

Omanic: Towards Step-wise Evaluation of Multi-hop Reasoning in Large Language Models

arXiv cs.CL / 3/18/2026


Key Points

  • Omanic introduces an open-domain multi-hop QA resource with decomposed sub-questions and intermediate answers to enable step-wise analysis of reasoning.
  • The dataset comprises OmanicSynth (10,296 machine-generated training examples) and OmanicBench (967 expert-reviewed evaluation examples) designed to diagnose reasoning processes.
  • State-of-the-art LLMs achieve only 73.11% multiple-choice accuracy on OmanicBench, indicating the task's difficulty and the need for step-level annotations.
  • Supervised fine-tuning on OmanicSynth yields substantial transfer gains across six reasoning and math benchmarks, validating the dataset's usefulness for reasoning-capability transfer.
  • The data and code are released publicly at HuggingFace and GitHub (https://huggingface.co/datasets/li-lab/Omanic, https://github.com/XiaojieGu/Omanic).
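The step-level annotations described above could be consumed roughly as follows; this is a minimal sketch, and the field names (`hops`, `sub_question`, etc.) are illustrative assumptions, not Omanic's actual schema:

```python
# Hypothetical multi-hop QA record with decomposed sub-questions and
# intermediate answers (field names are illustrative, not Omanic's schema).
record = {
    "question": "In which country was the director of Film X born?",
    "hops": [
        {"sub_question": "Who directed Film X?", "answer": "Director Y"},
        {"sub_question": "Where was Director Y born?", "answer": "Country Z"},
    ],
    "final_answer": "Country Z",
}

def hop_accuracy(records, predictions):
    """Per-hop accuracy: predictions[i][h] is the model's answer for hop h
    of record i. Returning one accuracy per hop position makes error
    amplification in later hops directly visible."""
    max_hops = max(len(r["hops"]) for r in records)
    correct = [0] * max_hops
    total = [0] * max_hops
    for rec, preds in zip(records, predictions):
        for h, (hop, pred) in enumerate(zip(rec["hops"], preds)):
            total[h] += 1
            if pred.strip().lower() == hop["answer"].strip().lower():
                correct[h] += 1
    return [c / t if t else 0.0 for c, t in zip(correct, total)]

# First hop answered correctly, second hop wrong:
print(hop_accuracy([record], [["Director Y", "Wrong"]]))  # [1.0, 0.0]
```

A per-hop breakdown like this is what distinguishes step-wise evaluation from final-answer-only scoring: a model can fail the final answer while still getting early hops right, and vice versa.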

Abstract

Reasoning-focused large language models (LLMs) have advanced on many NLP tasks, yet their evaluation remains challenging: final answers alone do not expose the intermediate reasoning steps, making it difficult to determine whether a model truly reasons correctly and where failures occur, and existing multi-hop QA benchmarks lack the step-level annotations needed to diagnose reasoning failures. To address this gap, we propose Omanic, an open-domain multi-hop QA resource that provides decomposed sub-questions and intermediate answers as structural annotations for analyzing reasoning processes. It contains 10,296 machine-generated training examples (OmanicSynth) and 967 expert-reviewed, human-annotated evaluation examples (OmanicBench). Systematic evaluations show that state-of-the-art LLMs achieve only 73.11% multiple-choice accuracy on OmanicBench, confirming its high difficulty. Step-wise analysis reveals that chain-of-thought (CoT) performance hinges on factual completeness: its gains diminish under knowledge gaps, and errors amplify in later hops. Additionally, supervised fine-tuning on OmanicSynth brings substantial transfer gains (7.41 average points) across six reasoning and math benchmarks, validating the dataset's quality and supporting the effectiveness of OmanicSynth as supervision for reasoning-capability transfer. We release the data at https://huggingface.co/datasets/li-lab/Omanic and the code at https://github.com/XiaojieGu/Omanic.