
TopoBench: Benchmarking LLMs on Hard Topological Reasoning

arXiv cs.AI / 3/13/2026

📰 News · Models & Research

Key Points

  • TopoBench introduces a benchmark suite with six puzzle families across three difficulty levels to evaluate LLMs on hard topological reasoning tasks.
  • The study finds frontier LLMs solve fewer than a quarter of hard instances, with two families nearly unsolved, highlighting current limitations in this reasoning domain.
  • The authors annotate 750 chain-of-thought traces to identify four causal failure modes, such as premature commitment and constraint forgetting, that contribute to puzzle-solving errors.
  • Targeted interventions show that certain error patterns directly harm performance, while repeated reasoning is a benign byproduct of search, pointing to a bottleneck in extracting constraints from spatial representations.
  • They explore mitigation strategies including prompt guidance, cell-aligned grid representations, and tool-based constraint checking, with code and data available on GitHub.
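The "cell-aligned grid representations" mitigation suggests serializing a puzzle board so every cell occupies a fixed-width text column, keeping rows and columns visually aligned for the model. A minimal sketch of what such a serialization might look like (`render_grid` and its `cell_width` parameter are illustrative, not from the paper):

```python
def render_grid(grid, cell_width=3):
    """Render a puzzle grid so every cell occupies a fixed-width column,
    keeping rows and columns aligned in the plain-text prompt."""
    return "\n".join(
        "".join(str(cell).center(cell_width) for cell in row)
        for row in grid
    )

# A toy 3x3 grid with blank cells marked "."
grid = [[1, ".", 3],
        [".", 5, "."],
        [7, ".", 9]]
print(render_grid(grid))
```

The idea is that consistent column widths make it easier to read off spatial relations (same row, same column, adjacency) directly from the token stream.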

Abstract

Solving topological grid puzzles requires reasoning over global spatial invariants such as connectivity, loop closure, and region symmetry, and remains challenging even for the most powerful large language models (LLMs). To study these abilities under controlled settings, we introduce TopoBench, a benchmark of six puzzle families across three difficulty levels. We evaluate strong reasoning LLMs on TopoBench and find that even frontier models solve fewer than one quarter of hard instances, with two families nearly unsolved. To investigate whether these failures stem from reasoning limitations or from difficulty extracting and maintaining spatial constraints, we annotate 750 chain-of-thought traces with an error taxonomy that surfaces four candidate causal failure modes, then test them with targeted interventions simulating each error type. These interventions show that certain error patterns, such as premature commitment and constraint forgetting, have a direct impact on the ability to solve the puzzle, while repeated reasoning is a benign effect of search. Finally, we study mitigation strategies including prompt guidance, cell-aligned grid representations, and tool-based constraint checking, finding that the bottleneck lies in extracting constraints from spatial representations, not in reasoning over them.
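One of the invariants named in the abstract, connectivity, is the kind of global property a tool-based constraint checker could verify programmatically instead of asking the model to track it across a long chain of thought. A hedged sketch, assuming a puzzle state given as a set of `(row, col)` cells (`is_connected` is an illustrative helper, not the paper's actual tooling):

```python
from collections import deque

def is_connected(cells):
    """Check that a set of (row, col) cells forms a single
    4-connected region -- a typical global topological invariant."""
    if not cells:
        return True
    cells = set(cells)
    start = next(iter(cells))
    seen = {start}
    queue = deque([start])
    while queue:  # breadth-first flood fill from an arbitrary cell
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (r + dr, c + dc)
            if nb in cells and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return seen == cells  # connected iff the flood fill reached every cell

# A connected L-shape vs. two separated cells
print(is_connected({(0, 0), (0, 1), (1, 1)}))  # True
print(is_connected({(0, 0), (2, 2)}))          # False
```

Offloading such checks to a tool turns a fragile in-context bookkeeping task into a single verified function call, which is consistent with the paper's finding that reasoning over already-extracted constraints is not the main bottleneck.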