Compositional Neuro-Symbolic Reasoning

arXiv cs.AI / 4/6/2026


Key Points

  • The study evaluates the generalization of structured abstraction-based reasoning on ARC (Abstraction and Reasoning Corpus), outlining both the limits of purely neural methods and the perceptual-grounding challenges of purely symbolic ones.
  • The proposed method is a neuro-symbolic architecture that extracts object-level structure from grids, uses neural priors to propose candidate transformations from a DSL (domain-specific language) of atomic patterns, and then filters hypotheses by cross-example consistency.
  • It is implemented as a compositional reasoning framework based on unit patterns inspired by human visual abstraction, augmenting an LLM with object representations and transformation proposals.
  • On ARC-AGI-2, base LLM performance on the public evaluation set improves from 16% to 24.4%, rising to 30.8% when combined with ARC Lang Solver.

Abstract

We study structured abstraction-based reasoning for the Abstraction and Reasoning Corpus (ARC) and compare its generalization to test-time approaches. Purely neural architectures lack reliable combinatorial generalization, while strictly symbolic systems struggle with perceptual grounding. We therefore propose a neuro-symbolic architecture that extracts object-level structure from grids, uses neural priors to propose candidate transformations from a fixed domain-specific language (DSL) of atomic patterns, and filters hypotheses using cross-example consistency. Instantiated as a compositional reasoning framework based on unit patterns inspired by human visual abstraction, the system augments large language models (LLMs) with object representations and transformation proposals. On ARC-AGI-2, it improves base LLM performance from 16% to 24.4% on the public evaluation set, and to 30.8% when combined with ARC Lang Solver via a meta-classifier. These results demonstrate that separating perception, neural-guided transformation proposal, and symbolic consistency filtering improves generalization without task-specific finetuning or reinforcement learning, while reducing reliance on brute-force search and sampling-based test-time scaling. We open-source the ARC-AGI-2 Reasoner code (https://github.com/CoreThink-AI/arc-agi-2-reasoner).
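The propose-and-filter loop described in the abstract can be illustrated with a minimal sketch. The assumptions here are loud: the three atomic transformations, the `Example` type, and the exhaustive `propose_candidates` enumeration are all hypothetical stand-ins (the real system uses a richer DSL and an LLM-based neural prior to rank candidates); only the cross-example consistency filter mirrors the described architecture.

```python
from dataclasses import dataclass
from typing import List

Grid = List[List[int]]

@dataclass
class Example:
    input: Grid
    output: Grid

# Hypothetical atomic DSL transformations (the paper's actual DSL is richer).
def flip_horizontal(g: Grid) -> Grid:
    return [row[::-1] for row in g]

def flip_vertical(g: Grid) -> Grid:
    return g[::-1]

def transpose(g: Grid) -> Grid:
    return [list(col) for col in zip(*g)]

DSL = {"flip_h": flip_horizontal, "flip_v": flip_vertical, "transpose": transpose}

def propose_candidates(examples: List[Example]):
    # Stand-in for the neural prior: the real system has an LLM propose and
    # rank DSL programs; here we simply enumerate every atomic transformation.
    return list(DSL.items())

def filter_by_consistency(examples: List[Example], candidates):
    # Symbolic filter: keep only hypotheses that reproduce *every*
    # training input/output pair exactly.
    return [name for name, fn in candidates
            if all(fn(ex.input) == ex.output for ex in examples)]

examples = [
    Example(input=[[1, 0], [0, 2]], output=[[0, 1], [2, 0]]),
    Example(input=[[3, 3, 0]], output=[[0, 3, 3]]),
]
print(filter_by_consistency(examples, propose_candidates(examples)))  # ['flip_h']
```

The surviving hypotheses would then be applied to the test grid; separating the (neural) proposal step from the (symbolic) consistency check is what lets the filter reject plausible-looking but inconsistent transformations without any task-specific finetuning.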