AI Navigate

Deep Tabular Research via Continual Experience-Driven Execution

arXiv cs.AI / 2026/3/11

Ideas & Deep Analysis · Models & Research

Key Points

  • This paper tackles Deep Tabular Research (DTR): complex, long-horizon reasoning over unstructured tables with hierarchical, bidirectional headers.
  • It proposes a novel agentic framework that treats tabular reasoning as a closed-loop decision-making process coupling query comprehension with table-operation execution.
  • The framework combines hierarchical meta-graph construction for semantic mapping, an expectation-aware policy for path selection, and a siamese structured memory that leverages historical execution outcomes to enable continual refinement.
  • Experiments demonstrate its effectiveness on challenging unstructured tabular benchmarks and underscore the importance of separating strategic planning from execution in long-horizon tabular tasks.
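The closed-loop decision process sketched in the bullets above can be illustrated with a minimal toy: candidate operation paths are ranked by estimated utility plus a prior distilled from past execution outcomes. All names here (`Candidate`, `Memory`, `select_path`) are hypothetical illustrations; the paper does not publish code, and this is only a sketch of the idea, not the authors' method.

```python
# Illustrative sketch of expectation-aware path selection with an
# experience memory (hypothetical names; not the paper's implementation).
from dataclasses import dataclass, field

@dataclass
class Candidate:
    ops: list              # a sequence of table operations, e.g. ["locate_header", "slice", "aggregate"]
    expected_utility: float  # model-estimated utility of this path

@dataclass
class Memory:
    episodes: list = field(default_factory=list)  # (ops, success) pairs from past runs

    def record(self, ops, success):
        self.episodes.append((tuple(ops), success))

    def prior(self, ops):
        # Boost paths that succeeded before, penalize ones that failed
        # (a stand-in for the paper's "continual refinement").
        hits = [s for o, s in self.episodes if o == tuple(ops)]
        if not hits:
            return 0.0
        return sum(1.0 if s else -1.0 for s in hits) / len(hits)

def select_path(candidates, memory):
    # Expectation-aware policy: rank by utility plus experience prior.
    return max(candidates, key=lambda c: c.expected_utility + memory.prior(c.ops))

mem = Memory()
mem.record(["locate_header", "slice", "aggregate"], success=True)
cands = [
    Candidate(["scan_rows", "aggregate"], expected_utility=0.4),
    Candidate(["locate_header", "slice", "aggregate"], expected_utility=0.35),
]
best = select_path(cands, mem)
print(best.ops)  # the memory prior lifts the previously successful path
```

Even though the second candidate has lower raw utility, its recorded success gives it a higher combined score, mirroring how historical outcomes steer future path selection.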


arXiv:2603.09151 (cs)
[Submitted on 10 Mar 2026]

Title: Deep Tabular Research via Continual Experience-Driven Execution

Authors: Junnan Dong and 7 other authors
Abstract:Large language models often struggle with complex long-horizon analytical tasks over unstructured tables, which typically feature hierarchical and bidirectional headers and non-canonical layouts. We formalize this challenge as Deep Tabular Research (DTR), requiring multi-step reasoning over interdependent table regions. To address DTR, we propose a novel agentic framework that treats tabular reasoning as a closed-loop decision-making process. We carefully design a coupled query and table comprehension for path decision making and operational execution. Specifically, (i) DTR first constructs a hierarchical meta graph to capture bidirectional semantics, mapping natural language queries into an operation-level search space; (ii) To navigate this space, we introduce an expectation-aware selection policy that prioritizes high-utility execution paths; (iii) Crucially, historical execution outcomes are synthesized into a siamese structured memory, i.e., parameterized updates and abstracted texts, enabling continual refinement. Extensive experiments on challenging unstructured tabular benchmarks verify the effectiveness and highlight the necessity of separating strategic planning from low-level execution for long-horizon tabular reasoning.
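Step (i) of the abstract maps queries onto an operation-level search space via a hierarchical meta graph over table headers. A minimal sketch of such a graph, assuming header cells as nodes with hierarchy edges stored in both directions to capture the "bidirectional semantics" the abstract mentions (the function name and input format are illustrative assumptions, not the paper's API):

```python
# Hypothetical sketch of a hierarchical meta graph over table headers.
# Nodes are header cells; edges link parents and children in both
# directions so traversal can move up or down the hierarchy.
from collections import defaultdict

def build_meta_graph(row_headers, col_headers):
    """row_headers / col_headers: header paths, e.g. [("Revenue", "Q1"), ...]."""
    graph = defaultdict(set)
    for path in row_headers + col_headers:
        for parent, child in zip(path, path[1:]):
            graph[parent].add(child)  # hierarchy edge (parent -> child)
            graph[child].add(parent)  # reverse edge (bidirectional semantics)
    return graph

g = build_meta_graph(
    row_headers=[("Revenue", "Q1"), ("Revenue", "Q2")],
    col_headers=[("Region", "EMEA"), ("Region", "APAC")],
)
print(sorted(g["Revenue"]))  # ['Q1', 'Q2']
```

A query like "total Q1 revenue in EMEA" could then be grounded by locating the `Q1` and `EMEA` nodes and walking their edges to find the table regions they index, which is the kind of semantic mapping the abstract describes.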
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.09151 [cs.AI]
  (or arXiv:2603.09151v1 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2603.09151

Submission history

From: Junnan Dong
[v1] Tue, 10 Mar 2026 03:42:54 UTC (621 KB)
